Amazon Nova
342 points by scbenet a day ago | 136 comments
  • scbenet a day ago |
    • kajecounterhack a day ago |
      TL;DR comparison of models vs frontier models on public benchmarks here https://imgur.com/a/CKMIhmm
      • brokensegue a day ago |
        So looks like they are trying to win on speed over raw metric performance
        • azinman2 a day ago |
          Either that, or that’s just where they landed.
      • SparkyMcUnicorn a day ago |
        This doesn't include all the benchmarks.

        The one that really stands out is GroundUI-1K, where it beats the competition by 46%.

        Nova Pro looks like it could be a SOTA-comparable model at a lower price point.

        • oblio 21 hours ago |
          SOTA?
          • camel_Snake 21 hours ago |
            "State of the Art", if that's what you were asking.
        • maeil 20 hours ago |
          Just means it's better at one specific task than the others, which has always been the case. For each of Sonnet, GPT and Gemini I can readily name a task they are individually the best at. At the same time the consensus that Sonnet 3.5 is overall the currently strongest model remains correct, and that's what most people care about. Additionally most people do tasks that all of the models perform similarly at, or they can't be bothered to optimize every task by using the best model for that one task. Which makes sense since not a single cloud provider has all three of them. Now this one will likely be AWS-exclusive too.
        • retinaros 20 hours ago |
          In the Berkeley function-calling benchmark it performs similarly to GPT-4o on multi-turn while being way faster.
  • baxtr a day ago |
    As a side comment: the sound quality of the auto generated voice clip is really poor.

    No match for Google's NotebookLM podcasts.

    • wenc a day ago |
      The autogenerated voice is Amazon Polly, an old AWS speech-synthesis service that doesn’t use the latest technology.

      It’s irrelevant to the article, which is about Nova.

    • griomnib a day ago |
      If you haven’t seen it this may be the best use of ai-podcast I’ve seen: https://youtu.be/gfr4BP4V1R8
      • bongodongobob a day ago |
        This is goddamn hilarious, thank you.
        • griomnib 16 hours ago |
          Let us all thank the YouTube God.
  • xnx a day ago |
    More options/competition is good. When will we see it on https://lmarena.ai/ ?
    • glomgril 19 hours ago |
      looks like it's there now
  • teilo a day ago |
    So that's what I missed at the keynote.
  • htrp a day ago |
    No parameter counts?
  • HarHarVeryFunny a day ago |
    Since Amazon are building their own frontier models, what's the point of their relationship with Anthropic ?
    • tokioyoyo a day ago |
      If you play all sides, you’ll always come out on top.
      • worldsayshi a day ago |
        Yeah Copilot includes Claude now.
      • cdchn 19 hours ago |
        This is Amazon's core e-commerce business model but for AI. You sell everybody else's stuff and also offer an Amazon Basics version.
    • tinyhouse a day ago |
      I can only guess.

      1. A company the size of Amazon has enough resources and unique internal data no one else has access to that it makes sense for them to build their own models. Even if it's only for internal use

      2. Amazon cannot beat Anthropic at this game. Anthropic is far ahead of them in terms of performance and adoption. Building these models in-house doesn't mean it's a bad idea to also invest in Anthropic

      • PartiallyTyped a day ago |
        Also not putting all of your eggs in one basket.
    • blackeyeblitzar a day ago |
      Commoditizing complements
    • jonathaneunice a day ago |
      Different models have different strengths and weaknesses, especially here in the early days when models and their capabilities progress several times per year. The apps, programs, and systems based on models need to know how to exploit their specific strengths and weaknesses. So they are not infinitely interchangeable. Over time some of that differentiation will erode, but it will probably take years.

      AWS having customers using its own model probably improves AWS's margins, but having multiple models available (e.g. Anthropic's) improves their ability to capture market share. To date, AWS's efforts (e.g. Q, CodeWhisperer) have not met with universal praise. So for at least for the present, it makes sense to bring customers to AWS to "do AI" whether they're using AWS's models or someone else's.

      • sdesol 19 hours ago |
        > Different models have different strengths and weaknesses

        I would add different errors as well. Here are two examples where GPT-4o and Claude 3.5 Sonnet cannot tell that "GitHub" is spelled like "GitHub".

        GPT-4o: https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples...

        Claude 3.5 Sonnet: https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3...

        I don't think there will be one model that will rule them all, unless there is a breakthrough. If things continue on the same path, I think Amazon, Microsoft and Google will be the last ones standing, since they can provide models from all the major LLM players.

    • Muskyinhere a day ago |
      Customers want choices. They just sell all models.
    • qgin a day ago |
      Not sure if this was the goal, but it does work well from a product perspective that Nova is a super-cheap model that is comparable to everything BUT Claude.
    • cowsandmilk 19 hours ago |
      Why does RDS support Oracle and MS SQL databases? Because customers want them.
  • andrewstuart a day ago |
    It's not clear what the use cases are for this, or who it is aimed at.
    • dvh a day ago |
      Shareholders?
      • christhecaribou a day ago |
        The real “customers”.
    • mystcb a day ago |
      I'd say, people that need it. Which could be the same for all the other models out there.

      To create one model that is great at everything is probably a pipe dream, much like creating a multi-tool that can do everything. But can it? I wouldn't trust a multi-tool to take a wheel nut off a wheel, but I would find it useful if I suddenly needed a cross-head screw taken out of something.

      But then I also have a specific crosshead screwdriver that is good at just taking out cross-head screws.

      Use the right tool for the right reason. In this case, there may be a legal reason why someone might need to use it. It might be that this version of a model can create something better than another model can. It might be that, for cost reasons, if you are already within AWS it makes sense to use a model at a cheaper cost than something else.

      So yeah, I am sure it will be great for some people, and terrible for others... just the way things go!

      • dgfitz a day ago |
        > I'd say, people that need it.

        Nobody needs Reddit hallucinations about programming.

    • petesergeant a day ago |
      https://artificialanalysis.ai/leaderboards/models seems to suggest Nova Lite is half the price of 4o-mini, and a chunk faster too, with a bit of quality drop-off. I have no loyalty to OpenAI, if it does as well as 4o-mini in the eval suite, I'll switch. I was hoping "Gemini 1.5 Flash (Sep)" would pass muster for similar reasons, but it didn't.
    • faizshah 20 hours ago |
      It seems to be faster and cheaper, with slightly lower than SOTA quality. There’s an emerging subset of AI companies building features on “good enough” lightweight models: https://www.wired.com/story/how-do-you-get-to-artificial-gen...

      So I guess that’s who it’s for.

      I’ve only spent an hour with it though obviously.

    • mrg3_2013 19 hours ago |
      This is just Amazon's 'me too' play. Doubt anyone serious in the LLM space would consider this
  • xendo a day ago |
    Some independent latency and quality evaluations already available at https://artificialanalysis.ai/ Looks to be cheap and fast.
  • blackeyeblitzar a day ago |
    It would be nice if this was a truly open source model like OLMo: https://venturebeat.com/ai/truly-open-source-llm-from-ai2-to...
    • sourcepluck a day ago |
      Is it narrowly open source, or somewhat open source, in some way? Thanks for that link, anyway!
      • blackeyeblitzar 20 hours ago |
        As far as I can tell Amazon’s nova is fully closed source. Maybe because their goal is to get you to pay them for hosting.
  • mikesurowiec a day ago |
    A rough idea of the price differences...

      Per 1k tokens        Input   |  Output
      Amazon Nova Micro: $0.000035 | $0.00014
      Amazon Nova Lite:  $0.00006  | $0.00024
      Amazon Nova Pro:   $0.0008   | $0.0032
    
      Claude 3.5 Sonnet: $0.003    | $0.015
      Claude 3.5 Haiku:  $0.0008   | $0.0004
      Claude 3 Opus:     $0.015    | $0.075
    
    Source: AWS Bedrock Pricing https://aws.amazon.com/bedrock/pricing/
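For comparison with other providers' per-million pricing, the per-1k figures above can be rescaled in a couple of lines. This is a sketch in Python; the prices are copied from the table above and should be treated as a snapshot, not as authoritative figures.

```python
# Rescale Bedrock's per-1k-token prices (copied from the table above) to
# per-1M tokens, and estimate a single request's cost.
PRICES_PER_1K = {  # model: (input USD, output USD) per 1k tokens
    "nova-micro": (0.000035, 0.00014),
    "nova-lite": (0.00006, 0.00024),
    "nova-pro": (0.0008, 0.0032),
    "claude-3.5-sonnet": (0.003, 0.015),
}

def per_million(model):
    """Price per 1M tokens: multiply the per-1k rate by 1000."""
    inp, out = PRICES_PER_1K[model]
    return inp * 1000, out * 1000

def request_cost(model, input_tokens, output_tokens):
    """Cost in USD of one request at the per-1k rates."""
    inp, out = PRICES_PER_1K[model]
    return inp * input_tokens / 1000 + out * output_tokens / 1000
```

At per-million rates that works out to $0.80 input / $3.20 output for Nova Pro versus $3 / $15 for Claude 3.5 Sonnet.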
    • Bilal_io a day ago |
      You have added another zero for Haiku, its output cost is $0.004
      • indigodaddy a day ago |
        Thanks that had confused me when I compared same to Nova Pro
      • mikesurowiec 20 hours ago |
        You're absolutely right, apologies!
    • warkdarrior a day ago |
      Eyeballing it, Nova seems to be about 1.5 orders of magnitude cheaper than Claude, at all model sizes.
    • holub008 a day ago |
      Has anyone found TPM/RPM limits on Nova? Either they aren't limited, or the quotas haven't been published yet: https://docs.aws.amazon.com/general/latest/gr/bedrock.html#l...
      • tmpz22 a day ago |
        Maybe they want to gauge demand for a bit first?
    • Tepix a day ago |
      I suggest you give the price per million tokens, as that seems to be the standard.
      • oblio 21 hours ago |
        I'm guessing they just copy pasted from the official docs page.
      • 8n4vidtmkvmk 14 hours ago |
        From my personal table https://i.imgur.com/WwL9XkG.png

        Price is pretty good. I'm assuming 3.72 chars/tok on average though.. couldn't find that # anywhere.
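That chars-per-token assumption can be folded into a quick cost-from-text estimate. A sketch; the 3.72 chars/token figure is the comment's own guess, not a published number, and real token counts vary by tokenizer.

```python
# Rough cost estimate from raw text length, using an assumed average
# characters-per-token ratio (3.72, per the comment above; not authoritative).
CHARS_PER_TOKEN = 3.72

def estimate_tokens(text):
    """Approximate token count from character length."""
    return max(1, round(len(text) / CHARS_PER_TOKEN))

def estimate_input_cost(text, price_per_1k_input):
    """Approximate input cost in USD at a given per-1k-token price."""
    return estimate_tokens(text) / 1000 * price_per_1k_input
```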

    • sheepscreek 18 hours ago |
      It’s fascinating that Amazon is investing heavily in Anthropic while simultaneously competing with them.
      • panabee 18 hours ago |
        Amazon is a retailer and strives to offer choice, whether of books or compute services.

        AWS is the golden goose. If Amazon doesn't tie up Anthropic, AWS customers who need a SOTA LLM will spend on Azure or GCP.

        Think of Anthropic as the "premium" brand -- say, the Duracell of LLMs.

        Nova is Amazon's march toward a house brand, Amazon Basics if you will, that minimizes the need for Duracell and slashes cost for customers.

        Not to mention the potential benefits of improving Alexa, which has inexcusably languished despite popularizing AI services.

        :Edited for readability

        • coredog64 5 hours ago |
          Minor nit: These days I think Ads has taken over as the golden goose, but that doesn’t diminish the contributions of AWS.
          • jazzyjackson 3 hours ago |
            Is that why Amazon's product search is terrible? Because it's more profitable for them when I scroll through 5 pages of junk than if I can navigate immediately to the thing I want?
      • dotBen 17 hours ago |
        It’s fascinating that Amazon Web Services have so many overlapping and competing services to achieve the same objective. Efficiency/small footprint was never their approach :D

        For example, look how many different types of database they offer (many achieve the same objective but different instantiation)

        https://aws.amazon.com/products/?aws-products-all.sort-by=it...

        • bushbaba 15 hours ago |
          To quote, “right tool for right job”.
        • UltraSane 5 hours ago |
          Soon AWS is going to need an LLM just to recommend what service a customer should use.
          • htrp 5 hours ago |
            Let me tell you about Amazon Q
        • Jfurrrio2 5 hours ago |
          They are not competing; those are offerings. "AWS has many offerings" is a completely different thing from saying they compete against each other.
      • donavanm 11 hours ago |
        As others said, the product isn't the model, it's the API-based token usage. Happily selling whatever model you need, with easy integrations with the rest of your AWS stack, is the entire point.
    • Havoc 18 hours ago |
      Doesn’t look particularly favourable versus DeepSeek and Qwen. The main DeepSeek model is about the same price as the smallest Nova.

      I guess it depends on how sensitive your data is

    • jerrygoyal 15 hours ago |
      Does anyone know of any performance benchmarks?
  • indigodaddy a day ago |
    Unfortunate that this seems to be inextricably tied to Amazon Bedrock though in order to use it..
  • jklinger410 a day ago |
    It's really amusing how bad Amazon is at writing and designing UI. For a company of their size and scope it's practically unforgivable. But they always get away with it.
    • smt88 a day ago |
      You say they "get away with it," but it makes more sense to conclude that UI design has a lot lower ROI than we assume it does as users.
      • wilg a day ago |
        Or that design instincts are backwards
      • wavemode a day ago |
        You can't conclude that.

        At best, you can conclude that outdated product design doesn't always ruin a business (clearly). But you can't conclude the inverse (that investing in modern product design doesn't ever help a business).

        • handfuloflight a day ago |
          That's a great point. Further, there are many sizeable businesses built on top of AWS where they deliver the abstractions with compression that earns them their margin.

          Case in point: tell me, from the point of view of the user, how many steps it takes to deploy a NextJS/React ecosystem website with Vercel and with AWS, start to finish.

        • rrrrrrrrrrrryan a day ago |
          I think they have plenty of competition in the cloud computing space. It seems fair to say that their strategy of de-prioritizing UI/UX in favor of getting features out the door more quickly and cheaply has benefitted them.

          However, I don't think it's fair to say that this trade-off always wins out. Rather, they've carved out their own ecological niche and, for now, they're exploiting it well.

      • nekoashide a day ago |
        Oh I'm sure. The ACM UI was impossible to use for years if you wanted to find certificates. They improved it, but it will never have the same level of functionality that the API gives you, and that's the bread and butter.
        • dandrew5 21 hours ago |
          Imagine a native desktop app that let you build a UI with very basic elements, à la Visual Basic, and behind each of those elements is an associated AWS CLI command. Such that "aws s3 ls" attached to a list element would render an account's buckets.

          The AWS APIs are so expansive, a product like this could offer a complete replacement for the default web console and maybe even charge for it. Does anyone know if such a solution exists? Perhaps some more generic "shell-to-ui" application? If not, I'm interested in building one if anybody would like to contribute.
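The core of such a tool is just running a CLI command and parsing its output into list items. A minimal sketch; the `aws s3 ls` output format is assumed here to be the CLI's usual `date time name` columns, and the live call obviously requires the AWS CLI and credentials.

```python
import subprocess

def parse_s3_ls(output):
    """Turn `aws s3 ls` output lines ("2024-01-01 12:00:00 my-bucket")
    into a list of bucket names (the last whitespace-separated column)."""
    return [line.split()[-1] for line in output.splitlines() if line.strip()]

def list_buckets():
    """Run the real CLI and feed its stdout to the parser.
    Requires the AWS CLI installed and configured credentials."""
    out = subprocess.run(["aws", "s3", "ls"],
                         capture_output=True, text=True, check=True)
    return parse_s3_ls(out.stdout)
```

A generic "shell-to-ui" app would be this pattern repeated: one parser per command, each bound to a widget.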

    • Bilal_io a day ago |
      That happens when you ask SWEs to design. To fix this, Amazon will need to do extensive UX research and incrementally make changes until the UI doesn't look the same and is more usable, because users hate sudden change.
    • MudAndGears a day ago |
      Everyone I know who's worked at Amazon says Jeff Bezos has his hand in everything to the detriment of product design.

      I've heard multiple accounts of him seeing a WIP and asking for changes that compromise the product's MVP.

      • queuebert a day ago |
        Is this why everything besides the front page still looks like someone's first website from 1998?
      • genghisjahn a day ago |
        "Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company."

        https://gist.github.com/kislayverma/d48b84db1ac5d737715e8319...

        I read that post every couple of years or so.

        • foundry27 21 hours ago |
          Why was this being downvoted? It’s first-party evidence that substantiates the claims of the parent comment, and adds interesting historical context from major industry players
    • acdha a day ago |
      What do you think is comparable but better? I think you’re really seeing that they have a large organization with a lot of people working on different complex products, which makes major changes much harder to coordinate, and their market skews technical and prioritizes functionality higher than design so there isn’t a huge amount of pressure.
    • jgalt212 a day ago |
      It's similarly amazing how long the YouTube home page takes to load, but it's a top 5 destination no matter how bad its Lighthouse Score is.
    • outworlder a day ago |
      > It's really amusing how bad Amazon is at writing and designing UI.

      For most of AWS offerings, it literally doesn't matter and logging in to AWS Console is a break glass thing.

      Case in point: this very article. It uses boto3 to interface with AWS.

      • trallnag a day ago |
        At the same time, AWS has tons of services that are explicitly designed for usage through the console. For example, many features of CloudWatch
        • oblio 21 hours ago |
          Those are probably implemented by interns.
  • zapnuk a day ago |
    They missed a big opportunity by not offering EU-hosted versions.

    That's a big thing for compliance. All LLM providers reserve the right to save prompts (for up to 30 days) and inspect them for their own compliance.

    However, this means that company data is potentially stored outside your own cloud. This is already problematic, even more so when the storage location is outside the EU.

    • Tepix a day ago |
      I'm not sure if hosting it in the EU will do any good for Amazon, there's still the US CLOUD Act: It doesn't really matter where the data is located.
      • physicsguy 11 hours ago |
        It makes a really big difference for anyone doing business in Europe though.

        Legally we're only allowed to use text-embeddings-3-large at work because Azure don't host text-embeddings-3-small within a European region.

  • diggan a day ago |
    > The model processes inputs up to 300K tokens in length [...] up to 30 minutes of video in a single request.

    I wonder how fast it "glances" through an entire 30-minute video, and how long it takes until the first returned token. Anyone wager a guess?

  • potlee a day ago |
    > The Nova family of models were trained on Amazon’s custom Trainium1 (TRN1) chips, NVidia A100 (P4d instances), and H100 (P5 instances) accelerators. Working with AWS SageMaker, we stood up NVidia GPU and TRN1 clusters and ran parallel trainings to ensure model performance parity

    Does this mean they trained multiple copies of the models?

    • glomgril 19 hours ago |
      Models like this are experimentally pretrained or tuned hundreds of times over many months to optimize the datamix, hyperparams, architecture, etc. When they say "ran parallel trainings" they are probably referring to parity tests that were performed along the way (possibly also for the final training runs). Different hardware means different lower-level libraries, which can introduce unanticipated differences. Good to know what they are so they can be ironed out.

      Part of it could also be that they'd prefer to move all operations to the in-house trn chips, but don't have full confidence in the hardware yet.

      Def ambiguous though. In general reporting of infra characteristics for LLM training is left pretty vague in most reports I've seen.

  • jmward01 a day ago |
    No audio support: The models are currently trained to process and understand video content solely based on the visual information in the video. They do not possess the capability to analyze or comprehend any audio components that are present in the video.

    This is blowing my mind. gemini-1.5-flash accidentally knows how to transcribe amazingly well but it is -very- hard to figure out how to use it well, and now Amazon comes out with a Gemini Flash-like model that explicitly ignores audio. It is so clear that multi-modal audio would be easy for these models, but it is like they are purposefully holding back releasing it/supporting it. This has to be a strategic decision to not attach audio, probably because the margins on ASR are too high to strip with a cheap LLM. I can only hope Meta will drop a multi-modal audio model to force this soon.

    • xendo a day ago |
      They also announced speech to speech and any to any models for early next year. I think you are underestimating the effort required to release 5 competitive models at the same time.
    • plumeria 16 hours ago |
      Is Gemini better than Whisper for transcribing?
      • jmward01 an hour ago |
        'better' is always a loaded term with ASR. Gemini 1.5 Flash can transcribe for about $0.01/hour of audio and gives strong results. If you want timing and speaker info you need to use the previous version and a -lot- of tweaking of the prompt, or else it will hallucinate the timing info. Give it a try. It may be a lot better for your use case.
  • adt a day ago |
  • zacharycohn a day ago |
    I really wish they would left-justify instead of center-justify the pricing information so I'm not sitting here counting zeroes and trying to figure out how they all line up.
  • Super_Jambo a day ago |
    No embedding endpoints?
  • lukev a day ago |
    This is a digression, but I really wish Amazon would be more normal in their product descriptions.

    Amazon is rapidly developing its own jargon such that you need to understand how Amazon talks about things (and its existing product lineup) before you can understand half of what they're saying about a new thing. The way they describe their products seems almost designed to obfuscate what they really do.

    Every time they introduce something new, you have to click through several pages of announcements and docs just to ascertain what something actually is (an API, a new type of compute platform, a managed SaaS product?)

    • Miraste a day ago |
      That may be generally true, but the linked page says Nova is a series of foundation models in the first sentence.
      • lukev 21 hours ago |
        Yeah but even then they won't describe it using the same sort of language that everyone else developing these things does. How many parameters? What kind of corpus was it trained on? MoE, single model, or something else? Will the weights be available?

        It doesn't even use the words "LLM", "multimodal" or "transformer" which are clearly the most relevant terms here... "foundation model" isn't wrong but it's also the most abstract way to describe it.

        • meta_x_ai 21 hours ago |
          None of those matters (except multimodal). If you are running a business, the only thing that matters is

          a) How does it perform on my set of evals

          b) What is the cost/latency of serving it to my consumers.

          It shouldn't matter to me how many parameters, corpus it is trained on, whether it's LLM or Transformer or something else

          • marcosdumay 5 hours ago |
            > How does it perform on my set of evals

            What kinds of eval? Personally, I have no idea what kind of data you can throw at a "foundation model" and what kind of response you will get.

            The only thing it says is that there's machine learning involved... Once you get enough context to understand it's not a spin-off of a TV series.

        • alach11 20 hours ago |
          > How many parameters? What kind of corpus was it trained on?

          It's rare for the leading model providers to answer these questions.

          As someone who applies these models daily, I agree with the dead comment from meta_x_ai. Your questions are interesting/relevant to a person developing these models, but less important to the average person utilizing these models through Bedrock.

          • jnwatson 16 hours ago |
            Amazon is not a "leading model provider".
    • kvakvs a day ago |
      Amazontalk: We will save you costs.
      Human language: We will make a profit while you think you're saving costs.

      Amazontalk: You can build on <product name> to analyze complex documents...
      Human language: There is no product, just some DIY tools.

      Amazontalk: Provides the intelligence and flexibility.
      Human language: We will charge your credit card in multiple obscure ways, and we'll be smart about it.

    • oblio 21 hours ago |
      Once upon a time there were (and still are) mainframes (and SAP is similar in this respect). These insular systems came with their own tools, their own ecosystem, their own terminology, their own certifications, etc. And you could rent compute & co on them.

      If you think of clouds as cross-continent mainframes, a lot more things make sense.

      • danielmarkbruce 19 hours ago |
        "distributed mainframes".
    • foobarian 16 hours ago |
      If you figure out what a security group is, let me know :-D
      • rsrsrs86 16 hours ago |
        Lol

        What’s the subnet of the security group of my user group for Aws lambda application in a specific environment that calls kms to get a secret for….

  • TheAceOfHearts a day ago |
    They really should've tried to generate better video examples, those two videos that they show don't seem that impressive when you consider the amount of resources available to AWS. Like what even is the point of this? It's just generating more filler content without any substance. Maybe we'll reach the point where video generation gets outrageously good and I'll be proven wrong, but right now it seems really disappointing.

    Right now when I see obviously AI generated images for book covers I take that as a signal of low quality. If AI generated videos continue to look this bad I think that'll also be a clear signal of low quality products.

  • ndr_ 21 hours ago |
    Setting up AWS so you can try it via Amazon Bedrock API is a hassle, so I made a step-by-step guide: https://ndurner.github.io/amazon-nova. It's 14+ steps!
    • simonw 21 hours ago |
      Thank you!
    • teruakohatu 20 hours ago |
      Thanks for that. Are there any proxies that can communicate with bedrock and serve it via a OpenAI style api?
      • moduspol 19 hours ago |
        You'd have to deploy it yourself, but there's this:

        https://github.com/aws-samples/bedrock-access-gateway

        • teruakohatu 19 hours ago |
          Thanks. That is quite a heavy stack!
      • popinman322 17 hours ago |
        Try LiteLLM; their core LLM proxy is open source. As an added bonus it also supports other major providers.
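A sketch of what that looks like with LiteLLM. The `bedrock/` model prefix is LiteLLM's routing convention; the Nova model ID here is assumed, and the actual call needs `pip install litellm` plus configured AWS credentials.

```python
def bedrock_model(model_id):
    """LiteLLM routes to Bedrock via a 'bedrock/' prefix on the model name."""
    return f"bedrock/{model_id}"

def ask(prompt, model_id="us.amazon.nova-lite-v1:0"):
    """OpenAI-style chat call proxied to Bedrock.
    Requires litellm and AWS credentials; the model ID is an assumption."""
    import litellm  # imported lazily so the pure helper above works without it
    resp = litellm.completion(
        model=bedrock_model(model_id),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```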
    • OJFord 18 hours ago |
      Your 14 steps appear to be 'create an IAM user'..?
      • Spivak 18 hours ago |
        If you're already in the AWS ecosystem or have worked in it, it's no problem. If you're used to "make OpenAI account, add credit card, copy/paste API key" it can be a bit daunting.
      • scosman 17 hours ago |
        find a supported region, request model access, wait for model access, create a policy, create a user, attach the policy... it's not comparable
      • weitendorf 17 hours ago |
        AWS does not use the exact same authn/authz/identity model or terminology as other providers, and for people familiar with other models, it's pretty non-trivial to adapt to. I recently posted a rant about this to https://www.reddit.com/r/aws/comments/1geczoz/the_aws_iam_id...

        Personally I am more familiar with directly using API keys or auth tokens than AWS's IAM users (which are more similar to what I'd call "service accounts").

    • SaggyDoomSr 17 hours ago |
      Nice! FWIW, The only nova model I see on the HuggingFace user space page is us.amazon.nova-pro-v1:0. I cloned the repo and added the other nova options in my clone, but you might want to add them to yours. (I would do a PR, but... I'm lazy and it's a trivial PR :-)).
      • ndr_ 12 hours ago |
        OK! I only add what people are interested in, so noted with thanks - will do! :-)
    • tootie 16 hours ago |
      I'm so confused about the value prop of Bedrock. It seems like it wants to be guardrails for implementing RAG with popular models, but it's not the least bit intuitive. Is it actually better than setting up a custom pipeline?
      • ndr_ 12 hours ago |
        The value I get is: 1) one platform, largely one API, several models, 2) includes Claude 3.5 "unlimited" pay-as-you-go, 3) part of our corporate infra (SSO, billing, ... corporate discussions are easier to have)

        I'm using none to very little of the functionality they have added recently: not interested in RAG, not interested in Guardrails. Just Claude access, basically.

    • fumeux_fume 15 hours ago |
      This appears to be a way to steal and harvest aws credentials. No one should be following any of these steps.
      • ndr_ 12 hours ago |
        Do you have any evidence for this accusation?

        This is a guide for the casual observer who wants to try things out, given that getting started with other AI platforms is so much more straightforward. It's all open source, with transparent hosting, catering to any remaining concerns someone interested in exactly that may have.

        • placardloop 12 hours ago |
          The most common way for an AWS account to be hacked, by far, is mishandling of AWS IAM user credentials. AWS has even gone so far as to provide multiple warnings in the AWS console that you should never create long-lived IAM user credentials unless you really need to do so and really know what you are doing (aka not a “casual observer who wants to try things out”).

          This blog post encourages you to do this known dangerous thing, instructs you to bypass these warnings, and then paste these credentials into an untrusted app that is made up of 1000+ lines of code. Yes, the 1000+ lines of code are available for a security audit, but let’s be real: the “casual observer who wants to try things out” is not going to actually review all (if any) of the code, and likely not even realize they should review it.

          I give kudos to you for wanting to be helpful, but the instructions in this blog (“do this dangerous thing, but trust me it’s okay, and then do this other dangerous thing, but trust me it’s okay”) is exactly what nefarious actors would ask of unsuspecting victims, too, and following such blog posts is a practice that should not be generally encouraged.

        • xendo 11 hours ago |
          Sharing your IAM credentials is like sharing your password. Just don't do it, regardless of the intentions. Even if this one doesn't steal anything, it sets a precedent that will let people think it's ok and make them easier targets in the future. Besides, Bedrock already has a console, so what's the point of using your UI?
    • d4rkp4ttern 6 hours ago |
      Setting up Azure LLM access is a similarly hellish process. I learned after several days that I had to look at the actual endpoint URL to determine how to set the “deployment name” and “version” etc.
  • smallnix 20 hours ago |
    Do these work with the bedrock converse API?
    • dheerkt 13 hours ago |
      Yeah, the Converse API supports all models on Bedrock, or at least all the text-to-text ones.
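For reference, a minimal Converse call against Nova might look like this. A sketch: the model ID is an assumed inference-profile name, and real AWS credentials plus granted model access are required for the call itself.

```python
def build_messages(prompt):
    """The Converse API takes role/content blocks rather than a bare string."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_nova(prompt, model_id="us.amazon.nova-lite-v1:0", region="us-east-1"):
    """One-shot text completion through Bedrock's Converse API.
    Requires `pip install boto3` and configured AWS credentials."""
    import boto3  # imported lazily so the pure helper above works without it
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 256},
    )
    return resp["output"]["message"]["content"][0]["text"]
```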
  • mrg3_2013 19 hours ago |
    DOA

    When marketing talks about price delta and not the quality of the output, it's DOA. For LLMs, quality is the more important metric, and Nova would be playing catch-up with the leaderboard forever.

    • xnx 17 hours ago |
      Maybe. The major models seem to be about tied in terms of quality right now, so cost and ease of use (e.g. you already have an AWS account set up for billing) could be a differentiator.
      • mrg3_2013 17 hours ago |
        Using LLMs via Bedrock is 10x more painful than using direct APIs. I could see cost consolidation via the cloud marketplace as a play, but I don't see Amazon's own LLM initiatives ever taking off. They should just close those shops and buy one of the frontier models (while it's still cheap)
  • m3kw9 16 hours ago |
    Using Amazon or google cloud api and forgot about it? Surprise bill in a few months.
  • astoilkov 12 hours ago |
    Any ideas on how to use the new models through JavaScript in the browser or Node.js?
  • siquick 12 hours ago |
    Is there any difference in latency when calling models via Bedrock vs calling the providers APIs directly?
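One way to answer that empirically is to wrap both calls in a small timer and compare. A sketch; pass any client call (Bedrock or direct API) as the function, and run it several times to average out network noise.

```python
import time

def time_call(fn, *args, **kwargs):
    """Return (result, elapsed_seconds) for a single call.
    Run repeatedly and take the median to smooth out network jitter."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```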