I've been working extensively with Large Language Models (LLMs) for the past couple of years, and during this time, I've had the opportunity to use almost every provider on the market. This includes OpenAI, Anthropic, Together, Fireworks, Azure OpenAI, Google Gemini, and, of course, AWS Bedrock. I've also been an AWS user for over a decade. However, something has become strikingly clear to me since I started adopting AWS Bedrock for one of our clients. While AWS Bedrock (and AWS in general) is incredibly powerful, its onboarding process is significantly more complex than that of other LLM providers.
Let's say you just want to experiment with a new model, like Nova, on AWS. Here's a simplified version of the steps involved:
- Set up the proper IAM role and permissions.
- Generate an access key.
- Choose the foundation models you want to use.
- Submit an access request.
- Wait for approval.
- Open the playground and select the model.
- Finally, you can start chatting.
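And even after all that, saying "Hello" from code takes a bit of ceremony. Here's a minimal sketch of what that first chat looks like with boto3's Bedrock Runtime Converse API, assuming your credentials and model access are already in place. The model ID `amazon.nova-lite-v1:0` and region are illustrative; check the Bedrock console for the ones available to your account.

```python
def build_messages(prompt: str) -> list:
    """Shape a user prompt into the Converse API message format."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def chat(prompt: str, model_id: str = "amazon.nova-lite-v1:0") -> str:
    # Requires `pip install boto3` plus configured AWS credentials
    # and an approved model-access request for this model ID.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(modelId=model_id, messages=build_messages(prompt))
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(chat("Hello"))
```

Note that none of this works until the IAM permissions and the model-access request from the steps above have gone through, which is exactly the point.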
That's a lot of hoops to jump through just to say "Hello" to an LLM! In contrast, many other providers have adopted a much simpler approach.
For example, with Google Gemini:
- Open the playground.
- Get an API key.
- Choose a model.
- Start chatting.
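The whole flow fits in a few lines of Python. This is a sketch assuming you have an API key from Google AI Studio in the `GEMINI_API_KEY` environment variable; the model name `gemini-1.5-flash` is illustrative.

```python
import os

def chat(prompt: str, model_name: str = "gemini-1.5-flash") -> str:
    # Requires `pip install google-generativeai` and an API key
    # from Google AI Studio -- no cloud console setup needed.
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel(model_name)
    return model.generate_content(prompt).text

if __name__ == "__main__":
    print(chat("Hello"))
```

No IAM roles, no access requests, no waiting for approval: get a key, pick a model, chat.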
In fact, Google even lets you use the Gemini API for free in many cases, which is amazing for experimentation and quick prototyping.