"Everyone can build AI apps" is the motto of AWS PartyRock, the first AWS service hosted outside the AWS console. For context, AWS has several AI-related offerings:
Amazon SageMaker: It allows you to host your own machine learning models.
Amazon Bedrock: It's a fully managed service provided by Amazon that enables you to make API calls to access models hosted on AWS.
AWS PartyRock: It's a one-of-a-kind playground. It's a no-code environment where you can drag and drop widgets to build AI apps. To put it more precisely, you can create a full app from a single prompt.
AWS PartyRock is by far the easiest way to create your own GenAI apps. It's a no-code solution where anyone can drag and drop widgets to assemble an AI app. Under the hood, PartyRock uses models deployed in Bedrock, exposing a subset of them, so let's delve into those models to understand the details. When you create apps with PartyRock, it's important to be aware of the underlying Large Language Models (LLMs): selecting the right model improves efficiency and performance. And if you build a quick MVP with PartyRock and later move it to Bedrock, choosing the optimal model also keeps your costs down.
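To make "moving to Bedrock" concrete, here is a minimal sketch of building a request body for an Anthropic model via the Bedrock runtime API. The prompt framing and `max_tokens_to_sample` follow the Bedrock Anthropic text API, but the model ID and parameter defaults are assumptions you should verify in your region:

```python
import json

# Request body for Anthropic text models on Bedrock. The Human/Assistant
# prompt framing and "max_tokens_to_sample" follow Bedrock's Anthropic
# text API; defaults here are illustrative.
def build_claude_request(prompt: str, max_tokens: int = 256) -> str:
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    })

# Actually invoking the model requires AWS credentials and Bedrock access:
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(
#     modelId="anthropic.claude-instant-v1",  # verify availability in your region
#     body=build_claude_request("Summarize this support ticket: ..."),
# )
```

The same pattern (build a provider-specific JSON body, call `invoke_model`) applies to every model discussed below; only the body shape changes.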
Claude
Claude is an LLM developed by Anthropic. It relies on an approach called Constitutional AI, a technique for training AI systems, specifically LLMs, to be harmless and helpful without relying solely on human feedback. PartyRock exposes two models from this family: Claude and Claude Instant. Claude Instant is a lighter, faster, and less expensive variant that trades some capability for latency, which makes it well suited to real-time responses. Claude models are geared towards AI-assistant use cases, where the assistant aims to be helpful, honest, and harmless.
Claude is suited to general consumer use cases, while Claude Instant fits both consumer and enterprise use cases in customer service, education, and healthcare. In summary, Claude Instant supports fast dialogue, summarization, and text analysis, making it ideal for customer support and quick content creation. Claude, on the other hand, supports more creative dialogue, technical discussions, and content generation for education.
Jurassic-2
AI21 Labs has introduced the Jurassic-2 models, which are trained on a more recent dataset. They provide up to 30% faster responses than their predecessor, Jurassic-1, and add support for several non-English languages. Jurassic-2 also has advanced instruction-following capabilities. Two popular models in the series are Jurassic-2 Mid and Jurassic-2 Ultra. Jurassic-2 Mid offers enhanced text generation, handling generation tasks on complex topics. Jurassic-2 Ultra is the larger and more powerful model in the series, making it the better choice for complex language processing and generative text applications. Both Mid and Ultra support a maximum of 8,192 tokens.
In summary, Jurassic-2 Ultra supports advanced question and answer, summarization, draft generation, and complex reasoning, while Mid supports question and answer, summarization, draft generation, and information extraction. It is worth noting that Ultra is more expensive than Mid.
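As a concrete sketch, Jurassic-2 on Bedrock uses AI21's own parameter names (`maxTokens`, `topP`) rather than the Anthropic ones; the defaults below are illustrative, not recommendations:

```python
import json

# Jurassic-2 request body for Bedrock, using AI21's parameter names.
# Both Mid and Ultra cap at 8,192 tokens, so enforce that up front.
def build_jurassic_request(prompt: str, max_tokens: int = 300) -> str:
    if max_tokens > 8192:
        raise ValueError("Jurassic-2 models cap at 8,192 tokens")
    return json.dumps({
        "prompt": prompt,
        "maxTokens": max_tokens,
        "temperature": 0.7,
        "topP": 1.0,
    })

# modelId would be "ai21.j2-mid-v1" or "ai21.j2-ultra-v1" depending on
# whether cost or capability matters more (verify IDs in your region).
```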
Titan
Amazon Titan is another series of LLMs developed by Amazon, spanning high-performing text, image, and multimodal models. These models adhere to responsible AI principles, with built-in content filtering and input rejection mechanisms; safety is prioritized over other considerations. Titan Text Lite has a 4k max token limit, supports English, and its capabilities include summarization and copywriting. Titan Text Express, on the other hand, supports an 8k max token limit, with English plus 100+ other languages in preview. Its capabilities include retrieval-augmented generation, text generation, brainstorming, and question-and-answer chat.
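The 4k versus 8k ceilings matter once you parameterize requests. A minimal sketch, assuming the commonly listed Titan model IDs (verify them in your region) and Titan's `textGenerationConfig` body shape:

```python
import json

# Output-token ceilings from the Titan limits described above.
TITAN_LIMITS = {
    "amazon.titan-text-lite-v1": 4096,
    "amazon.titan-text-express-v1": 8192,
}

# Titan text models wrap generation parameters in "textGenerationConfig".
def build_titan_request(model_id: str, prompt: str, max_tokens: int) -> str:
    limit = TITAN_LIMITS[model_id]
    if max_tokens > limit:
        raise ValueError(f"{model_id} caps output at {limit} tokens")
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens, "temperature": 0.5},
    })
```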
Command
Cohere has released Command, a text generation model trained to follow user commands and be immediately useful in practical business applications. Command is positioned as a future-proof LLM that continuously improves. It has been trained on practical use cases that support reliable business applications such as summarization, copywriting, dialogue, extraction, and question answering. Command comes with a 4k max token limit, and its capabilities include advanced chat and text generation in English.
Llama 2
Meta has released the Llama 2 series, LLMs ranging from 7 billion to 70 billion parameters. These models are fine-tuned for chat and creative writing, with a focus on safety. The series includes Llama-2-13b-chat and Llama-2-70b-chat. Both have a 4k max token limit, support English, and are well suited to assistant-style chat.
Stable Diffusion XL
Stability AI has released Stable Diffusion XL, which provides strong text-to-image generation capabilities for use cases such as personalized image generation, advertising, and marketing. It has a 77-token prompt limit and is widely used in the advertising, media, and gaming industries.
I know you may want a summary of all models in one place. Here you go!
Model | Tasks | Description | Strengths | Weaknesses | Use when | Compute needs |
---|---|---|---|---|---|---|
Claude | Text generation | Anthropic model focused on safe conversational AI | Safe and consistent responses. Good for basic chat. | Limited capabilities beyond basic conversation. | You want a polite and harmless chatbot. | Low |
Claude Instant | Text generation | Claude model optimized for fast response generation | Very low latency text generation. Good for real-time chat. | Quality tradeoffs for lower latency. | You need real-time conversational AI. | Low |
Jurassic-2 Mid | Text generation | Mid-sized conversational model with safety constraints | Capable generation with strong safety. | Less creative than models without safety constraints. | You want safe conversational AI. | Medium |
Jurassic-2 Ultra | Text generation | Large conversational model with safety constraints | Most capable safe conversational AI today. Highly coherent. | Very large model size and compute needs. | You want cutting-edge safe conversational AI. | High |
Titan Text Lite | Text generation | Smaller Titan model optimized for efficiency | Fast generation speed with limited compute | Lower creativity and coherence | Need fast text generation on low compute | Low |
Titan Text Express | Text generation | Medium-sized Titan model balancing capabilities and efficiency | Good text generation with moderate compute | Lower quality than full Titan model | Want good text generation with moderate compute | Medium |
Command | Text generation | Large conversational AI model focused on usefulness | Highly capable text generation, knowledge, reasoning | Very large model size, high compute needs | Want cutting-edge capabilities from text AI | High |
Stable Diffusion XL | Image generation | Large diffusion model for photorealistic image generation | Highly realistic image generation. Creative freedom. | Can lack coherence. Very large model size. | You want to generate highly realistic images. | High |
Llama 2 Chat 13b | Text generation | Smaller conversational model from Meta | Safe conversational model with lower compute. | Less capable than larger models. | Want safe conversational AI on low compute. | Low |
Llama 2 Chat 70b | Text generation | Larger conversational model with strong safety | Very safe conversations. High coherence. | Large model size and compute requirements. | Want highly safe conversational AI. | High |
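If you want to encode a table like this in an app, a toy task-to-model lookup might look like the following. The Bedrock model IDs are the identifiers these models are commonly listed under, but they are assumptions to verify in your region:

```python
# Toy selector distilled from the comparison table above.
# Model IDs are assumptions; check Bedrock's model catalog before use.
MODEL_FOR_TASK = {
    "realtime-chat": "anthropic.claude-instant-v1",
    "complex-reasoning": "ai21.j2-ultra-v1",
    "cheap-summarization": "amazon.titan-text-lite-v1",
    "image-generation": "stability.stable-diffusion-xl-v1",
}

def pick_model(task: str) -> str:
    # Fall back to a general-purpose chat model for unlisted tasks.
    return MODEL_FOR_TASK.get(task, "anthropic.claude-v2")
```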
By now you may be overwhelmed with model details, so let's delve into a few use cases and work out which model best suits each one. Here, we'll concentrate on apps and ideas for Site Reliability Engineering (SRE).
Automating Incident Response
Claude or Jurassic-2 could be used for automating incident response. Their natural language capabilities allow them to analyze incident tickets, logs, and metrics to determine root causes and suggest remediations.
Claude and Jurassic excel in NLP, ingesting and comprehending unstructured text data for incident management. Their language understanding, natural language generation, and adaptability to diverse domains make them ideal for complex, real-world incident response workflows.
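A hypothetical triage prompt template for such a workflow might look like this; the field names (`service`, `error_log`) are illustrative, not from any real incident schema:

```python
# Illustrative incident-triage prompt; field names are hypothetical.
INCIDENT_PROMPT = """You are an SRE assistant. Given the incident below, \
identify the likely root cause and suggest one remediation.

Service: {service}
Recent errors:
{error_log}
"""

def triage_prompt(service: str, error_log: str) -> str:
    # Fill the template; the result would be sent to Claude or Jurassic-2.
    return INCIDENT_PROMPT.format(service=service, error_log=error_log)
```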
Capacity planning and forecasting
Titan could help with capacity planning and forecasting. Its numerical reasoning skills enable it to model future workload needs based on historical data.
Titan excels in numerical reasoning, analyzing historical timeseries data for capacity planning. Its skills in temporal modeling, statistical reasoning, and natural language generation make it ideal for complex forecasting tasks in SRE.
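One way to hand a text model historical data is to serialize it into the prompt. A minimal sketch, where the date,count CSV framing is an assumption about how you might structure the input:

```python
# Serialize daily request counts into a forecasting prompt for a text
# model such as Titan Text Express. The CSV framing is illustrative.
def forecast_prompt(history: dict, horizon_days: int) -> str:
    rows = "\n".join(f"{day},{count}" for day, count in sorted(history.items()))
    return (
        "Given daily request counts (date,count):\n"
        f"{rows}\n"
        f"Estimate peak daily load over the next {horizon_days} days "
        "and recommend a capacity plan, showing your reasoning."
    )
```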
Automating infrastructure provisioning and configuration
Command is well-suited for automating infrastructure provisioning and configuration. It can translate high-level infrastructure requests into detailed scripts and configurations. Command excels in translating natural language infrastructure requests into executable scripts, proficiently handling parsing, code completion, and error handling. Its versatility with various tools ensures tailored scripts for diverse infrastructure stacks, emphasizing automation and best practices.
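A sketch of wrapping such a request for Cohere Command on Bedrock; the `max_tokens` and `temperature` names follow Cohere's text API, while the Terraform framing is purely illustrative:

```python
import json

# Wrap a natural-language infrastructure request for Cohere Command.
# Low temperature keeps generated configuration more deterministic.
def build_command_request(request: str) -> str:
    prompt = (
        "Translate this infrastructure request into a Terraform snippet. "
        "Output only code.\n\nRequest: " + request
    )
    return json.dumps({"prompt": prompt, "max_tokens": 400, "temperature": 0.2})
```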
Monitoring and alert tuning
Llama2 could assist with monitoring and alert tuning. It can analyze metrics, logs, and alerts to identify noisy or redundant alerts and suggest optimizations. Llama2 excels in natural language processing, pattern detection, and cross-data correlation, enhancing its capacity to optimize alerts and monitoring. Its adaptability ensures sustained relevance in dynamically changing system conditions.
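A hedged sketch of the kind of input you might assemble for alert review; the weekly-count framing and field names are illustrative, not from any real monitoring export:

```python
# Summarize alert firing counts (noisiest first) so a chat model like
# Llama 2 can flag noisy or redundant alerts. Framing is illustrative.
def alert_review_prompt(alert_counts: dict) -> str:
    lines = "\n".join(
        f"- {name}: fired {n} times this week"
        for name, n in sorted(alert_counts.items(), key=lambda kv: -kv[1])
    )
    return (
        "Review these alert firing counts and identify alerts that look "
        "noisy or redundant, with suggested threshold changes:\n" + lines
    )
```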
Document and Diagram creation
Stable Diffusion XL's generative capabilities can be leveraged for documentation and diagram generation. It can create diagrams depicting architecture, infrastructure, or workflows based on text prompts. Stable Diffusion XL, a generative AI model, excels in creating diverse and detailed visuals for documentation, offering flexibility, coherence, and precise control. Its open-source nature ensures transparency and enhances automated documentation for SRE teams.
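Stable Diffusion XL on Bedrock takes a list of weighted text prompts. The sketch below uses typical defaults for `cfg_scale` and `steps` (assumptions, not tuned values) and keeps prompts short because of the 77-token limit:

```python
import json

# Request body for SDXL on Bedrock. cfg_scale and steps are common
# starting values, not recommendations from AWS.
def build_sdxl_request(prompt: str) -> str:
    return json.dumps({
        "text_prompts": [{"text": prompt, "weight": 1.0}],
        "cfg_scale": 8,
        "steps": 30,
    })

# Example prompt for an SRE diagram:
# build_sdxl_request("clean architecture diagram of a three-tier web app")
```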
We are reaching the final section of my post. I hope you now have a good understanding of some high-profile LLMs, their detailed capabilities, and potential use cases in the world of SRE. Let's now explore best practices for selecting an LLM.
- Begin by clearly defining your use case and target output. Provide specific details, outline tasks, data requirements, and performance expectations. Evaluate LLMs against these criteria.
- When assessing LLM capabilities, delve into aspects such as training data objectives, strengths, and limitations. Ensure a thorough understanding of what you are getting into.
- Conduct technical validation to ascertain the LLM's fitness for your purpose. Test the model with sample or real data to validate your use cases.
- Consider the ethical and legal aspects associated with AI. Be cautious and ensure thorough consideration of these factors.
- Lastly, while some LLMs offer great capabilities, assess their commercial viability before diving in. Understand the associated costs and evaluate the feasibility of usage.
Finally, here are a few common pitfalls you may want to avoid. I won't delve into the science below, but I hope you grasp these points:
- Choosing a trendy or popular LLM may not be the best fit for your requirements. Be mindful of that.
- Larger LLMs are not always the solution. Sometimes a smaller LLM may do a better job.
- Be vigilant about LLM limitations and cautious of biases and ethical risks.
- Occasionally, accuracy outweighs latency in importance.
- Open source is not always the best, but you also want to avoid vendor lock-in.
- LLM observability is a significant part of continuous refinement; plan in advance.