Nitya Narasimhan, Ph.D for Microsoft Azure


Fine-Tuning Fundamentals - Generative AI For Beginners (v2)

3 Resources to Jumpstart Your Generative AI Journey:

  1. Generative AI for Beginners Curriculum
  2. Build Generative AI Apps Code-First With Azure AI
  3. Responsible AI Resources For Developers

Welcome to the seventh post in my This Week in AI News series! Today, I want to continue from my previous post on Generative AI For Beginners and move on from prompt engineering to the related topic of fine-tuning. Let's dive in!


Generative AI For Beginners (v2)

In the previous post, I covered the v1 edition of this open-source curriculum (chapters 1-12) released in Oct 2023 - with specific focus on "Prompt Engineering Fundamentals", the chapter I contributed.

In today's post, I'll dive into the v2 extension to the curriculum that was just released in Feb 2024 (chapters 13-18) - and focus in more detail on the "Fine Tuning" chapter I contributed. But first, let's take a look at what's new in v2:

GenAI For Beginners v2

| Lesson | Description |
| --- | --- |
| Securing Generative AI | Covers the adversarial threat landscape for AI and looks at options for security testing, data protection, and safety evaluation (including red teaming). |
| Generative AI App Lifecycle | Covers the paradigm shift from MLOps to LLMOps and explores the workflow and tools to streamline end-to-end development. |
| Retrieval Augmented Generation | Covers a core technique to improve LLM response quality by grounding it in your own data, using embeddings and vector databases. |
| Open-Source Models | Covers the benefits of open-source models like Llama 2, Mistral and Falcon, and the value of model hubs like Hugging Face for discovery & integration. |
| AI Agent Systems | Covers the evolution of AI apps from assistants (interactive, chat) to agents (autonomous, task execution), e.g., AutoGen, LangChain Agents & JARVIS. |
| Fine-Tuning Models | Covers the ability to retrain foundation models with new examples to improve response quality or reduce the usage cost & complexity of prompt engineering. |

From Prompt Engineering To Fine-Tuning

In the v1 edition, I talked about the value of prompt engineering to improve the quality of model responses to user questions. Specifically, we looked at ways to construct the prompt using techniques like few-shot learning, prompt templates, system prompts and more - all of which enhance the default user prompt with additional content or context to guide the LLM towards more relevant responses.
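To make that concrete, here is a minimal sketch (not code from the lesson) of how a system prompt and few-shot examples get packed into a single chat request, assuming the OpenAI Python SDK and a hypothetical review-classification task. Notice that every example you add is sent - and paid for - on every call.

```python
# Minimal few-shot prompting sketch (hypothetical task; OpenAI Python SDK assumed).
# Each added example improves guidance but also adds tokens to EVERY request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # System prompt: sets the model's role and expected output format
    {"role": "system", "content": "You classify product reviews as positive, negative, or neutral."},
    # Few-shot examples: user/assistant pairs that demonstrate the expected behavior
    {"role": "user", "content": "Review: 'Battery died in two days.'"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Review: 'Does exactly what it says, no complaints.'"},
    {"role": "assistant", "content": "positive"},
    # The actual user question we want answered
    {"role": "user", "content": "Review: 'Shipping was slow but the product is great.'"},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```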

But prompt engineering alone may not be enough:

  • Cost: Models have token limits and usage costs that constrain how much you can enhance the default prompt. This limits the number of examples you can add as primary content, or the richness of responses in completions.
  • Customization: You may want to add new skills or capabilities to the model that enhance its behavior across user interactions. Doing this on a per-prompt basis is inefficient and may not even be possible.

So, how can you take advantage of the rich language models available to you, while getting the cost-effective and customized user experiences you need? This is where fine-tuning models can help.


Fine-Tuning Fundamentals

In lesson 18 of this curriculum, we tackle this challenge by learning about Fine-Tuning Models using new data or examples. The lesson covers the following topics at a high level, providing related resources and a hands-on assignment for self-guided deeper dives. Check out the illustrated guide below for more detail on what each topic covers.

| Sub-Topic | Description |
| --- | --- |
| Introduction | Understand foundation models and related concepts for response quality |
| Motivation | Learn why fine-tuning matters and when to start exploring it as an option |
| Process | Understand the steps in a fine-tuning workflow, and the related challenges |
| Data Prep | Ensure you have the right data quantity and quality for fine-tuning |
| Training | Run the fine-tuning job, monitor progress, then test & iterate for quality |
| Deployment | Make your fine-tuned model available for real-world interactions, and know the constraints |
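To ground the Data Prep and Training rows above, here is a minimal sketch assuming the OpenAI fine-tuning API (Azure OpenAI offers an equivalent flow); the file name, model name, and classification task are hypothetical, and the lesson itself covers the workflow conceptually rather than prescribing this code.

```python
# Sketch of the Data Prep + Training steps (OpenAI fine-tuning API assumed).
# File name, model name, and the review-classification task are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

# Data Prep: each training example is one JSON line in chat format
examples = [
    {"messages": [
        {"role": "system", "content": "You classify product reviews as positive, negative, or neutral."},
        {"role": "user", "content": "Review: 'Battery died in two days.'"},
        {"role": "assistant", "content": "negative"},
    ]},
    # ...in practice you need many more high-quality, representative examples
]
with open("reviews.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Training: upload the training file, then launch and monitor the fine-tuning job
training_file = client.files.create(file=open("reviews.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

Once the job completes, the resulting fine-tuned model is addressed by its own model name in chat completion requests, just like the base model - which is where the Deployment step in the table picks up.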

Fine Tuning Sketchnote

Fine-tuning is a fascinating topic with great potential for experimentation and learning. Expect more updates to that lesson, with a focus on walkthroughs of assignments that apply these concepts to real-world use cases and models. For now, start by exploring the resources for self-guided learning provided in that chapter.


Summary & Next Steps

In this post, we looked at what the v2 edition of the Generative AI for Beginners curriculum provides, with a focus on the Fine-Tuning Fundamentals chapter that concludes it.

This is a fast-evolving space, so expect more updates to the curriculum - your feedback and contributions are welcome! Start your journey today by forking the repo to your profile and exploring the lessons at your own pace!

Happy learning!


