Disclaimer
This blog reflects my learnings from Augment your LLM Using Retrieval Augmented Generation by NVIDIA. The content is based on topics covered in the course and my understanding of them, illustrated through some practical examples. If you are not sure what RAG is, I suggest checking out my earlier blog:
Exploring RAG: Why Retrieval-Augmented Generation is the Future?
Introduction to Vector Embeddings
Vector embedding is the process of converting complex data, such as words, sentences, documents, or images, into numerical representations that computers can easily process. The data is passed to an embedding model, which in turn returns the embeddings.
Note: Different embedding models produce distinct results.
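To make the data-in, vector-out flow concrete, here is a minimal sketch. The `toy_embed` function below is a hypothetical stand-in, not a real embedding model: real models (for example, sentence-transformer or OpenAI embedding models) learn their dimensions from data and produce vectors with hundreds or thousands of dimensions.

```python
from collections import Counter

def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Hypothetical stand-in for an embedding model: buckets character
    counts into a fixed-length vector. Real models learn these dimensions."""
    vec = [0.0] * dims
    for ch, count in Counter(text.lower()).items():
        vec[hash(ch) % dims] += count
    # L2-normalize so vector magnitude doesn't just reflect text length
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

embedding = toy_embed("The quick brown fox")
print(len(embedding))  # fixed dimensionality, regardless of input length
```

Whatever the input length, the output has a fixed dimensionality, which is what lets vectors from different texts be compared against each other.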
Understanding Semantic Relationships
Vector embeddings allow machines to understand the relationships between different words based on their context and meaning. In essence, embeddings capture semantic similarity, placing related words closer together in the vector space and unrelated words farther apart.
Example
Consider the following words: “king,” “queen,” “man,” and “woman.”
When these words are processed using an embedding technique, they are transformed into vectors (simplified here to three dimensions for illustration):
- “king” → [0.8, 0.5, 0.3]
- “queen” → [0.7, 0.4, 0.6]
- “man” → [0.6, 0.2, 0.5]
- “woman” → [0.5, 0.1, 0.4]
The vectors for “king” and “queen” are close to each other, indicating they are both royal figures but differ in gender. Similarly, “man” and “woman” are also close, reflecting their relatedness as gender counterparts.
The closeness or distance between vectors reflects the semantic similarity or dissimilarity between words and phrases. This arrangement enables machines to comprehend relationships and meanings, allowing for better language understanding.
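This closeness is commonly measured with cosine similarity, which scores the angle between two vectors (1.0 means identical direction). Below is a minimal sketch using the illustrative three-dimensional vectors from the example above; the numeric results only reflect these toy values, not a real model.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

words = {
    "king":  [0.8, 0.5, 0.3],
    "queen": [0.7, 0.4, 0.6],
    "man":   [0.6, 0.2, 0.5],
    "woman": [0.5, 0.1, 0.4],
}

print(cosine_similarity(words["king"], words["queen"]))  # ~0.94, closely related
print(cosine_similarity(words["king"], words["woman"]))  # ~0.89, less related
```

With these toy vectors, "king" scores higher against "queen" than against "woman", mirroring the relationships described above.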
Analogy Using Vectors
If you compute the vector arithmetic “king” - “man” + “woman,” the resulting vector should be close to “queen.” In other words, the model has captured that “king” relates to “man” as “queen” relates to “woman,” demonstrating the semantic relationship between these terms.
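The analogy can be sketched with the same toy vectors. As in standard word2vec-style analogy evaluation, the three query words are excluded when searching for the nearest neighbor of the result vector.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

words = {
    "king":  [0.8, 0.5, 0.3],
    "queen": [0.7, 0.4, 0.6],
    "man":   [0.6, 0.2, 0.5],
    "woman": [0.5, 0.1, 0.4],
}

# king - man + woman, computed component-wise
target = [k - m + w for k, m, w in
          zip(words["king"], words["man"], words["woman"])]

# Find the nearest word to the result, excluding the query words themselves
candidates = {w: v for w, v in words.items() if w not in {"king", "man", "woman"}}
nearest = max(candidates, key=lambda w: cosine_similarity(target, candidates[w]))
print(nearest)  # queen
```

With only four words in this toy vocabulary the answer is inevitable, but with a real model's vocabulary the same arithmetic genuinely surfaces "queen" as a top neighbor.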
These embeddings are stored in a Vector Database, from which the most relevant ones are retrieved and passed to a Large Language Model (LLM) as context.
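At its core, a vector database performs nearest-neighbor search over stored embeddings. The class below is a hypothetical brute-force sketch of that idea, not a real database API: production systems (FAISS, Milvus, Pinecone, and similar) use approximate indexes to make this search fast at scale.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class ToyVectorStore:
    """Hypothetical brute-force stand-in for a vector database."""
    def __init__(self):
        self._items = []  # list of (text, vector) pairs

    def add(self, text: str, vector: list[float]) -> None:
        self._items.append((text, vector))

    def query(self, vector: list[float], top_k: int = 1) -> list[str]:
        # Rank every stored item by similarity to the query vector
        ranked = sorted(self._items,
                        key=lambda item: cosine_similarity(vector, item[1]),
                        reverse=True)
        return [text for text, _ in ranked[:top_k]]

store = ToyVectorStore()
store.add("royal male ruler", [0.8, 0.5, 0.3])
store.add("royal female ruler", [0.7, 0.4, 0.6])
print(store.query([0.75, 0.45, 0.55]))  # nearest stored chunk by cosine similarity
```

In a RAG pipeline, the user's question would be embedded the same way as the stored documents, and the top-k retrieved chunks would be prepended to the LLM prompt.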
Citation
I would like to acknowledge that I took help from ChatGPT to structure this blog, simplify the content, and generate relevant examples.