In Part 1, we set up PostgreSQL with pgvector. Now, let's see how vector search actually works.
Contents
- Prerequisites
- Understanding Vector Search
- Project Setup
- Exploring Vector Search
- Working with JSON and Vectors
- Performance Tips
- Resources
Prerequisites
- Completed Part 1 setup for pgvector
- OpenAI API key
Understanding Vector Search
A vector is a list of numbers that represents a position or direction:
2D vector: [x, y], like coordinates on a map
3D vector: [x, y, z], like a point in 3D space
When AI processes content, it creates special vectors called "embeddings" (here, 1536 dimensions) to represent meaning. These embeddings are stored in the database, allowing us to perform similarity search:
π "How to use Docker"
[0.23, 0.45, 0.12, ...] # 1536-dimensional vector
π "Docker tutorial"
[0.24, 0.44, 0.11, ...] π€ Very Similar! (Distance: 0.2)
π "Chocolate cake recipe"
[0.89, 0.12, 0.67, ...] π« Not Related! (Distance: 0.9)
- Vectors let AI understand similarity mathematically
- Vector search finds similar content by comparing distances
- pgvector stores embeddings efficiently
- Works across any language (it's all just numbers!)
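The idea above can be sketched in a few lines of plain Python. This is a toy illustration with made-up 3-dimensional vectors, not real 1536-dimensional embeddings:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

# Toy "embeddings" (real ones have 1536 dimensions)
docker_howto = [0.23, 0.45, 0.12]
docker_tut   = [0.24, 0.44, 0.11]
cake_recipe  = [0.89, 0.12, 0.67]

print(cosine_distance(docker_howto, docker_tut))   # small value: similar topics
print(cosine_distance(docker_howto, cake_recipe))  # larger value: unrelated
```

This is exactly what pgvector's distance operators compute for us inside the database, just much faster and at scale.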
Project Setup
Updated Project Structure
vector-search/
├── .env
├── compose.yml
├── requirements.txt
├── postgres/          # Part 1: Database setup
│   └── schema.sql
└── scripts/           # New: Data loading
    ├── Dockerfile
    └── load_data.py
1. Set Up OpenAI API
Create .env:
OPENAI_API_KEY=your_api_key # Get from platform.openai.com
2. Create Data Loading Script
Create scripts/load_data.py to fetch books and generate embeddings:
import json

import openai

client = openai.OpenAI()

def get_embedding(text: str):
    """Generate an embedding using the OpenAI API"""
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

def load_books_to_db():
    """Load books with embeddings into PostgreSQL"""
    # 1. Fetch books from the Open Library API
    books = fetch_books()
    for book in books:
        # 2. Create a text description for embedding
        description = f"Book titled '{book['title']}' by {', '.join(book['authors'])}. "
        description += f"Published in {book['first_publish_year']}. "
        description += f"This is a book about {book['subject']}."
        # 3. Generate the embedding using OpenAI
        embedding = get_embedding(description)
        # 4. Store the book and its embedding in PostgreSQL
        store_book(book["title"], json.dumps(book), embedding)
Full source code is available on GitHub
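For context, fetch_books() and store_book() could look roughly like this. The Open Library endpoint, response field names, and the items table columns below are assumptions for illustration, not the repository's exact code:

```python
import json
import os
import urllib.request

def parse_works(data: dict, subject: str) -> list:
    """Turn an Open Library subjects API response into plain book dicts."""
    return [
        {
            "title": work.get("title", ""),
            "authors": [a["name"] for a in work.get("authors", [])],
            "first_publish_year": work.get("first_publish_year"),
            "subject": subject,
        }
        for work in data.get("works", [])
    ]

def fetch_books(subject: str = "programming", limit: int = 20) -> list:
    """Fetch book metadata from the Open Library subjects API (URL assumed)."""
    url = f"https://openlibrary.org/subjects/{subject}.json?limit={limit}"
    with urllib.request.urlopen(url) as resp:
        return parse_works(json.load(resp), subject)

def store_book(name: str, item_data: str, embedding: list) -> None:
    """Insert one row into the items table from Part 1 (column names assumed)."""
    import psycopg2  # third-party driver, e.g. psycopg2-binary
    with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO items (name, item_data, embedding) VALUES (%s, %s, %s::vector)",
                (name, item_data, str(embedding)),
            )
```

pgvector accepts a vector literal in [...] text form, which is why the embedding list is passed as a string and cast with ::vector.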
Also create requirements.txt and scripts/Dockerfile.
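For reference, minimal versions of those two files might look like this; the exact package list and Dockerfile layout are assumptions, so check the repository for the real contents:

```
# requirements.txt
openai
psycopg2-binary
```

```
# scripts/Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY scripts/load_data.py .
CMD ["python", "load_data.py"]
```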
3. Update Docker Compose
Update compose.yml to add the data loader:
services:
  # ... existing db service from Part 1

  data_loader:
    build:
      context: .
      dockerfile: scripts/Dockerfile
    environment:
      - DATABASE_URL=postgresql://postgres:password@db:5432/example_db
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - db
    command: python load_data.py
4. Load Sample Data
docker compose up --build
Exploring Vector Search
First, connect to the database:
docker exec -it pgvector-db psql -U postgres -d example_db
Inspecting Embeddings
Check what the vectors look like:
-- View the first 5 dimensions of an embedding
SELECT
  name,
  (replace(replace(embedding::text, '[', '{'), ']', '}')::float[])[1:5] AS first_dimensions
FROM items
LIMIT 1;
💡 Each embedding from OpenAI's model:
- Has 1536 dimensions
- Contains values between -1 and 1
- Represents text meaning mathematically
- Is output in [...] format, which needs to be converted to PostgreSQL's {...} array format for array operations
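If you pull embeddings into application code instead, the same text output is easy to parse directly; a quick sketch:

```python
def pgvector_to_floats(vector_text: str) -> list:
    """Parse pgvector's text output, e.g. '[0.23,0.45,0.12]', into Python floats."""
    return [float(x) for x in vector_text.strip("[]").split(",")]

print(pgvector_to_floats("[0.23,0.45,0.12]")[:2])  # → [0.23, 0.45]
```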
Finding Similar Books
Search for books about web development:
WITH web_book AS (
  SELECT embedding FROM items WHERE name LIKE '%Web%' LIMIT 1
)
SELECT
  item_data->>'title' AS title,
  item_data->>'authors' AS authors,
  embedding <=> (SELECT embedding FROM web_book) AS distance
FROM items
ORDER BY distance
LIMIT 3; -- Returns the 3 most similar books (smallest distance first)
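What that query does can be mimicked in plain Python: measure the distance from one "query" embedding to every stored embedding, sort ascending, and keep the top 3. A toy version with made-up 3-dimensional vectors standing in for rows of the items table:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    return 1 - dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy catalogue: (title, embedding) pairs
books = [
    ("Learning Web Design",    [0.21, 0.47, 0.10]),
    ("Web Development Basics", [0.22, 0.46, 0.12]),
    ("Chocolate Cake Recipes", [0.90, 0.10, 0.65]),
    ("HTML & CSS",             [0.20, 0.44, 0.13]),
]

# Like the CTE above: use the '%Web%' match's embedding as the query vector
query = books[0][1]
top3 = sorted(books, key=lambda b: cosine_distance(query, b[1]))[:3]
print([title for title, _ in top3])
```

The cake book has the largest distance from the query, so it drops out of the top 3, just as in the SQL result.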
Working with JSON and Vectors
JSON Operators
Use ->> to extract a text value from a JSON field:
-- Get title from the 'item_data' JSON column
SELECT item_data->>'title' FROM items;
Vector Search Operators
pgvector supports multiple distance functions. Here are the two most commonly used operators.
L2 Distance: <->
Measures straight-line (Euclidean) distance between vectors:
-- Find similar books using L2 distance
SELECT
  name,
  embedding <-> (
    SELECT embedding FROM items WHERE name LIKE '%Web%' LIMIT 1
  ) AS distance
FROM items
ORDER BY distance
LIMIT 3;
Cosine Distance: <=>
Measures angle-based (cosine) distance between vectors:
-- Find similar books using cosine distance
SELECT
  name,
  embedding <=> (
    SELECT embedding FROM items WHERE name LIKE '%Web%' LIMIT 1
  ) AS distance
FROM items
ORDER BY distance
LIMIT 3;
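OpenAI embeddings are normalized to length 1, and for unit vectors the two operators agree on ranking: the squared L2 distance equals 2 * (1 - cos θ), a monotonic function of cosine distance. A quick check in Python:

```python
import math
import random

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def l2(a, b):
    """Euclidean (L2) distance, like pgvector's <-> operator."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_dist(a, b):
    """Cosine distance for unit vectors (dot product = cos θ), like <=>."""
    return 1 - sum(x * y for x, y in zip(a, b))

random.seed(0)
vecs = [normalize([random.uniform(-1, 1) for _ in range(8)]) for _ in range(5)]
q = vecs[0]

by_l2 = sorted(range(5), key=lambda i: l2(q, vecs[i]))
by_cos = sorted(range(5), key=lambda i: cosine_dist(q, vecs[i]))
print(by_l2 == by_cos)  # identical ranking for unit vectors
```

So with normalized embeddings the choice mostly affects which index operator class you should create, not which neighbors you get back.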
💡 Tips
- OpenAI recommends <=> (cosine distance) for their embeddings.
- Smaller distance means higher similarity.
Performance Tips π
Query Optimization
Cache query vectors instead of subquerying:
-- ❌ Inefficient: Subquery runs for every row
SELECT
  name,
  embedding <=> (
    SELECT embedding FROM items WHERE name LIKE '%Web%' LIMIT 1
  ) AS distance
FROM items
ORDER BY distance
LIMIT 3;
-- ✅ Better: Query vector calculated once
WITH query_embedding AS (
  SELECT embedding FROM items WHERE name LIKE '%Web%' LIMIT 1
)
SELECT
  name,
  embedding <=> (SELECT embedding FROM query_embedding) AS distance
FROM items
ORDER BY distance
LIMIT 3;
Indexing
Choose an index based on your needs:
-- Option 1: IVFFlat (Less memory, good for development)
CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
-- Option 2: HNSW (Faster searches, more memory)
CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);
Resources
Hope this helps you build something cool. Feel free to drop a comment below!