Here's a comprehensive example demonstrating Redis caching strategies and cache invalidation policies with Golang, incorporating an in-memory SQLite database and a Gin-based RESTful API. This will give you insight into common caching patterns and approaches often used in real-world applications.
1. Caching
Common Caching Strategies:
- Cache-aside (Lazy Loading): Data is loaded into the cache only when requested, with the database as the primary source.
- Write-through: Data is written to the cache and database simultaneously, keeping them in sync.
- Write-behind: Data is first written to the cache, then asynchronously to the database, improving write performance.
- Read-through: Applications fetch data directly from the cache, which loads missing data from the database.
Cache Invalidation Policies:
- Time-based (TTL): Cache entries expire after a set time-to-live (TTL).
- Manual Invalidation: Explicitly clearing cache entries when data changes.
- Event-based: Cache updates based on specific triggers/events, such as database updates.
- Least Recently Used (LRU): Evicts the least recently accessed items when the cache reaches capacity.
- Least Frequently Used (LFU): Removes the least frequently accessed items to make room for new entries.
Each policy ensures cached data remains fresh, balancing performance and consistency.
2. Setup
Install the required packages:
go get github.com/go-redis/redis/v8
go get github.com/gin-gonic/gin
go get gorm.io/gorm
go get gorm.io/driver/sqlite
3. Define Models and Initialize Database with GORM
package main

import (
    "context"
    "fmt"
    "log"
    "strconv" // used by the cache helpers and handlers below
    "strings" // used by the write-behind worker below
    "time"

    "github.com/gin-gonic/gin"
    "github.com/go-redis/redis/v8"
    "gorm.io/driver/sqlite"
    "gorm.io/gorm"
)

var (
    ctx    = context.Background()
    client = redis.NewClient(&redis.Options{
        Addr: "localhost:6379",
    })
)

type Product struct {
    ID    uint   `gorm:"primaryKey"`
    Name  string `gorm:"size:100"`
    Price int
}

func initDB() *gorm.DB {
    db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
    if err != nil {
        log.Fatal(err)
    }
    db.AutoMigrate(&Product{})
    db.Create(&Product{Name: "Product1", Price: 100})
    return db
}
4. Using Redis Hash for Product Caching
A Redis hash can store field-value pairs associated with a product, making it easy to cache product details.
func getProductByIDHash(db *gorm.DB, id uint) (Product, error) {
    cacheKey := fmt.Sprintf("product:%d", id)
    var product Product
    // Check if product data is in the Redis hash
    res, err := client.HGetAll(ctx, cacheKey).Result()
    if err == nil && len(res) > 0 {
        product.ID = id
        product.Name = res["name"]
        product.Price, _ = strconv.Atoi(res["price"])
        return product, nil
    }
    // Load from the database if the cache is empty
    if err := db.First(&product, id).Error; err != nil {
        return product, err
    }
    // Cache product data in a Redis hash
    client.HMSet(ctx, cacheKey, map[string]interface{}{
        "name":  product.Name,
        "price": product.Price,
    })
    // Time-based (TTL) invalidation
    client.Expire(ctx, cacheKey, time.Minute)
    return product, nil
}
5. Redis List for Recent Products
Store recently accessed product IDs in a Redis list.
func addToRecentProductsList(id uint) {
    client.LPush(ctx, "recent_products", id)
    client.LTrim(ctx, "recent_products", 0, 9) // Keep only the 10 most recent products
}
6. Write-through Caching
Write-through caching writes updates simultaneously to the cache and database, ensuring both stay in sync.
func createOrUpdateProductWriteThrough(db *gorm.DB, id uint, name string, price int) error {
    // Update the database
    product := Product{ID: id, Name: name, Price: price}
    if err := db.Save(&product).Error; err != nil {
        return err
    }
    // Update the cache with the latest product data
    cacheKey := fmt.Sprintf("product:%d", id)
    client.HMSet(ctx, cacheKey, map[string]interface{}{
        "name":  name,
        "price": price,
    })
    // Time-based (TTL) invalidation
    client.Expire(ctx, cacheKey, time.Minute) // Optional expiration
    return nil
}
This function first writes the updated product to the database and then synchronously updates the cache to keep it in sync. Any read request for this product after an update will get fresh data.
7. Manual Invalidation
Manual invalidation removes outdated entries from the cache explicitly, such as when data changes. Here's a function to invalidate a product cache entry manually:
func invalidateProductCache(id uint) error {
    cacheKey := fmt.Sprintf("product:%d", id)
    _, err := client.Del(ctx, cacheKey).Result() // Remove the cache entry
    return err
}
To use this, simply call invalidateProductCache(id) whenever data is modified directly in the database without cache involvement.
8. Event-based Invalidation
Event-based invalidation can be used to clear cache entries in response to specific application events, such as a significant data update or a deletion.
Let's assume a scenario where we listen for an event when a product is deleted and clear the related cache.
func deleteProductEventBased(db *gorm.DB, id uint) error {
    // Delete the product from the database
    if err := db.Delete(&Product{}, id).Error; err != nil {
        return err
    }
    // Emit an event to clear the cache
    return invalidateProductCache(id)
}
To illustrate this fully, imagine integrating this with an event-driven architecture where deleteProductEventBased is triggered by an external event handler. Here, cache invalidation happens as a result of the event, ensuring the cache reflects the latest state after the deletion.
9. Implementing Cache-aside for Lists
func getRecentProducts(db *gorm.DB) ([]Product, error) {
    productIDs, err := client.LRange(ctx, "recent_products", 0, -1).Result()
    if err != nil {
        return nil, err
    }
    var products []Product
    for _, idStr := range productIDs {
        id, _ := strconv.Atoi(idStr)
        product, err := getProductByIDHash(db, uint(id))
        if err == nil {
            products = append(products, product)
        }
    }
    return products, nil
}
10. Redis Transaction for Atomic Updates
Use Redis transactions to perform atomic operations.
func updateProductWithTransaction(db *gorm.DB, id uint, name string, price int) error {
    cacheKey := fmt.Sprintf("product:%d", id)
    // Start a Redis transaction
    _, err := client.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
        // Update the cache inside the transaction
        pipe.HMSet(ctx, cacheKey, map[string]interface{}{
            "name":  name,
            "price": price,
        })
        pipe.Expire(ctx, cacheKey, time.Minute)
        return nil
    })
    // Update the database after the Redis transaction
    if err == nil {
        db.Model(&Product{}).Where("id = ?", id).Updates(Product{Name: name, Price: price})
    }
    return err
}
11. Adding API Endpoints
func main() {
    db := initDB()
    router := gin.Default()
    router.POST("/product", func(c *gin.Context) {
        var req struct {
            ID    uint   `json:"id"`
            Name  string `json:"name"`
            Price int    `json:"price"`
        }
        if err := c.BindJSON(&req); err != nil {
            c.JSON(400, gin.H{"error": "Invalid request"})
            return
        }
        err := createOrUpdateProductWriteThrough(db, req.ID, req.Name, req.Price)
        if err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        c.JSON(200, gin.H{"status": "created/updated"})
    })
    router.DELETE("/product/:id", func(c *gin.Context) {
        id, _ := strconv.Atoi(c.Param("id"))
        err := deleteProductEventBased(db, uint(id))
        if err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        c.JSON(200, gin.H{"status": "deleted"})
    })
    router.POST("/product/invalidate/:id", func(c *gin.Context) {
        id, _ := strconv.Atoi(c.Param("id"))
        err := invalidateProductCache(uint(id))
        if err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        c.JSON(200, gin.H{"status": "cache invalidated"})
    })
    router.GET("/product/:id", func(c *gin.Context) {
        id, _ := strconv.Atoi(c.Param("id"))
        product, err := getProductByIDHash(db, uint(id))
        if err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        addToRecentProductsList(uint(id))
        c.JSON(200, product)
    })
    router.PUT("/product/:id", func(c *gin.Context) {
        id, _ := strconv.Atoi(c.Param("id"))
        var req struct {
            Name  string `json:"name"`
            Price int    `json:"price"`
        }
        if err := c.BindJSON(&req); err != nil {
            c.JSON(400, gin.H{"error": "Invalid request"})
            return
        }
        if err := updateProductWithTransaction(db, uint(id), req.Name, req.Price); err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        c.JSON(200, gin.H{"status": "updated"})
    })
    router.GET("/recent_products", func(c *gin.Context) {
        products, err := getRecentProducts(db)
        if err != nil {
            c.JSON(500, gin.H{"error": err.Error()})
            return
        }
        c.JSON(200, products)
    })
    router.Run(":8080")
}
12. Write-behind caching strategy (Optional)
In a Write-behind caching strategy, updates are written to the cache first and asynchronously to the database later. This improves write performance by not waiting on the database immediately, but it requires careful management to ensure data consistency.
To implement this, we'll store data in the cache and use a background worker to periodically flush changes to the database. Here's how this might look in our Golang example with Redis and GORM.
1. Modify the Create/Update Function to Write Only to Cache
First, the function to create or update products will write data only to the cache.
func createOrUpdateProductWriteBehind(id uint, name string, price int) error {
    // Write data to the Redis cache
    cacheKey := fmt.Sprintf("product:%d", id)
    client.HMSet(ctx, cacheKey, map[string]interface{}{
        "name":  name,
        "price": price,
    })
    client.Expire(ctx, cacheKey, time.Minute) // Optional expiration
    // Track this operation in a queue for background processing
    client.RPush(ctx, "write-behind-queue", cacheKey)
    return nil
}
In this example, we push the cacheKey to a Redis list called write-behind-queue, which the background worker will process.
2. Background Worker for Database Synchronization
A background worker will listen for cache entries in write-behind-queue and write them to the database.
func writeBehindWorker(db *gorm.DB) {
    for {
        // Pop from the queue
        cacheKey, err := client.LPop(ctx, "write-behind-queue").Result()
        if err == redis.Nil {
            time.Sleep(time.Second) // Sleep if the queue is empty
            continue
        } else if err != nil {
            log.Println("Error fetching from queue:", err)
            continue
        }
        // Fetch data from the cache
        values, err := client.HGetAll(ctx, cacheKey).Result()
        if err != nil || len(values) == 0 {
            continue
        }
        // Write to the database
        idStr := strings.TrimPrefix(cacheKey, "product:")
        id, _ := strconv.Atoi(idStr)
        price, _ := strconv.Atoi(values["price"])
        product := Product{ID: uint(id), Name: values["name"], Price: price}
        if err := db.Save(&product).Error; err != nil {
            log.Println("Error saving to DB:", err)
            continue
        }
        // Optionally, delete the cache entry if the cache is temporary
        client.Del(ctx, cacheKey)
    }
}
3. Start the Worker
Run the worker as a goroutine when the application starts:
func main() {
    db := initDB()
    go writeBehindWorker(db) // Start the background worker
    router := gin.Default()
    // Your routes here...
    router.Run(":8080")
}
Explanation of Write-behind Behavior
- Create or Update: When a product is created or updated, the data is immediately stored in Redis. The background worker will write this data to the database asynchronously.
- Queue Processing: The worker monitors write-behind-queue, pops each cache entry, retrieves product details from Redis, and saves them to the database.
This approach maximizes write performance since updates are first saved in memory and only later committed to the database, useful for write-intensive applications where eventual consistency is acceptable.
Some thoughts
In practice, Write-through caching is more commonly used than Write-behind. This is because write-through ensures immediate consistency between the cache and database by updating both simultaneously, which simplifies data integrity and reduces the risk of stale data. Write-behind, while improving performance in write-heavy applications, introduces complexity with asynchronous writes and may lead to temporary inconsistencies if the database is not updated immediately.
Write-through is generally favored in scenarios where real-time data accuracy is critical, while write-behind is used selectively when performance is prioritized over strict consistency.
13. Additional Strategies
- LRU and LFU Eviction Policies: These can be configured in Redis using the maxmemory-policy setting to handle least-recently and least-frequently used evictions.
- Expiration Events: Listen to Redis expiration events to handle expired keys programmatically by subscribing to __keyevent@0__:expired.
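For reference, eviction policies are configured server-side rather than in application code. A sketch of the relevant redis.conf directives (the memory limit here is illustrative, not a recommendation):

```
# redis.conf: cap memory and pick an eviction policy
maxmemory 256mb
maxmemory-policy allkeys-lru    # evict the least-recently-used keys
# maxmemory-policy allkeys-lfu  # or the least-frequently-used keys

# The same settings can be applied at runtime:
# redis-cli CONFIG SET maxmemory 256mb
# redis-cli CONFIG SET maxmemory-policy allkeys-lfu
```

With no maxmemory set (or maxmemory-policy noeviction, the default), Redis never evicts and instead rejects writes once memory is full.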
This example demonstrates how Redis can be combined with an in-memory database like SQLite to achieve various caching strategies and invalidation policies. In production scenarios, more advanced patterns can also incorporate Pub/Sub and Lua scripting for complex data requirements.
14. Other Potential Integrations
1. Sets
func setExample(client *redis.Client) {
    client.SAdd(ctx, "languages", "Go", "Python")
    languages, _ := client.SMembers(ctx, "languages").Result()
    fmt.Println("Set Members:", languages)
}
2. Sorted Sets
func sortedSetExample(client *redis.Client) {
    client.ZAdd(ctx, "scores", &redis.Z{Score: 90, Member: "Alice"})
    rank, _ := client.ZRank(ctx, "scores", "Alice").Result()
    fmt.Println("Sorted Set Rank:", rank)
}
3. Transactions
func transactionExample(client *redis.Client) {
    client.Watch(ctx, func(tx *redis.Tx) error {
        _, err := tx.Pipelined(ctx, func(pipe redis.Pipeliner) error {
            pipe.Set(ctx, "counter", 1, 0)
            pipe.Incr(ctx, "counter")
            return nil
        })
        return err
    }, "counter")
}
4. Pipelining
func pipelineExample(client *redis.Client) {
    pipe := client.Pipeline()
    pipe.Set(ctx, "foo", "bar", 0)
    pipe.Get(ctx, "foo")
    _, _ = pipe.Exec(ctx)
}
5. Pub/Sub
func pubsubExample(client *redis.Client) {
    sub := client.Subscribe(ctx, "mychannel")
    defer sub.Close()
    msg, _ := sub.ReceiveMessage(ctx)
    fmt.Println("Pub/Sub Message:", msg.Payload)
}
6. Lua Scripting
func luaScriptExample(client *redis.Client) {
    script := redis.NewScript("return redis.call('SET', KEYS[1], ARGV[1])")
    script.Run(ctx, client, []string{"name"}, "Alice")
}
7. Expiration and Persistence
func expirationExample(client *redis.Client) {
    client.Set(ctx, "temp", "value", time.Second*10)
    ttl, _ := client.TTL(ctx, "temp").Result()
    fmt.Println("TTL:", ttl)
}
8. HyperLogLog
func hyperLogLogExample(client *redis.Client) {
    client.PFAdd(ctx, "unique_visitors", "user1", "user2")
    count, _ := client.PFCount(ctx, "unique_visitors").Result()
    fmt.Println("HyperLogLog Count:", count)
}
9. Streams
func streamExample(client *redis.Client) {
    client.XAdd(ctx, &redis.XAddArgs{
        Stream: "mystream",
        Values: map[string]interface{}{"user": "Alice", "action": "login"},
    })
    msgs, _ := client.XRead(ctx, &redis.XReadArgs{
        Streams: []string{"mystream", "0"},
    }).Result()
    fmt.Println("Stream Messages:", msgs)
}
10. Scan Operations
func scanExample(client *redis.Client) {
    iter := client.Scan(ctx, 0, "*", 0).Iterator()
    for iter.Next(ctx) {
        fmt.Println("Key:", iter.Val())
    }
}
11. Keyspace Notifications and Expiration Events
Configure Redis to emit keyspace notifications:
CONFIG SET notify-keyspace-events Ex
Listen for expiration events:
func listenForExpiration(client *redis.Client) {
    sub := client.PSubscribe(ctx, "__keyevent@0__:expired")
    for msg := range sub.Channel() {
        fmt.Println("Expiration Event:", msg.Payload)
    }
}
12. RediSearch with Full-Text Search
Requires RediSearch module:
func rediSearchExample(client *redis.Client) {
    client.Do(ctx, "FT.CREATE", "index", "ON", "HASH", "SCHEMA", "title", "TEXT", "content", "TEXT")
    client.HSet(ctx, "doc1", "title", "Hello World", "content", "Golang Redis example")
    client.Do(ctx, "FT.SEARCH", "index", "Golang")
}
13. Bloom Filters
Requires the RedisBloom module:
func bloomFilterExample(client *redis.Client) {
    client.Do(ctx, "BF.ADD", "bloom", "golang")
    exists, _ := client.Do(ctx, "BF.EXISTS", "bloom", "golang").Result()
    fmt.Println("Bloom Filter Exists:", exists)
}
These examples illustrate the basics of using Redis in Go and cover a broad range of common Redis features. Each function is designed to demonstrate specific functionality within Redis using idiomatic Go code.
If you found this helpful, let me know by leaving a like or a comment, and if you think this post could help someone, feel free to share it! Thank you very much!