Introduction
Developers often turn to Lua scripting to add rate limiting, custom data types, or intricate transactional logic to Redis. But while Redis' lightning-fast data operations and Lua's flexibility make for a formidable combination, there are still limitations.
Among these challenges are long-running scripts blocking the data store and constraints on non-atomic script execution, both of which can impact the performance and scalability of applications relying on Redis. Additionally, Redis' single-threaded architecture and its approach to horizontal scaling introduce difficulties, particularly when executing Lua scripts across a distributed data environment.
That's where Dragonfly comes in. It's a drop-in replacement for Redis that preserves the strengths of Lua scripting while addressing some of these key challenges, thanks to its vertically scalable, multi-threaded, asynchronous architecture. In this blog post, we'll explore where Redis falls short when it comes to Lua scripting and how Dragonfly offers a superior experience.
1. Dragonfly's multi-threaded, asynchronous architecture provides better performance for long-running and computationally heavy scripts
Redis operates in a single-threaded manner, meaning it processes one operation at a time. This becomes a bottleneck when executing long-running or computationally heavy Lua scripts, as they can block other operations until completion.
Dragonfly, on the other hand, is constructed around a multi-threaded, asynchronous architecture. This approach provides several advantages. First, it promotes higher throughput due to parallel processing. Second, it allows multiple script execution units to run concurrently, which is particularly beneficial for computationally intensive scripts, such as those computing hashes or aggregating values.
Furthermore, Dragonfly supports asynchronicity: regular commands can be interleaved with commands from an already running script, as long as the script's atomicity is preserved. This keeps Dragonfly available for incoming requests even while a script is executing.
To illustrate this, let's consider a simple Lua script that pushes values into a list:
```lua
local n = tonumber(ARGV[1])
for i = 1, n do
  redis.call('LPUSH', KEYS[1], i)
end
```
We can run this script on a Dragonfly instance with only a single core. If the argument is large enough, the script will take quite some time to finish. Yet the instance remains fully responsive and continues to handle incoming commands.
2. Dragonfly offers special optimizations for write-heavy scripts
Lua scripts are often used to insert or update many values at once. In those cases they usually issue many simple sequential write commands. If their output is discarded (i.e., not stored in variables or used in conditions inside the script), there is great potential for parallelization.
Dragonfly, in its recent versions, introduces optimizations for this exact scenario. It is capable of executing all the write commands in parallel as long as they do not influence the overall execution flow of the script. This feature enhances performance by utilizing multi-threading. It can be enabled by passing the following flag: `lua_auto_async=true`.
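As a minimal sketch, the flag can be passed at startup (assuming a locally installed `dragonfly` binary):

```shell
# Enable automatic parallel execution of independent write commands
# issued from Lua scripts (their results must not affect control flow):
dragonfly --lua_auto_async=true
```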
Take, for example, a script that pushes values from a JSON array to multiple lists and trims these lists if they become too long. We test this script in both Redis and Dragonfly using a single connection on a single thread to ensure that there is no interference from multiple scripts running simultaneously.
```lua
local messages = cjson.decode(ARGV[1])
for _, key in ipairs(KEYS) do
  for _, message in ipairs(messages) do
    redis.pcall("LPUSH", key, message)
    redis.pcall("LTRIM", key, 0, 50)
  end
end
return "OK"
```
Our benchmark results show that Dragonfly is around 35% faster than Redis for executing sequential commands from a single connection, highlighting its effective use of parallelization.
```shell
memtier_benchmark --command="EVALSHA {sha} 5 l1 l2 l3 l4 l5 '[1,2,3,4,5]'" --hide-histogram --test-time=5 --distinct-client-seed -t 1 -c 1 --pipeline 5
```
| Store     | Throughput |
| --------- | ---------- |
| Dragonfly | 16k QPS    |
| Redis     | 12k QPS    |
Moreover, when we run the benchmark allowing multiple scripts to execute in parallel from 4 threads, Dragonfly demonstrates almost triple the performance of Redis. It effectively handles more than 2 million operations per second on an instance running on only 4 cores (over 40k invocations per second, each performing 50 operations).
```shell
memtier_benchmark --command="EVALSHA {sha} 5 __key__ __key__ __key__ __key__ __key__ '[1,2,3,4,5]'" --hide-histogram --test-time=5 --distinct-client-seed -t 4 -c 50 --pipeline 5
```
| Store     | Throughput |
| --------- | ---------- |
| Dragonfly | 41.5k QPS  |
| Redis     | 14.5k QPS  |
This implies that Dragonfly scales more efficiently compared to Redis. Its architecture leverages multi-threading, making it well-suited for high-throughput use cases, while Redis' single-threaded nature might limit its scalability in similar scenarios.
3. Dragonfly allows you to scale Lua scripts vertically
Redis supports only horizontal scaling, via Redis Cluster. The absence of vertical scaling brings its own set of challenges when running Lua scripts. When you're using a cluster, data is distributed across multiple nodes based on a hash slot mechanism, which means different keys may reside on different nodes.
In Redis, Lua scripts are executed atomically. This means that a script is a single, indivisible operation which runs from start to finish without any other operation interrupting it. When you're using a cluster, this atomicity is preserved, but with a key limitation: a single Lua script cannot operate on keys that are stored on multiple nodes.
This means that when writing Lua scripts for a Redis Cluster, you have to ensure that all keys used in a single script are located on the same node. In practice, this usually means using "hash tags" to force certain keys onto the same node. Yet in many cases, when the data is only loosely interconnected and the application uses varied access patterns, correctly assigning hash tags to all keys may simply be impossible.
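For example, keys that share the same hash tag (the substring between `{` and `}`) are guaranteed to map to the same hash slot, so a script like the following sketch can run on a cluster (the key names here are hypothetical):

```lua
-- Both keys passed in share the hash tag {user:42}, so they map to
-- the same hash slot and can be accessed together in one cluster script.
redis.call('INCR', KEYS[1])            -- e.g. {user:42}:visits
redis.call('LPUSH', KEYS[2], ARGV[1])  -- e.g. {user:42}:history
return redis.call('LLEN', KEYS[2])
```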
This also means that script-heavy systems tend to scale more effectively vertically than horizontally. Vertical scaling gives every script full access to all keys.
Let's examine a specific example. Suppose we run a game development company, and we store leaderboards for different game rooms in our datastore. Our anti-cheat team develops a new heuristic that needs to quickly identify users who made it into the top 10 in at least half of a particular set of game rooms. These suspicious users need to be stored in a separate set, frequently accessed by our anti-cheat software.
If we use a cluster, we have to run our script on each node separately. The results (the suspicious users found on each node) are then sent back to the application, which aggregates them into a final set of suspicious users.
With Dragonfly, we can perform the entire computation in a single script invocation, removing the need for costly round trips between our application and multiple nodes.
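A sketch of such a script might look like the following, assuming the destination set and the per-room leaderboards (sorted sets) are passed in KEYS; the key layout is hypothetical:

```lua
-- KEYS[1]: destination set for flagged users
-- KEYS[2..n]: leaderboard sorted sets, one per game room
local rooms = #KEYS - 1
local counts = {}
for i = 2, #KEYS do
  -- Top 10 players of this room's leaderboard, highest score first
  local top = redis.call('ZREVRANGE', KEYS[i], 0, 9)
  for _, user in ipairs(top) do
    counts[user] = (counts[user] or 0) + 1
  end
end
-- Flag users who made the top 10 in at least half of the rooms
for user, count in pairs(counts) do
  if count * 2 >= rooms then
    redis.call('SADD', KEYS[1], user)
  end
end
```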
4. Dragonfly offers a way to run Lua scripts non-atomically
Lua scripts run atomically by default, meaning the entire script executes as one uninterrupted sequence without any other command interleaving. This is crucial for scripts that require atomic execution for consistency. However, it can pose challenges when you want to use scripts merely to reduce network round trips without actually needing atomicity for the entire script.
For instance, suppose you need to compute a live metric over all recently active users whose profiles are cached in the datastore. If atomicity is enforced while the script calculates this metric, it blocks all access to user profiles, potentially causing significant slowdowns in your application. Yet the metric doesn't need to be exact, meaning it can run in a non-isolated state while user profiles are constantly updated. A potential solution would be to calculate the metric in the application by querying the datastore for user profiles with pipelined requests. However, if the metric is quick to compute and the user profiles are large, this approach would produce a lot of unneeded, excessive traffic.
Dragonfly presents a solution to this problem by offering the ability to run scripts non-atomically. This means that other commands can interleave with a script's commands even if they access the same keys. The script's execution becomes analogous to a series of pipelined commands, except that the commands are produced by the script itself rather than by a client.
Here's how it works: Dragonfly provides specific script flags such as:
- `disable-atomicity`: allows non-atomic execution.
- `allow-undeclared-keys`: permits the script to access undeclared keys.
(Note that accessing undeclared keys is generally discouraged by Redis and is disabled by default in Dragonfly, due to its unpredictable behavior in a multi-threaded asynchronous environment.)
Here's a simple Lua script example which computes metrics for an anti-cheat team:

```lua
#!lua flags=disable-atomicity,allow-undeclared-keys
local cursor = "0"
repeat
  local result = redis.call("SCAN", cursor, "MATCH", "user:*")
  cursor = result[1]
  for _, user in ipairs(result[2]) do
    process_stats(user)  -- placeholder for the actual metric computation
  end
until cursor == "0"
```
Even if this script takes a few seconds to execute, user profiles remain fully accessible, and the cache continues to operate normally, with only a slight reduction in throughput.
5. Dragonfly can be configured so that certain keys are not evictable
When Redis is used as a cache, it implements an eviction policy: data that isn’t accessed frequently is automatically removed to free up memory for new or more frequently accessed data. This eviction strategy is efficient; however, it creates a problem when you want to store auxiliary data that may not be accessed often but is still important and needs to be readily accessible.
For instance, let's say you have a Lua script that calculates debug metrics, running on Redis once a day. The output metrics are infrequently accessed and therefore subject to eviction. On the other hand, you don't want to store them in a separate database or datastore; ideally, you can keep the debug metrics in the cache itself.
In a scenario like this, Dragonfly offers significantly more control than Redis. It provides a `STICK key [key ...]` command that makes certain keys non-evictable.
To go back to our example, we can write a simple Lua script that calculates the debug metrics, and to make the result non-evictable, we use the `STICK` option of the `SET` command.
```lua
local result = calc_metric()  -- placeholder for the actual metric calculation
redis.call('SET', 'last-metric', result, 'STICK')
```
Conclusion
As we've explored, Dragonfly provides you with the tools to mitigate common issues encountered with Redis and Lua scripting, whether it's handling long-running scripts, executing large numbers of write commands efficiently, providing better scalability options, or offering the ability to run non-atomic scripts. You can try Dragonfly today by viewing our docs or checking out the Dragonfly project on GitHub.