Ever struggled with the growing size of your OpenSearch cluster indices and wondered how you could efficiently manage them?
You can definitely leverage Index State Management (ISM) policies, but knowing the intricacies of sharding goes a long way in helping you scale your cluster efficiently, keep its performance optimal and, most importantly, keep costs ($$) in check.
One might wonder why you need to learn sharding strategies in a managed service. Isn't the service supposed to handle that for you? Well, yes and no. One analogy I can offer is cars. There are cars with automatic gears and cars with manual gears. A lot of people these days prefer automatic cars. They run great on highways, but what if you need to navigate through crowded streets full of sharp turns? The automatic car would work, but it would be far from efficient, performant and scalable. The same goes for managed services.
So how do you shard your indices efficiently? Let's begin with the basics.
First, determine whether your indices can be organised as time-based indices. If so, opting for day-wise, weekly, monthly or even yearly indices may make sense. Note that you can always mix and match between these granularities. Say your current data flows into day-wise indices; you could then have an ISM policy re-index data older than 90 days into monthly indices. Or you could keep monthly indices for the current year and re-index past years' monthly indices into yearly ones.
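ISM policies are plain JSON documents. One caveat: as far as I'm aware, ISM has no built-in action that re-indexes daily data into a monthly index (that step is typically scripted separately; see the reindex sketch further below), but lifecycle steps like force-merge and delete are native. Here's a minimal sketch of a policy that force-merges week-old indices and deletes them after 90 days, pushed with the opensearch-py Python client; the endpoint, policy id, `logs-*` pattern and timings are all illustrative assumptions:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # assumed endpoint

# Hypothetical lifecycle: force-merge week-old indices, delete them after 90 days.
policy = {
    "policy": {
        "description": "Warm up week-old indices, drop them after 90 days",
        "default_state": "hot",
        "states": [
            {"name": "hot", "actions": [], "transitions": [
                {"state_name": "warm", "conditions": {"min_index_age": "7d"}}]},
            {"name": "warm", "actions": [{"force_merge": {"max_num_segments": 1}}],
             "transitions": [
                {"state_name": "delete", "conditions": {"min_index_age": "90d"}}]},
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        # Auto-attach the policy to newly created indices matching this pattern.
        "ism_template": [{"index_patterns": ["logs-*"], "priority": 100}],
    }
}

# The ISM REST endpoint lives under /_plugins/_ism/policies/<policy_id>.
client.transport.perform_request(
    "PUT", "/_plugins/_ism/policies/logs_lifecycle", body=policy
)
```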
As an alternative to ISM policies, you can also leverage the open-source Curator tool and manage all this with YAML action files.
So the question arises: why would you begin with day-wise indices if you want to merge (aka reindex) them later into monthly? Why begin with monthly if you want to reindex them into yearly?
The answer lies in "performance" and "efficiency", and in balancing indexing (write) performance against querying (read) performance.
Your current indices are receiving live data, so you want to maximise indexing (write) performance. One way to do that is to have more shards: more shards mean more parallel writes, which means faster ingestion.
But here's the catch: the more shards you have, the longer it takes to search across all of them, so query performance suffers. There's a trade-off between indexing and query performance, and the trick lies in striking a balance.
So how do we strike that balance? One way is to keep the current day's index with a higher shard count, while past indices, which have no data flowing in, can be reindexed to reduce the number of shards and then force-merged down to a single segment. Reindexing changes the index name, but aliasing simplifies this: simply flip your alias to point to the new index name. This strategy boosts search/query performance while keeping indexing performance great as well. Win-win!
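Here's what that could look like end to end, as a minimal sketch with the opensearch-py client. The index names, the `logs-read` alias, the endpoint and the shard counts are all made up for illustration:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # assumed endpoint

# 1. Create the consolidated target index with fewer primary shards.
client.indices.create(
    index="logs-2024-03",
    body={"settings": {"number_of_shards": 1, "number_of_replicas": 1}},
)

# 2. Copy the old daily index into it.
client.reindex(
    body={"source": {"index": "logs-2024-03-15"}, "dest": {"index": "logs-2024-03"}},
    wait_for_completion=True,
)

# 3. Force-merge the now read-only index down to a single segment.
client.indices.forcemerge(index="logs-2024-03", max_num_segments=1)

# 4. Atomically flip the alias so readers never notice the rename.
client.indices.update_aliases(body={
    "actions": [
        {"remove": {"index": "logs-2024-03-15", "alias": "logs-read"}},
        {"add": {"index": "logs-2024-03", "alias": "logs-read"}},
    ]
})
```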
So how many shards should you begin with?
Start by working out how much data flows in per day. Say 30 GB. Now, how many nodes do you have? Say 3. In that case, 3 primary shards looks optimal: each node holds 1 shard, and you can configure 1 replica. However, the resulting primary shard size of 10 GB is way too small; ideally, shards should be around 30-50 GB.
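For example, pinning that daily index to 3 primary shards and 1 replica at creation time (a sketch; the index name and endpoint are assumptions):

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # assumed endpoint

# Three primaries spread one shard per node across a 3-node cluster.
client.indices.create(
    index="logs-2024-03-31",
    body={"settings": {"number_of_shards": 3, "number_of_replicas": 1}},
)
```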
Say the daily data flow is 16 GB. In that case, just 1 primary shard would do. Remember that lots of small shards are very detrimental to performance: each shard is a Lucene index underneath, and the cluster pays a context-switching cost for every extra one.
Now let's say the indices are monthly and the average daily data flow is 30 GB. Per month that averages 30 GB * 30 days = 900 GB. Assuming 3 nodes, configuring this index with 12 shards would mean a primary shard size of 900/12 = 75 GB, which exceeds the ideal 30-50 GB range by a huge margin. So let's look at, say, 21 shards: each primary shard would be 900/21 ≈ 43 GB. That's acceptable. With 24 shards, the 900/24 = 37.5 GB primary shard size is also a good option. Remember that overly large shards take a long time to move between nodes and slow down cluster recovery.
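To make that arithmetic repeatable, here's a small back-of-the-envelope helper. This is my own rough heuristic, not an official formula; the 40 GB default simply sits in the middle of the 30-50 GB sweet spot:

```python
import math

def suggest_primary_shards(index_size_gb: float, node_count: int,
                           target_shard_gb: float = 40.0) -> int:
    """Rough heuristic: aim for ~30-50 GB shards, then round up to a
    multiple of the node count so shards spread evenly across nodes
    (only when the index is big enough to justify it)."""
    shards = max(1, math.ceil(index_size_gb / target_shard_gb))
    if shards >= node_count:
        shards = math.ceil(shards / node_count) * node_count
    return shards

print(suggest_primary_shards(16, 3))   # small daily index -> 1 shard
print(suggest_primary_shards(900, 3))  # 900 GB monthly index -> 24 shards (~37.5 GB each)
```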
This also helps you plan your cluster capacity and scale it accordingly. Archiving old data into cold storage (an S3/Azure Blob Storage/GCS bucket) is a good option. Another option is to snapshot old indices and then simply delete them. The snapshots live in GCS/S3/Azure Blob Storage, which is cheap, and can be easily restored on demand.
Now let's say you have an index for a customer master and the dataset is very small, say a master inventory under 10 MB. In that case, a single shard and a single replica would suffice. You could also opt for 2 replicas, but it all depends: if you take regular snapshots, you can manage fine with 1 replica. More replicas mean more storage and more $$.
Time-based indices have an advantage when it comes to snapshots. Say you have monthly indices, today is 31-March-2024, and you have snapshotted the data pertaining to March 2022 into snap-my_monthly_index_2022_03. You can then delete the index corresponding to March 2022 from your cluster, if no searches are being performed against it, and save storage costs. And if it's needed later, you can quickly restore the index from the snapshot.
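Here's a sketch of that snapshot-then-delete flow with opensearch-py. The repository name, bucket and endpoint are assumptions, and registering an S3-backed repository additionally requires the repository-s3 plugin (or your managed service's own repository-registration flow):

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])  # assumed endpoint

# 1. Register a snapshot repository backed by cheap object storage.
client.snapshot.create_repository(
    repository="archive",
    body={"type": "s3", "settings": {"bucket": "my-opensearch-archive"}},
)

# 2. Snapshot the old monthly index.
client.snapshot.create(
    repository="archive",
    snapshot="snap-my_monthly_index_2022_03",
    body={"indices": "my_monthly_index_2022_03"},
    wait_for_completion=True,
)

# 3. Drop the index from the cluster to reclaim hot storage.
client.indices.delete(index="my_monthly_index_2022_03")

# Later, if it's needed again:
client.snapshot.restore(
    repository="archive",
    snapshot="snap-my_monthly_index_2022_03",
    body={"indices": "my_monthly_index_2022_03"},
)
```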
Hopefully, this beginner's guide helps you on your sharding journey. Good luck.