Alexey Vidanov for AWS Community Builders

Originally published at tecracer.com

Performance Boost: 10 Expert Tips for Optimizing Your Amazon OpenSearch Service Cluster

By implementing these recommendations, you can maximize the potential of your Amazon OpenSearch Service domain, delivering an improved search experience while optimizing costs and maintaining security. Let's explore these expert tips to supercharge your OpenSearch cluster.

This guide updates our 10 expert tips for 2024, grouped into key areas: Hardware, Indexing, Monitoring, Sharding, and Query Optimization. We’ll also discuss why keeping your OpenSearch version up to date is crucial for unlocking performance improvements.

Let's get started.

Hardware


  1. Leverage Graviton3 and OR1 Instances for Better Performance: The new Graviton3-based instances (C7g compute-optimized, M7g general-purpose, R7g memory-optimized, and R7gd memory-optimized with local SSD storage) offer significant performance boosts over their predecessors. They deliver up to 30% better compute performance and improved energy efficiency, making them ideal for a variety of OpenSearch workloads.
    • C7g is ideal for compute-heavy search operations and analytics workloads.
    • M7g suits a balanced mix of compute, memory, and storage needs, perfect for general-purpose OpenSearch domains.
    • R7g and R7gd are designed for memory-intensive use cases, with R7gd also offering local SSD storage for high-speed access to frequently used data.
    • Use OR1 instances for cost-effective long-running workloads: The OR1 family is designed for long-running, steady-state workloads such as log ingestion and monitoring. While not as powerful as the Graviton3-based instances, OR1 instances are optimized for cost savings and long-term log storage, balancing performance with reduced pricing for operational workloads.
  2. Start big: It's easier to measure the excess capacity in an overpowered cluster than the deficit in an underpowered one. Start with a larger cluster than you think you need, then test and scale down to an efficient cluster that still has the extra resources to ensure stable operations during periods of increased activity.

Indexing


  1. Use bulk ingest requests and employ multi-threading: Bulk requests are far more efficient than individual index requests. For example, a single thread might index 1,000 small documents per second one at a time, but 100,000 to 250,000 documents per second with bulk requests. The bulk API indexes multiple documents in a single request, reducing the overhead of individual indexing requests. The optimal bulk size varies by use case, but a good starting point is between 5-15 MB.

To enhance indexing throughput, employ multi-threading. This can be achieved using OpenSearch SDKs and libraries like opensearch-py. By creating 10-20 threads per node, you can significantly boost your indexing performance.
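The two ideas above can be sketched with opensearch-py. The host, credentials, and the "products" index name below are placeholders, and the client calls are commented out because they need a live domain:

```python
# Sketch: build bulk actions, then stream them with opensearch-py's
# multi-threaded parallel_bulk helper.
def make_bulk_actions(docs, index_name):
    """Convert plain documents into actions for the bulk helpers API."""
    return [{"_index": index_name, "_id": doc["id"], "_source": doc} for doc in docs]

docs = [{"id": i, "name": f"item-{i}", "price": i * 1.5} for i in range(1000)]
actions = make_bulk_actions(docs, "products")

# from opensearchpy import OpenSearch, helpers  # pip install opensearch-py
# client = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}], use_ssl=True)
#
# parallel_bulk spreads the actions across a thread pool; 10-20 threads per
# node is a reasonable starting point, as noted above:
# for ok, item in helpers.parallel_bulk(client, actions, thread_count=10, chunk_size=500):
#     if not ok:
#         print("failed:", item)
```

Start with a modest `chunk_size` and thread count, then measure and tune against your own documents and cluster.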

  2. Optimize updates and mappings. Minimize frequent updates: To maximize efficiency in OpenSearch, avoid repeatedly updating the same document. Frequent updates accumulate deleted documents and inflate segment sizes. Instead, collect the necessary updates in your application and transmit them to OpenSearch selectively. For example, when storing stock information in the index, represent it using levels (e.g., available, low, not available) instead of raw quantities, so that small quantity changes don't require a document update.
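The stock-level idea can be sketched as follows; the thresholds here are illustrative assumptions, not OpenSearch defaults:

```python
# Sketch: map raw stock counts to coarse levels before indexing, so small
# quantity changes don't force a document update in OpenSearch.
def stock_level(quantity):
    if quantity <= 0:
        return "not_available"
    if quantity < 10:  # assumed "low stock" threshold
        return "low"
    return "available"

def needs_update(old_quantity, new_quantity):
    """Only send an update to OpenSearch when the level actually changes."""
    return stock_level(old_quantity) != stock_level(new_quantity)
```

With this scheme, a sale that moves stock from 50 to 49 units never touches the index.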

Do not index everything: Disabling indexing for specific fields by setting "index": false in the field mapping helps optimize storage and improve indexing performance.

Tune your _source field:

  • The _source field in OpenSearch is a special field that holds the original JSON object that was indexed. This field is automatically stored for each indexed document and is returned by default in search results.

  • The primary advantage of the _source field is that it allows you to access the original document directly from the search results. This can be particularly useful for debugging purposes or for performing partial updates to documents.

  • However, storing the _source field does increase storage requirements. Each indexed document essentially gets stored twice: once in the inverted index for searching and once in the _source field.

  • If your use case doesn't require accessing the original document in search results, you can disable storing the _source field to save storage space. This can be done by setting "enabled": false in the _source field mapping.
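A minimal sketch of an index-creation body combining both optimizations; the index and field names are illustrative, and note that disabling _source also removes the ability to reindex or partially update documents:

```python
# Sketch: a mapping with _source disabled and a field kept out of the
# inverted index ("index": false), as described above.
index_body = {
    "mappings": {
        "_source": {"enabled": False},
        "properties": {
            "title": {"type": "text"},
            "internal_ref": {"type": "keyword", "index": False},  # stored metadata, not searchable
        },
    }
}

# With a live client:
# client.indices.create(index="articles", body=index_body)
```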

Monitoring


  1. Use CloudWatch: Monitoring tools like Amazon CloudWatch can track indexing performance and identify bottlenecks. Enabling slow logs can save a lot of debugging time. Set up the recommended CloudWatch alarms for Amazon OpenSearch Service.
  2. Profile queries: Profiling your OpenSearch queries provides valuable insight into how they are executed and where performance bottlenecks occur. The Profile API in OpenSearch is a powerful tool for this purpose.
    • To use the Profile API, simply append ?profile=true to your search queries. This will return a detailed breakdown of your query's execution, including information about how long each operation took and how the query was rewritten internally.
    • The output of the Profile API is divided into sections for each shard that participated in the response. Within each shard section, you'll find details about the query and aggregation trees.
    • The query tree shows how the query was executed across the inverted index, including the time taken by each term. The aggregation tree, on the other hand, shows how the aggregations were computed, including the time taken by each bucket.
    • By analyzing this information, you can identify which parts of your query are taking the most time and adjust them accordingly. This could involve changing the structure of your query, adjusting your index mappings, or modifying your OpenSearch cluster configuration.
    • Remember, profiling adds overhead to your queries, so it's best to use it sparingly and only in a testing or debugging environment.
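A sketch of how this looks in practice. The request body below sets "profile": true (equivalent to appending ?profile=true to the URL), and the small helper pulls the slowest query components out of the standard Profile API response shape; index and field names are illustrative:

```python
# Sketch: a profiled query plus a helper that ranks query components by time.
query = {
    "profile": True,  # same effect as ?profile=true on the search URL
    "query": {"match": {"title": "performance"}},
}
# response = client.search(index="articles", body=query)  # with a live client

def slowest_components(profile_response, top_n=3):
    """Return (description, time_in_nanos) pairs, slowest first."""
    timings = []
    for shard in profile_response.get("profile", {}).get("shards", []):
        for search in shard.get("searches", []):
            for node in search.get("query", []):
                timings.append((node.get("description"), node.get("time_in_nanos", 0)))
    return sorted(timings, key=lambda t: t[1], reverse=True)[:top_n]
```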

Sharding


  1. Find an optimal shard number and size: The ideal shard size in OpenSearch is typically 10-30 GiB for workloads where search latency is a key performance objective, and 30-50 GiB for write-heavy workloads such as log analytics. Large shards make it difficult for OpenSearch to recover from failure, while too many small shards cause performance issues and out-of-memory errors.

Determine the number of primary shards for an index from the amount of data you have and your expected data growth, so that each shard stays within these size ranges as the index grows.

  2. Optimize shard allocation: Overallocating shards wastes resources. On a given node, keep no more than 25 shards per GiB of Java heap. For example, an m5.large.search instance has a 4-GiB heap, so each node should hold no more than 100 shards. At that shard count, each shard is roughly 5 GiB in size, which is well below the recommended size range.
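The arithmetic behind both guidelines can be captured in two small helpers. These are a sketch of the rules of thumb above; verify the heap size for your own instance types:

```python
import math

def max_shards_for_heap(heap_gib, shards_per_gib=25):
    """Upper bound on shards per node from the '25 shards per GiB of heap' rule."""
    return int(heap_gib * shards_per_gib)

def suggested_primary_shards(index_size_gib, target_shard_gib=30):
    """Smallest primary-shard count keeping each shard at or under the target size."""
    return max(1, math.ceil(index_size_gib / target_shard_gib))

# The m5.large.search example above: a 4-GiB heap allows at most 100 shards.
```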

Search and Query Performance

  1. Use filters: Filters are faster than scored queries because they don't calculate relevance (_score); they simply include or exclude documents, and their results can be cached.
  2. Use search templates: Search templates let you predefine and reuse complex search queries, reducing processing time and improving search performance.
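Both tips can be sketched together. The first two bodies show the same term condition in scoring context versus filter context; the third registers a minimal search template (a stored mustache script). Field values and the template id are illustrative:

```python
import json

# The same condition: scored (computes _score) vs. filtered (no scoring, cacheable).
scored = {"query": {"bool": {"must": [{"term": {"status": "published"}}]}}}
filtered = {"query": {"bool": {"filter": [{"term": {"status": "published"}}]}}}

# A minimal search template: the query source with {{placeholders}} for parameters.
template_body = {
    "script": {
        "lang": "mustache",
        "source": json.dumps(
            {"query": {"bool": {"filter": [{"term": {"status": "{{status}}"}}]}}}
        ),
    }
}

# With a live client (names are placeholders):
# client.put_script(id="by_status", body=template_body)
# client.search_template(body={"id": "by_status", "params": {"status": "published"}})
```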

Bonus Tip: Regularly Update Your Cluster for Optimal Performance

Keeping your Amazon OpenSearch Service cluster up to date is one of the easiest ways to ensure peak performance. Each new version of OpenSearch introduces significant performance improvements, optimizations, and bug fixes. For example, OpenSearch 2.14 brings major enhancements to query speed, indexing efficiency, and resource management, which can significantly reduce costs while improving overall cluster responsiveness.

To ensure you're taking advantage of the latest improvements:

  • Plan Regular Updates: Schedule periodic updates for your OpenSearch clusters. AWS makes it easy to upgrade with minimal downtime using blue/green deployments.
  • Test in a Staging Environment: Before applying updates in production, always test new versions in a staging environment to ensure compatibility with your existing setup.
  • Leverage New Features: Take advantage of the latest features and optimizations in the newer versions, such as better memory management, faster queries, and enhanced index recovery processes.

Regularly updating your OpenSearch cluster will keep your environment running smoothly and allow you to leverage the latest innovations for improved performance.

Additional Reading

Keep in mind, these tips and improvements are just the starting point; their ultimate effectiveness depends on your specific scenario and use case. While we've focused on Hardware, Indexing, Sharding, Monitoring, and Query Optimization, there are many more facets to consider, such as security, a critical component we haven't delved into here. Remember, your OpenSearch Service should be as unique as your needs.

If you require assistance in optimizing your OpenSearch Service deployment, tecRacer, an Amazon OpenSearch Service Delivery Partner, is here to provide expert guidance. Our team of professionals specializes in designing, deploying and securely managing OpenSearch Service infrastructures tailored to individual needs. Whether you need support in selecting the right instance types, fine-tuning indexing strategies, monitoring performance, or optimizing search and query operations, tecRacer can provide the expertise you need.

Photo by Alessandro Bianchi on Unsplash
