A successful approach to AWS cost optimization starts with gaining a thorough view of existing costs, identifying optimization opportunities, and implementing changes. AWS and third-party software vendors offer tools that help customers understand where their money is going.
In this article, I'll provide a comprehensive guide on how to understand your AWS costs and needs.
What are your data storage requirements?
The first step is to consider the performance profile of each workload so you can right-size its storage. A performance evaluation will give you the input/output operations per second (IOPS), throughput, and other metrics you need for this analysis.
AWS storage services are designed for different storage scenarios; no single data storage solution suits every workload. Evaluate the storage options for each workload independently when determining its requirements.
To do this efficiently, you should identify some key information.
- How often do you access your data? AWS has different pricing plans depending on how frequently you need to access data.
- Do you need high IOPS or throughput for your data store? AWS offers storage types tailored to different performance profiles. Knowing your IOPS and throughput requirements helps you provision the right amount of storage and avoid overpaying.
- How important is your data? Vital or regulated data must be preserved at almost any cost and retained for long periods.
- How sensitive is your data? Highly confidential data must be protected against both accidental and malicious modification. Durability, cost, and security all matter here.
- How much data do you have? This is basic information to determine the storage you need.
- How temporary is your data? Transient data is needed only briefly and requires no durability guarantees.
- What is your data storage budget? Cost is also a critical factor when deciding which storage option to choose.
S3 storage classes
S3 storage classes affect the availability, durability, and cost of objects stored in S3. A single S3 bucket can hold objects of different classes, and an object's class can be changed during its lifetime. Picking the right storage class is crucial for cost-effectiveness; the wrong one can generate significant unnecessary costs.
Amazon S3 provides six storage classes, each built for specific use cases and priced at a different rate per gigabyte.
- S3 Standard: costs are based on object size. Store objects here that you access frequently.
- S3 Standard-Infrequent Access (Standard-IA): costs are based on object size plus a per-GB retrieval fee, making it suited to data you access rarely.
- S3 One Zone-Infrequent Access: the difference between this class and S3 Standard-IA is that it stores data in a single AZ at a 20% lower cost, instead of a minimum of three AZs. However, this reduces availability.
- S3 Intelligent-Tiering: automatically moves objects between tiers based on access patterns, for a small per-object monitoring fee. Frequently accessed objects stay in a tier priced like Standard, while objects that go unaccessed move to a tier priced like Standard-IA.
- S3 Glacier: long-term data archiving at a lower cost, with retrieval times ranging from minutes to hours.
- S3 Glacier Deep Archive: long-term data archiving with access once or twice a year.
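As a rough illustration of how much the storage class matters, here is a sketch that compares monthly storage-only costs. The per-GB rates below are placeholder example figures, not current AWS list prices; check the S3 pricing page for your region.

```python
# Illustrative monthly cost comparison across S3 storage classes.
# Per-GB prices are rough EXAMPLE figures, not current AWS list prices.
EXAMPLE_PRICE_PER_GB = {
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "ONEZONE_IA": 0.01,        # single-AZ class, ~20% below Standard-IA
    "GLACIER": 0.004,
    "DEEP_ARCHIVE": 0.00099,
}

def monthly_storage_cost(gb: float, storage_class: str) -> float:
    """Storage-only cost; ignores request, retrieval, and transfer fees."""
    return round(gb * EXAMPLE_PRICE_PER_GB[storage_class], 2)

# 500 GB of rarely accessed logs: archiving is far cheaper than Standard.
print(monthly_storage_cost(500, "STANDARD"))      # ~11.5
print(monthly_storage_cost(500, "DEEP_ARCHIVE"))  # well under a dollar
```

Note that the cheaper archive classes trade away retrieval speed and add retrieval fees, so the storage-only number is only part of the picture.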
EBS Storage
Elastic Block Store (EBS) provides block storage for EC2 virtual machines. Volumes can be up to 16 TiB each, backed by either SSD or HDD. This is provisioned storage that you pay for per gigabyte, monthly, so estimate how much storage you actually need at a given time and provision only that amount. You can increase the size of an EBS volume later.
When purchasing EBS storage, the first decision is whether to use SSD or HDD volumes. SSD volumes are great for frequent random read and write operations, while HDD volumes are better for large sequential streaming workloads that need high throughput.
There are several types of EBS storage volumes; the EBS documentation lists them all.
When choosing a volume type, your first instinct may be to dismiss HDD storage, but don't rush the decision. If you pick one of the SSD options, regularly monitoring your EBS usage can reveal whether an HDD volume would be a better fit; you may not need that level of performance after all. Also, remember to delete EBS volumes you no longer use, which can eliminate a lot of unnecessary cost.
EBS snapshots deserve a mention as well. Snapshots are billed on the data actually stored, not on the provisioned volume size, and they are also charged per gigabyte per month, at roughly $0.05 per GB-month. When you need to restore a snapshot quickly, you can enable EBS Fast Snapshot Restore, at an additional cost.
You can start with the Free Tier, which offers 30 GB of EBS storage, 2 million I/Os, and 1 GB of snapshot storage.
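Because EBS bills on provisioned size while snapshots bill on stored data, right-sizing a volume has a direct payoff. A minimal sketch, using the $0.05 per GB-month snapshot price quoted above and an assumed $0.08 per GB-month volume rate (actual rates vary by region and volume type):

```python
# Rough EBS cost model: you pay for the PROVISIONED volume size every
# month, while snapshots are billed on data actually stored.
# The gp3-style volume rate is an ASSUMED example figure; the snapshot
# rate matches the $0.05/GB-month quoted above. Prices vary by region.
VOLUME_PER_GB_MONTH = 0.08
SNAPSHOT_PER_GB_MONTH = 0.05

def ebs_monthly_cost(provisioned_gb: float, snapshot_gb: float) -> float:
    volume = provisioned_gb * VOLUME_PER_GB_MONTH
    snapshots = snapshot_gb * SNAPSHOT_PER_GB_MONTH
    return round(volume + snapshots, 2)

# A 500 GB volume that is only 100 GB full still bills for all 500 GB:
print(ebs_monthly_cost(500, 100))
# Right-sized to 150 GB, with the same snapshot usage, it costs far less:
print(ebs_monthly_cost(150, 100))
```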
EC2 pricing
EC2 instances are charged per hour or per second while they are running, so shut them down when you don't need them. You will also pay for provisioned EBS storage regardless of whether your EC2 instances are running. Finally, you pay for data transfer out, at a rate that varies by region. Other pricing dimensions are covered in the EC2 documentation.
There are several EC2 payment options: On-Demand, Savings Plans, Reserved Instances, and Spot Instances.
There are about 400 EC2 instance types to choose from, and picking the right instance family and size is important for cost-effectiveness. For right-sizing, you can use Amazon CloudWatch, AWS Cost Explorer, and AWS Trusted Advisor.
Cost savings in serverless
Serverless computing can save you a lot of time and money. Here are some of the benefits:
- No need for server management
- Scale automatically without downtime
- Pay for what you use
- Migrate a large amount of everyday work to AWS
- Save time you can use to focus on your actual product
- Become more agile and flexible

To go serverless on AWS, you can use AWS Lambda for compute, DynamoDB or Aurora for data, S3 for storage, and API Gateway as a proxy.
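"Pay for what you use" is concrete with Lambda: you are billed per request plus per GB-second of compute. A sketch using rates in line with commonly published Lambda pricing (region-dependent, so treat the numbers as illustrative):

```python
# Sketch of the AWS Lambda pay-per-use model: per-request charge plus a
# charge per GB-second of compute. Rates below reflect commonly
# published Lambda pricing but vary by region -- treat as illustrative.
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.0000166667

def lambda_monthly_cost(requests: int, avg_ms: float, memory_mb: int) -> float:
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    request_cost = (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return round(request_cost + gb_seconds * PRICE_PER_GB_SECOND, 2)

# 5M requests/month, 120 ms average duration, 256 MB of memory:
print(lambda_monthly_cost(5_000_000, 120, 256))  # a few dollars a month
```

Compare that with keeping an EC2 instance running around the clock for the same traffic; for spiky or low-volume workloads, the serverless model usually wins.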
Database pricing (RDS & DynamoDB)
When it comes to RDS pricing, the first thing to consider is the instance you choose; the only serverless option is Amazon Aurora. Database storage is the next important factor: the bigger the database, the higher the cost. The remaining two factors are backup storage and data transfer across Availability Zones.
You can choose one of the following Amazon RDS instances:
- General purpose (T3, T2, M6g, M5, M5d, M4)
- Memory optimized (R6g, R5, R5b, R5d, R4, X1e, X1, Z1d)
You can try Amazon RDS for free and pay for what you use. The payment options are on-demand or Reserved Instances. To estimate your spending, try the AWS Pricing Calculator.
As for DynamoDB, you can pay on-demand or for provisioned capacity. The DynamoDB pricing documentation explains how read and write capacity units are billed under each model.
To control your DynamoDB costs, use auto-scaling. Auto-scaling uses traffic patterns to dynamically adjust the number of read and write capacity units, which helps with DynamoDB workloads that are hard to predict. In a scaling policy for read/write capacity, you specify only the minimum and maximum provisioned values; CloudWatch alarms then trigger the policy to scale up or down.
To save more, you can purchase reserved capacity for a one- or three-year term; the commitment gets you capacity units at a reduced price.
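As a back-of-the-envelope comparison, here is a sketch of provisioned-capacity cost with a reserved-capacity discount applied. The per-hour rates are example figures in the ballpark of published us-east-1 pricing, and the discount percentage is hypothetical:

```python
# Rough DynamoDB provisioned-capacity cost model. Per-hour rates are
# EXAMPLE figures near published us-east-1 pricing; the reserved
# discount is HYPOTHETICAL -- actual discounts depend on the term.
WCU_PER_HOUR = 0.00065   # write capacity unit
RCU_PER_HOUR = 0.00013   # read capacity unit
HOURS_PER_MONTH = 730

def dynamodb_monthly_cost(rcu: int, wcu: int, reserved_discount: float = 0.0) -> float:
    hourly = rcu * RCU_PER_HOUR + wcu * WCU_PER_HOUR
    return round(hourly * HOURS_PER_MONTH * (1 - reserved_discount), 2)

print(dynamodb_monthly_cost(100, 50))                        # standard rate
print(dynamodb_monthly_cost(100, 50, reserved_discount=0.5)) # with commitment
```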
Key takeaways
To choose the right AWS pricing plans, you first need to understand your data storage requirements. The importance, sensitivity, and volume of your data, among other characteristics, will shape your final AWS spending.
Then you need to choose among the six S3 storage classes and the EBS volume types. There are also several EC2 pricing options, plus serverless computing as an alternative. Finally, we covered Amazon RDS and DynamoDB pricing. What you choose depends on your application and your data storage requirements.