AWS provides diverse data storage options, each designed for specific use cases. Below is a comparison of S3 (Simple Storage Service), EBS (Elastic Block Store), and EFS (Elastic File System):
1. Amazon S3 (Simple Storage Service)
Description:
- Object storage service designed for scalability, durability, and availability.
- Stores data as objects in buckets.
- Accessible via HTTP(S) with a RESTful API.
Key Features:
- Scalability: Handles petabytes of data seamlessly.
- Durability: 99.999999999% (11 9's) durability.
- Data Tiers: Standard, Infrequent Access (IA), Glacier for archiving.
- Versioning & Lifecycle Management: Keep multiple versions of an object and automatically transition objects between storage tiers (see the CLI sketch after the use cases below).
- Global Access: Can be used with a CDN (CloudFront) for fast delivery.
Use Cases:
- Static website hosting.
- Backup and archival.
- Data lakes for analytics.
- Media storage and content distribution.
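As a concrete illustration of the versioning and lifecycle features above, here is a minimal AWS CLI sketch. It assumes a bucket named my-awesome-bucket already exists and that a 90-day transition to Glacier fits your retention needs; both are illustrative assumptions.

```bash
# Assumption: the bucket "my-awesome-bucket" already exists and the CLI is configured.

# Enable versioning so overwritten or deleted objects keep their prior versions.
aws s3api put-bucket-versioning \
  --bucket my-awesome-bucket \
  --versioning-configuration Status=Enabled

# Add a lifecycle rule that moves objects to Glacier after 90 days (illustrative value).
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-awesome-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-after-90-days",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }]
  }'
```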
2. Amazon EBS (Elastic Block Store)
Description:
- Block storage attached to EC2 instances, functioning as virtual hard drives.
- Persistent storage whose lifecycle is independent of the EC2 instance: data survives instance stops, and non-root volumes survive termination by default (root volumes are deleted on termination unless DeleteOnTermination is disabled).
Key Features:
- Volume Types: General Purpose SSD (gp2/gp3), Provisioned IOPS SSD (io1/io2), Throughput Optimized HDD (st1), and Cold HDD (sc1).
- Snapshots: Back up volumes to S3 for point-in-time recovery (see the CLI sketch after the use cases below).
- High Availability: Replicated within an Availability Zone (AZ).
- Encryption: Supports data-at-rest encryption.
Use Cases:
- Databases (e.g., MySQL, PostgreSQL).
- Filesystem for EC2-based applications.
- Transaction-heavy workloads requiring low latency.
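To make the volume and snapshot workflow concrete, here is a minimal AWS CLI sketch; the Availability Zone, instance ID, and volume ID below are placeholders you would replace with your own values.

```bash
# Placeholders: us-east-1a, i-0123456789abcdef0, and vol-0123456789abcdef0 must be
# replaced with your own AZ, instance ID, and the VolumeId returned by create-volume.

# Create a 20 GiB gp3 volume in the same AZ as the target instance.
aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp3

# Attach the volume to an EC2 instance as /dev/xvdf.
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/xvdf

# Take a point-in-time snapshot (stored durably in S3 behind the scenes).
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly backup"
```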
3. Amazon EFS (Elastic File System)
Description:
- Fully managed, scalable file storage accessible by multiple EC2 instances.
- Compatible with NFSv4 (v4.0 and v4.1) and designed for concurrent access from many clients (see the mount sketch after the use cases below).
Key Features:
- Elastic Scaling: Automatically scales as data grows.
- Regional Access: Accessible across multiple Availability Zones.
- Performance Modes: General Purpose (lower latency) and Max I/O (higher aggregate throughput and IOPS at slightly higher latency).
- Integration: Supports Amazon ECS and AWS Lambda for serverless workloads.
Use Cases:
- Content management systems.
- Shared development environments.
- Big data and analytics requiring concurrent access.
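Here is a minimal sketch of creating an EFS file system and mounting it from an EC2 instance. The subnet, security group, file system ID, and region shown are placeholders for your own values.

```bash
# Placeholders: replace the subnet, security group, file system ID, and region below.

# Create the file system.
aws efs create-file-system --creation-token my-shared-fs

# Create a mount target so instances in that subnet/AZ can reach it.
aws efs create-mount-target \
  --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0

# On each EC2 instance, mount the file system over NFSv4.1.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
  fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```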
Comparison Table
| Feature | Amazon S3 | Amazon EBS | Amazon EFS |
|---|---|---|---|
| Storage Type | Object storage | Block storage | File storage |
| Access | HTTP(S) API | Attached to EC2 instances as block devices | Concurrent access from multiple EC2 instances |
| Performance | Varies by storage tier | High performance, low latency | High throughput |
| Durability | 11 9's durability | Replicated within an Availability Zone | Data stored redundantly across multiple AZs |
| Scalability | Virtually unlimited | Limited by volume size (up to 16 TiB for most volume types) | Elastic scaling |
| Cost | Pay for storage used | Pay for provisioned capacity | Pay for storage used |
| Best For | Backups, archives, and static content | Databases and transactional workloads | Shared file storage |
Which One to Use?
| Use Case | Recommended Option |
|---|---|
| Backup, static content hosting | Amazon S3 |
| Running a relational database on EC2 | Amazon EBS |
| Shared storage for multiple EC2 instances | Amazon EFS |
| Large-scale data analytics and machine learning | Amazon S3 |
| High-performance, low-latency applications | Amazon EBS |
| Web applications needing shared files | Amazon EFS |
Each storage option can complement the others. For example, you might store backups in S3, use EBS for database workloads, and leverage EFS for shared application files.
Task: Create an S3 Bucket and Upload/Download Files Using AWS CLI
Prerequisites
- AWS CLI Installed:
  - Install the AWS CLI if it is not already installed.
- AWS CLI Configured:
  - Run the following command and provide your AWS credentials:
aws configure
  - You'll need:
    - AWS Access Key ID
    - AWS Secret Access Key
    - Default region (e.g., us-east-1)
    - Default output format (e.g., json)
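Before creating any buckets, it can help to confirm the CLI is using the account and region you expect. The two commands below are a quick sanity check and assume only that the credentials entered above are valid.

```bash
# Print the AWS account and IAM identity the CLI is currently authenticated as.
aws sts get-caller-identity

# Show where each configuration value (key, region, output format) comes from.
aws configure list
```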
Steps to Create an S3 Bucket
- Create the Bucket:
aws s3 mb s3://your-bucket-name --region your-region
Replace your-bucket-name with a globally unique bucket name and your-region with your desired AWS region (e.g., us-east-1).
Example:
aws s3 mb s3://my-awesome-bucket --region us-east-1
- Verify the Bucket Creation:
aws s3 ls
This lists all your buckets.
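If you prefer to check one specific bucket rather than listing them all, aws s3api head-bucket is a lightweight alternative; the bucket name reuses the example above.

```bash
# Exits with status 0 if the bucket exists and you have access; errors otherwise.
aws s3api head-bucket --bucket my-awesome-bucket
```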
Steps to Upload a File
- Prepare a File:
  - Create a sample file to upload:
echo "Hello, S3!" > hello.txt
- Upload the File:
aws s3 cp hello.txt s3://your-bucket-name/
Example:
aws s3 cp hello.txt s3://my-awesome-bucket/
- Verify the Upload:
aws s3 ls s3://your-bucket-name/
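For uploading more than a single file, aws s3 sync copies a whole local directory and skips files that are already up to date; the ./my-files directory below is just a placeholder.

```bash
# Upload everything under ./my-files, re-copying only new or changed files.
aws s3 sync ./my-files s3://your-bucket-name/my-files/
```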
Steps to Download a File
- Download the File:
aws s3 cp s3://your-bucket-name/hello.txt ./downloaded-hello.txt
- Verify the File:
cat downloaded-hello.txt
This should display Hello, S3!
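To pull down an entire prefix instead of a single object, aws s3 cp accepts a --recursive flag; the paths below are placeholders.

```bash
# Download every object under the my-files/ prefix into a local folder.
aws s3 cp s3://your-bucket-name/my-files/ ./my-files/ --recursive
```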
Optional Commands
- List Files in the Bucket:
aws s3 ls s3://your-bucket-name/
- Delete a File from the Bucket:
aws s3 rm s3://your-bucket-name/hello.txt
- Delete the Bucket:
  - The --force flag removes any remaining objects and then deletes the bucket; without it, the bucket must already be empty:
aws s3 rb s3://your-bucket-name --force
Summary
- Create a bucket: aws s3 mb
- Upload files: aws s3 cp <file> s3://<bucket>
- Download files: aws s3 cp s3://<bucket>/<file> <destination>
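Putting the steps together, here is a small end-to-end sketch of the workflow in one script. The bucket name and region are assumptions, and the $RANDOM suffix is only there to make the example name more likely to be globally unique.

```bash
#!/usr/bin/env bash
set -euo pipefail

BUCKET="my-awesome-bucket-$RANDOM"   # assumed name; must be globally unique
REGION="us-east-1"                   # assumed region

aws s3 mb "s3://$BUCKET" --region "$REGION"        # create the bucket

echo "Hello, S3!" > hello.txt
aws s3 cp hello.txt "s3://$BUCKET/"                # upload
aws s3 ls "s3://$BUCKET/"                          # verify the upload

aws s3 cp "s3://$BUCKET/hello.txt" downloaded-hello.txt   # download
cat downloaded-hello.txt                                   # should print: Hello, S3!

aws s3 rb "s3://$BUCKET" --force                   # clean up: objects and bucket
```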
Happy Learning !!!