Dimitrios Kechagias


Cloud provider comparison 2024: VM Performance / Price

Time for the yearly cloud compute comparison test. It's a bit delayed: most of the benchmarking work was done early this summer, and some of the main points were already summarized in my Maximizing Performance and Cost Efficiency in the Cloud talk at a conference in June (watch it on YouTube for lots of useful tips). However, there were some complications I wanted to get right, as well as new types introduced over the course of the summer, so better to get it right than to get it early.


What's new

In the 2023 comparison I looked at VM types from 10 providers. Thanks to that experience, this comparison can be more targeted, covering 30 VM types from 7 providers. Here are the changes in providers:

  • IBM is dropped due to their value being too poor to compete with anyone else.
  • OVHCloud disappointed (both in performance and support), so they are replaced by a different popular budget provider: Hetzner.
  • Alibaba Cloud and Tencent are also dropped to simplify the comparison, as they are not as relevant in the Western markets.

Other changes:

  • New CPUs: AMD EPYC Genoa is available from 3 providers, while Google has just started offering the Intel Emerald Rapids and Amazon their brand new Graviton4.
  • More testing: More benchmarks run and more instances tested per type over various regions for longer to establish performance stability. Additionally, burstable instances were tested, but will be published in a separate follow-up.

The contenders (2024 edition)

Similar to last year, I will focus on 2x vCPU instances, as that's the minimum scalable unit for a meaningful comparison (and generally the minimum size for several VM types), given that most AMD and Intel instances use Hyper-Threading (aka Simultaneous Multi-Threading). For those systems a vCPU is a Hyper-Thread, or half a core, so a 2x vCPU instance gives you a full core with 2 threads. This will become clear in the scalability section.
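If you are ever unsure whether a given instance exposes Hyper-Threads or full cores as vCPUs, a quick look from inside the VM will tell you (a minimal sketch using standard Linux tools; output labels vary slightly by distro):

# "Thread(s) per core: 2" means each vCPU is a Hyper-Thread (half a core),
# "Thread(s) per core: 1" means each vCPU is a full core.
lscpu | grep -E '^(Thread\(s\) per core|Core\(s\) per socket|Model name)'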

The comparison will be a bit more focused, skipping instance types that are obviously uncompetitive. I am still trying to configure instances with 2GB of RAM per vCPU (a ratio variably considered "compute-optimized" or "general-purpose") and a 30GB SSD (not high-IOPS) boot disk, so that the price comparison makes sense (exceptions will be noted).

The pay-as-you-go/*on-demand* prices include 100% sustained-use discounts where available and refer to the lowest-cost region in the US or Europe. For providers with variable pricing, the cheapest regions are almost always in the US.

For providers that offer 1-year and 3-year committed/reserved discounts, the no-downpayment price is listed for that option. The prices were valid for July 2024 - please check current prices before making final decisions.

As a guide, here is an overview of the various generations of AMD, Intel and ARM CPUs from older (top) to newer (bottom), roughly grouped horizontally in per-core performance tiers, based on this and the previous comparison results:

[Chart: CPU generations grouped into per-core performance tiers, older at top, newer at bottom]

This should immediately give you an idea of roughly what performance tier to expect based on the CPU alone, with the important note that for Hyper-Threading-enabled instances you get a single core for every 2x vCPUs.

As general guidance, you should avoid old CPU generations: due to their lower efficiency (higher running costs), the cloud providers will actually charge you more for less performance. I won't even include types that were already too old to provide good value last year, in order to focus on the more relevant products.

Amazon Elastic Compute Cloud (EC2)

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month | Spot $/Month |
|---|---|---|---|---|---|---|
| C6a.large (M) | AMD Milan | 2.65 / 4 / 30 | 58.25 | 39.34 | 26.99 | 28.65 |
| C7g.large (G3) | AWS Graviton3 | 2.1 / 4 / 30 | 55.33 | 37.29 | 25.69 | 24.10 |
| C7i.large (SR) | Intel Sapphire R | 2.4 / 4 / 30 | 67.55 | 45.50 | 31.08 | 30.42 |
| C7a.large (G) | AMD Genoa | 2.6 / 4 / 30 | 77.36 | 51.97 | 35.39 | 28.62 |
| R7iz.large (E) | Intel Emerald R | 3.7 / 16 / 30 | 138.18 | 87.94 | 58.42 | 55.35 |
| R8g.large (G4) | AWS Graviton4 | 2.8 / 16 / 30 | 88.41 | 59.40 | 39.01 | 23.01 |

Amazon Web Services (AWS) pretty much originated the whole "cloud provider" business - even though smaller connected VM providers predated it significantly (e.g. Linode comes to mind) - and still dominates the market. The AWS platform offers extensive services, but, of course, we are only looking at their EC2 offerings for this comparison.

There are 3 new CPUs introduced since last year. Amazon's own Graviton4 and the Intel Emerald Rapids were only just made public in memory-optimized types (R8g and R7iz respectively), so if you don't need 8GB/vCPU you'll have to wait longer for cost-effective types to become available; in the price/performance comparisons below, expect these two to be at a disadvantage. On the other hand, the new AMD EPYC Genoa is widely released with memory, general and compute-optimized types, and we are testing the latter (C7a). The big surprise for me is that it is non-SMT (non-Hyper-Threading if you prefer), which means you get a full core per vCPU.

With EC2 instances you generally know what you are getting (each instance type corresponds to a specific CPU), although there's a multitude of ways to pay/reserve/prepay/etc. which makes pricing very complicated, and pricing further varies by region (I used the lowest-cost US regions). The 1Y/3Y reserved prices listed include no prepayment - you can lower them a bit further if you do prepay. The spot prices vary even more by region and are updated often (especially for newly introduced types), so you'd want to keep track of them.
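If you want to keep track of those spot prices without clicking through the console, the standard AWS CLI can list the recent history (a minimal sketch; the region and instance type are just examples):

# Current Linux spot prices for c7a.large in us-east-2, per availability zone
aws ec2 describe-spot-price-history \
    --region us-east-2 \
    --instance-types c7a.large \
    --product-descriptions "Linux/UNIX" \
    --start-time "$(date -u +%Y-%m-%dT%H:%M:%S)" \
    --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice,Timestamp]' \
    --output table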

Google Compute Engine (GCE)

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month | Spot $/Month |
|---|---|---|---|---|---|---|
| n2-2 (I) | Intel Ice Lake | 2.6 / 4 / 30 | 49.82 | 39.87 | 29.34 | 37.60 |
| n2d-2 (M) | AMD Milan | 2.45 / 4 / 30 | 43.73 | 35.08 | 25.91 | 12.22 |
| t2d-2 (M) | AMD Milan | 2.45 / 8 / 30 | 64.68 | 41.86 | 30.76 | 11.77 |
| c3-4/2** (SR) | Intel Sapphire R | 2.7 / 4 / 30 | 65.31 | 42.03 | 30.71 | 17.86 |
| c3d-4/2** (G) | AMD Genoa | 2.6 / 4 / 30 | 57.12 | 36.87 | 27.02 | 24.28 |
| n4-2 (E) | Intel Emerald R | 2.1 / 4 / 30 | 60.77 | 39.18 | 28.67 | 25.75 |
| c4-2 (E) | Intel Emerald R | 2.3 / 4 / 30 | 64.49 | 41.52 | 30.34 | 27.23 |

** Extrapolated 2x vCPU instance - type requires 4x vCPU minimum size.

The Google Cloud Platform (GCP) follows AWS quite closely, providing mostly equivalent services, but lags in market share (3rd place, after Microsoft Azure). GCP does have one of the most interesting compute cloud offerings in respect to configurability and range of different instance types. However, this variety makes it harder to choose the right one for the task, which is exactly what prompted me to start benchmarking all the available types.

This year, we have an AMD EPYC Genoa solution (c3d), which we are currently using at SpareRoom based on the performance seen on the tests I posted about a few months ago. In addition, the latest Intel Emerald Rapids n4 and c4 just became available.

GCP prices vary per region and feature some strange patterns. For example, when you reserve, t2d instances (which give you a full AMD EPYC core per vCPU) and n2d instances (which give you a Hyper-Thread, i.e. HALF a core, per vCPU) have the same price per vCPU, but n2d is cheaper on demand. Also, some types (e.g. n2 and n2d) will come with an older (slower) CPU unless you set min_cpu_platform to Intel Ice Lake or AMD Milan, which gets you a 20% and 30% faster machine respectively for the same price.

Note that c3 and c3d types have a 4x vCPU minimum. This breaks the price comparison, so I am extrapolating to a 2x vCPU price (half the cost of CPU/RAM + full cost of 30GB SSD). GCP gives you the option to disable cores (you select "visible" cores), so while you have to pay for 4x vCPU minimum, you can still run benchmarks on a 2x vCPU instance for a fair comparison.
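For reference, here's a minimal sketch of how those two settings can be applied at creation time with the gcloud CLI (the instance names, zone and image are placeholders, and the --visible-core-count flag name is per current gcloud docs, so check your SDK version):

# n2d with the newer Milan platform pinned (otherwise you may get an older CPU)
gcloud compute instances create bench-n2d \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --min-cpu-platform="AMD Milan" \
    --image-family=debian-12 --image-project=debian-cloud \
    --boot-disk-size=30GB --boot-disk-type=pd-balanced

# c3d at its 4x vCPU minimum, but with only 1 core (2 threads) visible to the OS
gcloud compute instances create bench-c3d \
    --zone=us-central1-a \
    --machine-type=c3d-standard-4 \
    --visible-core-count=1 \
    --image-family=debian-12 --image-project=debian-cloud \
    --boot-disk-size=30GB --boot-disk-type=pd-balanced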

Microsoft Azure

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month | Spot $/Month |
|---|---|---|---|---|---|---|
| D2pls_v5 (A) | Ampere Altra | 3.0 / 4 / 32 | 52.04 | 31.65 | 21.26 | 7.36 |
| D2ls_v5 (I) | Intel Ice Lake | 2.8 / 4 / 32 | 64.45 | 38.98 | 25.98 | 17.91 |
| D2as_v5 (M) | AMD Milan | 2.45 / 8 / 32 | 65.18 | 39.40 | 26.26 | 8.68 |

Azure is the #2 overall Cloud provider and, as expected, it's the best choice for most Microsoft/Windows-based solutions. That said, it does offer many types of Linux VMs, with quite similar abilities as AWS/GCP.

The Azure pricing is at least as complex as AWS/GCP's, plus the pricing tool seems worse. They also lag behind the other two major providers in updating their fleet with newer types, with nothing of interest released since last year, which has unsurprisingly lowered their competitiveness. They have had EPYC Genoa in preview since last year, and they recently started an Emerald Rapids preview, so they should at least have new types out for our comparison next year!

Oracle Compute VM

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month | Spot $/Month |
|---|---|---|---|---|
| Standard.A1 (A) | Ampere Altra | 3.0 / 4 / 30 | 19.78 | 10.53 |
| Standard.A2 (AO) | Ampere AmpereOne | 3.0 / 4 / 30 | 17.31 | |
| Standard3 (I) | Intel Ice Lake | 2.6 / 4 / 30 | 37.75 | 20.96 |
| Standard.E4 (M) | AMD Milan | 2.45 / 4 / 30 | 23.88 | 12.57 |
| Standard.E5 (G) | AMD Genoa | 2.6 / 4 / 30 | 28.99 | 15.12 |

The biggest surprise for me last year was Oracle Cloud Infrastructure (OCI), and a pleasant one: not only does Oracle offer by far the most generous free tier (A1-type ARM VM credits equivalent to a sustained 4x vCPU, 24GB RAM, 200GB disk for free, forever), their paid ARM instances were the best value across all providers - especially on-demand. The free resources are enough for quite a few hobby projects - they would cost you well over $100/month with the big-3 providers.

Note that registration is a bit draconian to avoid abuse: make sure you are not on a VPN, and don't use "oracle" anywhere in the email address you use for registration. You start with a "free" account, which gives you access to a limited selection of services, and apart from the free-tier-eligible A1 VMs, you'll struggle to build any other types with the free credit you get at the start.

Upgrading to a regular paid account (which still gives you the free-tier credits), you get a wider selection of VMs. New this year are the EPYC Genoa Standard.E5 VMs and the second-generation ARM Standard.A2 type, powered by the AmpereOne CPU. The A1 has gone up a bit in price and the A2 is cheaper. If you expected the newer CPU from Ampere to be significantly faster than the Altra, you may be disappointed. It probably uses less power per core, which would explain Oracle pricing it lower, but its performance is a mixed bag: quite a bit faster in some tasks, but quite a bit slower in others.

Oracle Cloud's prices are the same across all regions, which is nice. They do not offer any reserved discounts, but do offer a 50% discount for preemptible (spot) instances.

Akamai (Linode)

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month |
|---|---|---|---|
| Linode 4 (R*) | AMD Rome | 2.9 / 4 / 80 | 24.00 |
| Linode 4 (M*) | AMD Milan | 2.0 / 4 / 80 | 24.00 |
| Premium 4G (M) | AMD Milan | 2.0 / 4 / 80 | 36.00 |

* Shared core.

Linode, the venerable cloud provider (predating AWS by several years), has now been part of Akamai for over 2 years.

It seems that they continue to update their data centers. As we saw in previous years, it was luck of the draw what type of AMD CPU you would get when creating a VM, but you can now almost always get a Milan or a Rome. Interestingly, while last year all the Milan instances were slower than Rome, this year I found that they frequently performed much better. This is not an oversubscribing / noisy-neighbour issue: the Milan instances with significantly different performance would maintain a stable performance level through hours/days of testing. It seems they are simply set up differently in different data centers.

It is a bit of an annoyance that, without testing your VM after creation, you can't be sure what performance to expect, unless you go for the more expensive dedicated VMs. Otherwise, Akamai/Linode is still easy to set up and maintain and has fixed, simple pricing across regions.
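A quick post-creation check is enough to see what you were handed (a minimal sketch; dkbench is the benchmark suite from the methodology section below, and the EPYC model number reveals the generation, as discussed in the conclusions):

# Identify the underlying CPU and how many vCPUs you got
grep -m1 'model name' /proc/cpuinfo
nproc

# Then a short benchmark run to gauge the actual performance level
dkbench -i 10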

DigitalOcean Droplets

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month |
|---|---|---|---|
| Basic-2 * | Intel | 2.5 / 4 / 80 | 24.00 |
| Prem2-AMD (R*) | AMD Rome | 2.0 / 4 / 80 | 32.00 |
| Prem2-Intel (C*) | Intel Cascade L | 2.5 / 4 / 100 | 28.00 |
| CPU2-Opt (S) | Intel Skylake | 2.7 / 4 / 25 | 42.00 |

* Shared core.

DigitalOcean was close to the top of the perf/value charts two years ago, providing the best value with their shared-CPU Basic "droplets". I am actually using DigitalOcean droplets to host a free weather service called 7Timer, so feel free to use my affiliate link to sign up and get $200 free - you will help with the free project's hosting costs if you end up using the service beyond the free period. Apart from value, I chose them for the simplicity of setup, deployment, snapshots and backups.

However, for the second year in a row, the performance of their best-value (shared-CPU) VMs has dropped further, showing signs of significant oversubscribing. While I like their simple, stable, region-independent pricing structure, they seem to have stopped updating their VM fleet, going by the aforementioned performance drop and the absence of newer CPUs.

Hetzner

| Instance Type | CPU type | GHz / RAM (GB) / SSD (GB) | Price $/Month |
|---|---|---|---|
| CX22 (S*) | Intel Skylake | 2.2 / 4 / 40 | 4.91 |
| CAX11 (A*) | Ampere Altra | 2.0 / 4 / 40 | 4.91 |
| CPX31/2** (R*) | AMD Rome | 2.44 / 4 / 80 | 8.81 |
| CCX13 (M) | AMD Milan | 2.4 / 8 / 80 | 15.54 |

* Shared core.
** Extrapolated 2x vCPU instance - type requires 4x vCPU minimum size.

Hetzner is a quite old German data center operator and web host, with a very budget-friendly public cloud offering. I had not used them before, but they are often recommended as a reliable low-budget solution, so I thought I'd use them to replace last year's budget operator, OVH, which left me unimpressed with both its performance and its support.

On the surface, their prices seem to be just a fraction of those of the larger providers, so I did extended benchmark runs over days to make sure there is no significant oversubscribing - especially for the shared-core types. Only the CCX13 claims dedicated cores. Ironically, those dedicated instances vary significantly in performance depending on which data center you create them in. So while the shared core VMs may not have stable performance, the variation is often smaller than that of dedicated VMs in different regions.

Test setup & Benchmarking methodology

The methodology is similar to the last two years', although some benchmarks have changed. Almost all instances ran 64-bit Debian 12, although I had to use Ubuntu 24.04 on a couple, and Oracle's ARM instances were only compatible with Oracle Linux. The initial setup was:

# Debian
sudo apt-get update
sudo apt install -y wget build-essential cpanminus libxml-simple-perl libjpeg-dev libexpat1-dev zlib1g-dev libssl-dev libdb-dev php-cli php-xml php-zip libelf-dev

# Oracle Linux (the Ubuntu instances used the apt setup above)
sudo yum update
sudo yum install -y wget make automake gcc gcc-c++ kernel-devel perl-App-cpanminus perl-XML-Simple libjpeg-devel expat-devel zlib-devel openssl-devel libdb-devel php-cli php-xml php-pecl-zip elfutils-libelf-devel screen
  • Perl 5.36 compilation (perlbrew)

Subsequently, a specific Perl version was installed from sources (no tests, dual-threaded) via perlbrew, both to serve as a compilation benchmark and to use as a common comparison base for the Perl-based test suite:

\curl -L https://install.perlbrew.pl | bash
source ~/perl5/perlbrew/etc/bashrc
perlbrew download perl-5.36.0
time perlbrew install perl-5.36.0 -n -j2

Optional: For future use of perlbrew on your terminal you can add it to your profile.

echo 'source ~/perl5/perlbrew/etc/bashrc' >> ~/.bashrc
  • Benchmark::DKbench

As previously, I use my own benchmark suite, now improved and released on CPAN as Benchmark::DKbench. It has proven very good at approximating real-world performance differences in the types of workloads we use, and it is also good at comparing single- and multi-threaded performance (scaling to hundreds of threads if needed).

To set it up we activate the Perl we just compiled and add the cpanm module installer, which we use to get DKbench (along with BioPerl to enable the optional benchmarks):

perlbrew switch perl-5.36.0 
perlbrew install-cpanm
cpanm -n BioPerl Benchmark::DKbench

At this point, to make sure every instance has the same Perl libraries installed (it is otherwise optional) we run:

setup_dkbench --force

And run the test with a minimum of 10 iterations (it will run both in single and multi-thread mode and show the scaling):

dkbench -i 10

I created more than one instance per type, preferably in different regions, and if I saw significant variance I ran the benchmark for 24h or more.
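For those longer runs, a simple timed loop that logs each pass is enough to spot drift or noisy neighbours (a minimal sketch, assuming dkbench has been set up as above):

# Repeat dkbench passes with an hour's pause in between and log the output
for i in $(seq 1 24); do
    echo "=== pass $i: $(date -u) ===" >> dkbench_stability.log
    dkbench -i 10 >> dkbench_stability.log 2>&1
    sleep 3600
done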

  • Geekbench 5

I have kept Geekbench 5, both because it can help you compare results with previous years and because Geekbench 6 seems to be much worse - especially in multi-threaded testing (I'd go as far as to say it looks broken to me):

# x86
wget https://cdn.geekbench.com/Geekbench-5.4.4-Linux.tar.gz
tar xvfz Geekbench-5.4.4-Linux.tar.gz
Geekbench-5.4.4-Linux/geekbench5

# ARM64
wget https://cdn.geekbench.com/Geekbench-5.4.0-LinuxARMPreview.tar.gz
tar xvfz Geekbench-5.4.0-LinuxARMPreview.tar.gz
Geekbench-5.4.0-LinuxARMPreview/geekbench5

I simply kept the best of 2 runs; you can browse the results here.

  • Phoronix (openbenchmarking.org)

For the first time, I added some Phoronix benchmarks, mainly because, apart from being popular, they can help benchmark specific things (e.g. AVX-512 extensions) and their results are openly available.

To install the suite:

wget https://phoronix-test-suite.com/releases/phoronix-test-suite-10.8.4.tar.gz
tar xvfz phoronix-test-suite-10.8.4.tar.gz
cd phoronix-test-suite
sudo ./install-sh

And then we install/run these benchmarks:

-- 7-zip

phoronix-test-suite benchmark compress-7zip

Very common application and very common benchmark - average compression/decompression scores are recorded.

-- Openssl (RSA 4096bit)

phoronix-test-suite benchmark openssl

Select option 1. This benchmark uses SSE/AVX up to AVX-512, which might be important for some people. Older CPUs that lack the latest extensions are at a disadvantage.
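If you just want a quick sanity check outside Phoronix, OpenSSL's own speed tool runs the same kind of RSA 4096 test (a minimal sketch; the absolute numbers won't match the Phoronix harness):

# Single-threaded RSA 4096 sign/verify throughput
openssl speed rsa4096

# Same test spread across both vCPUs
openssl speed -multi 2 rsa4096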

-- Linux Kernel compilation

phoronix-test-suite benchmark build-linux-kernel

Select option 1. Only x86 CPUs were compared, as the workload is not identical across architectures.

Results

The raw results can be accessed on this spreadsheet (or here for the full Geekbench results).

In the graphs that follow, the y-axis lists the names of the instances, with the CPU type in parentheses:

(E)  = Intel Emerald Rapids
(SR) = Intel Sapphire Rapids
(I)  = Intel Ice Lake/Cooper Lake
(C)  = Intel Cascade Lake
(S)  = Intel Skylake
(G)  = AMD Genoa
(M)  = AMD Milan
(R)  = AMD Rome
(G4) = Amazon Graviton4
(G3) = Amazon Graviton3
(G2) = Amazon Graviton2
(A1) = Ampere AmpereOne
(A)  = Ampere Altra
     = Unspecified Intel (Broadwell/Skylake)

Single-thread Performance

Single-thread performance can be crucial for many workloads. If you have highly parallelizable tasks you can add more vCPUs to your deployment, but there are many common types of tasks where that is not always a solution. For example, a web server can be scaled to service any number of requests in parallel; however, the vCPU's thread speed determines the minimum response time of each request.

  • DKbench 2.8 Single-Threaded

We start with the latest DKbench, running the 19 default benchmarks (Perl & C/XS) which cover a variety of common server workloads, first on a single thread:

[Chart: DKbench 2.8 single-thread scores]

Google's new Intel Emerald Rapids c4 instance takes the lead, with AMD EPYC Genoa from Amazon, Oracle and Google following closely. I should note here that Emerald Rapids is not faster in everything (we'll see more benchmarks later - DKbench is the one that favours it the most) and it may also have performance issues with older OS images that don't have the latest Intel drivers: e.g. the same benchmarks run under a CentOS 7 Apache server are slower on Emerald Rapids.

While Amazon's Genoa is faster than Google's in this test, as I had pointed out when first testing Google's c3d, you can get about an extra 10% of boost speed if you allocate a full processor (180x vCPU) for your instance.

What was more surprising for me is that Amazon's own Graviton4 pretty much reached the single-core performance level of Intel/AMD's finest! Given that the Graviton ARM CPUs are designed for performance-per-watt and low vCPU pricing without hyper-threading, single-thread performance was expected to be their weak point. Once the general and compute-optimized versions become available on AWS I feel they will become instant hits.

The other new ARM CPU, AmpereOne, does not impress with performance, and from the way it is marketed it wasn't supposed to - the focus is on cost; it even underperforms the older Altra in several benchmarks. However, it did come out ahead overall in DKbench, and it is cheaper.

Lastly, among the lower-cost providers, DigitalOcean has lagged behind in performance, signaling that their fleet is due for an upgrade. Both Akamai and Hetzner offer some fast Milan instances, although with both providers you are not guaranteed what performance level you will get when creating an instance - there is the variation shown in the chart. It's not oversubscribing (the performance is stable), it's just that groups of servers are set up differently.

Geekbench results are not very different:

[Chart: Geekbench 5 single-thread scores]

Multi-thread Performance & Scalability

  • Scalability

DKbench runs the benchmark suite single-threaded and multi-threaded (2 threads in this comparison as we use 2x vCPU instances) and calculates a scalability percentage. The benchmark obviously uses highly parallelizable workloads (if that's not what you are running, you'd have to rely more on the single-thread benchmarking). In the following graph 100% scalability means that if you run 2 parallel threads, they will both run at 100% speed compared to how they would run in isolation. For systems where each vCPU is 1 core (e.g. all ARM systems), or for "shared" CPU systems where each vCPU is a thread among a shared pool, you should expect scalability near 100% - what is running on one vCPU should not affect the other when it comes to CPU-only workloads.

Most Intel/AMD systems though give you a single core with 2 threads (Hyper-Threads / HT in Intel lingo - or Simultaneous Multi-Threading / SMT if you prefer) as a 2x vCPU unit. Those will give you scalability well below 100%. A 50% scalability would mean you have the equivalent of just 1x vCPU, which would be unacceptable. Hence, the farther above 50% you are, the more performance your 2x vCPUs give you over running on a single vCPU.
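To make the definition concrete, here is a hypothetical worked example of the arithmetic (the scores are made up, assuming scalability is the 2-thread aggregate throughput relative to twice the single-thread throughput, as described above):

# Scalability % = multi-thread throughput / (2 x single-thread throughput) * 100
single=1000   # hypothetical single-thread score
multi=1300    # hypothetical 2-thread aggregate score
awk -v s="$single" -v m="$multi" 'BEGIN { printf "%.1f%%\n", 100 * m / (2 * s) }'   # prints 65.0%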

[Chart: DKbench 2-thread scalability (%)]

As expected, the ARM and shared CPUs are near 100%, i.e. you are getting twice the multithreaded performance going from 1x to 2x vCPUs. You also get that from two x86 types: Amazon's Genoa C7a has been added to the list alongside Google's older Milan t2d.

From the rest we note that, as usual, AMD does Hyper-Threading better than Intel, being able to consistently show over 60% scaling. At least Intel's latest does not suffer as badly as their Ice Lake from 2 generations ago, which barely manages over 50%. If you've followed tech news you'll know Intel has announced they will be dropping HT from future generations, and you can understand why - it takes die space and they haven't been able to get it to offer any significant gains.

Note that last year I selected a specific highly scalable benchmark to calculate maximum scalability, while this year I went with average over all benchmarks - hence the lower values.

  • Multi-thread Performance

From the single-thread performance and scalability results we can guess how running DKbench multithreaded will turn out, but in any case here it is:

[Chart: DKbench 2.8 multi-thread scores]

It is Amazon's turn to take the lead with the Genoa-powered C7a, while last year's leader, the Milan t2d, has fallen to third - with the impressive Graviton4 R8g in between. The top spot was expected: Genoa is among the leaders in single-threaded performance, and C7a gives you a full core per vCPU.

Graviton3 is close behind, followed by Emerald Rapids, Genoa, Ampere and Milan, mostly in that order. Ice Lake is last, partly due to that terrible 52% scalability.

Geekbench 5 generally agrees:

[Chart: Geekbench 5 multi-thread scores]

Moving on to another popular benchmark - 7zip:

[Chart: 7-Zip compression/decompression scores]

Graviton4 tops the chart here, with Genoa following along with Graviton3. I knew that AMD generally performs better in 7zip benchmarks compared to Intel, but I did not expect such a strong showing from Graviton4. Emerald Rapids shows tremendous gains over Intel's previous generation but still lags behind the latest competition, at least in decompression.

I have a couple of developer-friendly benchmarks: compiling Perl 5.36 and the Linux Kernel from sources:

[Chart: Perl 5.36 compilation times]

[Chart: Linux kernel compilation times]

There are two clear "winners" here: Amazon's C7a, offering one full Genoa core per vCPU, and the super-cheap Rome-based Hetzner CPX31 (with Akamai's Rome Linode close behind). Graviton4 once again impresses in the Perl compilation, although I excluded ARM from the Linux Kernel results due to architecture dependencies that would make the comparison unfair.

  • AVX512

Lastly, in case you have software that can be accelerated by AVX-512, I am including an OpenSSL RSA 4096 benchmark. These are Intel's extensions, so they are present on all their CPUs since Skylake, whereas Genoa is the first AMD CPU to implement them. Older AMD CPUs and the ARM architectures will be at a disadvantage in this benchmark:

[Chart: OpenSSL RSA 4096 benchmark]

Impressively, AMD outperforms Intel at their own game here, with Genoa claiming the absolute top, although the upper half of the chart is Intel-dominated with most of their CPUs having AVX512 support.

Performance / Price (On Demand / Pay As You Go)

One factor that is often even more important than performance itself is the performance-to-price ratio.

I will start with the "on-demand" price quoted by every provider (including any sustained use discounts). While I listed monthly costs earlier, these prices are actually charged per minute or hour, so there's no need to reserve for a full month.

  • DKbench single-thread performance/price

The first chart is for single-thread performance/price.

[Chart: DKbench single-thread performance/price, on-demand]

Hetzner is on a completely different tier, especially for its shared-CPU VMs. Significantly, after benchmarking for days and weeks, I encountered no issues - performance or otherwise. They don't seem oversubscribed, as you might expect given their extremely low prices, although unlike the larger providers there is variation between instances of the same type depending on the block you get (similar, I guess, to Akamai and DigitalOcean).

Oracle is the best of the rest. Their A1, which got all the praise last year, has increased in price, so their AmpereOne A2 is now in the lead, with the Genoa E5 following closely. Akamai is competitive in value, and Google ranks at the top of the "Big 3" with its Emerald Rapids, Genoa and Milan types.

As I noted above, Microsoft has lagged behind in updates to their fleet, which is why they are the least competitive this year. But their new types are coming soon at least.

  • DKbench multi-thread performance/price

Moving on to 2x cores for judging multi-threaded performance:

[Chart: DKbench multi-thread performance/price, on-demand]

The results are quite similar, except ARM instances benefit from having twice the physical cores. Additionally, the brilliant Amazon C7a provides better value than Amazon's own Graviton3 and any offerings from the three most popular providers.

The Oracle A2 reaches a similar value to Hetzner's dedicated CPU type, but the shared VMs of the latter are still in their own league.

As anticipated from the scalability chart, Intel-powered types drop a few positions in the ranking due to poorer Hyper-Threading (HT) performance.

We can do the same with Geekbench:

[Chart: Geekbench performance/price, on-demand]

It is a very similar chart, although Microsoft does even worse.

Performance / Price (1-Year reserved)

The three largest (and most expensive) providers offer significant 1-year reservation discounts. To get the maximum discount you have to lock into a specific VM type, which is why it is extra important to know what you are getting out of each. Also, for AWS you can actually apply the 1 year prices to most on-demand instances by using third party services like DoIT's Flexsave, so this segment may still be relevant even if you don't want to reserve.

  • DKbench single-thread performance/price

The first chart is again for single-thread performance/price.

[Chart: DKbench single-thread performance/price, 1-year reserved]

The 1-year discount is enough for at least the GCP c4 & c3d to reach the x86 Linode and OCI types in value.

  • DKbench multi-thread performance/price

Moving on to evaluating multi-threaded performance using 2x vCPUs:

[Chart: DKbench multi-thread performance/price, 1-year reserved]

The non-HT VMs benefit with the AWS C7g and C7a leading the Big-3, followed by GCP's t2d. Remember, you can access similar pricing on AWS without reservation through a third party.

Linodes and ARM-powered Oracle VMs still provide clearly better value (and Hetzner is still a long way ahead).

Geekbench is very similar in results:

[Chart: Geekbench performance/price, 1-year reserved]

Performance / Price (3-Year reserved)

  • DKbench single-thread performance/price

Finally, for very long-term commitments, AWS, GCP and Azure provide 3-year reserved discounts:

[Chart: DKbench single-thread performance/price, 3-year reserved]

Google finally comes out ahead of Oracle and Akamai. In fact, AWS and even Azure also have types that offer better value than Akamai and some of Oracle's.

  • Multi-thread performance/price

[Chart: DKbench multi-thread performance/price, 3-year reserved]

[Chart: Geekbench performance/price, 3-year reserved]

Performance / Price (Spot / Preemptible VMs)

The large providers (AWS, GCP, Azure, OCI) offer their spare VM capacity at an - often heavy - discount, with the understanding that these instances can be reclaimed at any time when needed by other customers. This "spot" or "preemptible" VM instance pricing is by far the most cost-effective way to add compute to your cloud. Obviously, it is not applicable to all use cases, but if you have a fault-tolerant workload or can gracefully interrupt your processing and rebuild your server to continue, this might be for you.

AWS and OCI will give you a 2-minute warning before your instance is terminated. Azure and GCP will give you 30 seconds, which should still be enough for many use cases (e.g. web servers, batch processing etc).
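To act on that warning you need to watch for the interruption notice. On EC2, for example, it is exposed through the instance metadata service (a minimal sketch assuming IMDSv1 is enabled; with IMDSv2 you would first fetch a session token):

# Poll EC2 instance metadata for a spot interruption notice every 5 seconds
while true; do
    if curl -fs http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
        echo "Spot interruption notice received - draining work and saving state..."
        # stop accepting new requests, flush queues, upload checkpoints, etc.
        break
    fi
    sleep 5
done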

The discount for Oracle's instances is fixed at 50%, but for the other providers it varies wildly per region and can change often, so you have to stay on top of it to adjust your instance types accordingly.

For a longer discussion on spot instances see last year's spot performance/price comparison. Then you can look at this year's results below.

  • DKbench single-thread performance/price

[Chart: DKbench single-thread performance/price, spot]

Azure had the most generous discounts, enough to bring it to the perf/price level of budget provider Hetzner's cheapest shared-CPU VMs. GCP was also very generous, especially with the Milan types at the time of testing, which, along with Oracle's AMD types, surpassed the dedicated-CPU type from Hetzner.

As noted in last year’s analysis, Amazon is less generous with discounts overall, but taking advantage of spot pricing is still the optimal approach for AWS users. I will note here that the best value is offered by the Graviton4 R8g, which provides 4x the RAM compared to the competition. This tells me that when general or compute-optimized Graviton4 types become available, they will be seriously competitive.

  • Multi-thread performance/price

[Chart: DKbench multi-thread performance/price, spot]

[Chart: Geekbench performance/price, spot]

All VMs that give you more than just a Hyper-Thread per vCPU show up higher in the multi-thread charts, like Azure's ARM instance, which tops everything apart from Hetzner's shared-CPU types. Google's t2d is almost as good value (and faster overall) if you prefer x86. Despite Amazon's smaller discounts, its most interesting VM types are non-HT so they don't fall far behind: the aforementioned R8g gives some of the best perf/price ratios if you'd like 4x the RAM, while the C7a provides good value while delivering top per-thread performance, thanks to the EPYC Genoa.

Conclusions

As always, I provide all the data so you can draw your own conclusions. If you have highly specialized workloads, you may want to rely less on my benchmarks. However, for most users doing general computing, web services, etc. I'd say you are getting a good idea about what to expect from each VM type. In any case, I'll share my own conclusions, some reasonably objective, others perhaps more subjective.

What's new this year

Let’s begin with what’s new or surprising compared to last year:

  • Hetzner is impressively cheap, especially for their shared-CPU instances. Although you don't get a dedicated CPU like those from the four major providers, each VM was still reasonably stable in performance, and the prices remain much lower than shared-CPU types from either Akamai or DigitalOcean. I still use DigitalOcean myself for the reliability, as in almost a decade of running servers I've never had a single issue occur. However, they really need to upgrade their data centers. If you're looking for the most cost-effective solution possible, I would suggest trying out Hetzner.
  • Some impressive new CPUs appeared. AMD Genoa has been available for almost a year now and was indeed a good step up from anything else available, until Emerald Rapids was released recently. This is the first time in years I've seen Intel come this close to AMD in terms of performance and performance/price, which is good news. The Intel/AMD dominance might soon be challenged by ARM designs, which are catching up in per-thread performance. Amazon's Graviton4 is impressive, and Google is preparing their own Axion (I can't yet comment on it, as it's in private preview).
  • Oracle continues to impress as it did last year, so that's no surprise. I was a bit surprised, though, that the new AmpereOne A2 VMs are not faster overall than the old A1, but they are priced lower, so they are going for value. Their free tier remains unbeatable.

General Tips

  • Upgrade to the latest CPU types when possible. Older VMs are slower and tend to be more expensive due to higher operational costs.
  • Plan your usage and consider making reservations (3-year preferred) to lower costs where applicable. Remember, on AWS you can get 1-year reservation prices without a commitment through 3rd parties.
  • Leverage spot VMs as much as possible. They are essentially the only way the cloud can compete with the cost of buying and running your own servers. Check prices periodically and across all regions that interest you to find the best deals.
  • Remember that vCPUs are not always comparable: ARM systems, and a few x86 systems like AWS's C7a and GCP's t2d, provide a full CPU core per vCPU. Most others give you a full core per 2x vCPUs.

Caveats for AMD vs Intel vs ARM

These benchmarks were mostly conducted for generic, parallelizable workloads. There are scenarios though where specific CPUs may exhibit lower than expected performance:

  • Intel Emerald Rapids seemed to be as good as or even a bit better than Genoa - especially on DKbench. However, it showed up to 20% lower performance when running on older Apache web server images (CentOS 7) in production. I suspect the issue lies with generic CPU drivers in the older OS. However, Emerald Rapids was a bit behind in other benchmarks as well, so I would still probably trust Genoa more for performance consistency if I could not benchmark the specific workload I intended to run.

  • AMD EPYC scales slightly better than Intel, as shown by its superior SMT / Hyper-Threading performance. However, these AMD CPUs are made of clusters of 8 cores, each cluster having its own cache. The effect is that when using more than 8 cores, some software (e.g. database servers) will want cores on different clusters to communicate (this happens over AMD's "Infinity Fabric"), which may affect performance.

  • ARM CPUs may be slower at specific tasks where x86 builds offer acceleration through special instructions (AVX etc.).

Recommendations per use-case

I'll further comment with my picks for various usage scenarios:

  • Budget solution: Hetzner shared-CPU (preferably the ARM or AMD-powered) seem to offer the best value by quite a margin, assuming you can't use spot instances (in which case Azure's ARM or AMD Milan, or Google's t2d might be good).
  • Best overall value (performance/price) for non-shared CPU: If you can't use spot instances (which would be the best otherwise), look at Oracle's A1 and A2. In fact, if you have a small project, Oracle's free 4x vCPU A1 might be for you (either serving your entire project for free, or as a 50% discount on an 8x vCPU instance etc). If reservations are an option, the AWS C7a and C7g, along with the GCP t2d, are great for multi-threaded workloads.
  • Best value for top-tier performance: If you can reserve for at least 1 year, that would be Google's t2d. For on-demand pricing, Oracle's Standard3 is probably your best bet, unless the slightly lower performance of Amazon's C7g is enough.
  • Maximum performance: Assuming you don't care about price, Google's Emerald Rapids c4 does technically (narrowly) beat Google's Genoa c3d and Amazon's Genoa C7a in the single-threaded DKbench. However, I listed some caveats (situations both in production and in benchmarks where Emerald Rapids falls behind), which would make me trust the Genoa solutions unless I specifically benchmarked my workloads. In addition, if you are looking at multi-threaded performance you'd need twice the c4 vCPUs to keep up with Amazon's non-SMT Genoa solution, which for me makes the AWS C7a the overall performance leader.

Summary per cloud provider

Finally, I'll make some comments per provider tested in the order I introduced them:

  • Amazon: Loved their Graviton3 last year, love their new Graviton4 even more. It's the first ARM I've seen in the wild that delivers Apple Silicon-level performance - pretty much catching up with AMD/Intel's latest. It is still not available in many types, so if you don't need the memory-optimized R8g, it should soon appear in better-suited types. I also liked their new C7a Genoa non-SMT instance, which is at the same time their fastest and a great value. Traditionally, AWS isn't as competitive in x86 pricing, but the C7a outperforms most Azure and GCP offerings. It's best to avoid on-demand pricing where possible: your best bet is using spot instances; otherwise, consider reservations for long-running instances, or services like Flexsave.
  • Google: The Milan-powered t2d is still the best deal for multi-threaded workloads. However, you now have better options if you are after faster per-thread performance, with either c3d or c4. You should be reserving instances for lower prices, but if you are going with on-demand and prefer a lower price over top performance, the n2d is still a great choice, as long as you set min_cpu_platform="AMD Milan" when provisioning it. However, the best value by far comes from spot instances, though you'll need to check what's available in your preferred regions at the time.
  • Microsoft: They have not released anything new since last year (although several new types are expected in Q4), which means they have fallen behind in performance and value compared to the other players. They are still price-competitive for spot instances, as they offer some of the biggest discounts.
  • Oracle: I definitely recommend Oracle for whoever has small projects where a 4-core ARM VM, which Oracle provides for free, covers most of the requirements. Their non-free instances are also very competitive, with their on-demand prices comparable to 1-3 year reservation prices from the "Big 3" (Oracle doesn't do reservation discounts), with the best value being the AmpereOne A2 and the Genoa E5 for ARM/x86 respectively. Last year I had trouble provisioning non-A1 instances, but that was because I had not been able to upgrade to a paid account (my email contained "oracle", which sort of "broke" their system). The free account does come with an initial spending amount, but the account itself is very limited as to what VMs it can provision. A regular paid account seems to have no issues.
  • Akamai: It is a shame that each VM of the same "type" you create can come at a significantly different level of performance (for both shared and dedicated CPU VMs). The CPU varies - it can be an AMD EPYC from 3 different generations - so upon creating a VM you have to check the CPU type (e.g. cat /proc/cpuinfo on Linux), with the last digit of the 4-digit model number showing the generation. I think the ancient Naples are gone now, but if you happen across a 1, expect it to be very slow. On the other hand, a 2 always signifies a fast Rome, while a 3 indicates a Milan which, depending on the data center or rack, can be either faster or slower than Rome. So you'd have to benchmark if you wanted to see how fast your VM is. After creating it, the performance is reasonably stable, even for the shared-CPU Linode types (so you can tell the provider is not oversubscribed). I would not actually bother with the pricier dedicated-CPU types, given how good the shared-CPU ones are.
  • DigitalOcean: Although they still provide decent value, they have fallen quite behind Akamai, as there have not been any upgrades. In fact, there is quite a bit of over-subscribing so performance drops lower than usual. I still use their Basic and Premium instances for projects due to the history of reliability I've had with them, but if I want faster servers I have to look elsewhere, which is a shame. As with the other lower cost providers, you do not know exactly what performance level a VM will have until you provision it and benchmark it.
  • Hetzner: I am very suspicious of extreme "budget" cloud providers, but I had enough recommendations from Hetzner users that I had to try them. Their reputation seems quite solid - most complaints I've read online are about accounts getting banned, usually with good reason. Their prices seemed too good to be true - I suspected they oversubscribe significantly. Well, if they do, I could not detect it: VMs performed at reasonably stable levels for hours to days at a time. Note that here too you can't be sure exactly what performance you are getting when selecting a type, with the dedicated-CPU type I tried showing the greatest variation (just over 30%). Overall, the ARM CAX* type was the best value, with the Rome-based CPX* being the most interesting x86.

Finally, remember that choosing a cloud provider involves considering network costs, fluctuating prices, regional requirements, RAM, storage, and other factors that vary between providers. This comparison will only assist with part of your decision.
