Tuan Anh PHAM

Popular Backend Frameworks Performance Benchmark Comparison in 2024

Quick result:

Based on TechEmpower Round 22 (2023)

Popular Backend Frameworks performance comparison

Motivation:

As a tech lead, one of my key responsibilities entails the selection of optimal technologies that align with our business requirements and deliver an exceptional user experience. In pursuit of this goal, I find it imperative to conduct a thorough performance comparison among the leading frameworks commonly employed for building production-level backend web servers with SQL databases, all within a realistic environment.

Unfortunately, many of the articles available on the internet that attempt to provide such performance comparisons often exhibit bias, lack realism, and are frequently outdated, rendering them unsuitable for use as reliable benchmarks. To date, I have yet to come across an article that offers a clear and unbiased performance assessment among the most widely utilized backend frameworks.

My aim is to rectify this gap in information and provide a clear performance benchmark of the most popular backend frameworks.

Benchmark source data:

Since 2013, TechEmpower has run a backend framework benchmark. They meticulously define the benchmark specifications and maintain an open-source approach that encourages contributions from the community. The benchmark has become a respected standard in the tech industry, serving as a reliable yardstick for technology competitors to assess the performance of their solutions (for example Go Fiber, C# Asp.net, JS Just). That is why I trust the TechEmpower benchmark.

I use data from TechEmpower benchmark round 22 (released on 2023-11-15). The benchmark covers many frameworks (more than 300), which can be overwhelming when trying to compare the most popular ones.
My goal is to compare only popular, productive, realistic, near production-level frameworks that companies actually use to build real backends. To do that, I apply a few filters to the benchmark:

  • Only some popular frameworks (see list later)

  • Only the Fortunes test, which is the most realistic scenario for a backend web server. The result is the number of requests per second served against the database. Quote from the Fortunes test specification: “The Fortunes test exercises the ORM, database connectivity, dynamic-size collections, sorting, server-side templates, XSS countermeasures, and character encoding”

  • ORM (Object-Relational Mapper) is Full or Micro, not Raw, for productivity reasons:

    • Full: “ORM that provide wide functionality, possibly including a query language”
    • Micro: “less comprehensive abstraction of relation model”
    • Raw: “no ORM is used at all, the platform raw database connectivity is used”
  • Classification is Fullstack or Micro, not Raw, for productivity reasons:

    • Fullstack: “framework that provide wide features coverage including server side template, database connectivity, form processing and so on”
    • Micro: “framework that provide request routing and some plumbing”
    • Raw: “raw server, not a framework at all”
  • Database: only PostgreSQL, to normalize results across frameworks

How I choose Popular Backend Frameworks (PBF):

For programming languages, I use the TIOBE index and the PYPL index. The resulting list of languages is Java, C#, Go, Rust, JavaScript (JS), Ruby, Python, and PHP.

For popular backend web frameworks, I use data from SimilarTech and BuiltWith. I do not pick small, simplistic, fully optimized frameworks that are not popular or cannot be used for production-level, general-purpose use cases.

The final list of popular backend frameworks to benchmark is:

  • Java: Spring
  • C#: Asp.net
  • Go: Fiber
  • Rust: Actix
  • JS/Node: Express
  • Ruby: Rails
  • Python: Django
  • PHP: Laravel

Go is a special case. No Go framework satisfies both my filters “Classification = Fullstack/Micro” and “ORM = Full/Micro”. But I want Go in the benchmark, so I need to make some logical deductions to be able to compare it.
Although Go Gin is more popular than Go Fiber, I decided to opt for Go Fiber because of its significantly better performance. This choice is made in the interest of fairness.

Popular vs not popular backend frameworks

Raw benchmark data from TechEmpower round 22:

Here is the Fortunes test result (you can see my filters in the Filters panel):

TechEmpower round 22 raw data

Each framework has its Fortunes request count; from that, a relative ratio is built. The relative ratio expresses the performance of each framework relative to the worst one (about 5,852 requests per second).

I report the Fortunes request count and the relative ratio in the table below.

TechEmpower results with normalization based on the relative ratio
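
As a minimal sketch (not the author's actual spreadsheet), the normalization can be reproduced like this. The request counts are illustrative placeholders roughly back-derived from the percentages reported later in this article, not the raw round 22 numbers; only the worst value (about 5,852 req/s) comes from the text above.

```python
# Minimal sketch: turn Fortunes results (req/s) into relative ratios.
# The request counts are illustrative placeholders (roughly back-derived from
# the percentages reported later in this article), NOT the raw round 22 data;
# only the worst value (~5,852 req/s) is quoted from the text.
fortunes_rps = {
    "Rust Actix": 131_900,
    "C# Asp.net": 110_900,
    "Go Fiber": 89_700,
    "JS/Node Express": 26_100,
    "Java Spring": 18_400,
    "Python Django": 11_300,
    "Ruby Rails": 10_800,
    "PHP Laravel": 5_852,
}

worst = min(fortunes_rps.values())
relative_ratio = {name: rps / worst for name, rps in fortunes_rps.items()}

for name, ratio in sorted(relative_ratio.items(), key=lambda kv: -kv[1]):
    print(f"{name:<16} {ratio:5.2f}x the worst framework")
```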

For Go Fiber, there is no result with an ORM, so I use the Raw data (no ORM), compare it with Rust Actix Raw, and deduce an estimate (= Fiber Raw / Actix Raw * Actix ORM (Diesel)). So the Fortunes request count of Go Fiber may not be exact.

For PHP Laravel, there is no result with a PostgreSQL database, so I use the MySQL data and make a similar deduction between Laravel and Symfony (= Laravel MySQL / Symfony MySQL * Symfony PostgreSQL).
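
Both deductions above are simple cross-ratio estimates. Here is a minimal sketch of them; the input numbers are hypothetical placeholders, only the two formulas mirror the ones described in the text.

```python
# Sketch of the two estimates described above. All input numbers are
# hypothetical placeholders (req/s); only the cross-ratio formulas
# come from the article.

def estimate_fiber_with_orm(fiber_raw: float, actix_raw: float, actix_orm: float) -> float:
    """Estimate Go Fiber + ORM by scaling Fiber Raw with Actix's Raw -> ORM (Diesel) ratio."""
    return fiber_raw / actix_raw * actix_orm

def estimate_laravel_postgres(laravel_mysql: float, symfony_mysql: float, symfony_pg: float) -> float:
    """Estimate Laravel + PostgreSQL by scaling Laravel MySQL with Symfony's MySQL -> PostgreSQL ratio."""
    return laravel_mysql / symfony_mysql * symfony_pg

# Hypothetical inputs, for illustration only:
print(estimate_fiber_with_orm(fiber_raw=350_000, actix_raw=520_000, actix_orm=131_900))
print(estimate_laravel_postgres(laravel_mysql=6_100, symfony_mysql=25_000, symfony_pg=24_000))
```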

Finally, I use the relative ratios to build the Popular Backend Frameworks' performance table.

Popular Backend Frameworks' performance benchmark (PBF benchmark)

The rows and columns are ordered from best to worst performance:

Popular Backend Frameworks performance comparison

How to read the table: for example, take the Rust column; each cell gives Rust Actix's performance as a percentage of the row framework's (a small code sketch of how these cells are computed follows the two lists below). Then:

  • Rust Actix performance is 119% of C# Asp.net's
  • Rust Actix performance is 147% of Go Fiber's
  • Rust Actix performance is 505% of JS/Node Express's
  • Rust Actix performance is 715% of Java Spring's
  • Rust Actix performance is 1172% of Python Django's
  • Rust Actix performance is 1221% of Ruby Rails'
  • Rust Actix performance is 2254% of PHP Laravel's

Now take the C# Asp.net row; each cell gives the column framework's performance as a percentage of C# Asp.net's:

  • Rust Actix performance is 119% of C# Asp.net's
  • Go Fiber performance is 81% of C# Asp.net's
  • JS/Node Express performance is 23% of C# Asp.net's
  • Java Spring performance is 17% of C# Asp.net's
  • Python Django performance is 10% of C# Asp.net's
  • Ruby Rails performance is 10% of C# Asp.net's
  • PHP Laravel performance is 5% of C# Asp.net's
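
To make the reading rule concrete, here is a minimal sketch of how such a pairwise table can be built: each cell is the column framework's score divided by the row framework's score, expressed as a percentage. The scores reuse the illustrative placeholders from the earlier sketch, not the exact round 22 data.

```python
# Sketch: build a pairwise performance table from per-framework scores.
# Cell (row, column) = column score / row score, as a percentage.
# Scores are the same illustrative placeholders as above, ordered best to worst.
scores = {
    "Rust Actix": 131_900, "C# Asp.net": 110_900, "Go Fiber": 89_700,
    "JS/Node Express": 26_100, "Java Spring": 18_400,
    "Python Django": 11_300, "Ruby Rails": 10_800, "PHP Laravel": 5_852,
}

names = list(scores)
print(" " * 18 + "".join(f"{name[:12]:>14}" for name in names))
for row in names:
    cells = "".join(f"{scores[col] / scores[row] * 100:13.0f}%" for col in names)
    print(f"{row:<18}{cells}")
```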

Bonus: TechEmpower Composite Score

TechEmpower runs multiple tests: JSON serialization, Queries (single and multiple), Fortunes, Data updates, and Plaintext. Each test has a specific purpose and specification.

To get a global view across all tests, they compute a composite score with this formula (test results are normalized):
Composite Score = JSON * 1 + SingleQuery * 0.75 + MultipleQueries * 0.75 + Fortunes * 1.5 + DataUpdate * 1.25 + Plaintext * 0.75
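
As a quick worked example of that formula (a sketch, not TechEmpower's actual implementation), a composite score could be computed like this; the normalized per-test values are made-up placeholders, only the weights come from the formula above.

```python
# Sketch: composite score as a weighted sum of normalized test results.
# Weights follow the formula quoted above; the test scores are placeholders.
weights = {
    "json": 1.0, "single_query": 0.75, "multiple_queries": 0.75,
    "fortunes": 1.5, "data_update": 1.25, "plaintext": 0.75,
}

def composite_score(normalized: dict) -> float:
    """Weighted sum of normalized test results for one framework."""
    return sum(weights[test] * normalized[test] for test in weights)

# Hypothetical normalized results for one framework:
example = {"json": 80, "single_query": 70, "multiple_queries": 60,
           "fortunes": 75, "data_update": 55, "plaintext": 90}
print(composite_score(example))  # 80 + 52.5 + 45 + 112.5 + 68.75 + 67.5 = 426.25
```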

TechEmpower round 22's composite score

This composite score is consistent with my performance table: the first 3 places are exactly the same. Java Spring and JS Express swap places, and the last 4 places (JS Express, Ruby Rails, PHP Laravel, Python Django) are very close. In my analysis (Fortunes test only), JS Express's performance is 141% of Java Spring's, whereas on the composite score JS Express reaches only 37% of Java Spring's score.

As the composite score is a linear combination of the other scores, its absolute value may not express real performance; we should focus on the ranking and the relative magnitude of the composite score.

Conclusion:

This article provides a clear performance comparison among the most popular backend frameworks, based on round 22 of the TechEmpower benchmark. This is purely data; the conclusions drawn from it are open for anyone to interpret and analyze.

One significant takeaway is related to technology selection. When choosing a backend technology stack, there are various parameters to consider, including popularity, performance, security, productivity, stability, reliability, maintainability, ecosystem, product domain, features, culture, team capabilities, and more. While performance is undoubtedly a crucial factor, especially in domains like Games, Finance, Trading, IoT, and others, it's just one piece of the puzzle.

It's essential to note that this comparison primarily focuses on backend frameworks and not on programming languages. Drawing conclusions like "Rust outperforms Java" or "C# surpasses JS" may seem valid from the data, it's a false conclusion from my analysis. For a direct comparison of programming languages, there are separate benchmarks available for that purpose (like this or that).

The article also addresses the consideration of fairness in framework selection. It raises the question of why not choose extremely fast but less popular frameworks like JS Just, C++ Drogon, Java Vert.x. The rationale for selecting from the list of popular frameworks is to ensure a pragmatic choice that aligns with real-world scenarios.

Imagine you have the responsibility of choosing a framework to build a new backend today (end of 2023 to mid-2024), knowing that once the choice is made, you will live with it for years to come. Will you bet on a small, experimental framework? Or will you opt for a popular framework that is solid, well established and battle-tested, as in my list? One fun fact: when I ask ChatGPT-4 for the most popular backend frameworks, it returns the same list, without any framework from Rust or Go.

One last word: this benchmark is valid as of the end of 2023. Technology vendors keep working hard to improve their stacks, so the ranking in the next TechEmpower round may change. Wait and see.

Disclaimer

This analysis is based on data from the TechEmpower benchmark. I’m not responsible for the source data, only for my analysis.
