I was inspired by Ishaan Sheikh's post on counting to 1 billion: https://dev.to/sheikh_ishaan/count-to-1-billion-20de.
I wanted to see whether there was a sizable performance difference when counting to 1 billion in JavaScript with Node versus Bun.
I ran this program and timed it with the macOS time utility:
// Empty loop body: the only work is incrementing the counter to 1 billion.
for (let i = 0; i < 1_000_000_000; i++) {
}
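For reference, each run was invoked through time (the script is saved as counter.js, matching the output below):

time node counter.js
time bun counter.js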
I ran it with Node and got:
node counter.js 0.46s user 0.03s system 84% cpu 0.578 total
Then, I ran it with Bun and got:
bun counter.js 0.90s user 0.01s system 99% cpu 0.912 total
I was surprised that, for something as simple as this, Bun took roughly twice as long as Node, especially since Bun claims to be faster than Node because it uses JavaScriptCore instead of V8.
To verify this, I reran the test with 10,000,000,000 iterations. Here are the results:
node counter.js 7.87s user 0.06s system 97% cpu 8.118 total
bun counter.js 8.79s user 0.02s system 99% cpu 8.820 total
It seems there may be some Bun startup/initialization cost that makes these runs slower than Node's. I'm curious to know where this overhead comes from.
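One way to check the startup-cost theory would be to time the loop from inside the process, so the runtime's startup is excluded from the measurement. Here is a minimal sketch, assuming a hypothetical file named startup-check.js; performance.now() is available as a global in both recent Node and Bun:

// startup-check.js (hypothetical) — measures only the loop itself.
// Comparing this number against the `time` totals above approximates
// each runtime's fixed startup/initialization overhead.
const start = performance.now();
for (let i = 0; i < 1_000_000_000; i++) {
}
const elapsed = performance.now() - start;
console.log(`loop time: ${(elapsed / 1000).toFixed(3)}s`);

Running this under both runtimes and subtracting the in-process loop time from the wall-clock total reported by time should isolate how much of the gap is startup rather than the counting itself.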
Thanks to writethrough.io for helping me write this article.