It has come to my attention that someone is wrong on the Internet. So here’s yet another page about the event loop in Node, and this time it is actually correct.
Why would you read this?
Here I’m talking about the low-level details of Node.JS: what it uses for async IO and how the different parts of Node.JS (V8 and others) are glued together.
Read on if you want to understand Node.JS and the problem it solves a bit better.
There are a few C and C++ examples that help build an understanding of what Node.JS is doing behind the scenes.
Also, a warning:
Do not use this to reason about your code in Chrome. Chrome’s event loop is a separate implementation and might work in a totally different way. Chrome uses libevent for its event loop.
Those are not the droids you’re looking for.
Here are some breaking news for you:
- there’s no event loop in Node.JS.
- there’s no event queue in Node.JS.
- and there’s no micro-tasks queue.
- there’s no thread in which your JavaScript runs continuously. I don’t know how widespread this one is, but some people think that JS is just running all the time in a single thread. It is not. It is on and off, on and off, on and off. We enter V8, we exit V8, we enter V8, we exit V8.
Let’s get the simple stuff out of the way first.
There’s no micro-tasks queue in Node.JS
Here’s the C++ implementation of runMicrotasks():
static void RunMicrotasks(const FunctionCallbackInfo<Value>& args) {
  Environment* env = Environment::GetCurrent(args);
  env->context()->GetMicrotaskQueue()->PerformCheckpoint(env->isolate());
}
This: env->context()->GetMicrotaskQueue() is a call into the V8 API, which is what actually contains the micro-tasks queue. And no, process.nextTick() does not go in there. process.nextTick() goes into this one:
const FixedQueue = require('internal/fixed_queue'); // task_queues.js:36
const queue = new FixedQueue(); // task_queues.js:55
and it is run by runNextTicks(), which so happens to be plugged in, pretty much manually, into every place where your JS runs. We’ll go into a bit of detail about that later.
TL/DR: Micro-tasks are a V8 thing, managed and implemented by V8; Node.JS’s C++ code just tells V8 to run them (PerformCheckpoint()) and they have nothing to do with process.nextTick().
There’s no event loop in Node.JS.
But there used to be one 😉.
Another bit of news: Node.JS is a low-level programming platform that provides you with access to the low-level Linux, *BSD (including MacOS) and Windows primitives for asynchronous IO. It wraps those primitives with something called libuv, a library that provides a simplified API for those low-level OS primitives and allows you to write your callbacks in JavaScript. Besides the automatic memory management of JS, the whole thing is waaaaaay more low level (and honest) than Go’s “goroutines”, any kind of green thread implementation, or the async/await of C# and Kotlin.
libuv
libuv is the event loop. This is the actual library that implements the event loop. It splits the loop into stages, lets you create handles for each stage, and lets you use those handles to schedule callbacks for those stages. It ref-counts those handles and keeps running as long as there are handles left.
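To see that ref counting in action, here’s a minimal sketch (assuming you have the libuv headers installed and link against it): with no handles, uv_run() returns immediately; a single timer handle keeps the loop alive until the timer fires.
#include <stdio.h>
#include <uv.h>
void on_timer(uv_timer_t *handle) {
  printf("timer fired; no active handles left, so the loop exits\n");
}
int main(void) {
  uv_loop_t *loop = uv_default_loop();
  uv_run(loop, UV_RUN_DEFAULT); /* returns immediately: nothing to wait for */
  uv_timer_t timer;
  uv_timer_init(loop, &timer);
  uv_timer_start(&timer, on_timer, 1000, 0); /* one-shot, fires after 1s */
  uv_run(loop, UV_RUN_DEFAULT); /* blocks for ~1s: the timer handle keeps the loop alive */
  return 0;
}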
For demonstration purposes I have written a very primitive web server and that’s what we’re going to use to learn.
libuv allows you to create as many event loops as you want, with the only caveat: one event loop per thread, please and thank you. You don’t have to create your own though, there’s a default:
loop = uv_default_loop();
Now, if you leave it like that it will exit, just like Node exits when there’s nothing left to poll. In fact, Node exits exactly because libuv exits. Or even better: it is not Node that exits, it is libuv. So we need to give it something for the poll stage:
uv_tcp_init(loop, &server);
Here, I’ve got a handle, server, which libuv will register for the poll stage. Still, nothing to poll yet, gotta set it up:
uv_ip4_addr("0.0.0.0", PORT, &addr);
uv_tcp_bind(&server, (const struct sockaddr *)&addr, 0);
int err = uv_listen(AS_STREAM(&server), 128, on_new_connection);
if (err) {
  fprintf(stderr, "Listen error: %s\n", uv_strerror(err));
  return 1;
}
and after that we can run the loop:
uv_run(loop, UV_RUN_DEFAULT);
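Putting the pieces together, the skeleton of this toy server’s main() looks roughly like this (error handling trimmed; PORT, AS_STREAM and on_new_connection come from the snippets in this post):
#include <stdio.h>
#include <uv.h>
uv_loop_t *loop;
uv_tcp_t server;
int main(void) {
  struct sockaddr_in addr;
  loop = uv_default_loop();
  uv_tcp_init(loop, &server);
  uv_ip4_addr("0.0.0.0", PORT, &addr);
  uv_tcp_bind(&server, (const struct sockaddr *)&addr, 0);
  int err = uv_listen(AS_STREAM(&server), 128, on_new_connection);
  if (err) {
    fprintf(stderr, "Listen error: %s\n", uv_strerror(err));
    return 1;
  }
  /* blocks here until there are no active handles left */
  return uv_run(loop, UV_RUN_DEFAULT);
}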
In fact, when Node.JS initialises, it does exactly the same kind of thing. It’s just that it sets up V8 first, then bootstraps and loads your code. Your code runs and (hopefully) calls some of the APIs that do the same kind of handle-and-callback registration for libuv’s poll stage; you just use Node’s JS API to do it. Only after your file has finished running will Node actually start the event loop.
Now, back to our server. I have registered a callback here for new connections: on_new_connection. So whenever a new connection comes in, libuv will execute this function:
void on_new_connection(uv_stream_t *server, int status) {
  if (status < 0) {
    fprintf(stderr, "New connection error: %s\n", uv_strerror(status));
    return;
  }
  uv_tcp_t *client = malloc(sizeof(uv_tcp_t));
  uv_tcp_init(loop, client);
  if (uv_accept(server, AS_STREAM(client)) == 0) {
    uv_read_start(AS_STREAM(client), alloc_buffer, on_receive);
  } else {
    uv_close(AS_HANDLE(client), on_close);
  }
}
Don’t read too much into it; what we do here is:
- accept the connection: uv_accept(server, AS_STREAM(client))
- start reading: uv_read_start(AS_STREAM(client), alloc_buffer, on_receive)
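The alloc_buffer callback in that uv_read_start() call just hands libuv a buffer to read into whenever data arrives. A minimal sketch, assuming a plain malloc per read is good enough here:
void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf) {
  (void)handle; /* we don't care which handle is asking */
  buf->base = malloc(suggested_size);
  buf->len = suggested_size;
}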
Now, the “start reading” part is interesting because that’s where we register another callback (actually two, counting the allocator above): on_receive:
void on_receive(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf) {
  if (nread > 0) {
    serve_t *serve = malloc(sizeof(serve_t));
    serve->req = http_inflight_parse(http_inflight_new(buf));
    serve->client = client;
    serve->buf = uv_buf_init(serve->reserved, sizeof(serve->reserved));
    fs_req_t *fs = malloc(sizeof(fs_req_t));
    fs->serve = serve;
    uv_fs_open(loop, AS_FS(fs), serve->req->path, O_RDONLY, 0, on_open);
  } else if (nread < 0) {
    if (nread != UV_EOF) {
      fprintf(stderr, "Read error %s\n", uv_err_name(nread));
    }
    /* the handle must stay alive until on_close runs; it gets freed there */
    uv_close(AS_HANDLE(client), on_close);
  }
}
If we could read something, then we do the following:
- parse the request (:fingers_crossed: it all came in one TCP buffer; as I said, a very primitive HTTP server).
- figure out which file the client wants and open this file: uv_fs_open(loop, AS_FS(fs), serve->req->path, O_RDONLY, 0, on_open);
Again, we register another callback here: on_open. Whenever libuv opens the file, it’ll run this on_open callback. I won’t print it here, it’s a bit too long, but what it does is the following:
- check that the open result is “ok”, not an error.
- look into the request to determine the type of the file requested and pick appropriate HTTP headers.
- send the headers.
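That on_open isn’t printed in this post, but based on the description above, a hedged sketch might look like the following. Note the assumptions: pick_headers is a hypothetical helper that writes headers into the reserved buffer and returns their length, and the serve_t/fs_req_t fields and macros are inferred from the other snippets here.
void on_open(uv_fs_t *res) {
  fs_req_t *fs = FS_REQ(res);
  serve_t *serve = fs->serve;
  uv_fs_req_cleanup(res);
  if (res->result < 0) { /* open failed */
    fprintf(stderr, "Open error: %s\n", uv_strerror((int)res->result));
    uv_close(AS_HANDLE(serve->client), on_close);
    free(res);
    return;
  }
  serve->file = res->result; /* the opened file descriptor */
  /* hypothetical helper: picks headers based on the requested file type */
  serve->buf.len = pick_headers(serve->req, serve->reserved, sizeof(serve->reserved));
  write_req_t *write = malloc(sizeof(write_req_t));
  write->serve = serve;
  write->done = false;
  uv_write(AS_WRITE(write), serve->client, &serve->buf, 1, on_send);
  free(res);
}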
That’s right, we don’t start reading the file here, we only send the headers:
uv_write(AS_WRITE(write), serve->client, &serve->buf, 1, on_send);
We politely ask libuv: “please, dear, write this stuff into the socket for us, and once you’re done, tell this guy about it: on_send”. on_send will be called once the buffer is completely written into the socket:
void on_send(uv_write_t *res, int status) {
  write_req_t *write = WRITE_REQ(res);
  serve_t *serve = write->serve;
  if (status) {
    fprintf(stderr, "Write error %s\n", uv_strerror(status));
  }
  if (write->done) {
    uv_close(AS_HANDLE(serve->client), on_close);
    free_serve(serve);
  } else {
    serve->buf.len = sizeof(serve->reserved);
    fs_req_t *read = malloc(sizeof(fs_req_t));
    read->serve = write->serve;
    uv_fs_read(loop, AS_FS(read), serve->file, &serve->buf, 1, -1, on_read);
  }
  free(write);
}
Again, a bit of validation, and then the important part: we ask libuv to start reading the actual file that the client requested:
uv_fs_read(loop, AS_FS(read), serve->file, &serve->buf, 1, -1, on_read);
Aaaaand, you guessed it, yet another callback: on_read. This callback is executed by libuv whenever it has read enough data from the file to fill the buffer, or whenever it receives EOF or any other error.
on_read then validates the state and again asks libuv to send the data down the socket with the same on_send callback:
void on_read(uv_fs_t *res) {
  fs_req_t *fs = FS_REQ(res);
  serve_t *serve = fs->serve;
  uv_fs_req_cleanup(res);
  bool done = false;
  if (res->result < 0) {
    fprintf(stderr, "Read error: %s\n", uv_strerror((int)res->result));
    serve->buf.len = 0;
    done = true;
  } else if (res->result == 0) {
    serve->buf.len = 0;
    done = true;
    uv_fs_close(loop, res, serve->file, NULL); // synchronous
  } else if (res->result > 0) {
    serve->buf.len = res->result;
  }
  write_req_t *write = malloc(sizeof(write_req_t));
  write->serve = serve;
  write->done = done;
  uv_write(AS_WRITE(write), serve->client, &serve->buf, 1, on_send);
  uv_fs_req_cleanup(res);
  free(res);
}
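The last callback, on_close, never gets printed either; under the assumption that it only needs to release the handle’s memory, it can be as small as:
void on_close(uv_handle_t *handle) {
  /* safe to free here: libuv is done with the handle by the time on_close runs */
  free(handle);
}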
And so it goes, jumping between those callbacks back and forth.
Stages
libuv runs in stages: timers, pending callbacks, idle, prepare, poll, check, and close callbacks, in that order, over and over.
You might’ve seen the loop-iteration picture in Node’s docs, but it’s really from the libuv docs.
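In code terms, every iteration of uv_run() walks through those stages roughly like this (a heavily simplified outline based on libuv’s core loop, not the literal source):
/* one iteration of uv_run(), heavily simplified */
uv__update_time(loop);         /* refresh the cached clock */
uv__run_timers(loop);          /* timers stage: run due uv_timer_t callbacks */
uv__run_pending(loop);         /* pending callbacks deferred from the last iteration */
uv__run_idle(loop);            /* idle stage */
uv__run_prepare(loop);         /* prepare stage */
uv__io_poll(loop, timeout);    /* poll stage: block on epoll/kqueue/IOCP */
uv__run_check(loop);           /* check stage (where Node runs setImmediate()s) */
uv__run_closing_handles(loop); /* close callbacks */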
Node
What does it have to do with Node.JS? Well, Node.JS does the same. Look at the starting sequence:
It configures the default loop:
uv_loop_configure(uv_default_loop(), UV_METRICS_IDLE_TIME); // node.cc:1173
Some time later it calls this:
MaybeLocal<Value> LoadEnvironment(Environment* env, StartExecutionCallback cb) {
  env->InitializeLibuv();
  env->InitializeDiagnostics();
  return StartExecution(env, cb);
}
The interesting part for us is env->InitializeLibuv(). It registers a handle for the timers stage:
CHECK_EQ(0, uv_timer_init(event_loop(), timer_handle()));
It creates a handle for the check stage (which runs your setImmediate()s):
CHECK_EQ(0, uv_check_init(event_loop(), immediate_check_handle()));
It also creates a handle for the idle stage, which may run immediates as well:
CHECK_EQ(0, uv_idle_init(event_loop(), immediate_idle_handle()));
Why two places to run immediates, you ask? Well, here’s a plot twist: the event loop actually blocks. In the poll stage it blocks until it is woken up by Linux/BSD/Windows to get data from one of the sockets/descriptors/handles it’s listening on. If there are no timers and no idles, it blocks indefinitely. If there are timers, it blocks until the next timer. If there’s at least one idle handle, it doesn’t block at all. So Node.JS uses the idle stage and its handle to prevent libuv from blocking and to process your setImmediate()s ASAP. Note that the check stage handle is started (actually “started”, just marked as started) and its callback is supplied right away:
CHECK_EQ(0, uv_check_start(immediate_check_handle(), CheckImmediate));
It doesn’t happen for the idle stage handle.
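To make the idle trick concrete, here’s a small standalone sketch (the names are mine, not Node’s): with only a check handle the loop would sleep in poll, but an active idle handle forces a zero poll timeout, so the check callback runs on every iteration.
#include <stdio.h>
#include <uv.h>
static int iterations = 0;
void on_idle(uv_idle_t *handle) {
  /* an active idle handle makes the poll stage use a zero timeout */
  if (iterations >= 5) uv_idle_stop(handle);
}
void on_check(uv_check_t *handle) {
  printf("check stage ran, iteration %d\n", ++iterations);
  if (iterations >= 5) uv_check_stop(handle); /* stop so the demo can exit */
}
int main(void) {
  uv_loop_t *loop = uv_default_loop();
  uv_idle_t idle;
  uv_check_t check;
  uv_idle_init(loop, &idle);
  uv_idle_start(&idle, on_idle);
  uv_check_init(loop, &check);
  uv_check_start(&check, on_check);
  uv_run(loop, UV_RUN_DEFAULT); /* exits once both handles are stopped */
  return 0;
}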
Node then does some more initialisation, reads the bootstrapping node.js scripts and your entry file, runs all of that, and then finally:
*exit_code = SpinEventLoop(env).FromMaybe(1); //node_main_instance.cc:140
where it does:
uv_run(env->event_loop(), UV_RUN_DEFAULT); //embed_helpers.cc:36
As you can see:
- there’s no event loop in Node, it’s from libuv.
- there’s no constantly running JS; it is libuv that is running. JS code only runs at startup and then in the event handlers called from libuv.
- Node.JS is just a fancy JS wrapper around libuv, so you don’t have to chase your lost mallocs all over the place and so you don’t get too many SEGFAULTs in prod.
Potential questions (at least the ones I asked myself).
How does process.nextTick() work?
The answer is simple: Node’s C++ side provides an API that is accessible from JS:
void SetupTimers(const FunctionCallbackInfo<Value>& args) {
  CHECK(args[0]->IsFunction());
  CHECK(args[1]->IsFunction());
  auto env = Environment::GetCurrent(args);
  env->set_immediate_callback_function(args[0].As<Function>());
  env->set_timers_callback_function(args[1].As<Function>());
}
Notice that the callbacks passed to that C++ function are JavaScript functions from V8. And notice that it sets a callback for the immediate handle (which we know runs on check and sometimes on idle), and it also sets a timers callback. So basically it adds callbacks into every stage of the event loop where JS code runs. Why is that important? Because the node.js bootstrapping script registers a couple of callbacks for those:
// Sets two per-Environment callbacks that will be run from libuv:
// - processImmediate will be run in the callback of the per-Environment
// check handle.
// - processTimers will be run in the callback of the per-Environment timer.
setupTimers(processImmediate, processTimers);
Those two come from timers.js and they execute runNextTicks(), which in turn does two things:
- runs micro-tasks from V8
- runs process.nextTick() callbacks
From what I’m seeing, JS runs twice per stage: the first time in the native callback that processes IO (and runs your code), and then another libuv callback, registered from the node.js bootstrapper, enters V8 again to run micro-tasks and process.nextTick() callbacks. I might be wrong here, I didn’t dig too deep.
What is libuv locking on during the poll stage?
Various operating systems provide various facilities for asynchronous IO.
- in Linux it is epoll. In a nutshell, it allows you to register over 9000 file descriptors, hand them to Linux and tell it “please wake me up whenever anyone has got anything for me”. The API allows you to lock until anything comes, lock for a limited amount of time, or simply check without locking.
- in *BSD (and MacOS, because it is BSD, duh) it is kqueue. Much better than epoll.
- and then there’s Windows, the pinnacle of async APIs: IO Completion Ports. This is not sarcasm. Fun fact: Solaris also used the IO Completion Ports approach.
All of those APIs (except maybe for Windows? not sure here) provide you with facilities to lock and wait until something comes, lock for a limited amount of time, or just check and not lock at all.
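On Linux those three modes map directly onto epoll_wait()’s timeout argument. A minimal sketch:
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>
int main(void) {
  int epfd = epoll_create1(0);
  struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
  epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev); /* "wake me when stdin has data" */
  struct epoll_event out;
  epoll_wait(epfd, &out, 1, -1);   /* -1: lock until anything comes */
  epoll_wait(epfd, &out, 1, 1000); /* 1000: lock for at most a second */
  epoll_wait(epfd, &out, 1, 0);    /* 0: just check, never lock */
  close(epfd);
  return 0;
}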
Because libuv is an IO library, it only makes sense that it locks and waits for new IO to happen.
What happens with file IO? I heard there’s a thread pool there as well?
Well, here’s the thing: the async File IO API in POSIX SUCKS.
So instead of dealing with its quirks and features, libuv simply does file IO synchronously but offloads it to a thread pool, which is provided by default; by default the size of this thread pool is 4. Btw, it is not only for File IO; you can throw any long-running task in there. There are C++ encryption libraries for Node.JS that offload CPU-intensive encryption onto this thread pool to avoid blocking the event loop.
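Offloading your own long-running job to that same pool is a single call, uv_queue_work(). A sketch:
#include <stdio.h>
#include <uv.h>
typedef struct { unsigned long result; } work_data_t;
void do_heavy_work(uv_work_t *req) {
  /* runs on a thread-pool thread; the event loop stays free meanwhile */
  work_data_t *data = req->data;
  data->result = 0;
  for (unsigned long i = 0; i < 100000000UL; i++) data->result += i;
}
void after_work(uv_work_t *req, int status) {
  /* back on the loop thread, like any other libuv callback */
  work_data_t *data = req->data;
  printf("done (status %d): %lu\n", status, data->result);
}
int main(void) {
  uv_work_t req;
  work_data_t data;
  req.data = &data;
  uv_queue_work(uv_default_loop(), &req, do_heavy_work, after_work);
  return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}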
How then does libuv run callbacks? They are supposed to run on the main thread, aren’t they?
What libuv does is:
- if it is Unix, set up a pipe; epoll and kqueue support pipes, and this pipe’s file descriptor is one of the descriptors to block on during the poll stage. When a file operation is completed on a worker thread, the worker writes into this pipe to wake up the event loop and call the handlers.
- if it is Windows, do the same with a named pipe. Technically IO Completion Ports allow proper async File IO, but as far as I understand libuv still does the same pipe trick, probably for the sake of consistency and simplicity.
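libuv actually exposes this wake-the-loop plumbing to you as uv_async_t: uv_async_send() can be called from any thread and wakes the poll stage so the callback runs on the loop thread. A sketch:
#include <stdio.h>
#include <uv.h>
uv_async_t async;
void on_async(uv_async_t *handle) {
  /* runs on the loop thread after another thread called uv_async_send() */
  printf("loop woken up from another thread\n");
  uv_close((uv_handle_t *)handle, NULL);
}
void worker(void *arg) {
  (void)arg;
  uv_sleep(100); /* pretend to do slow work off the loop */
  uv_async_send(&async);
}
int main(void) {
  uv_thread_t thread;
  uv_async_init(uv_default_loop(), &async, on_async);
  uv_thread_create(&thread, worker, NULL);
  uv_run(uv_default_loop(), UV_RUN_DEFAULT); /* blocks in poll until the send */
  uv_thread_join(&thread);
  return 0;
}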
Note: as far as I’m aware, support for io_uring has landed in libuv, and in theory it is capable of doing proper async file IO, at least on Linux. I don’t know if Node is using it yet.
Honesty
Recently it has become fashionable to speak about green threads, coroutines, etc. as primitives of async programming.
I loathe those. Because they are fake.
I’m a firm believer that everything should be implemented in the right way, and the only way to implement those in the right way is to have support from the operating system.
Unfortunately, neither Linux, nor MacOS, nor any kind of BSD I know provides you with a facility to interrupt your own running code. So every time someone is speaking about “green threads” or “coroutines” or anything pretending to do preemptive multitasking in user space, they are lying. What they are offering is one of two hacks:
- you write your own compiler and runtime and you sprinkle the generated code with “yield”s. This is what Go used to do. The least “bad” way, but it still produced some interesting quirks.
- you set up a timer and a signal, so the kernel will interrupt your process’s normal execution and call the function that you specified as the signal handler. This comes with several “gotchas”, and people have written about those extensively. Recently Go has switched to this mechanism, which avoids the issue with tight “for” loops from the first bullet point. Unfortunately, for this to work you have to hack your stack and rewrite the instruction pointer registers, so that when your signal handler is done running and the kernel resumes your process, it ends up in the scheduler instead of the previously running function. This is really nasty, nasty, nasty hacking. The stack and the instruction pointer belong to the kernel, not us. (See the sketch of the harmless half of this trick right after this list.)
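The harmless half of that second hack fits in a few lines; what’s deliberately left out below is exactly the nasty part, the stack and instruction pointer surgery inside the handler:
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
static volatile sig_atomic_t ticks = 0;
static void on_tick(int sig) {
  (void)sig;
  /* a real green-threads runtime would switch stacks right here */
  ticks++;
}
int main(void) {
  struct sigaction sa;
  sa.sa_handler = on_tick;
  sa.sa_flags = 0;
  sigemptyset(&sa.sa_mask);
  sigaction(SIGPROF, &sa, NULL);
  /* deliver SIGPROF every 10ms of consumed CPU time */
  struct itimerval tv = { { 0, 10000 }, { 0, 10000 } };
  setitimer(ITIMER_PROF, &tv, NULL);
  while (ticks < 100) { /* "user code": a busy loop the kernel keeps interrupting */ }
  printf("interrupted %d times\n", (int)ticks);
  return 0;
}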
There’s one operating system that allows you to do this properly: Windows.
- there’s support for “fibers”, which are user-space threads.
- there’s actually a framework to replace the kernel scheduler with your own for your process, which unfortunately is dead in Windows 11. Or maybe it’ll live on in Server versions?
So this is why I love Node so much: it is the only platform that doesn’t seem to hide behind piled-up hacks and gotchas when it comes to async IO; it just lets you use the OS primitives through a somewhat simple programming language and comfortable interfaces.
Code references
I have checked out commit 331088f4a450e29f3ea8a28a9f98ccc9f8951386 so if there are any changes in the files after that then line numbers in code references might’ve drifted.