Say we are a Node.js client, and we've made a request to some server.
- What happens as we're waiting for that response?
- How does the event loop know when to put the associated callback on the event queue?
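To make this concrete, here's a minimal sketch of such a request made directly against libuv's C API (the destination address is a placeholder and error handling is omitted):

```c
#include <stdio.h>
#include <uv.h>

/* The "associated callback": libuv invokes this once the TCP
   connection attempt completes (or fails). */
void on_connect(uv_connect_t* req, int status) {
  printf("connect finished with status %d\n", status);
}

int main(void) {
  uv_loop_t* loop = uv_default_loop();

  uv_tcp_t socket;
  uv_tcp_init(loop, &socket);

  struct sockaddr_in dest;
  uv_ip4_addr("203.0.113.1", 80, &dest);  /* placeholder server address */

  uv_connect_t req;
  uv_tcp_connect(&req, &socket, (const struct sockaddr*) &dest, on_connect);

  /* Drives the event loop until no work remains. */
  return uv_run(loop, UV_RUN_DEFAULT);
}
```

While uv_run is waiting for that connection, we're in exactly the situation the next section dissects.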
Demultiplexing and the event loop
Node.js's event loop is implemented in a library called libuv, which is also used by Julia and by Python's uvloop. We're going to dive into its internals.
The following is 1 iteration of the event loop:
```c
uv__update_time(loop);           // refresh the loop's cached clock
uv__run_timers(loop);            // run callbacks for expired timers
uv__run_pending(loop);           // run I/O callbacks deferred from the last iteration
uv__run_idle(loop);              // run idle handles
uv__run_prepare(loop);           // run prepare handles (just before polling)
// our method of interest
+------------------------------+
| uv__io_poll(loop, timeout); |
+------------------------------+
uv__run_check(loop);             // run check handles (just after polling)
uv__run_closing_handles(loop);   // run close callbacks
```
The method we care about, uv__io_poll, basically does the following:
Say the event loop is watching n open sockets 👀, because we have n unresolved requests. It tracks them by maintaining a watcher queue, which is just a list of n watchers; each watcher is basically a socket plus some metadata.
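What does a watcher look like? Here's a simplified sketch of libuv's uv__io_t (the real struct, in include/uv/unix.h, also carries internal queue pointers):

```c
/* Simplified sketch of libuv's uv__io_t watcher; field names follow
   the real struct, but the internal queue pointers are omitted. */
struct watcher;
typedef void (*watcher_cb)(struct watcher* w, unsigned int events);

struct watcher {
  watcher_cb cb;         /* callback to invoke when the fd is ready  */
  unsigned int pevents;  /* event mask we asked the kernel to watch  */
  unsigned int events;   /* event mask currently being watched       */
  int fd;                /* file descriptor identifying this watcher */
};
```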
Then, the polling mechanism receives an event. At notification time, it doesn't yet know which open socket the event corresponds to.
All of our watchers (in the watcher queue) are identified by a file descriptor. This is just an integer that acts as an ID for an open I/O resource, and it's a standard concept in operating systems.
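For instance, on a POSIX system, opening any I/O resource hands back one of these integers:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void) {
  /* A socket and a regular file are both identified by small integers. */
  int sock_fd = socket(AF_INET, SOCK_STREAM, 0);
  int file_fd = open("/etc/hosts", O_RDONLY);
  printf("socket fd: %d, file fd: %d\n", sock_fd, file_fd);
  return 0;
}
```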
The event we received contains an id field (named ident), which is a file descriptor. Once we have the file descriptor, we can look up the watcher. This step is what gives the process the name demultiplexing.
Finally, once we have the watcher, we can get the callback to put on the event queue.
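Putting those steps together, the demultiplexing inside uv__io_poll looks roughly like this sketch (the struct names are illustrative stand-ins; the real code lives in libuv's platform backends, e.g. src/unix/kqueue.c):

```c
/* Illustrative stand-ins: `kernel_event` mimics the record the polling
   syscall returns; `watcher` is a further-trimmed version of the struct
   sketched earlier. */
struct watcher;
typedef void (*watcher_cb)(struct watcher* w, unsigned int events);
struct watcher { watcher_cb cb; int fd; };
struct kernel_event { int ident; unsigned int events; };

/* For each kernel event, map the fd back to its watcher. `watchers`
   is an fd-indexed table, like libuv's loop->watchers. */
void demultiplex(struct watcher** watchers, struct kernel_event* evs, int n) {
  for (int i = 0; i < n; i++) {
    int fd = evs[i].ident;             /* the fd identifies the source */
    struct watcher* w = watchers[fd];  /* demultiplex: fd -> watcher   */
    if (w != NULL)
      w->cb(w, evs[i].events);         /* callback heads for the event queue */
  }
}
```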
The polling mechanism?
In the above description, we glossed over something that seems kind of magical: what is the polling mechanism, and how does the event loop receive an event?
The short answer is that it uses a system call to be notified of such events. Which one depends on the operating system: libuv uses epoll on Linux, kqueue on macOS and the BSDs, and I/O completion ports on Windows.
Let's take a look at kqueue, but first let's review what happens when our computer receives a packet.
When the kernel gets a packet from the network interface, it decodes the packet and figures out what TCP connection the packet is associated with based on the source IP, source port, destination IP, and destination port. This information is used to look up the struct sock in memory associated with that connection. Assuming the packet is in sequence, the data payload is then copied into the socket’s receive buffer. [3]
How kqueue receives a notification:
```
                  +-------------------+       +--------+       +--------+
 receives packet  |                   |       |        |       |        |
 ---------------->| Network Interface |------>| Socket |------>| kqueue |
                  |                   |       |        |       |        |
                  +-------------------+       +--------+       +--------+
```
After this occurs, the socket (our event-generating entity of interest) traverses the kqueue's list of registered events (called knotes) and finds the one it belongs to. A filter function decides whether the event merits reporting. [2] kqueue would then report it to the user program.
Here are some of the events an application might register with kqueue.
| Event name | Operation tracked |
|---|---|
| EVFILT_READ | Descriptor has data to read |
| EVFILT_AIO | Asynchronous I/O associated with descriptor has completed |
| EVFILT_TIMER | An event-based timer has expired |
kqueue is actually pretty simple. It's just a FreeBSD system call that notifies a user program of kernel events.
In our case, libuv is the user program.
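To make that concrete, here's a minimal sketch of using kqueue directly: register interest in a socket becoming readable, then block until the kernel reports it (error handling omitted; sock_fd is assumed to be an already-open socket):

```c
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

/* Sketch only: assumes `sock_fd` is an already-open socket. */
void wait_for_readable(int sock_fd) {
  int kq = kqueue();  /* create the kernel event queue */

  /* Register interest: "notify me when sock_fd has data to read". */
  struct kevent change;
  EV_SET(&change, sock_fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
  kevent(kq, &change, 1, NULL, 0, NULL);

  /* Block until the kernel reports an event. */
  struct kevent event;
  kevent(kq, NULL, 0, &event, 1, NULL);

  /* event.ident now holds the file descriptor: the same field libuv
     reads to demultiplex back to the right watcher. */
}
```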
Conclusion
This has certainly helped me understand the core of what libuv is. It provides Node with its event loop; it uses callback-style APIs; and, most importantly, it abstracts away the complexity of interfacing with system calls.
It's "polling" mechanism is not inherently that complex, because the system calls it uses are event-driven. It just has to keep a data structure of the callbacks registered for each event.