I love the aesthetics of terminals, and I’m not the only one: there is a whole subreddit dedicated to people sharing their desktops and showcasing different terminal setups. I spent the last year working on Lunatic, an innovative WebAssembly runtime. Recently we landed TCP support, and I was super excited to start building real-world applications with it. What would be a better fit than a terminal-based chat server with a retro vibe?
It took me around a week to build it with Rust + Lunatic, and you can check out the code here. If you would like to try it out, you can connect to it with:
# US
> telnet lunatic.chat
# EU
> telnet eu.lunatic.chat
While writing the server I ran into many interesting problems and would like to share here how I leveraged the power of Rust and Lunatic to overcome them.
Architecture
The reason I picked telnet is that the specification is simple enough to read through and implement in a short time. It’s a small layer on top of TCP, which, as mentioned before, we already had working. On the other hand, telnet is a very limiting protocol, and I needed to get creative while building a chat application on top of it.
The first issue I encountered was the line-based nature of terminals. You write a command, hit enter, and the terminal prints out some text. This doesn’t go well with the UI of a chat app, where messages can come in at any time. What are you supposed to do when new text arrives while the user has already partially written her own message? Overwrite the user’s input? Print the new message after the input?
One solution would be to buffer all messages until the user hits enter and then just dump all the ones that arrived in the meantime at once, but this can’t work as we would rely on the user to keep hitting enter to read new messages.
It became clear that I needed some kind of terminal user interface where incoming messages are rendered separately from the input of the user who is currently typing. This is possible with a few extensions to the telnet protocol. Once the telnet client connects, I send it the following instructions:
- Don’t echo anything that the user is typing, let me be in charge of printing in the terminal.
- Don’t buffer messages, send each keystroke to the server.
- Report size changes of the terminal.
This allows me to construct the UI on the server and just send a sequence of terminal escape characters back to bring the user’s terminal up to date. On each keystroke or message received the UI is updated.
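For illustration, here is roughly what that negotiation looks like on the wire. The option numbers come straight from the telnet RFCs, but the helper function and the exact set of options sent by the chat server are my own assumptions, so treat this as a sketch rather than the actual implementation:

use std::io::Write;

// Telnet command and option bytes (RFC 854, 857, 858 and 1073).
const IAC: u8 = 255; // "interpret as command"
const WILL: u8 = 251;
const DO: u8 = 253;
const ECHO: u8 = 1;  // RFC 857
const SGA: u8 = 3;   // suppress go-ahead, RFC 858
const NAWS: u8 = 31; // negotiate about window size, RFC 1073

// Hypothetical helper: sent once, right after the client connects.
fn negotiate_telnet_options(stream: &mut impl Write) -> std::io::Result<()> {
    // "I will echo" -> the client stops echoing, the server draws everything.
    stream.write_all(&[IAC, WILL, ECHO])?;
    // "I will suppress go-ahead" -> puts the client into character mode,
    // so every keystroke is sent immediately instead of whole lines.
    stream.write_all(&[IAC, WILL, SGA])?;
    // "Please do NAWS" -> the client reports its window size and any resizes.
    stream.write_all(&[IAC, DO, NAWS])?;
    stream.flush()
}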
Massive concurrency
For this to work, we need to keep the telnet connection permanently open and periodically send data through it. This is a perfect use case for Lunatic’s Processes, which are designed for massive concurrency. Each client’s connection is handled in a separate Process.
Not to be confused with operating system processes, Lunatic’s Processes are lightweight, comparable to green threads (but isolated) or goroutines in other runtimes. They are fast to create, have a small memory footprint and a low scheduling overhead. All Processes are preemptively scheduled, so none of them can spend too much time running without yielding and giving others a fair share of the resources. This keeps all connections responsive in an environment where most of the time is spent waiting on I/O.
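To make this concrete, a per-connection accept loop might look roughly like the sketch below. The spawn API has changed between Lunatic versions, so TcpListener::bind, accept and Process::spawn_with here are assumptions modeled on the description above, not the project’s actual code:

use lunatic::{net::TcpListener, Process};

fn main() {
    let listener = TcpListener::bind("0.0.0.0:23").unwrap();
    // Every accepted connection gets its own lightweight process.
    while let Ok(tcp_stream) = listener.accept() {
        // If `handle` crashes, only this one client's connection is lost.
        Process::spawn_with(tcp_stream, handle).detach();
    }
}

// Runs inside its own process and owns the client's telnet stream.
fn handle(_tcp_stream: lunatic::net::TcpStream) {
    // Negotiate telnet options, then drive the UI for this client...
}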
Interop with existing libraries
Luckily I could make use of existing Rust libraries and didn’t need to reinvent the wheel. I used:
They all compiled to WebAssembly without issues. I just needed to provide a telnet backend for TUI, but I could reuse most of the code from the termion crate (which sadly has no Windows support for now).
TUI works in a somewhat similar way to React.js: you update your state and just call a render method. It re-renders the UI and sends back to the client only the minimal set of changes, in the form of terminal escape characters.
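The pattern looks roughly like this. The real project plugs its own telnet backend into tui; the TestBackend, widgets and layout below are just a self-contained stand-in I chose to illustrate the update-state-then-draw flow:

use tui::backend::TestBackend;
use tui::layout::{Constraint, Direction, Layout};
use tui::widgets::{Block, Borders, Paragraph};
use tui::Terminal;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Stand-in backend; the chat server uses a custom telnet backend instead.
    let backend = TestBackend::new(80, 24);
    let mut terminal = Terminal::new(backend)?;

    // Application state: the message history and the current input box.
    let messages = vec!["User_1: hi".to_string(), "User_2: hello".to_string()];
    let input = String::from("typing...");

    // Update the state, then call draw. tui diffs against the previous frame
    // and emits only the minimal escape sequences needed to update the screen.
    terminal.draw(|f| {
        let chunks = Layout::default()
            .direction(Direction::Vertical)
            .constraints([Constraint::Min(1), Constraint::Length(3)].as_ref())
            .split(f.size());
        let history = Paragraph::new(messages.join("\n"))
            .block(Block::default().borders(Borders::ALL).title("#channel"));
        let input_box = Paragraph::new(input.as_str())
            .block(Block::default().borders(Borders::ALL).title("input"));
        f.render_widget(history, chunks[0]);
        f.render_widget(input_box, chunks[1]);
    })?;
    Ok(())
}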
State Management
A big part of programming is state management. Your application getting into a state that you couldn’t predict while writing the code is a big source of bugs, and Lunatic tries to simplify this by allowing you to isolate the state into separate processes.
From the perspective of a process, it owns its whole memory and can’t influence the memory of other processes in any way, not even by unsafe pointer dereferencing. This is a result of building them on top of WebAssembly instances. The only way processes can talk to each other is through message passing.
This greatly simplifies reasoning about state changes. You only need to think about what state you are in and how the next message will influence the state change. It makes it a lot easier to debug once you find yourself in an undesirable state. Let’s look at a concrete code sample from the implementation:
// The server is the main coordinator between all the clients.
// It keeps track of connected clients and active channels.
// It's also in charge of assigning unique usernames to new clients.
pub fn server_process(state_receiver: Receiver<ServerMessage>) {
    let mut state = ServerState {
        clients: 0,
        channels: HashMap::new(),
    };
    let mut username_generator: i64 = 0;
    let mut all_usernames = HashSet::new();
    loop {
        match state_receiver.receive().unwrap() {
            ServerMessage::Joined(client) => {
                // Increase the number of active users
                state.clients += 1;
                // Generate a new username
                username_generator += 1;
                let username = format!("User_{}", username_generator);
                all_usernames.insert(username.clone());
                // Client specific state
                let server_info = ServerInfo {
                    clients: state.clients,
                    username,
                };
                let _ = client.send(server_info);
            }
            ServerMessage::List(client) => {
                // ...
            }
            ServerMessage::ChangeName(from, to, client) => {
                if all_usernames.contains(&to) {
                    // Notify client that the name is taken
                    let _ = client.send(false);
                } else {
                    all_usernames.remove(&from);
                    all_usernames.insert(to);
                    let _ = client.send(true);
                }
            }
            // ...
        }
    }
}
We can see that the process has a few local variables to keep track of its state:
- How many clients are connected.
- Which channels are available.
- Which username should be assigned to the next user who joins.
- Which usernames are taken.
Afterwards, the Process just runs in a loop waiting for messages. When a new client connects, the server receives a ServerMessage::Joined message. It then updates the total count of users, assigns a new username to the client and sends back a message notifying the client about the assigned username.
The client’s process is structured similarly: as its state it keeps the current input box and all received messages for each channel. The client’s process can receive two types of messages:
- Keystrokes coming from the telnet connection.
- New chat messages coming from all subscribed channels.
On each keystroke we update the current channel’s input box, and each new chat message gets appended to that channel’s message history.
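As an illustration of that shape, a single step of the client loop could look like the sketch below; the ClientMessage and ClientState types here are hypothetical names I made up, not the ones used in the repository:

use std::collections::HashMap;

// Hypothetical message type: either a raw byte from the telnet connection
// or a chat line arriving from one of the subscribed channels.
enum ClientMessage {
    Keystroke(u8),
    Chat { channel: String, line: String },
}

#[derive(Default)]
struct ClientState {
    current_channel: String,
    input: HashMap<String, String>,        // input box per channel
    history: HashMap<String, Vec<String>>, // received messages per channel
}

// One iteration of the client process loop: apply the message to the state;
// afterwards the UI would be re-rendered (rendering omitted here).
fn handle_message(state: &mut ClientState, msg: ClientMessage) {
    match msg {
        ClientMessage::Keystroke(byte) => {
            state
                .input
                .entry(state.current_channel.clone())
                .or_default()
                .push(byte as char);
        }
        ClientMessage::Chat { channel, line } => {
            state.history.entry(channel).or_default().push(line);
        }
    }
}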
With such an architecture, if we run into a bug, let’s say the number of connected users shown is wrong, there is only one source of truth and we know exactly where this information came from. We just need to figure out how we got into this state.
Other benefits
There are some not so obvious additional benefits that we get from Lunatic.
If a client’s process receives some malicious data from the telnet connection and crashes, it only terminates that one connection; it can’t access the state of any other Process. In my first implementation I often used .unwrap() in the code, following Erlang’s “let it crash” philosophy: if I see any crashes in the logs I can always investigate later why they happened, but the application keeps running.
The message sending implementation uses smol’s channels underneath, but you may be surprised not to see any async or .await keywords in the code. The reason for this is that Lunatic abstracts away the asynchronous code: you write seemingly blocking code, but it never actually blocks the underlying thread and takes full advantage of async Rust. This is a whole topic on its own, so I will leave it for another blog post.
Lunatic works with any code that can be compiled to WebAssembly, and as I have shown earlier, a lot of libraries just work out of the box. You can also link C code into your Rust application while compiling to WebAssembly. One big pain point when using C from Erlang is that you need to be extremely careful in your code: if something crashes, it takes the whole VM down, and if you spend too much time in the C part, it blocks the scheduler from using the thread and endangers the responsiveness of your system. Lunatic solves both of these problems. The reduction counter is inserted before the WebAssembly code is JIT-compiled to machine code, so it also ends up in the “native” C code, allowing the scheduler to preempt it. And a crash stays isolated thanks to WebAssembly’s sandboxing properties.
Conclusion
In the end, the chat server will remain a nice toy application, and you should not use it for more serious use cases, as telnet doesn’t encrypt any of the data sent to the server.
However, it’s a really good feeling to get something like this running on a runtime you have built yourself. While developing the chat application I found a few bugs in the runtime itself, so it was totally worth creating this app. I’m really looking forward to gradually moving away from building Lunatic and towards building amazing applications with it. I was also positively surprised by how well the chat app works, it being the first real-world app built on Lunatic.
I think that we are finally at the point where WebAssembly is mature enough to be used in serious applications, and I strongly believe that WebAssembly on the backend is going to play an important role in the future.