
Bastian Gruber


Web Development with Rust — 02/x: Deploy your first App

You can find the introduction to web programming in Rust over here. Follow me on Twitter to always get the latest information about web development in Rust. Also check out the GitHub repository for this series.


Update 08.07.2019: tide 0.2.0

This series has 3 goals:

  1. Show the obstacles new developers will face when programming in Rust, but also its advantages when it comes to the specific topic (this time: deployments).
  2. Show different options for web development in Rust.
  3. Always have an updated application in production afterwards.

Number three is super important for me. That’s why we start part 02/x with a few ideas on what you can build, and why this tutorial looks the way it does. Whenever you learn something new, keep this mental model in mind:

Never do things for their own sake

Which translates to: never learn Rust just because you want to learn Rust. This is the biggest reason why people fail to learn a new language, or basically anything in life. You have to have a goal in mind, a reason to do something.

“So how should I learn Rust then?”

  1. Have an app or idea in mind that you want to see in production. This can be a service to calculate prime numbers, a web app to track your programming goals, a service which fetches your latest likes on GitHub and tracks their activity, etc. You decide what you want to build.
  2. Hold yourself accountable to it. Tell your friends, work colleagues or your partner that you promise them that in 6 months’ time, they will be able to use this service. Every few days or weeks, you will keep them updated about your current status.

It doesn’t need to be a polished idea or a service that competes with other apps. It has to be something you would like to have. It will help you push through the hardships in the beginning, and through the phase when the honeymoon is over and you see that Rust can be hard at times.

I will make mine public as well:

At the end of the next 6 months, I will have a web service with a frontend for my meetup “Rust and Tell Berlin” up and running, so speakers can submit proposals, and slides and videos from talks held at previous events can be watched.

I will hold myself accountable through this tutorial series.


Let’s move on. In this part of the series, we want to deploy our first application. If you come from NodeJS, the deployment life cycle looks like this:

(Figure: the NodeJS deployment cycle)

With NodeJS, you can push any code to a production server. You have to have good tests, ESLint and other tools in place to catch undefined values and type errors.

In an ideal world, we have a development cycle which looks like this:

(Figure: the ideal development cycle)

So we want to break things as early, and as close to the code (your local machine), as possible. Once we have figured out a working code base, we would like to bring exactly this working solution onto a server. Because of Rust’s type system and strong compiler, we are able to pack a working binary and move it to production. Tests cover the rest of the errors.

Rust moves possible errors closer to the coding environment

a) The Rust compiler will catch a lot of problems, almost all of them.

b) You can catch the rest with good tests (in our case: error handling when receiving the wrong parameters), as sketched below.

c) Once your Rust code compiles, you have a binary which can be shipped in many different ways.
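To make point b) concrete, here is a minimal, hypothetical sketch (the function and test are not part of the app we build below): a small parsing function plus a unit test that catches the wrong-parameter case on your local machine, long before anything reaches a server.

// Hypothetical example: parse a port value and fall back to a default
// when the input is missing or invalid.
fn parse_port(raw: Option<String>) -> u16 {
    raw.and_then(|value| value.parse().ok()).unwrap_or(8186)
}

#[cfg(test)]
mod tests {
    use super::parse_port;

    #[test]
    fn falls_back_on_invalid_input() {
        assert_eq!(parse_port(Some("not-a-port".to_string())), 8186);
        assert_eq!(parse_port(Some("3000".to_string())), 3000);
        assert_eq!(parse_port(None), 8186);
    }
}

A plain cargo test run is enough to catch a regression here locally, which is exactly the “break early” behaviour we want.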

Difference between local and production-ready code

When we talk about deploying, we have to make sure that our code is able to:

  • pick up the PORT assigned by the environment it is running in
  • handle errors gracefully
  • respond to unexpected input with proper status codes and messages
  • fail early in the deployment pipeline with a proper test setup
  • log events so errors can be traced

In this article we will cover the first must-have (reading the PORT assigned by the environment). The rest of the requirements will be covered over the remaining articles of the series.

Four different deployment options

We generally have different deployment and hosting options. Some are more suited for large-scale applications, and some are better for private projects and for getting a project off the ground without too much complexity. Our options are:

  • Managed Deployments / Hosting (Heroku)
  • Self-managed via Docker and a Docker registry
  • Self-managed via Docker and a Git repository
  • Managed Serverless Lambda functions (AWS Lambda, ZEIT now)

We will cover each of these options in this article and look at their advantages and disadvantages, and at how to prepare your Rust code so it can be deployed in the best possible way.

Building the first version of your app

As we said in the beginning, we need an idea of what we want to build. Even though we will map out a bigger picture of the application in the next article (03/x), we can already get started and choose a framework to build it with. As seen in the first article, you can also go lower level if you want.

We will pick one framework for the written version of this article: tide, since I am planning to contribute to it more in the future. I will map out solutions for rocket and actix in the GitHub repository for this series.

Set up our app

We want to use asynchronous code, which is not in stable Rust yet. Therefore we need to install and set the nightly version of Rust, and create a new project with cargo:

$ rustup install nightly-2019-02-25
$ rustup default nightly
$ cargo new my-cool-web-app
$ cd my-cool-web-app

This will generate our first folder structure. The bare bones of a running web app with tide look like this:

Cargo.toml

[package]
name = "my-cool-web-app"
version = "0.1.0"
authors = ["YOUR NAME + EMAIL"]
edition = "2018"

[dependencies]
tide = "0.2.0"

main.rs

#![feature(async_await)]

fn main() {
    let mut app = tide::App::new(());
    app.at("/").get(async move |_| "Hello, world!");

    // serve() needs an address to bind to; expect() aborts with a readable
    // message if the server cannot start.
    let address = std::net::SocketAddr::from(([127, 0, 0, 1], 8000));
    app.serve(address).expect("Failed to start the server");
}

As we said earlier, we need to give the hosting environment the chance to assign a PORT to our application.

Our main.rs has to accommodate this requirement:

#![feature(async_await)]

extern crate tide;

use tide::App;
use std::{env, net::SocketAddr};


fn main() {
    let mut app = App::new(());
    // Bind to 0.0.0.0 so the app is reachable from outside a container or
    // dyno, on whatever port the environment assigns.
    let address = SocketAddr::from(([0, 0, 0, 0], get_server_port()));

    app.at("/").get(async move |_| "hello world");
    app.serve(address).expect("Start server");
}

// Read the PORT environment variable if it is set and valid, otherwise fall
// back to a local default.
fn get_server_port() -> u16 {
    env::var("PORT")
        .ok()
        .and_then(|port| port.parse().ok())
        .unwrap_or(8186)
}
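Assuming you run the app locally with cargo (and the nightly toolchain active), you can verify the PORT handling before deploying anything; 3000 is just an example value:

$ PORT=3000 cargo run
# in a second terminal:
$ curl http://127.0.0.1:3000/
hello world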

With this setup ready, we can go over each deployment option.

Managed Deployments via Heroku

Managed environments are for the most part just an abstraction. Internally they do the same as you would with your own pipeline: push code to a Git repository. A “hook” watches this repository and, on changes, compiles the latest version and runs it. For you, however, it’s just a git push heroku master.

(Figure: how Heroku deployments work)

To get started, you need a (free) Heroku account. Log in with your new account and create a new app:

(Figure: creating a new app in the Heroku dashboard)

After clicking “Create app”, Heroku explains under the “Deploy” tab how to push your code to their servers:

(Figure: the “Deploy” tab of the new Heroku app)

Prepare your code

First, we need to be able to push our code base to the remote location (Heroku). Therefore, please install the Heroku toolchain. Afterwards we can add the remote location to our Git repository:

$ cd my-cool-web-app
$ heroku login
$ heroku git:remote -a my-cool-web-app

Next, we need to tell Heroku how to run our application after it is built. Heroku expects a file with the name Procfile, which has the start command in it:

$ touch Procfile

And put the following line in it:

web: ./target/release/my-cool-web-app

We also have to tell Heroku which version of Rust we are using. Since we want to use nightly, we create a file called RustConfig in the root directory:

$ touch RustConfig

with the following line:

VERSION=nightly
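If you want the Heroku build to use the same nightly you installed locally, the buildpack’s VERSION value can presumably also be pinned to a dated toolchain. The exact value below is an assumption based on rustup’s toolchain naming, so double-check it against the buildpack’s README:

VERSION=nightly-2019-02-25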

Caveat

Rust is so new that Heroku doesn’t support it out of the box. We need to install and activate a “buildpack” for Rust. So inside the root directory of your application, execute the following commands:

$ heroku create --buildpack emk/rust
$ heroku buildpacks:set emk/rust

This will activate the language support for Rust.

Now we can:

$ git add .
$ git commit -m "Init"
$ git push heroku master

Once the push succeeds, we go back to the Heroku dashboard in the browser and click on the generated domain (under “Settings”). A browser window should open and display “hello world”.
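You can also open the deployed app and follow its logs straight from the Heroku CLI, which is handy when the build or the start command fails:

$ heroku open
$ heroku logs --tail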

Summary

  • Heroku makes it easy to deploy your application
  • In less than 5 minutes you have a running version of your app live
  • You can assign your own domain and activate HTTPS (if you pay for it)
  • Heroku is the best option for this tutorial and for starting side projects: cheap, easy to use, and it removes the overhead of deployments, especially in the beginning

Docker

Using Docker has the huge advantage that you are free to choose your pipelines and environments. You can either build the image locally and push it as-is to a Docker registry, from where a server can pull (download) and execute (docker run) it. Or you create a blueprint (a Dockerfile) which other services can use to build the image on their servers.

If you are using Docker for your deployments, you have two options. The first one is to push your code (with a Dockerfile) to a Git repository (like GitHub or Bitbucket) and then have a configured deployment server which listens for changes, pulls the code from the repository, builds the image, and runs it.

(Figure: deployments via Docker and a Git repository)

Your second option is to use a Docker registry. There you have the advantage of pre-building your image and shipping it as-is. This sometimes makes deployments faster, and you have to ship less code (especially in the case of Rust).

(Figure: deployments via a Docker registry)
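As a rough sketch of the registry route (it uses the my-cool-web-app:latest image we build later in this section; registry.example.com is a placeholder for your own registry):

# on your machine or CI server
$ docker tag my-cool-web-app:latest registry.example.com/my-cool-web-app:latest
$ docker push registry.example.com/my-cool-web-app:latest

# on the production server
$ docker pull registry.example.com/my-cool-web-app:latest
$ docker run -d -p 80:8181 registry.example.com/my-cool-web-app:latest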

We can use Rust’s ability to compile down to a single binary. We can even go one step further and compile a static Rust binary with no external dependencies. What we need for this is to:

  • Build a Rust binary
  • Statically link the needed C libraries into it so it can run on its own

The result is a binary which doesn’t even need Rust installed to run. Thanks to the open source community and Eric Kidd, there is already a solution out there which helps us with that.

The result is a super small Docker image with no external dependencies: meet rust-musl-builder. It is a Docker image which helps you build static Rust binaries. The builder image itself is downloaded on the first execution.

Everything we type and create happens from the root directory of our application.

$ cd my-cool-web-app

Before we create our Dockerfile, let’s see what we are actually trying to do. We are using rust-musl-builder to statically link musl libc into our binary.

$ docker run --rm -it -v "$(pwd)":/home/rust/src ekidd/rust-musl-builder cargo build --release

This will create our super small binary. You can inspect it like this:

$ ls -lh target/x86_64-unknown-linux-musl/release/my-cool-web-app
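You can also check that the binary really is statically linked (assuming the file utility is installed); the output should mention that it is statically linked:

$ file target/x86_64-unknown-linux-musl/release/my-cool-web-app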

It is just a few MB in size (in my example: 4.4 MB). To be able to recreate this procedure over and over again, not just on our local machine but also in a deployment pipeline on different servers, we create a multi-stage Dockerfile.

# Stage 1: build a statically linked release binary inside the musl builder image
FROM ekidd/rust-musl-builder:nightly AS build
COPY . ./
RUN sudo chown -R rust:rust .
RUN cargo build --release

# Stage 2: copy only the binary into an otherwise empty image
FROM scratch
COPY --from=build /home/rust/src/target/x86_64-unknown-linux-musl/release/my-cool-web-app /
ENV PORT 8181
EXPOSE ${PORT}
CMD ["/my-cool-web-app"]

You can build the image now via:

$ docker build -t my-cool-web-app:latest .

And run it with:

$ docker run -d --rm -P --name my-cool-web-app my-cool-web-app:latest

Now you can open your browser (on macOS) via:

$ open http://$(docker container port my-cool-web-app 8181)
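If you are not on macOS (or prefer the terminal), the same check works with curl, because docker container port prints the host address and the mapped port:

$ curl http://$(docker container port my-cool-web-app 8181)
hello world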

We just created a super minimal Docker image which contains our binary with no external dependencies. You can inspect the newly created image via:

$ docker image ls my-cool-web-app

(Figure: docker image ls output showing the small image)

Summary

  • Docker is a beast, but when used wisely can be quite helpful
  • Especially with Rust: You can create statically linked binaries which are super small and don’t even need a Rust environment to run in
  • You also have much more options to host and run your application when choosing Docker
  • However, not every managed hosting environment lets you push Docker images to it

Serverless runtimes — ZEIT/now

Serverless is a different mindset than the first two options. Serverless also means stateless, so you are not building web applications but functions. Instead of having API endpoints built into your app, you basically just have those API endpoints (in serverless terms: handlers). Web frameworks like rocket and actix might be overkill here. Right now, ZEIT does not support Rust nightly builds in their new serverless environment.

So instead of creating a binary crate (as we did earlier with cargo new my-cool-web-app), we create a library:

$ cargo new now-service --lib
$ cd now-service

Here we have to create a file called now.json:

{
  "name": "now-service",
  "version": 2,
  "builds": [
    {
      "src": "src/index.rs",
      "use": "@now/rust"
    }
  ]
}

And our src/lib.rs example looks like this:

use http::{Request, Response, StatusCode, header};

fn handler(request: Request<()>) -> http::Result<Response<String>> {
    let response = Response::builder()
        .status(StatusCode::OK)
        .header(header::CONTENT_TYPE, "text/html")
        .body("<!doctype html><html><head><title>A simple deployment with Now!</title></head><body><h1>Welcome to Rust on Now</h1></body></html>".to_string())
        .expect("failed to render response");

    Ok(response)
}
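The handler only relies on the http crate, so it has to be declared in the library’s Cargo.toml. The version below is an assumption; use whatever the @now/rust documentation lists at the time you read this:

[dependencies]
http = "0.1"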

As with Heroku, you need to install the ZEIT toolchain, which is called “now”. There are several options. If you are on macOS, you can do it via:

$ brew cask install now

This installs the Now application. Find it in your /Applications folder and open it. You can finish the installation by typing in your email address. This will also install the command line toolchain.
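If you are not on macOS, the now CLI was also distributed via npm at the time of writing:

$ npm install -g now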

That’s basically it. You can type:

$ now

and hit Enter. This will start the upload of your application. Log in to your ZEIT dashboard and click on the provided link.

(Figure: the ZEIT dashboard after the deployment)

Summary

  • Serverless lets you save costs: the service only runs when requested
  • This can result in higher boot times (cold starts), which need to be considered
  • The serverless mindset makes you rethink state, and whether you really need a fully fledged web application for some use cases
  • The deployment can take a bit longer when using AWS Lambda or ZEITs now

Follow me on Twitter to always get the latest information about web development in Rust. Also check out the GitHub repository for this series.

Top comments (12)

Chenyang Shao

Update:


#![feature(async_await)]

extern crate tide;

use tide::App;
use std::{env, net::SocketAddr};



fn main() {
    let mut app = App::new(());
    let address = SocketAddr::from(([127, 0, 0, 1], get_server_port()));

    app.at("/").get(async move |_| "hello world");
    app.serve(address).expect("Start server");
}

fn get_server_port() -> u16 {
    env::var("PORT")
        .ok()
        .and_then(|port| port.parse().ok())
        .unwrap_or_else(|| 8186)
}

Bastian Gruber

Thank you so much for the update Chenyang!

Chenyang Shao

thank you for your tutorial~ ^ ^

Chenyang Shao

Thank you!

And in case anyone can't compile tide and sees an error like this:

error: internal compiler error: src/librustc_mir/transform/generator.rs:715: Broken MIR: generator contains type std::option::Option<cookies::CookieData> in MIR, but typeck only knows about for<'r> {cookies::CookieData, std::sync::Arc<std::sync::RwLock<cookie::jar::CookieJar>>, std::pin::Pin<std::boxed::Box<(dyn core::future::future::Future<Output = http::response::Response<http_service::Body>> + std::marker::Send + 'r)>>, ()}
  --> /Users/shaochenyang/.cargo/registry/src/github.com-1ecc6299db9ec823/tide-0.2.0/src/lib.rs:21:56
   |
21 |           ::futures::future::FutureExt::boxed(async move { $($t)* })
   |                                                          ^^^^^^^^^^
   | 
  ::: /Users/shaochenyang/.cargo/registry/src/github.com-1ecc6299db9ec823/tide-0.2.0/src/middleware/cookies.rs:34:9
   |
34 | /         box_async! {
35 | |             let cookie_data = cx
36 | |                 .extensions_mut()
37 | |                 .remove()
...  |
58 | |             res
59 | |         }
   | |_________- in this macro invocation

thread 'rustc' panicked at 'Box<Any>', src/librustc_errors/lib.rs:578:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
error: aborting due to previous error


note: the compiler unexpectedly panicked. this is a bug.

note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports

note: rustc 1.37.0-nightly (de7c4e423 2019-06-23) running on x86_64-apple-darwin

note: compiler flags: -C debuginfo=2 --crate-type lib

note: some of the compiler flags provided by cargo are hidden

error: Could not compile `tide`.

To learn more, run the command again with --verbose.

please see

hope this helps

François Best

Thanks for the article !

I'm looking into Zeit Now for Rust, but I have a few issues with it:

  • It does not follow standard crates filesystem recommendations, but rather takes a Next.js approach of using filesystem = routes, which means you end up with your endpoint at example.com/src/lib.rs. I see you defined src/index.rs in now.json, but no such file exists, how does this work ?
  • For this reason, I found it hard to integrate serverless endpoints in an existing codebase that uses Cargo workspaces, because that "crate" won't actually compile.
Bastian Gruber

Thanks Francois! I changed the filename:
github.com/gruberb/web-programming...

You are right, the serverless mindset is different from a typical cargo app. You can follow the official documentation here: zeit.co/blog/introducing-now-rust

As I mentioned in the article: You don't really create an app when thinking in serverless terms, but you just invoke functions/handlers which process data.

The ZEIT environment will idle your application when it's not needed and start it when triggered (an endpoint is called).

François Best

What I meant was that in a real-world case, your serverless endpoint/function will probably want to use business/domain/applicative code that is located and organised elsewhere, in Cargo workspaces, and Zeit's approach does not play well with that.

One way that could work would be to have a directory structure as follows:

├── workspace
│ ├── Cargo.toml      Workspace root definition
│ ├── Cargo.lock      Shared lockfile
│ ├── target/         Shared build directory
│ ├── foo
│ └── bar
└── serverless        Zeit Now endpoints
  ├── baz
  | ├── Cargo.toml    [dependencies] foo = { path = "../../workspace/foo" }
  | └── index.rs
  └── qux
    ├── Cargo.toml    [dependencies] bar = { path = "../../workspace/bar" }
    └── index.rs

Edit: after a quick test, this cannot work either, if the crates that the serverless endpoints depend on (here foo and bar) are not published, which would be the case if they are internal to the project.

Also, not having the Cargo.toml workspace root at the root of the project directory disables RLS, for code completion / formatting etc in VSCode.

I guess this is what people mean when they talk about lock-in with serverless, it's not so much about the platform, but the constraints they impose upon your project structure and dependency management.

cezaryB

Hey man,

I have been interested in Rust for the last couple of months, ever since I heard that Deno, a new alternative to Node.js, is written in Rust. These last couple of months I've been learning Rust through the official book (doc.rust-lang.org/book/) as well as through the examples and tutorials that I could find online.

Always had one goal in mind: I wanted to build the services and APIs with Rust that I've already built in Node.js in the past.

Anyway, I think you are doing amazing work, sharing these tutorials and your mindset with other Rust (wannabe) developers, so thank you for that

jeikabu

Is Zeit a better choice than Lambda when it comes to rust? I haven't taken the time to look into many other options.

Bastian Gruber

ZEIT was just so easy to set up. I will look into AWS Lambda at some point in the next few weeks, but the hassle of setting everything up was not worth it to me (for now).

François Best

Now v2 uses AWS Lambdas under the hood, and provides abstractions and automation of a few things to make the process easier.

loothood

great article!
Thank you!