That feeling of embarking on something new.
Not pictured is the wall I ran into.
I am a Python developer. My life has been lived within the safe confines of an interpreter, devoid of the realities of compilers and platform architectures. Life is sweet. Despite this, I have found myself lured by the prospects and promises of what compiled languages can bring: ludicrous speed.
Since 2020 it has been clear to me that AWS has decided on Rust as their Go. The release of the official AWS SDK for Rust, and a maturing Rust runtime for Lambda, signaled that it might be time for me to explore what developing my functions in this language might look like.
The Setup
I had heard that Rust can be difficult, but I can safely say that getting started with the language itself is not. Following the instructions in the Rust Book resulted in my ability to write and compile a program without a single hitch.
Where I really ran into trouble was trying to use the Rust runtime and AWS SDK and deploy a Lambda Function. I am writing code on an Apple Silicon MacBook Pro (M1 Pro to be precise). This is an arm64 architecture. My goal was to compile for the standard x86 architecture in Lambda.
Yes, arm64 is available as a target, but x86 is available in 24 regions (including GovCloud) versus 10 for arm64 (with no GovCloud). These region lists are available on the AWS Lambda pricing page.
Head First
My goals going into this were not very ambitious, beyond jumping headfirst into running Rust code in Lambda. The code I used was taken straight from the lambda_runtime crate's example. I wanted to get a feel for what the experience was going to be like moving from serverless Python to serverless Rust.
```rust
use lambda_runtime::{handler_fn, Context, Error};
use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let func = handler_fn(func);
    lambda_runtime::run(func).await?;
    Ok(())
}

async fn func(event: Value, _: Context) -> Result<Value, Error> {
    let first_name = event["firstName"].as_str().unwrap_or("world");
    Ok(json!({ "message": format!("Hello, {}!", first_name) }))
}
```
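Stripped of the Lambda and serde plumbing, the handler boils down to "read firstName, default to world". A std-only sketch of that core logic (my own illustration, not part of the example):

```rust
// Core of the handler above, minus the Lambda plumbing: take an
// optional "firstName" and build the greeting the handler returns.
fn greet(first_name: Option<&str>) -> String {
    format!("Hello, {}!", first_name.unwrap_or("world"))
}

fn main() {
    // The two cases the handler can see: a name present, or missing.
    println!("{}", greet(Some("Rust")));
    println!("{}", greet(None));
}
```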
Running `cargo build` introduced me to Rust's fantastic and helpful compiler errors, and to managing my dependencies in `Cargo.toml`.
```
error[E0432]: unresolved import `serde_json`
 --> src/main.rs:2:5
  |
2 | use serde_json::{json, Value};
  |     ^^^^^^^^^^ use of undeclared crate or module `serde_json`

error[E0433]: failed to resolve: use of undeclared crate or module `tokio`
 --> src/main.rs:4:3
  |
4 | #[tokio::main]
  |   ^^^^^ use of undeclared crate or module `tokio`
```
Using `cargo` feels very familiar to me, as `pipenv` works much the same way. Nice.
```toml
[package]
name = "rust_demo_api"
version = "0.1.0"
edition = "2021"

[dependencies]
lambda_runtime = "*"
lambda_http = "*"
serde_json = "*"
tokio = "*"
```
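One caveat carried over from my Python habits: the `*` wildcards above grab whatever the latest versions are, which is fine for a demo but makes builds unrepeatable. A pinned variant might look like this (version numbers are illustrative, not a recommendation):

```toml
[dependencies]
lambda_runtime = "0.5"
lambda_http = "0.5"
serde_json = "1.0"
# #[tokio::main] needs the macros and runtime features enabled
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```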
Re-running `cargo build` now results in success! Great! Now to compile my function for x86 and deploy it using SAM. The documentation is straightforward, and cross compilation is a touted strength of Rust. So I...
```shell
% rustup target add x86_64-unknown-linux-gnu
% cargo build --target x86_64-unknown-linux-gnu
```
The Wall
This is where I hit it. The very first thing I tried to do was switch my target from x86 to arm64 to see if I could compile for Graviton.
```shell
% rustup target add aarch64-unknown-linux-gnu
% cargo build --target aarch64-unknown-linux-gnu
```
More errors.
I began a process we are all very familiar with: encounter an issue, Google the relevant errors and terms, and test solutions. I did indeed find posts and GitHub issues related to the various compile errors I was trying to work through, but in the process I came to the realization that I was trying to solve cross compiling issues in the opposite direction of most of the Rust community.
When cross compiling is discussed, it is usually an x86 host targeting arm64 platforms. I was trying to go from an arm64 host to an x86 platform. At this point I came back to the Rust runtime's documentation:
At the time of writing, the ability to cross compile to x86 or aarch64 AL2 on MacOS is limited. The most robust way we've found is using Docker to produce the artifacts for you.
```shell
% PLATFORM="linux/amd64"             # or linux/arm64
% TARGET="x86_64-unknown-linux-gnu"  # or aarch64-unknown-linux-gnu
% docker run \
    --platform "${PLATFORM}" \
    --rm --user "$(id -u)":"$(id -g)" \
    -v "${PWD}":/usr/src/myapp -w /usr/src/myapp rust:latest \
    cargo build --release --target "${TARGET}"
```
The x86 build did not work, but the arm64 build did!
```
Finished release [optimized] target(s) in 2m 56s
```
Oof, that's slow. An immediate repeat run clocked in at 58.81 seconds. Updating the crates.io index takes up the bulk of that time, and compilation will only go as fast as the number of CPUs I assign to Docker.
Still, this is progress. I can copy the resulting binary to `build/get/bootstrap`, the location set in my SAM template shown below, and deploy my first function.
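For reference, the copy step is nothing fancier than this (paths assume the arm64 Docker build above and my template's CodeUri; adjust for your own project):

```shell
# The custom runtime expects the executable to be named "bootstrap"
mkdir -p build/get
cp target/aarch64-unknown-linux-gnu/release/rust_demo_api build/get/bootstrap
```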
```yaml
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Architectures:
      - arm64
    Runtime: provided.al2
    Handler: rust

Resources:
  Api:
    Type: AWS::Serverless::HttpApi
    Properties:
      FailOnWarnings: true

  GetFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./build/get
      Events:
        read:
          Type: HttpApi
          Properties:
            ApiId: !Ref Api
            Method: get
            Path: /

Outputs:
  ApiUrl:
    Value: !Sub https://${Api}.execute-api.${AWS::Region}.${AWS::URLSuffix}
```
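With the binary in place, deploying and poking at the function is standard SAM and AWS CLI fare. Something like the following, where the function name is a placeholder for whatever CloudFormation generates:

```shell
# First deploy is guided; answers are saved to samconfig.toml for later runs
sam deploy --guided

# Invoke the function directly with the event shape the handler expects
aws lambda invoke \
  --function-name rust-demo-GetFunction-XXXXXXXX \
  --cli-binary-format raw-in-base64-out \
  --payload '{"firstName": "Rust"}' \
  response.json
```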
I had partially achieved my goals: a sample Rust function deployed behind an API Gateway that I can invoke. But I had to fall back to arm64, and it doesn't use the new SDK to perform operations (e.g. against DynamoDB).
Professional Help
During all of this I had an AWS Solutions Architect guiding me through my pitfalls. Nicolas Moutschen has a fully realized version of what I was prototyping sitting in a pull request to the AWS SAM CLI, and a much larger and more comprehensive Rust application available here.
I took the code in his PR (it writes a JSON body to a DynamoDB table) and added a second function.
```yaml
...
  PostFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./build/post
      Policies:
        - DynamoDBWritePolicy:
            TableName: !Ref Table
      Events:
        create:
          Type: HttpApi
          Properties:
            ApiId: !Ref Api
            Method: post
            Path: /create/{id}
...
```
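I won't reproduce Nicolas' handler here, but to give a flavor: with the new SDK, writing an item comes down to a fluent `put_item` call. This is my own rough sketch against the (then alpha) aws-sdk-dynamodb crate, not his code; the `put_payload` function name, the `TABLE_NAME` variable, and the attribute names are all made up for illustration:

```rust
use aws_sdk_dynamodb::{model::AttributeValue, Client};

// Sketch: store a JSON body under the {id} path parameter.
// Assumes the SAM template injects the table name via TABLE_NAME.
async fn put_payload(client: &Client, id: &str, body: &str) -> Result<(), aws_sdk_dynamodb::Error> {
    let table = std::env::var("TABLE_NAME").expect("TABLE_NAME not set");
    client
        .put_item()
        .table_name(table)
        .item("id", AttributeValue::S(id.to_owned()))
        .item("payload", AttributeValue::S(body.to_owned()))
        .send()
        .await?;
    Ok(())
}
```

This is the part that felt most foreign coming from boto3: the builder chain replaces the big keyword-argument dictionaries, and the compiler catches a malformed request before it ever leaves my machine.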
The results were nothing short of fantastic.
In addition to learning a great deal from Nicolas' code, I also followed his suggestions to fix my issue of running builds for x86. He came across cargo-zigbuild, which uses ziglang as the linker for cross compiling.
```shell
% python3 -m venv zigenv
% source zigenv/bin/activate
(zigenv) % pip install ziglang
...
(zigenv) % cargo install cargo-zigbuild
...
(zigenv) % cargo zigbuild --release --target x86_64-unknown-linux-gnu
...
    Finished release [optimized] target(s) in 20.85s
```
First try!
That is night and day compared with the earlier two-plus minutes in Docker! The re-run without changes - just as before - clocked in at 0.04 seconds. These are the kinds of build speeds I wanted to see!
This Was the Hard Way
The Rust Book - linked again to emphasize its importance - is a fantastic getting started resource. Kicking the tires on Rust as a language was quick and easy and lived up to much of the promise of having your cake and eating it too.
Jumping immediately into running Rust code in AWS Lambda also shouldn't have been too difficult had I been on an Intel-based Mac. Instead, I came across what felt like ALL of the edge cases in cross compiling Rust code.
I still need to learn much of the how of Rust, but seeing working code from Nicolas demonstrating the use of the new SDK in the context of a Lambda function taught me a great deal. If I really wanted to I could probably cobble together a basic CRUDL API with my current knowledge.
Would I use such a thing in production? I'm not nearly that irresponsible.
This adventure did bear unexpected fruits.
I'm going to be continuing my Rust journey by completing the Rust Book. If by the end of this post you haven't gone and followed @NMoutschen on Twitter yet you can do so now!
🦀
Top comments (7)
Great article. Thanks for sharing your experience :)
I recently went through a similar journey (and probably bumped into similar walls) and I see we have found some different solutions, so great to be able to steal a trick or two here.
Please keep sharing more as you go through this journey!
Thanks!
Why do you say "I had to fall back to arm64". Didn't you target arm64 on purpose to use Graviton? ("The very first thing I tried to do was switch my target from x86 to arm64 to see if I could compile for Graviton.")
Nicolas Moutschen and a few other AWS folks are putting quite a bit of time into improving the runtime. It is huge progress from a year or so ago, when the maintainers went silent and we had to fork it temporarily just to fix some basic bugs.
In terms of cross-compilation, I figured that it is easier to have a build server with the same architecture as the target. Basically, it's an EC2 spot instance with Git + RustUp.
Check out my github.com/rimutaka/lambda-debug-p... repo for local debugging of Lambdas. It plugs your development machines directly into the pipeline where your Lambda is. No need to "emulate" anything.
You might want to try the new tool too: github.com/cargo-lambda/cargo-lambda. It's a combination of your final commands, plus a bunch of other features that are handy for working with AWS Lambda.
Thanks for sharing, I am a member of a Linux community group in Dublin, and we chat often about Rust. This programming language is becoming more and more popular. At the moment, I build on Lambda with Python runtimes, and I am getting more and more interested in learning Rust because of its speed.
Speed is one part of it. Rust’s very nature enforces safety and good code design. Lambda functions by nature should be small and concise programs - a single function has one job and does it well. Rust feels like a natural fit to this.
Thanks for sharing! There is still little documentation and guides for applying Rust to modern cloud environments so I appreciate you taking the time. I am about to test this out now :)