In part 3 we started with some very simple
AWS Lambda deployments. In this part we are going to continue on that path, using the
AWS Serverless Application Model (AWS SAM). This is essentially an extension to AWS Cloudformation,
the AWS service that provides infrastructure-as-code capabilities.
With Cloudformation one can write declarations, in the form of templates, of how the infrastructure in AWS should look.
Using a template one can create an instance of that template, called a stack.
AWS SAM makes it easier to define the infrastructure for serverless application solutions and is certainly a step up
from just plain Cloudformation in that regard.
The AWS SAM specification also includes the concept of an Application, which is essentially a packaged serverless solution. A SAM application declaration can point to such a package, which is one approach to
encourage re-use.
AWS has for this purpose the Serverless Application Repository, currently with more than 1000 entries.
There will also be a brief look into the area of cold start times for AWS Lambda with F# and .NET Core.
Cloudformation and SAM
Originally Cloudformation templates used JSON format and were not particularly easy to read or work with. Later Cloudformation
supported the YAML format for the templates as well, which is an improvement in readability. Still, the resource descriptions are fairly low-level and somewhat complex to work with, especially since there is no convenient way to build packages or higher-level modules within Cloudformation itself.
AWS SAM is an extension to Cloudformation that tries to make it a bit easier to use around AWS Lambda.
More recently AWS has also introduced a more elaborate framework on top of Cloudformation, the AWS Cloud Development Kit, but that is a topic for another blog post.
In this post though we will explore building and deploying AWS Lambda solutions using AWS SAM with F# code.
Creating a SAM project
Similar to what we did in part 3, we will use the dotnet command-line tooling to create a starting point for the Lambda-based solution, using one of the AWS Lambda project templates:
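As a rough sketch, the commands look something like this - the template package is Amazon.Lambda.Templates, but the exact template short name is an assumption on my part, so verify it with dotnet new -l first:

dotnet new -i Amazon.Lambda.Templates
dotnet new serverless.DetectImageLabels -lang F# -n DetectImageLabels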
When I looked at these, the template Serverless Detect Image Labels looked like an interesting option. This is a
Lambda function that triggers whenever an image file is uploaded to an S3 bucket. It then uses the AWS service
Rekognition to try to identify what is in the picture. The result is then put as tags on the file itself, with a label and the confidence value for that label.
So we can use the template to set up this solution and start playing around with it. Similarly to the template we used in part 3, it creates two projects - one for the Lambda function itself and a related test project. The file structure is also similar.
One addition to notice here is the serverless.template file. This is the Cloudformation template file, using AWS SAM. It includes the declaration of the Lambda function, the S3 bucket and the permissions the Lambda function needs.
Similarly to the previous Lambda project, we need to get the dependencies downloaded and installed. This we can do by running the dotnet build command in the project directory.
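Something like this, run from the Lambda function's project directory (the exact path depends on how the template lays out the solution, so treat it as an assumption):

cd DetectImageLabels/src/DetectImageLabels
dotnet build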
The Detect Image Labels code
Let us start by looking at the code for this Lambda function, which is in the file Function.fs. The complete code is below. We will look at different parts piece by piece.
namespace DetectImageLabels

open Amazon.Lambda.Core
open Amazon.Lambda.S3Events
open Amazon.Rekognition
open Amazon.Rekognition.Model
open Amazon.S3
open Amazon.S3.Model
open Amazon.S3.Util
open System
open System.Collections.Generic
open System.IO

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[<assembly: LambdaSerializer(typeof<Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer>)>]
()

type Function(s3Client: IAmazonS3, rekognitionClient: IAmazonRekognition, minConfidence: float32) =

    let supportedImageTypes = set [".png"; ".jpg"; ".jpeg"]

    new() =
        let environmentMinConfidence = System.Environment.GetEnvironmentVariable("MinConfidence")
        let minConfidence =
            match Single.TryParse(environmentMinConfidence) with
            | false, _ -> 70.0f
            | true, confidence ->
                printfn "Setting minimum confidence to %f" confidence
                confidence
        Function(new AmazonS3Client(), new AmazonRekognitionClient(), minConfidence)

    /// <summary>
    /// A function for responding to S3 create events. It will determine if the object is an image
    /// and use Amazon Rekognition to detect labels and add the labels as tags on the S3 object.
    /// </summary>
    /// <param name="input"></param>
    /// <param name="context"></param>
    /// <returns></returns>
    member __.FunctionHandler (input: S3Event) (context: ILambdaContext) =

        let isSupportedImageType (record: S3EventNotification.S3EventNotificationRecord) =
            match Set.contains (Path.GetExtension record.S3.Object.Key) supportedImageTypes with
            | true -> true
            | false ->
                sprintf "Object %s:%s is not a supported image type" record.S3.Bucket.Name record.S3.Object.Key
                |> context.Logger.LogLine
                false

        let processRecordAsync (record: S3EventNotification.S3EventNotificationRecord) (context: ILambdaContext) = async {
            sprintf "Looking for labels in image %s:%s" record.S3.Bucket.Name record.S3.Object.Key
            |> context.Logger.LogLine

            let detectRequest =
                DetectLabelsRequest(
                    MinConfidence = minConfidence,
                    Image = Image(
                        S3Object = Amazon.Rekognition.Model.S3Object(
                            Bucket = record.S3.Bucket.Name,
                            Name = record.S3.Object.Key
                        )
                    )
                )

            let! detectResponse =
                rekognitionClient.DetectLabelsAsync(detectRequest)
                |> Async.AwaitTask

            let s3Tags =
                detectResponse.Labels
                |> Seq.truncate 10
                |> Seq.map (fun x ->
                    sprintf "\tFound Label %s with confidence %f" x.Name x.Confidence |> context.Logger.LogLine
                    Tag(Key = x.Name, Value = string x.Confidence))
                |> List

            let putTags =
                PutObjectTaggingRequest(
                    BucketName = record.S3.Bucket.Name,
                    Key = record.S3.Object.Key,
                    Tagging = Tagging(TagSet = s3Tags)
                )

            let! putResponse =
                s3Client.PutObjectTaggingAsync(putTags)
                |> Async.AwaitTask

            context.Logger.LogLine("Tags put on S3 object")
        }

        input.Records
        |> Seq.filter isSupportedImageType
        |> Seq.iter (fun x -> processRecordAsync x context |> Async.RunSynchronously)
Dependencies and structure
The starting point in the code is a namespace for the solution and a number of open
declarations to bring entities into scope in the current namespace, plus an assembly attribute that registers the serializer that converts the JSON input into .NET/F# data structures.
namespace DetectImageLabels
open Amazon.Lambda.Core
open Amazon.Lambda.S3Events
open Amazon.Rekognition
open Amazon.Rekognition.Model
open Amazon.S3
open Amazon.S3.Model
open Amazon.S3.Util
open System
open System.Collections.Generic
open System.IO
// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[<assembly: LambdaSerializer(typeof<Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer>)>]
()
Function class definition
Next up is the definition of the Function class. In this case, the constructor for the class takes
three parameters - AWS SDK client objects for S3 and Rekognition, as well as a minimum confidence value.
The latter is our threshold for which labels to include in the result - labels with a confidence value below it will not be included, labels at or above it will be.
In the body of the class, we also define a set of image formats that we support. This will be used to check the file extension of uploaded files. The let bindings in a class definition define private fields
in the class. The parameters in the type declaration and the statements in the "body" define the primary constructor.
There is also a new keyword called new (!). This declares an alternate constructor,
one which in this case does not take any parameters. Alternate constructors must always call the primary constructor.
type Function(s3Client: IAmazonS3, rekognitionClient: IAmazonRekognition, minConfidence: float32) =

    let supportedImageTypes = set [".png"; ".jpg"; ".jpeg"]

    new() =
        let environmentMinConfidence = System.Environment.GetEnvironmentVariable("MinConfidence")
        let minConfidence =
            match Single.TryParse(environmentMinConfidence) with
            | false, _ -> 70.0f
            | true, confidence ->
                printfn "Setting minimum confidence to %f" confidence
                confidence
        Function(new AmazonS3Client(), new AmazonRekognitionClient(), minConfidence)
The alternate constructor retrieves the value of the environment variable MinConfidence. It then sets the value of minConfidence by using the .NET
Single type (System.Single to be more precise), which represents a single-precision
floating-point value. This type has a method called TryParse, which tries to parse a string into a floating-point value. The documentation can be found here.
As with a lot of other documentation for .NET there are only examples for C#:
public static bool TryParse (string s, out float result);
It returns a boolean which indicates whether the parsing was successful or not. There is also an out variable,
which will contain the parsed value if it was successful. So how does F# handle this?
The out variable becomes part of the result and not an input value. So instead of returning just a
boolean in F#, the function call will return a tuple with a boolean and a float.
This means that we can do pattern matching on the result and do different things depending on whether the boolean
part of the tuple is true or false.
In the case it is false, we simply ignore the float value (that is what the underscore indicates) and the result is
the default value of 70.0. On the other hand, if we could parse the environment variable string, we return
the parsed value.
We then call the primary constructor, with created clients for S3 and Rekognition, as well as the minConfidence value.
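As a small standalone illustration of this out-parameter-to-tuple behaviour (not part of the Lambda code, and relying only on the already-opened System namespace):

// The C# signature `bool TryParse(string s, out float result)` surfaces in F#
// as a call that returns a tuple of (bool * float32).
let ok, value = Single.TryParse "42.5"    // ok = true, value = 42.5f (with a '.' decimal separator culture)
let failed, _ = Single.TryParse "oops"    // failed = false, the parsed value is ignored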
The handler function
The actual handler function is where interesting things happen.
/// <summary>
/// A function for responding to S3 create events. It will determine if the object is an image
/// and use Amazon Rekognition to detect labels and add the labels as tags on the S3 object.
/// </summary>
/// <param name="input"></param>
/// <param name="context"></param>
/// <returns></returns>
member __.FunctionHandler (input: S3Event) (context: ILambdaContext) =
    .
    .
    .
    .
    input.Records
    |> Seq.filter isSupportedImageType
    |> Seq.iter (fun x -> processRecordAsync x context |> Async.RunSynchronously)
The FunctionHandler member function takes two parameters as input - an S3Event and the Lambda context.
Then at the end, there is the actual body of the handler code. In between are two helper function definitions, which we will look at in more detail further down.
The main processing logic takes all the records that are part of the S3Event - there can be more than one - and filters them with the isSupportedImageType function. What remains should be the records for files
on which we can do image processing with Rekognition.
The remaining sequence of event records is then iterated over; for each record we apply the
processRecordAsync function and run the resulting async computation synchronously.
The helper functions
The first helper function, isSupportedImageType, is relatively simple. Input is a notification record from the
S3Event. The record contains a key field for the filename. The function Path.GetExtension retrieves the file extension part of the filename, which is then checked against the set of supported image types that we defined
in the class body.
If the extension is in the set, the function returns true; otherwise we log, via the logger in the context object,
that this is not a supported image type and return false.
let isSupportedImageType (record: S3EventNotification.S3EventNotificationRecord) =
    match Set.contains (Path.GetExtension record.S3.Object.Key) supportedImageTypes with
    | true -> true
    | false ->
        sprintf "Object %s:%s is not a supported image type" record.S3.Bucket.Name record.S3.Object.Key
        |> context.Logger.LogLine
        false
The processRecordAsync function is a bit more complex. It takes a notification record and the Lambda context as input, and the body of the function is an async computation expression. This is used since we are going to make some asynchronous calls.
The first lines of the code write a log line.
let processRecordAsync (record: S3EventNotification.S3EventNotificationRecord) (context: ILambdaContext) = async {
    sprintf "Looking for labels in image %s:%s" record.S3.Bucket.Name record.S3.Object.Key
    |> context.Logger.LogLine
After that, we perform the call to Rekognition to detect labels for the image in the S3 bucket. First the request
object with the input parameters is created, with a regular let binding.
    let detectRequest =
        DetectLabelsRequest(
            MinConfidence = minConfidence,
            Image = Image(
                S3Object = Amazon.Rekognition.Model.S3Object(
                    Bucket = record.S3.Bucket.Name,
                    Name = record.S3.Object.Key
                )
            )
        )

    let! detectResponse =
        rekognitionClient.DetectLabelsAsync(detectRequest)
        |> Async.AwaitTask
The actual call is an asynchronous call where we eventually will get the response and thus use the let! keyword,
similar to what we did in part 2 of this blog series. The actual execution of the code in the async block has not happened yet - this is triggered by the Async.RunSynchronously we looked at earlier in the handler function itself.
Now we are just setting up for what should be executed. See also the documentation for asynchronous programming in F#.
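To make that deferred-execution point concrete, here is a tiny standalone sketch (not part of the Lambda code):

// An async block is only a description of work - nothing runs when it is defined.
let work = async {
    printfn "running now"
    return 42
}
// Execution starts only when the computation is actually run:
let result = work |> Async.RunSynchronously   // prints "running now", result = 42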
When this response has been received though, we will have the label information we requested. So it will be time to
construct tags from these:
    let s3Tags =
        detectResponse.Labels
        |> Seq.truncate 10
        |> Seq.map (fun x ->
            sprintf "\tFound Label %s with confidence %f" x.Name x.Confidence |> context.Logger.LogLine
            Tag(Key = x.Name, Value = string x.Confidence))
        |> List
The response will contain a list of labels. As a list is also a sequence, we use the sequence function
Seq.truncate to take up to 10 labels from the list - if there are more than 10, the rest are simply discarded (S3 allows at most 10 tags per object).
For each of the remaining labels, we log the label info and construct a Tag object. Finally, the sequence is materialised into a .NET List (the System.Collections.Generic.List opened at the top of the file), so s3Tags becomes a List of Tag objects, which is what the tagging request expects.
    let putTags =
        PutObjectTaggingRequest(
            BucketName = record.S3.Bucket.Name,
            Key = record.S3.Object.Key,
            Tagging = Tagging(TagSet = s3Tags)
        )

    let! putResponse =
        s3Client.PutObjectTaggingAsync(putTags)
        |> Async.AwaitTask

    context.Logger.LogLine("Tags put on S3 object")
Once we have the list of tags, we create the request input for adding the tags to the image file (object) in S3 and then execute this call asynchronously (or rather, set it up to be executed).
The final line is the end of processRecordAsync, where we log that we are done with the tagging.
The Cloudformation piece
In the serverless.template file in the project, we have the Cloudformation part, with the SAM extension. Unfortunately, the template
uses JSON format, which in my opinion is a bit hard to read, even for a small template like this one. Getting into the details of Cloudformation and SAM is out of scope for this post, so we will not cover every part of the template; the Cloudformation and AWS SAM documentation are useful references.
The JSON version of the template is below:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "Template that creates a S3 bucket and a Lambda function that will be invoked when new objects are upload to the bucket.",
  "Parameters": {
    "BucketName": {
      "Type": "String",
      "Description": "Name of S3 bucket to be created. The Lambda function will be invoked when new objects are upload to the bucket. If left blank a name will be generated.",
      "MinLength": "0"
    }
  },
  "Conditions": {
    "BucketNameGenerated": {
      "Fn::Equals": [
        {
          "Ref": "BucketName"
        },
        ""
      ]
    }
  },
  "Resources": {
    "Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": {
          "Fn::If": [
            "BucketNameGenerated",
            {
              "Ref": "AWS::NoValue"
            },
            {
              "Ref": "BucketName"
            }
          ]
        }
      }
    },
    "LabelDetectFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "DetectImageLabels::DetectImageLabels.Function::FunctionHandler",
        "Runtime": "dotnetcore3.1",
        "CodeUri": "",
        "Description": "Default function",
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [
          "AWSLambdaFullAccess",
          "AmazonRekognitionReadOnlyAccess"
        ],
        "Events": {
          "NewImagesBucket": {
            "Type": "S3",
            "Properties": {
              "Bucket": {
                "Ref": "Bucket"
              },
              "Events": [
                "s3:ObjectCreated:*"
              ]
            }
          }
        }
      }
    }
  },
  "Outputs": {
    "BucketForImages": {
      "Value": {
        "Ref": "Bucket"
      },
      "Description": "Upload images to this bucket to trigger the Lambda function"
    }
  }
}
Fortunately JSON is a subset of YAML, so any JSON document can be converted to a YAML representation. There may be a
plugin in your IDE for that, or there are numerous tools that perform such a transformation. Either way, to have
something a bit more readable to reason about, I converted the JSON template to YAML. Specifically for Cloudformation the YAML could be tidied up further, for example with the short-form intrinsic functions (!Ref, !If), but I have not applied those here - the result is good enough, I think.
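If you want to do the conversion from the command line, one option (assuming you have Python and pip available) is the cfn-flip tool from awslabs:

pip install cfn-flip
cfn-flip serverless.template serverless.template.yml

The converted template is shown below.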
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: Template that creates an S3 bucket and a Lambda function that will
  be invoked when new objects are upload to the bucket.
Parameters:
  BucketName:
    Type: String
    Description: Name of S3 bucket to be created. The Lambda function will be invoked when new objects are upload to the bucket. If left blank a name will be generated.
    MinLength: "0"
Conditions:
  BucketNameGenerated:
    Fn::Equals:
      - Ref: BucketName
      - ""
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::If:
          - BucketNameGenerated
          - Ref: AWS::NoValue
          - Ref: BucketName
  LabelDetectFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: DetectImageLabels::DetectImageLabels.Function::FunctionHandler
      Runtime: dotnetcore3.1
      CodeUri: ""
      Description: Default function
      MemorySize: 256
      Timeout: 30
      Role: null
      Policies:
        - AWSLambdaFullAccess
        - AmazonRekognitionReadOnlyAccess
      Events:
        NewImagesBucket:
          Type: S3
          Properties:
            Bucket:
              Ref: Bucket
            Events:
              - s3:ObjectCreated:*
Outputs:
  BucketForImages:
    Value:
      Ref: Bucket
    Description: Upload images to this bucket to trigger the Lambda function
There are essentially 3 key elements here:
- A parameter for the name of the S3 bucket with the images
- A resource definition for the S3 bucket itself
- A resource definition for the Lambda function.
We should either provide a name for the S3 bucket or let it generate a name. Either way an S3 bucket will
be created, due to the resource definition for it. There is also a definition for the Lambda function,
with permission settings for S3 and Rekognition and that it will receive events from the S3 bucket.
I prefer to work with YAML, so I replaced the JSON version with the YAML version and call it serverless.template.yml.
The project file DetectImageLabels.fsproj should then be updated to refer to this file. Depending on what
IDE you use, you may be able to do that from the GUI itself, or you may have to edit the project file directly:
  <ItemGroup>
    <Compile Include="Function.fs" />
  </ItemGroup>

  <ItemGroup>
    <None Include="aws-lambda-tools-defaults.json" />
    <None Include="Readme.md" />
    <None Include="serverless.template.yml" />
  </ItemGroup>

</Project>
Or you could simply keep the old file name and overwrite its contents with the YAML version - or stay with the JSON. It is up to you.
Solution deployment
What may not be immediately obvious is that to deploy the serverless package using AWS SAM and the dotnet
lambda command, the package first needs to be uploaded to an S3 bucket and is then deployed from there via the Cloudformation template. So we need a different S3 bucket than the one the solution itself uses (and creates): a deployment bucket.
This deployment bucket can be a bucket you already have in place for this purpose, or you can create a new one in the AWS Console, or through the AWS CLI, for example:
aws s3 mb s3://my-unique-deployment-bucket-name
Note that the bucket must be created in the same region as where you intend to deploy the Lambda function.
In previous posts, I used eu-north-1 region, but in this case, I will switch to eu-west-1 instead.
The reason for this is that the Rekognition service is not available in eu-north-1.
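For example, the deployment bucket can be created directly in the chosen region with the AWS CLI (the bucket name is just a placeholder):

aws s3 mb s3://my-unique-deployment-bucket-name --region eu-west-1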
So in order to avoid too much typing on the command line, I enter many of the parameters into the aws-lambda-tools-defaults.json file:
{
  "profile": "erik",
  "region": "eu-west-1",
  "configuration": "Release",
  "framework": "netcoreapp3.1",
  "s3-bucket": "deploybucket-0123456789",
  "s3-prefix": "DetectImageLabels/",
  "template": "serverless.template.yml",
  "template-parameters": "BucketName=imagestore-0123456789"
}
The profile and s3-bucket (the deployment bucket) would be names you pick yourself, as well as the value of the BucketName parameter to the Cloudformation template (imagestore-0123456789).
With this we are ready to deploy the solution, using the command dotnet lambda deploy-serverless. The only added
parameter we need to specify is the name of the Cloudformation stack. In this case, I will call it
DetectImageLabels. So let us deploy this! (It takes about one and a half minutes.)
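Assuming the stack name option of the Amazon.Lambda.Tools CLI is --stack-name, the full command would look something like this:

dotnet lambda deploy-serverless --stack-name DetectImageLabels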
Testing our Lambda function
To test the function and see how it works, I picked a few pictures.
Picture 1
So in the AWS Console, if we check the first picture and what tags have been set on it, we see 4 tags:
We can also look at the logs for the Lambda function execution and see the log information there:
Pretty good I think. The first picture was also the sample picture that was included in the project itself
when we instantiated the template.
Pictures 2-5
Since we see the same information both as tags and in the logs, I will show the log entries only with the rest
of the pictures here.
I think this was reasonably good for Robocop. It did not pick up the gun in the hand.
I was curious to see what it would make of this poster-type picture. It did pick up that there was a mammal in the picture, but not that it was a monkey. What made it pick antelope, I wonder? The confidence value is a bit
lower on these parts.
Some labels are spot on, some are not so clear. I was wondering if it would detect the sign and the number on
it, but it does not seem so - at least not with a reasonably large confidence value.
The Bill & Ted picture did pick up a few things correct. I thought it was a bit funny that it included Karaoke as a label...
Picture 6
This picture was not in my original test set-up; I just put it in to see if an AWS service would recognise the
company logotype. I must say I was a bit surprised by the results...
Cold starts
The time between processing picture 1 and picture 2 was quite long, while pictures 3-5 were processed just after picture 2. This can be seen in the execution times - pictures 1 and 2 took about 4.5 seconds each, while the other three were in the 0.5-1.5 second range.
So, in this case, there are 3-4 seconds extra in cold start time, which is what happens when a Lambda function runs for the first time or if it has not been executed for some time. In these cases, AWS needs to provision a new virtual machine to run the Lambda code on and that takes a bit of time, compared to one that is already up and running.
Those seconds are a significant amount of time for solutions that may be time-sensitive. For our very simple Lambda in part 3, the execution time without cold start was about 1 ms, with a cold start it was 300-350 ms.
For the cold starts for DetectImageLabels, Lambda reports an init time of a bit over 300 ms. So of that cold start time, about 0.3-0.4 seconds go for Lambda itself and the rest would be the time that is processing time for the solution
itself.
I am no expert on .NET, the CLR (Common Language Runtime) or CIL (Common Intermediate Language), but my guess is that a fair amount of JIT (just-in-time) compilation, translating the CIL representation into native code, happens during the cold start phase. In Java land that would correspond to JIT compilation of Java bytecode into native code. See also this view of the .NET architecture.
So what happens to the cold start times if we give the Lambda execution environment more capacity? In AWS Lambda it is not possible to change network I/O, CPU and memory individually (as you can with some services in GCP) - the memory setting is the only knob to turn. However, increasing the memory also implicitly increases the other resources: more memory allocated means more CPU power allocated as well. So presumably more CPU should also result in
faster JIT compilation.
For our original deployment, the memory setting was 256 MB. I re-deployed the solution with a memory setting of 1024 MB. In this case, the cold start times dropped to about 1.5 seconds, from about 4.5 seconds. The execution times with
a warmed-up Lambda were very similar to the ones before, which indicates that most of that time is spent waiting for
Rekognition to do its work.
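For reference, the memory setting is a one-line change in serverless.template.yml; the snippet below corresponds to the 1024 MB re-deployment:

  LabelDetectFunction:
    Type: AWS::Serverless::Function
    Properties:
      # ... other properties unchanged ...
      MemorySize: 1024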
Another re-deploy with memory set to 2048 MB gave cold start times of about 800-900 milliseconds. Init times were
still 300-400 milliseconds in all cases.
There are numerous posts about comparing cold start times for different runtimes, like this one which is fairly recent.
I have learned that .NET has a concept called ReadyToRun, which compiles the code ahead of time to native code, instead of it being JIT-compiled at runtime. This would likely bring cold start times down further.
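A minimal sketch of what enabling it could look like in the DetectImageLabels.fsproj file - PublishReadyToRun is a standard .NET publish property, but how much it actually helps for Lambda cold starts is something I have not measured here:

  <PropertyGroup>
    <!-- Ahead-of-time compile the assemblies to native code at publish time -->
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>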
Another option to investigate is maybe to use Fable to transpile F# to Javascript and use the Node.js runtime, which has better cold start times.
Summary
This post was about dipping the toes more into building serverless solutions in F# with AWS Lambda, using AWS SAM.
In my opinion, using the dotnet CLI in combination with AWS SAM was a somewhat better experience than I had expected,
although my expectations were not particularly high, to be honest. There is a reason there are a bunch of 3rd party "serverless" frameworks for AWS Lambda - going back a bit in time, the workflow and experience with AWS's own tooling was pretty bad.
AWS SAM has improved a bit and now AWS also provides its Cloud Development Kit (CDK),
which is the topic for the next blog post in this series.
Source code
The source code in the blog posts in this series is posted to this Github repository: