Part 2 of a 4-part series on AWS Serverless IoT
In part one of this series of hands-on labs covering AWS Serverless IoT, I showed you how to develop the "World's Simplest Synchronous Serverless IoT Dashboard on AWS." The design was fairly simple:
We ingest IoT data from our device into AWS IoT Core.
That JSON IoT payload is then put into an S3 bucket using an IoT Core Action/Rule.
Our static website, in that same public S3 bucket as our IoT data, then consumes our IoT data on an adjustable time interval.
The web host then graphs the IoT data in a line chart using Highcharts.js.
While this IoT implementation was both simple and effective (in fact, the "World's Simplest," as determined by extensive scientific review), it did have some problems.
The main problem with the previous synchronous design was that IoT data was only fetched on an interval. This led to over-fetching of stale data, as well as under-fetching of new IoT data that could easily be missed if the polling interval didn't line up with data replacement in our S3 bucket. It should be remembered that this is an inherent 'issue' with serverless design. With a "serverful" design, like a Linux instance on AWS EC2, we have a live server that can "serve up" data on demand. With a "serverless" model, AWS Lambda runs server code on demand, so the web host and the server can't implicitly execute coordinated "CRUD" operations without special provisions.
So how could we fix these issues? Well, the obvious answer is to move our IoT design flow from a synchronous polling model to an asynchronous model. This can also be re-framed as moving from a "client pull" of data by our website host to a "server push" of data from AWS Lambda. There are various ways to do this in a serverless model; here are three:
- AWS WebSockets in the browser.
- MQTT over WebSockets using AWS IoT device SDK for JavaScript in the browser.
- GraphQL with AWS AppSync/Amplify in the browser.
AWS WebSockets to the rescue
In this tutorial we will focus on the AWS WebSockets solution for asynchronous serverless design. I have a more generic WebSockets solution tailored for IoT that requires manual input of the connection ID, which simplifies both of the lambda functions I use in this tutorial. You can see that more standard solution in my Udemy class or on my YouTube channel. For this written walk-through lab, however, I'm providing my "improved" WebSockets solution, which is more complicated but eliminates the need for manual input of the connection ID in an otherwise automated design flow.
Just like MQTT, which most of you are familiar with, WebSockets allows bi-directional communication, so our lambda function can send data to, and receive data from, our website host. This lets us move from synchronous client polling to asynchronous server pushing. AWS WebSockets requires AWS API Gateway to provide the external and internal WebSocket endpoint URLs that make this possible.
Synchronous – Our website pulling IoT data via polling from a data repository in S3 on interval (Client pull)
Asynchronous – Our Lambda function pushing IoT Data via WebSockets to our Website host as soon as the data is available (Server push)
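To make the contrast concrete, here is a minimal sketch of both models in Python. This is illustrative only; the URLs, object name, and interval are placeholders rather than values from this lab, and the push example assumes the third-party websockets package.

```python
# Minimal sketch of "client pull" vs. "server push" (all URLs are placeholders)
import asyncio
import json
import time
import urllib.request

import websockets  # pip install websockets

DATA_URL = "https://my-example-bucket.s3.amazonaws.com/iot_data.json"
WSS_URL = "wss://example123.execute-api.us-east-1.amazonaws.com/production"

def client_pull(interval_s=5):
    """Synchronous: re-fetch the S3 object on an interval, even if nothing changed."""
    while True:
        with urllib.request.urlopen(DATA_URL) as resp:
            print("pulled:", json.load(resp))  # may be stale, or may have missed an update
        time.sleep(interval_s)

async def server_push():
    """Asynchronous: sit idle until the server actually pushes new data."""
    async with websockets.connect(WSS_URL) as ws:
        async for message in ws:  # wakes up only when a payload arrives
            print("pushed:", json.loads(message))

if __name__ == "__main__":
    asyncio.run(server_push())  # or client_pull() for the old model
```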
Now that we have a basic understanding of the advantages of WebSockets for asynchronous IoT, we can proceed to build our design flow on AWS to achieve them.
Table of contents:
Step 1 - Create a public Bucket in S3 and then enable static Web hosting
Step 2 - Set up a variable in the Systems Manager Parameter Store
Step 3 - Creating our Lambda functions
Step 4 – Creating WebSocket Endpoints in AWS API Gateway
Step 5 – Creating an AWS IoT Core Action and Rule
Step 6 – Uploading your HTML and JavaScript code to create an asynchronous visualization for your IoT data.
Step 7 – Populating and visualizing your IoT data using an automated IoT data producer
All the code posted in this tutorial can also be found at:
✅Step 1 - Create a public Bucket in S3 and then enable static Web hosting
For those of you who read my first article, "World's Simplest Synchronous Serverless AWS IoT Dashboard," I already explained this process step by step. For those who didn't read it, I'll reiterate the process here. You can skip this step if you remember the procedure from the first tutorial in this series. After we make the public bucket, we are going to add web hosting capabilities to it.
Whenever we create a public bucket the first caveat is to confirm the bucket will only store data that we don’t mind sharing with the world. For our example we are just using the S3 bucket to hold IoT JSON data showing temperature, humidity, and timestamps. I think sharing basic environmental data from an unknown location is not too much of a privacy risk. The advantage of using a public bucket for our static web host, with an open bucket policy and permissive CORS rule, is that it makes the website easily accessible from anywhere in the world without having to use a paid service like AWS CloudFront and Route 53.
Since re:Invent 2021, AWS has changed the process for making an S3 bucket public. They have added one extra default permission that must be proactively changed to ensure you are not declaring a public bucket by mistake. AWS is especially concerned with people making buckets public unintentionally, the danger being that they might hold sensitive or personal data; in the past, unethical hackers have used search tools to find and exploit private data in public S3 buckets. Fortunately for our use case, we don't care about outsiders viewing our environmental data.
Many of you already know how to make a public S3 bucket for a static web host on AWS. For those that don't know how to do this in 2022, I will document it below.
Making a Public S3 Bucket
The process of creating a public S3 bucket for website hosting:
Go to AWS S3 and then select “Create bucket”
A) Give your bucket a globally unique name, here I call mine a catchy name: mybucket034975
B) Keep your S3 bucket in the same region as the rest of your AWS services for this lab.
C) Switch "Object Ownership" to "ACLs enabled"; this is new for late 2021! We now must first enable our Access Control Lists before we can make them public.
D) Unblock public access for your S3 bucket and acknowledge that you really want to do this. Scary anti-exculpatory stuff! 😧
E) Finally, select the "Create bucket" button at the bottom of the screen. That's all you have to do on this page, but don't worry, we are going to have more opportunities to make sure we really, really, and truly want to create a public bucket soon. 👍
F) Now go back into your newly created bucket and click on the "Permissions" tab.
G) Go to "Bucket policy" and choose "Edit." We will paste and save a basic read-only policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRea2411145d",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::<Paste-Your-Bucket-Name-Here>/*"
        }
    ]
}
You must paste the name of your bucket into the policy, followed by '/*' to allow read (GetObject) access to every object within the bucket. It's also a good idea to change the "Sid" to something unique within your account.
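If you'd rather script this step than click through the console, a boto3 sketch along these lines should attach the same policy (the bucket name is a placeholder for your own):

```python
import json

import boto3

bucket = "mybucket034975"  # your globally unique bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicRea2411145d",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:GetObjectVersion"],
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# Attach the read-only policy to the bucket
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```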
H) Now we get a chance to revisit that ACL we enabled earlier in this process. Click "Edit" and then make the changes as shown below:
We are giving "Everyone," or at least those who know or can discover our unique bucket URL, permission to read our bucket info. Click the "List" and "Read" buttons where shown, acknowledge once again that you are extra special certain you want to do this 😏, and then click "Save changes."
I) Wow, we are at the last step in creating a public bucket. Now we should set the CORS policy so we don't get any pesky access-control "origin not allowed" issues for cross-domain access – I hate those 😠!
CORS rules used to be in an XML-only format, and then AWS decided to keep everything consistent and switch the CORS format to JSON. Even though this change caused some conflicts with legacy XML CORS rules, it was the right choice, as JSON is clearly better than XML despite what the SOAP fans on social media will tell you 👍. Below is a generic CORS JSON document you can use in your own S3 bucket:
[
    {
        "AllowedHeaders": [
            "Authorization"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 6000
    }
]
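Again, for the script-inclined, the same CORS document can be applied with boto3 (the bucket name is a placeholder):

```python
import boto3

cors_config = {
    "CORSRules": [{
        "AllowedHeaders": ["Authorization"],
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 6000,
    }]
}

# Apply the CORS rules to the bucket
boto3.client("s3").put_bucket_cors(
    Bucket="mybucket034975",  # your bucket name
    CORSConfiguration=cors_config,
)
```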
That's it for making your cheap and easily accessible public bucket, but now it's time to effortlessly turn it into a static web host.
As I said before, AWS makes it so the same S3 bucket can both hold IoT data and host a static website at a stable URL for pennies a month.
We are now ready to convert our public bucket so that it can facilitate hosting a static website.
Go to your new S3 public bucket, select the "Properties" tab, then scroll down to the bottom where we can edit "Static website hosting" and select "Edit."
Now enable website hosting and name your index document "index.html"; this will be the landing page for our visualization website. Click "Save changes" at the bottom of the page and you are good to go.
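If you're scripting the setup, the equivalent boto3 call is short (bucket name again a placeholder):

```python
import boto3

# Enable static website hosting with index.html as the landing page
boto3.client("s3").put_bucket_website(
    Bucket="mybucket034975",  # your bucket name
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)
```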
That's it! Your open public bucket is now configured as a web host with a unique URL that is statically available worldwide. You have just changed your uber-cheap and accessible public bucket into an uber-cheap and accessible public bucket that can also host a website at a static URL. 😲
In my Udemy course I talk more about inexpensive ways to add security to accessible public buckets and static websites in S3 while avoiding paying for CloudFront or Route 53. However, I will reveal "one weird trick" that I find very effective for pretty good, free S3 public bucket security: simply google "restrict IP range in a S3 public bucket policy."
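As a taste of what that search will turn up, a representative policy statement scopes the public-read Allow from earlier to your own IP range (the Sid, bucket name, and CIDR below are placeholders):

```json
{
    "Sid": "AllowReadFromMyIpRange",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::<Paste-Your-Bucket-Name-Here>/*",
    "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
    }
}
```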
✅Step 2 - Set up a variable in the Systems Manager Parameter Store
WebSockets requires us to keep hold of the unique session connection ID, similar to how unique client IDs are required by the MQTT protocol to keep track of device clients. I will be using the AWS Systems Manager Parameter Store rather than DynamoDB (as is typical) to store our connection ID. I can get away with this massive simplification and cost-saving mechanism over using a database because, unlike the generic and overused "chatroom" WebSocket example, which requires us to keep track of an unknown number of connection IDs (chatroom participants), for our use case we can be assured that we only need one connection ID. This connection ID denotes the connection between our browser client and AWS as our server.
Now navigate in the AWS console to the "Systems Manager" service and select "Parameter Store" in the panel on the left-hand side of the screen. In the upper right of the screen, select the "Create parameter" button.
Choose a parameter name like "connection_id", choose Type: String, and then put some dummy string value into the box below (we don't care about the value here because it will be overwritten by our connection lambda). That's it, super easy, and for all practical purposes the Parameter Store is free for up to 10,000 standard parameters; compare that cost to DynamoDB once you are off the free tier!
Finally, hit "Create parameter" and you are done.
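If you'd rather create the parameter programmatically, a boto3 sketch (assuming the "connection_id" name from above) looks like this:

```python
import boto3

boto3.client("ssm").put_parameter(
    Name="connection_id",  # the name our lambdas will reference
    Value="xxxxx",         # dummy value; the connection lambda overwrites it
    Type="String",
    Overwrite=True,        # lets you safely re-run this script
)
```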
✅ Step 3 – Creating our Lambda functions
We will need two lambda functions for this lab.
A) Lambda function one – the 'connection' lambda: this function receives the connection ID from the website host via the $connect route and then stores the ID in the AWS Parameter Store as a string variable. The $connect route in API Gateway is mapped to this lambda.
B) Lambda function two – 'sendIoTdata': this function will forward our IoT data to our static website host through the 'message' route in API Gateway. We need both the lambda-mapped 'message' route and the ability to execute the API Gateway 'post_to_connection()' function to achieve bi-directional data transfer, with API Gateway mediating the data pipe between our lambda function and our website. We will also use an API from the AWS Parameter Store to retrieve the connection ID that was written by our 'connection' lambda function.
1. Connection Lambda
Navigate to Lambda and create a new Lambda function in Node.js. Call it something like "myConnection", or choose your own name. We will be using the Node.js 14 runtime.
After you create your lambda, paste the following code into the function.
//Extra permission required: SSM (search for 'system' in inline policies)
const AWS = require('aws-sdk');

var mySSM_Client = new AWS.SSM(); //create a new client object of the Systems Manager class

exports.handler = async (event, context) => {
    console.log(event);
    let connectionId = event.requestContext.connectionId;
    console.log("myConnectionID is: ", connectionId);

    //-----------------Begin SSM Code
    //https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SSM.html#putParameter-property
    var params = {
        Name: '<Insert-Your-SSM-Parameter-Name-Here>',
        Value: connectionId,
        Overwrite: true //not required, but the default is false
    };

    //The await and .promise() stub are not documented but are necessary
    //for the function to work - UNFORTUNATELY
    var mySSM_request = await mySSM_Client.putParameter(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else console.log("success: ", data); // successful response
    }).promise();
    //var mySSM_request = await mySSM_Client.putParameter(params).promise(); //should also work, sans pedantic if/else
    console.log("My request: ", mySSM_request);
    //------------------End SSM Code

    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response; //we need a response or we get disconnected immediately
};
At this point you should paste the name of the parameter string you made in the Parameter Store earlier into the lambda function, in the params object where it reads:
Before:
Name: '<Insert-Your-SSM-Parameter-Name-Here>'
After:
Name: 'connection_id',
As you can see, the function has two main routines.
A) The lambda function extracts the connection ID by digging into the blob of browser data that is sent over as soon as we open our custom visualization website. This works by integrating this connection lambda with the $connect route key that we will set up in API Gateway soon. You may not have realized it, but the code below:
console.log(event);
let connectionId = event.requestContext.connectionId;
console.log("myConnectionID is: ", connectionId)
logs your event info to AWS CloudWatch. In CloudWatch you can view the big blob of data your browser sent to your lambda. Deep within this blob is the connection ID that will be saved in the AWS Systems Manager Parameter Store.
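For reference, an abridged and hypothetical $connect event might look something like this in CloudWatch; note where the connection ID lives:

```json
{
    "headers": { "Host": "example123.execute-api.us-east-1.amazonaws.com" },
    "requestContext": {
        "routeKey": "$connect",
        "eventType": "CONNECT",
        "connectionId": "AbCdEfGhIjKlMnOp=",
        "domainName": "example123.execute-api.us-east-1.amazonaws.com",
        "stage": "production"
    }
}
```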
B) After the connection ID is successfully extracted, the lambda function uses the "put" API from the Systems Manager Parameter Store to save the connection ID to our string variable. The dummy value in the Parameter Store (xxxxx…) is now overwritten with the real connection ID and is made available for use when called by the "sendIoTdata" lambda function that we will create next. Whenever a new connection is made, the new connection ID is sent from the website via the $connect route key.
A few notes about this function are in order:
You may notice that I have put a link to the putParameter API documentation in the lambda function as a comment. Notice that I had to modify the original API example code or it would not work as documented by AWS. I received no errors pointing to the problem with the 'put' API, but the parameter string value would not be stored. I have dealt with this issue before when using Node.js in Lambda, and it seems to be an ongoing challenge. To fix the problem I had to add the additional await and .promise() stub to the call. I will make only two opinionated points about this: one, the Lambda tools team should address this issue by optimizing the Lambda runtime for Node.js when there are no external calls, and thus no indeterminate waits, required; two, the longer I work with Lambda in Python and Node, the more I prefer using Python for Lambda when appropriate. Assured sequential execution of instructions just makes dev life easier 😎.
Adding permissions to our connection Lambda Function
No, we aren't done with our connection lambda yet; we still need to add the necessary permission so that our lambda function can write to the Parameter Store in AWS Systems Manager. To accomplish this, navigate to IAM by going to:
Configuration tab → Permissions tab → Role name → and then open your role to get to AWS IAM.
Within IAM we need to add an inline policy. Click "Add inline policy" on the right-hand side of the screen.
We will add our Parameter Store permission, which lives in the "Systems Manager" service. To add this permission, search for it by typing "sys" in the policy search box until "Systems Manager" comes up, and then grant it all Actions and all Resources. Now click "Review policy" and give your policy a name like "SysManager4connection". Your screen should look like this:
Press "Create policy" and now you are done.
Now that we are done assigning the needed Systems Manager permission for our policy we can create our second Lambda function.
2. sendIoTData Lambda
Navigate back to Lambda and create a new Lambda function in Python 3.8. Call it something like "sendIoTData", or choose your own name.
Select "Create function" and then paste the following code into it:
import json
import boto3

Websocket_HTTPS_URL = "https://<Insert-Websocket-Endpoint_Here>"
client = boto3.client("apigatewaymanagementapi", endpoint_url=Websocket_HTTPS_URL)
ssm_Client = boto3.client('ssm')

def lambda_handler(event, context):
    print(event)
    response_ssm = ssm_Client.get_parameter(Name='<Insert_ConnectionId-Parameter-Name-Here>')
    print("my stored connection id: ", response_ssm['Parameter']['Value'])
    connectionId = response_ssm['Parameter']['Value'] #dig into the response blob to get our string value

    Test_Message = json.dumps({"message": "Hello from lambda, hardcoded test message"}) #optional hardcoded test message
    IoT_Message = json.dumps(event)

    #AWS API Gateway APIs require 'key=value' arguments
    response = client.post_to_connection(ConnectionId=connectionId, Data=IoT_Message)
This Lambda function accomplishes two main tasks.
A) The lambda function retrieves the stored connection ID we saved with the previous function and then passes it as a key=value argument to our post_to_connection() API call. The lambda uses the get_parameter() API to retrieve the connection ID we saved in the Systems Manager Parameter Store with the connection lambda.
B) The lambda function receives our incoming IoT data from AWS IoT Core as an event, and then dispatches the IoT JSON payload, addressed with the connection ID, to the internal WebSocket https endpoint via the API Gateway client's post_to_connection() function, and on to our website via the external WebSocket endpoint.
At this point you should paste the name of your parameter variable from the Parameter Store into the lambda function:
Before:
response_ssm = ssm_Client.get_parameter(Name='<Insert_ConnectionId-Parameter-Name-Here>')
After:
response_ssm = ssm_Client.get_parameter(Name='connection_id')
Now deploy the sendIoTdata function.
At this point we still need the internal WebSocket HTTPS endpoint which we won’t get until we create a WebSocket API in AWS API Gateway. We will accomplish this in the next section.
A few interesting notes help explain the operation of this sendIoTdata function. To retrieve our connection ID we have to dig into the blob of data returned as a response when invoking the SSM client object. This is a different blob of information than the one we received from our website host. If you want to see this information, or any other information in a lambda, simply print the response and go to AWS CloudWatch to examine the blob. CloudWatch is a necessary tool for any lambda debugging. Conveniently, CloudWatch permissions are always included in the basic lambda execution role that is created automatically when you create your function.
A second thing to note is that the API:
post_to_connection(ConnectionId = connectionId, Data = IoT_Message)
requires key=value pairs as function arguments. The message can be any hardcoded string we add to the lambda function or just a JSON payload from a test event. Of course, we will be using a real IoT JSON payload as the message when we add in the AWS IoT Core service in a coming step. This function is part of the API Gateway management client ("apigatewaymanagementapi") and thus needs the related permission to execute the API.
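This also means you can exercise the function from the Lambda console before wiring up IoT Core: configure a test event shaped like the payload our rule will eventually deliver, for example the hypothetical one below. (The post will only succeed once the WebSocket endpoint from Step 4 is pasted in and your website has an open connection.)

```json
{
    "temperature": 23,
    "humidity": 61,
    "timestamps": 1651234567890
}
```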
Adding permissions to our sendIoTdata Lambda function
Bad news, we have to add two extra permissions for this Lambda function to integrate it with our complete design flow. Good news, you already know how to do this from our previous step. The two permissions we will add are “Systems Manager” and “ExecuteAPI.”
To add the "Systems Manager" permission policy to give access to the parameter store, simply follow the same instructions from the previous lambda permissions.
To find the "ExecuteAPI" permissions policy, just search for "execute" in the policy search box. Duplicating the same process as before, you can give access to all resources to make things easy, or narrow the permissions down to the relevant ARN if you have to be pedantic or are on a shared AWS account. Now you are ready to move on to the next step. More good news: you have just completed the most difficult part of this tutorial.
You should now have three policies for this lambda: the default execution role, Systems Manager, and ExecuteAPI.
Finally, note that we need "ExecuteAPI" in the 'sendIoTdata' lambda function because the "post_to_connection()" API needs to pass our IoT payloads through API Gateway before they can be sent to our webpage. The ExecuteAPI permission policy allows our lambda function to 'execute' the internal https endpoint via the API Gateway 'client' using the "post_to_connection()" API. We will be creating our internal and external WebSocket endpoints in AWS API Gateway in the next section.
✅ Step 4 – Creating WebSocket Endpoints with AWS API Gateway
WebSockets form a direct connection with your website host, and thus they need a URL that tells both the website and the lambda function where to send their data. API Gateway provides the endpoints for this exchange, just as it would for a normal REST API.
We will obtain two WebSocket endpoints from the creation process in API Gateway: one internal endpoint (https) for use in the lambda function and one external endpoint (wss) for use on our webpage.
External WebSocket endpoint: this is the URL with a 'wss' prefix. This is the AWS WebSockets endpoint needed in our JavaScript code on our website to communicate with API Gateway, and in turn, communicate with our lambda functions.
Internal WebSocket Endpoint: this is the URL with a 'https' prefix. This is the AWS WebSocket endpoint needed in our 'sendIoTdata' lambda function to communicate with API Gateway through the post_to_connection() function, and in turn, communicate with our web page.
You should now create your own API Gateway WebSocket endpoints and routes by navigating to API Gateway → Create API and choosing to build a WebSocket API.
Select a name for your WebSocket API and then type "request.body.action" in the box for the Route Selection Expression. This is the standard path for designating a WebSocket action like "message", "join", or "send".
Choose the "Next" button; on the next screen we will select one pre-made macro route and one custom route. I'm not adding optional routes, to keep the tutorial as simple as possible while still maintaining reasonable functionality.
First, add the pre-made route with the macro called $connect; it will route to our "connection" lambda function. The second route is a custom route we will call "message". This custom route forms the bi-directional pipe, or "socket," between our website and AWS, allowing for incoming and outgoing data.
Select “Next"
Now we have to link our WebSocket API routes to our two previously created lambdas. You may now see why I created the lambdas first, even though it is counterintuitive to our design; this is typical for IoT design flows, as we often have to work backwards. To link the two lambdas to the two routes, just select the lambda functions you created and attach them to the route keys of similar names.
Select "Next", leave the stage name as "production", select "Next" again, and then finally "Create and deploy."
To view the WebSocket endpoints, go to the "Stages" tab on the left of your screen. Click your only stage, which should be in blue and called "production." You should now see your two WebSocket endpoints at the top of your screen. Leave this screen open, as we will need both endpoints.
As an aside, this single external "wss" socket URL can be used from any static web host external to AWS to connect back to AWS. Thus you can also host this Highcharts visualization website on JSBin, JSFiddle, playcode.io, or any other web-based host you like, if all you want to do is test the IoT design and don't care about retaining your website after testing.
OK, now we can complete our "sendIoTdata" lambda function with the new internal https WebSocket URL and then re-deploy it. To do this, open a new tab and navigate back to your "sendIoTdata" lambda function. The format for pasting the endpoint into your lambda follows next.
Adding the WebSockets internal endpoint to the sendIoTdata Lambda
Insert the HTTPS endpoint:
Before:
Websocket_HTTPS_URL = 'https://<Insert-Websocket-Endpoint_Here>'
After:
Websocket_HTTPS_URL = 'https://4astring7.execute-api.us-east-1.amazonaws.com/production'
For this endpoint, use the 'https://' prefix.
Don't forget to re-deploy your lambda or your changes won't take effect.
✅Step 5 – Creating an AWS IoT Core Action and Rule
In this step we will create an AWS IoT Core Action and Rule that will send our IoT data from the MQTT broker in IoT Core to our 'sendIoTdata' lambda function. Once the IoT payload is in our lambda function, with the connection ID set, the JSON payloads can be forwarded to our static website via the internal WebSocket endpoint.
In AWS IoT Core select: Action → Rules → Create
Select a name for your Rule and then change the Rules Query Statement to the following:
SELECT *, timestamp() as timestamps FROM 'iot/#'
This query does two important things that differ from the default RQS. First, it adds a field called "timestamps" to our incoming JSON IoT payload. This "timestamps" field is a literal match for a variable in our JavaScript web code. Using the premade AWS function timestamp() adds a Unix/epoch timestamp to our payload, which we will need for indexing the X-axis of our line chart. Of course, adding the timestamp can also be done on the device, or in the lambda, depending on your needs and preferences. Often, on non-application-level MCUs in embedded devices, UNIX timestamps cannot be produced without additional RTC hardware or extra libraries; in these cases I often use "uptime" as a relative time index rather than an absolute time format like the one timestamp() provides.
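For example, if a device publishes the first payload below to 'iot/test', the rule forwards the augmented second payload to our lambda (the values are invented; timestamp() returns milliseconds since epoch):

```
Device publishes:            { "temperature": 22, "humidity": 55 }
Rule forwards to the lambda: { "temperature": 22, "humidity": 55, "timestamps": 1651234567890 }
```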
The second important thing the RQS does is use a topic filter in which the hash/pound sign acts as a wildcard extension of the incoming base topic. So, for instance, if I were using MQTT messaging to communicate between lambda functions (the standard for inter-lambda messaging is SNS), I could have one topic published as 'iot/lambdaTopic' and another as 'iot/deviceTopic'. Using 'iot/#' means that both will be picked up by my RQS, and I could then discriminate how I handle multiple topics coming into the same or different lambda functions by topic extension. The exciting possibilities are endless!
Your Rule should now look something like this:
Let’s select “Add Action” next.
Choose to add a “send message to a Lambda function” action. Can you guess what Lambda we will add?
Select "Add action", which sends us back to our Rules page, where we must select "Create rule."
The final thing we must do is make sure our new rule is enabled. On the next page, select the breadcrumbs next to your new rule; you can find it at the bottom of any Rules you may have created previously. Once found, make sure your Rule is "enabled."
✅ Step 6 – Uploading your HTML and JavaScript code to create an asynchronous visualization for your IoT data
We have two files to upload to our public bucket and newly created web host: 'index.html' and 'main.js'. While index.html is exactly the same as the one listed in the previous article in this series, the main.js file is different. It now contains WebSockets-compatible asynchronous code, as shown here:
const socket = new WebSocket('<Insert-Your-WSS-Endpoint-With-Prefix-Here>')

socket.addEventListener('open', event => {
    console.log('WebSocket is connected, now check for your new Connection ID in Cloudwatch on AWS')
})

socket.addEventListener('message', event => {
    console.log('Your iot payload is:', event.data);
    drawChart(event.data);
})
With these few lines of code we can take advantage of WebSockets, enabling the "sendIoTdata" lambda function to perform a "server push" and bring event changes (incoming IoT payloads) directly into our static web host. This is possible through the socket event listener for 'message', which forms a bidirectional link with API Gateway given the external WebSocket address and the route key of 'message'.
The index.html is our launch page. Copy the following code and save it locally as "index.html":
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Dashboard</title>
</head>

<body>
    <div class="container">
        <h1>Asynchronous Weather Data with AWS Websockets</h1>
        <div class="panel panel-info">
            <div class="panel-heading">
                <h3 class="panel-title"><strong>Line Chart</strong></h3>
            </div>
            <div class="panel-body">
                <div id="container1"></div>
            </div>
        </div>
    </div>

    <script src="https://code.jquery.com/jquery-3.1.1.min.js"></script>
    <script src="https://code.highcharts.com/highcharts.js"></script>
    <script src="./main.js"></script>
</body>

</html>
The main.js is our JavaScript file (below). Copy the following code and open it in the editor of your choice. We need to make one change to this code before you can upload it to S3:
let humArr = [], tempArr = [], upArr = [];

let myChart = Highcharts.chart('container1', {
    title: {
        text: 'Line chart'
    },
    subtitle: {
        text: 'subtitle'
    },
    yAxis: {
        title: {
            text: 'Value'
        }
    },
    xAxis: {
        categories: upArr
    },
    legend: {
        layout: 'vertical',
        align: 'right',
        verticalAlign: 'middle'
    },
    plotOptions: {
        series: {
            label: {
                connectorAllowed: false
            }
        }
    },
    series: [{
        name: 'Humidity',
        data: []
    }, {
        name: 'Temperature',
        data: []
    }],
    responsive: {
        rules: [{
            condition: {
                maxWidth: 500
            },
            chartOptions: {
                legend: {
                    layout: 'horizontal',
                    align: 'center',
                    verticalAlign: 'bottom'
                }
            }
        }]
    }
});

const socket = new WebSocket('<Insert-Your-WSS-Endpoint-With-Prefix-Here>')

socket.addEventListener('open', event => {
    console.log('WebSocket is connected, now check for your new Connection ID in Cloudwatch on AWS')
})

socket.addEventListener('message', event => {
    console.log('Your iot payload is:', event.data);
    drawChart(event.data);
})

let drawChart = function (data) {
    var IoT_Payload = JSON.parse(data);
    console.log("our json object", IoT_Payload);
    let { humidity, temperature, timestamps } = IoT_Payload;
    humArr.push(Number(IoT_Payload.humidity));
    tempArr.push(Number(IoT_Payload.temperature));
    upArr.push(Number(IoT_Payload.timestamps));
    myChart.series[0].setData(humArr, true);
    myChart.series[1].setData(tempArr, true);
}
The only change you need to make is to the WebSocket endpoint line of the main.js file:
Before:
const socket = new WebSocket('<Insert-Your-WSS-Endpoint-With-Prefix-Here>')
After:
const socket = new WebSocket('wss://4astring7.execute-api.us-east-1.amazonaws.com/production')
You will need to insert the external AWS WebSocket endpoint you got from API Gateway here. This is the external address that starts with wss://. Make sure to include the 'wss://' prefix when pasting your external address into the main.js file.
After changing this line of code in 'main.js', you are now ready to save it locally and then upload both files into your S3 bucket. To do this, simply select the 'Objects' tab in your S3 bucket and drag both files to the base level of your bucket. Both files should be at the same level of the partition hierarchy.
Press the 'Upload' button at the bottom of your screen, and after both files have been uploaded, select the 'Close' button. You should now have two objects in your bucket: both web code files ('index.html' and 'main.js').
Now is a good time to initiate your static web host by opening a new browser tab with your static website URL. The address of your website can be found by going to the "index.html" object in your bucket and opening the 'Object URL.' Clicking this URL will bring up your website.
The Highcharts code works by using AWS WebSockets with AWS Lambda for asynchronous invocations. The connection code in main.js first gives you the "WebSocket is connected, now check for your new Connection ID in Cloudwatch on AWS" message when your website first connects. You can see this message in the browser console by typing 'ctrl + shift + i' in most browsers; in Chrome the shortcut is 'ctrl + shift + j'.
Now that our website is up, let's send it some IoT data so we can produce our visualization.
✅Step 7 - Populating and visualizing your IoT data using an automated IoT data producer
For this last step we have three ways to populate the visualization, from IoT Core to our web host.
A) Use a device to publish IoT JSON payloads under our topic name.
B) Manually publish JSON data payloads from the MQTT test client in IoT Core as demonstrated earlier in the tutorial.
C) Use a test script to publish IoT data to our topic automatically at a configurable interval and delay between IoT payloads.
For Option A you can simply program your device to publish data to IoT core as I instruct in my course. For Option B you would have to spend some time manually altering and then publishing JSON payloads in the MQTT test client in IoT Core to generate the line chart in the visualization.
For this tutorial I will explain 'Option C.' For this option you need the AWS CLI installed. It’s easy to install with the directions listed here:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
This bash IoT data producer script was provided by AWS and can originally be found at https://github.com/aws-samples. I have already altered the test script to send just temperature and humidity data. Simply insert your AWS region and MQTT topic name (iot/whatever) into the test script where indicated. The bash script uses your AWS CLI to deliver the payload to IoT Core (using your SigV4 credentials from the AWS CLI). You can also change the number of payloads published (iterations) and the wait time between each publish (wait) to produce as much fake IoT data as you like.
#!/bin/bash

mqtttopic='<Insert-Your-IoT-Topic-Here>'
iterations=10
wait=5
region='<Insert-Your-AWS-Test-Region-Here>'
profile='default'

for (( i = 1; i <= $iterations; i++)) {

    #CURRENT_TS=`date +%s`
    #DEVICE="P0"$((1 + $RANDOM % 5))
    #FLOW=$(( 60 + $RANDOM % 40 ))
    #TEMP=$(( 15 + $RANDOM % 20 ))
    #HUMIDITY=$(( 50 + $RANDOM % 40 ))
    #VIBRATION=$(( 100 + $RANDOM % 40 ))

    temperature=$(( 15 + $RANDOM % 20 ))
    humidity=$(( 50 + $RANDOM % 40 ))

    # 3% chance of throwing an anomalous temperature reading
    if [ $(($RANDOM % 100)) -gt 97 ]
    then
        echo "Temperature out of range"
        temperature=$(($temperature*6))
    fi

    echo "Publishing message $i/$iterations to IoT topic $mqtttopic:"
    #echo "current_ts: $CURRENT_TS"
    #echo "deviceid: $DEVICE"
    #echo "flow: $FLOW"
    echo "temperature: $temperature"
    echo "humidity: $humidity"
    #echo "vibration: $VIBRATION"

    # use below for AWS CLI V1
    #aws iot-data publish --topic "$mqtttopic" --payload "{\"temperature\":$temperature,\"humidity\":$humidity}" --profile "$profile" --region "$region"

    # use below for AWS CLI V2
    aws iot-data publish --topic "$mqtttopic" --cli-binary-format raw-in-base64-out --payload "{\"temperature\":$temperature,\"humidity\":$humidity}" --profile "$profile" --region "$region"

    sleep $wait
}
You have to change the fields at the top of the bash script to customize it for your MQTT topic name (iot/whatever) and the AWS region ('us-east-1' or other) in which you developed your AWS services for this tutorial. The other two fields, 'iterations' and 'wait', are optional to edit.
Edit these fields for your own info:
- mqtttopic=''
- iterations (number of payloads to send)
- wait (number of seconds between transmissions)
- region=''
Now save the above code, giving it a name like "IoT_tester.sh". You can run the script by saving it locally and then typing its name at the command prompt. Bash scripts are neat because they work on any operating system with a bash shell (on Windows, for example, via Git Bash or WSL). Activating the test script in MS Windows looks like this:
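If you'd rather stay in Python, a rough boto3 equivalent of the same producer might look like the sketch below; the topic, region, iteration count, and wait are placeholders to edit, just like in the bash version:

```python
import json
import random
import time

import boto3

client = boto3.client("iot-data", region_name="us-east-1")  # your region
mqtttopic = "iot/test"  # your topic (iot/whatever)
iterations = 10
wait = 5

for i in range(iterations):
    payload = {
        "temperature": random.randint(15, 34),  # same range as the bash script
        "humidity": random.randint(50, 89),
    }
    client.publish(topic=mqtttopic, qos=0, payload=json.dumps(payload))
    print(f"Publishing message {i + 1}/{iterations} to IoT topic {mqtttopic}:", payload)
    time.sleep(wait)
```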
😀 🏁
Congratulations! You finished the second tutorial in the series and created an asynchronous Serverless AWS IoT Dashboard using WebSockets. Make sure to stay tuned for parts three and four of this hands-on tutorial series as we get more advanced with Serverless IoT on AWS.