This IoT walk-through lab will show you how to send IoT data from your ESP8266 or ESP32 device, through AWS API Gateway, to Lambda, to a data lake in S3, and finally design a static web page for IoT data visualization.
You may be asking, "Why would you want to deploy an HTTP API when AWS has a well-functioning MQTT broker in AWS IoT Core?" There are a few good reasons to send IoT data through AWS API Gateway directly rather than through AWS IoT Core. As an example, I had a student who was using a SIM7000A cellular modem with his ESP32. The hardware abstraction layer on his device was poorly integrated, so MQTT(S) wasn't enabled, but HTTP worked well on his device. In cases like this, an AWS serverless design flow utilizing the HTTP protocol instead of MQTT can make sense. Some other possible reasons for using HTTP rather than MQTT are:
A) Your embedded device may not be capable of MQTT(s).
B) You may want to utilize REST instead of MQTT(s), and don't mind losing the key advantage of MQTT: lightweight duplex communication.
C) You may simply want to take advantage of the built-in features of API Gateway such as caching, throttling, velocity templates, payload modeling, and data transformations.
That said, 90% of my course curriculum on Udemy still focuses on AWS IoT Core. However, it is important to know how to handle these exceptions. To explore these interesting IoT scenarios, I have designed this tutorial and walk-through IoT lab to help you better understand this serverless IoT implementation on AWS. Note that the ESP32 has better built-in security than the ESP8266, so the Arduino sketches at the end of the tutorial will reflect these differences.
It is also worth noting that the AWS services used in this tutorial are free, or nearly so, for a serverless design without much compute. AWS S3, Lambda, and API Gateway are all extremely inexpensive for prototyping and testing at non-commercial loads. It's unlikely this lab will cost you more than a few cents even if you are no longer on the AWS free tier.
Prerequisites for the tutorial:
A) An AWS free tier or normal AWS account
B) Ability to navigate between AWS services
C) An ESP8266 or ESP32 development board
D) The free Arduino IDE with the device libraries and board manager for your ESP8266 or ESP32 device
How it works - Serverless IoT
Deploy the Serverless IoT infrastructure
- You will create a Lambda function to send your IoT data from API Gateway to S3.
- You will configure API Gateway to handle incoming data from our Arduino sketch.
- You will create an API Key to secure your deployed URL created in API Gateway.
- You will copy the provided Arduino sketch for your ESP8266 or ESP32 and provide your own API Gateway URL.
- You will change the permissions on your IoT data bucket and web page bucket from private to public.
- You will copy the provided 'index.html' file to visualize your IoT data on a static web host held in a second S3 bucket.
Create an S3 bucket to hold your IoT data
Create a new S3 bucket in the region of your choice. Choose a globally unique name for your bucket and make sure to keep the region consistent between AWS services.
✅ Step-by-step Instructions for S3
1. Navigate to the AWS S3 console
2. Create a new S3 Bucket in the same region you decide to use consistently throughout this lab. Name your bucket something globally unique (this AWS requirement is so every bucket has its own static URL)
3. You don't need to set an ACL, bucket policy, or CORS at this time, so just select "Create."
4. Finally create and save a folder/partition within your newly created S3 bucket. Name the folder whatever you like.
We are now ready to move on to creating a lambda function to enhance our IoT data and dispatch it to our newly created S3 bucket.
Create your Lambda function in Node.js
A Lambda function written in Node.js will be used to format, enrich, and dispatch our incoming JSON payload, sent through API Gateway, to the S3 bucket holding our IoT sensor data readings.
✅ Step-by-step Instructions for Lambda
1. Navigate to the Lambda console and create a new Lambda function ("Author from scratch") in the AWS Region of your S3 bucket
2. Choose the latest runtime of Node.js
3. Choose a new basic execution role
4. Press the button to create your Lambda function
5. Paste the Node.js code listed below into your Lambda function console. Make sure to add your own bucket name and folder name, created in the previous section, where indicated in the Lambda code. Leave the (event) line uncommented and keep the (event.queryStringParameters) line commented out for now. We want to see the entire test "event" object at this point in the lab. Later, when we utilize our device, we will limit the incoming IoT payload to just the query string parameters.
After pasting in the code listed below, save your lambda function.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
  var bucketName = "<Your-Bucket-Name>/<Your-folder-Name>";
  var keyName = JSON.stringify(Date.now()); //epoch timestamp used as a collision-free object name
  var content = JSON.stringify(event); //uncomment this statement for testing in Lambda
  //var content = JSON.stringify(event.queryStringParameters); //uncomment this statement after integration with API Gateway
  //keep only one of the two statements above uncommented!
  var params = { Bucket: bucketName, Key: keyName, Body: content };
  s3.putObject(params, function (err, data) {
    if (err) {
      console.log(err);
    } else {
      console.log("Successfully saved object to " + bucketName + "/" + keyName
        + " and data=" + JSON.stringify(content));
    }
  });
};
Link to the lambda code: https://github.com/sborsay/Serverless-IoT-on-AWS/blob/master/API_Gateway_Direct/My-Arduino-lambda-Proxy.js
This Lambda function writes incoming JSON data into our newly created S3 bucket and the folder/data partition within it. Notice that the function 'enhances' our IoT data payload by adding Date.now(), a function that returns an epoch/UNIX timestamp. This is useful as an alternative to the 'UUID' package, as we can sequentially label our data objects/payloads with no fear of collision (i.e. duplicate names). In addition, we don't have to pull in an NPM package, as this time-stamping function is native to the language.
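As a quick illustration of this naming scheme (a standalone Node.js sketch, separate from the Lambda code above):

```javascript
// Sketch: epoch-millisecond object keys, as used by the Lambda above.
// Date.now() is built into JavaScript, so no NPM package is needed.
function makeObjectKey() {
  // JSON.stringify turns the number into a string like "1582578233424"
  return JSON.stringify(Date.now());
}

console.log(makeObjectKey());
```

Because the value is milliseconds since the UNIX epoch, two payloads would only collide if they arrived within the same millisecond, which is unlikely for a single prototyping device.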
6. Currently our Lambda function does not have permission to access our newly created S3 bucket. Next, let's add the necessary permission to the Lambda execution role so our function can write data to our S3 bucket. In Lambda, click on the "Permissions" tab (between the "Configuration" and "Monitoring" tabs) under the function name.
7. Open the execution role we created along with the Lambda function by clicking on the "Role name."
8. A new browser window will open in the IAM console. Click the blue "Attach policies" button so we can add an S3 policy to our Lambda execution role. Type "S3" in the search bar and select the "AmazonS3FullAccess" managed policy. This is not the standard AWS "least privilege" model, but don't worry too much about that; we are going to add better security later. If you know what you are doing, feel free to attach a stand-alone policy limited to "s3:PutObject" as a best practice. After making your selection, click the blue "Attach policy" button.
9. After attaching the managed policy you can now close the IAM window, return to lambda, and click the "Configuration" tab in lambda. That should return you to the coding window. It is now time to test our lambda function to ensure it has the ability to send data to our S3 bucket.
10. Make sure you have entered your S3 bucket name and S3 folder name correctly within your Lambda Node.js code and have saved the function. Note: we aren't using environment variables for these values. Next, click the "Configure test events" drop-down in the upper right of your Lambda configuration window.
11. Within the test console, name your test whatever you like; here I call my test payload event "t1". You can leave the JSON data as is, or alter it to help you remember what you are sending to your S3 bucket as a test. Make sure to keep your test payload in proper JSON format or it won't work. Next, hit "Create" to save your "t1" test event as a new test template.
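If you would rather test with a payload shaped like what API Gateway will eventually deliver, a heavily trimmed proxy-style event might look like this (a sketch only; the real event object carries many more fields, and these values are purely illustrative):

```javascript
// Trimmed stand-in for an API Gateway proxy event (illustrative only).
const testEvent = {
  resource: "/",
  httpMethod: "GET",
  headers: { "User-Agent": "14 ESP8266" },
  queryStringParameters: { temperature: "55", humidity: "66" }
};

// With the event.queryStringParameters line uncommented, the Lambda
// would store only this portion of the payload:
console.log(JSON.stringify(testEvent.queryStringParameters));
// {"temperature":"55","humidity":"66"}
```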
12. After creating your test template in JSON format you should be back in Lambda. We are now ready to test our Lambda function's ability to send JSON data to S3. Click the "Test" button in the upper right of the screen to send your test data off to your S3 bucket and folder.
If everything was done correctly, you should see a null response under "Execution result: succeeded" when you scroll up to the log. The response is "null" because we haven't written any response code.
13. The last step in verifying our Lambda function is to ensure that our test data object was indeed written to our S3 data bucket. Go back to your S3 bucket and folder and check that the data object holding the JSON test payload from Lambda is there (you may need to refresh your S3 folder to see the new data object). Click on your test data object, which will be listed under its Date.now() epoch timestamp, and download it.
You will likely have to download your data object to view it rather than simply clicking the URL. If you try to click the URL without making your bucket and partition public you will get an "Access denied" message. We will be changing this later by making our buckets public.
14. After you download the data object, open the JSON payload in the editor of your choice. If you are down with the cool kids you will likely be using VS Code, which I find to be overkill in many cases; since I am both uncool and lame, I'm using Notepad++ here to open and inspect the test payload.
Awesome! Hopefully you see the JSON test data object your Lambda function dispatched to S3. If not, review the previous steps, as nothing going forward will work. Assuming you were successful thus far, let's move on to configuring AWS API Gateway to work with our new Lambda function.
Create a Rest API to connect your ESP device to Lambda
API Gateway will be used to configure a publicly facing URL that we can access from both our computer and device to send IoT data to our lambda function.
✅ Step-by-step Instructions for API Gateway
1. Navigate to the API Gateway Console in the same region you have been using for the first two sections of this lab.
2. Select "REST API" (public) as your API choice and click "Build."
3. Leave all the defaults, name your API, enter an optional description, then click "Create API."
4. On the next screen, use the "Actions" drop-down menu to create a new method. Choose the "GET" method and click the check mark next to it.
5. Choose "Use Lambda Proxy integration." This injects our HTTP headers, along with the query string parameters, into the 'event' object, which we will parse out later.
6. Select the Lambda function you created in the previous section, then click the "Save" button.
7. After saving your work, go back to the same "Actions" button drop down menu you used to select the GET method, and click it. Now choose to "Enable CORS."
8. Remove all the headers from the "Access-Control-Allow-Headers" field (since we are using an embedded device, our HTTP headers are not standard).
9. Click the "Enable CORS...headers" button and then "yes...replace current values."
10. Next go back to the "Actions" drop down menu and choose to "Deploy API." Choose a "[New Stage]" and name your stage something short. Then click "Deploy."
11. Now that you have connected your API to your Lambda function and deployed the API, it is time to test it. Click the "Invoke URL" address at the top of the page.
12. Clicking "Invoke URL" should open up a new browser window stating "{"message": "Internal server error"}".
Don't worry, this is the correct response, as we haven't configured a custom response. Now let's test our work thus far by entering a query string in the browser window, so we can check that our data is actually getting sent to our S3 bucket. Append a test query string, such as the one below, to your invoke URL.
https://YOUR-API-ID.execute-api.YOUR-REGION.amazonaws.com/YOUR-STAGE-NAME?temperature=55&humidity=66
This is just your unsecured deployment URL concatenated with an arbitrary test query string.
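Behind the scenes, API Gateway's proxy integration parses that query string into the event.queryStringParameters map the Lambda reads. The transformation is roughly this (a plain Node.js sketch with a hypothetical API ID and stage name, not AWS's actual implementation):

```javascript
// Rough sketch of how the query string on the invoke URL becomes
// the queryStringParameters object inside the Lambda's event.
function parseQueryString(url) {
  const query = url.split("?")[1] || "";
  const params = {};
  for (const pair of query.split("&")) {
    if (!pair) continue;
    const [key, value] = pair.split("=");
    params[decodeURIComponent(key)] = decodeURIComponent(value);
  }
  return params;
}

const testUrl =
  "https://abc123.execute-api.us-east-1.amazonaws.com/dep?temperature=55&humidity=66";
console.log(parseQueryString(testUrl));
// { temperature: '55', humidity: '66' }
```

Note that the values arrive as strings ('55', not 55); any numeric conversion is up to whatever later consumes the data.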
13. Now return to your S3 bucket and the folder within your S3 bucket. Refresh your screen and you should have two new data objects with recent epoch timestamps as names. One object was created by simply opening the unsecured deployed URL, and the latest data object will have the temperature and humidity variables added in the queryStringParameters section of the payload. Download the most recent data object and open it in your editor of choice. Verify that the query string parameters contain your variables entered from your browser's URL pane.
Congratulations! We now have a working Lambda function connected to a working, publicly facing URL created in API Gateway. Now it is time to add some security.
Create an API Key to secure our deployed URL
You may notice that we have virtually no security other than keeping our deployed URL private. While it is a good idea to limit total requests and burst requests on our API, it is a better idea to create and enforce an "API key" that the client must possess in order to make a successful request against our Lambda function. Fortunately we can do both: we will create an API key to give the client a valid access mechanism for requests against our deployed URL, and at the same time configure a "usage plan" to limit request overload and other potential abuses. API keys are especially appropriate for IoT; most third-party IoT visualization sites, such as Losant, Ubidots, and ThingSpeak, issue their registered users an API key for external requests.
✅ Step-by-step Instructions for Creating an API Key
1. Go back to your API resources configuration screen and, in the "Method Request" section, change "API Key Required" from false to true.
2. Now we have to repeat the deployment process. This time create a new stage with a name like "Dep_with_api_key" or whatever you like. Our old stage will remain open to the public, and our new stage will require the API key we create next. (You can also delete your old deployment if you no longer wish to have an unsecured URL.)
Re-deploy to your new stage using the "Actions" drop-down button, then test the URL associated with this API-key-required stage. The browser should now return a {"message": "Forbidden"} alert. This is the built-in notice that you are not allowed to use this new URL as is.
3. Now let's create our API key. Navigate back to API Gateway and, in the pane on the left, select "Usage Plans." Once in the "Usage Plans" tab, select "Create."
4. Next we will limit requests per second, bursts, and total monthly requests. Set the request configuration to meet your own needs. Limiting total monthly requests to under 1,000 constrains your account to nominal expense, if any; this protects you if a client who possesses a valid API key exceeds reasonable request volumes. After selecting your throttling and quota rates, click "Next."
5. Next we will attach the new usage plan to our currently deployed URL. Choose the API we created in the previous step, then choose the new deployment you just created with an API key requirement. Click the check mark, then click "Next."
6. Next click "Create an API Key and add to Usage Plan" (that's the box on the right, do not click the box on the left). This will bring up a box to name your specific API Key. Name it something, then click "Save", then click "Done".
7. Now we have to retrieve and copy the alphanumeric cipher for the API Key we just created. To see your new key click on the "API Keys" tab on the screen.
8. Click the "API key" in blue, and now click "Show."
9. Now copy the alphanumeric code for your API Key and keep it handy, you will need it next.
As a side note, we don't need to redeploy our API at this point because we are only changing things on the server side at AWS with a new usage plan and X-API-Key. Most other API key instructionals assume you have to redeploy after creating a usage plan and API key, but this is not needed as long as you deployed after setting the API key requirement to "true" in the "Method Request" window, as we did previously.
Now we are ready to test our new deployment, which requires an API key. Unfortunately we can't simply test the API key in a browser, as the required header can't be set from the browser's address bar. At this point you can move on and see if it works in the Arduino sketch in the next section, or you can test the API key with a free tool like cURL or Postman. Here I will test our new deployment with our API key in Postman.
10. To test our API in Postman, simply select the GET method and paste your API-key-secured deployment URL into Postman's address bar. You can try the request first without the API key added; you should receive the same "Forbidden" message. Now add "X-API-KEY" (letter case doesn't matter) in the headers section and resend your GET request. You should now get the "Internal server error" as before, and the data object should appear in your S3 bucket. Make sure you insert the key in the Headers section, not the Body section, in Postman. Also confirm that this test succeeded by checking your S3 folder for the new data object before moving on to the next step.
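If you prefer code to Postman, the same request can be expressed with Node's built-in https module. This is a sketch with placeholder IDs; the key detail is that the API key rides in an "x-api-key" header rather than in the body or the query string:

```javascript
// Build the https.request options for an API-key-secured GET.
// The API ID, region, stage, and key below are placeholders.
function buildRequestOptions(apiId, region, stage, apiKey, query) {
  return {
    host: apiId + ".execute-api." + region + ".amazonaws.com",
    path: "/" + stage + "?" + query,
    method: "GET",
    headers: { "x-api-key": apiKey }
  };
}

const options = buildRequestOptions(
  "abc123", "us-east-1", "dep_with_api_key",
  "YOUR-API-KEY", "temperature=55&humidity=66"
);

// Sending it would just be:
// require("https").request(options, res => res.pipe(process.stdout)).end();
console.log(options.path);
```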
Congratulations! Your API Gateway URL can now reach the Lambda function that forwards IoT data to S3, as long as you provide your API key along with your GET request. In the next section we will add the API Gateway deployment URL (endpoint), along with our working API key, to our Arduino sketch so that we can send HTTPS requests directly to API Gateway from our ESP device.
Program our device sketch in the Arduino IDE for our ESP device
I have provided sketches for both the ESP8266 and the ESP32; in this section I will focus on the ESP8266. It's worth noting that the ESP32 has built-in HTTPS support along with other WiFi security capabilities, while the ESP8266 does not. Given this, we will focus on the more complicated sketch, which employs SHA-1 fingerprinting on the ESP8266 as a minimum to meet API Gateway's TLS requirements. On top of that we get pretty good security (PGS) by adding our AWS API key to the Arduino sketch running on the device.
For a more professional deployment I would rotate an API Key on the device by using a MQTT subscription topic from a lambda MQTT publisher with an AWS.IoTData object provided by the AWS-SDK. However this method would be part of a more advanced lab.
✅ Step-by-step Instructions for the device sketch
1. At this point we only want to extract the query string parameters from the verbose payload coming from API Gateway. AWS adds a lot of potentially useful information to our incoming IoT data payload that we don't need for the purposes of this tutorial. To remove this extra data, simply go to your Lambda function and comment out:
//var content = JSON.stringify(event);
and uncomment
var content = JSON.stringify(event.queryStringParameters);
Make sure to re-save your Lambda function after making the simple change above.
2. Our Arduino ESP8266 sketch is based on the script found here: https://github.com/esp8266/Arduino/blob/92373a98370618dea09718010b30d311a97f3f25/libraries/ESP8266WiFi/examples/HTTPSRequest/HTTPSRequest.ino
I have altered the sketch to work with AWS and API Gateway. There are a number of fields to fill out with your own information. If you are using the ESP8266 rather than the ESP32, there is one extra field we have yet to explore: the SHA-1 fingerprint. Let's acquire that alphanumeric cipher now. For this step you should use Chrome as your browser.
3. First, go back to the URL of your recent API Gateway deployment after you set the "API Key Required": true and deployed it. The web page should be the website displaying the "Forbidden" alert (as this page requires the API Key we created in the previous section). We can retrieve the SHA-1 thumbprint from here.
To acquire the fingerprint (Chrome calls it a "Thumbprint") for this web page, go to the three-dot menu icon in the upper right corner of your Chrome browser. Then go to:
More tools-->Developer tools-->Security(tab)-->view certificate(button) -->Details(tab)-->Thumbprint
4. You will see the SHA-1 thumbprint as something like this:
98f85efc8765435f0fc11efee981c99cc243274c
Put a space between every pair of characters so it looks like this:
98 f8 5e fc 87 65 43 5f 0f c1 1e fe e9 81 c9 9c c2 43 27 4c
Now the thumbprint is ready to be inserted into your sketch, so copy your own SHA-1 thumbprint and format it the same way.
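If you'd rather not add the spaces by hand, a small helper can do the formatting. This is an optional convenience sketch (run it in Node.js, not on the device):

```javascript
// Insert a space after every two hex characters so a thumbprint copied
// from Chrome matches the format the ESP8266 sketch expects.
function formatFingerprint(raw) {
  return raw
    .replace(/[^0-9a-fA-F]/g, "") // drop any separators Chrome includes
    .match(/.{1,2}/g)
    .join(" ");
}

console.log(formatFingerprint("98f85efc8765435f0fc11efee981c99cc243274c"));
// 98 f8 5e fc 87 65 43 5f 0f c1 1e fe e9 81 c9 9c c2 43 27 4c
```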
5. Now fill out the following fields in the provided sketch with your own information:
A) WiFi network name (make sure your network is 2.4 GHz, not 5 GHz)
B) WiFi Password
C) Host name (First part of API Gateway URL, do not include "https://")
D) URL (API Gateway deployment name)
E) API Key
F) Formatted fingerprint (found in the Chrome thumbprint SHA-1)
/*
HTTP over TLS (HTTPS) example sketch
This example demonstrates how to use
WiFiClientSecure class to access HTTPS API.
We fetch and display the status of
esp8266/Arduino project continuous integration
build.
Limitations:
only RSA certificates
no support of Perfect Forward Secrecy (PFS)
TLSv1.2 is supported since version 2.4.0-rc1
Created by Ivan Grokhotkov, 2015.
This example is in public domain.
* This example modified by Stephen Borsay for AWS Serverless course on Udemy
* to Connect your device directly to AWS API Gateway
* modified for sending fake data buffer, connect any sensor as desired
*
*/
#include <ESP8266WiFi.h>
#include <WiFiClientSecure.h>
#ifndef STASSID
#define STASSID "<YOUR-WIFI-NETWORK>"
#define STAPSK "<YOUR-NETWORK-PASSWORD>"
#endif
const char* ssid = STASSID;
const char* password = STAPSK;
const char* host = "<YOUR-API-GATEWAY-ENDPOINT>.execute-api.<YOUR-REGION>.amazonaws.com"; //do not include "https://"
String url = "<YOUR-API-GATEWAY-DEPLOYMENT-NAME>";
const char* API_KEY = "<YOUR-API-GATEWAY_API-KEY-HERE>";
const int httpsPort = 443;
unsigned long uptime;
// Use web browser to view and copy SHA1 fingerprint of the certificate
//to acquire the thumbprint for this webpage, go to the breadcrumbs in the upper right corner of your browser.
//Then go to Tools-->developer tools-->security-->view certificate-->details(tab)-->thumbprint
//const char fingerprint[] PROGMEM = "98 f8 5e fc 87 65 43 5f 0f c1 1e fe e9 81 c9 9c c2 43 27 4c"; //example thumbprint with proper formatting
const char fingerprint[] PROGMEM = "<YOUR-SHA-THUMBPRINT>";
WiFiClientSecure client;
void setup() {
Serial.begin(115200);
Serial.println();
Serial.print("connecting to ");
Serial.println(ssid);
WiFi.mode(WIFI_STA);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
Serial.println(WiFi.localIP());
// Use WiFiClientSecure class to create TLS connection
Serial.print("connecting to ");
Serial.println(host);
Serial.printf("Using fingerprint '%s'\n", fingerprint);
client.setFingerprint(fingerprint);
if (!client.connect(host, httpsPort)) {
Serial.println("connection failed");
return;
}
//String url = "/dep1";
Serial.print("requesting URL: ");
Serial.println(url);
}
void loop() {
int t = random(30,110); //fake number range, adjust as you like
int h = random(50,100);
Serial.print("uptime: ");
uptime = millis()/1000;
Serial.println(uptime); //prints time since program started
client.print(String("GET ") + url + "/?uptime=" + (String) uptime
+ "&temperature=" + (String) t + "&humidity=" + (String) h + " HTTP/1.1\r\n" +
"Host: " + host + "\r\n" +
"x-api-key: " + API_KEY + "\r\n" +
"User-Agent: 14 ESP8266\r\n" +
"Connection: close\r\n\r\n");
Serial.println("request sent");
while (client.connected()) {
String line = client.readStringUntil('\n');
if (line == "\r") {
Serial.println("headers received");
break;
}
}
String line = client.readStringUntil('\n');
if (line.startsWith("{\"state\":\"success\"")) {
Serial.println("esp8266/Arduino CI successful!");
} else {
Serial.println("esp8266/Arduino CI has failed");
}
Serial.println("reply was:");
Serial.println("==========");
Serial.println(line);
Serial.println("==========");
Serial.println("closing connection");
delay(1000);
//unlike MQTT, HTTP/HTTPS has to be reconstructed every time a request is processed
// so reconnect after GET request is completed and key/value URL payload is dispatched
if (!client.connect(host, httpsPort)) {
Serial.println("connection failed");
return;
}
delay(1000);
}
Here is a link to the whole sketch for the ESP8266 on Arduino; you can now upload the sketch to your device after filling out the required fields listed above.
The sketch simply generates random values for temperature, humidity, and uptime. You can easily integrate a DHT11/22, BME280, or numerous other sensors to report actual readings. If you have done everything right, you should see readings on your serial monitor similar to those below. Again, ignore the "Internal server error" message in the terminal, which appears because we never developed a request response.
If you are using the ESP32, the sketch is significantly easier, as its TLS stack can connect securely without pinning a SHA-1 fingerprint. There are a few very good HTTP sketches available on the internet; I decided to modify Rui Santos's open-source ESP32 sketch, adding our AWS-specific code and the X-API-Key header. Below is the GitHub link to the simplified, API-key-secured ESP32 sketch.
Next let's go back to our S3 bucket and ensure that our IoT data payloads landed successfully in our folder.
Now we see our S3 bucket contains our data objects with the "humidity", "temperature", and "uptime" variables within each data object.
Congratulations! You now have completed the base lab. I have added a stretch lab below if you wish to continue with a visualization of your IoT data.
Visualizing our IoT data with Highcharts on a static web host in S3
✅ Step-by-step Instructions for Visualization of IoT data
Now that your data is in your bucket, there are all kinds of manipulations you can perform on the IoT data lake besides visualization. You can apply AI, machine learning, and BI tools, and tie in many other AWS services like SageMaker, Glue, Athena, Redshift, and QuickSight, to name a few, while your IoT data is still in the S3 bucket. For this lab we will create a second, public bucket in S3 to host our visualization website. We will make this new S3 bucket completely open and public, as we aren't using AWS CloudFront, Route 53, or a VPN. Our public web host in S3 will then read the IoT data directly from our soon-to-be-public IoT data bucket. It's important to note that public buckets are NOT appropriate for professional deployments. A professional implementation would use a Lambda function as a private layer to extract, ingest, and consume data from a private S3 data bucket. See my Udemy course for details on this more professional method.
1. We now need to create a new S3 bucket to host our static web site for IoT data visualization. Go back to S3 and create a new bucket and give it a globally unique name. Remember to keep all your buckets and AWS services in the same region.
2. After creating your bucket (I called mine "webhost76"), set it up as a static web host. Go to Properties-->Static website hosting and select "Use this bucket to host a website." Name the index document index.html, then click "Save."
3. Now click on the next tab, labeled "Permissions." Click and deselect "Block all public access," then save and confirm. AWS wants to make sure you know you are allowing your bucket's data to be seen publicly; there have been security breaches in the past where hackers grabbed information from other users' public buckets. In our case we aren't holding sensitive data, so it's permissible to make our buckets public to keep this tutorial simple.
4. Next go to the "Access Control List" and, under "Public access," click "Everyone." Under access to the objects, select "List objects." This gives everyone the ability to read our data. Then click "Save." Notice we aren't granting write permissions, which helps prevent cross-origin injection attacks.
5. Go to the next box, "Bucket Policy." We will insert a JSON-formatted document granting public access to our bucket (see below). I have added some simple security: IP range limiting. This additional IP condition makes our website available only to IPs in our predesignated range. To find your IP, simply google "my IP." Insert your bucket name and IP in the designated areas of the bucket policy listed below, then click "Save." As a note, IPs can be spoofed, but this is a simple way to add some security with minimal extra complication. I have also included a non-IP-protected bucket policy in case you want to view your web page from any remote location.
Later on, when you are done with this section, you can check that your IP limiting works by trying to bring up your visualization website on your smartphone. Over a cellular connection, your phone will typically have an address outside your home IP range (often an IPv6 address), so the website should not be accessible from your smartphone if you used the bucket policy that limits access by IP range.
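To make the /24 suffix in the policy concrete: it means "match the first 24 bits of the address," i.e. the first three octets. A sketch of the comparison (plain JavaScript, /24 only; real CIDR matching is more general):

```javascript
// A /24 mask keeps the first three octets, so 203.0.113.0/24 covers
// 203.0.113.0 through 203.0.113.255. (The 203.0.113.* addresses are
// reserved documentation examples, not real hosts.)
function sameSlash24(ipA, ipB) {
  const prefix = ip => ip.split(".").slice(0, 3).join(".");
  return prefix(ipA) === prefix(ipB);
}

console.log(sameSlash24("203.0.113.7", "203.0.113.200")); // true
console.log(sameSlash24("203.0.113.7", "203.0.114.7"));   // false
```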
IP range limited Bucket Policy:
{
"Version": "2012-10-17",
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<YOUR-BUCKET-NAME-HERE>/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "<YOUR-IP-HERE>/24"
}
}
}
]
}
https://github.com/sborsay/Serverless-IoT-on-AWS/blob/master/PublicBucket/LimitByIPBucketPolicy
Open Bucket Policy :
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<YOUR-BUCKET-NAME>/*"
}
]
}
https://github.com/sborsay/Serverless-IoT-on-AWS/blob/master/PublicBucket/PublicBucketReadPolicy
6. The last thing we need to do to configure our public bucket is to add a CORS policy in the next box. This is an XML document enabling cross-origin resource sharing, which will allow our website to ingest the IoT data held in our S3 IoT data bucket. You don't need to customize the XML document below. Simply copy and paste it into your CORS window and save.
CORS XML:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
https://github.com/sborsay/Serverless-IoT-on-AWS/blob/master/PublicBucket/PublicReadCORS
7. Now repeat the same process for the S3 IoT data bucket that you created in the first section of this lab: the bucket filled with our test JSON data objects. We need to make that bucket public as well so that our website can access the IoT data within its folder. The one difference when configuring this bucket is that we do not enable "static website hosting," as we are still using our original bucket only as a data repository for the IoT data lake holding our fake sensor readings.
Now it is time to edit our index.html web page to prepare it for upload to our new S3 bucket. The two fields you will need to customize in my index.html to work with your IoT data bucket are:
A) Your base bucket name
B) The folder name that holds your sensor readings in the base bucket
8. We can get both the folder name and the bucket URL from the same place, since the "Object URL" of any data object contains both pieces of information. To find it, go to your IoT data bucket and then go to:
Overview --> click on your data folder --> click on a data object
At the bottom of the page you can now copy the Object URL.
In my IoT data bucket my Object URL is:
https://globallyuniquebucketname76.s3.amazonaws.com/IoTDataFolder/1582578233424
From this Object URL I can extract the base bucket name as: https://globallyuniquebucketname76.s3.amazonaws.com/
The base bucket will have the format: https://bucketname.s3.amazonaws.com
And my folder name is: IoTDataFolder
*Note: if your bucket is not in your home region, the region may also appear in your base bucket address (for example, https://bucketname.s3.us-west-2.amazonaws.com), and you will need to include it.
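The extraction described above is simple string work, and can be sketched in JavaScript with the standard URL class (using the sample Object URL from this tutorial):

```javascript
// Sketch: pulling the base bucket URL and folder name out of an S3 Object URL.

function parseObjectUrl(objectUrl) {
  const url = new URL(objectUrl);
  // The path looks like "/<folder>/<object-key>".
  const [folder, objectKey] = url.pathname.replace(/^\//, '').split('/');
  return {
    baseBucketUrl: `${url.protocol}//${url.host}/`,
    folder,
    objectKey,
  };
}

const parts = parseObjectUrl(
  'https://globallyuniquebucketname76.s3.amazonaws.com/IoTDataFolder/1582578233424'
);
console.log(parts.baseBucketUrl); // https://globallyuniquebucketname76.s3.amazonaws.com/
console.log(parts.folder);        // IoTDataFolder
```

The two values printed here are exactly the base bucket URL and folder name you will paste into index.html in the next step.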
9. Now insert both values into the index.html provided below, replacing my URL and folder name with yours. There are two places in index.html that need your base bucket URL, and one place that needs your folder name. The program works by requesting a listing at the base bucket URL; once it knows where your data objects live, it fetches and parses each one.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Document</title>
</head>
<body>
<script src="https://code.highcharts.com/highcharts.js"></script>
<div class="container">
<h1>Dashboard</h1>
<div class="panel panel-info">
<div class="panel-heading">
<h3 class="panel-title"><strong>Line Chart</strong></h3>
</div>
<div class="panel-body">
<div id="container1"></div>
</div>
</div>
<div class="panel panel-info">
<div class="panel-heading">
<h3 class="panel-title"><strong>Bar Chart</strong></h3>
</div>
<div class="panel-body">
<div id="container"></div>
</div>
</div>
</div>
<script>
var x = new XMLHttpRequest();
x.open("GET", "https://<YOUR-BUCKET-NAME>.s3.amazonaws.com/", true);
// x.setRequestHeader("Content-Type", "application/xml");
x.onreadystatechange = function () {
if (x.readyState == 4 && x.status == 200) {
let promiseArr = [];
let data = [];
var doc = x.responseXML;
let keys = doc.getElementsByTagName("Key");
let index = 0;
createDataSet(index);
function createDataSet(index) {
if (index >= keys.length) {
generateGraph();
return false;
}
let element = keys[index];
element = element.textContent;
let splitName = element.split('/');
if (splitName[0] === '<YOUR-FOLDER-NAME>' && splitName[1] !== '') {
promiseArr.push(new Promise((resolve, reject) => {
var innerReq = new XMLHttpRequest();
innerReq.open("GET", "https://<YOUR-BUCKET-NAME>.s3.amazonaws.com/" + splitName[0] + "/" + splitName[1], true);
// innerReq.setRequestHeader("Content-Type", "application/xml");
innerReq.onreadystatechange = function () {
if (innerReq.readyState == 4 && innerReq.status == 200) {
let parseData = JSON.parse(innerReq.responseText);
if (parseData.humidity) {
data.push(Object.assign({}, parseData, { timestamp: splitName[1] }));
}
resolve('Done')
index++;
createDataSet(index);
} else {
// reject(innerReq)
}
}
innerReq.send(null);
}));
} else {
index++;
createDataSet(index);
}
}
function generateGraph() {
Promise.all(promiseArr.map(p => p.catch(e => e)))
.then(res => {
abcData = data;
let barGraphXaxisName = ['Humidity', 'Temperature', 'Uptime'];
let humiditySum = 0, temperatureSum = 0, uptimeSum = 0;
let lineXaxisData = [], humArr = [], tempArr = [], upArr = [];
for (let i = 0; i < abcData.length; i++) {
humiditySum += Number(abcData[i].humidity);
temperatureSum += Number(abcData[i].temperature);
uptimeSum += Number(abcData[i].uptime);
humArr.push(Number(abcData[i].humidity));
tempArr.push(Number(abcData[i].temperature));
upArr.push(Number(abcData[i].uptime));
// lineXaxisData.push(new Date(Number(abcData[i].timestamp)).toLocaleString());
}
var chart = Highcharts.chart('container', {
chart: {
type: 'column'
},
title: {
text: 'Bar Chart'
},
xAxis: {
categories: barGraphXaxisName
},
yAxis: {
title: {
text: 'Value'
}
},
series: [{
data: [humiditySum, temperatureSum, uptimeSum]
}],
responsive: {
rules: [{
condition: {
maxWidth: 500
},
chartOptions: {
chart: {
className: 'small-chart'
}
}
}]
}
});
Highcharts.chart('container1', {
title: {
text: 'Line chart'
},
yAxis: {
title: {
text: 'Value'
}
},
xAxis: {
categories: upArr
},
legend: {
layout: 'vertical',
align: 'right',
verticalAlign: 'middle'
},
plotOptions: {
series: {
label: {
connectorAllowed: false
}
}
},
series: [{
name: 'Humidity',
data: humArr
}, {
name: 'Temperature',
data: tempArr
}],
responsive: {
rules: [{
condition: {
maxWidth: 500
},
chartOptions: {
legend: {
layout: 'horizontal',
align: 'center',
verticalAlign: 'bottom'
}
}
}]
}
});
}).catch(err => {
console.log('err', err)
})
}
}
};
x.send(null);
</script>
</body>
</html>
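The page above assumes each S3 object is a JSON document with humidity, temperature, and uptime fields, as produced earlier in this lab. The bar chart simply sums each field across all objects. That aggregation step, isolated as a sketch (the sample readings below are made up; your real objects come from the Lambda writes to S3):

```javascript
// Sketch: the data shape index.html expects, and the bar-chart aggregation
// it performs. Number() mirrors the page's handling of string-encoded values.

const readings = [
  { humidity: '55', temperature: '22', uptime: '100' },
  { humidity: '60', temperature: '24', uptime: '200' },
];

function sumField(data, field) {
  return data.reduce((sum, r) => sum + Number(r[field]), 0);
}

const barSeries = [
  sumField(readings, 'humidity'),
  sumField(readings, 'temperature'),
  sumField(readings, 'uptime'),
]; // fed to Highcharts as the column series data
console.log(barSeries); // [ 115, 46, 300 ]
```

If your JSON objects use different field names, adjust both the parseData.humidity guard and the sums in index.html to match.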
GitHub link to our index.html for visualizing our IoT data:
10. Now that you have customized my index.html file with your own URL and folder name, you are ready to upload it to your new bucket. To do this, simply drag and drop your customized index.html into your newly created web host bucket.
I have made four videos on YouTube that cover this entire tutorial.
The first video in the series can be found here:
If any part of this lab is unclear then I would encourage you to watch the videos, or better yet, take one of my courses on Udemy covering AWS IoT extensively! I hope you enjoyed learning about AWS IoT as well as getting some hands on experience with different serverless services within the AWS framework for IoT. Feel free to email me with any questions.