In this post, we will cover how to use Cognitive Services Containers within an Azure IoT Edge deployment.
Introduction
IoT Solutions often involve a number of challenges related to the environment where devices are ultimately deployed. These can include intermittent access to the internet, challenges related to remote updates of the device code, and security of the device itself. On the software side, Azure IoT Edge addresses these challenges through a variety of features that were designed specifically for IoT scenarios.
Here are some brief explanations of how IoT Edge addresses these challenges:
- Intermittent access to the internet is addressed by buffering outbound telemetry locally on the device until connectivity is restored, at which point the data is sent along with the original timestamps from when it was generated.
- Remote updates can be configured using targeted deployment configurations, which specify the list of modules (containerized applications) that should run on a given device.
- Security is enforced in two ways: only devices that have been registered with an Azure IoT Hub instance can submit telemetry data to its Azure-hosted endpoint, and a local Hardware Security Manager running on the device transitions trust from the underlying hardware root of trust (if available) to securely bootstrap the IoT Edge runtime and monitor the integrity of its operations.
It is important to note that IoT Edge accomplishes safe deployment of code through the use of containerized modules. Containers allow for easy distribution via familiar docker pull commands and enable runtime recovery of modules via container restarts in mission-critical IoT deployments. This makes containers an ideal candidate for IoT Solutions where an OS environment is available. The IoT Edge runtime can run on Linux X64, Linux ARM32, and Windows X64 environments. Once enabled, you can take advantage of all the benefits mentioned above in addition to some handy features in the Device SDKs which power custom IoT Edge modules.
Azure Cognitive Services allow developers to add intelligent algorithms to apps, websites, and bots so that they can see, hear, speak, and understand, by exposing common AI functionality through a Software as a Service offering. Microsoft recently announced support for a subset of its line of Cognitive Services to run locally in the form of containers. These include services for Computer Vision, Face, LUIS, and various forms of Text Analytics. It is important to note that the Computer Vision and Face containers are currently in preview, but you may request access if you are interested in trying them out by filling out the Cognitive Services Vision Containers Request form. Please be aware that the Cognitive Services Containers currently only support Linux X64 and require that internet connectivity is re-established every ten minutes, otherwise the container will stop producing results when its API endpoint is queried.
By leveraging Azure IoT Edge and Cognitive Services Containers together, we can build out IoT Solutions which allow for local AI processing in environments where internet connectivity may be intermittent. We no longer need to rely on external services to produce insights from our data; everything can be processed locally and deployed / configured from the cloud, enabling rollout of updates in a securely designed fashion at scale.
Steps
To begin development, you will want to ensure that you have a recent installation of VSCode on your dev machine. You will also need to install the IoT Edge Extension for VSCode.
First, create a new IoT Edge Solution with F1
=> "Azure IoT Edge: New IoT Edge Solution"
When asked to create a module, leave the default values and select "C# module" for the module template.
Next, open the deployment.template.json file included in the solution directory. If you are using container images that are in preview, you will have been supplied a username and password to connect to a private container repository. If this is the case, update the registryCredentials section as shown below to enable pulling images from the private repository.
"registryCredentials": {
"containerpreview": {
"username": "{YourUsername}",
"password": "{YourPassword}",
"address": "containerpreview.azurecr.io"
}
Next, we need to create a Cognitive Services resource in Azure.
After you have created the Cognitive Services resource, obtain the API key for the service; this will become the value used later for {YourCogServicesApiKey}.
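If you prefer the command line over the portal, the resource can also be created with the Azure CLI along these lines. This is only a sketch: the resource name, resource group, SKU, and region below are placeholder choices, and the kind assumes you are targeting the Computer Vision based recognize-text container.

# Create a Computer Vision resource (adjust --kind, --sku, and --location for your service)
az cognitiveservices account create \
  --name MyCogServicesResource \
  --resource-group MyResourceGroup \
  --kind ComputerVision \
  --sku S1 \
  --location westus \
  --yes

# List the API keys for the resource; either key can be used for {YourCogServicesApiKey}
az cognitiveservices account keys list \
  --name MyCogServicesResource \
  --resource-group MyResourceGroup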
Now we can begin adding a module configuration to specify deployment of a Cognitive Services Container. We will start with the "cognitive-services-recognize-text" container. Keep in mind that the process will be similar for other Cognitive Services Containers. You will want to look at the "modules" section of deployment.template.json and update it with the following additional entry (be sure to replace {YourCogServicesLocale} and {YourCogServicesApiKey} with appropriate values):
"cognitive-services-recognize-text": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "containerpreview.azurecr.io/microsoft/cognitive-services-recognize-text:latest",
"createOptions":
{
"Cmd": [
"Eula=accept",
"Billing=https://{YourCogServicesLocale}.api.cognitive.microsoft.com/vision/v2.0",
"ApiKey={YourCogServicesApiKey}"
],
"HostConfig": {
"PortBindings": {
"5000/tcp": [
{
"HostPort": "5000"
}
]
}
}
}
}
},
The createOptions are inferred from the documentation for running the container locally, outside of IoT Edge, i.e.:
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
containerpreview.azurecr.io/microsoft/cognitive-services-recognize-text \
Eula=accept \
Billing={BILLING_ENDPOINT_URI} \
ApiKey={BILLING_KEY}
This specifies a container which maps port 5000 on the host to port 5000 running inside of the container, along with Cmd arguments for setting the Eula, Billing, and ApiKey values.
If you are curious how the appropriate syntax was obtained for the deployment.template.json module entry, I executed the command above and ran docker inspect on the container, which provides clues on how to formulate the appropriate structure for the "Cmd" section. For a full list of available container create options, see the Docker Engine API documentation.
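For reference, something along these lines can be used to pull out just the relevant pieces of the inspect output (the container id is a placeholder for whatever docker ps reports):

# Print the command arguments and port bindings of the locally running container,
# which map to the "Cmd" and "HostConfig" sections of createOptions
docker inspect --format '{{json .Config.Cmd}}' <container_id>
docker inspect --format '{{json .HostConfig.PortBindings}}' <container_id>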
Now, to call the API from our C# module, add the following global variables to SampleModule.cs:
// Note: the ApiKey is not needed on the client side when talking to a container
private const string ApiKey = "000000000000000000000000000000";
// Note: the Endpoint value matches the module name used in deployment.template.json to allow internal resolution from custom modules
private const string Endpoint = "http://cognitive-services-recognize-text:5000";
private static HttpClient client = new HttpClient { BaseAddress = new Uri(Endpoint) };
Next, add the following method (depending on your template, you may need using directives for System.IO, System.Net.Http, System.Net.Http.Headers, Newtonsoft.Json, and Newtonsoft.Json.Linq if they are not already present):
private static async Task ExtractText(Stream image)
{
    string responseString = string.Empty;

    // Wrap the image stream as binary content for the request body
    using (var imageContent = new StreamContent(image))
    {
        imageContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        var requestAddress = "/vision/v2.0/recognizetextDirect";

        // POST the image to the Recognize Text endpoint exposed by the container
        using (var response = await client.PostAsync(requestAddress, imageContent))
        {
            var resultAsString = await response.Content.ReadAsStringAsync();
            var resultAsJson = JsonConvert.DeserializeObject<JObject>(resultAsString);

            if (resultAsJson["lines"] == null)
            {
                // No recognized lines; log the raw response (e.g. an error or status payload)
                responseString = resultAsString;
            }
            else
            {
                // Concatenate the recognized text line by line
                foreach (var line in resultAsJson["lines"])
                {
                    responseString += line["text"] + "\n";
                }
            }

            Console.WriteLine(responseString);
        }
    }
}
The process for calling the appropriate container endpoint was determined by perusing the source code of the cognitive-services-containers-samples/Recognize-Text Sample.
To trigger the ExtractText method, supply a Stream for a suitable image file to the method and monitor the result using docker logs -f <container_id>. A minimal sketch of the calling code is shown below.
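For example, something along these lines works for a quick test (the RunSampleExtraction name and the image path are hypothetical; in a real solution the image would more likely come from a camera module or an incoming IoT Edge message):

// Hypothetical helper: read a local image file and pass it to ExtractText.
// "/app/sample.png" is a placeholder path; make sure the file is available
// inside the module container (e.g. baked into the image or bind-mounted).
private static async Task RunSampleExtraction()
{
    using (var imageStream = File.OpenRead("/app/sample.png"))
    {
        await ExtractText(imageStream);
    }
}

Calling this once from the module's startup path is enough to verify that the container is reachable before wiring it into your real message pipeline.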
Here is an example image and response, produced using a screen grab from the NES classic "The Addams Family":
If you think this kind of thing is cool (translating retro video games), I have a full open-source project which makes use of IoT Edge and Cognitive Services to do exactly that @ https://github.com/toolboc/RetroArch-AI-with-IoTEdge.
When your code is ready to deploy to a device, follow the instructions for how to deploy modules from VSCode.
Summary
While I have only explicitly demonstrated the process for working with the cognitive-services-recognize-text container, the overall process can be applied to other Cognitive Services Containers.
Here is the general flow you will want to follow:
- Create a proper module entry for the Cognitive Services container in deployment.template.json (also provide docker registry credentials if using a preview image or images stored in a private docker registry). If using multiple Cognitive Services Containers as modules, be sure to map a unique host port to the internal port 5000 for each entry (see the sketch after this list).
- Peruse the cognitive-services-containers-samples repo for examples on how to interact with the service in question.
- Implement a method in your custom module code which calls the appropriate API endpoint, using the information obtained in step 2, and log to stdout to track module output and status at runtime.
- Deploy and test the code by watching the docker logs of the module in question.
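To illustrate the port mapping point above, the PortBindings for two Cognitive Services modules might look roughly like this (abbreviated to the relevant settings; the second module name and the host port 5001 are illustrative choices, not requirements):

"cognitive-services-recognize-text": {
  "settings": {
    "createOptions": {
      "HostConfig": {
        "PortBindings": { "5000/tcp": [ { "HostPort": "5000" } ] }
      }
    }
  }
},
"cognitive-services-text-analytics": {
  "settings": {
    "createOptions": {
      "HostConfig": {
        "PortBindings": { "5000/tcp": [ { "HostPort": "5001" } ] }
      }
    }
  }
}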
Conclusion
With the announcement of Cognitive Services Containers, we can now incorporate AI services into an IoT Edge deployment configuration to enable AI scenarios in IoT Edge Solutions without relying on external cloud services. This provides extremely powerful AI capabilities without the latency and overhead of external services. The possibilities are endless: imagine using localized face recognition for access systems, live-translating video game text from one language to another, or processing license plate text on vehicles using an attached camera. These are exactly the types of scenarios that will be at the crest of the next wave of IoT solutions, i.e. scenarios which employ localized AI functionality in disconnected environments to produce intelligent decisions on-site. It will be interesting and exciting to see what kinds of systems will be created using AI processing paired with IoT solutions in the next five years. Do you have any cool ideas you would like to see built on these concepts? Drop a line in the comments and let us know what kinds of things you think will be part of the next wave of AI-enhanced IoT solutions!
Until next time,
Happy Hacking!
Top comments (5)
Quick one please.
I get how to specify the CPU and RAM of a docker run but struggle to find how to specify this in the deployment template of edge. Would there be a doc in particular I could review?
Cheers
Manu
Great question, you can find examples of Memory and Cpu* "container create options" in the docker "create options" documentation. Cheers!
Very helpful, thank you!
When I configured the PortBindings, the port wasn't exposed.
So I added ExposedPorts before PortBindings to expose the port:
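Something along these lines, assuming the container's default port 5000 (only the relevant part of createOptions shown):

"createOptions": {
  "ExposedPorts": {
    "5000/tcp": {}
  },
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [ { "HostPort": "5000" } ]
    }
  }
}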
Edit: Apparently, it's a known issue.
Thank you! This was extremely helpful guidance. I wish Microsoft would have included this information in their instructions for ACS containers (docs.microsoft.com/en-us/azure/cog...).
I searched high and low online for any articles related to creating IoT Edge Modules for existing docker containers such as this, but couldn't find anything. I was only able to find your article by a cryptic search for "iot edge module docker \"eula\"".
Thanks again for your writeup, it got me one step further!