Nenad Ilic for IoT Builders

Orchestrating Application Workloads in Distributed Embedded Systems: Setting up a Nomad Cluster with AWS IoT Greengrass - Part 1

Introduction

As interconnected, purpose-built Systems on Chips (SoCs) have become more popular in industrial and automotive settings, managing their software components, including application deployments, data processing, and network communication, has become more challenging. This is particularly important in factories and cars, where downtime and maintenance costs can have a significant impact on productivity. To address these issues, developers can use available orchestration tools to ensure that the system operates efficiently and effectively, and interface with those distributed embedded systems from the cloud in order to enable changes and updates throughout their lifecycle.

In this blog post series, we will demonstrate how to use AWS IoT Greengrass and Hashicorp Nomad to seamlessly interface with multiple interconnected devices and orchestrate service deployments on them. Greengrass will allow us to view the cluster as a single "Thing" from the cloud perspective, while Nomad will serve as the primary cluster orchestration tool.

Architecture

In Part 1 of this series, we will cover how to take a cluster of 3 devices, with AWS IoT Greengrass already installed on one of them, and set up 1 Nomad server/client and 2 Nomad clients.

It's important to note that Nomad is a lightweight orchestration tool for distributed environments that ships as a single binary. This makes it easy to operate while still providing features that allow applications to run with high availability. In addition to running Docker containers, Nomad enables the execution of tasks on the cluster in other forms such as Java, exec, QEMU, and more, as the job sketch below illustrates.
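To make this concrete, here is a minimal, hypothetical Nomad job specification using the raw_exec driver (which we enable later in this setup); the job and task names are placeholders:

# hello.nomad - a minimal batch job using the raw_exec task driver
job "hello" {
  datacenters = ["dc1"]
  type        = "batch"

  group "demo" {
    task "echo" {
      driver = "raw_exec"

      config {
        command = "/bin/echo"
        args    = ["hello from nomad"]
      }
    }
  }
}

Such a job could be submitted with nomad job run hello.nomad once the cluster is up.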

That being said, this tutorial is targeted at IoT engineers, embedded systems developers, and IoT DevOps engineers with a technical background in building and running applications on embedded systems, as well as basic knowledge of Greengrass.

Prerequisites

For this specific example, we will be using 3 Raspberry Pis, but this setup should generally work on any devices running Linux, so 3 EC2 instances running Ubuntu would be a good substitute. To get started, we will pick one of those 3 devices to be our server, which will need to have AWS IoT Greengrass installed by following these steps; take note of your Thing name and/or Thing Group. In this example we have used nomad-cluster as the Thing Group. Additionally, in order to follow along, please have the Greengrass Development Kit CLI (GDK) installed on your development host.

Next, on two other devices, in order to be able to ssh into them from the server and set up the Nomad clients, we will need to add the server’s public key to them. This can be done by using ssh-keygen to generate the pair and then using ssh-copy-id user@remote_host to copy the public key to other two devices. This will allow easier client setup using Greengrass components provided in the example repository.
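For example, on the server (using the root user and the client hostnames that appear later in this post):

# Generate a key pair on the server (accept the defaults)
ssh-keygen -t ed25519

# Copy the public key to each of the two client devices
ssh-copy-id root@rpi-client1.local
ssh-copy-id root@rpi-client2.local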

Please also note that we will be using root in this setup in order to allow certain application access, but this is not advised in production (we will discuss this in one of the following blog posts). Also, instead of using IP addresses, it is advised for each device to have a unique hostname and a corresponding DNS entry for easier use.

Bootstrapping the Nomad Cluster

Now that we have all of that in place, we can start bootstrapping the Nomad cluster. For this, we will use the following GitHub repository as a starting point by cloning it:

git clone https://github.com/aws-iot-builder-tools/greengrass-nomad-demo.git

Nomad Server

Next, navigate to the ggv2-nomad-setup/cluster-bootstrap/nomad-server directory in the cloned repository. Here we can find a recipe.yaml for our nomad.bootstrap.server component, as well as our gdk-config.json, which we can modify to reflect the current setup (region, author, bucket, and so on).
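As a rough sketch, a gdk-config.json adapted for this component might look like the following, assuming the standard GDK CLI configuration format; the author and bucket values are placeholders you would replace:

{
  "component": {
    "nomad.bootstrap.server": {
      "author": "Your Name",
      "version": "1.0.0",
      "build": {
        "build_system": "zip"
      },
      "publish": {
        "bucket": "my-greengrass-components",
        "region": "eu-west-1"
      }
    }
  },
  "gdk_version": "1.0.0"
}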

Now we are ready to build and publish our component by executing gdk component build and gdk component publish, which will create the component, publish the artifact to an S3 bucket, and register a new component version.
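Concretely, from the component directory:

cd ggv2-nomad-setup/cluster-bootstrap/nomad-server
gdk component build
gdk component publish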

The component includes a server.hcl file for setting up the Nomad server/client, which looks like this:

# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/opt/nomad/data"

# Enable the server
server {
  enabled = true
}
# Enable the client as well
client {
  enabled = true
  meta {
    greengrass_ipc = "server"
  }
  options = {
    "driver.raw_exec.enable" = "1"
  }
}

# port for the ui
ports {
  http = 8080
}

For the Nomad client, we add the additional metadata greengrass_ipc = "server", which will allow us to deploy services on the host where Greengrass is running. We also enable the raw_exec driver by setting "driver.raw_exec.enable" = "1". Finally, we expose Nomad's UI on port 8080, allowing us to view it on our local network for debugging purposes; of course, this should be disabled in a production environment.

In the root of the repository, we have a deployment.json.template, which we can use to create a Greengrass deployment targeting our Thing Group, in this scenario called nomad-cluster.

To reflect your set-up, you can adapt the template file to your needs and then run the commands below to populate the required variables:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity |  jq -r '.Account')
AWS_REGION=eu-west-1
envsubst < "deployment.json.template" > "deployment.json"

Now, we should be able to modify the deployment.json file to include only the nomad.bootstrap.server component and the appropriate component version, which should look like this:

{
    "targetArn": "arn:aws:iot:<AWS Region>:<AWS Account>:thinggroup/nomad-cluster",
    "deploymentName": "Deployment for nomad-cluster group",
    "components": {
        "nomad.bootstrap.server": {
            "componentVersion": "1.0.0",
            "runWith": {}
        }
    },
    "deploymentPolicies": {
        "failureHandlingPolicy": "ROLLBACK",
        "componentUpdatePolicy": {
            "timeoutInSeconds": 60,
            "action": "NOTIFY_COMPONENTS"
        },
        "configurationValidationPolicy": {
            "timeoutInSeconds": 60
        }
    },
    "iotJobConfiguration": {}
}

And finally deploy it to our target by executing:

aws greengrassv2 create-deployment \
    --cli-input-json file://deployment.json \
    --region ${AWS_REGION}

This bootstraps the Nomad server on the device with Greengrass installed, and configures the Nomad client to enable the deployment of proxies for Greengrass IPC and the Token Exchange Service. We will provide a detailed guide for setting up the client in the next blog post.
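At this point, you can verify that the server came up correctly. A minimal check, assuming you run it on the server device itself (the 8080 HTTP port comes from the server.hcl above):

# Point the Nomad CLI at the non-default HTTP port from server.hcl
export NOMAD_ADDR=http://localhost:8080

# The server should report itself as alive, and also as a ready client node
nomad server members
nomad node status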

Nomad Clients

For the Nomad clients, the process is similar, with the difference of configuring the DNS names (or IPs) and usernames for each client, as well as the server's DNS name (or IP), in the recipe.yaml under ggv2-nomad-setup/cluster-bootstrap/nomad-clients/recipe.yaml:

ComponentConfiguration:
  DefaultConfiguration:
    Server:
      DnsName: "rpi-server.local"
    Client:
      "1":
        UserName: root
        DnsName: "rpi-client1.local"
      "2":
        UserName: root
        DnsName: "rpi-client2.local"

This allows the Greengrass component to install the Nomad clients over SSH and configure them so they can connect to the server.

The client.hcl, which will be added to each Nomad client, will look like this, where <SERVER_DNS_NAME> will be replaced with the value provided in the recipe configuration:

# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/opt/nomad/data"

client {
  enabled = true
  servers = ["<SERVER_DNS_NAME>"]
  meta {
    greengrass_ipc = "client"
  }
  options = {
    "driver.raw_exec.enable" = "1"
  }
}

# different port than server
ports {
  http = 5656
}

Once that is done, we can proceed with building and publishing the component by running gdk component build and gdk component publish.

Additionally, in order to deploy this to the targets, we will need to add the following to the components section of deployment.json:

        "nomad.bootstrap.clients": {
            "componentVersion": "1.0.0",
            "runWith": {}
        }
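The components section of deployment.json would then contain both components:

    "components": {
        "nomad.bootstrap.server": {
            "componentVersion": "1.0.0",
            "runWith": {}
        },
        "nomad.bootstrap.clients": {
            "componentVersion": "1.0.0",
            "runWith": {}
        }
    },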

And execute the same command to deploy it to our target:

aws greengrassv2 create-deployment \
    --cli-input-json file://deployment.json \
    --region ${AWS_REGION}
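After the deployment completes, all three nodes should be visible from the server; for example (again assuming the 8080 port from server.hcl):

export NOMAD_ADDR=http://localhost:8080

# All three nodes (the server/client plus the two clients) should show as ready
nomad node status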

With the clients bootstrapped, we are ready to move on to the next steps: deploying a demo component to Greengrass and Nomad, and publishing data to AWS IoT Core using Greengrass IPC, which we will cover in the second part of this blog.

It's worth noting that this is a demo deployment and not suitable for production environments. In a future blog post in this series, we will discuss how to set up Nomad for production environments, including security considerations and best practices.

Conclusion

In Part 1 of this series, we have covered the importance of orchestrating application workloads in distributed embedded systems and how to set up a Nomad cluster with AWS IoT Greengrass. By following these steps, developers can ensure that their systems operate efficiently, reliably, and with optimal performance, while also allowing them to develop and deploy new services even after the systems are operational.

In the next blog post, we will cover the next steps in the process: setting up a demo component for Greengrass and Nomad, and publishing data to AWS IoT Core using Greengrass IPC. Stay tuned for Part 2 of this series!

If you have any feedback about this post, or you would like to see more related content, please reach out to me here, or on Twitter or LinkedIn.
