Jakub Wołynko for AWS Community Builders

Originally published at blog.3sky.dev

Red Hat Enterprise Linux on AWS Cloud

Welcome

Let’s talk about basic IT operations that fall within the range of everyday tasks, for example accessing VMs. As you may realize (or not), not everyone uses immutable infrastructure, especially when their core business isn’t IT and they are a bit bigger than a two-pizza team. That is why today we will talk about accessing the console of Red Hat Enterprise Linux 9.3 on AWS. I will show you the three methods I find most useful; that’s my opinion, there are no statistics behind it.

Initial note

Since I appreciate AWS CDK, all of today’s examples are written in TypeScript.
Additionally, I decided to use shared stacks and store the VPC config separately,
as well as the default instance properties.
The network is very simple: one public subnet,
one private subnet, and one NAT gateway (the default option when using subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS).

import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

export class SharedNetworkStack extends cdk.Stack {
  public readonly vpc: ec2.Vpc;
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    cdk.Tags.of(this).add("description", "Shared Network");
    cdk.Tags.of(this).add("organization", "3sky.dev");
    cdk.Tags.of(this).add("owner", "3sky");

    this.vpc = new ec2.Vpc(this, 'TheVPC', {
      ipAddresses: ec2.IpAddresses.cidr("10.192.0.0/20"),
      maxAzs: 1,
      enableDnsHostnames: true,
      enableDnsSupport: true,
      restrictDefaultSecurityGroup: true,
      subnetConfiguration: [
        {
          cidrMask: 28,
          name: "public",
          subnetType: ec2.SubnetType.PUBLIC,
        },
        {
          cidrMask: 28,
          name: "private",
          subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
        },
      ],
    });
  }
}

As the base AMI for all instances, I will be using the latest publicly available Red Hat build, amazon/RHEL-9.3.0_HVM-20240117-x86_64-49-Hourly2-GP3, with the smallest possible instance size. Everything is wired together in a file called bin/rhel.ts:

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { SSHStack } from '../lib/ssh-stack';
import { ICStack } from '../lib/ic-stack';
import { SSMStack } from '../lib/ssm-stack';
import { SharedNetworkStack } from '../lib/shared-network';

const app = new cdk.App();

const network = new SharedNetworkStack(app, 'SharedNetworkStack');

const defaultInstanceProps = {
    vpc: network.vpc,
    machineImage: ec2.MachineImage.genericLinux({
        // amazon/RHEL-9.3.0_HVM-20240117-x86_64-49-Hourly2-GP3
        "eu-central-1": "ami-0134dde2b68fe1b07",
    }),
    instanceType: ec2.InstanceType.of(
        ec2.InstanceClass.BURSTABLE2,
        ec2.InstanceSize.MICRO,
    ),
};

new SSHStack(app, 'SSHStack', {
    instanceProps: defaultInstanceProps,
    vpc: network.vpc,
});

new ICStack(app, 'ICStack', {
    instanceProps: defaultInstanceProps,
    vpc: network.vpc,
});

new SSMStack(app, 'SSMStack', {
    instanceProps: defaultInstanceProps,
});
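The AMI ID above is hard-coded for eu-central-1. If you want to look up the current image yourself, a describe-images query along these lines should work (309956199498 should be Red Hat’s public AMI owner account; verify it before relying on it):

$ aws ec2 describe-images \
    --region eu-central-1 \
    --owners 309956199498 \
    --filters "Name=name,Values=RHEL-9.3*x86_64*" \
    --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
    --output text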

SSH

Let’s start with the basics: regular SSH. What do we need to make this possible?

  • an SSH key pair
  • ssh-server and ssh-client installed
  • connectivity to the configured port (default: 22)
  • a bastion host, as we’re simulating an enterprise

setup

The initial assumption is that we already have a key pair. If not, please generate one with the following command:

ssh-keygen \
    -t ed25519 \
    -C "aws@local-testing" \
    -f ~/.ssh/id_ed25519_local_testing

Let’s get back to the code. We’re using a much-too-open security group for the bastion host, a regular in-VPC SG, and two hosts with the same SSH key configured (remember to put your own public key, i.e. the content of ~/.ssh/id_ed25519_local_testing.pub, into publicKeyMaterial). That is why the stack definition is rather simple:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

export interface SSHStackProps extends cdk.StackProps {
  vpc: ec2.Vpc;
  instanceProps: any;
}
export class SSHStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: SSHStackProps) {
    super(scope, id, props);

    const theVPC = props.vpc;
    const theProps = props.instanceProps;

    // WARNING: change key material to your own
    const awsKeyPair = new ec2.CfnKeyPair(this, "localkeypair", {
      publicKeyMaterial:
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJxYZEBNRLXmuign6ZgNbmaSK7cnQAgFpx8cCscoqVed local",
      keyName: "localawesomekey",
    });

    const myKeyPair = ec2.KeyPair.fromKeyPairAttributes(this, "mykey", {
      keyPairName: awsKeyPair.keyName,
    });

    const tooopenSG = new ec2.SecurityGroup(this, "tooopenSG", {
      securityGroupName: "Allow all SSH traffic",
      vpc: theVPC,
      allowAllOutbound: false
    });

    tooopenSG.addIngressRule(
      // NOTE: we should use a more specific network range with
      // ec2.Peer.ipv4("x.x.x.x/24")
      ec2.Peer.anyIpv4(),
      ec2.Port.tcp(22),
      "Allow SSH",
      false,
    );

    const defaultSG = new ec2.SecurityGroup(this, "regularSG", {
      securityGroupName: "Regular in-VPC SG",
      vpc: theVPC,
      allowAllOutbound: false
    });

    defaultSG.addIngressRule(
      ec2.Peer.ipv4(theVPC.vpcCidrBlock),
      ec2.Port.tcp(22),
      "Allow SSH inside VPC only"
    );


    const bastion = new ec2.Instance(this, 'bastion-host', {
      instanceName: 'bastion-host',
      vpcSubnets: {
        subnetType: ec2.SubnetType.PUBLIC,
      },
      securityGroup: tooopenSG,
      keyPair: myKeyPair,
      ...theProps,
    });

    const instance = new ec2.Instance(this, 'host', {
      instanceName: 'host',
      vpcSubnets: {
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
      },
      securityGroup: defaultSG,
      keyPair: myKeyPair,
      ...theProps,
    });

    new cdk.CfnOutput(this, "bastionIP", {
      value: bastion.instancePublicIp,
      description: "Public IP address of the bastion host",
    });

    new cdk.CfnOutput(this, "instnaceIP", {
      value: instance.instancePrivateIp,
      description: "Private IP address of thsh host",
    });
  }
}

Then, after executing cdk deploy SSHStack and waiting around 200 s, we should see:


 ✅  SSHStack

✨  Deployment time: 200.17s

Outputs:
SSHStack.bastionIP = 3.121.228.159
SSHStack.instanceIP = 10.192.0.24

Great, now we can use our SSH key and the ec2-user account to connect to the instance.

$ ssh ec2-user@3.121.228.159 -i ~/.ssh/id_ed25519_local_testing

Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
[ec2-user@ip-10-192-0-6 ~]$ clear
[ec2-user@ip-10-192-0-6 ~]$
logout
Connection to 3.121.228.159 closed.

OK, for accessing the target host I strongly recommend a regular ~/.ssh/config file with the following content:

Host aws-bastion
  PreferredAuthentications publickey
  IdentitiesOnly yes
  IdentityFile ~/.ssh/id_ed25519_local_testing
  User ec2-user
  Hostname 3.121.228.159

Host aws-host
  PreferredAuthentications publickey
  IdentitiesOnly yes
  ProxyJump aws-bastion
  IdentityFile ~/.ssh/id_ed25519_local_testing
  User ec2-user
  Hostname 10.192.0.24

The good thing about it is that it makes Ansible easy to configure: with this in place, ssh aws-host transparently hops through the bastion, and our inventory file stays simple:

[bastion]
aws-bastion

[instance]
aws-host

[aws:children]
bastion
instance

However, in the case of a real-world system, I would recommend using an Ansible dynamic inventory based on proper tagging, as sketched below.
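A minimal sketch of such an inventory, assuming the amazon.aws collection is installed (the file name is illustrative, but the plugin does require it to end in aws_ec2.yml):

# inventory.aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - eu-central-1
filters:
  # only running instances
  instance-state-name: running
keyed_groups:
  # build groups from the Name tag, e.g. name_bastion_host, name_host
  - key: tags.Name
    prefix: name
compose:
  # connect over private IPs (through the bastion)
  ansible_host: private_ip_address

Running ansible-inventory -i inventory.aws_ec2.yml --graph should then show the discovered hosts grouped by tag.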

pros

  • easy-to-implement solution
  • standard SSH, so we can use Ansible right after deployment
  • no additional costs (besides the bastion host)

cons

  • exposing VMs to the public internet isn’t the most secure approach
  • we need to manage the SSH key pair
  • we need to manage users
  • we need to manage access manually, at the OS level
  • no dedicated logging solution besides syslog

SSM Session Manager

Setting up SSM based on the documentation can be a bit more challenging, as we need to:

  • install the SSM agent
  • configure a role and an instance profile

setup

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as iam from 'aws-cdk-lib/aws-iam'

export interface SSMStackProps extends cdk.StackProps {
  instanceProps: any;
}
export class SSMStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: SSMStackProps) {
    super(scope, id, props);

    const theProps = props.instanceProps;

    const ssmRole = new iam.Role(this, "SSMRole", {
      assumedBy: new iam.ServicePrincipal("ec2.amazonaws.com"),
      managedPolicies: [
        iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonSSMManagedInstanceCore")
      ],
      roleName: "SSMRole"
    });
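    // NOTE (assumption worth verifying): when a role is passed to
    // ec2.Instance below, CDK creates an instance profile for it
    // automatically, so this explicitly named profile is a separate
    // resource and is not the one attached to the instance.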
    new iam.InstanceProfile(this, "SSMInstanceProfile", {
      role: ssmRole,
      instanceProfileName: "SSMInstanceProfile"
    });

    const userData = ec2.UserData.forLinux();
    userData.addCommands(
      'set -o xtrace',
      'sudo dnf install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm',
      'sudo systemctl enable amazon-ssm-agent',
      'sudo systemctl start amazon-ssm-agent'
    );

    const instance = new ec2.Instance(this, 'instance-with-ssm', {
      instanceName: 'instance-with-ssm',
      vpcSubnets: {
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
      },
      role: ssmRole,
      allowAllOutbound: true,
      detailedMonitoring: true,
      userData: userData,
      ...theProps,
    });


    new cdk.CfnOutput(this, "HostID", {
      value: instance.instanceId,
      description: "ID of the regular host",
    });

    new cdk.CfnOutput(this, "hostDNS", {
      value: instance.instancePrivateDnsName,
      description: "Hostname of the regular host",
    });
  }
}

As you can see, we need to specify the role and the instance profile (which, by the way, can’t be displayed in the GUI), and then we place the instance in a private subnet with specific user data and the role attached. After a while, we should be able to connect via the CLI:

$ aws ssm start-session --target i-0110d03f4713d475c


Starting session with SessionId: kuba@3sky.dev-6n5goh43iisz45cmvpht54ize4
sh-5.1$ sudo su
[root@ip-10-192-0-26 bin]#

Or via GUI:

(Screenshots: starting a Session Manager session from the AWS console.)
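If the session refuses to start, it’s worth checking whether the instance has registered with SSM at all. A quick check, reusing the instance ID from above:

$ aws ssm describe-instance-information \
    --filters "Key=InstanceIds,Values=i-0110d03f4713d475c" \
    --query 'InstanceInformationList[].[InstanceId,PingStatus,PlatformName]' \
    --output text

A PingStatus of Online means the agent is running and the instance profile works.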

bonus: cockpit

As we’re using a Red Hat system (though Cockpit is also available on Ubuntu and others), and SSM supports port forwarding, we can utilize the power of Cockpit: for example, to manage subscriptions, check security recommendations, and connect with Red Hat Insights. How? Log in to the instance, create an admin user with a password, and then start a port-forwarding session.

# login to host
$ aws ssm start-session --target i-0113d03f4713d412b

# become root and create the needed user
sh-5.1$ sudo su
[root@ip-10-192-0-24:~]$ useradd kuba
[root@ip-10-192-0-24:~]$ passwd kuba
[root@ip-10-192-0-24:~]$ usermod -aG wheel kuba
[root@ip-10-192-0-24:~]$ exit
sh-5.1$ exit

# start port forwarding from the workstation
$ aws ssm start-session \
  --target i-0113d03f4713d412b \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"portNumber":["9090"],"localPortNumber":["9090"],"host":["ip-10-192-0-24"]}'

(Screenshot: the Cockpit web console, reachable locally at https://localhost:9090.)

pros

  • straightforward setup
  • allows users to access the instance from both the GUI and the CLI
  • does not require using or storing SSH keys
  • built-in monitoring with CloudWatch
  • provides the ability to restrict access based on IAM
  • supports port forwarding (useful for DB access)

cons

  • using Ansible will be challenging (though supported)
  • requires more AWS-specific knowledge than a bastion host
  • failures push us into annoying debugging

EC2 Instance Connect

Our EC2 Instance Connect setup and configuration is based on the official documentation.
Here are the main prerequisites:

  • the ec2-instance-connect packages installed on the host (they are not included in the default Red Hat build)
  • an EC2 Instance Connect Endpoint placed in the corresponding network
  • an inbound rule on the instance security group allowing TCP/22 from the endpoint’s security group
  • an outbound rule on the endpoint’s security group allowing TCP/22 to the instances

setup

Based on these prerequisites, the content of our stack file is as follows:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

export interface ICStackProps extends cdk.StackProps {
  vpc: ec2.Vpc;
  instanceProps: any;
}
export class ICStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: ICStackProps) {
    super(scope, id, props);

    const theVPC = props.vpc;
    const theProps = props.instanceProps;

    const iceSG = new ec2.SecurityGroup(this, "iceSG", {
      securityGroupName: "Instance Connect SG",
      vpc: theVPC,
      allowAllOutbound: false
    });

    iceSG.addEgressRule(
      ec2.Peer.ipv4(theVPC.vpcCidrBlock),
      ec2.Port.tcp(22),
      "Allow outbound traffic from SG",
    );

    // WARNING: We need outbound for package installation
    const iceSGtoVM = new ec2.SecurityGroup(this, "iceSGtoVM", {
      securityGroupName: "Allow access over instance connect",
      vpc: theVPC,
    });

    iceSGtoVM.addIngressRule(
      iceSG,
      ec2.Port.tcp(22),
      "Allow SSH traffic from iceSG",
    );

    new ec2.CfnInstanceConnectEndpoint(this, "myInstanceConnectEndpoint", {
      securityGroupIds: [iceSG.securityGroupId],
      subnetId: theVPC.privateSubnets[0].subnetId
    });

    const userData = ec2.UserData.forLinux();
    userData.addCommands(
      'set -o xtrace',
      'mkdir /tmp/ec2-instance-connect',
      'curl https://amazon-ec2-instance-connect-us-west-2.s3.us-west-2.amazonaws.com/latest/linux_amd64/ec2-instance-connect.rpm -o /tmp/ec2-instance-connect/ec2-instance-connect.rpm',
      'curl https://amazon-ec2-instance-connect-us-west-2.s3.us-west-2.amazonaws.com/latest/linux_amd64/ec2-instance-connect-selinux.noarch.rpm -o /tmp/ec2-instance-connect/ec2-instance-connect-selinux.rpm',
      'sudo yum install -y /tmp/ec2-instance-connect/ec2-instance-connect.rpm /tmp/ec2-instance-connect/ec2-instance-connect-selinux.rpm'
    );

    const instance = new ec2.Instance(this, 'instance-with-ic', {
      instanceName: 'instance-with-ic',
      vpcSubnets: {
        subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
      },
      securityGroup: iceSGtoVM,
      allowAllOutbound: true,
      detailedMonitoring: true,
      userData: userData,
      ...theProps,
    });

    new cdk.CfnOutput(this, "HostIP", {
      value: instnace.instanceId,
      description: "Public IP address of the regular host",
    });
  }
}

As you can see, the longest part is just the user data that downloads and installs the needed packages. What’s important, in my opinion: we should always test this before deploying to production. If the installation fails, debugging will be hard and will require adding a bastion host with keys, plus recreating the instance.

After deploying the stack (it takes a bit longer this time), we should be able to access our instance with the CLI:

$ aws ec2-instance-connect ssh --instance-id i-08385338c2614df28

The authenticity of host '10.192.0.26 (<no hostip for proxy command>)' can't be established.
ED25519 key fingerprint is SHA256:BAxtwbZYKsK6hTJbvqOGgulGYftNQHZHMSpBkIGRTeY.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.192.0.26' (ED25519) to the list of known hosts.
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
[ec2-user@ip-10-192-0-26 ~]$

or via GUI:

(Screenshots: connecting via EC2 Instance Connect from the AWS console.)
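By the way, the ec2-instance-connect ssh subcommand used above comes from the AWS CLI v2. If you prefer a regular SSH client (or need scp), the endpoint can also act as a plain TCP tunnel; a sketch, reusing the instance ID and the key pair generated earlier:

# push an ephemeral public key to the instance (valid for 60 seconds)
$ aws ec2-instance-connect send-ssh-public-key \
    --instance-id i-08385338c2614df28 \
    --instance-os-user ec2-user \
    --ssh-public-key file://~/.ssh/id_ed25519_local_testing.pub

# open a local tunnel through the EC2 Instance Connect Endpoint
$ aws ec2-instance-connect open-tunnel \
    --instance-id i-08385338c2614df28 \
    --local-port 2222

# in a second terminal: a regular SSH session over the tunnel
$ ssh -p 2222 -i ~/.ssh/id_ed25519_local_testing ec2-user@localhost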

pros

  • straightforward setup
  • allows users to access the instance from both the GUI and the CLI
  • does not require using or storing SSH keys
  • built-in monitoring with CloudWatch
  • provides the ability to restrict access based on IAM

cons

  • using Ansible will be very challenging (no official support or plugin)
  • requires more AWS-specific knowledge than a bastion host
  • failures push us into annoying debugging

Final notes

  • the network setup was simplified: one AZ, two subnets
  • the security groups were too open, especially the outbound rules (due to package downloading)
  • SSM could be used over VPC endpoints as well (see the sketch after this list):
    • com.amazonaws.<region>.ssm
    • com.amazonaws.<region>.ssmmessages
    • com.amazonaws.<region>.ec2messages
  • we could use a pre-built AMI with the agents baked in to avoid the in-boot installation issue
  • as always, the repo can be found here
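A minimal sketch of how those endpoints could be added to SharedNetworkStack (construct IDs are illustrative):

// inside the SharedNetworkStack constructor, after the VPC is created
this.vpc.addInterfaceEndpoint('SsmEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.SSM,
});
this.vpc.addInterfaceEndpoint('SsmMessagesEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.SSM_MESSAGES,
});
this.vpc.addInterfaceEndpoint('Ec2MessagesEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.EC2_MESSAGES,
});

With these in place, the SSM traffic stays inside the VPC (the agent download from S3 still needs a route out, or an S3 gateway endpoint).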

Summary

As you can see, setting up RHEL on AWS isn’t that hard. It is for sure more expensive, and it requires installing the EC2 Instance Connect or SSM agent (Amazon Linux ships with them preinstalled), but if we’re going to use RHEL in the cloud, we probably have a good reason to do so: for example, great support, Insights, or the Cloud Console. My job was to show the possibilities, and to do it with style.
