In this article I hope to demonstrate how a network engineer can leverage the AWS product line to securely transfer files between on-prem infrastructure and the cloud using traditional transfer protocols like SFTP, FTP, and FTPS. AWS Transfer Family is a robust solution that provides an efficient and secure means of transferring files to and from any host capable of acting as a client for one of those protocols, in this example routers and switches. This makes it easy to push or pull configuration files, logs, or any other data stored on your physical infrastructure, streamlining network management tasks. In the simplest terms possible, this is an SFTP/FTP/FTPS server in the cloud.
In this demo I'll be using Containerlab to deploy a containerized version of Arista EOS (simulating my on-prem router) and Terraform to deploy the required AWS infrastructure. If you want the code and a breakdown of what each piece of Terraform is doing, you can find it below.
https://github.com/friday963/networklabs/tree/main/transfer_family
Deploy AWS Infrastructure
Run your init, plan, and apply.
friday@ubuntu:~/code/networklabs/transfer_family$ terraform init
friday@ubuntu:~/code/networklabs/transfer_family$ terraform plan
friday@ubuntu:~/code/networklabs/transfer_family$ terraform apply
Deploy Containerlab instance
friday@ubuntu:~/code/networklabs/transfer_family/containerlab_configs$ sudo containerlab deploy -t topo.yml
[sudo] password for friday:
INFO[0000] Containerlab v0.47.2 started
INFO[0000] Parsing & checking topology file: topo.yml
INFO[0000] Creating docker network: Name="clab", IPv4Subnet="172.20.20.0/24", IPv6Subnet="2001:172:20:20::/64", MTU='Χ'
INFO[0000] Creating lab directory: /home/friday/code/networklabs/transfer_family/containerlab_configs/clab-SFTP_Sample_Lab
INFO[0000] config file '/home/friday/code/networklabs/transfer_family/containerlab_configs/clab-SFTP_Sample_Lab/router/flash/startup-config' for node 'router' already exists and will not be generated/reset
INFO[0000] Creating container: "router"
INFO[0000] Running postdeploy actions for Arista cEOS 'router' node
INFO[0024] Adding containerlab host entries to /etc/hosts file
INFO[0024] Adding ssh config for containerlab nodes
INFO[0024] π New containerlab version 0.50.0 is available! Release notes: https://containerlab.dev/rn/0.50/
Run 'containerlab version upgrade' to upgrade or go check other installation options at https://containerlab.dev/install/
+---+-----------------------------+--------------+--------------+------+---------+----------------+----------------------+
| # | Name | Container ID | Image | Kind | State | IPv4 Address | IPv6 Address |
+---+-----------------------------+--------------+--------------+------+---------+----------------+----------------------+
| 1 | clab-SFTP_Sample_Lab-router | 202444f34875 | ceos:4.30.3M | ceos | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
+---+-----------------------------+--------------+--------------+------+---------+----------------+----------------------+
Log into router and generate private/public SSH key
After logging in, I'm dropping into the shell so I can interact with the underlying system to generate that SSH key.
friday@ubuntu:~/code/networklabs/transfer_family/containerlab_configs$ ssh admin@172.20.20.2
Warning: Permanently added '172.20.20.2' (ED25519) to the list of known hosts.
(admin@172.20.20.2) Password:
router>en
router#bash
Arista Networks EOS shell
[admin@router ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa):
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Vh2tdtedpI/5qX9g6FolSO70YXukTqqsd7jfHCr5dao admin@router
<TRUNCATED>
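The interactive prompts above can also be skipped entirely when you want to script key generation. A minimal sketch (the output path and comment here are illustrative, not from the lab; on the router you would keep the `/home/admin/.ssh/id_rsa` path used above). Transfer Family accepts RSA, ECDSA, and ED25519 public keys, so an ED25519 key works just as well:

```shell
# Clean up any previous demo key so ssh-keygen doesn't prompt to overwrite.
rm -f /tmp/demo_sftp_key /tmp/demo_sftp_key.pub

# Generate a key pair non-interactively: -N "" sets an empty passphrase,
# -C sets the trailing comment seen in the .pub file.
ssh-keygen -t ed25519 -f /tmp/demo_sftp_key -N "" -C "admin@router"

# Two files are produced: the private key and the .pub we hand to AWS.
ls /tmp/demo_sftp_key /tmp/demo_sftp_key.pub
```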
Collect the public key
First we need to retrieve the public key from the router as seen below.
[admin@router ~]$ cat /home/admin/.ssh/id_rsa.pub
ssh-rsa Vh2tdtedpI/5qX9g6FolSO70YXukTqqsd7jfHCr5dao admin@router
Proceed to the AWS console to configure the SFTP user.
Search for transfer family in the console and click into your instance.
From here, find your user. Notice transfer_user at the bottom of the screen and click into it.
Now that you're in the user console, find the Add key button to add your public key.
Now paste the key and click Add key.
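If you'd rather skip the console clicks, the same import can be done with the AWS CLI's `aws transfer import-ssh-public-key` command. A sketch is below; the server ID and key body are placeholders, and the call is printed rather than executed since it needs real credentials and your actual server ID (discoverable with `aws transfer list-servers`):

```shell
# Placeholders -- substitute your real server ID and the contents of id_rsa.pub.
SERVER_ID="s-0123456789abcdef0"
PUBKEY="ssh-rsa AAAA...example... admin@router"

# The real call, printed for illustration (drop the echo to actually run it):
echo aws transfer import-ssh-public-key \
  --server-id "$SERVER_ID" \
  --user-name transfer_user \
  --ssh-public-key-body "$PUBKEY"
```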
Move files between router & SFTP server
At this point we are ready to start transferring files. Here I'm jumping to flash to get to some interesting files for transfer.
[admin@router ~]$ cd /mnt/flash/
[admin@router flash]$ ls
AsuFastPktTransmit.log SsuRestore.log aboot debug if-wait.sh persist startup-config
Fossil SsuRestoreLegacy.log boot-config fastpkttx.backup kickstart-config schedule system_mac_address
Next you'll notice I'm running sftp -i /home/admin/.ssh/id_rsa transfer_user@34.225.236.228
. Since I have no DNS in my lab, I cannot actually SFTP to the FQDN that Amazon created for me; in any other situation I would use the provided FQDN. If you're following along, you probably lack a DNS server too.
DON'T FORGET TO INCLUDE THE KEY LOCATION IN YOUR SFTP CALL
[admin@router flash]$ sftp -i /home/admin/.ssh/id_rsa transfer_user@34.225.236.228
Warning: Permanently added '34.225.236.228' (RSA) to the list of known hosts.
Here is how I got an IP for the endpoint that was created for me.
friday@ubuntu:~/code/networklabs/transfer_family$ nslookup
> s-0a4da29.server.transfer.us-east-1.amazonaws.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: s-0a4da29.server.transfer.us-east-1.amazonaws.com
Address: 34.225.236.228
Name: s-0a4da29.server.transfer.us-east-1.amazonaws.com
Address: 44.212.239.132
Name: s-0a4da29.server.transfer.us-east-1.amazonaws.com
Address: 184.73.175.221
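Since the router itself can't resolve names, one pattern is to resolve the endpoint on a host that can and feed the first A record into the sftp call. A small sketch (the hostname is the one from the nslookup output above; `getent` returns nothing if you're offline, and the final sftp command is printed rather than executed):

```shell
# Resolve the Transfer Family endpoint and grab the first A record.
HOST="s-0a4da29.server.transfer.us-east-1.amazonaws.com"
IP="$(getent ahosts "$HOST" | awk '{print $1}' | head -n 1)"

# Build the sftp invocation with the resolved IP (printed, not executed).
echo "sftp -i /home/admin/.ssh/id_rsa transfer_user@${IP}"
```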
The last thing to note in this output is the remote working directory. This was configured in my Terraform as the directory I want to be dropped into upon logging in. What's occurring here is that I'm interacting with an S3 bucket via the same path seen below: /network-logging-bucket-2073/router_1.
[admin@router flash]$ sftp -i /home/admin/.ssh/id_rsa transfer_user@34.225.236.228
Warning: Permanently added '34.225.236.228' (RSA) to the list of known hosts.
Connected to transfer_user@34.225.236.228.
sftp> pwd
Remote working directory: /network-logging-bucket-2073/router_1
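That pwd output is exactly how the data is addressed on the S3 side: strip the leading slash and you have bucket/prefix. A quick sketch of the translation (the `aws s3 ls` command is printed rather than run, since it needs credentials):

```shell
# The SFTP remote working directory maps 1:1 onto the S3 key space.
RWD="/network-logging-bucket-2073/router_1"   # from `pwd` in the sftp session

# Strip the leading slash to get bucket/prefix, then list it with the AWS CLI.
echo "aws s3 ls s3://${RWD#/}/"
```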
From there I'm able to put or get files from that home directory. First I put startup-config
, then I get important_configuration_file.cfg.txt
from the remote server.
sftp> put startup-config
Uploading startup-config to /network-logging-bucket-2073/router_1/startup-config
startup-config 100% 870 10.0KB/s 00:00
sftp> ls
important_configuration_file.cfg.txt startup-config
sftp> get important_configuration_file.cfg.txt
Fetching /network-logging-bucket-2073/router_1/important_configuration_file.cfg.txt to important_configuration_file.cfg.txt
sftp> exit
[admin@router flash]$ ls
AsuFastPktTransmit.log SsuRestoreLegacy.log debug important_configuration_file.cfg.txt schedule
Fossil aboot fastpkttx.backup kickstart-config startup-config
SsuRestore.log boot-config if-wait.sh persist system_mac_address
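Because put and get are plain SFTP operations, the whole backup can be scripted for unattended runs. A sketch using sftp's batch mode; the final sftp call is shown as a comment since it needs the live endpoint, and the batch file path is just an example:

```shell
# Queue up SFTP commands in a batch file for a scheduled config backup.
cat > /tmp/backup_batch.txt <<'EOF'
put /mnt/flash/startup-config
ls
EOF

# From the router's shell, run it non-interactively:
#   sftp -i /home/admin/.ssh/id_rsa -b /tmp/backup_batch.txt transfer_user@34.225.236.228

# Show what was queued.
cat /tmp/backup_batch.txt
```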
Takeaway
In conclusion, I hope you gained some insight into the Transfer Family product and how you could leverage it to transfer files to and from your on-prem infrastructure if needed. It really is an easy product to set up, and it provides a slick interface to secure, durable object storage for your network data.