Pete King

APIs From Dev to Production - Part 1 - Docker

Series Introduction

Welcome to a blog post series that starts with the most basic example of a .NET 5 Web API in C# and follows its journey from development to production with a shift-left mindset. We will use Azure, Docker, GitHub, GitHub Actions for CI/CD, and Infrastructure as Code using Pulumi.

It will be a long multi-part series covering pretty much everything... I hope!

Our goal is to have an API that follows modern practices, including everything as code with the shift-left philosophy, inclusive of testing, security, and infrastructure, in order to give the engineering team an increased level of autonomy.

We will host our services in Microsoft Azure using PaaS services. Now, you may think: how? Surely we need Kubernetes? Well, the answer is no. There are other options that provide a good middle ground before going into full-blown Kubernetes. App Service in Azure supports containers (and there are other products too, such as ACI - Azure Container Instances). It is a fantastic product; those of you who are experienced with the “standard” App Service will know the benefits it provides, such as SSL, scaling, and much, much more. If you couple this with the benefits of containers, it becomes a good stepping stone into containerised workloads before taking the leap into Kubernetes with AKS (Azure Kubernetes Service).


This is a practical hands-on 🤲 journey from Dev to Production.


TL;DR

Visual Studio Code is great to use as an editor, and its various extensions make working with Docker (and much more) easier. We use dotnet new to create our C# project, generate our Dockerfile, modify it just a little, then build, run, and test the image to ensure it works. Along the way we learn some basic commands of the dotnet CLI and Docker CLI.


GitHub Repository

GitHub logo peteking / Samples.WeatherForecast-Part-1

This repository is part of the blog post series, APIs from Dev to Production - Part 1 on dev.to. Based on the standard .NET WeatherForecast API sample.


Why Docker?

Consistent and Isolated Environment

Using containers, developers can create predictable environments that are isolated from other apps. Regardless of where the app is deployed, everything remains consistent, and this leads to massive productivity: less time debugging, and more time launching fresh features and functionality for users.

Cost-effectiveness with Fast Deployment

Docker-powered containers are known for decreasing deployment time to seconds. That’s an impressive feat by any standard. Traditionally, things like provisioning and getting the hardware up and running would take days or more, and you faced massive overheads and extra work. When each process is put into a container, it can be shared with new apps. The process of deployment becomes swift, and you are essentially ready to go.

Mobility – Ability to Run Anywhere

Docker images are free of environmental limitations, which makes any deployment consistent, portable, and scalable. Containers have the added benefit of running anywhere, provided the image targets the host OS (Windows, macOS, Linux, VMs, on-prem, in public cloud), which is a huge advantage for both development and deployment. The widespread popularity of the Docker image format for containers further helps: it has been adopted by leading cloud providers, including Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. In addition, you have powerful orchestration systems such as Kubernetes, and products such as AWS ECS or Azure Container Instances are mighty useful in terms of mobility.

Repeatability and Automation

You are building code with repeatable infrastructure and config, which speeds up the development process tremendously. It must be pointed out that Docker images are often small; consequently, you get fast delivery and, again, shorter deployment times for new application containers. Another advantage is straightforward maintenance: when an application is containerised, it’s isolated from other apps running in the same system, so apps don’t intermix and application maintenance is significantly easier. It all lends itself to being automated; the faster you repeat, the fewer mistakes you make, and the more you can focus on core value for the business or application.

Test, Roll Back and Deploy

As we said, environments remain more consistent in Docker, from start to finish. Docker images are easily versioned, which makes them easy to roll back if you need to do so. If there is a problem with the current iteration of the image, just roll back to the older version. The whole process means you are creating the perfect environment for continuous integration and continuous deployment (CI/CD). Docker containers retain all configs and dependencies internally, giving you a fast and easy way of checking for discrepancies.
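As a minimal sketch of what that looks like in practice (the version tags here are illustrative, not from this project):

# build and tag a specific version of the image
docker build -t samples-weatherforecast:1.1.0 .

# if 1.1.0 misbehaves, roll back by simply running the previous tag
docker run -it --rm -p 8080:80 samples-weatherforecast:1.0.0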

Flexibility

If you need to perform an upgrade during a product’s release cycle, you can easily make the necessary changes to Docker containers, test them, and roll out new containers. This sort of flexibility is another key advantage of using Docker. Docker really allows you to build, test, and release images that can be deployed across multiple servers. Even when a new security patch is available, the process remains the same: you can apply the patch, test it, and release it to production. Additionally, Docker allows you to start and stop services or apps rapidly, which is especially useful in the cloud.

Collaboration, Modularity and Scaling

The Docker method of containerisation allows you to segment an application so you can refresh, clean up, or repair parts of it without taking down the entire app. Furthermore, with Docker you can build an architecture for applications comprising small processes that communicate with each other via APIs. From there, developers share and collaborate, solving any potential issues quickly. At this stage, the development cycle is completed and all issues are resolved, with no massive overhaul needed - this is extremely cost-effective and time-saving.

See reference [1].


Requirements

I’m using Windows 10; for other OSes like macOS and Linux, there will be small differences that are not covered here.

VS Code Extensions

We will make use of the following extensions in this post:

  • C#
  • Docker
  • .gitignore Generator


Initial project creation

  1. Create Repository in Git → Samples.WeatherForecast

  2. Create ‘src’ directory

  3. Within ‘src’ execute:

dotnet new webapi -n Samples.WeatherForecast.Api

At this point, you should have a nice new .NET 5 Web API.
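If you want to prove it runs before we containerise anything, you can start it straight from the dotnet CLI (the localhost ports are the .NET 5 template defaults and may differ on your machine):

cd Samples.WeatherForecast.Api
dotnet run

Browse to https://localhost:5001/weatherforecast (or whichever URL the terminal prints) and you should see the sample forecast JSON. Stop it with Ctrl+C.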

A nifty trick to launch VS Code from your terminal: simply type code .

. refers to the file path; in this case, the current directory. You can specify an alternative location, or even simply type code on its own, which will open VS Code with the last workspace.

If you are prompted to install the C# extension, be sure to select ‘Yes’; this will create a .vscode folder.

VS Code missing assets


GitIgnore

I assume you probably haven’t created a .gitignore file yet. There is a really useful extension in VS Code called “.gitignore Generator”, and I recommend you install it.

In VS Code - Command Palette - Windows (Ctrl+Shift+P) → .gitignore Generator.

For this project, it’s best we select 4 items from the list:

  • visualstudiocode
  • visualstudio
  • dotnetcore
  • csharp

Once you have your .gitignore file, it should look something like the below:

VS Code git ignore created
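For reference, a few typical entries from the generated file look like this (yours will be much longer; the excerpt below is just illustrative):

# Build results
bin/
obj/

# User-specific files
*.user

# Visual Studio working folder
.vs/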


Dockerfile

The good thing about the Docker extension is that it can do loads of things for you; it is a fantastic timesaver and a real Docker companion.

As a first step, we can generate our Dockerfile, and the extension can generate docker-compose files too!

VS Code - Command Palette - Windows (Ctrl+Shift+P) → Docker: Add Docker Files to Workspace

Select .NET: ASP.NET Core.
VS Code Dockerfile create step 1


Select Linux.
VS Code Dockerfile create step 2


Specify port 80; let’s deal with SSL later.
VS Code Dockerfile create step 3


Select ‘Yes’ to include docker-compose files; we’ll need them later.
VS Code Dockerfile create step 4


At the end of this, you should have the following files all generated for you:

  1. Dockerfile
  2. docker-compose.debug.yml
  3. docker-compose.yml
  4. .dockerignore

VS Code Dockerfile create step 5
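A quick note on the .dockerignore file: it keeps files out of the Docker build context, which matters once we COPY . . in the Dockerfile. Typical entries in the generated file look something like this (an illustrative excerpt; yours may differ slightly):

**/bin
**/obj
**/.vscode
**/docker-compose*
**/Dockerfile*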

Now, if you carried this operation out in the src/Samples.WeatherForecast.Api directory (like I did), these files will have been created there. However, we should move them…

It is best practice to have your Dockerfile etc. in the root directory of your repository.

Let’s move them up into the root directory. If you open VS Code in the root directory, it should appear like below. We should also move the ignore files, both .gitignore and .dockerignore, too!
VS Code Dockerfile moved
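If you prefer the terminal, something like the following does the move (the paths assume the same layout as mine):

mv src/Samples.WeatherForecast.Api/Dockerfile .
mv src/Samples.WeatherForecast.Api/docker-compose.yml .
mv src/Samples.WeatherForecast.Api/docker-compose.debug.yml .
mv src/Samples.WeatherForecast.Api/.dockerignore .
mv src/Samples.WeatherForecast.Api/.gitignore .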


Now we have to modify the Dockerfile. Why, you ask? Well, we moved it, plus it’s a little clunky. So let’s make a very quick change to improve it slightly, and then build the image.

You can see below the paths will no longer work for us.
VS Code Dockerfile autogen


We can simplify the file a little by using WORKDIR and making use of COPY . . to copy everything from source to destination.

In addition, if you look carefully, you can see it is doing a dotnet restore and then a dotnet build, and dotnet build will check whether it needs to restore all over again. As part of this initial optimisation (and there will be more to come), we can make use of the --no-restore option.

Your final Dockerfile should appear as below. Don’t worry if you’ve already spotted further optimisations at this point; there are many we will make in future posts in this series.

# Runtime base image (what the final container runs on)
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

# Build stage: uses the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /app
COPY . .

WORKDIR /app/src/Samples.WeatherForecast.Api

RUN dotnet restore "Samples.WeatherForecast.Api.csproj"

# --no-restore: we just restored above, so don't check again
RUN dotnet build "Samples.WeatherForecast.Api.csproj" -c Release -o /app/build --no-restore

# Publish stage: produces the deployable output
FROM build AS publish
RUN dotnet publish "Samples.WeatherForecast.Api.csproj" -c Release -o /app/publish

# Final stage: copy the published output into the slim runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Samples.WeatherForecast.Api.dll"]

At this point, I think we are good to go!

If you haven’t already done so, now would be a really good time to commit your code to your git repository. For this you can use git directly from the command line, use a VS Code extension, or even use GitHub Desktop; the choice is yours.
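If you go the command-line route, it’s the usual flow (the commit message is just an example):

git add .
git commit -m "Add .NET 5 Web API with Dockerfile and compose files"
git push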


Let’s build the image

The command you need to build a docker image is:

docker build -t samples-weatherforecast:latest .

Note: you’ll need to run this in the root directory where the Dockerfile is located; if not, Docker won’t be able to find it. There is an -f option available to specify the location of the Dockerfile if you ever need it.

-t is the name of the image and tag, in name:tag format.
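For example, had we left the Dockerfile where the extension generated it, we could point at it with -f while still using the repository root as the build context (the path is illustrative):

docker build -t samples-weatherforecast:latest -f ./src/Samples.WeatherForecast.Api/Dockerfile .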
VS Code Dockerfile build

If you’ve followed along to this point, everything should build just fine. If not, please go over the steps carefully; it doesn’t take long.


Docker image built - Now what?

We have built the image, so what do we do next? Let’s list the images in our local repository and see what we have there.

Type the following command in your terminal:

docker image ls

You should see something like the below.
Terminal docker image ls

There is a single image called samples-weatherforecast with a tag of latest; it has an image ID, and we can even see the size: 210MB. Don’t worry about this size; we will optimise it further later.
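For reference, the output is shaped roughly like this (the image ID and timestamp below are placeholders, not real values):

REPOSITORY                TAG       IMAGE ID       CREATED         SIZE
samples-weatherforecast   latest    <image-id>     2 minutes ago   210MB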


Let’s run it!

We can see our image there and it all looks good, so let’s run it and make sure it works. We can do this a couple of ways; my preferred way is the terminal, using the command below. However, Docker Desktop has improved so much recently that it has become far more user-friendly, particularly for first-time users, or for those who either prefer a UI or just can’t be bothered with more commands.

docker run -it --rm -p 8080:80 samples-weatherforecast:latest

You can see we are using a few options on the command, and we can break these down so you can understand why we have used them.

-it is actually two options, i and t.

-i is for interactive mode, so our terminal will wait.

-t allocates a pseudo-TTY.

--rm automatically removes the container once it exits.

-p specifies the port configuration, in host:container format.

You can always append --help to a command to find out what options are available.

If all is well with your container and your run command, you should see a screen like mine below:
Terminal docker run

We now have a working container running, and there are loads of things we could do to inspect it; we can get into that in another blog post.
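If you can’t wait, a few handy commands to poke around with from a second terminal while the container is running (grab the container ID from docker ps first):

docker ps
docker logs <container-id>
docker inspect <container-id>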

Don’t forget Docker Desktop has that new UI; please check it out, it’s fantastic.


Send a request

We have the container running, just waiting there to serve requests; it would be a shame not to send one to make sure it actually works.

For me, my best friend is Postman; if you haven’t heard of it, where have you been for the last 10 years?! Anyway, if you send a request like the following, your container should reply with the weather forecast.
Postman result
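If you’d rather stay in the terminal, curl does the same job; the template exposes a GET endpoint at /weatherforecast, and we mapped host port 8080 to container port 80:

curl http://localhost:8080/weatherforecast

You should get back a JSON array of forecasts with dates, temperatures, and summaries.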

We’re Finished! 🤥

Unfortunately not 😟; there is still lots we can do to optimise the size of the container, and there are loads of Docker commands you should explore to help with your everyday engineering life too. That means more work 😁, but we’ll go step by step, hands-on, and by working through this blog series the knowledge will solidify in your mind 🧠


What have we learned?

We have learned the basics of creating a brand new .NET 5 Web API, ensuring you have a valid .gitignore file, and the very basics of Docker. We built and ran the image, and tested it to make sure it worked as we expected.


Furthermore

The impact of Docker cannot be overstated. Since 2016 there have been a lot of changes in the container ecosystem; we’ve seen Docker Swarm pretty much come and go through the container wars (Kubernetes won, if you didn’t know). That said, through the sale of Docker Enterprise to Mirantis, Docker Swarm is still alive and kicking inside their Enterprise Container Cloud solution, alongside a secure, zero-downtime managed Kubernetes product.

There are other orchestrators too; to mention just one, there is Nomad from HashiCorp. And there are other container engines similar to Docker, such as Podman.

The benefits of containers are real and paramount no matter what you are building, even if you’re mixing IaaS and PaaS.


Up next

Part 2 in this series will be about:

  • Optimising the Docker image size


References

[1] - https://hentsu.com/docker-containers-top-7-benefits/
