Top comments (13)
hahaha "send halp" made me laugh.
Hopefully these two comments will help!
My shot at ELYF:
In your computer, everything works because of little things called "processes". These guys talk to each other to keep everything running. But sometimes they disagree about stuff because one thinks something should be black while the other thinks it should be red (for example, because of incompatible versions).
Maybe you built some super cool process that does super awesome things like letting you buy catnip on the internet. That process needs to talk to neighboring processes that agree with it on which color the road between them should be, and it needs no distractions around it.
Traditionally, you would send your little process to play with others and hope for the best. But the little guy can eventually grow up and change its worldview and opinions on road colors. This will introduce disagreements with its peers.
Introducing: containers. The point is that you can send your little process to play with others, but inside a protective shell. It will have everything it needs to do its job within its little bubble. Also, from its point of view, it'll be completely alone (processes are loners, don't feel bad for them). You will be able to modify its behavior and its environment, and test it way before releasing it into the wild.
Once you have all your processes in their own hamster balls running around and playing safely, you can pick them up, move them around and group them using a crane. If any of them pops or starts throwing a tantrum, you can pick it up and put it in a corner to scold it for being a bad boy (saying "kill it" would be too rough for a 5yr old) or just replace it with a new one.
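In grown-up commands, the hamster ball and the crane look roughly like this - a sketch where the image and container names (catnip-shop) are made up for illustration:

```shell
docker run -d --name catnip-shop catnip-shop:1.0   # release the process inside its hamster ball
docker ps                                          # watch all the balls rolling around
docker stop catnip-shop                            # put the tantrum-thrower in the corner
docker rm catnip-shop                              # take the popped ball away
docker run -d --name catnip-shop catnip-shop:1.1   # replace it with a new one
```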
That was a really lame and childish attempt at explaining containers and a bit of orchestration. Don't know if it helped at all, but it was fun to write.
Think of "containers" for shipping goods: If you don't have those, you need to care about what you are supposed to ship. Technical devices will need to be handled in a different manner than, say, books or food.
With "containers", this all pretty much disappears. You do have a standard sized box that can extremely easy be stacked, carried around, liffted on a truck, shipped all around the world - and only the one filling the container and the one unpacking actually need to care what's inside.
With software containers, things are the same. Running a Java application is considerably different from, for example, running a Node.js or Ruby on Rails server. Likewise, a Red Hat Linux server system is subtly different from an Ubuntu or Debian server installation. There are a bunch of things an application (directly or indirectly) depends upon, and these things lead to an almost traditional "clash" between developers (who build code) and operations teams (who maintain running code in production systems): the application crashes on a production server, the developer says "works on my system" - and everyone has a hard time figuring out what exactly went wrong.
Enter containers: they try to establish a standardized way of packaging an application, including most (best case: all) required dependencies, and make sure the operations team has the limited set of operations (start a container, stop a container, update the container to a newer version) required to fully operate an application - without having to bother much about which technology was used to build the application or which operating system the developer used for building it.
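That limited set of operations could look roughly like this - a sketch, with the registry address, image name, and ports invented for illustration:

```shell
docker pull registry.example.com/shop/app:2.0   # fetch the new version
docker stop shop-app                            # stop the running container
docker rm shop-app                              # remove it
docker run -d --name shop-app -p 80:8080 registry.example.com/shop/app:2.0   # start the new one
```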
So from that point of view, containers add a bit more standardization compared to running a virtual machine - and make the process actually usable. You could indeed achieve the same with a VM, but then you couldn't just hand your operations people an application to run: you would have to completely build and configure a VM template they can import into whatever environment they use (VMware, ...) and start without a second thought.
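With containers, the "template" the developer hands over shrinks to a short build recipe. A minimal Dockerfile sketch for a hypothetical Node.js app (the file names and port are assumptions):

```dockerfile
FROM node:18-alpine        # the base image pins the OS and runtime the developer tested on
WORKDIR /app
COPY package*.json ./
RUN npm ci                 # install the exact dependency versions from the lockfile
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```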
There's a load more to containers, of course, but that should be the essence, I guess...
I also loved this:
Bless this post.
Bless you, Fippy.
🤣🤣🤣🤣🤣
hahahaha
Awesome !!!
Docker is like a little self-contained operating system that can run on top of your computer's operating system. The nice thing is that you can write in code what this little OS has installed and how it's configured, so that other people and computers can easily build the exact same little OS that you have. The important thing is that it's portable, can be built in a reproducible way, and it can be run on a developer's computer, a staging environment, or on a production server in the same way.
Is it, in a way, a virtual environment, but you can break it without breaking your entire system?
Exactly! You can do whatever you want to a Docker container and (in theory) it should be a nice self-contained environment that won't have any consequences on your computer's OS.
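A quick way to see that isolation for yourself, using the official ubuntu image (everything here is throwaway):

```shell
docker run --rm -it ubuntu bash   # start a disposable Ubuntu container
# inside the container, wreck things as thoroughly as you like:
rm -rf /usr                       # catastrophic inside the bubble...
exit                              # ...but once you exit, the container is gone and your own OS is untouched
```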
You could go to the grocery store to pick up your cooking ingredients, and maybe make an additional trip to that farmer's market, which happens to be further away, to grab other things - OR you can rely on Blue-Apron/Plated/Hello-Fresh/Peach-Dish to get you everything you need. You would use it because it's a time saver, and if you were to tell your friend to try a specific dish from one of the aforementioned delivery services, it would very likely come out the same way you would make it.
I personally buy my own groceries though.
I think I might get it:
You are at a taco party:
There are many, many ingredients to make your own taco.
You must assemble it on your own plate.
You made a super delicious taco on your plate.
You're gonna make it again without disturbing others' plates.
Friend wants what you're having.
You could "clone" your plate/make that taco on his plate (?)
Or you give him the same recipe and he tries to make it on his plate (see the Docker sketch after this list).
He adds -- like, I dunno, Twizzlers -- which are not on your taco.
He does not harm your taco, but he screws up his own.
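In Docker terms, the recipe is a Dockerfile and the cloned plate is a built image - a sketch with made-up image and account names:

```shell
# Option 1: "clone the plate" - share the finished image itself
docker tag my-taco yourname/my-taco
docker push yourname/my-taco       # your friend then runs: docker pull yourname/my-taco

# Option 2: "share the recipe" - send him the Dockerfile and he builds his own
docker build -t his-taco .         # Twizzlers and all, on his own plate
```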
I recently posted this on the topic - Why we should care about containers for development.
Here is a summary:
Containers provide a packaging and deployment mechanism for our application and its dependencies. The container registry is a powerful concept that helps with the deployment and distribution of our application.
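A sketch of that build-push-pull flow (the registry host and image name are placeholders):

```shell
docker build -t registry.example.com/team/app:1.0 .   # package the app and its dependencies
docker push registry.example.com/team/app:1.0         # publish it to the registry
# on any machine that can reach the registry:
docker pull registry.example.com/team/app:1.0
docker run -d registry.example.com/team/app:1.0
```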
Containers also improve the "inner loop" of our development experience when working locally, particularly as we trend towards building microservices over monolithic applications. They provide greater parity across local and remote environments including the cloud and help our infrastructure to become more immutable.
The vibrant ecosystem of tooling around containers also helps us consume cloud-native platforms and development methodologies. Whether we are using serverless containers, a PaaS platform that supports containers, or an orchestrator like Kubernetes, we focus on our application instead of thinking about and managing the individual host or hosts we deploy it to.
I wrote a little blurb about Docker and Crystal a while back:
dev.to/jvarness/demystifying-conta...
It kind of went over my head at this moment :( But when I'm a little more familiar with it, I will revisit the post. Saved!
Thanks for linking it!
Not to boast, but here is an example use case I wrote about:
A docker appreciation post and use case with tensorflow
Bernardo