Recently, I started playing with Docker. I must say that it is a really cool thing, and I liked it a lot.
But first, let's talk a bit about containers. Containers help not only with the efficiency, elasticity, and reusability of hosted applications, but also with the portability of the platform and the applications themselves.
Containers are widely used these days because they are:
- Flexible: Even the most complex applications can be containerized.
- Lightweight: Containers leverage and share the host kernel.
- Interchangeable: You can deploy updates and upgrades on-the-fly.
- Portable: You can build locally, deploy to the cloud, and run anywhere.
- Scalable: You can increase and automatically distribute container replicas.
- Stackable: You can stack services vertically and on-the-fly.
Often, a software application with all its dependent services (databases, messaging, filesystems) is made to run in a single container. However, container characteristics and agility requirements might make this approach challenging or ill-advised. In these instances, a multi-container deployment may be more suitable.
Containers boost the microservices development approach because they provide a lightweight and reliable environment to create and run services that can be deployed to a production or development environment without the complexity of a multiple machine environment.
An example would be a database-driven Python application accessing services such as a MySQL database, a file transfer protocol (FTP) server, and a web server on a single physical host, or a Java Enterprise Edition application with an Oracle database and a message broker running on a single VM.
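To make the multi-container idea concrete, here is a minimal sketch of running a database-driven Python application as two cooperating containers instead of one. The application name `myapp` and its environment variables are hypothetical placeholders; only the `mysql` image and the Docker commands themselves are real.

```shell
# Create an isolated network so the containers can reach each other by name.
docker network create myapp-net

# Run MySQL in its own container (official image from Docker Hub).
docker run -d --name db --network myapp-net \
  -e MYSQL_ROOT_PASSWORD=secret mysql:8

# Run the (hypothetical) Python application in a second container,
# pointing it at the database container by its name, "db".
docker run -d --name web --network myapp-net \
  -e DB_HOST=db -p 8000:8000 myapp:latest
```

Each service can now be updated, scaled, or replaced independently, which is exactly the agility argument made above.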
Some use cases where containers are a good option might be a software provider who needs to distribute software that other companies can reuse in a fast and error-free way, or a data center that is looking for alternatives to shared hosting for some database apps in order to minimize the amount of hardware processing needed.
There are many container providers available, such as rkt (formerly Rocket), Drawbridge, and LXC, but one of the major providers is Docker.
Docker is a container platform that enables true independence between applications, infrastructure, and developers. It is a platform for developers and sysadmins to develop, deploy, and run applications with containers.
First, I should explain what exactly a container is. A container is simply a runtime instance of an image.
An image is an executable package that includes everything needed to run an application:
- the code,
- a runtime,
- environment variables, and
- configuration files.
In other words, an image is the blueprint from which a container will be created.
A container runs natively on Linux and shares the kernel of the host machine with other containers.
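The image-versus-container distinction is easiest to see on the command line. A sketch using the official `hello-world` image from Docker Hub:

```shell
# Pull the image -- the blueprint -- from Docker Hub.
docker pull hello-world

# Start a container: a runtime instance created from that image.
docker run hello-world

# The image is listed once, while each "docker run" above created
# a separate (now exited) container from it.
docker images
docker ps -a
```

Running `docker run hello-world` a second time would create yet another container, but still from the same single image.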
The third major element (besides images and containers) is the registry. Registries store images for public or private use. The best-known public registry is Docker Hub, which stores many images developed by the community, but private registries can be created to support internal image development at a company's discretion.
There are two versions of Docker: Community Edition (CE) and Enterprise Edition (EE). So far, I have used only the Community Edition, and this edition is more than enough for developers who want to start working with Docker, run some tests, and find out what exactly Docker is capable of.
Docker CE has two update channels, stable and edge:
- Stable gives you reliable updates every quarter.
- Edge gives you new features every month.
I am using the Stable version, because Edge could sometimes be buggy. From time to time an update failed, and the only way to get Docker working again was to reinstall it using the latest Stable version.
It is not that I didn't try to solve the issues with the Edge version; GitHub is usually full of recommendations for what to do when an update fails.
I simply feel more comfortable working with the Stable version.
Docker CE and EE are supported on many platforms, on cloud and on-premises.
Docker for Mac and Docker for Windows are packaged with the CE version only.
Amazon Web Services and Microsoft Azure support both CE and EE, while IBM Cloud supports only EE.
As for server operating systems, CentOS, Oracle Linux, Red Hat Enterprise Linux, SUSE Linux Enterprise, Ubuntu, and Windows Server 2016 support EE, while Debian, Fedora, Ubuntu, and CentOS also support CE.
I will work with the Docker for Windows edition, Stable version. More info here:
Docker for Windows
I forgot to mention that this version of Docker is available only for Windows 10 Professional and Enterprise editions. For earlier versions, get Docker Toolbox.
The installation is pretty straightforward: download the installer and launch it.
Right after you start the installation, the package will unpack and install by itself. There is no need to do anything, and it will be done in a matter of seconds.
After the installation completes, go to Programs and start Docker for Windows.
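Once Docker for Windows is running, a quick sanity check from a console confirms the install. A minimal sketch (the exact version string will differ on your machine):

```shell
# Confirm the Docker client is installed and on the PATH.
docker --version

# Confirm the daemon works end to end: this pulls the official
# hello-world image (if not already present) and runs a container from it.
docker run hello-world
```

If `docker run hello-world` prints its greeting message, the engine, the registry connection, and the container runtime are all working.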
A welcome pop-up window will appear. You can log in to Docker using your Docker ID.
If you don't have a Docker ID, you can create one at: Docker Cloud
It is good to have a Docker ID, because it lets you push and pull your repositories on the public Docker Hub. We will pull some public repositories from Docker Hub later on.
So, I now have Docker for Windows installed on my Windows 10 machine. In the next part, I will talk about how to work with Docker from the PowerShell console, how to pull images from Docker Hub, and more.