Docker Containers
Containers rank high on the agenda of digitization strategies that focus on IT architecture. Containerization is considered the most significant upheaval in the IT world since the introduction of hardware virtualization with virtual machines (VMs). This new variant of virtualization gained momentum with the trend away from monolithic applications and towards so-called microservices. Like VMs, containers provide an environment in which applications can run. However, while a VM represents an entire computer environment, a container holds only what is required to execute the application, such as operating system components like libraries and binaries. This makes for a much more lightweight form of virtualization. Probably the best-known container technology is Docker, which is why the term “Docker container” is on everyone’s lips.
What are Docker containers?
Docker containers are encapsulated units that can run independently of one another, no matter where they are. Think of them as freight containers in which one or more workers sit and do their job. These workers are in reality applications such as PHP, MySQL and Apache, sitting together in one container (see the graphic for an example). For the workers it makes no difference whether the freight container stands in Munich, New York or Sydney: it always looks the same from the inside, and the same conditions apply. The same holds for the applications inside a software container.

What is the difference between virtual machines and Docker containers?
Containers are referred to as a lightweight form of virtualization because several of them, each with applications isolated from one another, can run within a single operating system installation. To achieve the same separation of applications with hardware virtualization, two complete VMs, each including its own operating system, would have to be started. As a result, VMs require significantly more resources.
In contrast to VMs, containers do not emulate the hardware but virtualize at the level of the operating system. VMs run on a physical server that is virtualized with the help of a so-called hypervisor such as VMware ESXi. Containers are virtualized at a higher level, without a hypervisor: the installed operating system together with the container engine takes care of the isolation. This type of virtualization is significantly less complex than emulating complete hardware.
What are the advantages of Docker containers?
The new technology is particularly popular with developers because Docker containers are significantly more efficient and resource-saving than VMs: they require less CPU and memory.
Another advantage is their portability. As self-contained application packages, they can be executed on a wide variety of systems. They can be used not only for local development but also run without problems on production servers, regardless of the infrastructure or cloud platform chosen. This results in higher speed and consistency in development, debugging and testing. Discussions between development and operations along the lines of “but it still worked locally for me” are a thing of the past.
Containers are highly scalable. If additional instances of an application are required, e.g. because traffic on a website increases thanks to a successful marketing campaign, new containers can easily be started and later stopped again. Hundreds of containers can be spun up or torn down within seconds. Managing such a large number is made easier by orchestration solutions.
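To give an idea of how quickly instances can be started and stopped, here is a minimal sketch using the Docker SDK for Python. It assumes the docker package is installed and a local Docker engine is running; the nginx:alpine image simply stands in for any web application.

import docker

# Connect to the local Docker engine.
client = docker.from_env()

# Start five additional instances of a stand-in web server image.
replicas = [
    client.containers.run("nginx:alpine", detach=True, name=f"web-{i}")
    for i in range(5)
]
print(f"{len(replicas)} extra instances running")

# When the traffic peak is over, stop and remove them again.
for container in replicas:
    container.stop()
    container.remove()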
What is container management?
An orchestration solution is required to manage a large number of containers efficiently. The best known are Kubernetes, Docker Swarm and Amazon’s Elastic Container Service. Among other things, they take care of starting and stopping containers, their optimal placement on the available compute nodes, and the automated adjustment of the number of compute nodes when the load changes.
What are container images?
Now that the advantages of the new technology are obvious, the question arises how it can be built and used. The basis for containers are so-called images: simple files that eliminate the need to install and update software. Images contain all the components needed to run an application in a platform-independent way, so an image can be transferred to another system simply by copying it. A container can then be started from the image. Images are made available via a registry that stores, manages and provides them. The best-known public registry is Docker Hub.
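As an illustration, the following sketch pulls an image from Docker Hub with the Docker SDK for Python and lists the images that are then available locally. The redis:7-alpine image is just an arbitrary public example; any image from the registry would do.

import docker

client = docker.from_env()

# Pull an image from the public Docker Hub registry.
image = client.images.pull("redis", tag="7-alpine")
print("Pulled:", image.tags)

# Show which images are now stored locally.
for img in client.images.list():
    print(img.short_id, img.tags)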
What is the container life cycle?
Of course, an image is not set in stone and can be adapted as desired. This adaptation process is also known as the container life cycle. I would like to illustrate this with an example:
Typically, the life of a Docker container begins with the download of an image from a registry. As mentioned above, a registry is a kind of storage facility for container images.
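The remaining steps of the cycle can be sketched roughly as follows, again as a hedged example using the Docker SDK for Python; the image, the command and the new image name are arbitrary placeholders.

import docker

client = docker.from_env()

# 1. Download (pull) an image from the registry.
client.images.pull("ubuntu", tag="22.04")

# 2. Start a container from the image and adapt it, e.g. by running
#    a command inside the running container.
container = client.containers.run(
    "ubuntu:22.04",
    command="sleep 300",
    detach=True,
)
container.exec_run("apt-get update")

# 3. Persist the adapted state as a new image (the adaptation step of the life cycle).
container.commit(repository="my-ubuntu", tag="customized")

# 4. Stop and remove the container; the new image remains available.
container.stop()
container.remove()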
Portainer
Complex solutions are not always required to simplify everyday work with containers; Portainer is one such simple container management solution. With the Community Edition, the administration of individual Docker engines or of a Swarm cluster is usually done via the Docker CLI. Portainer offers a free, intuitive and easy-to-deploy GUI for Docker that enables the management of containers, volumes, etc.
Portainer itself runs as a single container that can be deployed as a Linux or a native Windows container. In order to manage a Docker engine, Portainer needs access to the engines it is supposed to manage.
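On Linux, a common way to give Portainer this access is to mount the local Docker socket into the Portainer container. The following is a minimal sketch with the Docker SDK for Python; the image name, port and volume follow the usual Portainer examples and may need adjusting for your setup.

import docker

client = docker.from_env()

# Start Portainer as a single container and hand it the local Docker socket
# so it can manage this engine. Port 9000 serves the web UI.
client.containers.run(
    "portainer/portainer-ce",
    detach=True,
    name="portainer",
    ports={"9000/tcp": 9000},
    volumes={
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "portainer_data": {"bind": "/data", "mode": "rw"},
    },
    restart_policy={"Name": "always"},
)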
Essentially, Portainer is an open-source container management tool that removes the need to type commands: it offers a graphical user interface for everything that can otherwise be done via the Docker command line.
Portainer enables users to manage their own containerized applications, for example customer applications or tools for internal use. This applies both to individually operated containers and to operation on clusters based on, for example, Kubernetes or Docker Swarm. Permissions can also be assigned through Portainer.
Telegraf
Telegraf is an application for collecting metrics from a wide array of input sources and writing them to a wide array of output sinks. It comes as a so-called agent that can collect, process, aggregate and write metrics. In the PulseSensor Maker project, for example, Telegraf is used to receive a JSON-based MQTT stream from an MQTT broker and to push this data to the Grafana Live API for near-real-time display of measurements in the Grafana application.
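Telegraf itself is configured through plugins rather than code, but the data path it implements here can be sketched in Python using the paho-mqtt and requests packages. Broker address, topic, field name and the Grafana Live push endpoint are placeholders and will differ in a real installation.

import json
import requests
import paho.mqtt.client as mqtt

# Placeholder connection details -- adjust to your own setup.
BROKER, TOPIC = "mqtt.example.org", "pulsesensor/measurements"
GRAFANA_PUSH = "https://grafana.example.org/api/live/push/pulsesensor"
HEADERS = {"Authorization": "Bearer <api-token>"}


def on_message(client, userdata, msg):
    # Forward each JSON MQTT message to Grafana Live as a line-protocol point.
    payload = json.loads(msg.payload)
    bpm = payload.get("bpm", 0)
    requests.post(GRAFANA_PUSH, data=f"pulse bpm={bpm}", headers=HEADERS, timeout=5)


# paho-mqtt 1.x style client; version 2.x additionally expects
# mqtt.CallbackAPIVersion as the first constructor argument.
client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()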