Are you still running your websites, databases, or Linux servers without Docker?
Maybe you heard about it, but you’re not sure why you should use it? Or you want to do it but you don’t know how?
Don’t worry, it’s not that hard. We will cover all the cool advantages of containerization, and how to easily migrate your existing services and apps into Docker Containers and manage them!
Why should you migrate to Docker?
Docker is an open-source tool to containerize applications. It allows you to run applications isolated from each other, and deploy them in packages. These packages, called Docker Images, contain the application and all necessary files and libraries to run it.
This makes the management and deployment process very easy, because we can just run a Container without caring about what’s installed on the host Operating System underneath. We can always be sure that our application runs as intended, no matter which Linux Distribution or other software we have installed on our server.
This sounds very similar to virtualization, so Containers are often compared to virtual machines, but they are not really the same. Docker Containers share the Operating System kernel of the host, because a Container doesn’t include a full Operating System, only the libraries and dependencies needed to execute the application.
That’s why Containers are very lightweight compared to virtual machines, so they can start very fast and are more resource-efficient. To run Linux containers on Windows 10, check out WSL and Docker Desktop.
Deploying Applications in Containers has become a very common standard in the IT industry, because of these advantages.
Some basics about how to use Docker
Are you excited to try it out? Awesome!
Firstly, we need to install Docker on our Linux server. The good news is that it can be installed on nearly every distribution pretty easily; just refer to the official installation guide on Docker’s website.
On Ubuntu 20.04 LTS and newer versions, just install the docker.io package from the official repository.
sudo apt install docker.io
To manage Docker without the sudo command, add your administrative user to the docker group.
sudo usermod -aG docker <username>
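Note that the new group membership only takes effect after you log out and back in. As a quick check (a minimal sketch, assuming the Docker daemon is already running), you can start a fresh login shell and verify that the daemon is reachable without sudo:

```shell
# start a new shell with the docker group applied, without logging out
newgrp docker
# if this prints server information instead of a permission error, it works
docker info
```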
Let’s start a hello-world container, because then you will see how easy it is to run applications. Remember, Container Images are packages that contain applications and all necessary libraries. The following command will run a new Container instance from the Container Image “hello-world”; it will also automatically search for and pull down the Image.
docker run hello-world
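Since the hello-world Container prints its message and exits immediately, it won’t show up in the normal container list. As a small aside, you can list stopped containers too:

```shell
# list all containers, including ones that have already exited
docker ps -a
```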
How to find Docker Images?
To get new applications, just visit the official Docker Hub. Most vendors and communities maintain their own Images there, so you will find Images for many applications.
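As a small example (the tag name is assumed for illustration, check Docker Hub for the tags that actually exist), you can also pull an Image with a specific version tag instead of the default latest, which makes your deployments more reproducible:

```shell
# pull a specific version of the official NGINX image (tag assumed)
docker pull nginx:1.25
# list all Images that are stored locally
docker images
```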
Run a simple NGINX webserver
Let’s also run a more useful application on our server, for example, an NGINX webserver! We don’t need to install it anywhere; we can simply download the NGINX Image and run it.
Because Docker also isolates the app inside a separate network, we need to tell it to expose port 80. To start the Container in the background, we also append the -d parameter. Then check if your Container is running correctly with the command docker ps.
docker run -p 80:80 -d nginx
docker ps
Now we can connect to our webserver. This was easy, right? Well, one problem we have with Containers is that their filesystem is not persistent: whenever we remove or re-create the Container, all changes made to the filesystem inside it are gone. It simply starts again from the same Docker Image we’ve downloaded.
Persistent Storages with Volumes
But don’t worry, there is an easy way to make files inside containers persistent.
The solution is called Docker Volumes. You can mount a specific location on the host into a mount point inside the Container. For example, we know that the official NGINX Image always serves websites from the /usr/share/nginx/html directory. We can now just mount a Volume into the Container at that path, so the websites are stored persistently.
The following command will create a new Volume and mount it as persistent storage inside our NGINX Container. Note that whenever we want to change the configuration of existing Containers, we need to remove them first.
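A minimal sketch of that removal step, assuming the NGINX container from the previous section is still running:

```shell
# find the ID or name of the running container
docker ps
# stop and remove it (replace <container> with the ID or name from the docker ps output)
docker stop <container>
docker rm <container>
```

After that, you can start the Container again with the new Volume configuration.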
docker run -p 80:80 -v nginx_data:/usr/share/nginx/html -d nginx
Migrate existing Data to a Docker Volume
Docker Volumes allow us to store data persistently on the host. But how do you migrate existing data into a Volume? The data inside Volumes is stored in the /var/lib/docker/volumes directory, but it’s somewhat hidden away on the host.
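A quick way to find out where a named Volume actually lives on the host is docker volume inspect. A minimal sketch, assuming the nginx_data Volume from above exists and your website files live in /var/www/html (both names are from the examples in this article):

```shell
# show the host path behind the named volume
docker volume inspect --format '{{ .Mountpoint }}' nginx_data
# copy the existing website files into the volume
# (run as root, because /var/lib/docker is owned by root)
sudo cp -a /var/www/html/. "$(docker volume inspect --format '{{ .Mountpoint }}' nginx_data)/"
```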
Migrate static files
Instead of using a named Volume, you can also mount a specific location on the host, a so-called bind mount. The following commands mount the existing website directory on the host into an NGINX webserver Container.
# stop the running NGINX server on the host
sudo systemctl stop nginx
sudo systemctl disable nginx
docker run -p 80:80 -v /var/www/html:/usr/share/nginx/html -d nginx
For static files, websites, etc. the method above works fine. But what if you want to migrate a database into a Docker volume?
Migrate a MySQL Database
The following commands migrate a MySQL Database that was running on the host into a Container. Make sure the MySQL version of the Container Image matches the version that created the existing data directory.
# stop the running database on the host
sudo systemctl stop mysql
sudo systemctl disable mysql
docker run -v /var/lib/mysql:/var/lib/mysql -d mysql
Note that if you don’t stop the database first, you can get into trouble, because databases hold some data in memory until it’s written back to the hard disk. For small databases where not many changes are happening, this mostly isn’t a big problem. But for a large database or website, it can definitely cause some problems.
To avoid inconsistencies or damaging the database relations, always stop the database server before copying the files away!
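Before touching the data directory, it’s also a good idea to take a logical backup that you can restore if anything goes wrong. A hedged sketch using mysqldump (the user, password prompt, and file name are assumptions for illustration):

```shell
# dump all databases to a file while the server is still running on the host
# (-p prompts for the root password)
mysqldump -u root -p --all-databases > all_databases_backup.sql
```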
Manage Docker Containers with Portainer
I hope I could convince you that Docker isn’t actually that hard. But let me tell you this: you can also manage all your Containers, Volumes, Networks, and much more in a nice graphical way!
Portainer is an open-source management tool for Docker. You can manage single hosts, remote hosts, and also Cloud environments and container orchestrators.
To deploy Portainer, you can simply use Docker and start it in a Container. We need to pass through the Docker socket, otherwise Portainer doesn’t have the permission to manage the host. We also expose ports 8000 (remote agents) and 9000 (web interface), and of course store the data in a persistent Volume. To make Portainer start automatically with the system, we also append a restart policy.
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
Now that our management tool is up and running, we can access it in the web browser. After you’ve set up your credentials, just select “Docker” to manage the local environment and confirm the settings with “Connect”.
Now you have access to the local host and can manage your entire Docker environment. The web interface makes Docker a lot more intuitive to use, and you can manage Containers, Images, Networks, Volumes, and more in an easy way.
However, it’s still important to understand the fundamentals of Docker and be able to manage it from the CLI as well.
Do you also want to learn how to securely protect the web interface with SSL certs? I’ve made a dedicated tutorial about Portainer and how to securely protect it with a reverse proxy.