Let’s protect your Linux server against hackers, malware, and other threats! Here are some security strategies you can use to defend against common attacks and to optimize your deployments for a smaller attack surface, especially when you’re running administrative web interfaces and Docker containers.
Security is not a simple Check-List!
First, I want to emphasize how important it is that you develop your own security strategy. When you search the web for “How to protect Linux” or “Harden your Linux server”, you’ll find tons of articles. People just throw out their “best practices” (just like this article here :). And you might be tempted to simply grab some commands, run them on your server, and then assume you’re done and don’t need to worry about security anymore!
Well, that’s probably the worst thing you could do, because cybersecurity is much more complicated than a simple checklist. It’s not enough to copy and paste some commands. There are many aspects of server security you need to consider, and even after you’ve gone through all the recommendations, you might still miss something or face edge cases you haven’t thought about. That is, of course, very challenging!
But it’s still a great idea to have a solid setup that’s protected from the most common and critical threats on the web. So my biggest wish is that you carefully think about it and develop your own security strategy, because some recommendations in this article might apply to your scenario, while others might not. With that said, I still want to give you some ideas, tips, and of course practical examples of how I plan a security strategy on my servers. Let’s go!
Always update your Software
This is the most basic and simple thing you should always do, and it’s also the most effective way to protect your Linux server. For most security vulnerabilities, a patch is already available by the time they are disclosed. So you should upgrade your software as soon as a new security patch is released.
On my Ubuntu servers, I use the “unattended-upgrades” package, which should already be installed. You can simply configure it with these two commands.
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
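If you prefer to configure this by hand instead of running dpkg-reconfigure, the reconfiguration essentially writes a small apt configuration file like this (a sketch; your distribution may ship slightly different defaults):

```
# /etc/apt/apt.conf.d/20auto-upgrades
# refresh the package lists and install pending security upgrades daily
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```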
Don’t forget to update your Docker Containers
Another important step most tutorials forget is to update your Docker containers as well! Docker containers don’t download and install updates automatically. The correct way to upgrade them is to destroy the container and redeploy it from a new image.
Of course, I don’t want to do this manually all the time, so I use a tool called Watchtower. I’ve made a separate tutorial on Watchtower and how to configure it, here.
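As a quick sketch of how such a deployment can look, here is a minimal docker-compose file for Watchtower (the image name comes from the containrrr/watchtower project; the interval value is an example assumption you should tune yourself):

```yaml
# docker-compose.yml – minimal Watchtower sketch
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # Watchtower talks to the Docker daemon through its socket
      - /var/run/docker.sock:/var/run/docker.sock
    # remove old images after updating, check once per day (86400 s)
    command: --cleanup --interval 86400
```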
Secure your SSH Server
There are many tutorials on the web about securing your SSH server. Because it’s the main management tool everyone uses on Linux, it’s very important to protect your server against SSH attacks. But it’s also important to understand that SSH by itself is a very secure protocol; there is nothing that makes it insecure by default. Most problems with SSH are simply the result of bad habits.
If you’re using credentials to authenticate to your server, always pick a strong password. But I personally prefer to create a separate user and set up private and public SSH keys. That has two advantages: first, it’s more comfortable than using passwords, and second, I avoid accidentally reusing my password.
Create a separate user on Linux
Let’s create a separate user on Linux. The -m flag will also create a personal home folder automatically and the -s flag will set the default shell to bash.
useradd <username> -m -s /bin/bash
Also add the user to any administrative groups, like sudo and adm. If you’re also using docker, you might want to add the user to this group, too.
usermod -aG sudo,adm,docker <username>
Because we want to execute sudo commands, we also set a strong password for this user.
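Setting that password is a single interactive command (run as root, or from an account that already has sudo rights):

```
# set or change the password for the new user interactively
sudo passwd <username>
```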
Now, we could technically just use this user to authenticate via password and manage our server. But to get rid of the password authentication we also want to set up private and public SSH keys, of course.
Set Up private and public SSH Keys
Create this private and public key pair on any client machine you use to connect. You can do this in PowerShell or in the Linux shell.
ssh-keygen -b 4096
Don’t share the private key (id_rsa), with anyone! Protect it like you would protect your root password. But the public key (id_rsa.pub) can be distributed on all your Linux servers for authentication.
scp ~/.ssh/id_rsa.pub root@<ip-address-linux-server>:/home/<username>/.ssh/authorized_keys
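Note that the scp command above overwrites any existing authorized_keys file on the server. If it’s available on your client, ssh-copy-id is a safer alternative, because it appends the key and fixes the file permissions for you:

```
# append the public key to ~/.ssh/authorized_keys on the server
# and set the correct ownership and permissions
ssh-copy-id -i ~/.ssh/id_rsa.pub <username>@<ip-address-linux-server>
```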
Disable root login and password authentication in SSH
It’s also a good idea to disable the root login and force all users to authenticate with private and public keys. Just change these two lines in the “/etc/ssh/sshd_config” file.
...
# Authentication:
PermitRootLogin no
...
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
...
After changing the configuration file, make sure you’re restarting your SSH server.
sudo systemctl restart ssh
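Tip: a typo in sshd_config can lock you out of the server. Before (or right after) restarting, you can ask the SSH daemon to validate the configuration; it prints nothing when the file is valid:

```
# test the sshd configuration for syntax errors
sudo sshd -t
```

It’s also a good idea to keep your current SSH session open and test a fresh login in a second terminal before logging out.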
Use an Access Proxy and Two-Factor Authentication
Using SSH is great if you’re the only one administrating the server: it’s very straightforward and secure. But if you want to manage shell access in a team, or need better audit logging, I also recommend looking at an access proxy and two-factor authentication (2FA).
For example, I recently made a video about the software Teleport. Teleport allows you to authenticate to a Linux shell through a centralized proxy server that you can completely self-host in your own environment or run in the cloud. You can also protect everything with two-factor authentication, which is extremely useful because it adds another verification layer. For example, you authenticate with a username and password, but you also need a second factor, like an authentication app on your phone that generates a new token every time you want to log in to a server.
Only expose services that you need
First, get an overview of all the services and ports your server is listening on. Every application that wants to receive data from a client needs to listen on a port. The common ones are well known: port 80 for HTTP traffic, 443 for HTTPS, 22 for SSH, and so on.
You can easily do that with the command “ss -lptn” on the Linux command line, which gives you a list of all applications currently listening on network ports.
Go through all of them and find out whether you really need them, what they are for, and what exactly they are doing. For example, you can see that I have an SSH server listening on port 22, and entries with the IP address 0.0.0.0 are applications that listen on all incoming interfaces.
If you’re running old, legacy applications that use unsecured protocols or have serious security vulnerabilities, you don’t want them to be accessible on your server. The best approach is to really go through all running services and check whether you need them to listen on that port or not.
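To quickly spot only the services exposed on all interfaces, you can filter the ss output; a small sketch (the pattern simply matches the 0.0.0.0 and [::] wildcard addresses):

```
# show only sockets that listen on all IPv4 or IPv6 interfaces
ss -lptn | grep -E '0\.0\.0\.0|\[::\]'
```

Services that only need local access (for example, a database behind a web app on the same host) are often better bound to 127.0.0.1 in their own configuration.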
Use a firewall system
You can also enable a firewall, which is a great way to control the traffic to your Linux server. The Uncomplicated Firewall (UFW) is a firewall based on the IPTables stack that you can easily enable on your server. Once enabled, it will drop all incoming traffic and only allow what you explicitly describe in a rule.
Before activating it, make sure you don’t lock yourself out of your server: always add a rule for your primary management tool first, in my case SSH.
sudo ufw allow 22
sudo ufw enable
sudo ufw status
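UFW can also rate-limit connections, which temporarily blocks IP addresses that open too many connections in a short time. A hedged sketch of how I would combine that with typical web server rules (adjust the ports to what your server actually runs):

```
# rate-limit SSH instead of allowing it unconditionally
sudo ufw limit 22/tcp
# allow normal web traffic
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
```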
Use reverse proxies when possible
Avoid unsecured protocols like plain HTTP that are not encrypted. A great way to expose web applications that don’t come with a built-in HTTPS server is to use a reverse proxy. A reverse proxy sits between the clients and the web applications and forwards requests. It is sometimes combined with a load balancer, or can scan for malicious traffic.
I recently created a tutorial about the Nginx Proxy Manager, a simple reverse proxy based on the NGINX web server that is very easy and intuitive to configure through a web interface. However, consider it a hobby project, though it is sufficient for small environments. Keep in mind that there are other products around, like Traefik, or you can use NGINX itself as a reverse proxy.
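If you’d rather configure NGINX as a reverse proxy by hand, a minimal server block looks roughly like this (the hostname, upstream port, and certificate paths are placeholders you’d replace with your own):

```
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/certs/app.example.com.pem;
    ssl_certificate_key /etc/ssl/private/app.example.com.key;

    location / {
        # forward incoming requests to the internal web application
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```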
Don’t expose administrative interfaces to the public internet
Another general piece of advice is to never expose administrative interfaces to the public internet! This alone can significantly reduce the attack surface. There are several techniques you could think about.
I tend to bind administrative web interfaces to an internal network and use VPNs or access proxies to securely authenticate to them.
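With Docker, one simple way to do this is to publish an administrative port only on the loopback interface; a sketch (the image name and ports are placeholder examples):

```
# the admin UI is reachable on 127.0.0.1:8080 only – e.g. via a VPN,
# an access proxy, or an SSH tunnel – but not from the public internet
docker run -d -p 127.0.0.1:8080:80 <some-admin-web-app-image>
```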
Use an Intrusion Prevention System
Another strategy is to use an Intrusion Prevention System (IPS). Sometimes you will also hear the term Intrusion Detection System (IDS), but an IDS only detects that something bad is happening; of course, we also want to prevent it.
There are some common open-source IPS tools available; the most popular one is fail2ban. Fail2ban is very simple and easy to set up: it’s a service that goes through the log files of your applications and bans IP addresses that produced too many failed login attempts against your server.
This is great for a service like SSH, or web servers to block brute force attempts, but of course, it doesn’t really protect you from advanced attacks.
This is an example of how to protect SSH with fail2ban.
sudo apt install fail2ban
sudo systemctl enable fail2ban --now
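Fail2ban reads its defaults from jail.conf, and you override them in /etc/fail2ban/jail.local. A minimal sketch for the SSH jail (the retry and ban values are example assumptions; tune them to your needs):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
# ban an IP for 1 hour after 5 failures within 10 minutes
maxretry = 5
findtime = 10m
bantime  = 1h
```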
Fail2ban can be used for other services as well, but by default it analyzes the log files of your SSH server. Verify the status with the following commands.
sudo fail2ban-client status
sudo fail2ban-client status sshd
It’s worth noting that there are much more versatile Intrusion Prevention Systems out there, but to block automated tools and simple brute-force attacks, fail2ban is still a great option for a home lab server or a small environment.
Isolate your Applications with AppArmor and Docker
Now consider the case where something is still able to infect an application, even though we have limited access to only the applications we need, and configured an IPS and a reverse proxy. Something got through anyway and exploited our web server. How can we make sure the attacker or malware can’t spread out and infect the rest of the system?
Protect Applications with AppArmor
One effective method is to isolate applications, for example with AppArmor or Docker, to limit what the application can do on the system. AppArmor is installed and loaded by default on Ubuntu. It uses per-application profiles to determine what files and permissions the application requires.
With “sudo apparmor_status” you can view the AppArmor profiles that are loaded on your system and whether they are enforced on your applications.
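If a profile is only running in complain mode, you can switch it to enforce mode with the helpers from the apparmor-utils package; a sketch (the nginx profile path is an example, your system may ship different profiles):

```
# install the helper tools and enforce a specific profile
sudo apt install apparmor-utils
sudo aa-enforce /etc/apparmor.d/usr.sbin.nginx
```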
Isolate Applications with Docker
Another way to isolate applications is to use Docker. Docker is absolutely my favorite way of deploying applications on a server, because it not only isolates the application in its own contained environment, it also makes applications easier to maintain.
If you want to get started with Docker, I made several tutorials about it. Here is a great video to start with.
Keep in mind that AppArmor and Docker don’t protect the application itself. But they help protect against privilege escalation, as it is much harder for an attacker or malware to break out of the container and infect the rest of the system.
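Docker’s isolation can be tightened further with a few run-time flags; a hedged sketch (the image name is a placeholder, and not every application tolerates a read-only filesystem):

```
# drop all Linux capabilities, forbid gaining new privileges,
# and mount the container's filesystem read-only
docker run -d \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  <your-app-image>
```

These options follow the same idea as AppArmor profiles: grant the application only what it actually needs.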