
Docker is an awesome technology, and it’s prevalent in nearly every software developer’s workflow. It is useful for creating identical environments and sharing them between development, testing, production, and other stages. It’s a great way to ship a reliable software environment between systems or even to customers. However, as with any technology, one must know how to use it securely.

By default, Docker interacts with other Docker Containers via the Docker Network. Docker Compose makes it very easy to create Docker Networks and link them together – to mimic production environments where each service is isolated in its own network and only interacts via defined interfaces. For example, consider the following system:

  • Application A
    • Database
    • Application API
  • Application B
    • Database
    • Cache Store
    • Application UI

By separating each into its own Docker Network, the systems remain isolated from each other. However, when Application B needs to talk to Application A, then the two networks must be linked together. This is achieved by telling Docker to give a network interface to the Application UI Container that also exists in the network for Application A. In this configuration, the Docker Containers involved remain as secure as the applications running within the containers. There are no ports exposed outside the Docker Networks.
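A layout like this can be sketched in a Docker Compose file. The service names, images, and network names below are illustrative assumptions, not a prescribed configuration; the key point is that the Application UI service joins both networks while everything else stays isolated:

```yaml
# docker-compose.yml -- illustrative sketch; images and names are assumptions
services:
  app-a-db:
    image: postgres:16
    networks: [app-a]
  app-a-api:
    image: example/app-a-api
    networks: [app-a]
  app-b-db:
    image: postgres:16
    networks: [app-b]
  app-b-cache:
    image: redis:7
    networks: [app-b]
  app-b-ui:
    image: example/app-b-ui
    # A member of both networks, so the UI can reach Application A's API
    # while the two systems otherwise remain isolated.
    networks: [app-a, app-b]

networks:
  app-a:
  app-b:
```

With this configuration, no ports are published to the host; containers can only reach each other over the networks they share.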

Now consider that the Application UI is *not* running in a Docker Container – that this setup exists entirely to help developers work on the Application UI project on their local system. Or consider that the system is being used to test out features. This might be done on a local laptop or desktop system, or it might even be done on a cloud server. To achieve this, however, one must expose a port from the container to the local host.

Docker offers several ways to achieve this:

  1. Via the “docker” command-line, using the -p option (or -P to publish every port marked with EXPOSE)
  2. Via the Dockerfile Configuration using the EXPOSE instruction (which only publishes ports when combined with -P)
  3. Via the Docker Compose Configuration using the “ports” attribute

All three of these work in basically the same manner – configuring the local system’s firewall rules to publish the specified port using the format “[ip:]&lt;external port&gt;:&lt;internal port&gt;”. This has the effect of allowing things outside the Docker Container access to things inside the Docker Container, but only on the specified port.
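As a sketch of that format in Compose form (the image and port numbers here are illustrative assumptions):

```yaml
# docker-compose.yml fragment -- publishes container port 80 on host port 8080
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"    # [ip:]<external port>:<internal port>
```

The command-line equivalent would be “docker run -p 8080:80 nginx:alpine”.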

What is the Issue?

Now consider that you’re being conscientious about your own systems and are running a firewall. For example, consider a Debian system using UFW as its firewall manager. To secure the system, you’ve done the following:

$ sudo apt-get install ufw

$ sudo ufw allow OpenSSH

$ sudo ufw enable

At this point, you expect that the *only* access through the firewall is Port 22 for remote access via SSH. Would you expect that exposing a port in Docker would bypass your firewall configuration? Or would you expect that as the system administrator you would need to add a firewall rule to allow access to the Docker Container *if* you wanted outside entities to access it?

Unfortunately, it turns out that Docker integrates with the system firewall in such a way that exposing a port from a container exposes it through the firewall to the outside world. Moreover, the way that Docker interacts with firewalls is essentially invisible to most firewall tooling unless you’re directly interacting with the raw firewall applications (e.g., iptables).

On Linux, Docker creates a set of Netfilter chains to manage its Docker Networks. When a port is exposed from a container, the related chains are munged to allow the port access. By default, this binds the port on all IPv4 addresses (0.0.0.0) and effectively does two things:

  1. Exposes the port through the firewall to the outside world.
  2. Keeps any other Docker Container on the local system from being able to expose the *same* port.

Number 2 is an annoyance but not a security threat. It’s annoying because if one wants to work on multiple projects and use Docker only for the related infrastructure, then the configurations have to be set up so they don’t conflict with each other.

Number 1, however, is a security threat, especially because if one relies on tools like UFW to check the firewall state, they will not show that the Docker Container is being made visible through the firewall to the outside world. Granted, there are many, many different firewall tools, and it would be impossible for Docker to integrate with all of them; however, users should still be in control of what actually goes through the firewall to the outside world and should have to make a conscious decision to expose a port through it.
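The chains Docker manages can be inspected directly with the raw tooling. The commands below are a sketch of that inspection; they require root and a running Docker daemon, and the exact output varies by Docker version:

```shell
# Show the NAT rules Docker installs for published ports.
sudo iptables -t nat -L DOCKER -n -v

# Show the filter-table chains (DOCKER, DOCKER-USER) that tools
# like UFW do not report.
sudo iptables -L DOCKER -n -v
```

Any DNAT entries listed here for a container port are reachable from outside the host, regardless of what UFW reports.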

How can I mitigate this threat?

Initially, the Docker devs released the `DOCKER-USER` chain to enable users to mitigate this themselves; however, this requires that users know how to manage their firewall rules directly. That can help, but firewall rules are notoriously hard to get right – there is a reason people employ tools like UFW to manage them – so this is really an unacceptable solution. Further, as past regressions illustrate, a break in the DOCKER-USER functionality will leave users fully exposed again.
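As a hedged sketch of what managing the `DOCKER-USER` chain looks like: the rule below drops traffic to published container ports unless it arrives from a trusted source range. The interface name “eth0” and the subnet are placeholder assumptions – substitute your external interface and trusted range. It requires root and a running Docker daemon (Docker creates the chain at startup):

```shell
# Drop externally-arriving traffic to published ports unless it comes
# from the trusted range. "eth0" and 192.168.0.0/16 are placeholders.
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.0.0/16 -j DROP
```

Note that, like any iptables rule, this does not survive a reboot unless persisted (for example, with the iptables-persistent package on Debian) – one more thing to get right by hand.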

One can somewhat mitigate this by changing the Docker Daemon configuration to use a local host address – one in the 127.0.0.0/8 range – or, alternatively, by specifying such an address in any of the methods listed above for exposing a port. This works as a mitigation, but it requires active intervention. Failure to do so leaves the system open to a security threat.
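For example, the daemon’s default bind address for published ports can be set in its configuration file (the path shown is the usual default on Debian; restart the daemon for it to take effect):

```json
{
  "ip": "127.0.0.1"
}
```

With this in /etc/docker/daemon.json, “docker run -p 8080:80 …” binds to 127.0.0.1:8080 unless an address is given explicitly, keeping the port off the external interfaces. The same effect can be had per-port, e.g. “-p 127.0.0.1:8080:80”.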

The best way to see and fully confirm what ports are exposed to other systems is to do a port scan from another host. The de facto tool for this is nmap, and it is available for most systems – nearly every Linux distro provides it. Pre-built binaries are also available for Windows and macOS. An exhaustive port scan can be done using nmap as follows:

$ sudo nmap -p1-65535 <ip>

Where ‘&lt;ip&gt;’ is the IP address of the system under test. Running as a normal user (without sudo, or as an account other than root) will also work, but the scan will take longer.

NOTE: When using resources in the cloud or even on your own corporate networks, be careful to know the applicable security policies. Many organizations do not appreciate a port scan being done without their knowledge. It may even lead to bad outcomes, as the security systems might identify the scanning system as a threat and block it.

If you want to track this issue further, then please follow along in the Docker Issue Tracker.