The Basics of Docker Network Drivers

Docker does a great job when it comes to connecting containers to each other or to the outside world. This functionality is powered by pluggable network drivers.


Seamless network integration is a key factor for container portability. An application running inside a container is able to communicate with other containers or with other, non-containerized applications on the network. It doesn't even have to know that it is running inside a container.

Docker’s Approach to Networking

To provide maximum flexibility, Docker implements a pluggable networking system. You may have thought that the Docker core itself is responsible for networking, but in fact the heavy lifting is done by exchangeable plugins. Those plugins are referred to as network drivers and implement the actual networking functionality.

Which network driver you choose for your containers is up to you, as each driver suits some use cases better than others. Docker ships with several built-in network drivers for elementary networking, and we'll take a look at them here.

A Sane Default: Bridge

When you create a network, the network driver defaults to bridge. This driver probably covers the majority of use cases for standalone containers: they can communicate with each other, have internet access, and their ports can be published to the outside world.

Internally, the Docker Engine creates a virtual Linux bridge on the host system. The bridge connects to a host network interface like eth0 on the one hand and to interfaces from all containers on the other hand. A container accessing the internet sends a request through its eth0 interface, from where it is routed through the Linux bridge to the host's eth0 interface - and from there, the request leaves the host system. Associating incoming responses with a container is done via NAT.
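
As a quick sketch (the network name my-bridge, the container name web, and the nginx image are just placeholder examples), a user-defined bridge network can be created and used like this:

$ docker network create --driver bridge my-bridge
$ docker run -d --network my-bridge --name web -p 8080:80 nginx

Containers attached to my-bridge can reach each other by name, and the -p flag publishes container port 80 on port 8080 of the host.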

Direct Access Using the Host Driver

It is also possible to completely remove Docker's network isolation for standalone containers by using the host driver. This approach has some implications you need to be aware of: the container doesn't receive its own IP address, and it shares the networking namespace directly with the host system, while the other Linux namespaces remain in place for the container.

As there is no NAT involved, the host driver is a good fit in situations where containers handle a wide port range or maximum networking performance is desirable. Note that this driver is only available on Linux hosts and won't work with Docker Desktop for Mac or Windows.
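
For instance, running a web server directly on the host's network stack might look like this (the nginx image is just an example):

$ docker run -d --network host --name web-host nginx

The server is then reachable on port 80 of the host's own IP address; there is no port publishing via -p involved.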

No Driver, No Networking

Disabling a container's network stack can be accomplished using the pre-defined none network, which has its driver set to null. Docker will only create a loopback device lo within the container, but no external network interface like eth0.

Typically, using the none driver makes sense in scenarios where you want to provide a custom network driver instead.
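
To see the effect, you can start a throwaway container on the none network and list its interfaces (the alpine image is just an example):

$ docker run --rm --network none alpine ip addr

The output only shows the loopback device lo and no external interface.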

Distributed Overlay Networks

All of the network drivers mentioned so far connect containers on a single host. To enable multi-host networking for communication between containers on different hosts, you need to create an overlay network using the overlay driver. An overlay network is layered on top of the host networks, connecting all Docker daemons together. Docker even takes care of routing the network traffic to the correct destination container, so there is no need for OS-level routing.

Overlay networks are a large topic that goes beyond the scope of this article. Because overlay networks were originally intended for swarm services, you explicitly have to enable connections from standalone containers. This can be done with the --attachable flag when creating the network:

$ docker network create --driver overlay --attachable my-overlay
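
Note that creating an overlay network requires the Docker daemon to be part of a swarm (for example via docker swarm init). A standalone container can then be attached to the network like to any other one; the container name and the nginx image below are just placeholders:

$ docker run -d --network my-overlay --name web-overlay nginx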

Hint: To provision a Docker daemon on multiple hosts at once, use Docker Machine.

MAC Addresses with macvlan

The macvlan driver has been designed for a very specific set of scenarios. When you're connecting a container to a macvlan network, its virtual network interface receives a MAC address and thus appears as a physical interface. As a result, your container will look like a physical host on the network.

Some applications expect a direct connection to the physical network, implying that routing traffic through Docker's virtual network stack could cause problems. This is where macvlan comes in, and it may even be the key to successfully containerizing a legacy application. When designing a macvlan network, keep in mind that it requires a physical network interface on the host as well as a subnet and a gateway to use.
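
A macvlan network could be created roughly like this, assuming eth0 is the host's physical interface and the subnet and gateway values match your physical network:

$ docker network create --driver macvlan --subnet 192.168.1.0/24 --gateway 192.168.1.1 --opt parent=eth0 my-macvlan

Containers connected to my-macvlan then receive an IP address from that subnet and appear on the physical network with their own MAC address.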

Third-Party Network Drivers

All drivers discussed so far are built-in drivers shipped with Docker. Beyond those, there is a wide range of third-party drivers available on Docker Hub and on vendor websites, covering virtually every requirement. How a third-party driver is installed and configured depends on the particular driver, so check out the corresponding documentation.
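
Many of these drivers are distributed as managed Docker plugins, which can typically be installed with docker plugin install; the plugin name below is purely a placeholder, not a real plugin:

$ docker plugin install vendor/network-plugin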

This article is a brief introduction to the basic network drivers and therefore doesn't give concrete usage examples - but we'll take a closer look at some individual built-in drivers in the future.
