For Docker containers to communicate with each other and with the outside world via the host machine, a layer of networking has to be involved. Docker supports different types of networks, each suited to certain use cases.
When Docker is installed, a default bridge network (named bridge, backed by the docker0 interface on the host) is created. Each new Docker container is automatically attached to this network unless a custom network is specified.
Besides the default bridge, two other networks get created automatically by Docker: host (no isolation between the host and containers on this network; to the outside world they are on the same network) and none (attached containers run on a container-specific network stack).
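To see the difference in practice, you can attach a container directly to the host or none network. A quick sketch; the alpine image is used here only because it is small and ships with the ip tool:
$ docker run --rm --network host alpine ip addr
$ docker run --rm --network none alpine ip addr
The first command prints the host's own interfaces; the second prints only the loopback interface.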
Network drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by
default, and provide core networking functionality:
bridge: The default network driver. If you don’t specify a driver, this is the type of network
you are creating. Bridge networks are usually used when your applications run in
standalone containers that need to communicate.
host: For standalone containers, remove network isolation between the container and
the Docker host, and use the host’s networking directly. host is only available for swarm
services on Docker 17.06 and higher.
overlay: Overlay networks connect multiple Docker daemons together and enable
swarm services to communicate with each other. You can also use overlay networks to
facilitate communication between a swarm service and a standalone container, or
between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
macvlan: Macvlan networks allow you to assign a MAC address to a container, making
it appear as a physical device on your network. The Docker daemon routes traffic to
containers by their MAC addresses. Using the macvlan driver is sometimes the best
choice when dealing with legacy applications that expect to be directly connected to the
physical network, rather than routed through the Docker host's network stack.
none: For this container, disable all networking. Usually used in conjunction with a
custom network driver. none is not available for swarm services.
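As a rough sketch of how a driver is chosen at network-creation time with the -d flag (the overlay example assumes the daemon is already part of a swarm, and the macvlan parent interface eth0 and subnet values are only assumptions for this host):
$ docker network create -d bridge my-bridge
$ docker network create -d overlay my-overlay
$ docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan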
Check the default network list on your machine:
$ docker network ls
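On a fresh install the output typically looks something like this (the network IDs will differ):
NETWORK ID     NAME      DRIVER    SCOPE
9f6f475eba10   bridge    bridge    local
17d5b4e3c8a2   host      host      local
09dd340b0ba1   none      null      local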
The docker0 interface is a virtual Ethernet bridge that connects our containers and the
local host network. If we look further at the other interfaces on our Docker host, we’ll find
a series of interfaces starting with veth.
Every time Docker creates a container, it creates a pair of peer interfaces that are like opposite ends of a pipe (i.e., a packet sent on one will be received on the other). It gives one of the peers to the container to become its eth0 interface and keeps the other peer, with a unique name starting with veth, on the host machine. You can think of a veth interface as one end of a virtual network cable: one end is plugged into the docker0 bridge, and the other end is plugged into the container. By binding every veth interface to the docker0 bridge, Docker creates a virtual subnet shared between the host machine and every Docker container.
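If you want to match a container's eth0 to its host-side veth peer, one way (a sketch, assuming iproute2 is installed inside the container) is to look at the peer interface index that each end prints after the @if marker. The index numbers below are just illustrative:
root@container:/# ip addr show eth0
42: eth0@if43: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
Here if43 tells us the host-side peer has index 43, so on the host:
$ ip link | grep '^43:'
43: vethab12cd3@if42: ...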
Check the veth interfaces using the commands below (bridge-utils provides brctl):
$ sudo yum install bridge-utils -y
$ brctl show
Here there is no container on this bridge yet. Note: if there are no veth* interfaces, no containers are running; for confirmation, check with the docker ps command.
Let's run one container and see:
$ sudo docker run -t -i ubuntu /bin/bash
If you want to run ip addr show eth0 inside the container, you first need to install the network tools:
$ apt-get update && apt-get install -y iputils-ping iproute2
root@b78a7ea8f398:/# ip addr show eth0
Come out of the container without stopping it using Ctrl+P followed by Ctrl+Q.
Now run the brctl show command again; this time we are able to see one veth interface, because one container is running.
[ec2-user@ip-172-31-3-152 ~]$ brctl show
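The output will now look something like this (the bridge id and veth name will differ on your machine):
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242d6f8cbd9       no              veth1a2b3c4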
Also run ifconfig on the host machine to see the interfaces.
Run docker network inspect bridge for more info:
[ec2-user@ip-172-31-3-152 ~]$ docker network inspect bridge
It shows all the containers running in that network.
Now run another container:
[ec2-user@ip-172-31-3-152 ~]$ docker run --rm -it --name myubuntu2 ubuntu
Ping the first container from the second container:
root@6e847848b743:/# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=255 time=0.060 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=255 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=255 time=0.050 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=255 time=0.052 ms
^C
It works because both containers are on the same network.
Create a custom network (a bridge network):
$ docker network create mynetwork --subnet=10.0.0.0/16 --gateway=10.0.10.100
$ docker network ls
Then inspect and check
$ docker network inspect mynetwork
Then create a container on our new network:
$ docker run -it --net mynetwork --name myubuntu3 ubuntu
Once logged in to the container, try to ping the default bridge container's IP... nope, it won't work, because the two containers are on different networks.
To get a particular container's IP address, we can use the docker inspect command like below:
[ec2-user@ip-172-31-3-152 ~]$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 5676df12b29d
172.17.0.2
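If the container is attached to a single network you already know by name, you can also index into it directly. This sketch assumes our custom network mynetwork and the container myubuntu3 from above:
$ docker inspect -f '{{.NetworkSettings.Networks.mynetwork.IPAddress}}' myubuntu3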
To log in to the container, you can use the docker exec command:
[ec2-user@ip-172-31-3-152 ~]$ docker exec -it 72cf3c6c1058 bash
Then try to ping the default bridge container's IP from inside the custom-network container. Note that the ping command doesn't work until you install the ping utilities:
$ apt-get update && apt-get install -y iputils-ping iproute2
root@72cf3c6c1058:/# ping 172.17.0.2
Nope, it won't work, because the two containers are on different networks.
So what do we do? We need to connect our custom-network container to the default bridge network.
[ec2-user@ip-172-31-3-152 ~]$ docker network connect bridge myubuntu3
Then log in to the custom-network container and ping the default bridge container's IP. Yes, it connects: the container is now attached to both networks.
We can even reconfirm it with docker inspect bridge:
[ec2-user@ip-172-31-3-152 ~]$ docker inspect bridge
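If you later want to undo this, the reverse of docker network connect is docker network disconnect, which detaches the container from the bridge again:
[ec2-user@ip-172-31-3-152 ~]$ docker network disconnect bridge myubuntu3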
We are done. To summarize, we can start containers with different network modes:
- --net=bridge: this is the default mode that we just saw, so we can start a container like below.
$ docker run -it --net=bridge ubuntu
- --net=host: with this option Docker does not create a separate network for the container; it uses the host network directly.
$ docker run -it --net=host ubuntu
- --net=container:<containername> or <containerid>: with this option Docker does not create a new network namespace while starting the container, but shares it from another container.
$ docker run -it --name myubuntu ubuntu
Now start another container as follows:
$ docker run -it --net=container:myubuntu ubuntu
You will find that both containers have the same IP address (see the sketch after this list).
- --net=none: with this option, Docker creates the network namespace inside the container but does not configure networking.
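To verify the container mode above, you can compare the interfaces of the two containers. This is only a sketch; alpine is used because it ships with the ip tool, and the container name netowner is made up for the example:
$ docker run -d --name netowner alpine sleep 600
$ docker run --rm --net=container:netowner alpine ip addr
Both containers report the same eth0 interface and IP address.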
Network driver summary
User-defined bridge networks are best when you need multiple containers to
communicate on the same Docker host.
Host networks are best when the network stack should not be isolated from the
Docker host, but you want other aspects of the container to be isolated.
Overlay networks are best when you need containers running on different Docker
hosts to communicate, or when multiple applications work together using swarm
services.
Macvlan networks are best when you are migrating from a VM setup or need your
containers to look like physical hosts on your network, each with a unique MAC
address.
Third-party network plugins allow you to integrate Docker with specialized network stacks.
For the differences between user-defined bridges and the default bridge, see:
https://docs.docker.com/network/bridge/
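One practical difference worth trying yourself: containers on a user-defined bridge can resolve each other by name through Docker's embedded DNS, which does not work on the default bridge. A quick sketch reusing mynetwork and myubuntu3 from above (install iputils-ping inside the new container first, as shown earlier; the container id in the prompt is illustrative):
$ docker run -it --net mynetwork --name myubuntu4 ubuntu
root@1a2b3c4d5e6f:/# ping myubuntu3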