DevOps · Intermediate · 16 min read · Updated March 2025

Container Networking

Container networking controls how containers communicate with each other and the outside world. Docker provides several network drivers — bridge, host, overlay, and none — each suited for different use cases.

How Docker Networking Works

When Docker is installed, it creates a virtual network interface called docker0 (a Linux bridge). Each container gets its own virtual network interface and an IP address on the Docker network.

Docker networking is built on Linux network namespaces — each container has its own isolated network stack (interfaces, routing tables, iptables rules). The Docker daemon manages the virtual network plumbing between containers and the host.

Key networking concepts:

  • Network drivers — Pluggable backends (bridge, host, overlay, macvlan, none)
  • DNS resolution — Containers on the same network resolve each other by service name
  • Port mapping — Expose container ports to the host with -p host:container

Network Drivers Explained

Docker supports multiple network drivers for different scenarios:

  • bridge (default) — Creates an isolated network on the host. Containers on the same bridge can communicate; external access requires port mapping. Best for single-host applications.
  • host — Removes network isolation; the container shares the host's network stack directly (Linux hosts). Best for performance-critical applications (no NAT overhead).
  • overlay — Spans multiple Docker hosts (Docker Swarm). Enables container-to-container communication across machines. Required for distributed applications.
  • macvlan — Assigns a MAC address to the container, making it appear as a physical device on the network. Best for legacy applications that need direct network access.
  • none — Disables all networking. Container has only a loopback interface. Best for batch jobs that need no network access.
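As a sketch, the same driver choices can be made declaratively in a Docker Compose file (image names here are placeholders, not from this article):

```yaml
# docker-compose.yml — a sketch; image names are placeholders
services:
  app:
    image: myapp:1.0              # placeholder image
    networks: [backend]           # attached to the custom bridge below
  metrics:
    image: my-exporter:latest     # placeholder image
    network_mode: host            # shares the host network stack (host driver)
  batch:
    image: myjob:1.0              # placeholder image
    network_mode: none            # loopback only (none driver)

networks:
  backend:
    driver: bridge                # default single-host driver
```

Note that a service using `network_mode: host` or `network_mode: none` cannot also publish ports or join named networks.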

Working with Docker Networks

Managing networks with the Docker CLI:

bash
# ---- List and inspect networks ----
docker network ls                          # List all networks
docker network inspect bridge              # Inspect the default bridge network
docker network inspect myapp-network       # Inspect a custom network

# ---- Create custom networks ----
docker network create myapp-network                    # Default bridge driver
docker network create --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  myapp-network

# ---- Connect containers to networks ----
docker run -d --name web --network myapp-network nginx
docker run -d --name db  --network myapp-network -e POSTGRES_PASSWORD=example postgres   # postgres requires a password to start

# Connect a running container to a network
docker network connect myapp-network existing-container

# Disconnect a container from a network
docker network disconnect myapp-network existing-container

# ---- DNS resolution between containers ----
# Containers on the same custom network resolve by name:
docker exec web getent hosts db   # 'db' resolves to the db container's IP
docker exec web ping -c 1 db      # works if ping is installed in the web image

# ---- Remove networks ----
docker network rm myapp-network
docker network prune              # Remove all unused networks
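The custom-network setup above can also be expressed as a Compose file sketch (the POSTGRES_PASSWORD value is a placeholder):

```yaml
# docker-compose.yml — mirrors the CLI commands above
services:
  web:
    image: nginx
    networks: [myapp-network]
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example   # placeholder; the postgres image requires one
    networks: [myapp-network]

networks:
  myapp-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
```

With this file, `docker compose up -d` creates the network, attaches both containers, and `web` can resolve `db` by name.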

Port Mapping and Exposure

EXPOSE in a Dockerfile documents which port the container listens on (metadata only). -p in docker run actually publishes the port to the host:

  • -p 8080:80 — Map host port 8080 to container port 80
  • -p 127.0.0.1:8080:80 — Bind only to localhost (more secure)
  • -p 80 — Map container port 80 to a random host port
  • -P — Publish all EXPOSE'd ports to random host ports
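The same mapping variants can be written in a Compose file sketch:

```yaml
# Port-publishing variants in Compose (sketch)
services:
  web:
    image: nginx
    ports:
      - "8080:80"             # host 8080 -> container 80
      - "127.0.0.1:8081:80"   # bind only to localhost
      - "80"                  # random host port -> container 80
```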

In production, avoid exposing databases directly. Use an internal network and only expose the application tier through a reverse proxy.

bash
# Expose web on port 80, database only on internal network
docker network create internal-net

docker run -d \
  --name web \
  --network internal-net \
  -p 80:3000 \
  myapp:1.0

# No -p flag: database NOT accessible from host
docker run -d \
  --name db \
  --network internal-net \
  -e POSTGRES_PASSWORD=example \
  postgres:16

# Web container can reach db by hostname 'db'
# Host can reach web on port 80
# Host CANNOT reach db directly (secure!)
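The same tiered pattern can be sketched in Compose, using two networks and marking the backend one `internal` so it has no external connectivity (the password value is a placeholder):

```yaml
# Two-tier network layout (sketch)
services:
  web:
    image: myapp:1.0
    ports:
      - "80:3000"             # only the web tier is published
    networks: [frontend, backend]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder
    networks: [backend]            # reachable only from 'web'

networks:
  frontend: {}
  backend:
    internal: true            # no traffic in or out of the host
```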

Overlay Networks for Multi-Host Communication

In a multi-host cluster such as Docker Swarm, containers run on different physical hosts. Overlay networks create a virtual network that spans those hosts, allowing containers to communicate as if they were on the same machine.

Overlay networks use VXLAN (Virtual Extensible LAN) to encapsulate container traffic in UDP packets that travel between hosts.
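In a Swarm stack file, an overlay network is declared like any other network; a minimal sketch (service and network names are placeholders, and the stack must be deployed to an initialized swarm with `docker stack deploy`):

```yaml
# stack.yml — deploy with: docker stack deploy -c stack.yml myapp
services:
  web:
    image: myapp:1.0
    networks: [app-overlay]
    deploy:
      replicas: 3             # replicas may land on different hosts

networks:
  app-overlay:
    driver: overlay           # VXLAN-backed, spans all swarm nodes
```

All three replicas can reach each other (and be reached by service name) regardless of which node they run on.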

For Kubernetes, the Container Network Interface (CNI) plugins (Calico, Flannel, Weave) implement similar cross-node networking with additional features like network policies for fine-grained access control.

Key Takeaways

  • Docker uses Linux network namespaces to give each container an isolated network stack.
  • The bridge driver (default) isolates containers on a single host; overlay spans multiple hosts.
  • Containers on the same custom network resolve each other by container/service name via built-in DNS.
  • Use -p to publish ports to the host; never expose databases directly — keep them on internal networks.
  • Overlay networks with VXLAN enable container communication across multiple physical hosts in a cluster.
