1. What is Docker?
2. Docker Internals – How It Works
Host OS
Linux Container
Kernel Features Used by Docker
3. Container Lifecycle in Docker
4. Docker Layered Architecture
4.1 Docker CLI
4.2 Docker Daemon (dockerd)
4.3 containerd
4.4 runc
4.5 Docker Storage & Networking Drivers
4.6 Registry
5. Inside a Docker Container
Let’s try out a simple example:
Docker Networking Deep Dive
1. Bridge Network (default)
Root Network Namespace (Host Side)
Container Network Namespace
Connection Path
Key takeaway:
Role of docker0
How it works
Traffic flow with multiple Docker containers (example)
Key facts
2. Host Network
Comparing Bridge vs. Host Networking
Extra deep‑dive commands (optional)
Key network commands
Let’s check the difference between bridge and host networking practically.
Key takeaway
3. Overlay Network
Test Overlay network using Vagrant
4. Macvlan Network
5. None Network
1. What is Docker?
Docker is a platform for developing, shipping, and running applications inside containers. Containers package application code, dependencies, and configurations into a single, portable unit that can run reliably across different environments.
Unlike virtual machines (VMs), containers share the host’s operating system kernel rather than requiring a separate OS for each application. This makes them lightweight, fast to start, and efficient in resource usage.
2. Docker Internals – How It Works
From a Linux perspective, a Docker container is just a process running on the host, isolated from other processes via kernel features, and sharing the host’s Linux kernel.
The following diagram illustrates Docker internals from a Linux perspective:
Host OS
The base operating system running on your physical or virtual machine. It manages hardware, kernel space, and system calls.
Linux Container
The container includes:
- Application Code: The actual business logic.
- Dependencies: Required libraries, packages, and OS utilities.
- Processes: e.g., process 1, process 2 — these run inside the container namespace.
Kernel Features Used by Docker
1. Namespaces – Provide isolation for: pid, net, mnt, ipc, uts, user.
2. Control Groups (cgroups) – Limit and measure resource usage (CPU, memory, I/O).
3. Union File Systems (OverlayFS, AUFS, etc.) – Layers that make Docker images efficient and immutable.
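A quick way to see these features for a running container (a minimal sketch; the container name demo, the nginx image, and the cgroup v2 path are assumptions that may differ on your system):
$ docker run -d --name demo --memory 256m nginx
# Namespaces: list the namespaces of the container's init process
$ PID=$(docker inspect -f '{{.State.Pid}}' demo)
$ sudo ls -l /proc/$PID/ns          # pid, net, mnt, ipc, uts, user, ...
# cgroups: the memory limit appears under the container's cgroup (cgroup v2 with the systemd driver; paths differ on cgroup v1)
$ cat /sys/fs/cgroup/system.slice/docker-$(docker inspect -f '{{.Id}}' demo).scope/memory.max
# Union filesystem: the image layers are merged into the container's root filesystem
$ docker inspect -f '{{.GraphDriver.Name}}' demo    # typically overlay2
$ mount | grep overlay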
3. Container Lifecycle in Docker
1. The Docker client sends a request to the Docker daemon.
2. The daemon pulls the image from a registry (if it is not available locally).
3. The image is unpacked into a container filesystem.
4. The container process starts inside isolated namespaces.
5. Networking is set up (via the docker0 bridge or another mode).
6. The process runs until stopped; the container can be paused, resumed, or removed.
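The same lifecycle can be walked through from the CLI (a minimal sketch; the alpine image and the name lifecycle-demo are only illustrative):
$ docker pull alpine                                     # steps 1-2: client → daemon → registry
$ docker create --name lifecycle-demo alpine sleep 300   # step 3: image unpacked into a container filesystem
$ docker start lifecycle-demo                            # steps 4-5: process starts in its namespaces, networking is wired up
$ docker pause lifecycle-demo                            # freezes all processes via the cgroup freezer
$ docker unpause lifecycle-demo
$ docker stop lifecycle-demo                             # SIGTERM, then SIGKILL after the grace period
$ docker rm lifecycle-demo                               # step 6: remove the stopped container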
4. Docker Layered Architecture
Docker follows a layered architecture that separates concerns into different components for maintainability, extensibility, and modularity. The main layers are:
4.1 Docker CLI
The Docker Command Line Interface (CLI) is the user-facing tool used to interact with Docker. Commands like `docker run`, `docker build`, and `docker ps` are sent to the Docker daemon.
4.2 Docker Daemon (dockerd)
`dockerd` is the persistent background process that manages Docker objects (containers, images, volumes, networks). It listens for API requests via the Docker REST API and communicates with `containerd` to manage container lifecycles.
4.3 containerd
`containerd` is an industry-standard container runtime that handles container execution and supervision. It is responsible for pulling container images, unpacking them, managing snapshots, and starting/stopping containers.
4.4 runc
`runc` is a lightweight, portable container runtime that actually creates and runs containers using Linux kernel features such as namespaces and cgroups. It follows the Open Container Initiative (OCI) runtime specification.
4.5 Docker Storage & Networking Drivers
These drivers abstract the underlying OS storage and networking functionalities. Storage drivers (OverlayFS, AUFS, Btrfs, ZFS) manage layered image and container filesystems. Networking drivers (bridge, host, overlay, macvlan) manage container connectivity.
4.6 Registry
Docker images are stored in registries such as Docker Hub or private registries. The Docker daemon can push and pull images using the Registry HTTP API V2.
The layered architecture of Docker can be visualized as follows:
[Diagram: Docker CLI → dockerd → containerd → runc → Kernel]
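Most of these layers are visible as ordinary processes and sockets on a Linux host (a minimal sketch; exact process names and socket paths can vary by Docker version and distribution):
$ docker version                                               # client (CLI) and server (Engine, containerd, runc) versions
$ ps -e -o pid,comm | grep -E 'dockerd|containerd'             # dockerd, containerd, and per-container shim processes
$ ls -l /var/run/docker.sock /run/containerd/containerd.sock   # the API sockets the layers talk over
$ docker info --format '{{.DefaultRuntime}}'                   # usually runc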
5. Inside a Docker Container
A Docker container is not a miniature virtual machine. It is an isolated process running on the host's Linux kernel, packaged with its dependencies, configuration, and environment. The main components inside a running container are:
Application Code – The primary application logic or services.
Dependencies – Required packages, libraries, and OS utilities.
Processes – The main process (PID 1) and any child processes running inside the container.
Isolated Filesystem – Built from the container image layers plus a writable container layer.
Isolated Network Stack – Own network interfaces (e.g., lo, eth0), routes, and firewall (iptables) rules.
Isolated IPC & UTS – Separate shared memory segments, message queues, and unique hostname.
Environment Variables & Config – Supplied at runtime to control application behavior.
Mounted Volumes – Bind mounts or named volumes connecting to host storage.
The diagram below illustrates these components within the boundaries of a Linux container:
Let’s try out a simple example:
Please clone this repository first.
Then execute the command below:
$ ./run.sh
This will build a Docker image and run a container locally.
Then run the second command to follow the container’s logs:
$ docker logs -f myapp-container
== Inside container ==
WORKDIR: /app
MY_ENV: HelloFromHost
Contents of /data:
total 0
In the script below, we pass an environment variable to the container and mount a directory on the host machine as a container volume.
https://raw.githubusercontent.com/dhanuka84/learning-docker/refs/heads/main/run.sh
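For reference, a script along these lines could look like the sketch below (the image name myapp and the host directory ./data are assumptions; the actual run.sh in the repository is the source of truth):
#!/usr/bin/env bash
set -euo pipefail

# Build the image from the Dockerfile in this repository
docker build -t myapp .

# Remove any previous container with the same name
docker rm -f myapp-container 2>/dev/null || true

# Run it with an environment variable and a host directory mounted at /data
docker run -d \
  --name myapp-container \
  -e MY_ENV=HelloFromHost \
  -v "$(pwd)/data:/data" \
  myapp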
Docker Networking Deep Dive
This document explains the internal networking model of Docker, covering the bridge, host, none, overlay, and macvlan network drivers. It also describes how containers communicate with each other and with the external world using Linux networking primitives like network namespaces, virtual Ethernet pairs (veth), and the docker0 bridge.
1. Bridge Network (default)
The bridge network is Docker's default networking mode for standalone containers. It uses a Linux bridge (docker0) on the host to connect multiple containers to the same Layer 2 domain. Each container gets its own network namespace with its own interfaces, routes, and iptables rules. A veth pair connects the container's eth0 interface to the host's docker0 bridge.
Host eth0 → docker0 (bridge) → vethA (host end) ⇆ vethB (container end) → container eth0
Diagram: Two containers connected via docker0 bridge using veth pairs.
Root Network Namespace (Host Side)
The host has its own eth0 (physical or virtual NIC) connected to the outside network.
It also has a virtual Ethernet pair (veth pair) that acts like a cable between namespaces.
One end of the veth pair lives in the root network namespace (often named something like vethXYZ).
Container Network Namespace
The other end of that veth pair is placed inside the container’s network namespace and renamed eth0 inside the container.
This is the interface the container uses to communicate with the host and the outside world.
The lo interface inside the container is just the container’s own loopback device — used by processes inside the container to talk to each other (localhost).
Connection Path
Host’s eth0 → kernel routing/iptables → veth pair → Container’s eth0
lo is never used for external connections — only for internal communication within the same namespace.
Key takeaway:
eth0 in the host namespace is connected to the LAN/Internet, while the container’s eth0 is connected to the host via a veth pair attached to the docker0 bridge. The lo inside a container is unrelated to the host’s interfaces.
Role of docker0
Purpose: It connects containers to each other and to the host.
Created automatically: When Docker starts, it creates docker0 unless you configure a custom network.
Interface type: It’s a Linux bridge (br0-like device), implemented in kernel space.
How it works
When you start a container on the default bridge network:
Docker creates a veth pair (two virtual network interfaces linked together).
One end stays in the container’s network namespace (renamed to eth0 inside the container).
The other end stays in the host’s network namespace and is attached to docker0.
docker0 acts like a switch:
Forwards traffic between containers.
Forwards traffic between containers and the host.
docker0 has its own IP:
Default: 172.17.0.1/16 (but configurable).
Acts as the gateway for containers on the bridge network (see the quick check below).
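A quick check (assuming the default 172.17.0.0/16 address pool and the alpine image):
$ docker run --rm alpine ip route     # the default route points at docker0's IP (172.17.0.1 by default)
$ ip addr show docker0                # on the host: the same address, assigned to the bridge itself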
Traffic flow with multiple Docker containers (example)
[container1 eth0] ⇆ veth1 ⇆ docker0 ⇆ veth2 ⇆ [container2 eth0]
                              |
                              +-- Host network stack
                                    |
                                    +-- NAT/iptables to external network
Key facts
Exists only on the host.
Handles Layer 2 (Ethernet) switching inside the host.
Works with iptables NAT rules to give containers Internet access.
If you create a user-defined bridge network in Docker, a separate Linux bridge (not docker0) will be created for that network, as shown in the sketch below.
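A minimal sketch of that behaviour (the network name mynet and the nginx image are only examples):
$ docker network create mynet                   # creates a new Linux bridge named br-<network id prefix>
$ ip -o link show type bridge                   # docker0 plus the new br-... bridge
$ docker run -d --name web --network mynet nginx
$ docker network inspect mynet --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
$ docker rm -f web && docker network rm mynet   # clean up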
2. Host Network
In host network mode, the container shares the host's network namespace. It uses the host's network interfaces directly, bypassing docker0 and veth pairs. This mode provides the best performance but removes network isolation between containers and the host.
Container process → host eth0
(no veth pairs, no docker0)
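A quick way to see the difference (the alpine image is only illustrative):
$ docker run --rm --network host alpine ip addr   # shows the host's own interfaces (physical NICs, docker0, veths)
$ docker run --rm alpine ip addr                  # bridge mode for comparison: only lo and a container-only eth0
# Note: -p/--publish is ignored in host mode, since the process binds directly on the host's interfaces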
Comparing Bridge vs. Host Networking
Extra deep‑dive commands (optional)
See Docker’s NAT rules:
sudo iptables -t nat -L -n -v | grep -E 'DOCKER|MASQUERADE'
Inspect the default bridge network:
docker network inspect bridge | jq '.[0].IPAM, .[0].Containers'
Bridge FDB (MAC learning table):
sudo bridge fdb show br docker0
Confirm peer indices in one line:
nsenter --net=/proc/$BRIDGE_PID/ns/net ip -o link show eth0
ip -o link show vethcdcc4bd
Key network commands
1. nsenter --net=/proc/$BRIDGE_PID/ns/net ip addr
Purpose:
Run ip addr inside the container’s network namespace from the host.
Breakdown:
nsenter — Linux tool to enter another process’s namespace(s).
--net=/proc/$BRIDGE_PID/ns/net — tells nsenter to join the network namespace of the process with PID $BRIDGE_PID.
/proc/<PID>/ns/net is a special symlink representing the process’s network namespace.
ip addr — lists all interfaces and their IP addresses in that namespace.
Why it matters:
Docker containers have their own network namespaces in bridge mode. This command lets you inspect the container’s interfaces without using docker exec.
2. ip addr show docker0
Purpose:
Display the IP and MAC address details of the docker0 bridge interface on the host.
Breakdown:
ip addr show <iface> — prints interface configuration.
docker0 — the virtual bridge Docker creates by default for containers on the bridge network.
Why it matters:
The bridge IP (usually 172.17.0.1) is the default gateway for all containers on the default bridge network.
3. brctl show docker0 (optional, older tool)
Purpose:
Show which interfaces are connected to the docker0 bridge.
Breakdown:
brctl — part of the older bridge-utils package (superseded by the bridge and ip link commands from iproute2).
show docker0 — lists members (e.g., veth*) connected to that bridge.
Why it matters:
Each container on the bridge network has a veth pair; the host side connects to docker0. This shows which veths belong to which containers.
4. ls -l /proc/$PID/ns/net
Purpose:
View the network namespace inode number for a given process.
Breakdown:
/proc/$PID/ns/net — symbolic link to the process’s network namespace.
The link target looks like net:[4026532421], where the number is the namespace’s inode ID.
Why it matters:
If two processes have the same inode number, they are in the same network namespace.
Bridge mode → containers have different inode from the host.
Host mode → container shares same inode as host.
5. ip link
Purpose:
List all network interfaces and their link-level info on the host.
Why it matters:
Shows veth interfaces, bridges, and physical NICs. The @ifX notation points to the peer’s ifindex in another namespace.
6. ip route / nsenter --net=... ip route
Purpose:
Display the routing table either in the host or in the container namespace.
Why it matters:
In bridge mode → default route goes via docker0’s IP.
In host mode → default route is same as host’s normal route.
7. docker inspect -f '{{.State.Pid}}' <container>
Purpose:
Get the container’s init process PID on the host.
Why it matters:
The PID is required to access /proc/$PID/ns/net for nsenter.
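Putting these together, a typical inspection session looks something like this (the container name bridge-demo comes from the example script below):
$ PID=$(docker inspect -f '{{.State.Pid}}' bridge-demo)
$ sudo ls -l /proc/$PID/ns/net                      # note the netns inode...
$ sudo ls -l /proc/1/ns/net                         # ...and compare it with the host's
$ sudo nsenter --net=/proc/$PID/ns/net ip addr      # interfaces as the container sees them
$ sudo nsenter --net=/proc/$PID/ns/net ip route     # default route via docker0's IP
$ ip -o link show type veth                         # host-side veth ends, attached to docker0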
Let’s check the difference between bridge and host networking practically.
Execute the shell script below with sudo. The script is available in the repository at https://raw.githubusercontent.com/dhanuka84/learning-docker/refs/heads/main/bridge-example.sh.
$ sudo ./bridge-example.sh
Sample output (data is masked):
=== Cleaning up old containers ===
bridge-demo
host-demo
=== Starting container in default bridge network ===
[CONTAINER_ID_REDACTED]
=== Starting container in host network ===
[CONTAINER_ID_REDACTED]
=== Showing Docker bridge (docker0) info ===
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
inet [IP_REDACTED]/[MASKLEN] brd [IP_REDACTED] scope global docker0
valid_lft forever preferred_lft forever
inet6 [IPV6_REDACTED]/[MASKLEN] scope link
valid_lft forever preferred_lft forever
bridge name bridge id STP enabled interfaces
docker0 8000.[MAC_REDACTED] no veth[MAC_REDACTED]
=== Finding network namespace of bridge-demo container ===
bridge-demo PID: 1061367
lrwxrwxrwx 1 root root 0 Ogos 14 10:21 /proc/1061367/ns/net -> 'net:[NETNS_REDACTED]'
=== Interfaces inside bridge-demo namespace ===
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if78: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet [IP_REDACTED]/[MASKLEN] brd [IP_REDACTED] scope global eth0
valid_lft forever preferred_lft forever
=== veth pair mapping for bridge-demo ===
Host veth: veth[MAC_REDACTED]@if2
=== Network namespace of host-demo container ===
host-demo PID: 1061453
lrwxrwxrwx 1 root root 0 Ogos 14 10:21 /proc/1061453/ns/net -> 'net:[NETNS_REDACTED]'
=== Interfaces inside host-demo namespace (same as host) ===
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
4: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
inet [IP_REDACTED]/[MASKLEN] brd [IP_REDACTED] scope global dynamic noprefixroute wlp2s0
valid_lft 82110sec preferred_lft 82110sec
inet6 [IPV6_REDACTED]/[MASKLEN] scope link noprefixroute
valid_lft forever preferred_lft forever
5: br-[BRIDGE_ID_REDACTED]: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
inet [IP_REDACTED]/[MASKLEN] brd [IP_REDACTED] scope global br-[BRIDGE_ID_REDACTED]
valid_lft forever preferred_lft forever
inet6 [IPV6_REDACTED]/[MASKLEN] scope global nodad
valid_lft forever preferred_lft forever
inet6 [IPV6_REDACTED]/[MASKLEN] scope link
valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
inet [IP_REDACTED]/[MASKLEN] brd [IP_REDACTED] scope global docker0
valid_lft forever preferred_lft forever
inet6 [IPV6_REDACTED]/[MASKLEN] scope link
valid_lft forever preferred_lft forever
30: br-[BRIDGE_ID_REDACTED]: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
inet [IP_REDACTED]/[MASKLEN] brd [IP_REDACTED] scope global br-[BRIDGE_ID_REDACTED]
valid_lft forever preferred_lft forever
inet6 [IPV6_REDACTED]/[MASKLEN] scope link
valid_lft forever preferred_lft forever
75: enxa84a63c60bbc: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff
78: veth[MAC_REDACTED]@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether [MAC_REDACTED] brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 [IPV6_REDACTED]/[MASKLEN] scope link tentative
valid_lft forever preferred_lft forever
=== Routing table for bridge-demo ===
default via [IP_REDACTED] dev eth0
[SUBNET_REDACTED] dev eth0 proto kernel scope link src [IP_REDACTED]
=== Routing table for host-demo ===
default via [IP_REDACTED] dev wlp2s0 proto dhcp metric 600
[SUBNET_REDACTED] dev br-[BRIDGE_ID_REDACTED] scope link metric 1000 linkdown
[SUBNET_REDACTED] dev docker0 proto kernel scope link src [IP_REDACTED]
[SUBNET_REDACTED] dev docker0 proto kernel scope link src [IP_REDACTED] metric 427
[SUBNET_REDACTED] dev br-[BRIDGE_ID_REDACTED] proto kernel scope link src [IP_REDACTED] linkdown
[SUBNET_REDACTED] dev wlp2s0 proto kernel scope link src [IP_REDACTED] metric 600
[SUBNET_REDACTED] dev br-[BRIDGE_ID_REDACTED] proto kernel scope link src [IP_REDACTED] linkdown
[SUBNET_REDACTED] dev br-[BRIDGE_ID_REDACTED] proto kernel scope link src [IP_REDACTED] metric 425 linkdown
=== Cleanup ===
bridge-demo
host-demo
Key takeaway
To compare the routing tables yourself:
$ sudo nsenter --net=/proc/$PID/ns/net ip route
Bridge Mode: container isolated in its own network namespace, connected to host via veth pair into docker0 bridge, NAT to external network.
Host Mode: container shares host network namespace, uses host NIC directly, no NAT, no docker0 path.
3. Overlay Network
Overlay networks are used to connect containers across multiple Docker hosts in a swarm or cluster. They use VXLAN encapsulation to create a Layer 2 overlay on top of Layer 3 networks.
Overlay networks do not use docker0 (the default bridge network’s switch).
For each overlay network, Docker creates:
a per‑overlay Linux bridge on every participating host (name like br-<overlayID>), and
a VXLAN device (name like vxlan<id>, UDP 4789) bound to the host’s underlay NIC.
So the packet path for overlay is:
container eth0 → vethB → br-<overlayID> (host bridge for that overlay) → vxlan<id> → underlay IP network → vxlan<id> (peer host) → br-<overlayID> → vethB → container eth0
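The Vagrant example below builds exactly this path by hand with plain iproute2 commands. Condensed, the per-host setup looks roughly like this (a sketch only; the names vxlan42, br0, nsA/vethA/vpeerA and the addresses are taken from that example, not created by Docker):
# On host A (underlay IP 192.168.48.11, peer 192.168.48.12); host B mirrors this with the roles swapped
sudo ip link add vxlan42 type vxlan id 42 remote 192.168.48.12 dev enp0s8 dstport 4789
sudo ip link add br0 type bridge
sudo ip link set vxlan42 master br0

# A "container": a network namespace wired to br0 through a veth pair
sudo ip netns add nsA
sudo ip link add vethA type veth peer name vpeerA
sudo ip link set vethA netns nsA
sudo ip link set vpeerA master br0
sudo ip netns exec nsA ip addr add 10.88.0.1/24 dev vethA

# Bring everything up; traffic to 10.88.0.x is then carried inside VXLAN (UDP 4789) between the two hosts
for dev in br0 vxlan42 vpeerA; do sudo ip link set "$dev" up; done
sudo ip netns exec nsA ip link set lo up
sudo ip netns exec nsA ip link set vethA up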
Test Overlay network using Vagrant
Prerequisites:
Vagrant
VirtualBox
Clone this GitHub repo and then follow the steps below.
Find the VirtualBox host-only network IP (vboxnet0) and update the Vagrantfile configuration below accordingly.
$ ifconfig
vboxnet0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.48.1 netmask 255.255.255.0 broadcast 192.168.56.255
inet6 fe80::800:27ff:fe00:0 prefixlen 64 scopeid 0x20<link>
ether 0a:00:27:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 297 bytes 48821 (48.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Boot up two virtual machines called hostA and hostB
$ make up
Verify the overlay network
$ make verify
bash scripts/verify.sh
+ echo '== Host A checks =='
== Host A checks ==
+ vagrant ssh hostA -c 'ip -d link show vxlan42 || true'
5: vxlan42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 9a:d1:80:fd:3d:5b brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
vxlan id 42 remote 192.168.48.12 dev enp0s8 srcport 0 0 dstport 4789 ttl auto ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx
bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.56:7:95:2a:64:81 designated_root 8000.56:7:95:2a:64:81 hold_timer 0.00 message_age_timer 0.00 forward_delay_timer 0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
+ vagrant ssh hostA -c 'bridge link | grep br0 || true'
5: vxlan42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master br0 state forwarding priority 32 cost 4
6: vpeerA@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
+ vagrant ssh hostA -c 'sudo ip netns exec nsA ip -o -4 addr show vethA || true'
7: vethA inet 10.88.0.1/24 scope global vethA\ valid_lft forever preferred_lft forever
+ echo
+ echo '== Host B checks =='
== Host B checks ==
+ vagrant ssh hostB -c 'ip -d link show vxlan42 || true'
5: vxlan42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether be:f9:14:2e:fe:d4 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 65535
vxlan id 42 remote 192.168.48.11 dev enp0s8 srcport 0 0 dstport 4789 ttl auto ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx
bridge_slave state forwarding priority 32 cost 4 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.92:3c:7f:ce:f1:37 designated_root 8000.92:3c:7f:ce:f1:37 hold_timer 0.00 message_age_timer 0.00 forward_delay_timer 0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
+ vagrant ssh hostB -c 'bridge link | grep br0 || true'
5: vxlan42: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master br0 state forwarding priority 32 cost 4
6: vpeerB@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 2
+ vagrant ssh hostB -c 'sudo ip netns exec nsB ip -o -4 addr show vethB || true'
7: vethB inet 10.88.0.2/24 scope global vethB\ valid_lft forever preferred_lft forever
+ echo
+ echo '== Ping across VXLAN (nsA -> nsB) =='
== Ping across VXLAN (nsA -> nsB) ==
+ vagrant ssh hostA -c 'sudo ip netns exec nsA ping -c 3 10.88.0.2'
PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data.
64 bytes from 10.88.0.2: icmp_seq=1 ttl=64 time=0.501 ms
64 bytes from 10.88.0.2: icmp_seq=2 ttl=64 time=0.597 ms
64 bytes from 10.88.0.2: icmp_seq=3 ttl=64 time=0.661 ms
--- 10.88.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2139ms
rtt min/avg/max/mdev = 0.501/0.586/0.661/0.065 ms
Explanation:
As highlighted, we can see each main component from the diagram above in the output.
hostA = 192.168.48.11
hostB = 192.168.48.12
The verify script then runs ping from nsA (the container namespace on hostA) to nsB on hostB, using the container IP (10.88.0.2).
+ echo '== Ping across VXLAN (nsA -> nsB) =='
== Ping across VXLAN (nsA -> nsB) ==
+ vagrant ssh hostA -c 'sudo ip netns exec nsA ping -c 3 10.88.0.2'
PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data.
[ container (nsA) eth0 → vethA ] → vpeerA (hostA) → br0 → vxlan42 ==(UDP/4789, VNI 42 over enp0s8)==> vxlan42 (hostB) → br0 → vpeerB (hostB) → [ vethB → container (nsB) eth0 ]
4. Macvlan Network
Macvlan networks assign a unique MAC address to each container, making it appear as a physical device on the network. Containers are directly accessible on the physical network without NAT.
container eth0 (unique MAC) → physical switch → LAN
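A minimal sketch of creating one (the parent interface eth0 and the 192.168.1.0/24 subnet must match your physical LAN; the values here are only illustrative):
$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 lan-net
$ docker run -d --name web --network lan-net --ip 192.168.1.50 nginx
# The container answers on 192.168.1.50 with its own MAC address, directly on the LAN
# Caveat: by default the host itself cannot reach macvlan containers without an extra macvlan sub-interface on the host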
5. None Network
In none network mode, the container has its own network namespace but no network interfaces configured (other than loopback). This is used when you want to set up networking manually.
container lo (loopback only)
(no eth0, no docker0, no external connectivity)
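A quick check (the alpine image is only illustrative):
$ docker run --rm --network none alpine ip addr            # only the loopback interface is present
$ docker run --rm --network none alpine ping -c 1 8.8.8.8  # fails: there is no route to the outside world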