Question
k8s version: v1.10.4
flannel version: v0.10.0
docker version v1.12.6
When I use the command `brctl show` on a node, it shows the following:
```
[root@node03 tmp]# brctl show
bridge name     bridge id           STP enabled   interfaces
cni0            8000.0a580af40501   no            veth39711246
                                                  veth591ea0bf
                                                  veth5b889fed
                                                  veth61dfc48a
                                                  veth6ef58804
                                                  veth75f5ef36
                                                  vethc162dc8a
docker0         8000.0242dfd605c0   no
```
It shows that the vethXXX devices are bound to the network bridge named `cni0`, but when I use the command `ip addr`, it shows:
```
[root@node03 tmp]# ip addr | grep veth
6: veth61dfc48a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
7: veth591ea0bf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
9: veth6ef58804@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
46: vethc162dc8a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
55: veth5b889fed@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
61: veth75f5ef36@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
78: veth39711246@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP
```
These veth devices are all bound to `if3`, but `if3` is not cni0; it is `docker0`:
```
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
```
It seems that the network bridge `docker0` is useless, but `ip addr` shows that all the veth devices are bound to it. What role does the network bridge `docker0` play in k8s with flannel? Thanks.
Answer 1:
There are two network models at play here: Docker's and Kubernetes'.
Docker model
By default, Docker uses host-private networking. It creates a virtual bridge, called `docker0` by default, and allocates a subnet from one of the private address blocks defined in RFC 1918 for that bridge. For each container that Docker creates, it allocates a virtual Ethernet device (called `veth`) which is attached to the bridge. The veth is mapped to appear as `eth0` in the container, using Linux namespaces. The in-container `eth0` interface is given an IP address from the bridge's address range.

The result is that Docker containers can talk to other containers only if they are on the same machine (and thus the same virtual bridge). Containers on different machines cannot reach each other; in fact they may end up with the exact same network ranges and IP addresses.
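The same plumbing can be reproduced by hand with plain `ip` commands; here is a minimal sketch (the bridge name, namespace name, and addresses are made up for illustration):

```sh
# Create a private bridge playing the role of docker0
ip link add demo0 type bridge
ip addr add 172.30.0.1/24 dev demo0
ip link set demo0 up

# Create a veth pair: one end stays on the host and is attached to the
# bridge, the other is moved into a namespace as the "container's" eth0
ip netns add ctr
ip link add veth-host type veth peer name eth0 netns ctr
ip link set veth-host master demo0 up

# Give the container end an address from the bridge's range
ip -n ctr link set eth0 up
ip -n ctr addr add 172.30.0.2/24 dev eth0
ip -n ctr route add default via 172.30.0.1
```

This is the same shape as the `brctl show` output in the question, just with Docker normally doing the wiring to `docker0`.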
Kubernetes model
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies); a quick way to spot-check them is sketched after the list:
- all containers can communicate with all other containers without NAT
- all nodes can communicate with all containers (and vice-versa) without NAT
- the IP that a container sees itself as is the same IP that others see it as
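A rough spot-check on a live cluster, assuming two running pods with the hypothetical names `pod-a` and `pod-b` on different nodes (the IP is illustrative, and the images must ship `ping` and `ip`):

```sh
# Pod IPs are routable cluster-wide, with no NAT in between
kubectl get pods -o wide                      # note each pod's IP and node

# From pod-a, reach pod-b's IP directly
kubectl exec pod-a -- ping -c 3 10.244.2.15

# The IP a pod sees for itself is the IP others see for it
kubectl exec pod-b -- ip addr show eth0
```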
Kubernetes applies IP addresses at the `Pod` scope: containers within a `Pod` share their network namespace, including their IP address. This means that containers within a `Pod` can all reach each other's ports on `localhost`. This does imply that containers within a `Pod` must coordinate port usage, but this is no different from processes in a VM. This is called the "IP-per-pod" model. It is implemented, using Docker, as a "pod container" which holds the network namespace open while "app containers" (the things the user specified) join that namespace with Docker's `--net=container:<id>` function.

As with Docker, it is possible to request host ports, but this is reduced to a very niche operation. In this case a port will be allocated on the host `Node` and traffic will be forwarded to the `Pod`. The `Pod` itself is blind to the existence or non-existence of host ports.
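The namespace-joining trick mentioned above can be demonstrated with Docker alone; a rough sketch (the container names are arbitrary, and the pause image tag matches the k8s 1.10 era but may differ on your install):

```sh
# Start a "pod container" whose only job is to hold the network namespace open
docker run -d --name pause k8s.gcr.io/pause:3.1

# "App containers" join its namespace: they share one IP and reach each
# other over localhost
docker run -d --name web --net=container:pause nginx
docker run --rm --net=container:pause busybox wget -qO- http://127.0.0.1
```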
In order to integrate the platform with the underlying network infrastructure, Kubernetes provides a plugin specification called the Container Network Interface (CNI). As long as the fundamental Kubernetes requirements are met, vendors can implement the network stack as they like, typically using overlay networks to support multi-subnet and multi-AZ clusters.
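You can see this integration point directly on a flannel node: kubelet reads the CNI configuration from `/etc/cni/net.d/`. A typical config for flannel of that era looks roughly like this (the file name and field values vary per install):

```sh
cat /etc/cni/net.d/10-flannel.conf
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}
```

The `flannel` CNI plugin delegates to the standard CNI `bridge` plugin, which is what creates and manages the `cni0` bridge seen in the question and attaches each pod's veth to it.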
Below is shown how an overlay network is implemented by Flannel, which is a popular CNI plugin.
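With flannel's default VXLAN backend, each node gets a `flannel.1` VXLAN device and its own slice of the pod network; this also explains the `mtu 1450` on the veths above (1500 minus the VXLAN encapsulation overhead). Some things to inspect on a node (the subnet values are illustrative):

```sh
# flannel.1 is a VXLAN device that encapsulates pod traffic between nodes
ip -d link show flannel.1

# flannel records the node's subnet here; the CNI bridge plugin sizes
# cni0 and the pod MTU from it
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.5.1/24
# FLANNEL_MTU=1450

# Local pods are reached via cni0; remote pod subnets go out flannel.1
ip route | grep -E 'cni0|flannel.1'
```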
You can read more about other CNIs here. The Kubernetes approach is explained in the Cluster Networking docs. I also recommend reading Kubernetes Is Hard: Why EKS Makes It Easier for Network and Security Architects, which explains how Flannel works, as well as another article from Medium.
Hope this answers your question.
Source: https://stackoverflow.com/questions/54102888/what-role-does-network-bridge-docker0-play-in-k8s-with-flannel