https://blog.51cto.com/youdong/1963416
Recommendation: use mode 4 with the layer 3+4 hash policy, and configure a LAG (link aggregation group) on the switch.
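As a rough sketch, that recommendation corresponds to the following bonding driver options (bond0 is an assumed interface name; xmit_hash_policy=layer3+4 hashes on IP addresses and ports; the matching switch-side LACP/LAG commands are vendor-specific and not shown):

options bond0 mode=4 miimon=100 xmit_hash_policy=layer3+4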
1. The traditional bond approach
(1) Overview of the main bond modes
- mode 0
load balancing (round-robin) mode; requires support on the switch side (a static LAG). It load-balances across multiple ports and provides port redundancy; all slave interfaces share the same MAC address.
- mode 1
active-backup mode: only one NIC carries traffic at a time while the others stand by. (The driver is not limited to two slaves, though one active plus one backup is the typical setup.) A recovered interface does not preempt the active one by default.
- mode 4
dynamic link aggregation negotiated per IEEE 802.3ad; the switch must have LACP enabled and be configured in active mode.
- mode 5 and mode 6
balance-tlb and balance-alb, adaptive load-balancing modes that fail over like mode 1 but need no switch support; they are not commonly used. (A quick way to confirm which mode a bond is running follows this list.)
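A loaded bond reports its current mode through sysfs; a minimal check, assuming an existing bond0 running mode 0:

cat /sys/class/net/bond0/bonding/mode
balance-rr 0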
(2) Bond configuration
- Stop and disable the NetworkManager service
[root@docker ~]# systemctl stop NetworkManager
[root@docker ~]# systemctl disable NetworkManager
- Check whether the kernel has loaded the bonding module
[root@docker ~]# lsmod|grep bonding
bonding               141566  0
If the bonding module is not loaded, it can be loaded with:
modprobe --first-time bonding
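As an optional sanity check, modinfo confirms the module is present on disk and shows its version (standard module tooling):

modinfo bonding | grep -E '^(filename|version)'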
- Configure the bonding driver
[root@docker ~]# vi /etc/modprobe.d/bond.conf    // create the file manually if it does not exist
[root@docker ~]# cat /etc/modprobe.d/bond.conf
alias bond0 bonding
options bond0 miimon=100 mode=0    // miimon sets the link-monitoring interval, in ms
- Configure the bond interface
cat /etc/sysconfig/network-scripts/ifcfg-bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
USERCTL=no          // whether ordinary (non-root) users may control this device
DEVICE=bond0
IPADDR=192.168.0.221
PREFIX=24
NM_CONTROLLED=no    // NetworkManager parameter; no keeps NetworkManager from managing this device
BONDING_MASTER=yes
- Configure the slave interfaces
cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
The other slave NICs use the same configuration; only NAME and DEVICE change, as in the sketch below.
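For instance, a second slave's file might look like this (ifcfg-eth1 shown as a hypothetical mirror of the eth0 file above):

cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
NAME=eth1
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no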
- Restart the network service and verify
[root@docker network-scripts]# systemctl restart network
[root@docker network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ff:31:80
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:ff:31:8a
Slave queue ID: 0
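A quick failover sanity check can follow (a hedged sketch: eth0 is assumed to be one of the slaves and 192.168.0.1 a reachable gateway):

ip link set eth0 down
grep -A1 'Slave Interface' /proc/net/bonding/bond0    // eth0 should now report MII Status: down
ping -c 3 192.168.0.1                                 // traffic should keep flowing over eth1
ip link set eth0 up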
2. The nmcli approach (via the NetworkManager service)
(1) Check the network device status
[root@compute ~]# nmcli dev
DEVICE  TYPE      STATE      CONNECTION
eth0    ethernet  connected  eth0
eth1    ethernet  connected  Wired connection 1
lo      loopback  unmanaged  --
(2) Check the network connection status
[root@compute ~]# nmcli con sh
NAME                UUID                                  TYPE            DEVICE
Wired connection 1  d75d7715-1098-353e-bb11-4b718e51ff38  802-3-ethernet  eth1
eth0                5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  802-3-ethernet  eth0
(3) Create team0 (the team interface, the counterpart of the bond interface)
Following the syntax below, use nmcli to create a connection for the team interface.
# nmcli con add type team con-name CNAME ifname INAME [config JSON]
CNAME is the connection name, INAME the interface name, and JSON (JavaScript Object Notation) specifies the runner to use. The JSON takes the form:
'{"runner":{"name":"METHOD"}}'
METHOD is one of: broadcast, activebackup, roundrobin, loadbalance, or lacp.
The example below uses "roundrobin":
[root@compute ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"roundrobin"}}'
Connection 'team0' (64021ca5-85c3-429d-b930-56802dc0ccc4) successfully added.
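To mirror the mode-4/LACP recommendation at the top of this post, the lacp runner with a layer 3+4 transmit hash would look roughly like this (a sketch based on teamd's documented runner options; the switch must present an active-mode LACP LAG on these ports):

nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"lacp","active":true,"tx_hash":["l3","l4"]}}'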
Set team0's IP address, gateway, and DNS:
[root@compute ~]# nmcli con modify team0 ipv4.address "192.168.0.222/16" ipv4.gateway "192.168.0.1"
[root@compute ~]# nmcli con modify team0 ipv4.dns "223.5.5.5"
Set team0's IPv4 method to manual:
[root@compute ~]# nmcli con modify team0 ipv4.method manual
Add the slave NICs:
[root@compute ~]# nmcli con add type team-slave con-name team-port2 ifname eth1 master team0
Connection 'team-port2' (df74a4c7-f8ff-4ae3-b04f-3dd1210598cd) successfully added.
[root@compute ~]# nmcli con add type team-slave con-name team-port1 ifname eth0 master team0
Connection 'team-port1' (757648c4-114f-439f-b022-5bcf63ae0cb3) successfully added.
Bring up the team0 interface and verify:
[root@compute ~]# nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@compute ~]# teamdctl team0 sta
setup:
  runner: roundrobin
ports:
  eth0
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
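The runner in use can also be confirmed by dumping the teamd configuration (config dump is a standard teamdctl subcommand; it prints the team's JSON config, including the runner name):

teamdctl team0 config dump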
Common problem:
team0 has been brought up, but the interface still shows as down:
[root@compute network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.222/16 brd 192.168.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe4f:fd82/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4f:fd:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.160/16 brd 192.168.255.255 scope global dynamic eth1
       valid_lft 7191sec preferred_lft 7191sec
    inet 192.168.0.159/16 brd 192.168.255.255 scope global secondary dynamic eth1
       valid_lft 6300sec preferred_lft 6300sec
    inet6 fe80::86d1:12d7:5a7c:2d88/64 scope link
       valid_lft forever preferred_lft forever
4: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
Troubleshooting:
1. Check the connection status: team-port1, team-port2, and team0 are not attached to any network device.
[root@compute network-scripts]# nmcli con sh
NAME        UUID                                  TYPE            DEVICE
eth0        5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  802-3-ethernet  eth0
eth1        22a287d8-6206-4d10-bdd9-5299b063300e  802-3-ethernet  eth1
team-port1  757648c4-114f-439f-b022-5bcf63ae0cb3  802-3-ethernet  --
team-port2  df74a4c7-f8ff-4ae3-b04f-3dd1210598cd  802-3-ethernet  --
team0       64021ca5-85c3-429d-b930-56802dc0ccc4  team            --
2. Delete the eth0 and eth1 connections, which are holding the physical devices and blocking the team ports:
[root@compute network-scripts]# nmcli con del eth0 eth1
3. Check again: team0 and the slave connections are now properly attached to their devices.
[root@compute ~]# nmcli con sh
NAME        UUID                                  TYPE            DEVICE
team-port1  757648c4-114f-439f-b022-5bcf63ae0cb3  802-3-ethernet  eth0
team-port2  df74a4c7-f8ff-4ae3-b04f-3dd1210598cd  802-3-ethernet  eth1
team0       64021ca5-85c3-429d-b930-56802dc0ccc4  team            team0
4. Check the team0 interface state and test connectivity:
[root@compute ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
    link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
    link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
4: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.222/16 brd 192.168.255.255 scope global team0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac47:e724:cd16:c5ca/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::acce:9394:eafe:57bb/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::e1a2:77fd:6148:c7c6/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@compute ~]# ping baidu.com
PING baidu.com (111.13.101.208) 56(84) bytes of data.
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=30.6 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=30.7 ms (DUP!)
^C
--- baidu.com ping statistics ---
1 packets transmitted, 1 received, +1 duplicates, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 30.684/30.696/30.708/0.012 ms
Note: duplicate replies (DUP!) like the ones below during testing are caused by the switch side not being configured for port aggregation.
[root@compute ~]# ping baidu.com
PING baidu.com (111.13.101.208) 56(84) bytes of data.
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=28.2 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=28.2 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=2 ttl=52 time=29.2 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=2 ttl=52 time=29.2 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=3 ttl=52 time=29.8 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=3 ttl=52 time=29.9 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=4 ttl=52 time=27.7 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=4 ttl=52 time=27.7 ms (DUP!)
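If the switch cannot be configured with a LAG, one hedged workaround is to switch the team to the activebackup runner, which requires no switch cooperation (team.config is the standard nmcli property for updating the runner on an existing team connection):

nmcli con modify team0 team.config '{"runner":{"name":"activebackup"}}'
nmcli con down team0 && nmcli con up team0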