Configuring NIC bonding on CentOS


Sections

  1. Bonding technology
  2. Configuring bonding on CentOS 7
  3. Configuring bonding on CentOS 6

 

1. Bonding technology

Bonding is a Linux technique for binding n physical NICs on a server into a single logical interface inside the OS. It can increase network throughput and provide network redundancy (fault tolerance) and load balancing, among other benefits.

Bonding is implemented in the Linux kernel as a kernel module (driver). To use it, the system must have this module available; you can inspect it with the modinfo command, and in practice virtually every system ships it.

[root@host ~]# modinfo bonding
filename:       /lib/modules/3.10.0-229.el7.x86_64/kernel/drivers/net/bonding/bonding.ko
alias:          rtnl-link-bond
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
rhelversion:    7.1
srcversion:     25506952906F95B699162DB
depends:        
intree:         Y
vermagic:       3.10.0-229.el7.x86_64 SMP mod_unload modversions 
signer:         CentOS Linux kernel signing key
sig_key:        A6:2A:0E:1D:6A:6E:48:4E:9B:FD:73:68:AF:34:08:10:48:E5:35:E5
sig_hashalgo:   sha256
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interfaceby setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)

The seven bonding modes:

The bonding driver provides seven operating modes; you must choose one when configuring it, and each has its own strengths and weaknesses.

  1. balance-rr (mode=0)        # Default. Fault tolerance plus load balancing; requires switch configuration. Packets are sent round-robin across the NICs (traffic is distributed fairly evenly).
  2. active-backup (mode=1)  # Fault tolerance only; no switch configuration needed. Only one NIC is active at a time and a single MAC address is presented externally; the downside is low port utilization.
  3. balance-xor (mode=2)     # Rarely used
  4. broadcast (mode=3)        # Rarely used
  5. 802.3ad (mode=4)           # IEEE 802.3ad dynamic link aggregation; requires switch (LACP) configuration
  6. balance-tlb (mode=5)      # Rarely used
  7. balance-alb (mode=6)     # Fault tolerance plus load balancing; no switch configuration needed (traffic is not distributed perfectly evenly across the interfaces)

There is plenty of material online covering the details; understand each mode's characteristics and pick the one that fits. Modes 0, 1, 4 and 6 are the ones you will usually see in practice.
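
For a quick, non-persistent experiment, the mode can also be passed directly as module parameters when loading the driver (the persistent method used later in this article is BONDING_OPTS in the ifcfg file). A minimal sketch; loading the module creates a default bond0, whose mode can then be read back from sysfs:

[root@host ~]# modprobe bonding mode=1 miimon=100     # load the driver with active-backup defaults
[root@host ~]# cat /sys/class/net/bond0/bonding/mode  # the default bond0 reports its mode and number
active-backup 1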

 

2. Configuring bonding on CentOS 7

Environment

OS: CentOS 7
NICs: eth0, eth1, eth2, eth3
bond0: 192.168.200.154
Bonding mode: mode 6 (adaptive load balancing)

  

The server has four physical NICs (eth0, eth1, eth2, eth3), which we bind into one logical interface, bond0, using bonding mode 6.

Note: the IP address is configured on bond0; the physical NICs do not need IP addresses.

 

1. Stop and disable the NetworkManager service

[root@host ~]# systemctl stop NetworkManager.service # stop the NetworkManager service
[root@host ~]# systemctl disable NetworkManager.service # keep NetworkManager from starting at boot

Note: be sure to stop it; if left running, NetworkManager will interfere with the bonding setup.
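
To confirm the service is really stopped and disabled, you can query its state; on a host where the commands above succeeded, the output should look like this:

[root@host ~]# systemctl is-active NetworkManager.service
inactive
[root@host ~]# systemctl is-enabled NetworkManager.service
disabled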

2. Load the bonding module

[root@host ~]# modprobe --first-time bonding

No output means the module loaded successfully. If you instead see modprobe: ERROR: could not insert 'bonding': Module already in kernel, the module is already loaded and nothing further is needed.

You can also check whether the module is loaded with lsmod | grep bonding:

[root@host ~]# lsmod | grep bonding
bonding               129237  0 

 

3. Create the configuration file for the bond0 interface

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.200.154    # adjust to your network
NETMASK=255.255.255.0    # adjust to your network
GATEWAY=192.168.200.1    # adjust to your network
USERCTL=no
BONDING_OPTS="mode=6 miimon=100"

The line BONDING_OPTS="mode=6 miimon=100" above sets the bonding mode to mode 6 (adaptive load balancing); miimon is the link-monitoring interval in milliseconds, set here to 100 ms. You can specify a different mode here to suit your needs.
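
Once the network has been restarted (step 5 below), it is worth confirming that these options actually took effect; the bonding driver exposes them through sysfs. A quick sanity check might look like this:

[root@host ~]# cat /sys/class/net/bond0/bonding/mode    # active mode and its number
balance-alb 6
[root@host ~]# cat /sys/class/net/bond0/bonding/miimon  # link-monitoring interval in ms
100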

 

4. Edit the configuration files for the eth0, eth1, eth2 and eth3 interfaces

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
MASTER=bond0    # must match the DEVICE value in ifcfg-bond0
SLAVE=yes

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
MASTER=bond0    # must match the DEVICE value in ifcfg-bond0
SLAVE=yes

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth2
ONBOOT=yes
MASTER=bond0    # must match the DEVICE value in ifcfg-bond0
SLAVE=yes

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth3
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth3
ONBOOT=yes
MASTER=bond0    # must match the DEVICE value in ifcfg-bond0
SLAVE=yes
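
Since the four slave files differ only in the device name, they can also be generated in one shot. A small sketch, assuming the interfaces really are named eth0 through eth3 (double-check with ip link before overwriting anything):

# write identical slave configs for eth0..eth3, varying only DEVICE
for i in 0 1 2 3; do
cat > /etc/sysconfig/network-scripts/ifcfg-eth$i <<EOF
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth$i
ONBOOT=yes
MASTER=bond0
SLAVE=yes
EOF
done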

 

5. Restart the network and verify the configuration

[root@host ~]# systemctl restart network.service

 

Check the status of the bond0 interface (if this command errors out, the setup did not work; most likely the bond0 interface failed to come up):

[root@host ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing    # bonding mode: currently balance-alb (mode 6), i.e. fault tolerance plus load balancing
Primary Slave: None                      # primary slave: none
Currently Active Slave: eth0             # currently active slave: eth0
MII Status: up                           # link status: up (MII is short for Media Independent Interface)
MII Polling Interval (ms): 100           # link polling interval (here 100 ms)
Up Delay (ms): 0                         # delay before marking a link up (ms): 0
Down Delay (ms): 0                       # delay before marking a link down (ms): 0

Slave Interface: eth0                    # slave interface: eth0
MII Status: up                           # link status: up
Speed: 1000 Mbps                         # port speed is 1000 Mbps
Duplex: full                             # full duplex
Link Failure Count: 0                    # link failure count: 0
Permanent HW addr: 40:f2:e9:93:39:f2     # permanent MAC address
Slave queue ID: 0                        # slave queue ID: 0

Slave Interface: eth1                    # slave interface: eth1
MII Status: up                           # link status: up
Speed: 1000 Mbps                         # port speed is 1000 Mbps
Duplex: full                             # full duplex
Link Failure Count: 0                    # link failure count: 0
Permanent HW addr: 40:f2:e9:93:39:f3     # permanent MAC address
Slave queue ID: 0                        # slave queue ID: 0

Slave Interface: eth2                    # slave interface: eth2
MII Status: up                           # link status: up
Speed: 1000 Mbps                         # port speed is 1000 Mbps
Duplex: full                             # full duplex
Link Failure Count: 0                    # link failure count: 0
Permanent HW addr: 40:f2:e9:93:39:f4     # permanent MAC address
Slave queue ID: 0                        # slave queue ID: 0

Slave Interface: eth3                    # slave interface: eth3
MII Status: up                           # link status: up
Speed: 1000 Mbps                         # port speed is 1000 Mbps
Duplex: full                             # full duplex
Link Failure Count: 0                    # link failure count: 0
Permanent HW addr: 40:f2:e9:93:39:f5     # permanent MAC address
Slave queue ID: 0                        # slave queue ID: 0

 

Check the network interface details with the ifconfig command:

[root@host ~]# ifconfig
        
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.200.154  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 fe80::42f2:e9ff:fe93:39f2  prefixlen 64  scopeid 0x20<link>
        ether 40:f2:e9:93:39:f2  txqueuelen 1000  (Ethernet)
        RX packets 23152893100  bytes 1393866390632 (1.2 TiB)
        RX errors 450  dropped 0  overruns 0  frame 297
        TX packets 36335132323  bytes 53170910965842 (48.3 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 40:f2:e9:93:39:f2  txqueuelen 1000  (Ethernet)
        RX packets 23151959732  bytes 1393806809634 (1.2 TiB)
        RX errors 450  dropped 0  overruns 0  frame 297
        TX packets 9085519181  bytes 13295601644948 (12.0 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xc4580000-c459ffff  

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 40:f2:e9:93:39:f3  txqueuelen 1000  (Ethernet)
        RX packets 38190  bytes 2294232 (2.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9079906256  bytes 13286414175042 (12.0 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xc45a0000-c45bffff  

eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 40:f2:e9:93:39:f4  txqueuelen 1000  (Ethernet)
        RX packets 895323  bytes 57295454 (54.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9084061787  bytes 13293240780459 (12.0 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xc45c0000-c45dffff  

eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 40:f2:e9:93:39:f5  txqueuelen 1000  (Ethernet)
        RX packets 3  bytes 192 (192.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9085645362  bytes 13295654734856 (12.0 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xc45e0000-c45fffff  

 

Query the NICs with the ethtool command; here you can see that bond0's speed has reached 4000Mb/s, the aggregate of the four 1000Mb/s slaves.

[root@host ~]# ethtool bond0
Settings for bond0:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 4000Mb/s
        Duplex: Full
        Port: Other
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes

To test high availability we pulled one of the network cables; the results (a software-only version of this test is sketched after the list):

  • With mode=0, pulling the cable lost 1 packet; when the network recovered (cable plugged back in), essentially nothing was lost.
  • With mode=1, pulling the cable lost 1 packet; when the network recovered (cable plugged back in), essentially nothing was lost.
  • With mode=6, pulling the cable lost 1 packet; when the network recovered (cable plugged back in), around 5-6 packets were lost.
  • Apart from some loss during recovery and slightly uneven traffic distribution, mode 6 works well; if those points do not matter to you, it is a fine choice. mode 1 fails over and recovers quickly, with essentially no loss or delay, but its port utilization is low, since in this active-backup scheme only one NIC works at a time. mode 0 combines the strengths of the two, sending packets round-robin so traffic is distributed evenly, but it requires switch support.
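
If pulling a physical cable is not convenient, roughly the same failover can be reproduced from software: ping the bond's address continuously from a second machine, then take the active slave down on the server and watch the bond switch over. A rough sketch (the client host and its prompt are hypothetical; 192.168.200.154 is this article's bond0 address):

# on another host: watch for drops during failover
[root@client ~]# ping -i 0.2 192.168.200.154

# on the bonded server: simulate a link failure on the active slave
[root@host ~]# ifdown eth0
[root@host ~]# grep "Currently Active Slave" /proc/net/bonding/bond0
Currently Active Slave: eth1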

 

3. Configuring bonding on CentOS 6

Configuring bonding on CentOS 6 is essentially the same as on CentOS 7 above; only a few configuration details differ.

OS: CentOS 6
NICs: eth0, eth1
bond0: 192.168.200.155
Bonding mode: mode 1 (active-backup policy)  # active/standby

1. Stop and disable the NetworkManager service

[root@host ~]# service  NetworkManager stop
[root@host ~]# chkconfig NetworkManager off

Note: stop it if it is installed; if these commands complain that the service does not exist, NetworkManager is not installed and you can skip this step.
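
If you want to double-check its state on CentOS 6, the service's runlevel settings can be listed (only meaningful when the package is actually installed):

[root@host ~]# chkconfig --list NetworkManager
NetworkManager  0:off   1:off   2:off   3:off   4:off   5:off   6:off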

 

2. Load the bonding module

[root@host ~]# modprobe --first-time bonding

 

3. Create the configuration file for the bond0 interface

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-bond0 

DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.200.155     # adjust to your network
NETMASK=255.255.255.0      # adjust to your network
GATEWAY=192.168.200.1      # adjust to your network
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"

 

4. Map the bond0 interface to the bonding module

[root@host ~]# vi /etc/modprobe.d/bonding.conf    # add the following line to the file

alias bond0 bonding
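
You can verify that the alias is being picked up by dumping the effective modprobe configuration; a quick check might be:

[root@host ~]# modprobe -c | grep bond0
alias bond0 bonding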

 

5. Edit the configuration files for the eth0 and eth1 interfaces

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
MASTER=bond0    # must match the DEVICE value in ifcfg-bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

[root@host ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
MASTER=bond0    # must match the DEVICE value in ifcfg-bond0
SLAVE=yes
USERCTL=no
ONBOOT=yes
BOOTPROTO=none

 

6. Load the module, restart the network and verify the configuration

[root@host ~]# modprobe bonding
[root@host ~]# service network restart

 

Check the status of the bond0 interface:

[root@host ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)    # bonding mode: currently active-backup (mode 1), i.e. active/standby
Primary Slave: None                              # primary slave: none
Currently Active Slave: eth0                     # currently active slave: eth0
MII Status: up                                   # link status: up (MII is short for Media Independent Interface)
MII Polling Interval (ms): 100                   # link polling interval (here 100 ms)
Up Delay (ms): 0                                 # delay before marking a link up (ms): 0
Down Delay (ms): 0                               # delay before marking a link down (ms): 0

Slave Interface: eth0                            # slave interface: eth0
MII Status: up                                   # link status: up
Speed: 1000 Mbps                                 # port speed is 1000 Mbps
Duplex: full                                     # full duplex
Link Failure Count: 0                            # link failure count: 0
Permanent HW addr: 9e:21:6d:bf:93:3b             # permanent MAC address
Slave queue ID: 0                                # slave queue ID: 0

Slave Interface: eth1                            # slave interface: eth1
MII Status: up                                   # link status: up
Speed: 1000 Mbps                                 # port speed is 1000 Mbps
Duplex: full                                     # full duplex
Link Failure Count: 0                            # link failure count: 0
Permanent HW addr: f6:c0:4b:f3:7d:b5             # permanent MAC address
Slave queue ID: 0                                # slave queue ID: 0

 

Check the interfaces with the ifconfig command; you will notice that in mode=1 all the MAC addresses are identical, showing that the bond presents a single MAC address to the outside.

[root@host ~]# ifconfig
bond0     Link encap:Ethernet  HWaddr 9E:21:6D:BF:93:3B  
          inet addr:192.168.200.155  Bcast:192.168.200.255  Mask:255.255.255.0
          inet6 addr: fe80::9c21:6dff:febf:933b/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:11658 errors:0 dropped:0 overruns:0 frame:0
          TX packets:131 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:934373 (912.4 KiB)  TX bytes:22341 (21.8 KiB)

eth0      Link encap:Ethernet  HWaddr 9E:21:6D:BF:93:3B  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:11545 errors:0 dropped:0 overruns:0 frame:0
          TX packets:637 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:929450 (907.6 KiB)  TX bytes:140496 (137.2 KiB)

eth1      Link encap:Ethernet  HWaddr 9E:21:6D:BF:93:3B  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:7102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:560647 (547.5 KiB)  TX bytes:0 (0.0 b)

 
