About Link Aggregation:
Link aggregation bundles multiple physical interfaces into one logical port so the member ports share the traffic load. When the link on one member fails, the team stops sending frames on that port and, according to the configured policy, recalculates which of the remaining members should transmit; once the failed link recovers, the port resumes sending and receiving.
Benefits: more bandwidth, link resilience, and redundant backup.
Demo:
The example below uses CentOS 7.6 1810 (a VMware virtual machine).
[root@localhost ~]# uname -r
3.10.0-957.el7.x86_64
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
Check the NICs (I have three installed here)
Note: you can add network adapters while creating the VM or after it is built, but a newly added NIC has no configuration file, so you have to copy an existing one and edit it yourself, as sketched below.
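A minimal sketch of that manual copy, assuming the existing NIC is ens33 and the newly added one is ens34 (adjust the names to match your own ip addr output):
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost ~]# cp ifcfg-ens33 ifcfg-ens34
[root@localhost ~]# vi ifcfg-ens34   # change NAME and DEVICE to ens34, replace UUID with a fresh one from uuidgen, and remove or adjust any IP settings
[root@localhost ~]# nmcli connection reload   # or: systemctl restart network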
About NIC naming:
eno1: onboard NIC, named from the index supplied by the firmware/BIOS
ens33: NIC in a PCI-E hotplug slot, named from the slot index supplied by the firmware/BIOS
enp2s0: standalone PCI-E NIC, named from its physical location; a card may have several ports, so the names run enp2s0, enp2s1, ...
eth0: if none of the rules above apply, the default names eth0, eth1, eth2, ... are used
[root@localhost ~]# ip addr  # note that their MAC addresses are all different
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
link/ether 00:0f:27:ee:6f:4b brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP>...
link/ether 00:0d:26:bb:8e:5f brd ff:ff:ff:ff:ff:ff
4: ens35: <BROADCAST,MULTICAST,UP,LOWER_UP>...
link/ether 00:1a:5c:ab:2c:7d brd ff:ff:ff:ff:ff:ff
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:6b:f6:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:6b:f6:2a brd ff:ff:ff:ff:ff:ff
Create a logical port
Note: two connection types are available here, bond and team. bond is the bonding method from RHEL/CentOS 6, while team is the method introduced in version 7. They do essentially the same job, but team performs better.
[root@localhost ~]# nmcli connection add type team con-name team0 ifname team0 connection.autoconnect yes config '{"runner": {"name": "activebackup"}}'
# Two values for "name" are used in this article: "activebackup", the hot-standby mode configured here, and "roundrobin", the round-robin mode
# connection.autoconnect yes makes the connection come up automatically at boot
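For comparison, the bond equivalent of this step would look roughly like the following (not run in this demo; bond0 is just an example name, and on older nmcli versions the argument form "mode active-backup" is used instead of the property syntax):
[root@localhost ~]# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup" connection.autoconnect yes
[root@localhost ~]# nmcli connection add type bond-slave con-name bond0-1 ifname ens34 master bond0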
You might be thinking: I can't remember that JSON either, and what if I get the format slightly wrong? At times like that, don't forget this man: man. Let him help you.
[root@localhost ~]# man teamd  # if you can't remember what goes with teamd, run this and scroll down to find the related manual pages
...
SEE ALSO
teamdctl(8), teamd.conf(5), teamnl(8), bond2team(1)  # any of these pages will do
AUTHOR
Jiri Pirko is the original author and current maintainer of libteam.
libteam 2013-07-10 TEAMD(8)
Manual page teamd(8) line 69/94 (END) (press h for help or q to quit)
[root@localhost ~]# man teamd.conf  # then type /EXAMPLES to jump to the section below
TEAMD.CONF(5) Team daemon configuration TEAMD.CONF(5)
...
hwaddr (string)
Desired hardware address of new team device. Usual MAC
address format is accepted.
runner.name (string)
/EXAMPLES
EXAMPLES
{
"device": "team0",
"runner": {"name": "roundrobin"},
"ports": {"eth1": {}, "eth2": {}}
}
Very basic configuration.
{
"device": "team0",
"runner": {"name": "activebackup"},
"link_watch": {"name": "ethtool"},
"ports": {
"eth1": {
"prio": -10,
"sticky": true
},
"eth2": {
"prio": 100
}
}
}
...
Bind the member interfaces
Note: team-slave here cannot be Tab-completed, but the command is correct; with bond it would be bond-slave. This tripped me up for a while.
[root@localhost ~]# nmcli connection add type team-slave con-name team0-1 ifname ens34 master team0
[root@localhost ~]# nmcli connection add type team-slave con-name team0-2 ifname ens35 master team0
[root@localhost ~]# nmcli connection add type team-slave con-name team0-3 ifname ens33 master team0
# Each command must report success before you move on
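If you want to double-check what was just created (a quick verification, using the connection names from above):
[root@localhost ~]# nmcli connection show team0 | grep team.config   # the runner JSON should be stored here
[root@localhost ~]# ls /etc/sysconfig/network-scripts/ifcfg-team0*   # nmcli also writes an ifcfg file for the team and for each slave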
Assign team0 an address
[root@localhost ~]# nmcli connection modify team0 ipv4.method manual ipv4.addresses 192.168.137.145/24
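If the team also needs a default route and DNS, they are set the same way before bringing it up (the 192.168.137.1 gateway below is simply the address pinged later in this demo; substitute your own):
[root@localhost ~]# nmcli connection modify team0 ipv4.gateway 192.168.137.1 ipv4.dns 192.168.137.1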
Activate the team and each bound NIC
[root@localhost ~]# nmcli connection up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/21)
[root@localhost ~]# nmcli connection up team0-1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/24)
[root@localhost ~]# nmcli connection up team0-2
[root@localhost ~]# nmcli connection up team0-3
Check the connection status of the NICs
[root@localhost ~]# nmcli connection show
NAME UUID TYPE DEVICE
team0 83fb9214-7622-4ae4-b3fa-780e623d9f68 team team0
team0-1 199240b1-dd53-4fbb-9616-00004fca46a5 ethernet ens34
team0-2 0757416b-b44f-41e5-8a6a-47111b24debb ethernet ens35
team0-3 97328899-4f21-4eea-8d26-c438ad510775 ethernet ens33
virbr0 0710b728-367d-45d9-b875-2328a220df47 bridge virbr0
ens33 bfe1b128-c37a-4014-936b-828355e00db6 ethernet --
ens34 9bf1c231-b4f2-42bf-a73c-f92a43423f9a ethernet --
ens35 87189bfb-b38c-4c08-b1bc-eda5dc9fa1c0 ethernet --
Check the NIC configuration:
[root@localhost ~]# ifconfig  # note that the member NICs now all show the same MAC address
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:dd:9d:44 txqueuelen 1000 (Ethernet)
RX packets 12943 bytes 1432542 (1.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1005 bytes 111819 (109.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:dd:9d:44 txqueuelen 1000 (Ethernet)
RX packets 30266 bytes 19704139 (18.7 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10564 bytes 1703720 (1.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens35: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:dd:9d:44 txqueuelen 1000 (Ethernet)
RX packets 12095 bytes 769630 (751.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 160 bytes 18478 (18.0 KiB)
team0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.137.145 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::eefb:c3c3:140b:33e2 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:dd:9d:44 txqueuelen 1000 (Ethernet)
...
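Besides ifconfig, teamnl (one of the tools listed under SEE ALSO earlier) can list the team's member ports directly:
[root@localhost ~]# teamnl team0 ports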
Check the team's state
[root@localhost ~]# teamdctl team0 state
setup:
runner: activebackup
ports:
ens33
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
ens34
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
ens35
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
runner:
active port: ens35
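A quick way to confirm that the activebackup runner really fails over (a sketch; ens35 is the active port shown above):
[root@localhost ~]# ip link set ens35 down                      # simulate a failure on the active port
[root@localhost ~]# teamdctl team0 state | grep "active port"   # one of the remaining members should now be active
[root@localhost ~]# ping -c 3 192.168.137.1                     # traffic should still go through
[root@localhost ~]# ip link set ens35 up                        # restore the link when done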
Ping the gateway and another host
[root@localhost ~]# ping 192.168.137.1
PING 192.168.137.1 (192.168.137.1) 56(84) bytes of data.
64 bytes from 192.168.137.1: icmp_seq=1 ttl=64 time=0.448 ms
64 bytes from 192.168.137.1: icmp_seq=2 ttl=64 time=0.458 ms
[root@localhost ~]# ping 192.168.137.2
PING 192.168.137.2 (192.168.137.2) 56(84) bytes of data.
64 bytes from 192.168.137.2: icmp_seq=1 ttl=128 time=1.43 ms
64 bytes from 192.168.137.2: icmp_seq=2 ttl=128 time=0.232 ms
Test from my physical host machine
Microsoft Windows [Version 10.0.17134.950]
(c) 2018 Microsoft Corporation. All rights reserved.
C:\Users\guowei>ping 192.168.137.145
Pinging 192.168.137.145 with 32 bytes of data:
Reply from 192.168.137.145: bytes=32 time<1ms TTL=64
Reply from 192.168.137.145: bytes=32 time<1ms TTL=64
Reply from 192.168.137.145: bytes=32 time<1ms TTL=64
Ping statistics for 192.168.137.145:
    Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
Remove the teamed NICs
[root@localhost ~]# nmcli connection down team0-3 team0-2 team0-1 team0
Connection 'team0-2' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/22)
[root@localhost ~]# nmcli connection delete team0-3 team0-2 team0-1 team0
Connection 'team0-3' (97328899-4f21-4eea-8d26-c438ad510775) successfully deleted.
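If the original per-NIC connections do not come back up on their own after the team is deleted, you can reactivate them by hand (names as listed by nmcli connection show):
[root@localhost ~]# nmcli connection up ens33
[root@localhost ~]# nmcli connection up ens34
[root@localhost ~]# nmcli connection up ens35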
Check the result after the cleanup
[root@localhost ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.137.146 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::c378:aae8:717b:67c3 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:dd:9d:30 txqueuelen 1000 (Ethernet)
...
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.137.144 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::7c02:aad8:9814:4fa2 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:dd:9d:3a txqueuelen 1000 (Ethernet)
...
ens35: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.137.147 netmask 255.255.255.0 broadcast 192.168.137.255
inet6 fe80::a886:352c:d32f:3c9b prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:dd:9d:44 txqueuelen 1000 (Ethernet)
...
[root@localhost ~]# nmcli connection show
NAME UUID TYPE DEVICE
ens33 bfe1b128-c37a-4014-936b-828355e00db6 ethernet ens33
ens34 9bf1c231-b4f2-42bf-a73c-f92a43423f9a ethernet ens34
ens35 87189bfb-b38c-4c08-b1bc-eda5dc9fa1c0 ethernet ens35
virbr0 0710b728-367d-45d9-b875-2328a220df47 bridge virbr0