We need two NICs bonded together, and because the host also runs KVM virtual machines, a bridge must be configured as well. Bridging makes the physical host's NIC act like a switch: the virtual machines' NICs plug directly into this virtual switch, i.e. the bridge. The IP addresses assigned to the KVM guests are therefore on the same subnet as the physical host and can serve external traffic. So bonding and bridging have to be combined: first bind the two NICs into one bond, then attach the bond to the bridge.

The bridge is mainly used for the KVM virtualization environment, while the bond provides redundancy at the physical link level.

On our hyper-converged nodes the networks generally use link aggregation: two or more NICs are bonded and used as a single NIC, which gives load balancing and raises the bandwidth ceiling. The management, business, and storage networks each use two bonded NICs. At present only the management and storage networks are in use: the management network is for SSH access to manage the cloud hosts, and XSKY uses the storage network; the business network is not used yet. During the network check, the management-network addresses of all hosts in the cluster can ping each other, and the same holds for the storage network. The business network is defined by the user and needs no bridge here: it is not bridged between KVM hosts, but is configured when an L2 network is created in ZStack Cloud.

The whole cluster sits in one layer-2 broadcast domain, i.e. the switches are interconnected. If VLANs are to be configured, the switch ports must be set to trunk mode.
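If a VLAN-tagged network were ever needed on top of the bond, a tagged sub-interface could be bridged in the same way as the untagged bond below. The fragment is a hypothetical sketch — VLAN ID 100 and the bridge name br_vlan100 are invented for illustration — and it assumes the switch ports facing the bond are already in trunk mode:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0.100 (hypothetical example)
# A VLAN sub-interface of bond0 carrying VLAN ID 100, enslaved to a bridge.
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br_vlan100
```

A matching ifcfg-br_vlan100 file would then carry the IP settings, just like the ifcfg-br_bond0 file in step 5 below.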

For example, the management network aggregates two gigabit NICs, i.e. ifcfg-enp175s0f0 and ifcfg-enp175s0f1 under the /etc/sysconfig/network-scripts directory.

Link: https://www.jianshu.com/p/5d690a08d92e

1. Run ip -4 a to list the active interfaces. The host's management network address is 172.34.11.3/16, and 192.168.2.3/24 is the storage network.

101: br_bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 172.34.11.3/16 brd 172.34.255.255 scope global br_bond0
       valid_lft forever preferred_lft forever
    inet 172.34.11.7/32 scope global br_bond0
       valid_lft forever preferred_lft forever
102: br_bond2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 192.168.2.3/24 brd 192.168.2.255 scope global br_bond2
       valid_lft forever preferred_lft forever

2. cat ifcfg-enp175s0f0. This NIC is a slave of bond0.

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp175s0f0"
UUID="63dc9d83-ed30-43db-af83-248cfcbc5017"
DEVICE=enp175s0f0
BOOTPROTO=none
ONBOOT=yes
SLAVE=yes
MASTER=bond0

3. cat ifcfg-enp175s0f1. This NIC is also a slave of bond0.

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp175s0f1
UUID=8a796e51-5470-4c1c-aa90-206533c38d93
DEVICE=enp175s0f1
BOOTPROTO=none
ONBOOT=yes
SLAVE=yes
MASTER=bond0

4. cat ifcfg-bond0. This is the bond device bond0; its bridge is br_bond0.

DEVICE=bond0
NM_CONTROLLED=no
BONDING_OPTS="miimon=100 mode=4 xmit_hash_policy=layer2+3"
MTU=1500
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br_bond0

5. cat ifcfg-br_bond0. This is the bridge br_bond0. Configs found online often name it br0; the name makes no difference.

DEVICE=br_bond0
NAME=br_bond0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=no
IPADDR=172.34.11.3
NETMASK=255.255.0.0
GATEWAY=172.34.0.1
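As a sanity check, NETMASK=255.255.0.0 here matches the /16 prefix that ip -4 a showed in step 1. A small sketch in POSIX shell (none of this comes from the original setup) that converts a dotted netmask to a prefix length by counting set bits:

```shell
# Convert a dotted-quad netmask to a CIDR prefix length.
mask=255.255.0.0
prefix=0
for octet in $(printf '%s' "$mask" | tr '.' ' '); do
  # Count the set bits of each octet and accumulate them.
  while [ "$octet" -gt 0 ]; do
    prefix=$((prefix + octet % 2))
    octet=$((octet / 2))
  done
done
echo "/$prefix"   # prints /16 for 255.255.0.0
```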

6. Run cat /proc/net/bonding/bond0 to inspect bond0.

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: b4:05:5d:aa:20:9d
Active Aggregator Info:
        Aggregator ID: 5
        Number of ports: 1
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: 00:00:00:00:00:00

Slave Interface: enp175s0f0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b4:05:5d:aa:20:9d
Slave queue ID: 0
Aggregator ID: 5
Actor Churn State: none
Partner Churn State: churned
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: b4:05:5d:aa:20:9d
    port key: 9
    port priority: 255
    port number: 1
    port state: 77
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1

Slave Interface: enp175s0f1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: b4:05:5d:aa:20:9e
Slave queue ID: 0
Aggregator ID: 6
Actor Churn State: churned
Partner Churn State: churned
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
    system priority: 65535
    system mac address: b4:05:5d:aa:20:9d
    port key: 9
    port priority: 255
    port number: 2
    port state: 69
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:00:00:00:00:00
    oper key: 1
    port priority: 255
    port number: 1
    port state: 1
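The dump above is long; for scripted health checks, usually only each slave's MII Status matters. A minimal sketch that pulls out slave name and link state — the sample text below is abbreviated and partly invented (a down slave is faked to show both cases); in practice the input would come from /proc/net/bonding/bond0:

```shell
# Sample bonding status text; the first "MII Status" line belongs to the
# bond itself and is deliberately skipped by the awk program below.
bond_status='MII Status: up
Slave Interface: enp175s0f0
MII Status: up
Slave Interface: enp175s0f1
MII Status: down'

printf '%s\n' "$bond_status" | awk '
  /^Slave Interface:/ { slave = $3; next }                 # remember slave name
  /^MII Status:/ && slave { print slave, $3; slave = "" }  # first MII line after it
'
```

This prints one "name state" pair per slave (here: enp175s0f0 up, enp175s0f1 down), which is easy to feed into monitoring.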

7. Check the bonding module configuration.

[root@root-1 network-scripts]# cd /etc/modprobe.d/
[root@root-1 modprobe.d]# ls
bond.conf            firewalld-sysctls.conf        kvm.conf         lockd.conf  nbd.conf        tuned.conf  vhost-net.conf
dccp-blacklist.conf  iommu_unsafe_interrupts.conf  kvm-nested.conf  mlx4.conf   truescale.conf  vhost.conf
[root@root-1 modprobe.d]# cat bond.conf
options bonding max_bonds=0
[root@root-1 modprobe.d]#

Here max_bonds=0 tells the bonding driver not to create any bond device automatically at module load; the bond interfaces are created by the ifcfg scripts instead.