Production problem: configure LVS in master/backup mode to load-balance a MongoDB cluster, with LVS and the load-balanced service deployed on the same machines

Problem analysis: the cluster needs three nodes, while master/backup LVS needs two. If LVS is deployed on the same nodes as the cluster, one cluster node carries no LVS service, and the other two cluster nodes run the LVS master and backup services respectively.

As a beginner, I need to break the problem into smaller pieces and solve them one at a time to keep the approach clear. The breakdown is as follows:

  • First, following online tutorials, run an experiment in which the two LVS machines and the business server sit on different physical hosts

  1. Set up LVS
    Link: LVS环境搭建.md
  2. Install a single node of the cluster's service and check whether it can be reached through the virtual IP
    Link: LVS实战解决生产问题
  3. Set up keepalived in master/backup mode and access the single cluster node
    Link: follow LVS环境搭建.md
  4. Set up keepalived in master/backup mode and put the cluster behind it (a rough config sketch follows this list)
    Link:
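  The config sketch mentioned in step 4, for the MASTER director only, is below. It is a minimal sketch, assuming DR mode, VIP 192.168.1.148, VRRP on eth0, and a single web real server at 192.168.1.63 (the addresses match the experiment later in these notes); the BACKUP node would use state BACKUP and a lower priority:

    ! keepalived.conf on the MASTER director (sketch)
    global_defs {
        router_id LVS_MASTER
    }

    vrrp_instance VI_1 {
        ! on the backup node: state BACKUP and a lower priority (e.g. 90)
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.1.148
        }
    }

    ! forward the VIP's port 80 traffic to the single web real server in DR mode
    virtual_server 192.168.1.148 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP

        real_server 192.168.1.63 80 {
            weight 1
            TCP_CHECK {
                connect_port 80
                connect_timeout 3
            }
        }
    }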

  • With the above as groundwork, we can now address the real requirement: deploy LVS and the business IP on the same physical machine
  1. Add a NIC to the virtual machine
  • Add a network adapter in the VM settings
  • Under /etc/sysconfig/network-scripts, create a config file named after the new NIC
  • Using an existing NIC's file as a template, set the DEVICE, HWADDR, IPADDR, GATEWAY, and DNS1 fields (DEVICE and IPADDR must be changed; the others may stay as they are)
  • The HWADDR value can be read from ifconfig <NIC name>
  2. Edit the config file (a sketch of creating it follows the example below)
    DEVICE=eth1
    HWADDR=00:50:56:9A:64:89
    TYPE=Ethernet
    UUID=ab75ba59-4bea-4d4e-b0f2-7915796ece21
    ONBOOT=yes
    NM_CONTROLLED=yes
    BOOTPROTO=static
    IPADDR=192.168.1.149
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    DNS1=192.168.1.1
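    A quick way to create this file, as noted in step 2, is to copy the existing ifcfg-eth0 as a template and then bring the interface up; a sketch assuming the new NIC is eth1 and the CentOS 6 network-scripts layout used here:

    cd /etc/sysconfig/network-scripts
    # use the existing NIC config as a template, then edit DEVICE/HWADDR/IPADDR as above
    cp ifcfg-eth0 ifcfg-eth1
    vi ifcfg-eth1
    # bring the new interface up (or restart networking: service network restart)
    ifup eth1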
  3. Disable the firewall: iptables -F
  4. There are now two NICs, 192.168.1.63 and 192.168.1.149; 192.168.1.148 serves as the VIP, 192.168.1.149 as the LVS device IP, and 192.168.1.63 as the business IP

Accessing 192.168.1.63 and 192.168.1.148 from a browser both open the web page, but 192.168.1.149 does not, because 192.168.1.149 is the load-balancer device address and does not point to the web service (a verification sketch follows the ip a output below).
  5. Startup order: start the business process -> keepalived -> realserver.sh (a sketch follows the script below)
  6. Configure the realserver.sh file

#!/bin/bash
# LVS-DR real server helper: bind the VIP to a loopback alias and adjust ARP
# behaviour so this host accepts VIP traffic without answering ARP for the VIP.
SNS_VIP=192.168.1.148
DEV_ip="lo:0" # a loopback alias, not a real physical device
. /etc/rc.d/init.d/functions
case "$1" in
start)
 # bring the VIP up on lo:0 with a /32 mask and add a host route for it
 ifconfig $DEV_ip $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
 /sbin/route add -host $SNS_VIP dev $DEV_ip
 # suppress ARP replies/announcements for the VIP (standard LVS-DR settings)
 echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
 echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
 echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
 echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
 sysctl -p >/dev/null 2>&1
 echo "RealServer Start OK"
 ;;
stop)
 # remove the VIP and restore the default ARP behaviour
 ifconfig $DEV_ip down
 route del -host $SNS_VIP >/dev/null 2>&1
 echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
 echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
 echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
 echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
 echo "RealServer Stopped"
 ;;
*)
 echo "Usage: $0 {start|stop}"
 exit 1
 ;;
esac
exit 0
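
With the script in place on the combined node, the startup order from step 5 looks roughly like this (the business-process command is a placeholder for whatever service is being balanced; keepalived is assumed to be installed as a system service):

    # 1. start the business process (placeholder; replace with your own service)
    service httpd start
    # 2. start keepalived, which brings up the VIP and the LVS forwarding rules
    service keepalived start
    # 3. bind the VIP on lo:0 and set the ARP parameters for the real-server role
    chmod +x realserver.sh
    ./realserver.sh start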
  7. The ifconfig output is as follows
    eth0 Link encap:Ethernet HWaddr 00:50:56:9A:6D:EE
    inet addr:192.168.1.63 Bcast:192.168.1.255 Mask:255.255.255.0
    inet6 addr: fe80::250:56ff:fe9a:6dee/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:62678443 errors:0 dropped:0 overruns:0 frame:0
    TX packets:57216857 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:11965484646 (11.1 GiB) TX bytes:27189820524 (25.3 GiB)
    eth1 Link encap:Ethernet HWaddr 00:50:56:9A:64:89
    inet addr:192.168.1.149 Bcast:192.168.1.149 Mask:255.255.255.255
    inet6 addr: fe80::250:56ff:fe9a:6489/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:34594 errors:0 dropped:0 overruns:0 frame:0
    TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:2432355 (2.3 MiB) TX bytes:382 (382.0 b)
    lo Link encap:Local Loopback
    inet addr:127.0.0.1 Mask:255.0.0.0
    inet6 addr: ::1/128 Scope:Host
    UP LOOPBACK RUNNING MTU:16436 Metric:1
    RX packets:234682 errors:0 dropped:0 overruns:0 frame:0
    TX packets:234682 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:86841630 (82.8 MiB) TX bytes:86841630 (82.8 MiB)
    lo:0 Link encap:Local Loopback
    inet addr:192.168.1.148 Mask:255.255.255.255
    UP LOOPBACK RUNNING MTU:16436 Metric:1
  8. The ip a output is as follows
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.1.148/32 brd 192.168.1.148 scope global lo:0
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:9a:6d:ee brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.63/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.148/32 scope global eth0
    inet6 fe80::250:56ff:fe9a:6dee/64 scope link
    valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:9a:64:89 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.149/32 brd 192.168.1.149 scope global eth1
    inet6 fe80::250:56ff:fe9a:6489/64 scope link
    valid_lft forever preferred_lft forever
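  As a verification sketch of the behaviour described above (VIP and business IP reachable over the web, LVS device IP not), the director's forwarding table and the three addresses can be checked like this; ipvsadm must be installed, and the curl commands should be run from another machine on the LAN:

    # on the director: 192.168.1.148:80 should list 192.168.1.63 as its real server
    ipvsadm -Ln
    # from another host on the LAN:
    curl -I http://192.168.1.148/   # VIP, answered through LVS
    curl -I http://192.168.1.63/    # business IP, answered directly
    curl -I http://192.168.1.149/   # LVS device IP, expected to fail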

Summary: on the same host, with two NICs, running keepalived and the business service together is feasible.

  • The step above puts the business service and the LVS service on one machine. Next, switch to cluster mode: the business service becomes a MongoDB cluster, which needs 3 machines, while LVS runs in master/backup mode, which needs 2 machines
  1. First bring up the MongoDB cluster
  2. Configure LVS in master/backup mode (a rough sketch follows this list)
  3. Set the VIP on the MongoDB cluster nodes
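  For step 2, a rough virtual_server sketch for MongoDB's default port 27017 is below; the three real-server addresses are placeholders, not taken from these notes, and the VIP is assumed to remain 192.168.1.148. For step 3, each MongoDB node would also run realserver.sh (with SNS_VIP set to the same VIP) so that it accepts VIP traffic without answering ARP for it, as in the single-node case above:

    virtual_server 192.168.1.148 27017 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP

        ! placeholder addresses for the three MongoDB nodes
        real_server 192.168.1.201 27017 {
            weight 1
            TCP_CHECK {
                connect_port 27017
                connect_timeout 3
            }
        }
        real_server 192.168.1.202 27017 {
            weight 1
            TCP_CHECK {
                connect_port 27017
                connect_timeout 3
            }
        }
        real_server 192.168.1.203 27017 {
            weight 1
            TCP_CHECK {
                connect_port 27017
                connect_timeout 3
            }
        }
    }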