1. Nginx + keepalived master/backup configuration
This scheme uses a single VIP address with two front-end machines, one as master and one as backup. Only one machine serves traffic at any time; as long as the master is healthy, the backup sits idle. For a site without many servers, this scheme is not economical.
2. Nginx + keepalived dual-master configuration
This scheme uses two VIP addresses with two front-end machines acting as master and backup for each other, so both serve traffic simultaneously. When one machine fails, its requests shift to the other. This fits the current architecture well.
1. The environment is as follows
lb-01: 192.168.75.136/24  nginx + keepalived (master)
lb-02: 192.168.75.137/24  nginx + keepalived (backup)
VIP:   192.168.75.135
rs-01: 192.168.75.133/24  apache
rs-02: 192.168.75.134/24  apache
The LB machines run CentOS 7; the RS machines run Ubuntu 14.04.
2. Install nginx on lb-01/02 (the configuration files are identical on both)
The nginx yum repository definition:
# cat /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
Install nginx:
# yum install nginx -y
Start the nginx service:
# systemctl start nginx.service
The nginx configuration is as follows:
[root@lb-01 conf.d]# pwd
/etc/nginx/conf.d
[root@lb-01 conf.d]# cat upstream.conf
upstream pools {
    server 192.168.75.133:80 weight=3;
    server 192.168.75.134:80 weight=3;
}
server {
    listen 80;
    server_name www.zxl.com;
    location / {
        proxy_pass http://pools;
        include /etc/nginx/conf.d/a.conf;
    }
}
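With only weight set, nginx falls back on its default passive health-check behavior for failed backends. A hedged variant of the upstream block above with explicit passive health-check tuning (the max_fails/fail_timeout values are illustrative, not part of the original setup):

```nginx
upstream pools {
    # take a backend out of rotation for 10s after 2 failed attempts
    server 192.168.75.133:80 weight=3 max_fails=2 fail_timeout=10s;
    server 192.168.75.134:80 weight=3 max_fails=2 fail_timeout=10s;
}
```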
Contents of the included a.conf:
[root@lb-01 conf.d]# cat a.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 30;
proxy_send_timeout 15;
proxy_read_timeout 15;
Copy the relevant nginx configuration files to the corresponding directory on lb-02, then run nginx -t to check the syntax and reload nginx.
3. Verify that lb-01/02 load-balance requests
Open a browser on a client and access the IP addresses of lb-01 and lb-02 in turn; the results are as follows:
As shown above, lb-01 and lb-02 both distribute requests across the backends.
4. High availability with nginx + keepalived
Why keepalived? It provides the high-availability layer: it manages a virtual IP (VIP) that floats to whichever node is healthy.
4.1 Install keepalived on both LB machines
# yum install keepalived -y
Check the keepalived version:
# keepalived -v
Keepalived v1.2.13 (11/20,2015)
4.2 The keepalived configuration on the two machines
On lb-01 (master), the keepalived configuration is as follows:
[root@lb-01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        19872672@qq.com
    }
    notification_email_from root@localhost.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.135
    }
}
On lb-02 (backup), the keepalived configuration is as follows:
[root@lb-02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        19872672@qq.com
    }
    notification_email_from root@localhost.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_BACKUP
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.135
    }
}
Then start the keepalived service on both machines:
[root@lb-01 ~]# systemctl start keepalived.service
[root@lb-02 ~]# systemctl start keepalived.service
4.3 Check the VIP
On lb-01 (master):
[root@lb-01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4f:23:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.136/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1133sec preferred_lft 1133sec
    inet 192.168.75.135/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe4f:23ba/64 scope link
       valid_lft forever preferred_lft forever
As shown above, the VIP 192.168.75.135 is present on lb-01.
On lb-02 (backup):
[root@lb-02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:61:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.137/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1306sec preferred_lft 1306sec
    inet6 fe80::20c:29ff:fe9d:61b5/64 scope link
       valid_lft forever preferred_lft forever
As expected, the VIP is not present on lb-02.
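The manual `ip a` inspection on the two nodes can also be scripted. A minimal sketch (the `has_vip` helper is a name I chose for illustration, not a keepalived tool; the VIP is the one from this setup):

```shell
# has_vip ADDR: read `ip a` output on stdin, succeed if ADDR is configured
has_vip() {
    grep -q "inet $1/"
}

# Guarded so the sketch is harmless on hosts without the `ip` tool
if command -v ip >/dev/null 2>&1; then
    if ip a | has_vip 192.168.75.135; then
        echo "VIP 192.168.75.135 present on this node"
    else
        echo "VIP 192.168.75.135 absent"
    fi
fi
```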
4.4 Test access through the VIP
Requests to the VIP now reach the backends in round-robin fashion.
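Besides the browser, the round-robin behavior can be checked with curl, assuming each RS serves a page whose first line identifies the backend (an assumption about the test pages, not stated above; `tally` is just a local helper):

```shell
# tally: count identical lines on stdin, most frequent first
tally() {
    sort | uniq -c | sort -rn
}

# Hit the VIP several times and tally which backend answered each request
for i in 1 2 3 4 5 6; do
    curl -s --max-time 1 http://192.168.75.135/ | head -n1
done | tally
```

With equal weights, the tally should split roughly evenly between the two backends.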
4.5 Simulate a failure
Stop nginx and keepalived on lb-01 (master) and check whether the service is still available:
[root@lb-01 ~]# nginx -s stop
[root@lb-01 ~]# systemctl stop keepalived.service
Verify that nginx has stopped (checking with ps -ef | grep nginx works too):
[root@lb-01 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22       0.0.0.0:*          LISTEN   1034/sshd
tcp        0      0 127.0.0.1:25     0.0.0.0:*          LISTEN   1793/master
tcp6       0      0 :::22            :::*               LISTEN   1034/sshd
tcp6       0      0 ::1:25           :::*               LISTEN   1793/master
The VIP is no longer on lb-01 (master):
[root@lb-01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4f:23:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.136/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1361sec preferred_lft 1361sec
    inet6 fe80::20c:29ff:fe4f:23ba/64 scope link
       valid_lft forever preferred_lft forever
4.6 Open a browser on the client and check that access still works
Even with nginx and keepalived down on lb-01, the service is unaffected.
4.7 Check the VIP on lb-02 (backup)
[root@lb-02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:61:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.137/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1366sec preferred_lft 1366sec
    inet 192.168.75.135/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9d:61b5/64 scope link
       valid_lft forever preferred_lft forever
The VIP is now on lb-02.
4.8 Check the keepalived logs on both machines to see the failover
keepalived log on lb-01:
[root@lb-01 ~]# tail -f /var/log/messages
Jun 30 17:01:01 node1 systemd: Started Session 1159 of user root.
Jun 30 17:05:32 node1 systemd: Stopped nginx - high performance web server.
Jun 30 17:05:44 node1 systemd: Stopping LVS and VRRP High Availability Monitor...
Jun 30 17:05:44 node1 Keepalived[32926]: Stopping Keepalived v1.2.13 (11/20,2015)
Jun 30 17:05:44 node1 Keepalived_vrrp[32928]: VRRP_Instance(VI_1) sending 0 priority
Jun 30 17:05:44 node1 Keepalived_vrrp[32928]: VRRP_Instance(VI_1) removing protocol VIPs.
Jun 30 17:05:44 node1 Keepalived_healthcheckers[32927]: Netlink reflector reports IP 192.168.75.135 removed
Jun 30 17:05:44 node1 systemd: Stopped LVS and VRRP High Availability Monitor.
The log shows that when the service stops, keepalived sends a priority-0 advertisement and removes the VIP.
keepalived log on lb-02:
[root@lb-02 log]# tail -f messages
Jun 30 17:01:35 node2 systemd: Started Network Manager Script Dispatcher Service.
Jun 30 17:01:35 node2 nm-dispatcher: Dispatching action 'dhcp4-change' for eth0
Jun 30 17:05:44 node2 Keepalived_vrrp[46346]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jun 30 17:05:45 node2 Keepalived_vrrp[46346]: VRRP_Instance(VI_1) Entering MASTER STATE
Jun 30 17:05:45 node2 Keepalived_vrrp[46346]: VRRP_Instance(VI_1) setting protocol VIPs.
Jun 30 17:05:45 node2 Keepalived_vrrp[46346]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.75.135
Jun 30 17:05:45 node2 Keepalived_healthcheckers[46345]: Netlink reflector reports IP 192.168.75.135 added
Jun 30 17:05:50 node2 Keepalived_vrrp[46346]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.75.135
The log shows lb-02 transitioning to the MASTER state, adding the VIP 192.168.75.135, and sending gratuitous ARPs.
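A side note on failback: with the configuration above, as soon as keepalived on lb-01 comes back, its higher priority makes it preempt the VIP again. If you would rather have the VIP stay on lb-02 until the next failure, keepalived supports non-preemptive mode. A sketch (nopreempt requires state BACKUP on both nodes, with the priorities still deciding the initial election):

```
vrrp_instance VI_1 {
    state BACKUP      # both nodes configured as BACKUP
    nopreempt         # recovering higher-priority node does not reclaim the VIP
    interface eth0
    virtual_router_id 51
    priority 100      # keep distinct priorities (90 on the peer)
    ...
}
```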
4.9 The failure above was triggered by hand; can detection and failover be automated?
Yes, with a health-check script:
[root@lb-01 ~]# cat /data/scripts/check_nginx_status.sh
#!/bin/bash
start_nginx=`which nginx`
nginx_status1=`ps -C nginx --no-header | wc -l`
if [ $nginx_status1 -eq 0 ]; then
    $start_nginx
    sleep 3
    nginx_status2=`ps -C nginx --no-header | wc -l`
    if [ $nginx_status2 -eq 0 ]; then
        systemctl stop keepalived.service
    fi
fi
Note: the script is identical on lb-01 and lb-02; add it to cron. Since cron's finest granularity is one minute, the entry below runs the check once per minute (after a 3-second delay), not every three seconds:
* * * * * sleep 3; /bin/bash /data/scripts/check_nginx_status.sh
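Cron is not the only option: keepalived itself can run a check periodically via vrrp_script and fail over by lowering its priority, which avoids cron's one-minute granularity. A sketch (interval and weight values are illustrative; the track_script block goes inside the existing vrrp_instance, and the script would then need to exit non-zero when nginx is down rather than stopping keepalived itself):

```
vrrp_script chk_nginx {
    script "/data/scripts/check_nginx_status.sh"
    interval 3        # run the check every 3 seconds
    weight -20        # lower this node's priority by 20 while the check fails
}

vrrp_instance VI_1 {
    ...
    track_script {
        chk_nginx
    }
}
```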
So how do we implement the nginx + keepalived dual-master mode?
1. Only the keepalived configuration files need to change; examples below.
A second VIP, 192.168.75.150, is added: 192.168.75.135 is the primary VIP on lb-01, and 192.168.75.150 is the primary VIP on lb-02.
The keepalived configuration on lb-01:
[root@lb-01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        1987277672@qq.com
    }
    notification_email_from root@localhost.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.135
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.150
    }
}
The keepalived configuration on lb-02:
[root@lb-02 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        1987277672@qq.com
    }
    notification_email_from root@localhost.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_BACKUP
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.135
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.75.150
    }
}
2. Start nginx and keepalived on both LB machines
Start on lb-01:
[root@lb-01 keepalived]# nginx
[root@lb-01 keepalived]# systemctl start keepalived.service
Check that the services are running on lb-01:
[root@lb-01 ~]# ps -ef | grep nginx
root      6298     1  0 22:28 ?      00:00:00 nginx: master process nginx
nginx     6299  6298  0 22:28 ?      00:00:00 nginx: worker process
root      6338  6128  0 22:39 pts/1  00:00:00 grep --color=auto nginx
[root@lb-01 ~]# ps -ef | grep keepalived
root      6304     1  0 22:29 ?      00:00:00 /usr/sbin/keepalived -D
root      6305  6304  0 22:29 ?      00:00:00 /usr/sbin/keepalived -D
root      6306  6304  0 22:29 ?      00:00:00 /usr/sbin/keepalived -D
root      6340  6128  0 22:39 pts/1  00:00:00 grep --color=auto keepalived
Check the VIP on lb-01:
[root@lb-01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4f:23:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.136/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1506sec preferred_lft 1506sec
    inet 192.168.75.135/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe4f:23ba/64 scope link
       valid_lft forever preferred_lft forever
Note: the VIP on lb-01 is 192.168.75.135.
Start on lb-02:
[root@lb-02 keepalived]# nginx
[root@lb-02 keepalived]# systemctl start keepalived.service
Check that the services are running on lb-02:
[root@lb-02 ~]# ps -ef | grep nginx
root     56849     1  0 22:27 ?      00:00:00 nginx: master process nginx
nginx    56850 56849  0 22:27 ?      00:00:00 nginx: worker process
root     56899 53901  0 22:41 pts/0  00:00:00 grep --color=auto nginx
[root@lb-02 ~]# ps -ef | grep keepalived
root     56856     1  0 22:28 ?      00:00:00 /usr/sbin/keepalived -D
root     56857 56856  0 22:28 ?      00:00:00 /usr/sbin/keepalived -D
root     56858 56856  0 22:28 ?      00:00:00 /usr/sbin/keepalived -D
root     56901 53901  0 22:41 pts/0  00:00:00 grep --color=auto keepalived
Check the VIP on lb-02:
[root@lb-02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:61:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.137/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1087sec preferred_lft 1087sec
    inet 192.168.75.150/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9d:61b5/64 scope link
       valid_lft forever preferred_lft forever
Note: the VIP on lb-02 is 192.168.75.150.
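For clients to actually spread load across both VIPs, the site's domain would typically resolve to both addresses, e.g. with two A records in DNS round robin. A zone-file sketch (DNS is not covered above, so this fragment is an assumption):

```
www.zxl.com.    IN  A   192.168.75.135
www.zxl.com.    IN  A   192.168.75.150
```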
3. Test access to the VIPs from a client
4. Simulate a failure
Stop the services on lb-01:
[root@lb-01 ~]# nginx -s stop
[root@lb-01 ~]# systemctl stop keepalived.service
Verify that the services on lb-01 are stopped:
[root@lb-01 ~]# ps -ef | grep nginx
root      6355  6128  0 22:49 pts/1  00:00:00 grep --color=auto nginx
[root@lb-01 ~]# ps -ef | grep keepalived
root      6373  6128  0 22:49 pts/1  00:00:00 grep --color=auto keepalived
Check whether the VIP is still on lb-01:
[root@lb-01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:4f:23:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.136/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1798sec preferred_lft 1798sec
    inet6 fe80::20c:29ff:fe4f:23ba/64 scope link
       valid_lft forever preferred_lft forever
As shown above, the VIP is no longer on lb-01.
5. Test access through the VIPs
As the results above show, even with lb-01 down the service is unaffected, and during normal operation both LB machines carry traffic instead of one sitting idle.
Check the VIPs on lb-02:
[root@lb-02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:61:b5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.75.137/24 brd 192.168.75.255 scope global dynamic eth0
       valid_lft 1020sec preferred_lft 1020sec
    inet 192.168.75.150/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.75.135/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9d:61b5/64 scope link
       valid_lft forever preferred_lft forever
Note: both VIPs are now on lb-02. That wraps up nginx + keepalived; at the service level, open-source monitoring software can additionally be used for monitoring.