Notes on configuring haproxy + keepalived + nginx on RedHat 5

----by knight

HA: High Availability

My simple understanding of keepalived is that it provides a virtual IP (VIP) that can move between a master and a backup keepalived server: when the master keepalived dies, the backup takes over the VIP seamlessly.

keepalived is a helper tool for high availability and is usually paired with a concrete service such as MySQL, DRBD, or haproxy. This article covers haproxy: after keepalived is configured, a watchdog script is added so that when haproxy dies the script immediately shuts keepalived down and the backup takes over. The VIP is bound to the local physical NIC, so accessing the VIP is effectively accessing the local machine; once the script is triggered the VIP moves to the backup, and accessing the VIP then reaches the backup's physical IP. That is what makes haproxy highly available.

What this lab simulates is using haproxy to load-balance across the web (nginx) servers to relieve concurrency pressure, while guaranteeing that if haproxy-master dies, haproxy-backup takes over seamlessly: load balancing plus high availability for the web site, so clients keep getting content without interruption.

Solution:

OS: CentOS 5

nginx: nginx-1.2.8

haproxy: haproxy-1.4.8

keepalived:keepalived-1.2.7

haproxy VIP (virtual IP):            192.168.1.120

haproxy-master(haproxy1):    192.168.1.108    www1.example.com

haproxy-backup(haproxy2):     192.168.1.109   www2.example.com

nginx1:                     192.168.1.108   www1.example.com

nginx2:                     192.168.1.109   www2.example.com

Since I only set up two virtual machines, each load balancer also acts as a web server.

(The same hosts also appear under a second lab network in some of the log output and screenshots below:)

192.168.1.108 == 192.168.7.71

192.168.1.109 == 192.168.7.72

192.168.1.108 == 192.168.7.73 (web1)

192.168.1.109 == 192.168.7.74

(haproxy1): configure on haproxy1 only

(haproxy2): configure on haproxy2 only

(haproxy1,haproxy2): configure on both haproxy1 and haproxy2

Environment preparation:

1. Stop iptables and disable SELinux

# service iptables stop

# setenforce 0

# vi /etc/sysconfig/selinux

---------------

SELINUX=disabled

---------------

2. Install nginx

Not covered here.

After installation, just configure each server so that the root page returned to the browser shows that machine's IP address.
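A minimal sketch of what that could look like (the install prefix and file paths here are assumptions, not taken from the original notes): each node serves a static index page containing its own address.

-----------------

# /usr/local/nginx/conf/nginx.conf -- server block only (assumed prefix)

server {
    listen       80;
    server_name  _;
    location / {
        root   html;           # serves files from /usr/local/nginx/html
        index  index.html;
    }
}

-----------------

# on nginx1 (192.168.1.108):
# echo "192.168.1.108" > /usr/local/nginx/html/index.html
# on nginx2 (192.168.1.109):
# echo "192.168.1.109" > /usr/local/nginx/html/index.html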

I. haproxy installation and configuration: (haproxy1,haproxy2)

# wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.8.tar.gz

# tar zxvf haproxy-1.4.8.tar.gz

# cd haproxy-1.4.8

# uname -a          // check the Linux kernel version (to pick the right TARGET below)

# make TARGET=linux26 PREFIX=/usr/local/haproxy

# make install PREFIX=/usr/local/haproxy

# useradd -s /sbin/nologin haproxy

# passwd haproxy

# chown -R haproxy.haproxy /usr/local/haproxy

Configuration:

# vi /usr/local/haproxy/haproxy.cfg

-----------------

global

log 127.0.0.1 local0

maxconn 5120  

chroot /usr/local/haproxy  

user haproxy  

group haproxy  

daemon  

quiet  

nbproc  1  

pidfile /usr/local/haproxy/haproxy.pid

# with debug enabled, haproxy scrolls its log to the console after startup; comment it out in production

debug  

defaults

log 127.0.0.1 local3  

mode http  

option httplog

option httpclose

option  dontlognull

#option  forwardfor  

option  redispatch

retries 2

maxconn 2000

balance source  

contimeout      5000  

clitimeout      50000  

srvtimeout      50000  

# the load balancer and the web server share one host, so port 80 would
# conflict with nginx; bind the proxy to port 81 instead
listen web_proxy 192.168.1.120:81

    server www1 192.168.1.108:80  weight 5 check inter 2000 rise 2 fall 5

    server www2 192.168.1.109:80  weight 5 check inter 2000 rise 2 fall 5

# stats page port
listen stats :8888

    mode http

    #transparent

    stats uri /haproxy-stats

    stats realm Haproxy\ statistic

    # credentials for the stats page
    stats auth haproxy:password

-----------------

Start haproxy

# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg &

Note: the trailing "&" runs haproxy in the background; remove it to watch the scrolling debug log in real time.

Log output:

------------------------

Available polling systems :

   sepoll : pref=400, test result OK

    epoll : pref=300, test result OK

     poll : pref=200, test result OK

   select : pref=150, test result OK

Total: 4 (4 usable), will use sepoll.

Using sepoll() as the polling mechanism.

00000000:web_proxy.accept(0004)=0007 from [192.168.7.129:5752]

00000000:web_proxy.clireq[0007:ffff]: GET / HTTP/1.1

00000000:web_proxy.clihdr[0007:ffff]: Accept: text/html, application/xhtml+xml, */*

00000000:web_proxy.clihdr[0007:ffff]: Accept-Language: zh-CN

00000000:web_proxy.clihdr[0007:ffff]: User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)

00000000:web_proxy.clihdr[0007:ffff]: Accept-Encoding: gzip, deflate

00000000:web_proxy.clihdr[0007:ffff]: Host: 192.168.7.71

00000000:web_proxy.clihdr[0007:ffff]: If-Modified-Since: Tue, 28 May 2013 18:22:10 GMT

00000000:web_proxy.clihdr[0007:ffff]: If-None-Match: "10-4ddcb57ecf1ee"

00000000:web_proxy.clihdr[0007:ffff]: Connection: Keep-Alive

00000000:web_proxy.srvrep[0007:0008]: HTTP/1.1 304 Not Modified

00000000:web_proxy.srvhdr[0007:0008]: Date: Tue, 28 May 2013 19:48:35 GMT

00000000:web_proxy.srvhdr[0007:0008]: Server: Apache/2.4.4 (Unix)

00000000:web_proxy.srvhdr[0007:0008]: Connection: close

00000000:web_proxy.srvhdr[0007:0008]: ETag: "10-4ddcb57ecf1ee"

00000000:web_proxy.srvcls[0007:0008]

00000000:web_proxy.clicls[0007:0008]

00000000:web_proxy.closed[0007:0008]

------------------------

Check that it is running

# ps -ef|grep haproxy



Restart haproxy

# pkill haproxy

# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg
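Since haproxy.cfg defines a pidfile, a softer restart is also possible (a sketch, assuming the pidfile is actually being written, i.e. haproxy is running in daemon mode): the -sf option starts a new process and tells the old one to finish its current connections and then exit.

# /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg -sf $(cat /usr/local/haproxy/haproxy.pid)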



Where:

haproxy proxy:  192.168.1.120:81

nginx1:    192.168.1.108:80

nginx2:    192.168.1.109:80

Stats page port: 8888

Stats page:

http://192.168.1.108:8888/haproxy-stats

Login / password: haproxy/password
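The stats page can also be checked from the shell (the credentials are the ones set with stats auth above):

# curl -u haproxy:password http://192.168.1.108:8888/haproxy-stats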




Summary:

As the log shows, when client 192.168.1.103 connects to haproxy at 192.168.7.71, no matter how many times the client refreshes the page, haproxy keeps forwarding the requests to the same nginx backend, 192.168.1.109. That is the effect of balance source, which pins a client (by source-address hash) to one backend and so preserves its session; with balance roundrobin the client would alternate between the two web servers. For production, balance source is still recommended here, so that a given client stays on the same backend for a long period instead of bouncing back and forth.
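If you do want to try round-robin, the change is a single balance line; a sketch, placed in the listen section so it overrides the defaults:

-----------------

listen web_proxy 192.168.1.120:81

    balance roundrobin    # instead of the "balance source" set in defaults

    server www1 192.168.1.108:80  weight 5 check inter 2000 rise 2 fall 5

    server www2 192.168.1.109:80  weight 5 check inter 2000 rise 2 fall 5

-----------------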

At this point haproxy is already load-balancing the two nginx servers. To make the load balancer itself highly available, we add keepalived's hot-standby capability so that if haproxy1 dies, haproxy2 takes over in real time, giving the site's front end both load balancing and high availability. This haproxy + keepalived combination is a popular pairing.

II. keepalived installation and configuration: (haproxy1,haproxy2)

# wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz

# tar zxvf keepalived-1.2.7.tar.gz

# cd keepalived-1.2.7

# ./configure --prefix=/usr/local/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-279.el6.x86_64

# make && make install

Set up the keepalived init script

# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/

# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

# mkdir /etc/keepalived

# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

# chkconfig keepalived on
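A quick sanity check that the copied binary is found on the PATH (it prints the keepalived version):

# keepalived -v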

(haproxy1)

# vi /etc/keepalived/keepalived.conf

----------------------

! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.120
    }
}

----------------------

(haproxy2)

# vi /etc/keepalived/keepalived.conf

----------------------

! Configuration File for keepalived

global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.120
    }
}

----------------------

Start keepalived

# service keepalived start

# ps -ef |grep keepalived

----------------------



Note: until the haproxy + keepalived watchdog script (check_haproxy.sh) from the next step is running, the check_haproxy.sh process (the line highlighted in red in the original screenshot) does not appear in this ps output.


-----------------------

Create the haproxy + keepalived watchdog script:

When haproxy dies, the script tries to start it again; if it cannot be restarted, the script shuts keepalived down completely so that the VIP is handed over to the backup.

(haproxy1,haproxy2)

# vi /etc/keepalived/check_haproxy.sh

---------------------

#!/bin/bash
# Watchdog: if haproxy dies, try to restart it; if the restart fails,
# stop keepalived so the VIP fails over to the backup.
while :
do
    hapid=`ps -C haproxy --no-header | wc -l`
    if [ $hapid -eq 0 ]; then
        /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg
        sleep 5
        # re-check: the restart attempt may have failed
        hapid=`ps -C haproxy --no-header | wc -l`
        if [ $hapid -eq 0 ]; then
            /etc/init.d/keepalived stop
        fi
    fi
    sleep 5
done

--------------------

Make the script executable

# chmod 755 /etc/keepalived/check_haproxy.sh

Run it in the background with nohup (it keeps running even after the terminal session is closed)

# nohup sh /etc/keepalived/check_haproxy.sh &
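Confirm the watchdog is running:

# ps -ef | grep check_haproxy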

III. Testing:

1. Run ip add on both machines; at the moment the VIP is shown bound to the local NIC on haproxy1

(haproxy1)

# ip add

-----------------------




-----------------------

(haproxy2)

# ip add

-----------------------



-----------------------

Access the VIP in a browser:

http://192.168.1.120:81

The returned page (screenshot omitted) is served by nginx1.


This shows that keepalived has the VIP on haproxy1, which handles the load balancing and forwards the request to nginx1.
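For the failover tests below it is handy to keep a simple client loop running against the VIP (a sketch; port 81 matches the listen line in haproxy.cfg):

# while true; do curl -s http://192.168.1.120:81/; sleep 1; done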

2. Kill the haproxy service on haproxy1; within about 5 seconds the check_haproxy.sh watchdog starts it again

(haproxy1)

# pkill haproxy

Wait 5 seconds

# ps -ef |grep haproxy

--------------



--------------

3. Stop keepalived on the master; the backup takes over the VIP immediately

(haproxy1)

# service keepalived stop

# uname -a

---------------



---------------

(haproxy2)

# ip add



The VIP has now moved to haproxy2. Access the VIP again in the browser:

http://192.168.1.120:81

The page is returned as before (screenshot omitted), now served through haproxy2.


OK