Nginx, Part 2: Load Balancing and High Availability
I. Introduction to Nginx Load Balancing and High Availability
Load balancing addresses the problem of a single node coming under too much pressure: Web responses slow down and, in severe cases, the service crashes and stops serving altogether. A load balancer distributes all incoming HTTP requests evenly across a group of machines, making full use of every machine's capacity and improving service quality and user experience. Common load-balancing software includes Nginx, HAProxy, LVS, and Apache.
Nginx implements load balancing through the upstream module. Four strategies are common:
Round robin (the default): distribute requests evenly, each server taking its turn
Least connections: send each request to the server with the fewest active connections
IP hash: pin a client to one server. On the first request, a hash is computed from the client's IP and the request goes to one server in the pool; every later request from that client produces the same hash and is handled by the same server.
URL hash: distribute by the hash of the URL so that each URL always lands on the same backend, which is effective when the backends cache content. URL hashing was originally a third-party module; since nginx 1.7.2 it has been built in (the hash directive).
Official documentation: http://nginx.org/en/docs/http/load_balancing.html
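The four strategies above map onto the following upstream directives — a sketch only, with the backend addresses of this article's lab filled in as placeholders:

```nginx
# Round robin: the default, no extra directive needed
upstream rr_pool {
    server 192.168.1.102;
    server 192.168.1.200;
}

# Least connections
upstream lc_pool {
    least_conn;
    server 192.168.1.102;
    server 192.168.1.200;
}

# IP hash: requests from one client IP stick to one backend
upstream iphash_pool {
    ip_hash;
    server 192.168.1.102;
    server 192.168.1.200;
}

# URI hash (the hash directive, built in since 1.7.2)
upstream urihash_pool {
    hash $request_uri consistent;
    server 192.168.1.102;
    server 192.168.1.200;
}
```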
For high availability, Nginx is typically paired with keepalived. Two VIP addresses sit on two front-end machines that back each other up: both machines serve traffic at the same time, and when one fails, the surviving machine takes over its share of the requests.
II. Nginx Load Balancing in Practice
Environment:
Three servers: one Nginx reverse-proxy/load-balancer, IP 192.168.1.100/24 (node1.whc.com); two web servers (httpd), 192.168.1.102/24 (node3.whc.com) and 192.168.1.200/24 (node4.whc.com)
OS: CentOS 6.7
Add the following entries to /etc/hosts on every server:
192.168.1.100 node1.whc.com
192.168.1.102 node3.whc.com
192.168.1.200 node4.whc.com
For convenience during testing: disable the firewall, configure the EPEL repo, and sync the clock with ntpdate -u 202.120.2.101
1. Install Nginx on node1 (see the previous article for compiling from source; here we add the nginx repo and install with yum)
2. Create the nginx repo file (this installs Nginx 1.10.1)
[root@node1 ~]# cat /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
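gpgcheck=0 skips package signature verification. If you would rather verify signatures, nginx.org publishes a signing key; a variant of the repo file (assuming the host can reach nginx.org over HTTP):

```ini
[nginx]
name=nginx
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=1
gpgkey=http://nginx.org/keys/nginx_signing.key
```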
3. Install nginx and back up default.conf
[root@node1 ~]# yum install nginx -y
[root@node1 ~]# cd /etc/nginx/conf.d/
[root@node1 conf.d]# cp default.conf{,.bak}
4. Start the service and verify it is working
[root@node1 nginx]# service nginx start
Starting nginx: [ OK ]
[root@node1 nginx]# ss -tlnp |grep nginx #check port 80
LISTEN 0 128 *:80 *:* users:(("nginx",24329,6),("nginx",24330,6))
[root@node1 nginx]# curl http://192.168.1.100 #test access
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
5. On node1, edit /etc/nginx/nginx.conf and use upstream in the http block to define a cluster named webserver
upstream webserver {
    server 192.168.1.102; # port 80 by default
    server 192.168.1.200;
}
------ nginx.conf is as follows: ------
user nginx; # worker user
worker_processes 2; # number of worker processes
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 5000; # max connections per worker
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    upstream webserver {
        server 192.168.1.102;
        server 192.168.1.200;
    }
    include /etc/nginx/conf.d/*.conf;
}
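Beyond plain server lines, upstream accepts per-server tuning parameters. A hedged sketch (the weights and the backup server are illustrative values, not part of this article's setup):

```nginx
upstream webserver {
    server 192.168.1.102 weight=2 max_fails=3 fail_timeout=30s; # receives 2x the share of requests
    server 192.168.1.200 weight=1;
    server 192.168.1.101:8080 backup; # hypothetical spare, used only when the others are down
}
```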
In /etc/nginx/conf.d/default.conf, add proxy_pass inside location / so that all requests are proxied to the webserver upstream defined above
location / {
    proxy_pass http://webserver;
    root html;
    index index.html index.htm;
}
----------- default.conf is as follows: -----------
server {
    listen 80;
    server_name localhost;
    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;
    #location / {
    #    root /usr/share/nginx/html;
    #    index index.html index.htm;
    #}
    location / {
        proxy_pass http://webserver;
        root html;
        index index.html index.htm;
    }
    #error_page 404 /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
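As configured, the backends' access logs will record the proxy's address (192.168.1.100) as the client. If the real client IP matters on the backends, the location block can pass it along in headers; a common sketch (not part of the original config):

```nginx
location / {
    proxy_pass http://webserver;
    proxy_set_header Host $host;                                    # original Host header
    proxy_set_header X-Real-IP $remote_addr;                        # real client address
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    # append to any existing chain
}
```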
[root@node1 conf.d]# service nginx configtest #test the configuration file
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [warn] 5000 worker_connections exceed open file resource limit: 1024
nginx: configuration file /etc/nginx/nginx.conf test is successful
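The warning above means worker_connections (5000) exceeds the default open-file limit (1024), so nginx could run out of file descriptors under load. One way to address it is to raise the limit from within nginx.conf (the value below is an assumption; size it to your traffic):

```nginx
# main (top-level) context of /etc/nginx/nginx.conf
worker_rlimit_nofile 10000; # per-worker open-file limit, should be >= worker_connections
```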
6. Install Apache on node3 and node4 and create /var/www/html/index.html containing <h1>web node3</h1> and <h1>web node4</h1> respectively
[root@node3 ~]# yum -y install httpd
[root@node4 ~]# yum -y install httpd
[root@node3 ~]# cd /var/www/html/
[root@node3 html]# cat index.html
<h1>web node3</h1>
[root@node4 ~]# cd /var/www/html/
[root@node4 html]# cat index.html
<h1>web node4</h1>
#service httpd start #start httpd on both nodes
7. Reload the nginx configuration on node1 and test access via http://192.168.1.100/ or http://node1.whc.com
[root@node1 conf.d]# service nginx reload
Reloading nginx: [ OK ]
[root@node1 conf.d]# curl node1.whc.com #test access
<h1>web node4</h1>
[root@node1 conf.d]# curl node1.whc.com #test again
<h1>web node3</h1>
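The alternating responses show round robin at work: requests rotate through the upstream list in order. A tiny POSIX-shell sketch of that rotation (purely illustrative, no network traffic; `pick` is a hypothetical helper, not anything nginx provides):

```shell
#!/bin/sh
# pick N: which backend plain round robin would choose for request number N (0-based)
pick() {
    n=$1
    set -- 192.168.1.102 192.168.1.200   # the servers from the webserver upstream
    shift $(( n % $# ))                  # rotate through the list by request number
    echo "$1"
}

for req in 0 1 2 3; do
    echo "request $req -> $(pick "$req")"
done
```

Requests 0 and 2 map to 192.168.1.102, requests 1 and 3 to 192.168.1.200, mirroring the curl output above.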
III. Nginx + keepalived High Availability in Practice
Environment:
Four servers: two Nginx reverse-proxy/load-balancers — 192.168.1.100/24 (node1.whc.com) with VIP 192.168.1.10/24, and 192.168.1.101/24 (node2.whc.com) with VIP 192.168.1.11/24; two web servers (httpd), 192.168.1.102/24 (node3.whc.com) and 192.168.1.200/24 (node4.whc.com)
keepalived: VIP 192.168.1.10/24 and VIP 192.168.1.11/24; each node is master for one VIP and backup for the other
OS: CentOS 6.7
Add the following entries to /etc/hosts on every server:
192.168.1.100 node1.whc.com
192.168.1.101 node2.whc.com
192.168.1.102 node3.whc.com
192.168.1.200 node4.whc.com
For convenience during testing: disable the firewall, configure the EPEL repo, and sync the clock with ntpdate -u 202.120.2.101
1. Install and configure node2 the same way as node1 above (the config files can simply be copied over), then start nginx
[root@node1 conf.d]# scp /etc/yum.repos.d/nginx.repo root@node2.whc.com:/etc/yum.repos.d/
The authenticity of host 'node2.whc.com (192.168.1.101)' can't be established.
RSA key fingerprint is b5:f5:49:36:58:c2:01:31:44:d1:fc:15:af:0b:8f:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2.whc.com,192.168.1.101' (RSA) to the list of known hosts.
root@node2.whc.com's password:
Permission denied, please try again.
root@node2.whc.com's password:
nginx.repo 100% 94 0.1KB/s 00:00
[root@node1 conf.d]# scp /etc/nginx/nginx.conf root@node2.whc.com:/etc/nginx/nginx.conf
root@node2.whc.com's password:
nginx.conf 100% 729 0.7KB/s 00:00
[root@node1 conf.d]# scp /etc/nginx/conf.d/default.conf root@node2.whc.com:/etc/nginx/conf.d/default.conf
root@node2.whc.com's password:
default.conf 100% 1205 1.2KB/s 00:00
[root@node1 conf.d]# service nginx start #start the service
[root@node1 conf.d]# ss -tlnp |grep '80' #check the port
LISTEN 0 128 *:80 *:* users:(("nginx",24329,6),("nginx",24589,6),("nginx",24590,6))
[root@node2 conf.d]# curl http://node2.whc.com #test access
<h1>web node3</h1>
[root@node2 conf.d]# curl http://node2.whc.com #test again
<h1>web node4</h1>
2. Install keepalived on node1 and node2, and back up keepalived.conf
#yum install -y keepalived
#cp /etc/keepalived/keepalived.conf{,.bak}
[root@node1 conf.d]# keepalived -v
Keepalived v1.2.13 (03/19,2015)
3. On node1, edit keepalived.conf as follows and start the service
[root@node1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        monitor@whc.cn
    }
    notification_email_from 10001000@qq.com
    smtp_server smtp.qq.com
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 100
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.10
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 200
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.11
    }
}
[root@node1 conf.d]# service keepalived start
Starting keepalived: [ OK ]
4. On node2, edit keepalived.conf as follows (the mirror image of node1: MASTER for VI_1, BACKUP for VI_2) and start the service
[root@node2 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        monitor@whc.cn
    }
    notification_email_from 10001000@qq.com
    smtp_server smtp.qq.com
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.10
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 200
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.1.11
    }
}
[root@node2 conf.d]# service keepalived start
Starting keepalived: [ OK ]
5. On node1 and node2, add the nginx process-check script /etc/keepalived/chk_nginx.sh and make it executable. Note that with weight 2, a successful check only raises the BACKUP instance's priority from 50 to 52, which never outbids the MASTER's 100; failover here relies on the script stopping keepalived when nginx cannot be restarted.
#!/bin/sh
# If nginx is down, try to restart it; if the restart also fails,
# stop keepalived so the VIP fails over to the other node.
status=$(ps -C nginx --no-heading | wc -l)
if [ "${status}" = "0" ]; then
    /usr/sbin/nginx
    status2=$(ps -C nginx --no-heading | wc -l)
    if [ "${status2}" = "0" ]; then
        service keepalived stop
    fi
fi
#chmod +x /etc/keepalived/chk_nginx.sh #make the script executable
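The script only checks that an nginx process exists; a process can be alive yet unable to serve. A stricter alternative (an assumption, not part of the original setup) is to let keepalived probe HTTP directly:

```nginx
# /etc/keepalived/keepalived.conf -- alternative HTTP-level health check
vrrp_script chk_nginx_http {
    script "curl -fs -o /dev/null http://127.0.0.1/"  # non-zero exit on HTTP error or no response
    interval 2
    weight 2
}
```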
6. Verification
1) With keepalived and nginx both healthy, access http://192.168.1.10/
[root@node1 conf.d]# curl http://192.168.1.10
<h1>web node4</h1>
[root@node1 conf.d]# curl http://192.168.1.10
<h1>web node3</h1>
2) Simulate a node1 failure by stopping the nginx process; is http://192.168.1.10 still reachable from clients?
[root@node1 conf.d]# service nginx stop
Stopping nginx: [ OK ]
[root@node1 conf.d]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:cb:1b:e1 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.11/32 scope global eth0
inet6 fe80::20c:29ff:fecb:1be1/64 scope link
valid_lft forever preferred_lft forever
[root@node1 conf.d]# tail /var/log/messages
Sep 10 01:00:43 node1 Keepalived_vrrp[28998]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Sep 10 01:00:43 node1 Keepalived_vrrp[28998]: VRRP_Script(chk_nginx) succeeded
Sep 10 01:00:44 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) Transition to MASTER STATE
Sep 10 01:00:44 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) Received lower prio advert, forcing new election
Sep 10 01:00:45 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) Entering MASTER STATE
Sep 10 01:00:45 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) setting protocol VIPs.
Sep 10 01:00:45 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.1.11
Sep 10 01:00:45 node1 Keepalived_healthcheckers[28997]: Netlink reflector reports IP 192.168.1.11 added
Sep 10 01:00:46 node1 ntpd[1845]: Listen normally on 11 eth0 192.168.1.11 UDP 123
Sep 10 01:00:50 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.1.11
Note: the log shows the stopped nginx was immediately restarted (by chk_nginx.sh), and http://192.168.1.10 continues to serve normally
[root@node1 conf.d]# netstat -tnlp|grep 80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 29353/nginx
3) Simulate a node1 failure by stopping the keepalived process; is http://192.168.1.10 still reachable from clients?
[root@node1 conf.d]# service keepalived stop
Stopping keepalived: [ OK ]
[root@node1 conf.d]# tail /var/log/messages
Sep 10 01:10:16 node1 Keepalived[28995]: Stopping Keepalived v1.2.13 (03/19,2015)
Sep 10 01:10:16 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) sending 0 priority
Sep 10 01:10:16 node1 Keepalived_vrrp[28998]: VRRP_Instance(VI_2) removing protocol VIPs.
Sep 10 01:10:16 node1 Keepalived_healthcheckers[28997]: Netlink reflector reports IP 192.168.1.11 removed
Sep 10 01:10:18 node1 ntpd[1845]: Deleting interface #11 eth0, 192.168.1.11#123, interface stats: received=0, sent=0, dropped=0, active_time=572 secs
Note: the log shows node1's VIP being removed. Checking node2's log, node2 takes over the VIP, and http://192.168.1.10 continues to serve normally:
Sep 10 01:10:24 node2 Keepalived_vrrp[5992]: VRRP_Instance(VI_2) Transition to MASTER STATE
Sep 10 01:10:25 node2 Keepalived_vrrp[5992]: VRRP_Instance(VI_2) Entering MASTER STATE
Sep 10 01:10:25 node2 Keepalived_vrrp[5992]: VRRP_Instance(VI_2) setting protocol VIPs.
Sep 10 01:10:25 node2 Keepalived_vrrp[5992]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.1.11
Sep 10 01:10:25 node2 Keepalived_healthcheckers[5991]: Netlink reflector reports IP 192.168.1.11 added
Sep 10 01:10:27 node2 ntpd[1864]: Listen normally on 10 eth0 192.168.1.11 UDP 123
Sep 10 01:10:30 node2 Keepalived_vrrp[5992]: VRRP_Instance(VI_2) Sending gratuitous ARPs on eth0 for 192.168.1.11
Reposted from: https://blog.51cto.com/daisywei/1851273