Nginx reverse proxy to multiple Tomcat instances with session sharing
Lab overview: when a single server can no longer keep up with the volume of requests, the usual answer is to add another server to spread the load; this also keeps the site reachable if one server goes down.
Lab environment: CentOS 6.3 64-bit, Tomcat 7, Nginx 1.2.6
1. Patch nginx
#wget https://nodeload.github.com/yaoweibin/nginx_upstream_check_module/zip/master
#wget http://nginx-upstream-jvm-route.googlecode.com/files/nginx-upstream-jvm-route-0.1.tar.gz
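Both archives need to be unpacked before patching. A minimal sketch, assuming wget saved them under the file names in the URLs above, that the extracted directory names match the paths used in the patch and configure commands below, and that the nginx 1.2.6 source tarball is already in /root:
#cd /root
#unzip master
#tar xf nginx-upstream-jvm-route-0.1.tar.gz
#tar xf nginx-1.2.6.tar.gz && cd nginx-1.2.6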
[root@test1 nginx-1.2.6]# patch -p0 < /root/nginx_upstream_jvm_route/jvm_route.patch
[root@test1 nginx-1.2.6]# patch -p1 < /root/nginx_upstream_check_module-master/check_1.2.6+.patch
#yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel
[root@test1 nginx-1.2.6]# ./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --add-module=/root/nginx_upstream_jvm_route/ --add-module=/root/nginx_upstream_check_module-master
[root@test1 nginx-1.2.6]# make && make install
Without the jvm_route patch, compilation fails with errors like the following:
[root@test1 nginx-1.2.6]# make
make -f objs/Makefile
make[1]: Entering directory `/root/nginx-1.2.6'
gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/mail \
-o objs/addon/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.o \
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c: In function 'ngx_http_upstream_jvm_route_get_socket':
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:191: error: 'ngx_http_upstream_srv_conf_t' has no member named 'reverse'
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:192: error: 'ngx_http_upstream_server_t' has no member named 'srun_id'
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:193: error: 'ngx_http_upstream_server_t' has no member named 'srun_id'
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:198: error: 'ngx_http_upstream_server_t' has no member named 'srun_id'
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:198: error: 'ngx_http_upstream_server_t' has no member named 'srun_id'
/root/nginx_upstream_jvm_route/ngx_http_upstream_jvm_route_module.c:198: error: 'ngx_http_upstream_server_t' has no member named 'srun_id'
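After patching, the build completes. A quick sanity check, assuming the --prefix used above, is to confirm that both add-on modules appear in the installed binary's configure arguments:
#/usr/local/nginx/sbin/nginx -V 2>&1 | grep -o 'add-module=[^ ]*'
The output should list /root/nginx_upstream_jvm_route/ and /root/nginx_upstream_check_module-master.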
2. Configure nginx.conf
#vim nginx.conf
#user nobody;
worker_processes 8;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
use epoll;
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
tcp_nodelay on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;
proxy_buffer_size 8k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
upstream tomcat {
ip_hash;
server 192.168.1.22:8080;
server 192.168.1.23:8080;
check interval=3000 rise=2 fall=5 timeout=1000;
}
server {
listen 80;
server_name localhost;
index index.jsp index.action;
# root /usr/local/tomcat/app1/apps/fis;
location ~ .*\.(jsp|action|js)$ {
proxy_pass http://tomcat;
proxy_redirect off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
chunked_transfer_encoding off;
}
location /status {
check_status;
access_log off;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Notes:
The configuration uses the ip_hash load-balancing method: each request is assigned to a backend according to a hash of the client IP, so all requests from the same IP land on the same Tomcat. That keeps a user's session on a single server and effectively works around the session-sharing problem for dynamic pages; other balancing methods are available as well.
check interval=3000 rise=2 fall=5 timeout=1000; is the backend health check provided by the upstream check module: probe every 3000 ms with a 1000 ms timeout, mark a server up after 2 consecutive successes and down after 5 consecutive failures.
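With the /status location defined above, the health report produced by check_status can be fetched from the proxy host itself; a quick sketch, assuming nginx is listening on port 80 locally:
#curl http://127.0.0.1/status
The page lists each upstream server with its current up/down state and its rise/fall counters.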
3. Create an nginx init script:
#vim /etc/init.d/nginx
#!/bin/sh
# nginx - this script starts and stops the nginx daemon
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx
lockfile=/var/lock/subsys/nginx
make_dirs() {
# make required directories
user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
options=`$nginx -V 2>&1 | grep 'configure arguments:'`
for opt in $options; do
if [ `echo $opt | grep '.*-temp-path'` ]; then
value=`echo $opt | cut -d "=" -f 2`
if [ ! -d "$value" ]; then
# echo "creating" $value
mkdir -p $value && chown -R $user $value
fi
fi
done
}
start() {
[ -x $nginx ] || exit 5
[ -f $NGINX_CONF_FILE ] || exit 6
make_dirs
echo -n $"Starting $prog: "
daemon $nginx -c $NGINX_CONF_FILE
retval=$?
echo
[ $retval -eq 0 ] && touch $lockfile
return $retval
}
stop() {
echo -n $"Stopping $prog: "
killproc $prog -QUIT
retval=$?
echo
[ $retval -eq 0 ] && rm -f $lockfile
return $retval
}
restart() {
configtest || return $?
stop
sleep 1
start
}
reload() {
configtest || return $?
echo -n $"Reloading $prog: "
killproc $nginx -HUP
RETVAL=$?
echo
}
force_reload() {
restart
}
configtest() {
$nginx -t -c $NGINX_CONF_FILE
}
rh_status() {
status $prog
}
rh_status_q() {
rh_status >/dev/null 2>&1
}
case "$1" in
start)
rh_status_q && exit 0
$1
;;
stop)
rh_status_q || exit 0
$1
;;
restart|configtest)
$1
;;
reload)
rh_status_q || exit 7
$1
;;
force-reload)
force_reload
;;
status)
rh_status
;;
condrestart|try-restart)
rh_status_q || exit 0
;;
*)
echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
exit 2
esac
Open the nginx default page to verify that nginx was installed successfully.
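A minimal sketch of registering the script with the CentOS 6 service tooling and checking the default page from the command line (127.0.0.1 assumes the test is run on the proxy host itself):
#chmod +x /etc/init.d/nginx
#chkconfig --add nginx
#chkconfig nginx on
#service nginx start
#curl -I http://127.0.0.1/
An HTTP/1.1 200 OK response with a Server: nginx header confirms the proxy is serving.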
4. Install and configure Tomcat A
[root@test1 ~]# rpm -ivh jdk-7u9-linux-x64.rpm
[root@test1 ~]# vim /etc/profile    # add the following export lines, and the ulimit setting on the last line
JAVA_HOME=/usr/java/jdk1.7.0_09/
CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
PATH=$PATH:$JAVA_HOME/bin
CATALINA_HOME=/usr/local/tomcat
export JAVA_HOME CLASSPATH CATALINA_HOME
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL HISTTIMEFORMAT
unset i
unset pathmunge
ulimit -SHn 65535
[root@test1 ~]# . /etc/profile
[root@test1 ~]# java -version
java version "1.7.0_09"
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
[root@test1 ~]# tar xf apache-tomcat-7.0.32.tar.gz -C /usr/local/
[root@test1 ~]# cd /usr/local/
[root@test1 local]# ln -s apache-tomcat-7.0.32 tomcat
[root@test1 local]# cd tomcat/
[root@test1 tomcat]# mkdir webapps/test/WEB-INF -pv
[root@test1 tomcat]# cd webapps/test
[root@test1 test]# vim test.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA</font></h1>
<table align="center" border="1">
<tr>
<td>Session ID</td>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
[root@test1 test]# vim WEB-INF/web.xml
Note: do not edit conf/web.xml for this; changing conf/web.xml does not make sessions replicate. Only the web.xml of the application itself works.
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
version="2.4">
<display-name>TomcatDemo</display-name>
<distributable/>    <!-- add this line -->
</web-app>
Note: the <distributable/> element marks the application as distributable, i.e. its sessions may be replicated across the cluster.
5. Configure the Tomcat cluster, so that the Tomcat instances replicate sessions to each other
[root@test1 tomcat]# vim conf/server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
bind="192.168.1.22"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.22"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Apply the same configuration to the Tomcat on 192.168.1.23: change TomcatA to TomcatB in test.jsp,
change the addresses in server.xml to 192.168.1.23, and change jvmRoute from tomcatA to tomcatB.
Start Tomcat on both nodes and run the tests below; at this point both Tomcat instances are up.
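For reference, roughly how each node can be started and checked; this assumes the layout from section 4, and port 4000 is the Tribes receiver port configured above:
#cd /usr/local/tomcat
#bin/startup.sh
#tail -n 50 logs/catalina.out
#netstat -lntp | grep -E ':8080|:4000'
The log should end with a "Server startup" line and show the other node joining the cluster; netstat confirms the HTTP connector and the replication receiver are listening.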
Open the test.jsp page: the request is served by TomcatA. Reload it in the same browser and it is still TomcatA, with an unchanged session ID.
Now stop TomcatA and verify that the session has been replicated to TomcatB:
#bin/catalina.sh stop
Keep refreshing: the page now comes from TomcatB, yet the session ID does not change, which shows that replication works.
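The same check can be scripted with curl and a cookie jar. A sketch only: NGINX_IP is a placeholder for the proxy host's address, and the URL assumes the test application created in section 4:
#NGINX_IP=192.168.1.21
#curl -s -c /tmp/jsession.txt http://$NGINX_IP/test/test.jsp | grep -iE 'tomcat[ab]|session'
#bin/catalina.sh stop
#curl -s -b /tmp/jsession.txt http://$NGINX_IP/test/test.jsp | grep -iE 'tomcat[ab]|session'
Run the stop command on whichever node answered first; the second response should name the other node while printing the same session ID.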
Check the log:
[root@test1 tomcat]# grep -i session logs/catalina.out
INFO: Manager [telcom.yeezhao.com#], requesting session state from org.apache.catalina.tribes.membership.MemberImpl[tcp://{222, 200, 185, 27}:4000,{222, 200, 185, 27},4000, alive=290104, securePort=-1, UDP Port=-1, id={124 -108 -44 67 40 110 73 -36 -82 -88 -1 88 67 -91 46 43 }, payload={}, command={}, domain={}, ]. This operation will timeout if no session state has been received within 60 seconds.
2013-8-29 14:45:25 org.apache.catalina.ha.session.DeltaManager waitForSendAllSessions
INFO: Manager [telcom.yeezhao.com#]; session state send at 8/29/13 2:45 PM received in 105 ms.
To verify in production: check which Tomcat is serving you, log in, stop that Tomcat, and refresh the page. If you are not asked to log in again, session replication is working.
Reposted from: https://blog.51cto.com/damondeng/1231090