Generally speaking, load balancing means distributing client requests across a group of real back-end servers. There is also another approach: use two servers, one as the master and the other as a hot standby. All requests go to the master, and as soon as the master goes down, traffic is switched to the standby, which improves the overall reliability of the system.
Setting up load balancing
Apache can handle both of these scenarios. Let's look at load balancing first. Start by enabling a few Apache modules:
Code:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so
mod_proxy provides the proxy server itself, mod_proxy_balancer adds load balancing, and mod_proxy_http lets the proxy speak HTTP. If you swap mod_proxy_http for another protocol module (such as mod_proxy_ftp), you may be able to load-balance other protocols as well; feel free to experiment.
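Note that on Apache 2.4 the balancer additionally needs a shared-memory provider and at least one load-balancing scheduler module. A typical module list for 2.4 might look like this (module paths assumed to follow the same layout as above):
Code:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so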
Then add the following configuration:
Code:
ProxyRequests Off
<Proxy balancer://mycluster>
BalancerMember http://node-a.myserver.com:8080
BalancerMember http://node-b.myserver.com:8080
</Proxy>
ProxyPass / balancer://mycluster
# Warning: the following block is for debugging only. Never enable it in production!!!
<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from localhost
</Location>
The ProxyRequests Off line above shows that a load balancer is really just a reverse proxy; the only difference is that it forwards to a balancer:// address instead of one specific server:
ProxyPass / balancer://mycluster - the balancer name after balancer:// can be anything you like. The <Proxy> section then defines that balancer: each BalancerMember directive adds a real server to the load-balancing group.
The <Location /balancer-manager> block is for monitoring the balancer. You can enable it while debugging (never in production!) and then visit http://localhost/balancer-manager/ to see how the balancer is doing.
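Note that Order/Deny/Allow is Apache 2.2 syntax; on Apache 2.4 it only works through the legacy mod_access_compat module, and the native way to restrict access is the Require directive. A roughly equivalent block for 2.4, still for debugging only:
Code:
<Location /balancer-manager>
SetHandler balancer-manager
Require local
</Location>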
After making these changes, restart Apache and visit the address of the server it runs on; you should see load balancing in action. Open the balancer-manager page and you will see that requests are distributed evenly.
What if you do not want an even split? Just add a loadfactor parameter to each BalancerMember; it takes a value from 1 to 100. For example, with three servers sharing the load in a 7:2:1 ratio, configure it like this:
Code:
ProxyRequests Off
<Proxy balancer://mycluster>
BalancerMember http://node-a.myserver.com:8080 loadfactor=7
BalancerMember http://node-b.myserver.com:8080 loadfactor=2
BalancerMember http://node-c.myserver.com:8080 loadfactor=1
</Proxy>
ProxyPass / balancer://mycluster
By default, the balancer tries to make the number of requests each server receives match the configured ratio. To change the algorithm, use the lbmethod parameter. For example:
Code:
ProxyRequests Off
<Proxy balancer://mycluster>
BalancerMember http://node-a.myserver.com:8080 loadfactor=7
BalancerMember http://node-b.myserver.com:8080 loadfactor=2
BalancerMember http://node-c.myserver.com:8080 loadfactor=1
</Proxy>
ProxyPass / balancer://mycluster
ProxySet balancer://mycluster lbmethod=bytraffic
The possible values of lbmethod are:
lbmethod=byrequests - balance by number of requests (the default)
lbmethod=bytraffic - balance by traffic (bytes transferred)
lbmethod=bybusyness - balance by busyness (always pick the server with the fewest active requests)
See the Apache documentation for the details of each algorithm.
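Alternatively, lbmethod can be set with ProxySet inside the <Proxy> section, which keeps everything about the balancer in one place; for example:
Code:
<Proxy balancer://mycluster>
BalancerMember http://node-a.myserver.com:8080 loadfactor=7
BalancerMember http://node-b.myserver.com:8080 loadfactor=2
BalancerMember http://node-c.myserver.com:8080 loadfactor=1
ProxySet lbmethod=bytraffic
</Proxy>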
Hot Standby
Setting up a hot standby is simple: adding status=+H marks a server as a backup:
Code:
ProxyRequests Off
<Proxy balancer://mycluster>
BalancerMember http://node-a.myserver.com:8080
BalancerMember http://node-b.myserver.com:8080 status=+H
</Proxy>
ProxyPass / balancer://mycluster
In the balancer-manager you can see that requests always go to node-a. Once node-a goes down, Apache detects the failure and redirects requests to node-b. Apache keeps retrying node-a periodically, and as soon as node-a is back, requests flow to it again.
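How soon a failed member is retried is controlled by the retry parameter of BalancerMember (a standard mod_proxy worker parameter, 60 seconds by default). A minimal sketch, with the 30-second value chosen purely as an example:
Code:
<Proxy balancer://mycluster>
BalancerMember http://node-a.myserver.com:8080 retry=30
BalancerMember http://node-b.myserver.com:8080 status=+H
</Proxy>
ProxyPass / balancer://mycluster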
--------------------------------------------------------------------------------------------------------------------
Load balancing made easy with Play!'s configuration
Set-up a front-end HTTP server
You can easily deploy your application as a stand-alone server by setting the application HTTP port to 80:
%production.http.port=80
But if you plan to host several applications in the same server or load balance several instances of your application for scalability or fault tolerance, you can use a front-end HTTP server.
Note that using a front-end HTTP server will never give you better performance than using Play server directly!
Set-up with lighttpd
This example shows you how to configure lighttpd as a front-end web server. Note that you can do the same with Apache, but if you only need virtual hosting or load balancing, lighttpd is a very good choice and much easier to configure!
The /etc/lighttpd/lighttpd.conf file should define things like this:
server.modules = (
"mod_access",
"mod_proxy",
"mod_accesslog"
)
...
$HTTP["host"] =~ "www.myapp.com" {
proxy.balance = "round-robin" proxy.server = ( "/" =>
( ( "host" => "127.0.0.1", "port" => 9000 ) ) )
}
$HTTP["host"] =~ "www.loadbalancedapp.com" {
proxy.balance = "round-robin" proxy.server = ( "/" => (
( "host" => "127.0.0.1", "port" => 9000 ),
( "host" => "127.0.0.1", "port" => 9001 ) )
)
}
Set-up with Apache
The example below shows a simple set-up with Apache httpd server running in front of a standard Play configuration.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
...
<VirtualHost *:80>
ProxyPreserveHost On
ServerName www.loadbalancedapp.com
ProxyPass / http://127.0.0.1:9000/
ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>
Apache as a front proxy to allow transparent upgrade of your application
The basic idea is to run 2 Play instances of your web application and let the front-end proxy load-balance them. In case one is not available, it will forward all the requests to the available one.
Let’s start the same Play application two times: one on port 9999 and one on port 9998.
Copy the application 2 times and edit the application.conf in the conf directory to change the port numbers.
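For example, the conf/application.conf of the two copies might differ only in the port setting (shown here without a framework-id prefix such as %production.; adjust to match your setup):
# first copy
http.port=9999
# second copy
http.port=9998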
For each web application directory:
play start mysuperwebapp
Now, let’s configure our Apache web server to have a load balancer.
In Apache, I have the following configuration:
If you need to listen on port 90 instead, add a Listen directive at the server level and open the virtual host on that port:
Listen 90
<VirtualHost localhost:90>
If you stick with the default port 80, use the configuration below:
<VirtualHost mysuperwebapp.com:80>
ServerName mysuperwebapp.com
<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from .mysuperwebapp.com
</Location>
<Proxy balancer://mycluster>
BalancerMember http://localhost:9999
BalancerMember http://localhost:9998 status=+H
</Proxy>
<Proxy *>
Order Allow,Deny
Allow from all
</Proxy>
ProxyPreserveHost On
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/
ProxyPassReverse / http://localhost:9999/
ProxyPassReverse / http://localhost:9998/
</VirtualHost>
The important part is balancer://mycluster. This declares a load balancer. The +H option means that the second Play application is on stand-by. But you can also instruct it to load-balance.
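If you would rather spread the load across both instances instead of keeping the second one idle, drop status=+H (and optionally weight the members with loadfactor, as shown in the first part of this article):
<Proxy balancer://mycluster>
BalancerMember http://localhost:9999
BalancerMember http://localhost:9998
</Proxy>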
Every time you want to upgrade mysuperwebapp, here is what you need to do:
play stop mysuperwebapp1
The load-balancer then forwards everything to mysuperwebapp2. In the meantime update mysuperwebapp1. Once you are done:
play start mysuperwebapp1
You can now safely update mysuperwebapp2.
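Putting it together, a full rolling upgrade is roughly the following sequence (directory names refer to the two copies made above):
play stop mysuperwebapp1
# deploy the new version into the mysuperwebapp1 copy, then:
play start mysuperwebapp1
play stop mysuperwebapp2
# deploy the new version into the mysuperwebapp2 copy, then:
play start mysuperwebapp2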
Apache also provides a way to view the status of your cluster. Simply point your browser to /balancer-manager to view the current status of your clusters.
Because Play is completely stateless you don’t have to manage sessions between the 2 clusters. You can actually easily scale to more than 2 Play instances.