Grey Wolf Optimizer: Single- and Multi-Objective Algorithm Explained
- Introduction
- Metaheuristic Algorithms
- Concept
- Advantages
- Characteristics
- Classification
- 1. The Idea Behind the Grey Wolf Optimizer
- References
- Source Code Download
- Grey Wolf Social Hierarchy and Hunting Process
- 2. Single-Objective GWO
- Encircling Prey
- Chasing Prey
- Searching for Prey (exploration) and Attacking Prey (exploitation)
- GWO Code Implementation
- 3. Multi-Objective GWO
- Differences between Priori and Posteriori Methods
- Differences between GWO and MOGWO
- MOGWO Mind Map
- Side Notes: Key Points of the MOGWO Implementation
Introduction
Before diving into the Grey Wolf Optimizer, let us first give an overall description of metaheuristic algorithms, following the summary by S. Mirjalili.
Metaheuristic Algorithms
Concept
Baidu Baike defines metaheuristics as follows: a metaheuristic algorithm is defined in contrast to exact optimization algorithms. An optimization algorithm finds the optimal solution of a problem, whereas a metaheuristic, constructed from intuition or experience, yields a feasible solution at an acceptable cost (in computation time and space); how far that feasible solution deviates from the optimum cannot necessarily be predicted in advance.
Advantages
So why have metaheuristics been so widely adopted? S. Mirjalili answers this question in his paper: simplicity, flexibility, a derivation-free mechanism, and local optima avoidance. (For his explanations I quote the original text directly; it is quite accessible, so I will not summarize it further.)
1. Simplicity
2. Flexibility
3. Derivation-free mechanism
4. Local optima avoidance
Characteristics
Classification
Metaheuristics fall into three main classes: evolutionary algorithms (EA), physics-based algorithms, and swarm intelligence (SI) algorithms (for detailed descriptions and the distinctions among the three, see: Seyedali Mirjalili, Seyed Mohammad Mirjalili, Andrew Lewis. Grey Wolf Optimizer [J]. Advances in Engineering Software, 2014, 69: 46-61). The algorithm covered in this post, the Grey Wolf Optimizer, is an SI algorithm. Before the explanation, we also list the advantages of SI algorithms over the other classes, again quoting the original paper.
Grey Wolf Social Hierarchy and Hunting Process
Grey wolves in the wild exhibit two important features: a social hierarchy and group hunting. The social hierarchy has four levels, α, β, δ, and ω, in decreasing order of rank; each level has its own duties, and none can be dispensed with. Group hunting consists of three main phases: tracking and approaching the prey, pursuing and encircling the prey, and attacking the prey, as illustrated below.
Social hierarchy:
Hunting process:
2. Single-Objective GWO
Encircling Prey
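The equation images for this section were lost in extraction; for reference, the encircling-prey model from the GWO paper (the numbering matches the comments in the code below) is:

```latex
\vec{D}      = \lvert \vec{C}\cdot\vec{X}_p(t) - \vec{X}(t) \rvert   % (3.1)
\vec{X}(t+1) = \vec{X}_p(t) - \vec{A}\cdot\vec{D}                    % (3.2)
\vec{A}      = 2\vec{a}\cdot\vec{r}_1 - \vec{a}                      % (3.3)
\vec{C}      = 2\cdot\vec{r}_2                                       % (3.4)
```

where X_p is the prey's position, X is a wolf's position, the components of a decrease linearly from 2 to 0 over the iterations, and r1, r2 are random vectors in [0, 1].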
Chasing Prey
Note: in reality, chasing the prey happens before encircling it.
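Again reconstructing the lost equation images: hunting is modeled by updating each wolf's position toward the α, β, and δ wolves, per Equations (3.5)-(3.7) of the paper (the same numbering referenced in the code comments below):

```latex
\vec{D}_\alpha = \lvert \vec{C}_1\cdot\vec{X}_\alpha - \vec{X} \rvert,\quad
\vec{D}_\beta  = \lvert \vec{C}_2\cdot\vec{X}_\beta  - \vec{X} \rvert,\quad
\vec{D}_\delta = \lvert \vec{C}_3\cdot\vec{X}_\delta - \vec{X} \rvert   % (3.5)

\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1\cdot\vec{D}_\alpha,\quad
\vec{X}_2 = \vec{X}_\beta  - \vec{A}_2\cdot\vec{D}_\beta,\quad
\vec{X}_3 = \vec{X}_\delta - \vec{A}_3\cdot\vec{D}_\delta               % (3.6)

\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}              % (3.7)
```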
Searching for Prey (exploration) and Attacking Prey (exploitation)
How to understand this: when |A| < 1 (Fig. a), the pack closes in on the prey for the kill, as described below:
When |A| > 1 (Fig. b), the pack moves away from the prey. This process can be understood through the figure below: |A| < 1 corresponds to positions inside the circle, while |A| > 1 corresponds to positions outside it.
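To make this concrete, here is a small Python sketch (not from the original post) that estimates how often |A| > 1 as a decays. Since A = 2·a·r1 - a = a·(2·r1 - 1), we have |A| <= a, so once a drops below 1 exploration becomes impossible and only exploitation remains:

```python
import random

random.seed(0)

def explore_fraction(a, n=10_000):
    """Estimate the fraction of draws with |A| > 1, where A = 2*a*r1 - a (Eq. 3.3)."""
    return sum(abs(2 * a * random.random() - a) > 1 for _ in range(n)) / n

early = explore_fraction(2.0)  # start of the run: about half the draws trigger exploration
late = explore_fraction(0.5)   # a < 1 implies |A| <= a < 1: exploration is impossible
print(early, late)
```

This is why a's linear decay from 2 to 0 makes the algorithm transition smoothly from global search to local refinement.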
GWO Code Implementation
main function
%___________________________________________________________________%
% Grey Wolf Optimizer (GWO) source codes version 1.0 %
% %
% Developed in MATLAB R2011b(7.13) %
% %
% Author and programmer: Seyedali Mirjalili %
% %
% e-Mail: ali.mirjalili@gmail.com %
% seyedali.mirjalili@griffithuni.edu.au %
% %
% Homepage: http://www.alimirjalili.com %
% %
% Main paper: S. Mirjalili, S. M. Mirjalili, A. Lewis %
% Grey Wolf Optimizer, Advances in Engineering %
% Software , in press, %
% DOI: 10.1016/j.advengsoft.2013.12.007 %
% %
%___________________________________________________________________%
% You can simply define your cost in a separate file and load its handle to fobj
% The initial parameters that you need are:
%__________________________________________
% fobj = @YourCostFunction
% dim = number of your variables
% Max_iteration = maximum number of generations
% SearchAgents_no = number of search agents
% lb=[lb1,lb2,...,lbn] where lbn is the lower bound of variable n
% ub=[ub1,ub2,...,ubn] where ubn is the upper bound of variable n
% If all the variables have equal lower bound you can just
% define lb and ub as two single numbers
% To run GWO: [Best_score,Best_pos,GWO_cg_curve]=GWO(SearchAgents_no,Max_iteration,lb,ub,dim,fobj)
%__________________________________________
clear all
clc
SearchAgents_no=30; % Number of search agents
Function_name='F1'; % Name of the test function that can be from F1 to F23 (Table 1,2,3 in the paper)
Max_iteration=500; % Maximum number of iterations
% Load details of the selected benchmark function
[lb,ub,dim,fobj]=Get_Functions_details(Function_name);
[Best_score,Best_pos,GWO_cg_curve]=GWO(SearchAgents_no,Max_iteration,lb,ub,dim,fobj);
PSO_cg_curve=PSO(SearchAgents_no,Max_iteration,lb,ub,dim,fobj); % run PSO for comparison
figure('Position',[500 250 660 290]) % location and size of the figure window: [left bottom width height]
%Draw search space
subplot(1,2,1); % 1-by-2 subplot grid, first panel
func_plot(Function_name);
title('Parameter space')
xlabel('x_1');
ylabel('x_2');
zlabel([Function_name,'( x_1 , x_2 )'])
%Draw objective space
subplot(1,2,2);
semilogy(GWO_cg_curve,'Color','r') % plot the best score on a log10 y-axis against iteration
hold on %hold on retains plots in the current axes so that new plots added to the axes do not delete existing plots
semilogy(PSO_cg_curve,'Color','b')
title('Objective space')
xlabel('Iteration');
ylabel('Best score obtained so far');
axis tight
grid on
box on
legend('GWO','PSO')
display(['The best solution obtained by GWO is : ', num2str(Best_pos)]);
display(['The best optimal value of the objective function found by GWO is : ', num2str(Best_score)]);
GWO.m file
% Grey Wolf Optimizer
function [Alpha_score,Alpha_pos,Convergence_curve]=GWO(SearchAgents_no,Max_iter,lb,ub,dim,fobj)
% initialize alpha, beta, and delta_pos
Alpha_pos=zeros(1,dim);
Alpha_score=inf; %change this to -inf for maximization problems (here we minimize)
Beta_pos=zeros(1,dim);
Beta_score=inf; %change this to -inf for maximization problems
Delta_pos=zeros(1,dim);
Delta_score=inf; %change this to -inf for maximization problems
%Initialize the positions of search agents: give every agent a value in each dimension (decision variable)
Positions=initialization(SearchAgents_no,dim,ub,lb);
% Convergence curve
Convergence_curve=zeros(1,Max_iter);
l=0;% Loop counter
% Main loop
while l<Max_iter
for i=1:size(Positions,1) % the first dimension of Positions is the number of agents (SearchAgents_no)
% Return the search agents that go beyond the boundaries of the
% search space
Flag4ub=Positions(i,:)>ub;
Flag4lb=Positions(i,:)<lb;
Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
% Calculate objective function for each search agent
fitness=fobj(Positions(i,:));
% Update Alpha, Beta, and Delta
if fitness<Alpha_score
Alpha_score=fitness; % Update alpha
Alpha_pos=Positions(i,:);
end
if fitness>Alpha_score && fitness<Beta_score
Beta_score=fitness; % Update beta
Beta_pos=Positions(i,:);
end
if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score
Delta_score=fitness; % Update delta
Delta_pos=Positions(i,:);
end
end
a=2-l*((2)/Max_iter); % a decreases linearly from 2 to 0
% Update the Position of search agents including omegas
for i=1:size(Positions,1) % loop over agents
for j=1:size(Positions,2) % loop over decision variables
r1=rand(); % r1 is a random number in [0,1]
r2=rand(); % r2 is a random number in [0,1]
A1=2*a*r1-a; % Equation (3.3)
C1=2*r2; % Equation (3.4)
D_alpha=abs(C1*Alpha_pos(j)-Positions(i,j)); % Equation (3.5)-part 1
X1=Alpha_pos(j)-A1*D_alpha; % Equation (3.6)-part 1
r1=rand();
r2=rand();
A2=2*a*r1-a; % Equation (3.3)
C2=2*r2; % Equation (3.4)
D_beta=abs(C2*Beta_pos(j)-Positions(i,j)); % Equation (3.5)-part 2
X2=Beta_pos(j)-A2*D_beta; % Equation (3.6)-part 2
r1=rand();
r2=rand();
A3=2*a*r1-a; % Equation (3.3)
C3=2*r2; % Equation (3.4)
D_delta=abs(C3*Delta_pos(j)-Positions(i,j)); % Equation (3.5)-part 3
X3=Delta_pos(j)-A3*D_delta; % Equation (3.6)-part 3
Positions(i,j)=(X1+X2+X3)/3;% Equation (3.7)
end
end
l=l+1;
Convergence_curve(l)=Alpha_score;
end
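For readers who prefer Python, the main loop of GWO.m can be sketched as follows. This is a minimal illustrative re-implementation (scalar bounds only), not the author's code; the sphere objective and parameter values are arbitrary choices for the demo:

```python
import random

def gwo(fobj, dim, lb, ub, n_agents=30, max_iter=200, seed=1):
    """Minimal single-objective GWO sketch, mirroring GWO.m above."""
    rng = random.Random(seed)
    # initialize agents uniformly in [lb, ub]
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    alpha = beta = delta = (float("inf"), [0.0] * dim)  # (score, position)
    for l in range(max_iter):
        for i in range(n_agents):
            X[i] = [min(max(x, lb), ub) for x in X[i]]   # clamp to the search space
            f = fobj(X[i])
            # update the alpha, beta, delta leaders, as in GWO.m
            if f < alpha[0]:
                alpha = (f, X[i][:])
            if alpha[0] < f < beta[0]:
                beta = (f, X[i][:])
            if beta[0] < f < delta[0]:
                delta = (f, X[i][:])
        a = 2 - l * (2 / max_iter)                       # a decreases linearly from 2 to 0
        for i in range(n_agents):
            for j in range(dim):
                Xs = []
                for _, leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a         # Eq. (3.3)
                    C = 2 * rng.random()                 # Eq. (3.4)
                    D = abs(C * leader[j] - X[i][j])     # Eq. (3.5)
                    Xs.append(leader[j] - A * D)         # Eq. (3.6)
                X[i][j] = sum(Xs) / 3                    # Eq. (3.7)
    return alpha

# demo on the 5-dimensional sphere function (analogous to benchmark F1)
best_score, best_pos = gwo(lambda x: sum(v * v for v in x), dim=5, lb=-10, ub=10)
```

Note that every wolf moves toward the average of three leader-guided positions, so as a (and hence |A|) shrinks, the whole pack contracts onto the α wolf.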
initialization function
% This function initialize the first population of search agents
function Positions=initialization(SearchAgents_no,dim,ub,lb)
Boundary_no= size(ub,2); % number of bound entries (the length of the second dimension of ub)
% If the boundaries of all variables are equal and the user entered a single
% number for both ub and lb
if Boundary_no==1
Positions=rand(SearchAgents_no,dim).*(ub-lb)+lb;
end
% If each variable has a different lb and ub (per-dimension bounds)
if Boundary_no>1
for i=1:dim
ub_i=ub(i);
lb_i=lb(i);
Positions(:,i)=rand(SearchAgents_no,1).*(ub_i-lb_i)+lb_i;
end
end
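The same two-case logic (shared scalar bounds versus one bound pair per variable) can be sketched in Python; this is an illustrative translation, not the original file:

```python
import random

def initialization(n_agents, dim, ub, lb, seed=0):
    """Python sketch of initialization.m: scalar bounds, or one (lb, ub) pair per variable."""
    rng = random.Random(seed)
    if isinstance(ub, (int, float)):  # a single bound shared by every variable
        return [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    # per-variable bounds ub[i], lb[i]
    return [[rng.uniform(lb[i], ub[i]) for i in range(dim)] for _ in range(n_agents)]

# 5 agents, 3 variables, each variable with its own range
pop = initialization(5, 3, ub=[1, 10, 100], lb=[0, 5, 50])
```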
3. Multi-Objective GWO
Differences between Priori and Posteriori Methods
Compared with an a priori aggregation of objectives into a single-objective method, this kind of metaheuristic algorithm avoids two drawbacks of the a priori approach:
Differences between GWO and MOGWO
MOGWO Mind Map
The multi-objective version involves many function files, so I will not paste them all here. Instead, I built a mind map of the algorithm's functions and implementation, which explains many of the details behind the MOGWO implementation.
Side Notes: Key Points of the MOGWO Implementation
A few points of explanation:
- Hypercubes: the hypercubes are constructed over the objective space. Each objective function can be viewed as one dimension, and an individual's position along each dimension is obtained from its cost.
- Selecting leaders and deleting excess non-dominated solutions: both use roulette-wheel selection. For leader selection, segments with more members get a lower probability; for deletion it is the reverse.
- In the dominance loop, as soon as the current individual is found to be dominated by a later one, the inner loop breaks; its dominance relations with the remaining individuals are not checked further.
The code:
if ~pop(j).Dominated
    if Dominates(pop(i),pop(j))
        pop(j).Dominated=true;
    elseif Dominates(pop(j),pop(i))
        pop(i).Dominated=true;
        break;
    end
end
- The hardest part of this algorithm for me to understand is the Archive handling. Before getting into it, let us lay out how to think about the Archive.
(1) The Archive holds non-dominated solutions, so the Costs of its members all differ, and hence their Positions differ as well;
(2) Although the Costs differ, the members' positions in the hypercube grid may coincide, depending on how the Grid is constructed;
(3) "Least crowded" refers to the grid positions of Archive members; the roulette-wheel method exists to select members of the least crowded segments with higher probability.
With that in place, the purpose of the Archive handling becomes clearer: SelectLeader already implements the least-crowded choice, and the Archive logic mainly provides diversity. Since SelectLeader picks from the least crowded segments, each time a chosen leader is excluded, the remaining archive solutions come from progressively more crowded segments; this is what rep2 and rep3 in the code correspond to.
Note: the order of the three best wolves in the code is reversed relative to the formula; the priority order is α, β, δ. The code below still corresponds to the equations even though the wolves appear in the opposite order, and this has no effect on the algorithm's result: the algorithm does not need to single out any particular wolf, it averages the positions of the three wolves to obtain better non-dominated solutions.
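The dominance bookkeeping discussed above can be sketched in Python. This is an illustrative re-implementation over plain cost tuples, not MOGWO's actual source; `dominates` and `non_dominated` are hypothetical names:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(costs):
    """Return indices of non-dominated cost vectors (the archive candidates)."""
    dominated = [False] * len(costs)
    for i in range(len(costs)):
        for j in range(i + 1, len(costs)):
            if dominated[j]:
                continue
            if dominates(costs[i], costs[j]):
                dominated[j] = True
            elif dominates(costs[j], costs[i]):
                dominated[i] = True
                break  # i is dominated: stop checking it, as in the MATLAB loop above
    return [i for i, d in enumerate(dominated) if not d]

# two-objective example: (2, 6) is dominated by (1, 5), and (4, 4) by (2, 2)
front = non_dominated([(1, 5), (2, 2), (3, 1), (2, 6), (4, 4)])
```

The `break` mirrors the early exit in the MATLAB snippet: once an individual is known to be dominated, it cannot re-enter the archive, so further comparisons for it are wasted work.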