complex_constraints_method: the strategy used to handle complex constraints. The default is 'loop', meaning that whenever a candidate solution violates the complex constraints, a new random solution is generated until the constraints are satisfied (a rough sketch of this strategy is shown below); no other strategies are supported for this optimizer yet.
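
As an illustration, the 'loop' strategy behaves roughly like the following sketch (a hypothetical helper written for this article, not sopt's actual internal code; the function and argument names are assumptions):

import numpy as np

def generate_feasible_solution(lower_bound, upper_bound, complex_constraints):
    # Sketch of the 'loop' strategy: keep resampling a random point inside
    # the simple bounds until every complex constraint g(x) <= 0 holds.
    while True:
        x = np.random.uniform(lower_bound, upper_bound)
        if all(g(x) <= 0 for g in complex_constraints):
            return x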

Besides the basic random_walk function, RandomWalk also provides a more powerful improved_random_walk function, which has a much stronger global search capability.
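
For instance, assuming rw is a configured RandomWalk instance (constructed like the example later in this section), switching between the two is just a different method call:

# rw is an already-constructed RandomWalk instance
rw.random_walk()            # the basic random walk
rw.improved_random_walk()   # the variant with stronger global search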

6. Solving objective functions with complex constraints

All the optimization methods described so far handle only simple bound constraints on the variables (of the form \(a \le x_i \le b\)). This section shows how to use them to optimize objective functions subject to complex constraints. The procedure is actually very simple. Taking GA as an example, the following code finds the optimum of the Rosenbrock function subject to three complex constraints:

from time import time
from sopt.GA.GA import GA
from sopt.util.functions import *
from sopt.util.ga_config import *
from sopt.util.constraints import *

class TestGA:
    def __init__(self):
        self.func = Rosenbrock
        self.func_type = Rosenbrock_func_type
        self.variables_num = Rosenbrock_variables_num
        self.lower_bound = Rosenbrock_lower_bound
        self.upper_bound = Rosenbrock_upper_bound
        self.cross_rate = 0.8
        self.mutation_rate = 0.1
        self.generations = 300
        self.population_size = 200
        self.binary_code_length = 20
        self.cross_rate_exp = 1
        self.mutation_rate_exp = 1
        self.code_type = code_type.real
        self.cross_code = False
        self.select_method = select_method.proportion
        self.rank_select_probs = None
        self.tournament_num = 2
        self.cross_method = cross_method.uniform
        self.arithmetic_cross_alpha = 0.1
        self.arithmetic_cross_exp = 1
        self.mutation_method = mutation_method.uniform
        self.none_uniform_mutation_rate = 1
        self.complex_constraints = [constraints1, constraints2, constraints3]
        self.complex_constraints_method = complex_constraints_method.penalty
        self.complex_constraints_C = 1e8
        self.M = 1e8
        self.GA = GA(**self.__dict__)

    def test(self):
        start_time = time()
        self.GA.run()
        print("GA costs %.4f seconds!" % (time() - start_time))
        self.GA.save_plot()
        self.GA.show_result()

if __name__ == '__main__':
    TestGA().test()
The output is as follows:
GA costs 1.9957 seconds!
-------------------- GA config is: --------------------
lower_bound:[-2.048, -2.048]
cross_code:False
complex_constraints_method:penalty
mutation_method:uniform
mutation_rate:0.1
mutation_rate_exp:1
cross_rate:0.8
upper_bound:[2.048, 2.048]
arithmetic_cross_exp:1
variables_num:2
generations:300
tournament_num:2
select_method:proportion
func_type:max
complex_constraints_C:100000000.0
cross_method:uniform
complex_constraints:[<function constraints1 at 0x...>, <function constraints2 at 0x...>, <function constraints3 at 0x...>]
func:<function Rosenbrock at 0x...>
none_uniform_mutation_rate:1
cross_rate_exp:1
code_type:real
M:100000000.0
binary_code_length:20
global_generations_step:300
population_size:200
arithmetic_cross_alpha:0.1
-------------------- GA caculation result is: --------------------
global best target generation index/total generations:226/300
global best point:[ 1.7182846 -1.74504313]
global best target:2207.2089435117955


Figure 6: GA solving the Rosenbrock function with three complex constraints

Here constraints1, constraints2, and constraints3 are three predefined constraint functions, defined respectively as \(constraints1: x_1^2 + x_2^2 - 3 \le 0\), \(constraints2: x_1 + x_2 \le 0\), and \(constraints3: -2 - x_1 - x_2 \le 0\). Each function returns the left-hand side of its inequality, so a point is feasible when every returned value is at most 0. Their definitions are:
def constraints1(x):
    x1 = x[0]
    x2 = x[1]
    return x1**2 + x2**2 - 3

def constraints2(x):
    x1 = x[0]
    x2 = x[1]
    return x1 + x2

def constraints3(x):
    x1 = x[0]
    x2 = x[1]
    return -2 - x1 - x2
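
For intuition, the 'penalty' strategy used in the GA example above can be thought of as folding constraint violations into the objective itself. The following is a minimal sketch of that idea (a hypothetical helper, not sopt's actual internal code); since the Rosenbrock example is a maximization problem, the penalty is subtracted, with C playing the role of complex_constraints_C:

def penalized_target(func, x, complex_constraints, C=1e8):
    # Each violated constraint (g(x) > 0) subtracts C * g(x) from the
    # maximized target, making infeasible points very unattractive.
    violation = sum(max(0.0, g(x)) for g in complex_constraints)
    return func(x) - C * violation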

Comparing this with the original GA example, you will notice that the only substantive difference is the added line self.complex_constraints = [constraints1,constraints2,constraints3] (together with the complex_constraints_method setting). Every other optimizer likewise defines the complex_constraints and complex_constraints_method attributes; simply pass in a list of constraint functions and a constraint-handling strategy to optimize an objective with complex constraints. For example, here is Random Walk solving the same Rosenbrock problem with the same three constraints; the code and output follow:

from time import time
from sopt.util.functions import *
from sopt.util.constraints import *
from sopt.util.random_walk_config import *
from sopt.Optimizers.RandomWalk import RandomWalk

class TestRandomWalk:
    def __init__(self):
        self.func = Rosenbrock
        self.func_type = Rosenbrock_func_type
        self.variables_num = Rosenbrock_variables_num
        self.lower_bound = Rosenbrock_lower_bound
        self.upper_bound = Rosenbrock_upper_bound
        self.generations = 100
        self.init_step = 10
        self.eps = 1e-4
        self.vectors_num = 10
        self.init_pos = None
        self.complex_constraints = [constraints1, constraints2, constraints3]
        self.complex_constraints_method = complex_constraints_method.loop
        self.RandomWalk = RandomWalk(**self.__dict__)

    def test(self):
        start_time = time()
        self.RandomWalk.random_walk()
        print("random walk costs %.4f seconds!" % (time() - start_time))
        self.RandomWalk.save_plot()
        self.RandomWalk.show_result()

if __name__ == '__main__':
    TestRandomWalk().test()
The output:
Finish 1 random walk!
Finish 2 random walk!
Finish 3 random walk!
Finish 4 random walk!
Finish 5 random walk!
Finish 6 random walk!
Finish 7 random walk!
Finish 8 random walk!
Finish 9 random walk!
Finish 10 random walk!
Finish 11 random walk!
Finish 12 random walk!
Finish 13 random walk!
Finish 14 random walk!
Finish 15 random walk!
Finish 16 random walk!
Finish 17 random walk!
random walk costs 0.1543 seconds!
-------------------- random walk config is: --------------------
eps:0.0001
func_type:max
lower_bound:[-2.048 -2.048]
upper_bound:[2.048 2.048]
init_step:10
vectors_num:10
func:<function Rosenbrock at 0x...>
variables_num:2
walk_nums:17
complex_constraints_method:loop
generations:100
generations_nums:2191
complex_constraints:[<function constraints1 at 0x...>, <function constraints2 at 0x...>, <function constraints3 at 0x...>]
-------------------- random walk caculation result is: --------------------
global best generation index/total generations:2091/2191
global best point is: [-2.41416736 0.41430367]
global best target is: 2942.6882849234585


Figure 7: Random Walk solving the Rosenbrock function with three complex constraints

Notice that Random Walk finds a better optimum here than GA, and runs considerably faster. In my experiments, across all the optimizers and regardless of whether the objective carries complex constraints, solution quality roughly ranks as: Random Walk > PSO > GA > SA. So when tackling a concrete problem, it is worth trying several of the optimizers and keeping the best result.

7. Gradient-based optimization methods

The methods described above, such as GA, PSO, and SA, are all optimization algorithms based on random search: their computation does not depend on the concrete form of the objective function and requires no gradient information. More traditional optimization algorithms are gradient-based, such as the classic gradient descent (ascent) method and its many variants. This section briefly introduces the implementations of GD, Momentum, AdaGrad, RMSProp, and Adam in sopt. For the underlying principles of these gradient-based algorithms, see my earlier post "Optimization methods commonly used in deep learning". Also note that the gradient-based algorithms below are generally intended for unconstrained optimization problems; for constrained problems, use one of the optimizers described above instead. Here is a usage example for GradientDescent:

from time import time
from sopt.util.gradients_config import *
from sopt.util.functions import *
from sopt.Optimizers.Gradients import GradientDescent

class TestGradientDescent:
    def __init__(self):
        self.func = quadratic50
        self.func_type = quadratic50_func_type
        self.variables_num = quadratic50_variables_num
        self.init_variables = None
        self.lr = 1e-3
        self.epochs = 5000
        self.GradientDescent = GradientDescent(**self.__dict__)

    def test(self):
        start_time = time()
        self.GradientDescent.run()
        print("Gradient Descent costs %.4f seconds!" % (time() - start_time))
        self.GradientDescent.save_plot()
        self.GradientDescent.show_result()

if __name__ == '__main__':
    TestGradientDescent().test()
The output is:
Gradient Descent costs 14.3231 seconds!
-------------------- Gradient Descent config is: --------------------
func_type:min
variables_num:50
func:<function quadratic50 at 0x...>
epochs:5000
lr:0.001
-------------------- Gradient Descent caculation result is: --------------------
global best epoch/total epochs:4999/5000
global best point: [ 0.9999524 1.99991045 2.99984898 3.9998496 4.99977767 5.9997246
6.99967516 7.99964102 8.99958143 9.99951782 10.99947879 11.99944665
12.99942492 13.99935192 14.99932708 15.99925856 16.99923686 17.99921689
18.99911527 19.9991255 20.99908968 21.99899699 22.99899622 23.99887832
24.99883597 25.99885616 26.99881394 27.99869772 28.99869349 29.9986766
30.99861142 31.99851987 32.998556 33.99849351 34.99845985 35.99836731
36.99832444 37.99831792 38.99821067 39.99816567 40.99814951 41.99808199
42.99808161 43.99806655 44.99801207 45.99794449 46.99788003 47.99785468
48.99780825 49.99771656]
global best target: 1.0000867498727912


Figure 8: GradientDescent solving quadratic50
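
For reference, the update rule that a vanilla gradient-descent optimizer applies is \(x \leftarrow x - lr \cdot \nabla f(x)\). The following self-contained sketch reproduces that rule with a numerical gradient; the stand-in objective mimics quadratic50 (its minimum of 1 at \(x_i = i\) is inferred from the printed result above, not taken from the library's definition), and none of this is sopt's actual implementation:

import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # central-difference approximation of the gradient of f at x
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

def gradient_descent(f, x0, lr=1e-3, epochs=5000):
    # vanilla gradient descent: x <- x - lr * grad f(x)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        x -= lr * numerical_gradient(f, x)
    return x

# stand-in objective resembling quadratic50 (an assumption, see above)
f = lambda x: np.sum((x - np.arange(1, 51)) ** 2) + 1
print(gradient_descent(f, np.zeros(50)))   # approaches [1, 2, ..., 50]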

The following briefly explains the main parameters of the GradientDescent, Momentum, and related classes. The meaning of parameters such as func and variables_num has been covered many times already and is not repeated here; only the parameters specific to each class are described.