Environment
OS: CentOS 7
MySQL (5.7) master-slave replication:

192.168.1.100 hmaster master
192.168.1.102 hslavea slave
192.168.1.103 hslaveb slave

192.168.1.104 database for orchestrator metadata + raft + orchestrator software
192.168.1.105 database for orchestrator metadata + raft + orchestrator software
192.168.1.106 database for orchestrator metadata + raft + orchestrator software

Prerequisite: the managed MySQL replication setup is already deployed (1 master, 2 slaves, GTID enabled, MASTER_AUTO_POSITION=1).
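
For reference, GTID replication with MASTER_AUTO_POSITION=1 presupposes my.cnf settings along the following lines on every node. This is an illustrative sketch, not a complete configuration; server_id must be unique per node:

```ini
# GTID-related settings assumed by this setup (sketch only)
server_id                = 100         # must differ on each node
log_bin                  = mysql-bin
gtid_mode                = ON
enforce_gtid_consistency = ON
log_slave_updates        = ON          # recommended so any slave can be promoted cleanly
```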

1. Download the installation packages
Download from https://github.com/openark/orchestrator/releases, choosing the appropriate version:
orchestrator-client-3.1.4-1.x86_64.rpm
orchestrator-3.1.4-1.x86_64.rpm

The following two dependency packages need to be downloaded separately:
oniguruma-6.8.2-2.el7.x86_64.rpm
jq-1.6-2.el7.x86_64.rpm

2. Install the packages
Install on every server that runs an orchestrator process, i.e. on 192.168.1.104, 192.168.1.105, 192.168.1.106:

rpm -ivh oniguruma-6.8.2-2.el7.x86_64.rpm
rpm -ivh jq-1.6-2.el7.x86_64.rpm
rpm -ivh orchestrator-client-3.1.4-1.x86_64.rpm
rpm -ivh orchestrator-3.1.4-1.x86_64.rpm

3. Set the hostname on each MySQL server
Execute on 192.168.1.100, 192.168.1.102, 192.168.1.103, setting each machine's hostname accordingly:
hostnamectl set-hostname hmaster ##192.168.1.100
hostnamectl set-hostname hslavea ##192.168.1.102
hostnamectl set-hostname hslaveb ##192.168.1.103

4. Edit the /etc/hosts file
Configure on 192.168.1.104, 192.168.1.105, 192.168.1.106.
On each machine running an orchestrator process, add the following entries to /etc/hosts (vi /etc/hosts):
192.168.1.100 hmaster
192.168.1.102 hslavea
192.168.1.103 hslaveb

5. Install the backend database (MySQL) used for orchestrator metadata
Execute on 192.168.1.104, 192.168.1.105, 192.168.1.106.
The MySQL installation steps are omitted here; after installation, create the metadata database and user:

[mysql@localhost bin]$ /opt/mysql57/bin/mysql -h localhost -uroot -p -S /opt/mysql57/mysql.sock
mysql> create database orchestrator;
mysql> create user 'orchestrator'@'%' identified by 'mysql';
mysql> grant all on orchestrator.* to 'orchestrator'@'%';

If SQLite3 is used as the metadata backend instead, only the following configuration is needed; no account creation or grants are required:

"BackendDB": "sqlite",
  "SQLite3DataFile": "/usr/local/orchestrator/orchestrator.sqlite3",

To inspect the data, open the data file directly with the sqlite3 client:

[root@localhost orchestrator]# sqlite3 /usr/local/orchestrator/orchestrator.sqlite3
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .table
sqlite> .exit

6. Create the monitoring user on the MySQL master
Execute on 192.168.1.100:

GRANT SELECT, RELOAD, PROCESS, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'orchestrator'@'%' identified by 'mysql';

The user is replicated automatically to the two slaves.

7. Edit the configuration file
Edit on 192.168.1.104, then copy it to the other two orchestrator nodes.
Start from the bundled sample configuration:

[root@localhost orchestrator]# cd /usr/local/orchestrator/
[root@localhost orchestrator]# cp orchestrator-sample.conf.json orchestrator.conf.json
{
  "Debug": true,
  "EnableSyslog": false,
  "ListenAddress": ":3000",
  "MySQLTopologyUser": "orchestrator",
  "MySQLTopologyPassword": "mysql",
  "MySQLTopologyCredentialsConfigFile": "",
  "MySQLTopologySSLPrivateKeyFile": "",
  "MySQLTopologySSLCertFile": "",
  "MySQLTopologySSLCAFile": "",
  "MySQLTopologySSLSkipVerify": true,
  "MySQLTopologyUseMutualTLS": false,
  "MySQLOrchestratorHost": "192.168.1.104",
  "MySQLOrchestratorPort": 3306,
  "MySQLOrchestratorDatabase": "orchestrator",
  "MySQLOrchestratorUser": "orchestrator",
  "MySQLOrchestratorPassword": "mysql",
  "MySQLOrchestratorCredentialsConfigFile": "",
  "MySQLOrchestratorSSLPrivateKeyFile": "",
  "MySQLOrchestratorSSLCertFile": "",
  "MySQLOrchestratorSSLCAFile": "",
  "MySQLOrchestratorSSLSkipVerify": true,
  "MySQLOrchestratorUseMutualTLS": false,
  "MySQLConnectTimeoutSeconds": 1,
  "RaftEnabled": true,
  "RaftDataDir": "/data/orchestrator",
  "RaftBind": "192.168.1.104",
  "DefaultRaftPort": 10008,
  "RaftNodes": [
    "192.168.1.104",
    "192.168.1.105",
    "192.168.1.106"
  ],

The sample template does not include the raft settings by default; add them yourself:

"RaftEnabled": true,
  "RaftDataDir": "/data/orchestrator",
  "RaftBind": "192.168.1.104",
  "DefaultRaftPort": 10008,
  "RaftNodes": [
    "192.168.1.104",
    "192.168.1.105",
    "192.168.1.106"
  ],

There is no need to create the /data/orchestrator directory manually; it is created on each orchestrator node when the process starts.

The following two settings must be set to "*", otherwise automatic failover will not be triggered:

"RecoverMasterClusterFilters": [
    "*"
  ],
  "RecoverIntermediateMasterClusterFilters": [
    "*"
  ],

8. Copy the configuration file to the other two nodes
scp /usr/local/orchestrator/orchestrator.conf.json root@192.168.1.105:/usr/local/orchestrator/
scp /usr/local/orchestrator/orchestrator.conf.json root@192.168.1.106:/usr/local/orchestrator/

After copying, change the following on each node:
"RaftBind": "192.168.1.104"   ##change to each node's own IP

"MySQLOrchestratorHost": "192.168.1.104", ##change to each node's own metadata database address

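The two per-node edits can also be scripted. A minimal sketch with sed, assuming the template values shown above (the helper name patch_orc_conf is made up for illustration):

```shell
# Rewrite RaftBind and MySQLOrchestratorHost (both 192.168.1.104 in the
# config copied from the first node) to this node's own address.
patch_orc_conf() {
  local conf="$1" node_ip="$2"
  sed -i "s/\"RaftBind\": \"192.168.1.104\"/\"RaftBind\": \"$node_ip\"/" "$conf"
  sed -i "s/\"MySQLOrchestratorHost\": \"192.168.1.104\"/\"MySQLOrchestratorHost\": \"$node_ip\"/" "$conf"
}

# e.g. on 192.168.1.105:
# patch_orc_conf /usr/local/orchestrator/orchestrator.conf.json 192.168.1.105
```

RaftNodes stays identical on all three nodes; only the two values above differ.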
9. Start orchestrator
Execute on 192.168.1.104, 192.168.1.105, 192.168.1.106.

Run on every node (nohup can also be used to run it in the background):
cd /usr/local/orchestrator
./orchestrator --config=./orchestrator.conf.json http
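
For example, a background start via nohup might look like this (a sketch; the log file name is arbitrary):

```shell
cd /usr/local/orchestrator
nohup ./orchestrator --config=./orchestrator.conf.json http > orchestrator.log 2>&1 &
```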

10. Web access
The web UI is reachable on any of the three orchestrator nodes:
http://192.168.1.104:3000
http://192.168.1.105:3000
http://192.168.1.106:3000

1. Master-slave switchover

Run on a machine where the orchestrator client is installed; here, 192.168.1.104.

Graceful takeover

[root@localhost /]# orchestrator-client -c graceful-master-takeover -a hmaster:3306 -d hslavea:3306
hslavea:3306

After the switchover, the old master is repointed at the new master, but its IO thread is stopped; replication must be resumed manually with START SLAVE.

The new master is hslavea, but hmaster does not automatically become its slave; point it at the new master manually and restart replication:

mysql> CHANGE MASTER TO MASTER_HOST='hslavea', 
              MASTER_PORT=3306,
              MASTER_USER='repl', 
              MASTER_PASSWORD='mysql', 
              MASTER_AUTO_POSITION=1,
              MASTER_CONNECT_RETRY=1, 
              MASTER_RETRY_COUNT=86400, 
              MASTER_HEARTBEAT_PERIOD=2;

mysql> start slave;

Alternatively, the old master may instead fail with the following replication error:

Last_IO_Error: Fatal error: Invalid (empty) username when attempting to connect to the master server.

In that case run:
mysql> stop slave;
mysql> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='mysql';
mysql> start slave;

Forced takeover with a designated candidate

[root@localhost /]# orchestrator-client -c force-master-takeover -i hslavea:3306 -d hmaster:3306
hmaster:3306

This forces the switchover whether or not the current master (hslavea) is healthy: -i specifies any instance in the cluster and -d the new master.

The new master is now hmaster, but hslavea is left standalone; it does not repoint to the new master automatically and must be reattached manually:

mysql> show slave status \G
Empty set (0.00 sec)
mysql> reset slave all;
mysql> CHANGE MASTER TO MASTER_HOST='hmaster', 
              MASTER_PORT=3306,
              MASTER_USER='repl',
              MASTER_PASSWORD='mysql',
              MASTER_AUTO_POSITION=1,
              MASTER_CONNECT_RETRY=1,
              MASTER_RETRY_COUNT=86400,
              MASTER_HEARTBEAT_PERIOD=2;

mysql> start slave;

Failover without a designated candidate

[root@localhost /]# orchestrator-client -c force-master-failover -i hmaster:3306
hslaveb:3306
This forces a failover whether or not the master is healthy. The old master is not shut down afterward, and no new master is specified; orchestrator chooses one itself. This operation is dangerous and should be used with caution.

2. View the topology
[root@hmaster orchestrator]# orchestrator-client -c topology -i hmaster:3306
hmaster:3306 [0s,ok,5.7.29-log,rw,ROW,>>,GTID]
+ hslavea:3306 [0s,ok,5.7.29-log,ro,ROW,>>,GTID]
+ hslaveb:3306 [0s,ok,5.7.29-log,rw,ROW,>>,GTID]

Notes:

1. With three raft nodes, the loss of one node does not affect operation and failover still works; if two nodes die, orchestrator becomes unavailable and cannot fail over. A switchover attempted in that state reports the following error:

[root@localhost ~]# orchestrator-client -c graceful-master-takeover -a hmaster:3306 -d hslavea:3306
Unauthorized
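
To see which node currently holds raft leadership, the HTTP API can be probed. A sketch, assuming the /api/leader-check endpoint, which is expected to return HTTP 200 only on the current leader (verify the endpoint against your orchestrator version):

```shell
# probe each orc node; only the raft leader should answer 200
for ip in 192.168.1.104 192.168.1.105 192.168.1.106; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$ip:3000/api/leader-check")
  echo "$ip -> HTTP $code"
done
```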

#################################### Promotion priority rules ###########################

1. Create the database and table on the managed MySQL master
Executed here on the master, 192.168.1.100:
create database orchestrator character set utf8mb4;
GRANT SELECT ON orchestrator.* TO orchestrator@'%';

CREATE TABLE `orchestrator`.`cluster` (
`id` tinyint unsigned NOT NULL auto_increment,
`hostname` varchar(128) NOT NULL DEFAULT '' COMMENT 'host name',
`promotion_rule` varchar(128) NOT NULL DEFAULT 'prefer' COMMENT 'promotion rule: prefer/neutral/prefer_not/must_not',
`D_UPDATED_AT` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'update time',
`D_CREATED_AT` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='orchestrator instance metadata table';

2. Define the promotion rules
The promotion_rule column accepts only the following values:
prefer/neutral/prefer_not/must_not

insert into cluster(hostname,promotion_rule) values('hmaster','prefer');
insert into cluster(hostname,promotion_rule) values('hslavea','prefer');
insert into cluster(hostname,promotion_rule) values('hslaveb','must_not'); ##excluded from promotion

3. Modify the configuration
"DetectPromotionRuleQuery": "select promotion_rule from orchestrator.cluster where hostname=@@hostname",

4. Try switching over to the instance marked must_not (hslaveb)
Graceful takeover:
[root@localhost orchestrator]# orchestrator-client -c graceful-master-takeover -a hmaster:3306 -d hslaveb:3306
GracefulMasterTakeover: designated instance hslaveb:3306 cannot be promoted due to promotion rule or it is explicitly ignored in PromotionIgnoreHostnameFilters configuration

5. An alternative configuration option
The setting above requires querying the database; if that is not desired, the same exclusion can be configured directly:
"PromotionIgnoreHostnameFilters": ["hslaveb"],

-- The End--