MMM Configuration Environment

MMM-monitor 192.168.78.128 (M1/M128)

master1:192.168.78.


1. Set up hosts resolution

All three machines must be able to resolve one another by name.

Append the following to /etc/hosts:

192.168.78.128 M1

192.168.78.129 M2

192.168.78.130 slave1
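The append can be scripted. A minimal sketch, working on a scratch copy here for illustration; on the real machines you would append to /etc/hosts itself and then verify with `getent hosts M1 M2 slave1`:

```shell
# Scratch copy for illustration; on the real machines append to /etc/hosts.
HOSTS_FILE=./hosts.demo
: > "$HOSTS_FILE"
cat >> "$HOSTS_FILE" <<'EOF'
192.168.78.128 M1
192.168.78.129 M2
192.168.78.130 slave1
EOF
cat "$HOSTS_FILE"
```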


2. Configure accounts on the master and slave servers

grant replication slave, replication client on *.* to 'repl'@'%' identified by 'repl';

grant process,super, replication client on *.* to 'mmm_agent'@'%' identified by '123456';

grant replication client on *.* to 'mmm_monitor'@'%' identified by '123456';
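These grants must exist on every MySQL server in the cluster. A sketch that writes them to a file you can then pipe into mysql on each host (the `IDENTIFIED BY` form is the MySQL 5.x syntax used above):

```shell
# Generate the grant SQL once, then run it on M1, M2 and slave1,
# e.g.:  mysql -uroot -p < mmm_grants.sql
cat > mmm_grants.sql <<'EOF'
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'%' IDENTIFIED BY 'repl';
GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'%' IDENTIFIED BY '123456';
EOF
cat mmm_grants.sql
```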


3. Install MMM

Configure the EPEL repository (for the Perl dependencies) on all three machines:

rpm -Uvh http://mirrors.ustc.edu.cn/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm

Install MMM (RPM packages are used here for testing only; do not use them in production):

yum -y install mysql-mmm*

Verify the installation:

rpm -qa|grep mysql-mmm

Installed packages: mysql-mmm, mysql-mmm-agent, mysql-mmm-monitor, mysql-mmm-tools


4. Configure MMM

cd /etc/mysql-mmm

Edit mmm_common.conf and sync it to all three machines:

vim mmm_common.conf 


active_master_role      writer


<host default>

    cluster_interface       eth0

    pid_path                /var/run/mysql-mmm/mmm_agentd.pid

    bin_path                /usr/libexec/mysql-mmm/

    replication_user        repl

    replication_password    repl

    agent_user              mmm_agent

    agent_password          123456

</host>


<host M1>

    ip      192.168.78.128

    mode    master

    peer    M2

</host>


<host M2>

    ip      192.168.78.129

    mode    master

    peer    M1

</host>


<host slave1>

    ip      192.168.78.130

    mode    slave


</host>


<role writer>

    hosts   M1,M2

    ips     192.168.78.140

    mode    exclusive

</role>


<role reader>

    hosts   M1,M2,slave1

    ips     192.168.78.138, 192.168.78.139

    mode    balanced

</role>


for i in  M2 slave1;do scp /etc/mysql-mmm/mmm_common.conf $i:/etc/mysql-mmm/;done


Edit mmm_agent.conf on each machine separately (each host sets `this` to its own name):

vim mmm_agent.conf

include mmm_common.conf

# The 'this' variable refers to this server.  Proper operation requires

# that 'this' server (db1 by default), as well as all other servers, have the

# proper IP addresses set in mmm_common.conf.

this M1
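Since mmm_agent.conf differs between hosts only in the `this` line, it can be generated. A sketch assuming the short hostname matches the MMM host name (M1, M2 or slave1), writing to a demo file here for illustration rather than the real /etc/mysql-mmm/mmm_agent.conf:

```shell
# Assumes `hostname -s` returns the MMM host name (M1, M2 or slave1).
THIS_HOST=$(hostname -s)
# Demo file for illustration; the real path is /etc/mysql-mmm/mmm_agent.conf
cat > mmm_agent.conf.demo <<EOF
include mmm_common.conf
this $THIS_HOST
EOF
cat mmm_agent.conf.demo
```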


Configure mmm_mon.conf (it is configured on all three machines, but the monitor may be started on only one):

[root@M128 mysql-mmm]# vim mmm_mon.conf 

include mmm_common.conf


<monitor>

    ip                  127.0.0.1

    pid_path            /var/run/mysql-mmm/mmm_mond.pid

    bin_path            /usr/libexec/mysql-mmm

    status_path         /var/lib/mysql-mmm/mmm_mond.status

    ping_ips            192.168.78.128, 192.168.78.129, 192.168.78.130

    auto_set_online     10


    # The kill_host_bin does not exist by default, though the monitor will

    # throw a warning about it missing.  See the section 5.10 "Kill Host

    # Functionality" in the PDF documentation.

    #

    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host

    #

</monitor>


<host default>

    monitor_user        mmm_monitor

    monitor_password    123456

</host>

debug 0


5. Start MMM

Start the agent on all three machines:

/etc/init.d/mysql-mmm-agent start

Start the monitor:

/etc/init.d/mysql-mmm-monitor start

After a normal start, the processes look like this:

Agent side:

[root@M130 mysql-mmm]# ps -ef|grep mmm

root     15686     1  0 02:44 ?        00:00:00 mmm_agentd

root     15688 15686  0 02:44 ?        00:00:44 mmm_agentd

root     54389 12494  0 18:47 pts/0    00:00:00 grep mmm

Monitor side:

[root@M129 ~]# ps -ef|grep mmm

root     27681     1  0 18:51 ?        00:00:00 mmm_mond

root     27682 27681  9 18:51 ?        00:00:01 mmm_mond

root     27712 27682  1 18:51 ?        00:00:00 perl /usr/libexec/mysql-mmm/monitor/checker ping_ip

root     27715 27682  2 18:51 ?        00:00:00 perl /usr/libexec/mysql-mmm/monitor/checker mysql

root     27717 27682  2 18:51 ?        00:00:00 perl /usr/libexec/mysql-mmm/monitor/checker ping

root     27719 27682  3 18:51 ?        00:00:00 perl /usr/libexec/mysql-mmm/monitor/checker rep_backlog

root     27721 27682  2 18:51 ?        00:00:00 perl /usr/libexec/mysql-mmm/monitor/checker rep_threads

root     27739 35397  0 18:51 pts/1    00:00:00 grep mmm

root     35239     1  0 02:44 ?        00:00:00 mmm_agentd

root     35241 35239  0 02:44 ?        00:00:50 mmm_agentd

[root@M129 ~]# 

6. Check the status

[root@M128 mysql-mmm]# mmm_control show

  M1(192.168.78.128) master/ONLINE. Roles: reader(192.168.78.138)

  M2(192.168.78.129) master/ONLINE. Roles: writer(192.168.78.140)

  slave1(192.168.78.130) slave/ONLINE. Roles: reader(192.168.78.139)

[root@M128 mysql-mmm]# mmm_control checks all

M1      ping         [last change: 2015/10/20 18:51:21]  OK

M1      mysql        [last change: 2015/10/20 18:51:21]  OK

M1      rep_threads  [last change: 2015/10/20 18:51:21]  OK

M1      rep_backlog  [last change: 2015/10/20 18:51:21]  OK: Backlog is null

slave1  ping         [last change: 2015/10/20 18:51:21]  OK

slave1  mysql        [last change: 2015/10/20 18:51:21]  OK

slave1  rep_threads  [last change: 2015/10/20 18:51:21]  OK

slave1  rep_backlog  [last change: 2015/10/20 18:51:21]  OK: Backlog is null

M2      ping         [last change: 2015/10/20 18:51:21]  OK

M2      mysql        [last change: 2015/10/20 18:51:21]  OK

M2      rep_threads  [last change: 2015/10/20 18:51:21]  OK

M2      rep_backlog  [last change: 2015/10/20 18:51:21]  OK: Backlog is null
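A quick sanity check is that every line of `mmm_control checks all` reports OK. A sketch that counts OK lines; the sample output above is embedded here for illustration, while in practice you would capture it with `mmm_control checks all > checks.txt`:

```shell
# Embedded sample of the `mmm_control checks all` output shown above.
cat > checks.txt <<'EOF'
M1      ping         [last change: 2015/10/20 18:51:21]  OK
M1      mysql        [last change: 2015/10/20 18:51:21]  OK
slave1  rep_backlog  [last change: 2015/10/20 18:51:21]  OK: Backlog is null
M2      rep_threads  [last change: 2015/10/20 18:51:21]  OK
EOF
total=$(wc -l < checks.txt)
ok=$(grep -c ']  OK' checks.txt)   # matches both "OK" and "OK: ..." endings
echo "checks OK: $ok/$total"
```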


7. Remove the RPM-installed MMM

yum remove mysql-mmm*


8. Summary:

MMM clusters have quite a few pitfalls; use this setup for testing only, never in production.