MySQL Cluster Installation

 

Software version: mysql-cluster-gpl-7.2.19-linux2.6-x86_64.tar.gz

The official MySQL download page loads JavaScript from Google; if Google is unreachable, the platform-selection drop-down does not work, and you will need a proxy server to download the package.

Installation environment:

CentOS 5.8, Linux 2.6.18-308.el5, x86_64, 4 GB RAM (MySQL Cluster is very memory-hungry)

 

10.0.12.150: management node

10.0.12.151: SQL node

10.0.12.152: SQL node

10.0.12.153: data node

10.0.12.154: data node

 

 

  • Management node installation

  • 10.0.12.150: management node


[root@localhost ~]# groupadd mysql
[root@localhost ~]# useradd -g mysql mysql
[root@localhost ~]# cd /usr/local
[root@localhost ~]# tar -xf mysql-cluster-gpl-7.2.19-linux2.6-x86_64.tar.gz
[root@localhost ~]# mv mysql-cluster-gpl-7.2.19-linux2.6-x86_64 mysql
[root@localhost ~]# chown -R mysql:mysql mysql
[root@localhost ~]# mkdir /usr/local/mysql/data
[root@localhost ~]# mkdir /usr/local/mysql/logs
[root@localhost ~]# mkdir /var/lib/mysql-cluster
[root@localhost ~]# cd mysql
[root@localhost ~]# scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
[root@localhost ~]# vi /var/lib/mysql-cluster/config.ini


Write the following content:

[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=1024MB
IndexMemory=1024MB
MaxNoOfTables=300
MaxNoOfOrderedIndexes=500
MaxNoOfUniqueHashIndexes=500
MaxNoOfAttributes=20000
[TCP DEFAULT]
portnumber=3306
[NDB_MGMD]
hostname=10.0.12.150
datadir=/usr/local/mysql/data/
[NDBD]
hostname=10.0.12.153
datadir=/usr/local/mysql/data/
[NDBD]
hostname=10.0.12.154
datadir=/usr/local/mysql/data/
[MYSQLD]
hostname=10.0.12.151
[MYSQLD]
hostname=10.0.12.152


[root@localhost ~]#
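As a hypothetical back-of-envelope reading of the [NDBD DEFAULT] values above (this is not an official sizing formula; a real ndbd process needs additional memory for buffers, operation records, and so on): each data node reserves roughly DataMemory + IndexMemory up front, and because NoOfReplicas=2 every row is stored twice across a node group, so the unique data capacity is the total DataMemory divided by the replica count.

```shell
#!/bin/sh
# Back-of-envelope view of the [NDBD DEFAULT] settings above.
# Hypothetical sketch only; real ndbd overhead is larger.
DATA_MEMORY_MB=1024     # DataMemory per data node
INDEX_MEMORY_MB=1024    # IndexMemory per data node
NO_OF_REPLICAS=2        # NoOfReplicas
DATA_NODES=2            # 10.0.12.153 and 10.0.12.154

# memory one ndbd process reserves for data + indexes
per_node=$((DATA_MEMORY_MB + INDEX_MEMORY_MB))
# cluster-wide DataMemory divided by the replica count:
# every row is stored once per replica in its node group
unique_data=$((DATA_MEMORY_MB * DATA_NODES / NO_OF_REPLICAS))

echo "each ndbd reserves about ${per_node} MB for data + indexes"
echo "unique data capacity across the cluster: about ${unique_data} MB"
```

On the 4 GB machines used here, this is why the environment notes above warn that the cluster is memory-hungry: roughly half the RAM is reserved by ndbd alone.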



 

  • Data node installation

  • 10.0.12.153: data node

  • 10.0.12.154: data node


[root@localhost ~]# groupadd mysql
[root@localhost ~]# useradd -g mysql mysql
[root@localhost ~]# cd /usr/local
[root@localhost ~]# tar -xf mysql-cluster-gpl-7.2.19-linux2.6-x86_64.tar.gz
[root@localhost ~]# mv mysql-cluster-gpl-7.2.19-linux2.6-x86_64 mysql
[root@localhost ~]# chown -R mysql:mysql mysql
[root@localhost ~]# mkdir /usr/local/mysql/data
[root@localhost ~]# mkdir /usr/local/mysql/logs
[root@localhost ~]# mkdir /var/lib/mysql-cluster
[root@localhost ~]# cd mysql
[root@localhost ~]# scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
[root@localhost ~]# cp support-files/mysql.server /etc/init.d/mysqld
[root@localhost ~]# vi /etc/my.cnf


Write the following content:

[mysqld]
basedir         = /usr/local/mysql
datadir         = /usr/local/mysql/data
user            = mysql
port            = 3306
socket          = /var/lib/mysql/mysql.sock
ndbcluster
ndb-connectstring=10.0.12.150
[mysql_cluster]
ndb-connectstring=10.0.12.150
[NDB_MGM]
connect-string=10.0.12.150


[root@localhost ~]# mkdir -p /var/lib/mysql
[root@localhost ~]# chown -R mysql:mysql /var/lib/mysql




  • SQL node installation

Same steps as the data node installation.

  1. 10.0.12.151: SQL node

  2. 10.0.12.152: SQL node

 

  • Starting the cluster

1. Start the management node

/usr/local/mysql/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial

Append --initial only for the first startup.

Shutdown: /usr/local/mysql/bin/ndb_mgm -e shutdown

2. Start the data nodes

/usr/local/mysql/bin/ndbd --initial

Append --initial on the first startup, or when starting after changing configuration parameters.

3. Start the SQL nodes

service mysqld start

Shutdown: service mysqld stop
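The start order above (management node first, then data nodes, then SQL nodes) can be sketched as a small helper script. This is a hypothetical wrapper, not part of the MySQL distribution; DRY_RUN (on by default) only prints the command for each role, and the paths assume the layout used in this document.

```shell
#!/bin/sh
# Hypothetical helper mirroring the startup order described above.
BASE=/usr/local/mysql

start_node() {
    # $1: role (mgm|data|sql); $2: non-empty for a first start (mgm/data only)
    case "$1" in
        mgm)  cmd="$BASE/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini" ;;
        data) cmd="$BASE/bin/ndbd" ;;
        sql)  cmd="service mysqld start" ;;
        *)    echo "unknown role: $1"; return 1 ;;
    esac
    [ -n "$2" ] && cmd="$cmd --initial"   # first start only (mgm/data)
    if [ "${DRY_RUN:-1}" -eq 1 ]; then echo "$cmd"; else $cmd; fi
}

start_node mgm first    # 1) management node first,
start_node data first   # 2) then each data node,
start_node sql          # 3) then each SQL node
```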

 

 

After all nodes have started, run /usr/local/mysql/bin/ndb_mgm on the management node:

ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.0.12.153  (mysql-5.5.41 ndb-7.2.19, Nodegroup: 0, *)
id=3    @10.0.12.154  (mysql-5.5.41 ndb-7.2.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.12.150  (mysql-5.5.41 ndb-7.2.19)

[mysqld(API)]   2 node(s)
id=4    @10.0.12.151  (mysql-5.5.41 ndb-7.2.19)
id=5    @10.0.12.152  (mysql-5.5.41 ndb-7.2.19)

ndb_mgm>


 

 

Test: on SQL node 10.0.12.151, log in to mysql and create a database testcluster and a test table; the new table is then visible from 10.0.12.152, which confirms that the MySQL Cluster was set up successfully.
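The test could look something like the following (database, table, and column names here are illustrative, not from the original). Note that only tables created with ENGINE=NDBCLUSTER (alias ENGINE=NDB) are stored in the data nodes and therefore visible from other SQL nodes; tables using the default engine stay local to one SQL node.

```sql
-- On 10.0.12.151:
CREATE DATABASE testcluster;
CREATE TABLE testcluster.t1 (
    id  INT PRIMARY KEY,
    msg VARCHAR(64)
) ENGINE=NDBCLUSTER;
INSERT INTO testcluster.t1 VALUES (1, 'hello');

-- On 10.0.12.152 the row should now be visible:
SELECT * FROM testcluster.t1;
```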


For details on MySQL Cluster nodes, node groups, replicas, and partitions, see:

MySQL Cluster Nodes, Node Groups, Replicas, and Partitions

http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-nodes-groups.html


*********************************************************


Replica.  This is a copy of a cluster partition. Each node in a node group stores a replica. Also sometimes known as a partition replica. The number of replicas is equal to the number of nodes per node group.

The following diagram illustrates a MySQL Cluster with four data nodes, arranged in two node groups of two nodes each; nodes 1 and 2 belong to node group 0, and nodes 3 and 4 belong to node group 1.

Note

Only data (ndbd) nodes are shown here; although a working cluster requires an ndb_mgm process for cluster management and at least one SQL node to access the data stored by the cluster, these have been omitted in the figure for clarity.

[Figure: a MySQL Cluster with four data nodes arranged in two node groups of two nodes each]

The data stored by the cluster is divided into four partitions, numbered 0, 1, 2, and 3. Each partition is stored—in multiple copies—on the same node group. Partitions are stored on alternate node groups as follows:

  • Partition 0 is stored on node group 0; a primary replica (primary copy) is stored on node 1, and a backup replica (backup copy of the partition) is stored on node 2.

  • Partition 1 is stored on the other node group (node group 1); this partition's primary replica is on node 3, and its backup replica is on node 4.

  • Partition 2 is stored on node group 0. However, the placing of its two replicas is reversed from that of Partition 0; for Partition 2, the primary replica is stored on node 2, and the backup on node 1.

  • Partition 3 is stored on node group 1, and the placement of its two replicas are reversed from those of partition 1. That is, its primary replica is located on node 4, with the backup on node 3.
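The placement rules in the four bullets above follow a simple alternating pattern, which can be sketched as follows. This is a hypothetical illustration of this particular 2-group, 2-replica example, not how NDB computes placement internally.

```shell
#!/bin/sh
# Sketch of the placement in the manual's example: nodes 1-2 form node
# group 0, nodes 3-4 form node group 1, NoOfReplicas=2, 4 partitions.
placement() {
    p=$1
    group=$((p % 2))            # partitions alternate between node groups
    first=$((group * 2 + 1))    # first node in that group (1 or 3)
    second=$((first + 1))       # second node in that group (2 or 4)
    # primary/backup roles swap on every second partition in a group
    if [ $((p / 2 % 2)) -eq 0 ]; then
        echo "partition $p: primary on node $first, backup on node $second"
    else
        echo "partition $p: primary on node $second, backup on node $first"
    fi
}

for p in 0 1 2 3; do placement "$p"; done
```

Running the loop reproduces the four bullets: partitions 0 and 2 land on node group 0 with primary/backup roles reversed, and partitions 1 and 3 do the same on node group 1.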

What this means regarding the continued operation of a MySQL Cluster is this: so long as each node group participating in the cluster has at least one node operating, the cluster has a complete copy of all data and remains viable. This is illustrated in the next diagram.

[Figure: cluster viability with at least one surviving node per node group]

In this example, where the cluster consists of two node groups of two nodes each, any combination of at least one node in node group 0 and at least one node in node group 1 is sufficient to keep the cluster alive (indicated by arrows in the diagram). However, if both nodes from either node group fail, the remaining two nodes are not sufficient (shown by the arrows marked out with an X); in either case, the cluster has lost an entire partition and so can no longer provide access to a complete set of all cluster data.
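The survival rule described above, that the cluster remains viable as long as every node group keeps at least one live node, can be sketched with a small hypothetical helper; its arguments are the live-node counts per node group (e.g. "viable 1 2" for groups 0 and 1).

```shell
#!/bin/sh
# Hypothetical check of the node-group survival rule described above.
viable() {
    # one argument per node group: the number of live nodes in that group
    for live in "$@"; do
        if [ "$live" -lt 1 ]; then
            echo "cluster down: a node group has no live node"
            return 1
        fi
    done
    echo "cluster viable: every node group has a live node"
    return 0
}

viable 1 1   # one survivor in each group: still viable
viable 0 2   # node group 0 lost entirely: an entire partition is gone
```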