Building a Sharded Multi-Master Twemproxy + SSDB Cluster with Docker

  • Environment Preparation
  • Dependencies
  • Install Docker
  • Install redis-cli
  • Basic Single Instance
  • Start
  • Test
  • Single Instance with Configuration
  • Write the Configuration File
  • Start
  • Sharded SSDB Replica Cluster
  • Create Directories
  • Write the twemproxy Configuration
  • Write the SSDB Configurations
  • Create the overlay Network
  • Start SSDB
  • Start twemproxy
  • Deploying the Service Cluster with Docker Stack
  • Clean Up Containers and Networks
  • Write ssdb.yaml
  • Update the twemproxy Configuration
  • Update the SSDB Configurations
  • Start
  • Test


Environment Preparation

Dependencies

  • CentOS 7.6

Install Docker

Refer to the Docker installation guide.

Install redis-cli

Install redis-cli ahead of time to test connections to SSDB.

yum install -y epel-release
yum install -y redis   # the redis-cli binary ships in the redis package (EPEL)

Basic Single Instance

Start

docker pull leobuskin/ssdb-docker
docker run -p 6379:8888 -v /root/volumns/ssdb/var:/ssdb/var --name ssdb  -d leobuskin/ssdb-docker

Test

redis-cli set 1 a
redis-cli get 1

The connection succeeds.

Single Instance with Configuration

Write the Configuration File

vi /root/volumns/ssdb/ssdb.conf

Copy the configuration below into the file. When editing, be sure to indent with TAB characters.
It is best to edit the file on Windows and upload it to the remote host with rz; copying and pasting too easily introduces stray spaces.

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

The configuration options are defined in the official configuration docs; a few points are still worth calling out here:

  • output: logs can go to a file, but stdout is used here so the logs can be read with docker logs.
  • cache_size: set this to half of physical memory.
  • write_buffer_size: valid range [4, 128], and larger is better; with today's large physical memory, 128 is a sensible choice.
  • compaction_speed: set according to the measured performance of the disk. [500, 1000] suits SSDs; a local NVMe SSD can go higher. Lowering this value also works as a write throttle.
  • compression: use yes in nearly all cases; it yields roughly 10x the data per unit of disk space.

Note also that SSDB imposes no maximum-memory limit, so this is generally not a concern.
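As a quick aid for the cache_size rule above, here is a small sketch (assuming a Linux host with /proc/meminfo; use a divisor of 2 for a single instance, or 4 when two SSDB processes share a machine, as in the cluster below):

# print a suggested cache_size (in MB) as half of physical memory
awk '/^MemTotal/ {printf "cache_size: %d MB\n", $2 / 1024 / 2}' /proc/meminfo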

Start

docker run -p 6379:8888 -v /root/volumns/ssdb/ssdb.conf:/ssdb/ssdb.conf -v /root/volumns/ssdb/var:/ssdb/var --name ssdb  -d leobuskin/ssdb-docker

After it starts, run

docker logs ssdb

It should print:

ssdb-server 1.9.7
Copyright (c) 2012-2015 ssdb.io

2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(46): ssdb-server 1.9.7
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(47): conf_file        : /ssdb/ssdb.conf
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(48): log_level        : info
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(49): log_output       : stdout
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(50): log_rotate_size  : 1000000000
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(52): main_db          : /ssdb/var/data
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(53): meta_db          : /ssdb/var/meta
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(54): cache_size       : 8000 MB
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(55): block_size       : 32 KB
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(56): write_buffer     : 64 MB
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(57): max_open_files   : 1000
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(58): compaction_speed : 1000 MB/s
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(59): compression      : yes
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(60): binlog           : yes
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(61): binlog_capacity  : 20000000
2019-08-12 08:20:59.129 [INFO ] ssdb-server.cpp(62): sync_speed       : -1 MB/s
2019-08-12 08:20:59.132 [INFO ] binlog.cpp(179): binlogs capacity: 20000000, min: 0, max: 0
2019-08-12 08:20:59.136 [INFO ] server.cpp(159): server listen on 0.0.0.0:8888
2019-08-12 08:20:59.136 [INFO ] server.cpp(169):     auth    : off
2019-08-12 08:20:59.136 [INFO ] server.cpp(209):     readonly: no
2019-08-12 08:20:59.137 [INFO ] serv.cpp(222): key_range.kv: "", ""
2019-08-12 08:20:59.137 [INFO ] ssdb-server.cpp(85): pidfile: /run/ssdb.pid, pid: 1
2019-08-12 08:20:59.137 [INFO ] ssdb-server.cpp(86): ssdb server started.

If instead you see the following, the file has a whitespace problem:

ssdb-server 1.9.7
Copyright (c) 2012-2015 ssdb.io

error loading conf file: '/ssdb/ssdb.conf'
2019-08-12 08:16:47.861 [ERROR] config.cpp(62): invalid line(33): unexpected whitespace char ' '
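A quick way to hunt down the offending line (a sketch assuming GNU grep and cat; with cat -A, tabs render as ^I, so space-indented lines stand out):

# lines indented with spaces instead of tabs
grep -nP '^\t* +' /root/volumns/ssdb/ssdb.conf
# show whitespace explicitly: tabs appear as ^I
cat -A /root/volumns/ssdb/ssdb.conf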

Sharded SSDB Replica Cluster

  • For sharding, the official site recommends twemproxy. Like mongos, twemproxy can also form a multi-node horizontal proxy layer by deploying several twemproxy instances.
  • Deploying twemproxy on the client side is possible, but it makes the deployment architecture less clear.
  • On the client side, haproxy can serve as a TCP reverse proxy that polls for availability, or a load balancer can be used, so that traffic switches over dynamically when a single proxy fails (a sketch follows the layout table below). Note that on an unstable network this can cause momentary consistency issues, depending on the implementation. haproxy does not merge requests the way twemproxy does; it proxies TCP connections one-to-one in their original form, so there is no concern about haproxy-forwarded messages executing out of order across multiple instances. twemproxy, by contrast, does merge messages, and it guarantees their ordering itself.
  • Replication is a built-in SSDB feature, available in slave and mirror modes. slave is master-slave, similar to MongoDB's retired master/slave replication; mirror is multi-master, similar to a MongoDB replica set. More replicas cost more resources, so as a subjective balance this guide uses two instances per shard.
  • Deployment design: 3 machines in dual-master mode with 3 shards, i.e. 6 SSDB instances plus 3 twemproxy instances, one twemproxy process and two SSDB processes per machine. Deployment layout:

vm1                vm2                vm3
twemproxy-1        twemproxy-2        twemproxy-3
shard-1-server-1   shard-2-server-1   shard-3-server-1
shard-3-server-2   shard-1-server-2   shard-2-server-2
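For the haproxy option mentioned in the notes above, a minimal TCP reverse-proxy excerpt might look like this (a sketch only; the hostnames and ports assume the layout above and are not part of this deployment):

# hypothetical haproxy.cfg excerpt: round-robin TCP across the three proxies
listen twemproxy
    bind *:6379
    mode tcp
    balance roundrobin
    server proxy1 vm1:6379 check
    server proxy2 vm2:6379 check
    server proxy3 vm3:6379 check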

Create Directories

#vm1
mkdir -p /root/volumns/ssdb-twemproxy-1 /root/volumns/ssdb-shard-1-server-1/var /root/volumns/ssdb-shard-3-server-2/var
#vm2
mkdir -p /root/volumns/ssdb-twemproxy-2 /root/volumns/ssdb-shard-2-server-1/var /root/volumns/ssdb-shard-1-server-2/var
#vm3
mkdir -p /root/volumns/ssdb-twemproxy-3 /root/volumns/ssdb-shard-3-server-1/var /root/volumns/ssdb-shard-2-server-2/var

Write the twemproxy Configuration

  • Since there is no configsvr-style step for registering replica sets, all SSDB instances have to be listed directly under the twemproxy proxy. Every twemproxy proxy uses exactly the same configuration:
alpha:
  listen: 0.0.0.0:6379
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - shard-1-server-1:8888:1
    - shard-1-server-2:8888:1
    - shard-2-server-1:8888:1
    - shard-2-server-2:8888:1
    - shard-3-server-1:8888:1
    - shard-3-server-2:8888:1
  • twemproxy configuration is documented on the official site.
  • hash: the hashing algorithm. Valid values: one_at_a_time, md5, crc16, crc32 (crc32 implementation compatible with libmemcached), crc32a (correct crc32 implementation as per the spec), fnv1_64, fnv1a_64, fnv1_32, fnv1a_32, hsieh, murmur, jenkins.
  • distribution: the key distribution (sharding) algorithm. Valid values: ketama, modula, random.
  • auto_eject_hosts: automatically eject a node from the server list when it stops responding, and re-add it once it responds again.

Write the SSDB Configurations

  • When several SSDB instances run on one machine, the half-of-physical-memory rule for cache_size no longer applies; with two SSDB processes per machine, each should get a quarter of physical memory. Also watch the host values when adapting each configuration.

shard-1-server-1 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-1-server-2
		type: mirror
		host: shard-1-server-2
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 64
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

shard-1-server-2 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-1-server-1
		type: mirror
		host: shard-1-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

shard-2-server-1 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-2-server-2
		type: mirror
		host: shard-2-server-2
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 64
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

shard-2-server-2 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-2-server-1
		type: mirror
		host: shard-2-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

shard-3-server-1 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-3-server-2
		type: mirror
		host: shard-3-server-2
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 64
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

shard-3-server-2 configuration

# ssdb-server config
# MUST indent by TAB!

# absolute path, or relative to path of this file, directory must exists
work_dir = /ssdb/var
pidfile = /run/ssdb.pid

server:
	ip: 0.0.0.0
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password
	#readonly: yes
	# in ms, to log slowlog with WARN level
	#slowlog_timeout:

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		id: shard-3-server-1
		type: mirror
		host: shard-3-server-1
		port: 8888
		# to identify a master even if it moved(ip, port changed)
		# if set to empty or not defined, ip: 0.0.0.0
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#host: localhost
		#port: 8889

logger:
	level: info
	output: stdout
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in MB
	write_buffer_size: 128
	# in MB/s
	compaction_speed: 1000
	# yes|no
	compression: yes

Create the overlay Network

The run commands below attach every container to an overlay network named overlay, so it must be created first.
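A sketch of the command, assuming the three hosts already form a Docker Swarm (plain docker run containers can only attach to an overlay network created with --attachable):

docker network create --driver overlay --attachable overlay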

Start SSDB

Run the following, line by line:

#vm1
docker pull leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8888:8888 \
-v /root/volumns/ssdb-shard-1-server-1/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-1-server-1/var:/ssdb/var \
--name=shard-1-server-1 --hostname=shard-1-server-1 \
-d leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8889:8888 \
-v /root/volumns/ssdb-shard-3-server-2/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-3-server-2/var:/ssdb/var \
--name=shard-3-server-2 --hostname=shard-3-server-2 \
-d leobuskin/ssdb-docker
#vm2
docker pull leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8888:8888 \
-v /root/volumns/ssdb-shard-2-server-1/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-2-server-1/var:/ssdb/var \
--name=shard-2-server-1 --hostname=shard-2-server-1 \
-d leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8889:8888 \
-v /root/volumns/ssdb-shard-1-server-2/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-1-server-2/var:/ssdb/var \
--name=shard-1-server-2 --hostname=shard-1-server-2 \
-d leobuskin/ssdb-docker
#vm3
docker pull leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8888:8888 \
-v /root/volumns/ssdb-shard-3-server-1/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-3-server-1/var:/ssdb/var \
--name=shard-3-server-1 --hostname=shard-3-server-1 \
-d leobuskin/ssdb-docker
docker run --network overlay --restart=always -p 8889:8888 \
-v /root/volumns/ssdb-shard-2-server-2/ssdb.conf:/ssdb/ssdb.conf \
-v /root/volumns/ssdb-shard-2-server-2/var:/ssdb/var \
--name=shard-2-server-2 --hostname=shard-2-server-2 \
-d leobuskin/ssdb-docker

Use docker logs on each container to confirm it started successfully:

ssdb-server 1.9.7
Copyright (c) 2012-2015 ssdb.io

2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(46): ssdb-server 1.9.7
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(47): conf_file        : /ssdb/ssdb.conf
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(48): log_level        : info
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(49): log_output       : stdout
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(50): log_rotate_size  : 1000000000
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(52): main_db          : /ssdb/var/data
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(53): meta_db          : /ssdb/var/meta
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(54): cache_size       : 500 MB
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(55): block_size       : 32 KB
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(56): write_buffer     : 128 MB
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(57): max_open_files   : 500
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(58): compaction_speed : 1000 MB/s
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(59): compression      : yes
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(60): binlog           : yes
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(61): binlog_capacity  : 20000000
2019-08-12 12:43:07.949 [INFO ] ssdb-server.cpp(62): sync_speed       : -1 MB/s
2019-08-12 12:43:07.952 [INFO ] binlog.cpp(179): binlogs capacity: 20000000, min: 0, max: 0
2019-08-12 12:43:07.953 [INFO ] server.cpp(159): server listen on 0.0.0.0:8888
2019-08-12 12:43:07.953 [INFO ] server.cpp(169):     auth    : off
2019-08-12 12:43:07.953 [INFO ] server.cpp(209):     readonly: no
2019-08-12 12:43:07.953 [INFO ] serv.cpp(207): slaveof: shard-2-server-1:8888, type: mirror
2019-08-12 12:43:07.953 [INFO ] serv.cpp(222): key_range.kv: "", ""
2019-08-12 12:43:07.953 [INFO ] ssdb-server.cpp(85): pidfile: /run/ssdb.pid, pid: 1
2019-08-12 12:43:07.953 [INFO ] ssdb-server.cpp(86): ssdb server started.
2019-08-12 12:43:07.954 [INFO ] slave.cpp(171): [shard-2-server-1][0] connecting to master at shard-2-server-1:8888...
2019-08-12 12:43:07.970 [INFO ] slave.cpp(200): [shard-2-server-1] ready to receive binlogs
2019-08-12 12:43:07.970 [INFO ] backend_sync.cpp(54): fd: 19, accept sync client
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(246): [mirror] 127.0.0.1:38452 fd: 19, copy begin, seq: 0, key: ''
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(260): 127.0.0.1:38452 fd: 19, copy begin
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(291): new iterator, last_key: ''
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(297): iterator created, last_key: ''
2019-08-12 12:43:07.972 [INFO ] backend_sync.cpp(349): 127.0.0.1:38452 fd: 19, copy end
2019-08-12 12:43:07.972 [INFO ] slave.cpp(349): copy begin
2019-08-12 12:43:07.972 [INFO ] slave.cpp(359): copy end, copy_count: 0, last_seq: 0, seq: 0
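To spot-check that a mirror pair really replicates, a write to one side should appear on the other (ports per the run commands above: shard-1-server-1 on vm1:8888, its mirror shard-1-server-2 on vm2:8889):

# on vm1: write directly to shard-1-server-1
redis-cli -p 8888 set mirror-test 1
# on vm2: read the same key from its mirror, shard-1-server-2
redis-cli -p 8889 get mirror-test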

Start twemproxy

#vm1
docker pull anchorfree/twemproxy
docker run --network overlay -p 6379:6379 -v /root/volumns/ssdb-twemproxy-1/nutcracker.yml:/opt/nutcracker.yml --name=ssdb-twemproxy-1 --hostname=ssdb-twemproxy-1 -d anchorfree/twemproxy
#vm2
docker pull anchorfree/twemproxy
docker run --network overlay -p 6379:6379 -v /root/volumns/ssdb-twemproxy-2/nutcracker.yml:/opt/nutcracker.yml --name=ssdb-twemproxy-2 --hostname=ssdb-twemproxy-2 -d anchorfree/twemproxy
#vm3
docker pull anchorfree/twemproxy
docker run --network overlay -p 6379:6379 -v /root/volumns/ssdb-twemproxy-3/nutcracker.yml:/opt/nutcracker.yml --name=ssdb-twemproxy-3 --hostname=ssdb-twemproxy-3 -d anchorfree/twemproxy

Test after startup:

redis-cli set hello 1

Searching each of the 6 instances for the key shows that only 2 instances, one shard's mirror pair, hold the data.
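A sketch of that check, assuming vm1/vm2/vm3 resolve to the three hosts (each host publishes its two SSDB instances on 8888 and 8889):

for vm in vm1 vm2 vm3; do
  for port in 8888 8889; do
    echo -n "$vm:$port -> "
    redis-cli -h $vm -p $port get hello
  done
done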

docker ps -a shows the twemproxy containers as unhealthy. This does not affect operation and looked like a minor flaw, so let's dig in:

docker run -it --rm anchorfree/twemproxy bash

Inspect healthcheck.bats:

#!/usr/bin/env bats

@test "[INFRA-6245] [nutcracker] Check nutcracker configuration" {
    /usr/sbin/nutcracker --test-conf -c /opt/nutcracker.yml
}

@test "[INFRA-6245] [nc] Test memcache port" {
    run nc -zv localhost 11211
    [ "$status" -eq 0 ]
    [[ "$output"  == *"open"* ]]
}

@test "[INFRA-6245] [nutcracker] Check nutcracker version" {
    run /usr/sbin/nutcracker --version
    [ "$status" -eq 0 ]
    [[ "$output"  == *"This is nutcracker-0.4.1"* ]]
}

So the healthcheck evidently requires the port to be 11211. After changing the twemproxy listen port in the configuration to 11211 and retrying, the containers show "healthy".
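The health state can also be read directly instead of watching docker ps:

docker inspect --format '{{.State.Health.Status}}' ssdb-twemproxy-1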

Deploying the Service Cluster with Docker Stack

Testing shows that in stack mode the dual-master pair only syncs from one side to the other: docker stack / swarm cannot deploy two processes that strongly depend on each other at the same time.

Clean Up Containers and Networks

docker stop $(docker ps -a -q)
docker container prune
docker network rm overlay

At the same time, delete all data under the var directories.
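For example (destructive; run on every vm, assuming the directory layout created earlier; the conf files are left in place):

rm -rf /root/volumns/ssdb-*/var/*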

Write ssdb.yaml

version: '3'
services:
  ssdb-shard-1-server-1:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-1-server-1
    networks:
      - overlay
    ports:
      - 8881:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-1-server-1/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-1-server-1/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  ssdb-shard-1-server-2:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-1-server-2
    networks:
      - overlay
    ports:
      - 8882:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-1-server-2/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-1-server-2/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  ssdb-shard-2-server-1:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-2-server-1
    networks:
      - overlay
    ports:
      - 8883:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-2-server-1/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-2-server-1/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  ssdb-shard-2-server-2:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-2-server-2
    networks:
      - overlay
    ports:
      - 8884:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-2-server-2/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-2-server-2/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  ssdb-shard-3-server-1:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-3-server-1
    networks:
      - overlay
    ports:
      - 8885:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-3-server-1/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-3-server-1/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  ssdb-shard-3-server-2:
    image: leobuskin/ssdb-docker
    hostname: ssdb-shard-3-server-2
    networks:
      - overlay
    ports:
      - 8886:8888
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-shard-3-server-2/ssdb.conf:/ssdb/ssdb.conf
      - /root/volumns/ssdb-shard-3-server-2/var:/ssdb/var
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1

  ssdb-twemproxy-1:
    image: anchorfree/twemproxy
    hostname: ssdb-twemproxy-1
    networks:
      - overlay
    ports:
      - 6379:11211
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-twemproxy-1/nutcracker.yml:/opt/nutcracker.yml
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  ssdb-twemproxy-2:
    image: anchorfree/twemproxy
    hostname: ssdb-twemproxy-2
    networks:
      - overlay
    ports:
      - 6380:11211
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-twemproxy-2/nutcracker.yml:/opt/nutcracker.yml
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  ssdb-twemproxy-3:
    image: anchorfree/twemproxy
    hostname: ssdb-twemproxy-3
    networks:
      - overlay
    ports:
      - 6381:11211
    volumes:
      - /etc/localtime:/etc/localtime
      - /root/volumns/ssdb-twemproxy-3/nutcracker.yml:/opt/nutcracker.yml
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
networks:
  overlay:
    driver: overlay

Update the twemproxy Configuration

alpha:
  listen: 0.0.0.0:11211
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
    - ssdb-shard-1-server-1:8888:1
    - ssdb-shard-2-server-1:8888:1
    - ssdb-shard-3-server-1:8888:1

Update the SSDB Configurations

For a master, simply remove all slaveof settings; for a slave, change mirror to sync.
A master excerpt:

replication:
        binlog: yes
        # Limit sync speed to *MB/s, -1: no limit
        sync_speed: -1
        slaveof:
                # to identify a master even if it moved(ip, port changed)
                # if set to empty or not defined, ip: 0.0.0.0
                #id: svc_2
                # sync|mirror, default is sync
                #type: sync
                #host: localhost
                #port: 8889

A slave excerpt:

replication:
        binlog: yes
        # Limit sync speed to *MB/s, -1: no limit
        sync_speed: -1
        slaveof:
                id: ssdb-shard-1-server-1
                type: sync
                host: ssdb-shard-1-server-1
                port: 8888
                # to identify a master even if it moved(ip, port changed)
                # if set to empty or not defined, ip: 0.0.0.0
                #id: svc_2
                # sync|mirror, default is sync
                #type: sync
                #host: localhost
                #port: 8889

Start

docker stack deploy -c ssdb.yaml ssdb

List the services:

docker stack services ssdb
ID                  NAME                         MODE                REPLICAS            IMAGE                          PORTS
1hzg9nau4ek4        ssdb_ssdb-shard-2-server-2   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8884->8888/tcp
8tqpkzmuoz1i        ssdb_ssdb-twemproxy-2        replicated          1/1                 anchorfree/twemproxy:latest    *:6380->11211/tcp
9ffxf3779fvb        ssdb_ssdb-shard-1-server-2   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8882->8888/tcp
larlbx0cizlv        ssdb_ssdb-twemproxy-1        replicated          1/1                 anchorfree/twemproxy:latest    *:6379->11211/tcp
mrez447h81p6        ssdb_ssdb-shard-1-server-1   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8881->8888/tcp
mu561y479nvq        ssdb_ssdb-twemproxy-3        replicated          1/1                 anchorfree/twemproxy:latest    *:6381->11211/tcp
vr7dfuyp7rb1        ssdb_ssdb-shard-3-server-1   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8885->8888/tcp
w8zscndcitku        ssdb_ssdb-shard-2-server-1   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8883->8888/tcp
z4t6ojv4fvn3        ssdb_ssdb-shard-3-server-2   replicated          1/1                 leobuskin/ssdb-docker:latest   *:8886->8888/tcp
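docker stack ps also confirms that the placement constraints pinned each task to its intended node:

docker stack ps ssdb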

Test

Verify that data written through one twemproxy port is readable through the others:

[root@localhost ~]# redis-cli
127.0.0.1:6379> set 1 1
OK
127.0.0.1:6379> get 1
"1"
127.0.0.1:6379> 
[root@localhost ~]# redis-cli -p 6380
127.0.0.1:6380> get 1
"1"

Verify that replication works: write to the shard-1 master (published on port 8881) and read from its slave (port 8882):

[root@localhost volumns]# redis-cli -p 8881 set final 1
OK
[root@localhost volumns]# redis-cli -p 8882 get final
"1"

To test single-point failure, force-restart a service:

docker service update --force 1hzg9nau4ek4
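While the task restarts, the proxies should keep answering: auto_eject_hosts drops the failed backend after server_failure_limit failures and re-adds it after server_retry_timeout. A quick check from another proxy port:

redis-cli -p 6380 get 1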