Importing Conductor fails to build

  1. Change the repository declaration in all of the following Conductor build files:
  • build.gradle of the common project
  • build.gradle of the mysql-persistence project
  • build.gradle of the postgres-persistence project
  • build.gradle of the test-harness project
  • the top-level build.gradle
jcenter()
// change to the following (point the repository at the Aliyun mirror)
jcenter() { url 'http://maven.aliyun.com/nexus/content/groups/public/' }

Introduction to Dynomite

Dynomite is Netflix's open-source, general-purpose implementation of Amazon's distributed storage engine Dynamo. Written in C/C++, it implements a Redis cache-cluster solution by acting as a proxy. Dynomite can not only turn in-memory stores such as Redis and Memcached into a distributed database, but also supports persistent backends such as MySQL, BerkeleyDB, and LevelDB; it is simple, efficient, and supports cross-datacenter data replication. Dynomite's ultimate goal is to provide the simple, efficient, cross-datacenter replication that the storage engines themselves do not offer. Dynomite is released under the Apache License 2.0; for more information, see the Netflix tech blog's introduction to Dynomite.

Caveats

  • A Dynomite cluster consists of one or more datacenters; each datacenter contains one or more racks, and each rack contains one or more nodes. The number of nodes may differ between racks, and likewise the number of racks may differ between datacenters;
  • Within a Dynomite cluster, every rack stores a complete copy of the cluster's data;
  • Each node in a rack owns the token range from its own token up to the next node's token - 1. A key's token is computed with a consistent-hashing algorithm, which places the data on the correct node;
  • If a Redis node already holds data before joining a Dynomite cluster, some of that data may become unreadable through Dynomite after joining, because the tokens computed for those keys fall outside the node's token range;
redis-cli -h 10.130.138.47 -p 8102
> set ca California
> get ca
"California"
  • When operating through Dynomite's client-facing port, Dynomite hashes the key, determines the actual owning node from the nodes' tokens, and forwards the request to that node. With the Dyno client, the nodes' token information can be synchronized to the client, so the Dyno client can compute the owning node's token on the Java side and avoid Dynomite's extra hop.
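The range lookup described above can be sketched in a few lines. Dynomite's real token comes from hashing the key (Murmur by default), which is not reproduced here; this sketch assumes the key's token is already computed and only shows how the owning node is selected:

```python
from bisect import bisect_right

# Node tokens and backends for one rack (matching the three-node configs in this document).
TOKENS = [0, 1431655765, 2863311530]
NODES = ["192.168.1.101:6379", "192.168.1.101:6389", "192.168.1.101:6399"]

def owner(key_token: int) -> str:
    """Each node owns [token, next_token - 1]; pick the node with the
    largest token <= key_token."""
    idx = bisect_right(TOKENS, key_token) - 1
    return NODES[idx]
```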

The Dynomite topology is shown in the figure below:

First, start three Redis instances on ports 6379, 6389, and 6399.

Here we use Docker to create three Redis services on different ports.

Redis1, port 6379

  • docker run -p 6379:6379 --name redis1 -v /docker-data/myredis1/data:/data -v /docker-data/myredis1/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes

Redis2, port 6389

  • docker run -p 6389:6379 --name redis2 -v /docker-data/myredis2/data:/data -v /docker-data/myredis2/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes

Redis3, port 6399

  • docker run -p 6399:6379 --name redis3 -v /docker-data/myredis3/data:/data -v /docker-data/myredis3/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes

Single-datacenter, single-rack mode

  • docker run -p 6409:6379 --name redis4 -v /docker-data/myredis4/data:/data -v /docker-data/myredis4/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes
dyn_o_mite:
  datacenter: dc1
  rack: rack1
  dyn_listen: 192.168.1.101:8101
  dyn_seeds:
  - 192.168.1.101:8201:rack1:dc1:1431655765
  - 192.168.1.101:8301:rack1:dc1:2863311530
  data_store: 0
  listen: 192.168.1.101:8102
  dyn_seed_provider: simple_provider
  servers:
  - 192.168.1.101:6379:1
  tokens: 0
  pem_key_file: /apps/dynomite/conf/dynomite.pem


dyn_o_mite:
  datacenter: dc1
  rack: rack1
  dyn_listen: 192.168.1.101:8201
  dyn_seeds:
  - 192.168.1.101:8101:rack1:dc1:0
  - 192.168.1.101:8301:rack1:dc1:2863311530
  data_store: 0
  listen: 192.168.1.101:8202
  dyn_seed_provider: simple_provider
  servers:
  - 192.168.1.101:6389:1
  tokens: 1431655765
  pem_key_file: /apps/dynomite/conf/dynomite.pem


dyn_o_mite:
  datacenter: dc1
  rack: rack1
  dyn_listen: 192.168.1.101:8301
  dyn_seeds:
  - 192.168.1.101:8101:rack1:dc1:0
  - 192.168.1.101:8201:rack1:dc1:1431655765
  data_store: 0
  listen: 192.168.1.101:8302
  dyn_seed_provider: simple_provider
  servers:
  - 192.168.1.101:6399:1
  tokens: 2863311530
  pem_key_file: /apps/dynomite/conf/dynomite.pem
  
  
  
`node1.yaml`
dyn_o_mite:
  # datacenter
  datacenter: dc1
  # rack (cluster partition)
  rack: rack1
  # port for inter-Dynomite (peer) communication
  dyn_listen: 192.168.1.101:8101
  # the other Dynomite nodes, as ip:port:rack:datacenter:tokens
  dyn_seeds:
  - 192.168.1.101:8201:rack1:dc1:1431655765
  - 192.168.1.101:8301:rack1:dc1:2863311530
  # backend type: 0 = Redis, 1 = Memcached
  data_store: 0
  # client-facing port
  listen: 192.168.1.101:8102
  dyn_seed_provider: simple_provider
  # the proxied redis-server node, as ip:port:weight
  # servers looks like it should accept multiple entries, but the docs say only one is currently supported; judging by the weight field, this may be reserved for single-node high availability
  servers:
  - 192.168.1.101:6379:1
  # node token; starting from 0, token = (4294967295 / numberOfNodesInRack) * nodeIndex
  tokens: 0
  # stats listen port, 0.0.0.0:30000
  stats_listen: 192.168.1.101:30000
  # RSA key; generate with: ssh-keygen -t rsa -f ~/dynomite/conf/dynomite.pem
  pem_key_file: conf/dynomite.pem
  # generated the same way as above
  recon_key_file: conf/recon_key.pem
  recon_iv_file: conf/recon_iv.pem

`node2.yaml`
dyn_o_mite:
  # datacenter
  datacenter: dc1
  # rack (cluster partition)
  rack: rack1
  # port for inter-Dynomite (peer) communication
  dyn_listen: 192.168.1.101:8201
  # the other Dynomite nodes, as ip:port:rack:datacenter:tokens
  dyn_seeds:
  - 192.168.1.101:8101:rack1:dc1:0
  - 192.168.1.101:8301:rack1:dc1:2863311530
  # backend type: 0 = Redis, 1 = Memcached
  data_store: 0
  # client-facing port
  listen: 192.168.1.101:8202
  dyn_seed_provider: simple_provider
  # the proxied redis-server node, as ip:port:weight
  # servers looks like it should accept multiple entries, but the docs say only one is currently supported; judging by the weight field, this may be reserved for single-node high availability
  servers:
  - 192.168.1.101:6389:1
  # node token; starting from 0, token = (4294967295 / numberOfNodesInRack) * nodeIndex
  tokens: 1431655765
  # stats listen port, 0.0.0.0:30001
  stats_listen: 192.168.1.101:30001
  # RSA key; generate with: ssh-keygen -t rsa -f ~/dynomite/conf/dynomite.pem
  pem_key_file: conf/dynomite.pem
  # generated the same way as above
  recon_key_file: conf/recon_key.pem
  recon_iv_file: conf/recon_iv.pem

`node3.yaml`
dyn_o_mite:
  # datacenter
  datacenter: dc1
  # rack (cluster partition)
  rack: rack1
  # port for inter-Dynomite (peer) communication
  dyn_listen: 192.168.1.101:8301
  # the other Dynomite nodes, as ip:port:rack:datacenter:tokens
  dyn_seeds:
  - 192.168.1.101:8101:rack1:dc1:0
  - 192.168.1.101:8201:rack1:dc1:1431655765
  # backend type: 0 = Redis, 1 = Memcached
  data_store: 0
  # client-facing port
  listen: 192.168.1.101:8302
  dyn_seed_provider: simple_provider
  # the proxied redis-server node, as ip:port:weight
  # servers looks like it should accept multiple entries, but the docs say only one is currently supported; judging by the weight field, this may be reserved for single-node high availability
  servers:
  - 192.168.1.101:6399:1
  # node token; starting from 0, token = (4294967295 / numberOfNodesInRack) * nodeIndex
  tokens: 2863311530
  # stats listen port, 0.0.0.0:30002
  stats_listen: 192.168.1.101:30002
  # RSA key; generate with: ssh-keygen -t rsa -f ~/dynomite/conf/dynomite.pem
  pem_key_file: conf/dynomite.pem
  # generated the same way as above
  recon_key_file: conf/recon_key.pem
  recon_iv_file: conf/recon_iv.pem
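The token formula in the config comments can be verified with a couple of lines of Python:

```python
MAX_TOKEN = 4294967295  # 2**32 - 1, the top of Dynomite's token ring

# token = (4294967295 / numberOfNodesInRack) * nodeIndex, with nodeIndex starting at 0
tokens = [(MAX_TOKEN // 3) * i for i in range(3)]
print(tokens)  # [0, 1431655765, 2863311530] -- the values used in node1/2/3.yaml
```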
Building Dynomite manually

First, install the following packages:

# install the required packages first
yum install -y gcc gcc-c++ openssl openssl-devel autoconf automake libtool libffi-devel git net-tools
# create and enter the working directory
mkdir -p /apps && cd /apps
# fetch the dynomite source code
git clone https://gitee.com/XMHans/dynomite.git
# enter the dynomite folder
cd dynomite
# build
autoreconf -fvi
./configure --enable-debug=log
make
# after the build, a dynomite executable appears under the src directory
src/dynomite -h

# create node1.yaml, node2.yaml and node3.yaml in the dynomite/conf folder
# with the contents shown above

# run the commands below to attach a dynomite proxy to each redis-server node
# -s is the stats listen port; -d runs it as a daemon
$ src/dynomite -c conf/node1.yaml -s 22221 -d --output=node1.log
$ src/dynomite -c conf/node2.yaml -s 22222 -d --output=node2.log
$ src/dynomite -c conf/node3.yaml -s 22223 -d --output=node3.log

# the 3 nodes form a plain cluster with no replication, but any node's data can be read through any dynomite port.


# test
redis-cli -h 10.130.138.47 -p 8102
> set ca California
> get ca
"California"

Running Dynomite with Docker

docker run --name dynomite1 -d -p 8102:8102 -p 8101:8101 -v /root/netflix-conductor/node1.yaml:/etc/dynomitedb/dynomite.yaml dynomitedb/dynomite

docker run --name dynomite2 -d -p 8202:8202 -p 8201:8201 -v /root/netflix-conductor/node2.yaml:/etc/dynomitedb/dynomite.yaml dynomitedb/dynomite

docker run --name dynomite3 -d -p 8302:8302 -p 8301:8301 -v /root/netflix-conductor/node3.yaml:/etc/dynomitedb/dynomite.yaml dynomitedb/dynomite
Installing Elasticsearch with Docker
# standalone single-node
docker run --name elasticsearch -p 10200:9200 -p 10300:9300 -e "discovery.type=single-node" -d elasticsearch:7.2.0

docker pull elastic/elasticsearch:6.7.1
docker run -d --name es -p 11200:9200 -p 11300:9300 -e "discovery.type=single-node" elastic/elasticsearch:6.7.1

# install Elasticsearch 5
docker pull elasticsearch:5.6.11
mkdir -p /data/elasticsearch5/config
mkdir -p /data/elasticsearch5/data
echo "http.host: 0.0.0.0" >> /data/elasticsearch5/config/elasticsearch.yml

docker run --name elasticsearch5 -p 12200:9200 -p 12300:9300 -e "discovery.type=single-node" -v /data/elasticsearch5/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/elasticsearch5/data:/usr/share/elasticsearch/data -d elasticsearch:5.6.11


`Installing elasticsearch-head with Docker`
docker create --name elasticsearch-head -p 9100:9100 mobz/elasticsearch-head:5
docker start elasticsearch-head
# enter the elasticsearch container and edit its config file
docker exec -it elasticsearch /bin/bash
vi config/elasticsearch.yml
# append these 2 lines at the bottom
http.cors.enabled: true
http.cors.allow-origin: "*"
# restart elasticsearch so the CORS settings take effect, then restart the head container
docker restart elasticsearch
docker restart elasticsearch-head

Related links

Running Conductor in containers

# Server
docker run -d --name conductor-server -e CONFIG_PROP=my-config.properties  -e LOG4J_PROP=log4j-file-appender.properties -p 18080:8080 -p 18090:8090 --add-host dyno1:192.168.1.101 -v /root/netflix-conductor/conductor/docker/server/config:/app/config -v /data/conductor/server/logs:/app/logs  conductor:server


docker run -d --name conductor-server-host --net host -e CONFIG_PROP=my-config.properties  -e LOG4J_PROP=log4j-file-appender.properties --add-host dyno-conductor:192.168.1.101 -v /root/netflix-conductor/conductor/docker/server/config:/app/config -v /data/conductor-host/server/logs:/app/logs  conductor:server

# UI
docker run --name conductor-ui -e WF_SERVER=http://192.168.1.101:18080/api/ -p 15000:5000 conductor:ui

docker run --name conductor-ui -e WF_SERVER=http://192.168.1.101:17777/api/ -p 18888:5000 conductor:ui

cd /apps/dynomite


  • src/dynomite -c /apps/self-conf/node1.yaml -s 22221 -d --output=logs/node1.log
  • src/dynomite -c /apps/self-conf/node2.yaml -s 22222 -d --output=logs/node2.log
  • src/dynomite -c /apps/self-conf/node3.yaml -s 22223 -d --output=logs/node3.log

Compiling the Conductor sources

# compile all the proto files under the /d/javawork_backup/conductor/grpc/src/main/proto/model directory

protoc --proto_path=./ --java_out=/d/javawork_backup/conductor/server/src/main/java * 

protoc --proto_path=./ --java_out=/d/javawork_backup/conductor/grpc/src/main/java *

# compile the grpc side
protoc --proto_path=/d/javawork_backup/conductor/grpc/src/main/proto --java_out=/d/javawork_backup/conductor/grpc/src/main/java /d/javawork_backup/conductor/grpc/src/main/proto/grpc/*.proto


# compile the server side
protoc --proto_path=/d/javawork_backup/conductor/grpc/src/main/proto --java_out=/d/javawork_backup/conductor/server/src/main/java /d/javawork_backup/conductor/grpc/src/main/proto/model/*.proto
The Conductor task system:
Create tasks

POST http://localhost:8080/api/metadata/taskdefs

RequestBody:

[
  {
    "name": "leaderRatify",
    "retryCount": 3,
    "ownerEmail": "xmhans@qq.com",
    "timeoutSeconds": 1200,
    "inputKeys": [
      "staffName",
      "staffDepartment"
    ],
    "outputKeys": [
      "leaderAgree",
      "leaderDisagree"
    ],
    "timeoutPolicy": "TIME_OUT_WF",
    "retryLogic": "FIXED",
    "retryDelaySeconds": 600,
    "responseTimeoutSeconds": 1200
  },
  {
    "name": "managerRatify",
    "retryCount": 3,
    "ownerEmail": "xmhans@qq.com",
    "timeoutSeconds": 1200,
    "inputKeys": [
      "managerName",
      "managerDepartment"
    ],
    "outputKeys": [
      "managerAgree",
      "managerDisagree"
    ],
    "timeoutPolicy": "TIME_OUT_WF",
    "retryLogic": "FIXED",
    "retryDelaySeconds": 600,
    "responseTimeoutSeconds": 1200
  }
]
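Before POSTing, a quick sanity check that each task definition carries the fields used above can save a round trip. The required-field set here is an assumption drawn from this payload, not Conductor's full schema:

```python
# Fields every task definition in this document supplies (assumed minimum, not the official schema).
REQUIRED_FIELDS = {"name", "retryCount", "ownerEmail", "timeoutSeconds",
                   "timeoutPolicy", "retryLogic"}

def missing_fields(taskdef: dict) -> list:
    """Return the assumed-required fields absent from a task definition."""
    return sorted(REQUIRED_FIELDS - taskdef.keys())

leader_ratify = {
    "name": "leaderRatify", "retryCount": 3, "ownerEmail": "xmhans@qq.com",
    "timeoutSeconds": 1200, "timeoutPolicy": "TIME_OUT_WF", "retryLogic": "FIXED",
}
print(missing_fields(leader_ratify))  # []
```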
Get the defined tasks

GET http://localhost:8080/api/metadata/taskdefs

ResponseBody

[
  {
    "createTime": 1615369918965,
    "createdBy": "",
    "name": "leaderRatify",
    "retryCount": 3,
    "timeoutSeconds": 1200,
    "inputKeys": [
      "staffName",
      "staffDepartment"
    ],
    "outputKeys": [
      "leaderAgree",
      "leaderDisagree"
    ],
    "timeoutPolicy": "TIME_OUT_WF",
    "retryLogic": "FIXED",
    "retryDelaySeconds": 600,
    "responseTimeoutSeconds": 1200,
    "inputTemplate": {},
    "rateLimitPerFrequency": 0,
    "rateLimitFrequencyInSeconds": 1,
    "ownerEmail": "xmhans@qq.com"
  },
  {
    "createTime": 1615369918976,
    "createdBy": "",
    "name": "managerRatify",
    "retryCount": 3,
    "timeoutSeconds": 1200,
    "inputKeys": [
      "managerName",
      "managerDepartment"
    ],
    "outputKeys": [
      "managerAgree",
      "managerDisagree"
    ],
    "timeoutPolicy": "TIME_OUT_WF",
    "retryLogic": "FIXED",
    "retryDelaySeconds": 600,
    "responseTimeoutSeconds": 1200,
    "inputTemplate": {},
    "rateLimitPerFrequency": 0,
    "rateLimitFrequencyInSeconds": 1,
    "ownerEmail": "xmhans@qq.com"
  }
]
Update a task definition

PUT http://localhost:8080/api/metadata/taskdefs

RequestBody

{
  "name": "leaderRatify",
  "retryCount": 3,
  "timeoutSeconds": 1200,
  "inputKeys": [
    "staffName",
    "staffDepartment"
  ],
  "outputKeys": [
    "leaderAgree",
    "leaderDisagree",
    "leaderName",
    "leaderDepartment"
  ],
  "timeoutPolicy": "TIME_OUT_WF",
  "retryLogic": "FIXED",
  "retryDelaySeconds": 600,
  "responseTimeoutSeconds": 1200,
  "inputTemplate": {},
  "rateLimitPerFrequency": 0,
  "rateLimitFrequencyInSeconds": 1,
  "ownerEmail": "xmhans@qq.com"
}
Define a workflow

POST http://localhost:8080/api/metadata/workflow

RequestBody

{
  "updateTime": 1540448903202,
  "ownerEmail": "xmhans@qq.com",
  "name": "Leave process",
  "description": "a demo for workflow",
  "version": 1,
  "tasks": [
    {
      "name": "leaderRatify",
      "taskReferenceName": "node1",
      "inputParameters": {
        "staffName": "${workflow.input.staffName}",
        "staffDepartment": "${workflow.input.staffDepartment}"
      },
      "type": "SIMPLE",
      "startDelay": 0
    },
    {
      "name": "managerRatify",
      "taskReferenceName": "node2",
      "inputParameters": {
        "managerName": "${node1.output.leaderName}",
        "managerDepartment": "${node1.output.leaderDepartment}"
      },
      "type": "SIMPLE",
      "startDelay": 0
    }
  ],
  "outputParameters": {
    "leaderName": "${node1.output.leaderAgree}",
    "leaderDepartment": "${node1.output.leaderDisagree}",
    "managerAgree": "${node2.output.managerAgree}",
    "managerDisagree": "${node2.output.managerDisagree}"
  },
  "restartable": true,
  "schemaVersion": 2
}
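The ${...} expressions above wire workflow input and earlier task outputs into task inputs. A toy resolver (a simplified model, not Conductor's actual evaluator) illustrates the nested lookup:

```python
import re

def resolve(expr: str, ctx: dict) -> str:
    """Replace each ${a.b.c} in expr with a nested lookup in ctx."""
    def repl(match):
        value = ctx
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", repl, expr)

ctx = {"workflow": {"input": {"staffName": "xmhans"}},
       "node1": {"output": {"leaderName": "alice"}}}
print(resolve("${workflow.input.staffName}", ctx))  # xmhans
print(resolve("${node1.output.leaderName}", ctx))   # alice
```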
Get the defined workflows

GET http://localhost:8080/api/metadata/workflow

ResponseBody

[
  {
    "createTime": 1615370236472,
    "updateTime": 1540448903202,
    "name": "Leave process",
    "description": "a demo for workflow",
    "version": 1,
    "tasks": [
      {
        "name": "leaderRatify",
        "taskReferenceName": "node1",
        "inputParameters": {
          "staffName": "${workflow.input.staffName}",
          "staffDepartment": "${workflow.input.staffDepartment}"
        },
        "type": "SIMPLE",
        "decisionCases": {},
        "defaultCase": [],
        "forkTasks": [],
        "startDelay": 0,
        "joinOn": [],
        "optional": false,
        "defaultExclusiveJoinTask": [],
        "asyncComplete": false,
        "loopOver": []
      },
      {
        "name": "managerRatify",
        "taskReferenceName": "node2",
        "inputParameters": {
          "managerName": "${node1.output.leaderName}",
          "managerDepartment": "${node1.output.leaderDepartment}"
        },
        "type": "SIMPLE",
        "decisionCases": {},
        "defaultCase": [],
        "forkTasks": [],
        "startDelay": 0,
        "joinOn": [],
        "optional": false,
        "defaultExclusiveJoinTask": [],
        "asyncComplete": false,
        "loopOver": []
      }
    ],
    "inputParameters": [],
    "outputParameters": {
      "leaderName": "${node1.output.leaderName}",
      "leaderDepartment": "${node1.output.leaderDepartment}",
      "managerAgree": "${node2.output.managerAgree}",
      "managerDisagree": "${node2.output.managerDisagree}"
    },
    "schemaVersion": 2,
    "restartable": true,
    "workflowStatusListenerEnabled": false,
    "ownerEmail": "xmhans@qq.com",
    "timeoutPolicy": "ALERT_ONLY",
    "timeoutSeconds": 0,
    "variables": {}
  }
]
Update a workflow

PUT http://localhost:8080/api/metadata/workflow

RequestBody

[
  {
    "ownerEmail": "xmhans@qq.com",
    "name": "Leave process",
    "description": "a demo for workflow",
    "version": 1,
    "tasks": [
      {
        "name": "leaderRatify",
        "taskReferenceName": "node1",
        "inputParameters": {
          "staffName": "${workflow.input.staffName}",
          "staffDepartment": "${workflow.input.staffDepartment}"
        },
        "type": "SIMPLE",
        "startDelay": 0
      },
      {
        "name": "managerRatify",
        "taskReferenceName": "node2",
        "inputParameters": {
          "managerName": "${node1.output.leaderName}",
          "managerDepartment": "${node1.output.leaderDepartment}"
        },
        "type": "SIMPLE",
        "startDelay": 0
      }
    ],
    "outputParameters": {
      "leaderName": "${node1.output.leaderAgree}",
      "leaderDepartment": "${node1.output.leaderDisagree}",
      "managerAgree": "${node2.output.managerAgree}",
      "managerDisagree": "${node2.output.managerDisagree}"
    },
    "restartable": true,
    "schemaVersion": 2
  }
]
Start a workflow

POST http://localhost:8080/api/workflow/{name}

  • name: the name of the workflow definition
# Leave process

{
  "staffName": "xmhans",
  "staffDepartment": "Tech Department"
}

ResponseBody

be0c7b04-0c85-4181-bfb2-65aedf052fbf
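Starting a workflow is a plain POST of the input JSON to /api/workflow/{name}, and the response body is the new workflow's id, as shown above. A small stdlib sketch that builds the request (sending it requires a running Conductor server, so the example stops short of that):

```python
import json
from urllib.parse import quote
from urllib.request import Request

BASE = "http://localhost:8080/api"  # adjust to your Conductor server

def start_workflow_request(name: str, workflow_input: dict) -> Request:
    """Build (but do not send) the POST that starts a workflow by name;
    the server replies with the workflow id (a UUID string)."""
    return Request(
        f"{BASE}/workflow/{quote(name)}",
        data=json.dumps(workflow_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = start_workflow_request("Leave process",
                             {"staffName": "xmhans", "staffDepartment": "Tech Department"})
print(req.full_url)      # http://localhost:8080/api/workflow/Leave%20process
print(req.get_method())  # POST
```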

Running tasks on the worker side

version: '3'
services:
  conductor-server:
    environment:
      - CONFIG_PROP=config.properties
    image: dalongrong/conductor:server
    volumes: 
    - "./config.properties:/app/config/config.properties"
    networks:
      - internal
    ports:
      - 8080:8080
  conductor-ui:
    environment:
      - WF_SERVER=http://conductor-server:8080/api/
    image: dalongrong/conductor:ui
    networks:
      - internal
    ports:
      - 5000:5000
  dynomite:
    image: v1r3n/dynomite
    networks:
      - internal
    ports:
      - 8102:8102
    healthcheck:
      test: timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/8102'
      interval: 5s
      timeout: 5s
      retries: 12

  # https://www.elastic.co/guide/en/elasticsearch/reference/5.6/docker.html
  elasticsearch:
    image: elasticsearch:5.6.8
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - transport.host=0.0.0.0
      - discovery.type=single-node
      - xpack.security.enabled=false
    networks:
      - internal
    ports:
      - 9200:9200
      - 9300:9300
networks:
  internal: