1. Environment: two ARM-architecture servers running CentOS 7.4

                         ip: 10.2.151.140

                         ip: 10.2.151.138

2. Download the Mesos source tarball: http://mesos.apache.org/downloads/

3. Download and install Apache Maven

                Download: http://maven.apache.org/download.cgi

Extract the Maven tarball:

$:mkdir -p /usr/local/maven
$:tar -zxvf apache-maven-3.3.9-bin.tar.gz -C /usr/local/maven

Edit the profile:

$:vim /etc/profile
MAVEN_HOME=/usr/local/maven/apache-maven-3.3.9
export MAVEN_HOME
export PATH=${PATH}:${MAVEN_HOME}/bin

Apply the changes:

$:source /etc/profile

Verify:

$:mvn -v

4. Install the Mesos build dependencies

$: yum install -y tar wget git
$: yum install -y cppunit-devel
$: yum groupinstall -y "Development Tools"
$: yum install -y python-devel java-1.8.0-openjdk-devel zlib-devel libcurl-devel openssl-devel cyrus-sasl-devel cyrus-sasl-md5 apr-devel subversion-devel apr-util-devel
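Before running ./configure it can save a failed build to confirm the toolchain actually landed on PATH. A small sketch (the check_tools helper and the tool list are illustrative, not part of the original steps):

```shell
# Illustrative helper (not part of the original steps): verify that the
# listed build tools are on PATH before running ./configure.
have() { command -v "$1" >/dev/null 2>&1; }

check_tools() {
    missing=0
    for t in "$@"; do
        if have "$t"; then
            echo "ok: $t"
        else
            echo "missing: $t" >&2
            missing=1
        fi
    done
    return $missing
}

# e.g.: check_tools gcc g++ make mvn java
```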

5. Extract the Mesos source tarball and build

$: cd /usr/local
$: mkdir mesos
$: tar -xvf mesos-1.6.0.tar.gz -C /usr/local/mesos
$: cd mesos/mesos-1.6.0
$: ./configure --prefix=/usr/local/mesos
$: make -j6
$: make -j6 install

Compiling and installing takes quite a while; go watch a movie!!!

6. Configure the hosts file (master & slaves)

$: vim /etc/hosts
Add:
IP               Hostname
10.2.151.138     Master.mesos
10.2.151.140     Slave1.mesos
10.2.151.141     Slave2.mesos
10.2.151.142     Slave3.mesos
.......

Test: ping <hostname> (if the ping succeeds, the setup worked!)
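A resolution check can precede the per-host ping test; this check_host helper is an illustrative sketch using getent, which consults /etc/hosts the same way ssh will:

```shell
# Illustrative helper: confirm a hostname resolves (via /etc/hosts or DNS)
# before bothering with ping.
check_host() {
    if getent hosts "$1" >/dev/null; then
        echo "$1 resolves"
    else
        echo "$1 does not resolve" >&2
        return 1
    fi
}

# e.g.: for h in Master.mesos Slave1.mesos; do check_host "$h"; done
```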

7. Passwordless SSH configuration

(1) Install and start the ssh service

Check what is already installed:

$:rpm -qa | grep openssh
openssh-7.4p1-11.el7.aarch64
openssh-server-7.4p1-11.el7.aarch64
openssh-clients-7.4p1-11.el7.aarch64
$:rpm -qa | grep rsync
rsync-3.1.2-4.el7.aarch64

If not installed, install with yum:

          yum install ssh      # install the SSH packages

          yum install rsync    # rsync is a remote file-synchronization tool for fast syncing between hosts over LAN/WAN

          service sshd restart # start the service

(2) Configure the master for passwordless login to all slaves

        a. How passwordless ssh login works

       The master (NameNode | JobTracker) acts as the client. To connect to a slave (DataNode | TaskTracker) with passwordless public-key authentication, a key pair (one public key, one private key) is generated on the master, and the public key is then copied to every slave. When the master connects to a slave over SSH, the slave generates a random number, encrypts it with the master's public key, and sends it to the master. The master decrypts it with its private key and sends the result back; once the slave confirms the decrypted value is correct, it allows the master to connect. That is the whole public-key authentication flow, with no password typed by hand. The key step is copying the master's public key to the slaves.

        b. Generate the key pair on the master

        On the master node, run: ssh-keygen -t rsa -P ''

This generates a key pair with no passphrase; just press Enter when asked for the save path to accept the default. The generated files, id_rsa and id_rsa.pub, are stored under ~/.ssh by default.

$: ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Ubr8vGxPKdtUHUSaTgjjC80nRoOzVuPMSz1rKu7iNpc root@localhost
The key's randomart image is:
+---[RSA 2048]----+
|        .=.   .o |
|       o==+ . +  |
|       .X*oo + . |
|       +oB+oo  ..|
|      . S.. o.. .|
|         + o o   |
|        . * +    |
|     + E o.B     |
|    o.*o..+.o    |
+----[SHA256]-----+

Verify that the key pair was generated:

[root@localhost ~]# ll -a
total 7237680
dr-xr-x---.  6 root root       4096 Aug  3 13:28 .
dr-xr-xr-x. 17 root root        264 Aug  3 11:36 ..
-rw-------.  1 root root       1774 Dec 30  2017 anaconda-ks.cfg
-rw-------.  1 root root       3194 Aug  3 10:05 .bash_history
-rw-r--r--.  1 root root         18 Dec 29  2013 .bash_logout
-rw-r--r--.  1 root root        176 Dec 29  2013 .bash_profile
-rw-r--r--.  1 root root        176 Dec 29  2013 .bashrc
drwx------.  3 root root         17 Aug  3 11:09 .cache
-rwxr-xr-x.  1 root root 7411329024 Mar 20 14:20 CentOS-7-aarch64-Everything.iso
-rw-r--r--.  1 root root        100 Dec 29  2013 .cshrc
-rwxrwxrwx.  1 root root         74 May 24 15:04 force-eth0-100Mbps.sh
-rwxr-xr-x.  1 root root        493 May 30 15:08 lvm-resize-sda.sh
drwx------.  2 root root         38 Aug  3 13:28 .ssh
-rw-r--r--.  1 root root        129 Dec 29  2013 .tcshrc
drwxr-xr-x.  2 root root       4096 Apr 19 10:41 updates
-rw-------.  1 root root       5360 Aug  3 12:43 .viminfo
[root@localhost ~]# cd .ssh/
[root@localhost .ssh]# ll
total 8
-rw-------. 1 root root 1679 Aug  3 13:28 id_rsa
-rw-r--r--. 1 root root  396 Aug  3 13:28 id_rsa.pub

        c. On the master, append id_rsa.pub to the authorized keys file

[root@localhost .ssh]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost .ssh]# ll
total 12
-rw-r--r--. 1 root root  792 Aug  3 13:35 authorized_keys
-rw-------. 1 root root 1679 Aug  3 13:28 id_rsa
-rw-r--r--. 1 root root  396 Aug  3 13:28 id_rsa.pub

        d. Fix the permissions on "authorized_keys"

[root@localhost .ssh]# ll
total 12
-rw-r--r--. 1 root root  792 Aug  3 13:35 authorized_keys
-rw-------. 1 root root 1679 Aug  3 13:28 id_rsa
-rw-r--r--. 1 root root  396 Aug  3 13:28 id_rsa.pub
[root@localhost .ssh]# chmod 600 ~/.ssh/authorized_keys
[root@localhost .ssh]# ll
total 12
-rw-------. 1 root root  792 Aug  3 13:35 authorized_keys
-rw-------. 1 root root 1679 Aug  3 13:28 id_rsa
-rw-r--r--. 1 root root  396 Aug  3 13:28 id_rsa.pub

e. As root, edit "/etc/ssh/sshd_config"

[root@localhost .ssh]# vim /etc/ssh/sshd_config
Change the following settings:
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys # public key file path (same file as created above)

Restart sshd so the changes take effect: service sshd restart

f. Verify it works

[root@localhost ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:a1NFzC3BwML16Ic2ZDgqOjyrX9DWWFTaipmSU3AQC34.
ECDSA key fingerprint is MD5:d7:cd:5c:29:db:b0:b1:33:47:fe:9a:91:48:f1:32:5c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Last login: Fri Aug  3 12:42:53 2018 from 10.2.154.39
[root@localhost ~]# ls
anaconda-ks.cfg  CentOS-7-aarch64-Everything.iso  force-eth0-100Mbps.sh  lvm-resize-sda.sh  updates  zhaochuang
[root@localhost ~]# exit
logout
Connection to localhost closed.
[root@localhost ~]#

g. Copy the public key to every slave machine

[root@localhost ~]# scp ~/.ssh/id_rsa.pub root@10.2.151.140:~/
The authenticity of host '10.2.151.140 (10.2.151.140)' can't be established.
ECDSA key fingerprint is SHA256:a1NFzC3BwML16Ic2ZDgqOjyrX9DWWFTaipmSU3AQC34.
ECDSA key fingerprint is MD5:d7:cd:5c:29:db:b0:b1:33:47:fe:9a:91:48:f1:32:5c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.2.151.140' (ECDSA) to the list of known hosts.
root@10.2.151.140's password: 
id_rsa.pub                                                                                                                                 100%  396   194.4KB/s   00:00    
[root@localhost ~]#

h. Log in to each slave node and configure it

《1》Create the .ssh directory under ~/

$:mkdir ~/.ssh
$:chmod 700 ~/.ssh

《2》Append the key to the "authorized_keys" file

[root@localhost-slave1 ~]# cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost-slave1 ~]# chmod 600 ~/.ssh/authorized_keys

《3》Edit "/etc/ssh/sshd_config"

[root@localhost-slave1 ~]# vim /etc/ssh/sshd_config
RSAAuthentication yes 
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Restart sshd: service sshd restart

《4》From the master, test that passwordless ssh to the slave works

[root@localhost ~]# ssh 10.2.151.140
Last login: Fri Aug  3 12:42:57 2018 from 10.2.154.39
[root@localhost-slave1 ~]#

Delete the "id_rsa.pub" file under ~/: rm -rf ~/id_rsa.pub

At this point the master can ssh to this slave without a password; repeat the steps above for the remaining master and slave nodes!!!
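Repeating those steps by hand on every slave gets tedious. As a sketch (not from the original write-up), ssh-copy-id can append the key and fix permissions on each slave in one shot, prompting once per slave for the root password; DRY_RUN=1 only prints the commands:

```shell
# Sketch: distribute the master's public key to every slave in one loop.
# ssh-copy-id appends the key to the remote authorized_keys and fixes
# permissions. Set DRY_RUN=1 to only print the commands.
push_keys() {
    for ip in "$@"; do
        cmd="ssh-copy-id -i $HOME/.ssh/id_rsa.pub root@$ip"
        if [ -n "$DRY_RUN" ]; then
            echo "$cmd"
        else
            $cmd
        fi
    done
}

# push_keys 10.2.151.140 10.2.151.141 10.2.151.142
```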

(3) Configure passwordless ssh from the slaves to the master

a. Generate the slave's own key pair and append its public key to its "authorized_keys" file

[root@localhost-slave1 ~]# rm –r ~/id_rsa.pub
rm: cannot remove ‘–r’: No such file or directory
rm: remove regular file ‘/root/id_rsa.pub’? y
[root@localhost-slave1 ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bb17EEH7OwT1/CR/UZRCt09ZOfeW36TDdUlMrqb1aU0 root@localhost-slave1
The key's randomart image is:
+---[RSA 2048]----+
|           .o.++*|
|            .+oO*|
|            o.+=@|
|         . ..o.BX|
|        S o .*o=E|
|         .  =o=o*|
|           ...o=.|
|             .o. |
|            ..   |
+----[SHA256]-----+
[root@localhost-slave1 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost-slave1 ~]#

b. On the slave, scp the slave's public key "id_rsa.pub" to the master's "~/" directory, then append it to the master's "authorized_keys".

slave:

[root@localhost-slave1 ~]# scp ~/.ssh/id_rsa.pub root@10.2.151.138:~/
The authenticity of host '10.2.151.138 (10.2.151.138)' can't be established.
ECDSA key fingerprint is SHA256:a1NFzC3BwML16Ic2ZDgqOjyrX9DWWFTaipmSU3AQC34.
ECDSA key fingerprint is MD5:d7:cd:5c:29:db:b0:b1:33:47:fe:9a:91:48:f1:32:5c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.2.151.138' (ECDSA) to the list of known hosts.
root@10.2.151.138's password: 
id_rsa.pub                                                                                                                                 100%  403    72.3KB/s   00:00    
[root@localhost-slave1 ~]#

master:

[root@localhost ~]# cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost ~]# ll
total 7237648
-rw-------. 1 root root       1774 Dec 30  2017 anaconda-ks.cfg
-rwxr-xr-x. 1 root root 7411329024 Mar 20 14:20 CentOS-7-aarch64-Everything.iso
-rwxrwxrwx. 1 root root         74 May 24 15:04 force-eth0-100Mbps.sh
-rw-r--r--. 1 root root        403 Aug  3 14:21 id_rsa.pub
-rwxr-xr-x. 1 root root        493 May 30 15:08 lvm-resize-sda.sh
drwxr-xr-x. 2 root root       4096 Apr 19 10:41 updates
drwxr-xr-x. 2 root root        107 Aug  3 08:49 zhaochuang
[root@localhost ~]# rm -rf id_rsa.pub

c. Test passwordless login in both directions

From slave to master:

[root@localhost-slave1 ~]# ssh 10.2.151.138
Last login: Fri Aug  3 13:44:30 2018 from ::1
[root@localhost ~]#

From master to slave:

[root@localhost ~]# ssh 10.2.151.140
Last login: Fri Aug  3 14:11:05 2018 from 10.2.151.138
[root@localhost-slave1 ~]#

Master and slave can now log in to each other without passwords. (Repeat the same process for the other master and slave nodes.)

8. Install and configure ZooKeeper

(1) ZooKeeper overview

         ZooKeeper is a distributed coordination service: it provides coordination for the user's distributed applications.

     1) ZooKeeper exists to serve other distributed programs

     2) ZooKeeper is itself a distributed program (as long as more than half its nodes are alive, zk keeps serving)

     3) The services ZooKeeper provides cover: master/slave coordination, dynamic join/leave of server nodes, unified configuration management, distributed shared locks, unified naming service, and more

     4) Despite all those services, under the hood ZooKeeper really provides only two capabilities:

           a) managing (storing and reading) data submitted by users

           b) providing data-watch notifications for user nodes

(2) Install ZooKeeper

Download the ZooKeeper tarball: http://mirror.bit.edu.cn/apache/zookeeper/

Extract zookeeper-3.4.12.tar.gz into /usr/local/zookeeper:

$:tar -xvf zookeeper-3.4.12.tar.gz -C /usr/local/zookeeper/
$:cd /usr/local/zookeeper/zookeeper-3.4.12
$:cp conf/zoo_sample.cfg conf/zoo.cfg
$:vim conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/log
clientPort=12181
server.1=192.168.122.152:2888:3888
server.2=192.168.122.153:2888:3888

What these parameters mean:

*tickTime: the heartbeat interval, in milliseconds.

*initLimit: a ZooKeeper ensemble contains several servers, one of which is the leader and the rest followers. initLimit bounds the time a follower may take to connect and sync with the leader at startup. Here it is set to 10, meaning 10 tickTime intervals, i.e. 10*2000 = 20000 ms = 20 s.

*syncLimit: bounds the time allowed for messages, requests, and replies between leader and follower. Here it is set to 5, meaning 5 tickTime intervals, i.e. 10000 ms.

*server.X=A:B:C, where X is a number identifying which server this is, A is the server's IP address, B is the port this server uses to exchange messages with the ensemble leader, and C is the port used for leader election. In a pseudo-cluster (several servers on one machine) the B and C ports must differ between servers; on separate machines they can be the same.

*dataDir: the directory where ZooKeeper stores its data.

*dataLogDir: the log directory; it can be any directory. If unset, it defaults to the same value as dataDir.

*clientPort: the port ZooKeeper listens on for client connections. The default 2181 clashed easily here, so it was changed to 12181. Use netstat -lntup to confirm the new port is free, and lsof -i:2181 to see that the old one was in use.

Also:
Port 2888 is used for communication between ZooKeeper servers.
Port 3888 is used for leader election.
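The timeout windows implied by the zoo.cfg above can be double-checked right in the shell:

```shell
# Sanity arithmetic for the timeouts implied by zoo.cfg.
tickTime=2000
initLimit=10
syncLimit=5
echo "follower init window: $((initLimit * tickTime)) ms"        # prints 20000 ms
echo "leader/follower sync window: $((syncLimit * tickTime)) ms" # prints 10000 ms
```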

(3) Create the directories the config refers to

$:mkdir -p /data/zookeeper/data
$:mkdir -p /data/zookeeper/log

(4) In the dataDir configured above, create a myid file containing a single number identifying which server this is. The number must match the X of the corresponding server.X line in zoo.cfg, and must be unique within the cluster.

Example: echo 1 > /data/zookeeper/data/myid
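A tiny helper makes the per-node myid step less error-prone; the function name and the per-node comments are illustrative:

```shell
# Sketch: write a node's myid file so it matches its server.X entry in zoo.cfg.
write_myid() {  # $1 = server number, $2 = dataDir
    mkdir -p "$2"
    echo "$1" > "$2/myid"
}

# On the first  node (server.1): write_myid 1 /data/zookeeper/data
# On the second node (server.2): write_myid 2 /data/zookeeper/data
```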

Configure the ZooKeeper environment:

vim /etc/profile

         #zookeeper
         export ZOOKEEPER_HOME=/usr/local/zookeeper/zookeeper-3.4.12
         export PATH=$ZOOKEEPER_HOME/bin:$PATH
Save, exit, and apply: source /etc/profile

                Remember to disable the firewall on all nodes!!!!

From the bin directory, run:

./zkServer.sh start      # start the server

./zkServer.sh status     # check status

jps                      # list Java processes; QuorumPeerMain is the zookeeper process, so seeing it means startup was normal

tail -500f zookeeper.out # server output

./zkServer.sh stop       # stop the zookeeper process

Error:

[root@localhost bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

Fix:

This error was painful: a whole morning of searching and plenty of online threads later, it turned out the firewall on the slave was still enabled!!

(5) Configure ZooKeeper to start on boot

$:cd /etc/rc.d/init.d
$:touch zookeeper
$:chmod +x zookeeper

# Edit the zookeeper file and put the following in it
#!/bin/bash
#chkconfig:2345 20 90
#description:zookeeper
#processname:zookeeper
export JAVA_HOME=/usr
export PATH=$JAVA_HOME/bin:$PATH
case $1 in
          start)su root /usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh start;;
          stop)su root /usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh stop;;
          status)su root /usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh status;;
          restart)su root /usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh restart;;
          *)  echo "require start|stop|status|restart"  ;;
esac

We can now start and stop the service with service zookeeper start/stop.

Register zookeeper with chkconfig so it starts on boot:

chkconfig zookeeper on

chkconfig --add zookeeper

After adding it, run chkconfig --list to confirm zookeeper appears in the list.

[root@localhost init.d]# chkconfig zookeeper on
[root@localhost init.d]# chkconfig --add zookeeper
[root@localhost init.d]# chkconfig --list

Note: This output shows SysV services only and does not include native
      systemd services. SysV configuration data might be overridden by native
      systemd configuration.

      If you want to list systemd services use 'systemctl list-unit-files'.
      To see services enabled on particular target use
      'systemctl list-dependencies [target]'.

netconsole     	0:off	1:off	2:off	3:off	4:off	5:off	6:off
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
zookeeper      	0:off	1:off	2:on	3:on	4:on	5:on	6:off

9. Configure Mesos

(1) Generate working copies from the config template files

[root@localhost mesos]# ls
mesos-agent-env.sh.template  mesos-deploy-env.sh.template  mesos-master-env.sh.template  mesos-slave-env.sh.template
[root@localhost mesos]# pwd
/usr/local/mesos/etc/mesos
[root@localhost mesos]# cp mesos-agent-env.sh.template mesos-agent-env.sh
[root@localhost mesos]# cp mesos-master-env.sh.template mesos-master-env.sh
[root@localhost mesos]# cp mesos-deploy-env.sh.template mesos-deploy-env.sh
[root@localhost mesos]# cp mesos-slave-env.sh.template mesos-slave-env.sh
[root@localhost mesos]# ls
mesos-agent-env.sh           mesos-deploy-env.sh           mesos-master-env.sh           mesos-slave-env.sh
mesos-agent-env.sh.template  mesos-deploy-env.sh.template  mesos-master-env.sh.template  mesos-slave-env.sh.template
[root@localhost mesos]#

(2) Edit mesos-master-env.sh

#vim mesos-master-env.sh

export MESOS_log_dir=/data/mesos/log
export MESOS_work_dir=/data/mesos/data
export MESOS_ZK=zk://10.2.151.138:12181/mesos
export MESOS_quorum=1

(3) Edit mesos-agent-env.sh and mesos-slave-env.sh (the contents are identical)

#vim mesos-agent-env.sh

export MESOS_master=10.2.151.138:5050
export MESOS_log_dir=/data/mesos/log
export MESOS_work_dir=/data/mesos/run
export MESOS_isolation=cgroups

#vim mesos-slave-env.sh

export MESOS_master=10.2.151.138:5050
export MESOS_log_dir=/data/mesos/log
export MESOS_work_dir=/data/mesos/run
export MESOS_isolation=cgroups

The mesos-deploy-env.sh file does not need to be changed!!

(4) Create and populate the masters and slaves files

[root@localhost mesos]# echo 10.2.151.138 > masters
[root@localhost mesos]# ls
masters             mesos-agent-env.sh.template  mesos-deploy-env.sh.template  mesos-master-env.sh.template  mesos-slave-env.sh.template
mesos-agent-env.sh  mesos-deploy-env.sh          mesos-master-env.sh           mesos-slave-env.sh
[root@localhost mesos]# cat masters 
10.2.151.138
[root@localhost mesos]# ls
masters             mesos-agent-env.sh.template  mesos-deploy-env.sh.template  mesos-master-env.sh.template  mesos-slave-env.sh.template
mesos-agent-env.sh  mesos-deploy-env.sh          mesos-master-env.sh           mesos-slave-env.sh
[root@localhost mesos]# cp masters slaves
[root@localhost mesos]# vim slaves 
10.2.151.140

(5) vim /usr/local/mesos/sbin/mesos-daemon.sh

Change ulimit -n 8192 to ulimit -n 1024.

This line asks the OS for more open-file descriptors, but ulimit -a shows the limit here is 1024 and the system will not allow a larger value, so change 8192 to 1024.

[root@localhost mesos]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 125683
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 125683
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
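It is also worth comparing the soft limit against the hard limit before editing mesos-daemon.sh; if the hard limit were already 8192 or higher, raising the soft limit would be an alternative to lowering the script's value. A quick check:

```shell
# Print the current soft and hard open-file limits; the ulimit -n call in
# mesos-daemon.sh must not exceed the hard limit.
echo "soft open files: $(ulimit -Sn)"
echo "hard open files: $(ulimit -Hn)"
```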

(6) Configure the profile

#vim /etc/profile

#mesos
export MESOS_HOME=/usr/local/mesos
export PATH=${PATH}:${MESOS_HOME}/sbin:${MESOS_HOME}/bin
Save, exit, and apply: source /etc/profile

(7) Create the Mesos data directories

$:mkdir -p /data/mesos/data
$:mkdir -p /data/mesos/run
$:mkdir -p /data/mesos/log

(8) Copy everything over to slave1 and slave2, then, from the master's sbin directory, start the cluster:

./mesos-start-cluster.sh
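The copy step can be scripted; this rsync loop is a sketch (run it before mesos-start-cluster.sh), and the directory list is an assumption based on the paths used above. DRY_RUN=1 only prints what it would do:

```shell
# Sketch: sync the Mesos install tree and data directories to each slave.
# Set DRY_RUN=1 to only print the commands.
sync_slaves() {
    for ip in "$@"; do
        for d in /usr/local/mesos /data/mesos; do
            cmd="rsync -a $d/ root@$ip:$d/"
            if [ -n "$DRY_RUN" ]; then echo "$cmd"; else $cmd; fi
        done
    done
}

# sync_slaves 10.2.151.140
```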

The mesos agents on the slaves come up as well:

[root@localhost sbin]# ./mesos-start-cluster.sh
Starting mesos-master on 10.2.151.138
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 10.2.151.138 /usr/local/mesos/sbin/mesos-daemon.sh mesos-master </dev/null >/dev/null
Warning: Permanently added '10.2.151.138' (ECDSA) to the list of known hosts.
Starting mesos-agent on 10.2.151.140
ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 10.2.151.140 /usr/local/mesos/sbin/mesos-daemon.sh mesos-agent </dev/null >/dev/null
Everything's started!

On the master:

[root@localhost sbin]# ps -ef|grep mesos
root      6516     1  3 17:07 ?        00:00:01 /usr/local/mesos/sbin/mesos-master
root      6609  3996  0 17:08 pts/1    00:00:00 grep --color=auto mesos

On the slave:

[root@localhost-slave1 mesos]# ps -ef|grep mesos
root      4718     1  1 17:07 ?        00:00:02 /usr/local/mesos/sbin/mesos-agent
root      4787  2399  0 17:09 pts/1    00:00:00 grep --color=auto mesos

(9) Visit http://<master-ip>:5050 to see the monitoring page

(screenshot: the Mesos monitoring page)
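The master can also be checked from the shell through its /master/state HTTP endpoint (a real Mesos endpoint; the master_is_up wrapper and the CURL override are illustrative):

```shell
# CLI check against the Mesos master's /master/state HTTP endpoint.
# CURL can be overridden (e.g. for testing); defaults to real curl.
MASTER=${MASTER:-10.2.151.138:5050}
CURL=${CURL:-curl -s}
master_is_up() {
    $CURL "http://$MASTER/master/state" | grep -q '"leader"'
}

# master_is_up && echo "master is up"
```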

10. Deploy Docker

(1) Install docker with yum

yum install docker

(2) Start docker

systemctl start docker.service

(3) Enable it at boot

systemctl enable docker.service

11. Install Marathon

(1) Download http://downloads.mesosphere.com/marathon/v1.1.1/marathon-1.1.1.tgz

(2) Extract and install

$:mkdir -p /usr/local/marathon
$:tar -xvf marathon-1.1.1.tgz -C /usr/local/marathon
$:cd /usr/local/marathon/marathon-1.1.1
$:mv * ../
$:cd ..
$:rm -rf marathon-1.1.1

(3) Run

$:./bin/start --master zk://10.2.151.138:12181,10.2.151.140:12181/mesos --zk zk://10.2.151.138:12181,10.2.151.140:12181/marathon

Browse to http://10.2.151.138:8080        (Marathon must be running during the test)

(screenshot: the Marathon web UI)
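Marathon can likewise be probed from the shell via its /v2/info REST endpoint (a real Marathon endpoint; the wrapper and CURL override are a sketch):

```shell
# CLI liveness check against Marathon's /v2/info REST endpoint.
MARATHON=${MARATHON:-10.2.151.138:8080}
CURL=${CURL:-curl -s}
marathon_is_up() {
    $CURL "http://$MARATHON/v2/info" | grep -q '"name"'
}

# marathon_is_up && echo "marathon is up"
```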

Example framework test

[root@localhost-master mesos-1.6.0]# ./src/examples/python/test-framework 10.2.151.138:5050
I0809 17:19:49.814330 10410 sched.cpp:232] Version: 1.6.0
I0809 17:19:49.839795 10461 sched.cpp:336] New master detected at master@10.2.151.138:5050
I0809 17:19:49.841190 10461 sched.cpp:351] No credentials provided. Attempting to register without authentication
I0809 17:19:49.854107 10463 sched.cpp:749] Framework registered with 3c917ac8-a83e-4568-bb3c-307cca134600-0000
Registered with framework ID 3c917ac8-a83e-4568-bb3c-307cca134600-0000
Received offer 3c917ac8-a83e-4568-bb3c-307cca134600-O0 with cpus: 40.0 and mem: 96411.0
Launching task 0 using offer 3c917ac8-a83e-4568-bb3c-307cca134600-O0
Launching task 1 using offer 3c917ac8-a83e-4568-bb3c-307cca134600-O0
Launching task 2 using offer 3c917ac8-a83e-4568-bb3c-307cca134600-O0
Launching task 3 using offer 3c917ac8-a83e-4568-bb3c-307cca134600-O0
Launching task 4 using offer 3c917ac8-a83e-4568-bb3c-307cca134600-O0
Task 0 is in state TASK_RUNNING
Task 1 is in state TASK_RUNNING
Task 2 is in state TASK_RUNNING
Task 3 is in state TASK_RUNNING
Task 4 is in state TASK_RUNNING
Task 0 is in state TASK_FINISHED
Task 1 is in state TASK_FINISHED
Task 2 is in state TASK_FINISHED
Task 3 is in state TASK_FINISHED
Task 4 is in state TASK_FINISHED
All tasks done, waiting for final framework message
Received message: 'data with a \x00 byte'
Received message: 'data with a \x00 byte'
Received message: 'data with a \x00 byte'
Received message: 'data with a \x00 byte'
Received message: 'data with a \x00 byte'
All tasks done, and all messages received, exiting
I0809 17:19:54.263712 10459 sched.cpp:2013] Asked to stop the driver
I0809 17:19:54.264086 10459 sched.cpp:1189] Stopping framework 3c917ac8-a83e-4568-bb3c-307cca134600-0000
I0809 17:19:54.268311 10410 sched.cpp:2013] Asked to stop the driver
[root@localhost-master mesos-1.6.0]# pwd
/usr/local/mesos/mesos-1.6.0

Testing Marathon (steps copied from elsewhere, but they work)

3.2.1 Install netcat on every node

# yum install nmap-ncat

3.2.2 In the Marathon page, click "Create Application" to create a task

command: while true; do ( echo "HTTP/1.0 200 Ok"; echo; echo "Hello World" ) | nc -l $PORT; done

(screenshot: the Marathon "Create Application" dialog)
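The same app can be submitted without the UI through Marathon's /v2/apps REST endpoint (a real API; the app id and resource numbers below are illustrative):

```shell
# Submit the Hello World app via Marathon's REST API instead of the UI.
# hello_json emits an illustrative app definition; submit_app POSTs it.
hello_json() {
    cat <<'EOF'
{
  "id": "hello-world",
  "cmd": "while true; do ( echo 'HTTP/1.0 200 Ok'; echo; echo 'Hello World' ) | nc -l $PORT; done",
  "cpus": 0.1,
  "mem": 32,
  "instances": 2
}
EOF
}

submit_app() {
    hello_json > hello.json
    curl -X POST -H "Content-Type: application/json" \
         -d @hello.json "http://10.2.151.138:8080/v2/apps"
}

# submit_app
```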

3.2.3 Click "Create" to create and launch the task

3.2.4 In the Applications page, click the task to see its details

The tasks are spread across the two nodes; access the port the HTTP service started on

(screenshot: the task details page)

3.2.5 On each node, the nc processes can be seen running

(screenshots: the nc processes on nodes test166 and test167)