Introduction

SaltStack is a centralized management platform for server infrastructure, providing configuration management, remote execution, and monitoring. It is often described as a simplified Puppet combined with an enhanced Func. SaltStack is written in Python and built on the lightweight ZeroMQ message queue together with third-party Python modules (pyzmq, PyCrypto, Jinja2, msgpack-python, PyYAML, and so on).

With a SaltStack deployment we can run commands in batch across thousands of servers, centralize configuration by business role, distribute files, collect server data, and manage the base OS and its packages. SaltStack is a powerful tool for ops engineers to improve efficiency and standardize configuration and operations.

Official site: https://saltstack.com/

Official docs: https://docs.saltstack.com/en/latest/

GitHub:https://github.com/saltstack

SaltStack China user group: https://www.saltstack.cn/

Features

(1) Simple, fast deployment;
(2) Supports most UNIX/Linux platforms as well as Windows;
(3) Centralized master/minion management;
(4) Simple configuration, powerful features, highly extensible;
(5) Certificate-based authentication between master and minion, secure and reliable;
(6) Supports an API and custom modules, easily extended with Python.


 

Three run modes:
Local, Master/Minion, Salt SSH

Three core functions:

remote execution, configuration management, cloud management

Operations

Installation:

IP address        Role
192.168.20.93     master
192.168.20.172    minion

On the master:
yum install salt-master salt-minion -y
On the minion:
yum install salt-minion -y
Enable at boot:
chkconfig salt-minion on
Start the master:
/etc/init.d/salt-master start
Edit: vim /etc/salt/minion (on both the master and the minion)
Set line 17 to the master's IP address:
master: 192.168.20.93
id: XXXXXXXX          # the name this minion shows up as on the master.

 

Master/Minion authentication and quick start

(1) When the minion starts for the first time, it generates minion.pem (private key) and minion.pub (public key) under /etc/salt/pki/minion/ (the path is configurable in /etc/salt/minion), then sends minion.pub to the master.

(2) After receiving the minion's public key, the master accepts it via the salt-key command; the key is then stored under /etc/salt/pki/master/minions, named after the minion id, and the master can issue commands to that minion.

On the minion:
/etc/salt/pki/minion
[root@openfire2 minion]# ll
total 8
-r-------- 1 root root 1679 May 31 12:57 minion.pem
-rw-r--r-- 1 root root  451 May 31 12:57 minion.pub

On the master:
[root@js-93 master]# pwd
/etc/salt/pki/master
[root@js-93 master]# ll
total 28
-r-------- 1 root root 1679 May 31 13:00 master.pem
-rw-r--r-- 1 root root  451 May 31 13:00 master.pub
drwxr-xr-x 2 root root 4096 May 31 13:00 minions
drwxr-xr-x 2 root root 4096 May 31 13:00 minions_autosign
drwxr-xr-x 2 root root 4096 May 31 13:00 minions_denied
drwxr-xr-x 2 root root 4096 May 31 13:05 minions_pre
drwxr-xr-x 2 root root 4096 May 31 13:00 minions_rejected

minions_pre contains the hostnames awaiting acceptance:

[root@js-93 master]# tree
.
├── master.pem
├── master.pub
├── minions
├── minions_autosign
├── minions_denied
├── minions_pre
│   └── openfire2
└── minions_rejected

 

[root@js-93 master]# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:    # keys not yet accepted
openfire2
Rejected Keys:

Accept the key:

-A accepts all pending keys;

[root@js-93 master]# salt-key -a openfire2
The following keys are going to be accepted:
Unaccepted Keys:
openfire2
Proceed? [n/Y] Y
Key for minion openfire2 accepted.

Comparing with the earlier tree output, openfire2 has moved to a different directory.

[root@js-93 master]# tree
.
├── master.pem
├── master.pub
├── minions
│   └── openfire2
├── minions_autosign
├── minions_denied
├── minions_pre
└── minions_rejected
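On disk, accepting a key amounts to moving the minion's public key from minions_pre to minions, matching the two tree listings above. A minimal sketch of that step (an illustration in a temporary directory, not Salt's actual implementation):

```python
import os
import shutil
import tempfile

def accept_key(pki_dir, minion_id):
    """Move a pending minion public key from minions_pre to minions."""
    pending = os.path.join(pki_dir, "minions_pre", minion_id)
    accepted = os.path.join(pki_dir, "minions", minion_id)
    if not os.path.exists(pending):
        raise FileNotFoundError(pending)
    shutil.move(pending, accepted)
    return accepted

# Simulate the /etc/salt/pki/master layout in a temporary directory.
pki = tempfile.mkdtemp()
for d in ("minions", "minions_pre", "minions_rejected"):
    os.makedirs(os.path.join(pki, d))
with open(os.path.join(pki, "minions_pre", "openfire2"), "w") as f:
    f.write("fake public key")

accept_key(pki, "openfire2")
```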

 

The minion now also has the master's public key:

[root@openfire2 minion]# ls
minion_master.pub

 

Basic operations:

[root@js-93 master]# salt '*' test.ping
openfire2:
    True
[root@js-93 master]# salt '*' cmd.run 'uptime'
openfire2:
     13:17:55 up 255 days,  1:29,  1 user,  load average: 0.08, 0.07, 0.05
You have mail in /var/spool/mail/root
[root@js-93 master]# salt '*' cmd.run 'df -h'
openfire2:
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3       195G  4.6G  181G   3% /
    tmpfs           1.9G   12K  1.9G   1% /dev/shm
    /dev/sda1       194M   27M  158M  15% /boot

 

Quick start:

Edit /etc/salt/master on the master:  # this file must be indented with spaces; tab characters will cause errors.
416 file_roots:
417   base:
418     - /srv/salt

Then check whether the /srv/salt directory exists on the master and create it if not: mkdir /srv/salt
Then restart the service: /etc/init.d/salt-master restart

 

Write a state for the Apache service:

[root@js-93 salt]# cat apache.sls 
apache-install:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel

apache-service:
  service.running:
    - name: httpd
    - enable: True                # start at boot
    - reload: True

**** Pay close attention to indentation: each level is indented by exactly 2 spaces, entries at the same level align, and each nested level adds two more spaces. Never use any other whitespace characters such as tabs.

 

Apply the apache.sls state.

On the master:
[root@js-93 salt]# salt '*' state.sls apache
openfire2:
----------
ID: apache-install
Function: pkg.installed
Name: httpd
Result: True
Comment: Package httpd is already installed.
Started: 14:46:21.941103
Duration: 1343.879 ms
Changes: 
----------
ID: apache-install
Function: pkg.installed
Name: httpd-devel
Result: True
Comment: Package httpd-devel is already installed.
Started: 14:46:23.285596
Duration: 1.452 ms
Changes: 
----------
ID: apache-service
Function: service.running
Name: httpd
Result: True
Comment: Service httpd has been enabled, and is running
Started: 14:46:23.288696
Duration: 263.856 ms
Changes: 
----------
httpd:
True

Summary
------------
Succeeded: 3 (changed=1)
Failed: 0
------------
Total states run: 3

On the minion:
[root@openfire2 minion]# ps -ef|grep yum

root 6961 6921 4 14:38 ? 00:00:00 /usr/bin/python2.6.6 /usr/bin/yum --quiet check-update

[root@openfire2 minion]# chkconfig --list|grep httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

[root@openfire2 minion]# ps -ef|grep httpd
root 7170 1 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7172 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7173 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7174 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7175 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7176 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7177 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7179 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
apache 7180 7170 0 14:46 ? 00:00:00 /usr/sbin/httpd
root 7190 6358 0 14:47 pts/0 00:00:00 grep httpd

 

 

SaltStack data systems

The Grains data system

Grains store data collected when the minion starts (stored on the minion side); they are collected only at minion start and are re-collected after a restart.


[root@js-93 salt]# salt '*' grains.ls
openfire2:
    - SSDs
    - cpu_flags
    - cpu_model
    - cpuarch
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - gpus
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_interfaces
    - ip6_interfaces
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - machine_id
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - server_id
    - shell
    - virtual
    - zmqversion

The grain names


[root@js-93 salt]# salt '*' grains.items
openfire2:
    ----------
    SSDs:
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - dts
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - ht
        - syscall
        - nx
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - pebs
        - bts
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - aperfmperf
        - unfair_spinlock
        - pni
        - pclmulqdq
        - ssse3
        - cx16
        - sse4_1
        - sse4_2
        - x2apic
        - popcnt
        - aes
        - hypervisor
        - lahf_lm
        - arat
        - dts
    cpu_model:
        Intel(R) Xeon(R) CPU           E5606  @ 2.13GHz
    cpuarch:
        x86_64
    domain:
    fqdn:
        openfire2
    fqdn_ip4:
        - 192.168.20.172
    fqdn_ip6:
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              unknown
    host:
        openfire2
    hwaddr_interfaces:
        ----------
        eth1:
            00:0c:29:ed:a9:14
        lo:
            00:00:00:00:00:00
    id:
        openfire2
    init:
        upstart
    ip4_interfaces:
        ----------
        eth1:
            - 192.168.20.172
        lo:
            - 127.0.0.1
    ip6_interfaces:
        ----------
        eth1:
            - fe80::20c:29ff:feed:a914
        lo:
            - ::1
    ip_interfaces:
        ----------
        eth1:
            - 192.168.20.172
            - fe80::20c:29ff:feed:a914
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.20.172
    ipv6:
        - ::1
        - fe80::20c:29ff:feed:a914
    kernel:
        Linux
    kernelrelease:
        2.6.32-431.el6.x86_64
    locale_info:
        ----------
        defaultencoding:
            UTF8
        defaultlanguage:
            en_US
        detectedencoding:
            UTF-8
    localhost:
        openfire2
    lsb_distrib_codename:
        Final
    lsb_distrib_id:
        CentOS
    lsb_distrib_release:
        6.5
    machine_id:
        e8c670405579b8d0809d7d1800000022
    master:
        192.168.20.93
    mdadm:
    mem_total:
        3824
    nodename:
        openfire2
    num_cpus:
        4
    num_gpus:
        1
    os:
        CentOS
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        Final
    osfinger:
        CentOS-6
    osfullname:
        CentOS
    osmajorrelease:
        6
    osrelease:
        6.5
    osrelease_info:
        - 6
        - 5
    path:
        /sbin:/usr/sbin:/bin:/usr/bin
    ps:
        ps -efH
    pythonexecutable:
        /usr/bin/python2.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python26.zip
        - /usr/lib64/python2.6
        - /usr/lib64/python2.6/plat-linux2
        - /usr/lib64/python2.6/lib-tk
        - /usr/lib64/python2.6/lib-old
        - /usr/lib64/python2.6/lib-dynload
        - /usr/lib64/python2.6/site-packages
        - /usr/lib/python2.6/site-packages
        - /usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info
    pythonversion:
        - 2
        - 6
        - 6
        - final
        - 0
    saltpath:
        /usr/lib/python2.6/site-packages/salt
    saltversion:
        2015.5.10
    saltversioninfo:
        - 2015
        - 5
        - 10
        - 0
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    server_id:
        1314904039
    shell:
        /bin/bash
    virtual:
        VMware
    zmqversion:
        3.2.5

The detailed grain values

Get a single grain from a machine:
[root@js-93 salt]# salt '*' grains.item fqdn
openfire2:
    ----------
    fqdn:
        openfire2
[root@js-93 salt]# salt '*' grains.get  fqdn
openfire2:
    openfire2

[root@js-93 salt]# salt '*' grains.get ip_interfaces
openfire2:
----------
eth1:
- 192.168.20.172
- fe80::20c:29ff:feed:a914
lo:
- 127.0.0.1
- ::1
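grains.get can also take a colon-delimited path into nested values, e.g. `grains.get ip_interfaces:eth1`. A minimal sketch of that nested lookup, using sample data from the grains.items output above:

```python
def grains_get(grains, path, default=None):
    """Traverse nested grain data by a colon-delimited key path."""
    value = grains
    for key in path.split(":"):
        if isinstance(value, dict) and key in value:
            value = value[key]
        else:
            return default
    return value

# Sample data matching the grains shown above for openfire2.
grains = {
    "fqdn": "openfire2",
    "ip_interfaces": {
        "eth1": ["192.168.20.172", "fe80::20c:29ff:feed:a914"],
        "lo": ["127.0.0.1", "::1"],
    },
}
```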

 

Grain usage examples:

Run a remote command only on systems matching a grain:

[root@js-93 salt]# salt -G os:CentOS cmd.run 'free -m'
openfire2:
                 total       used       free     shared    buffers     cached
    Mem:          3824       2860        964          0        169       1358
    -/+ buffers/cache:       1333       2491
    Swap:         2047          0       2047

The -G flag matches targets against grains data.
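Conceptually, -G compares the target's key:value pair against each minion's grains, with shell-style globs allowed in the value (so `os:Cent*` would also match). A sketch of that selection logic (illustration only, not Salt's matcher):

```python
from fnmatch import fnmatch

def match_grain(grains, target):
    """Return True if a 'key:value' target matches the grains dict (globs allowed)."""
    key, _, pattern = target.partition(":")
    return fnmatch(str(grains.get(key, "")), pattern)

# Hypothetical minion inventory for illustration.
minions = {
    "openfire2": {"os": "CentOS", "osrelease": "6.5"},
    "debian1": {"os": "Debian", "osrelease": "8"},
}
matched = [mid for mid, g in minions.items() if match_grain(g, "os:CentOS")]
```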

 

Grains can also assign roles to minions; salt can then execute remotely against the matching role.

On the minion:
[root@openfire2 minion]# cat /etc/salt/grains    # don't use "roles" as the key name here, or it will conflict
web: check12 
[root@openfire2 minion]# /etc/init.d/salt-minion restart
On the master:
[root@js-93 master]# salt -G web:check12 cmd.run 'w'
openfire2:
     17:50:08 up 255 days,  6:01,  1 user,  load average: 0.12, 0.08, 0.01
    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
    root     pts/0    192.168.16.7     10:55   21.00s  0.58s  0.58s -bash

 

Grains can also be matched in the master's top.sls to run states against the targeted minions:

On the master:
[root@js-93 salt]# cat top.sls 
base:
  'web:check12':
    - match: grain
    - apache

Here apache refers to /srv/salt/apache.sls.
[root@js-93 salt]# salt '*' state.highstate    # running this returns the state results

 

 

The Pillar data system

The pillar debug data is disabled by default on the master. To enable it, edit /etc/salt/master around line 552 and change "#pillar_opts: False" to "pillar_opts: True".

[root@js-93 salt]# /etc/init.d/salt-master restart
[root@js-93 salt]# salt '*' pillar.items    # now shows a large amount of information.


[root@js-93 salt]# salt '*' pillar.items
openfire2:
    ----------
    master:
        ----------
        __role:
            master
        auth_mode:
            1
        auto_accept:
            False
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        cli_summary:
            False
        client_acl:
            ----------
        client_acl_blacklist:
            ----------
        cluster_masters:
        cluster_mode:
            paranoid
        con_cache:
            False
        conf_file:
            /etc/salt/master
        config_dir:
            /etc/salt
        cython_enable:
            False
        daemon:
            True
        default_include:
            master.d/*.conf
        enable_gpu_grains:
            False
        enforce_mine_cache:
            False
        enumerate_proxy_minions:
            False
        environment:
            None
        event_return:
        event_return_blacklist:
        event_return_queue:
            0
        event_return_whitelist:
        ext_job_cache:
        ext_pillar:
        extension_modules:
            /var/cache/salt/extmods
        external_auth:
            ----------
        failhard:
            False
        file_buffer_size:
            1048576
        file_client:
            local
        file_ignore_glob:
            None
        file_ignore_regex:
            None
        file_recv:
            False
        file_recv_max_size:
            100
        file_roots:
            ----------
            base:
                - /srv/salt
        fileserver_backend:
            - roots
        fileserver_followsymlinks:
            True
        fileserver_ignoresymlinks:
            False
        fileserver_limit_traversal:
            False
        gather_job_timeout:
            10
        gitfs_base:
            master
        gitfs_env_blacklist:
        gitfs_env_whitelist:
        gitfs_insecure_auth:
            False
        gitfs_mountpoint:
        gitfs_passphrase:
        gitfs_password:
        gitfs_privkey:
        gitfs_pubkey:
        gitfs_remotes:
        gitfs_root:
        gitfs_user:
        hash_type:
            md5
        hgfs_base:
            default
        hgfs_branch_method:
            branches
        hgfs_env_blacklist:
        hgfs_env_whitelist:
        hgfs_mountpoint:
        hgfs_remotes:
        hgfs_root:
        id:
            openfire2
        interface:
            0.0.0.0
        ioflo_console_logdir:
        ioflo_period:
            0.01
        ioflo_realtime:
            True
        ioflo_verbose:
            0
        ipv6:
            False
        jinja_lstrip_blocks:
            False
        jinja_trim_blocks:
            False
        job_cache:
            True
        keep_jobs:
            24
        key_logfile:
            /var/log/salt/key
        keysize:
            2048
        log_datefmt:
            %H:%M:%S
        log_datefmt_logfile:
            %Y-%m-%d %H:%M:%S
        log_file:
            /var/log/salt/master
        log_fmt_console:
            [%(levelname)-8s] %(message)s
        log_fmt_logfile:
            %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s][%(process)d] %(message)s
        log_granular_levels:
            ----------
        log_level:
            warning
        loop_interval:
            60
        maintenance_floscript:
            /usr/lib/python2.6/site-packages/salt/daemons/flo/maint.flo
        master_floscript:
            /usr/lib/python2.6/site-packages/salt/daemons/flo/master.flo
        master_job_cache:
            local_cache
        master_pubkey_signature:
            master_pubkey_signature
        master_roots:
            ----------
            base:
                - /srv/salt-master
        master_sign_key_name:
            master_sign
        master_sign_pubkey:
            False
        master_tops:
            ----------
        master_use_pubkey_signature:
            False
        max_event_size:
            1048576
        max_minions:
            0
        max_open_files:
            100000
        minion_data_cache:
            True
        minionfs_blacklist:
        minionfs_env:
            base
        minionfs_mountpoint:
        minionfs_whitelist:
        nodegroups:
            ----------
        open_mode:
            False
        order_masters:
            False
        outputter_dirs:
        peer:
            ----------
        permissive_pki_access:
            False
        pidfile:
            /var/run/salt-master.pid
        pillar_opts:
            True
        pillar_roots:
            ----------
            base:
                - /srv/pillar
        pillar_safe_render_error:
            True
        pillar_source_merging_strategy:
            smart
        pillar_version:
            2
        pillarenv:
            None
        ping_on_rotate:
            False
        pki_dir:
            /etc/salt/pki/master
        preserve_minion_cache:
            False
        pub_hwm:
            1000
        publish_port:
            4505
        publish_session:
            86400
        queue_dirs:
        raet_alt_port:
            4511
        raet_clear_remotes:
            False
        raet_main:
            True
        raet_mutable:
            False
        raet_port:
            4506
        range_server:
            range:80
        reactor:
        reactor_refresh_interval:
            60
        reactor_worker_hwm:
            10000
        reactor_worker_threads:
            10
        renderer:
            yaml_jinja
        ret_port:
            4506
        root_dir:
            /
        rotate_aes_key:
            True
        runner_dirs:
        saltversion:
            2015.5.10
        search:
        search_index_interval:
            3600
        serial:
            msgpack
        show_jid:
            False
        show_timeout:
            True
        sign_pub_messages:
            False
        sock_dir:
            /var/run/salt/master
        sqlite_queue_dir:
            /var/cache/salt/master/queues
        ssh_passwd:
        ssh_port:
            22
        ssh_scan_ports:
            22
        ssh_scan_timeout:
            0.01
        ssh_sudo:
            False
        ssh_timeout:
            60
        ssh_user:
            root
        state_aggregate:
            False
        state_auto_order:
            True
        state_events:
            False
        state_output:
            full
        state_top:
            salt://top.sls
        state_top_saltenv:
            None
        state_verbose:
            True
        sudo_acl:
            False
        svnfs_branches:
            branches
        svnfs_env_blacklist:
        svnfs_env_whitelist:
        svnfs_mountpoint:
        svnfs_remotes:
        svnfs_root:
        svnfs_tags:
            tags
        svnfs_trunk:
            trunk
        syndic_dir:
            /var/cache/salt/master/syndics
        syndic_event_forward_timeout:
            0.5
        syndic_jid_forward_cache_hwm:
            100
        syndic_master:
        syndic_max_event_process_time:
            0.5
        syndic_wait:
            5
        timeout:
            5
        token_dir:
            /var/cache/salt/master/tokens
        token_expire:
            43200
        transport:
            zeromq
        user:
            root
        verify_env:
            True
        win_gitrepos:
            - https://github.com/saltstack/salt-winrepo.git
        win_repo:
            /srv/salt/win/repo
        win_repo_mastercachefile:
            /srv/salt/win/repo/winrepo.p
        worker_floscript:
            /usr/lib/python2.6/site-packages/salt/daemons/flo/worker.flo
        worker_threads:
            5
        zmq_filtering:
            False

The pillar data

 

Edit /etc/salt/master on the master:

# change pillar_opts back to False again
pillar_opts: False
pillar_roots:
  base:
    - /srv/pillar

[root@js-93 salt]# mkdir /srv/pillar
[root@js-93 salt]# /etc/init.d/salt-master restart  # restart

 

Usage:

First create a pillar data file:
[root@js-93 salt]# cat /srv/pillar/apache.sls 
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}

Then declare which minions can read this file:
[root@js-93 salt]# cat /srv/pillar/top.sls 
base:
  '*':
    - apache
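The '*' in top.sls is a glob over minion IDs. A sketch of how the top file maps minions to sls files (illustration only; the dict below is the parsed form of the top.sls above):

```python
from fnmatch import fnmatch

# Parsed form of the top.sls shown above.
top = {"base": {"*": ["apache"]}}

def sls_for(minion_id, top, env="base"):
    """Collect the sls files whose target glob matches this minion id."""
    out = []
    for target, sls_list in top[env].items():
        if fnmatch(minion_id, target):
            out.extend(sls_list)
    return out
```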

 

Inspect the pillar data:
[root@js-93 salt]# salt '*' pillar.items
openfire2:
    ----------
    apache:
        httpd
Because apache.sls assigns the value only on CentOS, only CentOS minions show the content above.
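The jinja conditional in apache.sls computes a per-minion value from the minion's grains. What it evaluates to, sketched in plain Python (an illustration, not Salt's renderer):

```python
def render_apache_pillar(grains):
    """Mirror the {% if grains['os'] %} logic in /srv/pillar/apache.sls."""
    if grains["os"] == "CentOS":
        return {"apache": "httpd"}
    elif grains["os"] == "Debian":
        return {"apache": "apache2"}
    # No branch matched: the minion gets no 'apache' pillar key at all.
    return {}
```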

 

[root@js-93 salt]# salt -I 'apache:httpd' test.ping
openfire2:
    Minion did not return. [No response]

You have mail in /var/spool/mail/root
[root@js-93 salt]# salt '*' saltutil.refresh_pillar
openfire2:
    True

When targeting fails like this, refresh the pillar first.

[root@js-93 salt]# salt -I 'apache:httpd' test.ping
openfire2:
True

 

 

Grains vs. Pillar

Grains: stored on the minion; static data; collected when the minion starts, refreshable with saltutil.sync_grains; stores basic minion data, e.g. for matching minions, and can also serve as asset-management data.

Pillar: stored on the master; dynamic data; defined on the master and assigned to specific minions, refreshable with saltutil.refresh_pillar; stores master-assigned data visible only to the targeted minions, suitable for sensitive data.

 

Remote execution in detail

Remote execution has three parts: targets (targeting), modules, and returners.

 

Targets (targeting)

https://docs.saltstack.com/en/latest/topics/targeting/ lists all supported targeting methods; pay particular attention to the regular-expression matching.


 

Modules

https://docs.saltstack.com/en/latest/ref/modules/all/ documents every official module.

 

Returners

https://docs.saltstack.com/en/latest/ref/returners/all/index.html

 

 

Salt access control

First configure the ACL in /etc/salt/master:
vi /etc/salt/master 

245 client_acl:
246   ryan:
247     - test.ping
248     - network.*
This grants the system user ryan only the test.ping function and the network.* functions;
no other modules or functions are allowed.
[ryan@js-93 ~]$ salt '*' test.ping
openfire2:
    True
[ryan@js-93 ~]$ salt '*' cmd.run 'w'
Failed to authenticate! This is most likely because this user is not permitted to execute commands, but there is a small possibility that a disk error occurred (check disk/inode usage).
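The check client_acl implies can be sketched as glob matching over module.function names: `network.*` covers every network function, while `cmd.run` matches nothing in ryan's list. An illustration (not Salt's implementation):

```python
from fnmatch import fnmatch

# Parsed form of the client_acl block above.
acl = {"ryan": ["test.ping", "network.*"]}

def is_allowed(user, func, acl):
    """True if any ACL pattern for this user matches the requested function."""
    return any(fnmatch(func, pattern) for pattern in acl.get(user, []))
```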


You can also restrict a user to certain types of machines, or deny a user specific functions or whole modules.

Salt results can also be written to a MySQL database, mainly for secondary development or record keeping; see the official documentation.

SaltStack configuration management

The top file is SaltStack's entry point.

First edit /etc/salt/master on the master:
file_roots:
  base:
    - /srv/salt/base
  test:
    - /srv/salt/test
  prod:
    - /srv/salt/prod

Restart:

[root@js-93 salt]# /etc/init.d/salt-master restart

[root@js-93 salt]# mkdir /srv/salt/{base,test,prod}

 

Write a state that updates resolv.conf in batch:

[root@js-93 base]# pwd
/srv/salt/base
[root@js-93 base]# cat dns.sls 
/etc/resolv.conf:
  file.managed:
    - source: salt://files/resolv.conf
    - user: root
    - group: root
    - mode: 644
Note: salt:// is a relative path whose root is the active file_roots, here /srv/salt/base.
So create a files directory under base to hold the resolv.conf file.
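In effect, file.managed compares the target file against the source copy and rewrites it only when the content differs, while always enforcing ownership and mode. A simplified sketch of that idempotent behaviour (illustration in a temp directory, not Salt's implementation; it skips the user/group handling):

```python
import os
import tempfile

def file_managed(path, source_text, mode=0o644):
    """Write source_text to path only if the content differs; always enforce mode."""
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    changed = current != source_text
    if changed:
        with open(path, "w") as f:
            f.write(source_text)
    os.chmod(path, mode)
    return changed

tmp = os.path.join(tempfile.mkdtemp(), "resolv.conf")
first = file_managed(tmp, "nameserver 114.114.114.114\n")
second = file_managed(tmp, "nameserver 114.114.114.114\n")  # no change on the second run
```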

 

The resolv.conf to distribute:

[root@js-93 files]# pwd
/srv/salt/base/files
[root@js-93 files]# cat resolv.conf 
# Generated by NetworkManager
nameserver 202.96.209.133
nameserver 4.4.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114

 

Run it directly from the command line:


[root@js-93 base]# salt '*' state.sls dns
js-93:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 14:12:36.775940
    Duration: 41.112 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,3 +1,5 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  -#nameserver 114.114.114.114
                  +nameserver 4.4.4.4
                  +nameserver 8.8.8.8
                  +nameserver 114.114.114.114

Summary
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
openfire2:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 14:05:11.366982
    Duration: 38.805 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,3 +1,5 @@
                  -
                  -nameserver 192.168.19.1
                  -nameserver 202.96.209.5
                  +# Generated by NetworkManager
                  +nameserver 202.96.209.133
                  +nameserver 4.4.4.4
                  +nameserver 8.8.8.8
                  +nameserver 114.114.114.114

Summary
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1


 

Check that resolv.conf on the minion was updated:

[root@openfire2 minion]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.96.209.133
nameserver 4.4.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114

 

The command-line run above applies a single low-level state; with many states you would have to run each one separately, so instead define a high state in the top file.

[root@js-93 base]# pwd
/srv/salt/base
[root@js-93 base]# cat dns.sls 
/etc/resolv.conf:
  file.managed:
    - source: salt://files/resolv.conf
    - user: root
    - group: root
    - mode: 644
     
[root@js-93 base]# cat top.sls 
base:
  '*':
    - dns

 

Run the high state:
salt '*' state.highstate


[root@js-93 base]# salt '*' state.highstate 
js-93:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 14:24:08.642733
    Duration: 11.801 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,2 +1,5 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  +nameserver 4.4.4.4
                  +nameserver 8.8.8.8
                  +nameserver 114.114.114.114

Summary
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
openfire2:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 14:16:48.282799
    Duration: 29.424 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,2 +1,5 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  +nameserver 4.4.4.4
                  +nameserver 8.8.8.8
                  +nameserver 114.114.114.114

Summary
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1


 

 

Case 1: (diagram omitted)

A mindset for building a new system:

1. System initialization

2. Function modules

3. Business modules

YAML syntax rules:

1. Indentation: YAML uses fixed indentation to express hierarchical data. Salt requires each indentation level to be two spaces; never use tabs.

2. Colons: every colon is followed by a space, except when the line ends with the colon or the value is a path.

3. Hyphens: a list item is written as a hyphen followed by a space; items at the same indentation level belong to the same list.
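The indentation rules above can be checked mechanically. A small sketch (a hypothetical helper, not part of Salt) that rejects tabs and odd indentation in SLS text:

```python
def check_sls_indent(text):
    """Return a list of (line_no, problem) for tab or non-2-space indentation."""
    problems = []
    for no, line in enumerate(text.splitlines(), 1):
        if "\t" in line:
            problems.append((no, "tab character"))
            continue
        indent = len(line) - len(line.lstrip(" "))
        if line.strip() and indent % 2 != 0:
            problems.append((no, "indent not a multiple of 2 spaces"))
    return problems

good = "apache-install:\n  pkg.installed:\n    - name: httpd\n"
bad = "apache-install:\n\tpkg.installed:\n"
```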

Using jinja:

[root@js-93 base]# cat dns.sls 
/etc/resolv.conf:
  file.managed:
    - source: salt://files/resolv.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - defaults: 
      DNS_SERVER01: 202.96.209.133
      DNS_SERVER02: 114.114.114.114
[root@js-93 base]# cat files/resolv.conf
# Generated by NetworkManager
nameserver {{DNS_SERVER01}}
nameserver {{DNS_SERVER02}}

 

Run: salt '*' state.highstate


[root@js-93 base]# salt '*' state.highstate
openfire2:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 14:49:54.703800
    Duration: 40.246 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,5 +1,4 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  -nameserver 4.4.4.4
                  -nameserver 8.8.8.8
                   nameserver 114.114.114.114
                  +

Summary
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
js-93:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 14:57:40.053564
    Duration: 24.137 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,5 +1,4 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  -nameserver 4.4.4.4
                  -nameserver 8.8.8.8
                   nameserver 114.114.114.114
                  +

Summary
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1


 

Check the result on the minion:

[root@openfire2 minion]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 202.96.209.133
nameserver 114.114.114.114

 

Where:

- template: jinja declares the template engine;
- defaults: 
      DNS_SERVER01: 202.96.209.133
      DNS_SERVER02: 114.114.114.114
define the template variables.
files/resolv.conf references those variables; the names in the template must match the names in the sls exactly.
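The substitution the renderer performs on files/resolv.conf can be sketched with a simple placeholder replace (real Salt uses the jinja2 engine; the variable names here follow the sls above):

```python
import re

def render(template, context):
    """Replace each {{ NAME }} placeholder with its value from context."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(context[m.group(1)]), template)

template = ("# Generated by NetworkManager\n"
            "nameserver {{DNS_SERVER01}}\n"
            "nameserver {{DNS_SERVER02}}\n")
context = {"DNS_SERVER01": "202.96.209.133", "DNS_SERVER02": "114.114.114.114"}
rendered = render(template, context)
```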

The system-initialization "module"

Configure the resolv.conf file:

[root@js-93 init]# cat dns.sls 
/etc/resolv.conf:
  file.managed:
    - source: salt://init/files/resolv.conf
    - user: root
    - group: root
    - mode: 644

 

Record timestamps and the operating user in shell history:

[root@js-93 init]# cat history.sls 
/etc/profile:
  file.append:
    - text:
      - export HISTTIMEFORMAT="%F %T `whoami`"
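file.append is idempotent: it adds the line only when an identical line is not already present, so repeated highstate runs don't duplicate it. A minimal sketch of that behaviour (illustration in a temp directory, not Salt's implementation):

```python
import os
import tempfile

def file_append(path, line):
    """Append line to path unless an identical line already exists."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    if line in lines:
        return False
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

profile = os.path.join(tempfile.mkdtemp(), "profile")
file_append(profile, 'export HISTTIMEFORMAT="%F %T `whoami`"')
again = file_append(profile, 'export HISTTIMEFORMAT="%F %T `whoami`"')  # already present
```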

 

Log every executed command to /var/log/messages:

export PROMPT_COMMAND='{ msg=$(history 1|{ read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'

 

As an sls file:

[root@js-93 init]# cat audit.sls 
/etc/bashrc:
  file.append:
    - text:
      - export PROMPT_COMMAND='{ msg=$(history 1|{ read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'

 

Tune kernel parameters:

[root@js-93 init]# cat sysctl.sls 
vm.swappiness:                                       # avoid using the swap partition
  sysctl.present:
    - value: 0

net.ipv4.ip_local_port_range:                   # local ephemeral port range
  sysctl.present:
    - value: 10000 65000
 
fs.file-max:                                              # maximum number of open files
  sysctl.present:
    - value: 100000

 

Include all the initialization pieces in a single sls file:

[root@js-93 init]# cat env_init.sls 
include:
  - init.dns
  - init.history
  - init.audit
  - init.sysctl

 

Then reference the high state from the top file:

[root@js-93 base]# cat top.sls 
base:
  '*':
    - init.env_init
init.env_init refers to env_init.sls in the init directory.

 

Run it:

salt '*' state.highstate test=True    # test=True is a dry run: it shows what would change without actually changing anything.


[root@js-93 base]# salt '*' state.highstate test=True
js-93:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: The file /etc/resolv.conf is in the correct state
     Started: 16:07:09.456946
    Duration: 8.706 ms
     Changes:   
----------
          ID: /etc/profile
    Function: file.append
      Result: None
     Comment: File /etc/profile is set to be updated
     Started: 16:07:09.465787
    Duration: 3.052 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -80,3 +80,4 @@
                   export PATH=/application/mysql/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
                   export PATH=/usr/local/mysql/bin:$PATH
                   export PATH=/usr/local/python27/bin:$PATH
                  +export HISTTIMEFORMAT="%F %T `whoami`"
----------
          ID: /etc/bashrc
    Function: file.append
      Result: None
     Comment: File /etc/bashrc is set to be updated
     Started: 16:07:09.468979
    Duration: 3.908 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -84,3 +84,4 @@
                       unset pathmunge
                   fi
                   # vim:ts=4:sw=4
                  +export PROMPT_COMMAND='{ msg=$(history 1|{ read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'
----------
          ID: vm.swappiness
    Function: sysctl.present
      Result: None
     Comment: Sysctl option vm.swappiness set to be changed to 0
     Started: 16:07:09.474272
    Duration: 29.895 ms
     Changes:   
----------
          ID: net.ipv4.ip_local_port_range
    Function: sysctl.present
      Result: None
     Comment: Sysctl option net.ipv4.ip_local_port_range set to be changed to 10000 65000
     Started: 16:07:09.504508
    Duration: 25.928 ms
     Changes:   
----------
          ID: fs.file-max
    Function: sysctl.present
      Result: None
     Comment: Sysctl option fs.file-max set to be changed to 100000
     Started: 16:07:09.530710
    Duration: 28.35 ms
     Changes:   

Summary
------------
Succeeded: 6 (unchanged=5, changed=2)
Failed:    0
------------
Total states run:     6
openfire2:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: The file /etc/resolv.conf is in the correct state
     Started: 15:59:44.178131
    Duration: 22.931 ms
     Changes:   
----------
          ID: /etc/profile
    Function: file.append
      Result: None
     Comment: File /etc/profile is set to be updated
     Started: 15:59:44.201484
    Duration: 5.118 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -83,3 +83,4 @@
                   
                   PATH=$PATH:$HOME/bin
                   PATH=$PATH:$HOME/bin:/usr/local/python27/bin
                  +export HISTTIMEFORMAT="%F %T `whoami`"
----------
          ID: /etc/bashrc
    Function: file.append
      Result: None
     Comment: File /etc/bashrc is set to be updated
     Started: 15:59:44.206841
    Duration: 7.952 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -84,3 +84,4 @@
                       unset pathmunge
                   fi
                   # vim:ts=4:sw=4
                  +export PROMPT_COMMAND='{ msg=$(history 1|{ read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'
----------
          ID: vm.swappiness
    Function: sysctl.present
      Result: None
     Comment: Sysctl option vm.swappiness set to be changed to 0
     Started: 15:59:44.218022
    Duration: 67.859 ms
     Changes:   
----------
          ID: net.ipv4.ip_local_port_range
    Function: sysctl.present
      Result: None
     Comment: Sysctl option net.ipv4.ip_local_port_range set to be changed to 10000 65000
     Started: 15:59:44.286496
    Duration: 51.831 ms
     Changes:   
----------
          ID: fs.file-max
    Function: sysctl.present
      Result: None
     Comment: Sysctl option fs.file-max set to be changed to 100000
     Started: 15:59:44.338812
    Duration: 51.0 ms
     Changes:   

Summary
------------
Succeeded: 6 (unchanged=5, changed=2)
Failed:    0
------------
Total states run:     6

Result of applying the initialization top file:

 

salt '*' state.highstate


[root@js-93 base]# salt '*' state.highstate 
js-93:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 16:22:25.288082
    Duration: 11.363 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,4 +1,4 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  -nameserver 4.4.4.4
                  +nameserver 114.114.114.114
                   
----------
          ID: /etc/profile
    Function: file.append
      Result: True
     Comment: Appended 1 lines
     Started: 16:22:25.299569
    Duration: 2.153 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -80,3 +80,4 @@
                   #export PATH=/application/mysql/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
                   export PATH=/usr/local/mysql/bin:$PATH
                   export PATH=/usr/local/python27/bin:$PATH
                  +export HISTTIMEFORMAT="%F %T `whoami`"
----------
          ID: /etc/bashrc
    Function: file.append
      Result: True
     Comment: Appended 1 lines
     Started: 16:22:25.301827
    Duration: 3.519 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -84,3 +84,4 @@
                       unset pathmunge
                   fi
                   # vim:ts=4:sw=4
                  +export PROMPT_COMMAND='{ msg=$(history 1|{ read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'
----------
          ID: vm.swappiness
    Function: sysctl.present
      Result: True
     Comment: Updated sysctl value vm.swappiness = 0
     Started: 16:22:25.306579
    Duration: 34.697 ms
     Changes:   
              ----------
              vm.swappiness:
                  0
----------
          ID: net.ipv4.ip_local_port_range
    Function: sysctl.present
      Result: True
     Comment: Updated sysctl value net.ipv4.ip_local_port_range = 10000 65000
     Started: 16:22:25.341528
    Duration: 34.841 ms
     Changes:   
              ----------
              net.ipv4.ip_local_port_range:
                  10000 65000
----------
          ID: fs.file-max
    Function: sysctl.present
      Result: True
     Comment: Updated sysctl value fs.file-max = 100000
     Started: 16:22:25.376573
    Duration: 38.181 ms
     Changes:   
              ----------
              fs.file-max:
                  100000

Summary
------------
Succeeded: 6 (changed=6)
Failed:    0
------------
Total states run:     6
openfire2:
----------
          ID: /etc/resolv.conf
    Function: file.managed
      Result: True
     Comment: File /etc/resolv.conf updated
     Started: 16:14:50.016655
    Duration: 28.94 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -1,4 +1,4 @@
                   # Generated by NetworkManager
                   nameserver 202.96.209.133
                  -nameserver 4.4.4.4
                  +nameserver 114.114.114.114
                   
----------
          ID: /etc/profile
    Function: file.append
      Result: True
     Comment: Appended 1 lines
     Started: 16:14:50.045861
    Duration: 5.464 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -80,3 +80,4 @@
                   PATH=$PATH:$HOME/bin:/usr/local/python3/bin
                   PATH=$PATH:$HOME/bin
                   PATH=$PATH:$HOME/bin:/usr/local/python27/bin
                  +export HISTTIMEFORMAT="%F %T `whoami`"
----------
          ID: /etc/bashrc
    Function: file.append
      Result: True
     Comment: Appended 1 lines
     Started: 16:14:50.051566
    Duration: 8.297 ms
     Changes:   
              ----------
              diff:
                  ---  
                  +++  
                  @@ -84,3 +84,4 @@
                       unset pathmunge
                   fi
                   # vim:ts=4:sw=4
                  +export PROMPT_COMMAND='{ msg=$(history 1|{ read x y; echo $y; });logger "[euid=$(whoami)]":$(who am i):[`pwd`]"$msg";}'
----------
          ID: vm.swappiness
    Function: sysctl.present
      Result: True
     Comment: Updated sysctl value vm.swappiness = 0
     Started: 16:14:50.062927
    Duration: 68.223 ms
     Changes:   
              ----------
              vm.swappiness:
                  0
----------
          ID: net.ipv4.ip_local_port_range
    Function: sysctl.present
      Result: True
     Comment: Updated sysctl value net.ipv4.ip_local_port_range = 10000 65000
     Started: 16:14:50.131850
    Duration: 64.959 ms
     Changes:   
              ----------
              net.ipv4.ip_local_port_range:
                  10000 65000
----------
          ID: fs.file-max
    Function: sysctl.present
      Result: True
     Comment: Updated sysctl value fs.file-max = 100000
     Started: 16:14:50.197403
    Duration: 67.141 ms
     Changes:   
              ----------
              fs.file-max:
                  100000

Summary
------------
Succeeded: 6 (changed=6)
Failed:    0
------------
Total states run:     6


 

 

Compiling and installing HAProxy

Conditional execution:
Function: condition checks, used with the cmd state module.
Common options:
         onlyif: a check command; the command given by name runs only when the onlyif command returns true.
         unless: a check command; the command given by name runs only when the unless command returns false.
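As a minimal sketch of these two options (the state ID and paths here are hypothetical, not from this deployment), onlyif and unless gate a cmd.run state like this:

```yaml
# Hypothetical example: guard a one-time extraction step.
extract-src:
  cmd.run:
    - name: cd /usr/local/src && tar zxf app.tar.gz
    - onlyif: test -f /usr/local/src/app.tar.gz   # run only if the tarball exists
    - unless: test -d /usr/local/src/app          # skip if already extracted
```

Both commands are run on the minion; their exit codes decide whether the name command executes, which is what makes cmd.run states idempotent.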

Feature name: requisites
Function: manage relationships between states.
Common options:
  require       # this state depends on another state
  require_in    # another state depends on this state
  watch         # this state watches another state (and reacts to its changes)
  watch_in      # this state is watched by another state
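The requisites above can be sketched as follows (hypothetical state IDs and paths, for illustration only):

```yaml
# A service that depends on, and watches, its config file.
app-conf:
  file.managed:
    - name: /etc/app/app.conf
    - source: salt://app/files/app.conf

app-service:
  service.running:
    - name: app
    - require:        # this state runs only after app-conf succeeds
      - file: app-conf
    - watch:          # restart/reload the service whenever app-conf changes
      - file: app-conf
```

require only orders execution; watch additionally triggers the watching state (here, a service restart) when the watched state reports changes.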

 

The finished directory structure:

[root@js-93 prod]# tree
.
├── haproxy
│   ├── files
│   │   ├── haproxy-1.6.10.tar.gz
│   │   └── haproxy.init
│   └── install.sls
└── pkg
    └── pkg-init.sls

3 directories, 4 files

 

On Linux, a package's build dependencies must be installed before the software itself, so first write an SLS that installs the dependency packages in one go:

[root@js-93 prod]# cat pkg/pkg-init.sls 
pkg-init:
  pkg.installed:
    - names:
      - gcc
      - gcc-c++
      - glibc
      - make
      - autoconf
      - openssl-devel

 

Install HAProxy: a remote bulk install via SaltStack (bulk meaning every minion whose key the master has already accepted):

[root@js-93 prod]# cat haproxy/install.sls 
include:
  - pkg.pkg-init

haproxy-install:
  file.managed:
    - name: /usr/local/src/haproxy-1.6.10.tar.gz
    - source: salt://haproxy/files/haproxy-1.6.10.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src && tar zxf haproxy-1.6.10.tar.gz && cd haproxy-1.6.10 && make TARGET=linux26 PREFIX=/usr/local/haproxy && make install PREFIX=/usr/local/haproxy
    - unless: test -d /usr/local/haproxy
    - require:
      - pkg: pkg-init
      - file: haproxy-install

haproxy-init:
  file.managed:
    - name: /etc/init.d/haproxy
    - source: salt://haproxy/files/haproxy.init
    - user: root
    - group: root
    - mode: 755
    - require:
      - cmd: haproxy-install
  cmd.run:
    - name: chkconfig --add haproxy
    - unless: chkconfig --list|grep haproxy
    - require:
      - file: haproxy-init
  

net.ipv4.ip_nonlocal_bind:
  sysctl.present:
    - value: 1

haproxy-config-dir:
  file.directory:
    - name: /etc/haproxy
    - user: root
    - group: root
    - mode: 755

 

The state places the HAProxy tarball under /usr/local/src, and the files directory must contain the HAProxy init script, which is installed to /etc/init.d. (The init script needs one modification: BIN=/usr/local/haproxy/sbin/$BASENAME.)

Run: salt 'openfire*' state.sls haproxy.install env=prod  (the environment must be specified here; the run is fairly slow, mostly spent yum-installing the dependencies and compiling HAProxy from source.)

SaltStack configuration management: deploying HAProxy for a service

First, a haproxy.cfg configuration file is needed:


[root@js-93 prod]# cat cluster/files/haproxy-outside.cfg 
global  
        log 127.0.0.1   local3 notice
        maxconn 4096
        uid 0 
        gid 0 
        daemon  
defaults
        log     global  
        mode    tcp    
        option  tcplog 
        option  dontlognull
        retries 3
        maxconn 2000
        option redispatch
        timeout connect 5000
        timeout client 50000
        timeout server 50000
 
listen  server_haproxy 
bind    *:8066
balance roundrobin
mode tcp
server server_01 192.168.20.93:8066  weight 3 check 
server server_02 192.168.20.172:8066  weight 3 check


 

Then write a state that manages the HAProxy config file and the service:

[root@js-93 prod]# cat cluster/haproxy-outside.sls 
include:
  - haproxy.install   # pull in the install state above

haproxy-service:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://cluster/files/haproxy-outside.cfg
    - user: root
    - group: root
    - mode: 644
  service.running:
    - name: haproxy
    - enable: True        # start at boot
    - reload: True
    - require:
      - cmd: haproxy-init
    - watch:         # if this file changes, the service is reloaded
      - file: haproxy-service

The logic here is fairly involved and should be read together with the earlier HAProxy install state.

 

Then go to the base directory and edit the top file:

[root@js-93 salt]# cat base/top.sls 
base:
  '*':
    - init.env_init

prod:
  'openfire*':
    - cluster.haproxy-outside
  'js-*':
    - cluster.haproxy-outside

 

Run: salt '*' state.highstate

The overall directory structure under salt is therefore:

[root@js-93 salt]# tree 
.
├── base
│   ├── init                                    # tasks to run during system initialization
│   │   ├── audit.sls
│   │   ├── dns.sls
│   │   ├── env_init.sls
│   │   ├── files
│   │   │   └── resolv.conf
│   │   ├── history.sls
│   │   └── sysctl.sls
│   └── top.sls
├── prod
│   ├── cluster                                  # config-file management, cluster installation, service startup, etc.
│   │   ├── files
│   │   │   └── haproxy-outside.cfg
│   │   └── haproxy-outside.sls
│   ├── haproxy                                   # installs dependency packages and the HAProxy software
│   │   ├── files
│   │   │   ├── haproxy-1.6.10.tar.gz
│   │   │   └── haproxy.init
│   │   └── install.sls
│   └── pkg
│       └── pkg-init.sls
└── test

 

 

Installing keepalived

The keepalived directory structure:

[root@js-93 prod]# tree keepalived/
keepalived/
├── files
│   ├── keepalived                        # init script     ==> /usr/local/src/keepalived-1.3.5/keepalived/etc/init.d/keepalived
│   ├── keepalived-1.3.5.tar.gz           # source tarball
│   ├── keepalived.conf                   # config file   ==> /usr/local/src/keepalived-1.3.5/keepalived/etc/keepalived/keepalived.conf
│   └── keepalived.sysconfig              # sysconfig file (generated by the build/install)   ==> /usr/local/keepalived/etc/sysconfig/keepalived
└── install.sls                           # SLS file that installs keepalived via SaltStack

 

In the keepalived init script above, change "daemon keepalived ${KEEPALIVED_OPTIONS}" to "daemon /usr/local/keepalived/sbin/keepalived ${KEEPALIVED_OPTIONS}".

The SLS file that installs keepalived:

[root@js-93 keepalived]# cat install.sls 
include:
  - pkg.pkg-init

keepalived-install:
  file.managed:
    - name: /usr/local/src/keepalived-1.3.5.tar.gz
    - source: salt://keepalived/files/keepalived-1.3.5.tar.gz
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: cd /usr/local/src && tar xf keepalived-1.3.5.tar.gz && cd keepalived-1.3.5 &&./configure --prefix=/usr/local/keepalived --disable-fwmark && make && make install
    - unless: test -d /usr/local/keepalived
    - require:
      - pkg: pkg-init
      - file: keepalived-install

keepalived-init:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - user: root
    - group: root
    - mode: 755
  cmd.run:
    - name: chkconfig --add keepalived
    - unless: chkconfig --list|grep keepalived
    - require:
      - file: keepalived-init

/etc/sysconfig/keepalived:
  file.managed:
    - source: salt://keepalived/files/keepalived.sysconfig
    - user: root
    - group: root
    - mode: 644

/etc/keepalived:
  file.directory:
    - user: root
    - group: root

 

Run: salt '*' state.sls keepalived.install env=prod

 

Using keepalived in a service

Current directory structure for keepalived + HAProxy:

[root@js-93 salt]# tree
.
├── base
│   ├── init
│   │   ├── audit.sls
│   │   ├── dns.sls
│   │   ├── env_init.sls
│   │   ├── files
│   │   │   └── resolv.conf
│   │   ├── history.sls
│   │   └── sysctl.sls
│   └── top.sls
├── prod
│   ├── cluster
│   │   ├── files
│   │   │   ├── haproxy-outside.cfg
│   │   │   └── haproxy-outside-keepalived.conf
│   │   ├── haproxy-outside-keepalived.sls
│   │   └── haproxy-outside.sls
│   ├── haproxy
│   │   ├── files
│   │   │   ├── haproxy-1.6.10.tar.gz
│   │   │   └── haproxy.init
│   │   └── install.sls
│   ├── keepalived
│   │   ├── files
│   │   │   ├── keepalived
│   │   │   ├── keepalived-1.3.5.tar.gz
│   │   │   ├── keepalived.conf
│   │   │   └── keepalived.sysconfig
│   │   └── install.sls
│   └── pkg
│       └── pkg-init.sls
└── test

 

Contents of haproxy-outside-keepalived.conf:


[root@js-93 cluster]# cat files/haproxy-outside-keepalived.conf 
! Configuration File for keepalived
global_defs {
    notification_email {
        saltstack@example.com
    }
    notification_email_from keepalived@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 39
    router_id {{ROUTEID}}
}
vrrp_instance haproxy_ha {
    state {{STATEID}}
    interface eth1
    virtual_router_id 36
    priority {{PRIORITYID}}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.20.249
    }
}


 

Contents of haproxy-outside.cfg:


[root@js-93 cluster]# cat files/haproxy-outside.cfg 
global
maxconn 100000
chroot /usr/local/haproxy
uid 99
gid 99
daemon
nbproc 1
pidfile /usr/local/haproxy/logs/haproxy.pid
log 127.0.0.1 local3 info

defaults
option http-keep-alive
maxconn 100000
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms

listen stats
mode http
bind 0.0.0.0:8777
stats enable
stats uri /haproxy-status
stats auth haproxy:saltstack

frontend frontend_www_example_com
bind 192.168.20.93:80
mode http
option httplog
log global
     default_backend backend_www_example_com

backend backend_www_example_com
option httpchk HEAD / HTTP/1.0
balance roundrobin
server js-93 192.168.20.93:8080 check inter 2000 rise 30 fall 15
server openfire2  192.168.20.172:8080 check inter 2000 rise 30 fall 15


 

haproxy-outside-keepalived.sls, the SLS file that deploys keepalived:


[root@js-93 cluster]# cat haproxy-outside-keepalived.sls 
include:
  - keepalived.install

keepalived-service:
  file.managed:
    - name: /etc/keepalived/keepalived.conf
    - source: salt://cluster/files/haproxy-outside-keepalived.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    {%  if grains['fqdn'] == 'js-93' %}
    - ROUTEID: haproxy_ha
    - STATEID: MASTER
    - PRIORITYID: 150
    {% elif grains['fqdn'] == 'openfire2' %}
    - ROUTEID: haproxy_ha
    - STATEID: BACKUP
    - PRIORITYID: 100
    {% endif %}
  service.running:
    - name: keepalived
    - enable: True
    - watch:
      - file: keepalived-service


 

Then add them to the top file:
[root@js-93 cluster]# cat /srv/salt/base/top.sls 
base:
  '*':
    - init.env_init

prod:
  'openfire*':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived
  'js-*':
    - cluster.haproxy-outside
    - cluster.haproxy-outside-keepalived

 

Adding the Zabbix agent

The full directory structure so far:

[root@js-93 /]# tree srv
srv
├── pillar
│   ├── apache.sls
│   ├── base
│   │   ├── top.sls
│   │   └── zabbix.sls
│   └── top.sls
└── salt
    ├── base
    │   ├── init
    │   │   ├── audit.sls
    │   │   ├── dns.sls
    │   │   ├── env_init.sls
    │   │   ├── files
    │   │   │   ├── resolv.conf
    │   │   │   └── zabbix_agentd.conf
    │   │   ├── history.sls
    │   │   ├── sysctl.sls
    │   │   └── zabbix_agent.sls
    │   └── top.sls
    ├── prod
    │   ├── cluster
    │   │   ├── files
    │   │   │   ├── haproxy-outside.cfg
    │   │   │   └── haproxy-outside-keepalived.conf
    │   │   ├── haproxy-outside-keepalived.sls
    │   │   └── haproxy-outside.sls
    │   ├── haproxy
    │   │   ├── files
    │   │   │   ├── haproxy-1.6.10.tar.gz
    │   │   │   └── haproxy.init
    │   │   └── install.sls
    │   ├── keepalived
    │   │   ├── files
    │   │   │   ├── keepalived
    │   │   │   ├── keepalived-1.3.5.tar.gz
    │   │   │   ├── keepalived.conf
    │   │   │   └── keepalived.sysconfig
    │   │   └── install.sls
    │   └── pkg
    │       └── pkg-init.sls
    └── test

 

Adding the Zabbix agent uses pillar, so the /etc/salt/master config was modified:

pillar_roots:
  base:
    - /srv/pillar/base

 

The zabbix_agent.sls configuration:


[root@js-93 init]# cat /srv/salt/base/init/zabbix_agent.sls 
zabbix-agent-install:
  pkg.installed:
    - name: zabbix-agent

  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf
    - source: salt://init/files/zabbix_agentd.conf
    - template: jinja
    - defaults:
      Server: {{ pillar['zabbix-agent']['Zabbix_Server'] }}
    - require:
      - pkg: zabbix-agent-install
  service.running:
    - name: zabbix-agent
    - enable: True
    - watch:
      - pkg: zabbix-agent-install
      - file: zabbix-agent-install


 

Then add zabbix_agent to the environment-initialization file env_init.sls:

[root@js-93 init]# cat env_init.sls 
include:
  - init.dns
  - init.history
  - init.audit
  - init.sysctl
  - init.zabbix_agent

 

The contents of zabbix_agentd.conf:

[root@js-93 init]# cat files/zabbix_agentd.conf |egrep  -v "^#|^$"
PidFile=/var/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix/zabbix_agentd.log
LogFileSize=0
Server={{ Server }}                       # note: this line is filled in from the pillar value via the Jinja template
ServerActive=116.228.90.94
Hostname=hezitwo
UserParameter=mount_disk_discovery,/bin/bash /etc/zabbix/monitor_scripts/mount_disk_discovery.sh mount_disk_discovery

 

 Because pillar is now in use, the pillar root was changed to /srv/pillar/base:

[root@js-93 base]# cat zabbix.sls 
zabbix-agent:
  Zabbix_Server: 192.168.20.212
[root@js-93 base]# cat top.sls 
base:
  '*':
    - zabbix

 

Run salt '*' state.highstate to complete the Zabbix agent installation on the minion side.

Documentation on GitHub, covering nginx, memcached, pcre, and more:

https://github.com/unixhot/saltbook-code

 

Scaling the architecture

[root@js-93 base]# lsof -i:4505
COMMAND     PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
salt-mini 19331 root   24u  IPv4 17761248      0t0  TCP 192.168.20.93:11393->192.168.20.93:4505 (ESTABLISHED)
salt-mast 30477 root   12u  IPv4 17759720      0t0  TCP *:4505 (LISTEN)
salt-mast 30477 root   14u  IPv4 17760302      0t0  TCP 192.168.20.93:4505->openfire2:24389 (ESTABLISHED)
salt-mast 30477 root   15u  IPv4 17761249      0t0  TCP 192.168.20.93:4505->192.168.20.93:11393 (ESTABLISHED)
[root@js-93 base]# lsof -i:4506
COMMAND     PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
salt-mini 19331 root   13u  IPv4 17759778      0t0  TCP 192.168.20.93:37211->192.168.20.93:4506 (ESTABLISHED)
salt-mast 30493 root   20u  IPv4 17759741      0t0  TCP *:4506 (LISTEN)
salt-mast 30493 root   22u  IPv4 17759777      0t0  TCP 192.168.20.93:4506->openfire2:12236 (ESTABLISHED)
salt-mast 30493 root   23u  IPv4 17759779      0t0  TCP 192.168.20.93:4506->192.168.20.93:37211 (ESTABLISHED)

 

As the output shows, each minion keeps long-lived connections to the master. The master publishes jobs on port 4505, minions receive and execute them, and the results are returned on port 4506.

Salt process information:

[root@js-93 base]# yum install python-setproctitle    # install this to see descriptive titles for the Python processes
[root@js-93 base]# ps -ef|grep salt
root      3148     1  0 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d ProcessManager             # Salt's central management process
root      3149  3148  9 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d Maintenance
root      3150  3148  0 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d Publisher
root      3159  3148  0 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d EventPublisher
root      3160  3148  0 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d ReqServer_ProcessManager
root      3172  3160 18 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d MWorker
root      3173  3160 18 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d MWorker
root      3174  3160  9 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d MWorker
root      3175  3160  9 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d MWorker
root      3176  3160  9 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d MWorker
root      3181  3160  0 17:57 ?        00:00:00 /usr/bin/python2.6 /usr/bin/salt-master -d MWorkerQueue
root     19331     1  0 May31 ?        00:01:04 /usr/bin/python2.6 /usr/bin/salt-minion -d
root     32133 26018  0 16:49 pts/2    00:00:00 /usr/bin/python2.6 /usr/bin/salt * state.highstate

 

Optimization 1: in the master's /etc/salt/master, set timeout: 30 (for minions reached over the public internet the value may need to be larger).
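As a sketch, the setting is plain YAML in the master config:

```yaml
# /etc/salt/master: how long the salt CLI waits for minion returns (seconds)
timeout: 30
```

Restart salt-master after changing it for the new value to take effect.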

 

salt-syndic

For usage details, see the "Salt Syndic" chapter of the official documentation.

[Diagram: Salt Syndic architecture]

 

 How to switch to a new master:

1. Stop the salt-minion service on the minions, then the salt-master service.

2. On each minion, clear all keys: remove everything under /etc/salt/pki.

3. Edit /etc/salt/minion and point master: at the new master's IP address, e.g. master: 192.168.20.93.

4. Restart: /etc/init.d/salt-minion start

 

 Extending SaltStack

grains:

Note: grains are static; they are collected once and do not change afterwards.

 First create a _grains directory under file_roots, then write a my_grains Python script:

[root@js-93 _grains]# pwd
/srv/salt/base/_grains


[root@js-93 _grains]# cat my_grains.py 
#!/usr/bin/env python
#coding:utf-8

def my_grains():
    '''
    My Custom Grains
    '''
    grains = { 'key1':'val1','key2':'val2'}
    return grains

 

 Then run salt '*' saltutil.sync_grains, which syncs the custom grains to every minion:

[root@js-93 _grains]# salt '*' saltutil.sync_grains
js-93:
    - grains.my_grains
openfire2:
    - grains.my_grains

 

This step syncs the script down to the minions, as shown:

[root@openfire2 grains]# ll /var/cache/salt/minion/extmods/grains
total 8
-rw------- 1 root root 157 Jun  2 19:07 my_grains.py
-rw------- 1 root root 382 Jun  2 19:07 my_grains.pyc
[root@openfire2 grains]# cat my_grains.py
#!/usr/bin/env python
#coding:utf-8

def my_grains():
    '''
    My Custom Grains
    '''
    grains = { 'key1':'val1','key2':'val2'}
    return grains

 

 Query the custom grains:

[Screenshot: the result of querying the custom grains]

 

 

 Modules:

When Salt is installed, its own modules are stored under /usr/lib/python2.6/site-packages/salt/modules.

First create a _modules directory under file_roots, then write a module:

[root@js-93 _modules]# pwd
/srv/salt/base/_modules
[root@js-93 _modules]# cat my_disk.py 
#!/usr/bin/env python

def list():
    cmd = 'df -h'
    ret = __salt__['cmd.run'](cmd)
    return ret

 

Sync to the minions:

[root@js-93 _modules]# salt '*' saltutil.sync_modules
openfire2:
    - modules.my_disk
js-93:
    - modules.my_disk

 

Check the synced module on the minion side:

[root@openfire2 modules]# pwd
/var/cache/salt/minion/extmods/modules
[root@openfire2 modules]# cat my_disk.py 
#!/usr/bin/env python

def list():
    cmd = 'df -h'
    ret = __salt__['cmd.run'](cmd)
    return ret

 

Run the custom module:

[root@js-93 _modules]# salt '*' my_disk.list
js-93:
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/md1              1.8T  1.2T  517G  71% /
    tmpfs                 3.9G   16K  3.9G   1% /dev/shm
    /dev/md0              194M   59M  126M  32% /boot
openfire2:
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3       195G  4.7G  181G   3% /
    tmpfs           1.9G   12K  1.9G   1% /dev/shm
    /dev/sda1       194M   27M  158M  15% /boot