1: Using the kernel's built-in cgroups to cap a Docker container's resources:

1-1:

pwd: /sys/fs/cgroup/cpu


List the cgroup-backed CPU options that Docker exposes (matching memory options exist as well):
[root@server1 cpu]# docker run --help | grep cpu
      --cpu-count int                         CPU count (Windows only)
      --cpu-percent int                       CPU percent (Windows only)
      --cpu-period int                        Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                         Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int                     Limit CPU real-time period in microseconds
      --cpu-rt-runtime int                    Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                        CPU shares (relative weight)
      --cpus decimal                          Number of CPUs (default 0.000)
      --cpuset-cpus string                    CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string                    MEMs in which to allow execution (0-3, 0,1)



Use lscpu to check the CPU information; if the machine has two CPUs, take one offline for the experiment: echo 0 > /sys/devices/system/cpu/cpu1/online
[root@server1 cgroup]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1

docker run -it --rm --cpu-quota 20000 ubuntu ; caps the container at 20% of one CPU: the CFS period defaults to 100000 µs, so a quota of 20000 µs is 20%.

Inside the container run dd if=/dev/zero of=/dev/null & ; top on the host then shows the dd process holding at about 20% CPU.
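
To confirm the limit from the host, the container's cgroup files can be read directly; a minimal sketch, assuming the cgroupfs layout (/sys/fs/cgroup/.../docker/<id>/) seen later in this host's memory hierarchy, and that the quota container is the most recently started one:

cid=$(docker ps -lq --no-trunc)                        # full ID of the latest container
cat /sys/fs/cgroup/cpu/docker/$cid/cpu.cfs_period_us   # 100000 (the default period)
cat /sys/fs/cgroup/cpu/docker/$cid/cpu.cfs_quota_us    # 20000  (the quota set above)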



docker run -it --rm ubuntu ; inside the container run dd if=/dev/zero of=/dev/null &, then press Ctrl+p Ctrl+q to detach and leave it running.

docker run -it --rm --cpu-shares 512 ubuntu ; changes the container's CPU-contention weight (the default is 1024; shares only matter while CPUs are actually contended). Inside this container run dd if=/dev/zero of=/dev/null & as well.

top then shows the contention: the default-weight container gets about 66.6% CPU and the 512-share container about 33.3%, the 2:1 split the weights predict.


2: Controlling memory usage with cgroups:

2-1:

yum install libcgroup-tools.x86_64 ;

cd  /sys/fs/cgroup/memory ;

mkdir x1 (the new child cgroup automatically inherits the parent directory's parameters)

cat memory.memsw.limit_in_bytes ; shows the memory+swap limit (the default is effectively unlimited: whatever exists can be used).

echo 20971520 > memory.limit_in_bytes ; limits memory to 20 MB. (Writes beyond this limit spill into the swap partition, which is why memory+swap must be limited as well.)

echo 20971520 > memory.memsw.limit_in_bytes ; limits the memory+swap total; it must be greater than or equal to the memory limit, otherwise the write reports an error.

df ; check the mounted filesystems and note the /dev/shm tmpfs ;

cd /dev/shm (this directory is a tmpfs backed by RAM rather than a real disk, so files written here consume memory)

cgexec -g memory:x1 dd if=/dev/zero of=file bs=1M count=5 ; writes the file under group x1's memory limits (check with free -m afterwards).

When a write exceeds the configured limit, the process is killed: with memory+swap capped at 20 MB, the 5 MB write above fits, but a write larger than the cap is killed partway through.
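
Consolidated, the memory experiment looks like this; a minimal sketch assuming cgroup v1 and the libcgroup tools installed above:

cd /sys/fs/cgroup/memory
mkdir x1                                         # the child group inherits the parent's settings
echo 20971520 > x1/memory.limit_in_bytes         # 20 MB of RAM
echo 20971520 > x1/memory.memsw.limit_in_bytes   # 20 MB of RAM+swap (must be >= the RAM limit)
cd /dev/shm                                      # tmpfs: writes here consume memory
cgexec -g memory:x1 dd if=/dev/zero of=file1 bs=1M count=5     # 5 MB fits, succeeds
cgexec -g memory:x1 dd if=/dev/zero of=file2 bs=1M count=100   # exceeds the cap, dd is killed
rm -f file1 file2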

2-2:

Applying cgroup limits to a user:


The root user can still write freely, while the lin user can only write up to the 20 MB limit set in x1 (with no need to prefix commands with cgexec -g memory:x1).
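
The per-user binding is normally done with the libcgroup rules daemon; a sketch of the assumed setup (the user name lin comes from the screenshots, and cgred ships with libcgroup-tools):

# /etc/cgrules.conf  --  format: <user>  <controllers>  <destination group>
lin        memory          x1/

systemctl start cgred    # cgrulesengd then places lin's new processes into memory:x1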


2-3:
Using cgroups to limit a container's memory:



Just pass the corresponding flags when creating the container; the container's cgroup settings can then be inspected under /sys/fs/cgroup/memory/docker/:
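
For example, a sketch reusing the 20 MB figures from section 2:

docker run -d --name demo --memory 20M --memory-swap 20M nginx
cid=$(docker inspect -f '{{.Id}}' demo)
cat /sys/fs/cgroup/memory/docker/$cid/memory.limit_in_bytes        # 20971520
cat /sys/fs/cgroup/memory/docker/$cid/memory.memsw.limit_in_bytes  # 20971520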


3:

3-1:
Docker security hardening:
Because containers share the host kernel, even though docker run -d --name demo --memory 20M --memory-swap 20M nginx limits the memory the container may use, free -m inside the container still reports the host's full memory capacity. Docker's isolation and resource visibility are therefore incomplete, and LXCFS can be used to harden them:
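
The visibility gap is easy to demonstrate (reusing the demo container from the command above; /proc/meminfo is the file free reads):

docker exec demo head -1 /proc/meminfo   # MemTotal: still the host's RAM, not 20 MB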

yum install lxcfs-2.0.5-3.el7.centos.x86_64.rpm

Run it in the background:
[root@server1 ~]# lxcfs /var/lib/lxcfs & 
[1] 8918
[root@server1 ~]# hierarchies:
  0: fd:   5: perf_event
  1: fd:   6: devices
  2: fd:   7: cpuset
  3: fd:   8: pids
  4: fd:   9: net_prio,net_cls
  5: fd:  10: freezer
  6: fd:  11: cpuacct,cpu
  7: fd:  12: hugetlb
  8: fd:  13: blkio
  9: fd:  14: memory
 10: fd:  15: name=systemd


Create a container with the full set of bind-mount parameters:
[root@server1 ~]# docker run  -it -m 256m \
>       -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:rw \
>       -v /var/lib/lxcfs/proc/diskstats:/proc/diskstats:rw \
>       -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:rw \
>       -v /var/lib/lxcfs/proc/stat:/proc/stat:rw \
>       -v /var/lib/lxcfs/proc/swaps:/proc/swaps:rw \
>       -v /var/lib/lxcfs/proc/uptime:/proc/uptime:rw \
>       ubuntu

The container now sees exactly the amount of memory it was created with:
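
A quick check inside the container, reading /proc/meminfo directly (now one of the files lxcfs serves):

head -1 /proc/meminfo   # MemTotal reports about 256 MB, matching -m 256m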



3-2:

Superuser privileges inside a container:

By default, because the host is shared, the superuser inside a container is restricted. A container can be granted full privileges with --privileged=true:

For example: docker run -it --privileged=true ubuntu
But granting everything invites privilege abuse, so Docker provides a capability whitelist to solve this: --cap-add ;
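
For example, a sketch granting only the network-admin capability (NET_ADMIN is a standard Linux capability; the ip command assumes iproute2 is installed in the image):

docker run -it --rm --cap-add=NET_ADMIN ubuntu
# inside the container:
ip link set eth0 down   # permitted with NET_ADMIN; without it this fails with 'Operation not permitted'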


4: docker-machine

4-1: Deploying docker-machine nodes

scp lbin:/home/westos/Downloads/qq-files/1208277683/file_recv/docker-machine-Linux-x86_64-0.16.1 ./

mv docker-machine-Linux-x86_64-0.16.1 /usr/local/bin/docker-machine

chmod +x /usr/local/bin/docker-machine

ssh-keygen

ssh-copy-id server2

ssh-copy-id server3

docker-machine create --driver generic --generic-ip-address=172.25.254.102 server2 ; creates the node server2 (the generic driver provisions the existing host over SSH);

List the node information:
[root@server1 ~]# docker-machine ls
NAME      ACTIVE   DRIVER    STATE     URL                         SWARM   DOCKER    ERRORS
server2   -        generic   Running   tcp://172.25.254.102:2376           v1.13.1   

docker `docker-machine config server2` ps ; runs docker ps against the server2 node ;

docker-machine env server2 ; prints the environment variables needed to reach server2:
[root@server1 ~]# docker-machine env server2
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://172.25.254.102:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/server2"
export DOCKER_MACHINE_NAME="server2"
# Run this command to configure your shell: 
# eval $(docker-machine env server2)

eval $(docker-machine env server2) switches the shell to the target host; from then on, every docker command runs in server2's environment ;

server2 holds only three images. (From here, eval $(docker-machine env server*) switches to a different node, but to get back to the local host the connection has to be broken.)
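
For reference, the same env command can emit matching unset statements instead, which switches back without breaking the session (the -u/--unset flag is part of docker-machine env in 0.16):

eval $(docker-machine env -u)    # clears DOCKER_HOST etc. and falls back to the local daemon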


4-2: Change the shell prompt to show the hostname of the active node

First, install the completion package:
yum install bash-completion.noarch -y ;

cd /etc/bash_completion.d/

scp lbin:/home/westos/Downloads/qq-files/1208277683/file_recv/docker-machine.bash 
scp lbin:/home/westos/Downloads/qq-files/1208277683/file_recv/docker-machine-prompt.bash
scp lbin:/home/westos/Downloads/qq-files/1208277683/file_recv/docker-machine-wrapper.bash
[root@server1 bash_completion.d]# pwd
/etc/bash_completion.d
[root@server1 bash_completion.d]# ls
docker-machine.bash          iprutils          rhn-migrate-classic-to-rhsm  rhsm-icon
docker-machine-prompt.bash   rct               rhsmcertd                    subscription-manager
docker-machine-wrapper.bash  redefine_filedir  rhsm-debug

cd

vim .bashrc ; append as the last line: PS1='[\u@\h \W$(__docker_machine_ps1)]\$'

Log out, reconnect, and switch to server2; the prompt now shows:
[root@server1 ~]#eval $(docker-machine env server2)
[root@server1 ~ [server2]]#

The prerequisite is that the three docker-machine*.bash scripts above are all placed in /etc/bash_completion.d.


5: docker-compose: deploying haproxy with web1 and web2

5-1: Installing docker-compose (the procedure mirrors docker-machine's)

[root@server1 ~]# ls
auth  certs  docker-compose-Linux-x86_64-1.27.0  harbor  harbor-offline-installer-v1.10.1.tgz
[root@server1 ~]# mv docker-compose-Linux-x86_64-1.27.0 /usr/local/bin/
[root@server1 ~]# cd /usr/local/bin/
[root@server1 bin]# ls
docker-compose-Linux-x86_64-1.27.0
[root@server1 bin]# chmod +x docker-compose-Linux-x86_64-1.27.0

mv docker-compose-Linux-x86_64-1.27.0   docker-compose
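
A quick sanity check that the binary runs:

docker-compose version    # should report 1.27.0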

5-2: Write docker-compose.yml to match the desired setup

[root@server1 compose]# cat docker-compose.yml 
version: "3.9"
services:
  web1:
    image: nginx
    networks:
      - mynet
    volumes:
      - ./web1:/usr/share/nginx/html
  
  web2:
    image: nginx
    networks:
      - mynet
    volumes:
      - ./web2:/usr/share/nginx/html

  haproxy:
    image: haproxy
    networks:
      - mynet
    ports:
      - "80:80"
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg

networks:
  mynet:
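
Before starting anything, the file can be validated: docker-compose parses it and prints the resolved configuration (or a syntax error) with:

docker-compose config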

5-3: Create the directory layout the compose file expects;

[root@server1 compose]#tree .
.
├── docker-compose.yml
├── haproxy
│   └── haproxy.cfg   haproxy configuration file
├── web1
│   └── index.html    web1 document root
└── web2
    └── index.html    web2 document root

3 directories, 4 files

The haproxy configuration file:

[root@server1 haproxy]# cat haproxy.cfg 

global
        maxconn         65535
        stats socket    /var/run/haproxy.stat mode 600 level admin
        log             127.0.0.1 local0
        uid             200
        gid             200
        #chroot          /var/empty
        daemon

defaults
        mode            http
        log             global
        option          httplog
        option          dontlognull
        monitor-uri     /monitoruri
        maxconn         8000
        timeout client  30s
        retries         2
        option redispatch
        timeout connect 5s
        timeout server  5s
        stats uri       /status



# The public 'www' address in the DMZ
frontend public
        bind            *:80 name clear
        #bind            192.168.1.10:443 ssl crt /etc/haproxy/haproxy.pem

        #use_backend     static if { hdr_beg(host) -i img }
        #use_backend     static if { path_beg /img /css   }
        default_backend dynamic

# The dynamic backend balancing the two web services.
backend dynamic
        balance         roundrobin
        server          app1 web1:80 check inter 1000
        server          app2 web2:80 check inter 1000

Once the above is in place, docker-compose commands must be run from the directory containing docker-compose.yml.



Use docker-compose up to create and start the containers.


After inspecting the failure with docker logs compose_haproxy_1 and fixing it, use docker-compose start to bring the containers up (the Ports column then shows all three containers running normally).



At this point the deployment is complete and haproxy load-balances across the two web services; verify with curl or a browser:
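
A sketch of the expected alternation, based on the index.html contents created in the history below:

[root@server1 compose]# curl localhost
web1111111
[root@server1 compose]# curl localhost
web222222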



The shell history, for reference:

  733  cd
  734  pwd
  735  mkdir compose
  736  cd co
  737  cd compose/
  738  ls
  739  mkdir haproxy
  740  cd haproxy/
  741  ls
  742  cp /etc/haproxy/haproxy.cfg ./
  743  ls
  744  cd ..
  745  mkdir web1
  746  cd web1/
  747  echo web1111111 > index.heml
  748  ls
  749  mv index.heml index.html
  750  ls
  751  cd ..
  752  mkdir web2
  753  cd web2
  754  ls
  755  echo web222222 > index.html
  756  ls
  757  cd ..
  758  vim docker-compose.yml
  759  docker images
  760  docker pull haproxy
  761  docker images
  762  docker tag docker.io/haproxy:latest haproxy:latest
  763  docker images
  764  docker rmi docker.io/haproxy
  765  docker images
  766  vim ./haproxy/haproxy.cfg 
  767  systemctl status haproxy.service
  768  vim docker-compose.yml
  769  cd
  770  scp lbin:/home/westos/Downloads/qq-files/1208277683/file_recv/lllllllll/docker-compose-Linux-x86_64-1.27.0 ./
  771  ls
  772  ll docker-compose-Linux-x86_64-1.27.0
  773  mv docker-compose-Linux-x86_64-1.27.0 /usr/local/bin/
  774  cd /usr/local/bin/
  775  ls
  776  mv docker-compose-Linux-x86_64-1.27.0 docker-compose
  777  ls
  778  cd
  779  docker-compose ps
  780  cd compose/
  781  ls
  782  docker-compose ps
  783  docker-compose up
  784  docker ps
  785  docker-compose ps
  786  docker-compose logs
  787  docker-compose ps
  788  netstat -antlupe
  789  netstat -antlupe | grep :80
  790  vim haproxy/haproxy.cfg 
  791  docker-compose ps
  792  docker-compose start
  793  netstat -antlupe | grep :80
  794  ps ax |grep haproxu
  795  ps ax |grep haproxy
  796  netstat -antlupe | grep :80
  797  docker-compose ps
  798  docker-compose start
  799  netstat -antlupe | grep :80
  800  docker-compose ps
  801  docker logs haproxy
  802  docker logs compose_haproxy_1
  803  ll /usr/local/etc/haproxy/haproxy.cfg
  804  vim docker-compose.yml 
  805  docker inspect compose_haproxy_1
  806  docker-compose stop
  807  vim haproxy/haproxy.cfg 
  808  docker-compose up
  809  vim haproxy/haproxy.cfg 
  810  docker-compose ps
  811  docker-compose start
  812  docker-compose ps
  813  docker logs compose_haproxy_1
  814  vim /etc/haproxy/haproxy.cfg 
  815  vim haproxy/haproxy.cfg 
  816  docker-compose stop
  817  docker-compose start
  818  docker-compose ps
  819  netstat -antlupe |grep :80
  820  tree .