Kubernetes Container Introduction and Installation (Part 1)

Tags (space-separated): kubernetes series


  • 1: Introduction to Kubernetes and its features
  • 2: Basic Kubernetes object concepts
  • 3: Preparing the Kubernetes deployment environment
  • 4: Deploying the Kubernetes cluster
  • 5: Running a test example on Kubernetes
  • 6: Configuring the Kubernetes UI

1: Introduction to Kubernetes and its features

1.1 Introduction to Kubernetes

Kubernetes is a container cluster management system that Google open-sourced in June 2014. It is written in Go and is also known as K8s.
K8s grew out of Borg, Google's internal container cluster management system, which had been running in large-scale production at Google for about a decade.
K8s automates the deployment, scaling, and management of containerized applications, providing a complete feature set including resource scheduling, deployment management, service discovery, scaling in and out, and monitoring.
Kubernetes v1.0 was released in July 2015; as of January 27, 2018, the latest stable version was v1.9.2.
The goal of Kubernetes is to make deploying containerized applications simple and efficient.
Official website: www.kubernetes.io

1.2 Main Kubernetes features

 Volumes
Containers within a Pod can share data through volumes.

 Application health checks
A service inside a container may hang and stop handling requests; health-check policies can be configured to keep the application robust.

 Application replication
Controllers maintain the desired number of Pod replicas, ensuring that a Pod, or a group of Pods of the same kind, is always available.

 Autoscaling
Automatically scales the number of Pod replicas based on configured metrics (e.g. CPU utilization).

 Service discovery
Programs inside containers discover a Pod's access address through environment variables or a DNS add-on.

 Load balancing
A group of Pod replicas is assigned a private cluster IP address, and requests are load-balanced across the backend containers. Other Pods inside the cluster can reach the application through this ClusterIP.

 Rolling updates
Services are updated without interruption, one Pod at a time, instead of tearing down the whole service at once.

 Service orchestration
Deployments are described in files, making application deployment more efficient.

 Resource monitoring
Node components embed the cAdvisor resource collector; Heapster aggregates resource data across the cluster's nodes, stores it in the InfluxDB time-series database, and Grafana displays it.

 Authentication and authorization
Supports authentication and authorization policies such as role-based access control (RBAC).

2: Basic Kubernetes object concepts

2.1 Higher-level abstractions built on the basic objects

 ReplicaSet
The next generation of the Replication Controller. It ensures that the specified number of Pod replicas is running at any given time and provides declarative updates.
The only difference between an RC and an RS is label selector support: an RS supports the newer set-based selectors, while an RC supports only equality-based selectors.
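As a sketch of the difference, the manifest below uses a set-based selector (`matchExpressions` with the `In` operator), which an RC's equality-based selector cannot express. The labels, names, and file path are illustrative only:

```shell
# Write a minimal ReplicaSet manifest with a set-based selector.
# An RC selector could only say "app = web"; the In operator below
# matches a whole set of label values.
cat > /tmp/rs-selector-demo.yaml <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 2
  selector:
    matchExpressions:
      - {key: app, operator: In, values: [web, web-canary]}
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
EOF
grep -q "operator: In" /tmp/rs-selector-demo.yaml && echo "set-based selector"
```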

 Deployment
A Deployment is a higher-level API object that manages ReplicaSets and Pods and provides declarative updates.
The official recommendation is to manage ReplicaSets via Deployments rather than using them directly, which means you may never need to manipulate ReplicaSet objects yourself.

 StatefulSet
A StatefulSet suits stateful applications: stable, unique network identifiers, persistent storage, and ordered deployment, scaling, deletion, and rolling updates.

 DaemonSet
A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When a node joins the Kubernetes cluster, the Pod is scheduled onto it; when the node is removed from the cluster, that Pod is removed as well. Deleting a DaemonSet cleans up all the Pods it created.

 Job
A one-off task: once it finishes, the Pod is destroyed and no new container is started. Tasks can also be scheduled to run at fixed times.
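A minimal sketch of a Job manifest (the image, command, and retry values are placeholders; `restartPolicy` must be `Never` or `OnFailure` for a Job):

```shell
# Write a minimal Job manifest: one completion, retried up to 4 times,
# Pod never restarted in place. The pi-computation container is the
# classic illustrative example, not something the cluster above requires.
cat > /tmp/job-demo.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
EOF
grep -q "kind: Job" /tmp/job-demo.yaml && echo "manifest written"
```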

2.2 System architecture and component functions

Master components:
 kube-apiserver
The Kubernetes API server is the unified entry point to the cluster and the coordinator between components. It exposes its interface over an HTTP API; all create, read, update, delete, and watch operations on object resources go through the API server, which then persists them to etcd.
 kube-controller-manager
Handles the cluster's routine background tasks. Each resource has a corresponding controller, and the controller manager is responsible for managing these controllers.
 kube-scheduler
Selects a Node for each newly created Pod according to the scheduling algorithm.

Node components:
 kubelet
The kubelet is the Master's agent on each Node. It manages the lifecycle of containers on that machine: creating containers, mounting volumes for Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
 kube-proxy
Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.
 docker or rocket/rkt
Runs the containers.

Third-party service:
 etcd
A distributed key-value store used to hold cluster state, such as Pod and Service object data.

3: Installing Kubernetes

3.1 Cluster planning

OS:
     CentOS 7.5 x64

Host plan:

master:
172.17.100.11:
    kube-apiserver
    kube-controller-manager
    kube-scheduler
    etcd
slave1:
172.17.100.12:
    kubelet
    kube-proxy
    docker
    flannel
    etcd

slave2:
172.17.100.13:
    kubelet
    kube-proxy
    docker
    flannel
    etcd

3.2 Installing Kubernetes

3.2.1 Installing Docker on all cluster nodes

# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce
# cat << EOF > /etc/docker/daemon.json
{
"registry-mirrors": [ "https://registry.docker-cn.com"],
"insecure-registries":["192.168.0.210:5000"]
}
EOF
# systemctl start docker
# systemctl enable docker


3.2.2 Cluster deployment – self-signed TLS certificates

Install the cfssl certificate-generation tools:

mkdir /ssl
cd /ssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Prepare the certificate files:

cd /ssl/Deploy/

chmod +x certificate.sh

Generate the certificate template files:

cfssl print-defaults config > config.json

cat config.json

cfssl print-defaults csr > csr.json  ### template for certificate signing requests

Generate the certificates and clean up:

cat certificate.sh

./certificate.sh

ls | grep -v pem | xargs rm -f    ### keep only the .pem files

The certificate.sh file:

---
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.17.100.11",
      "172.17.100.12",
      "172.17.100.13",
      "10.10.10.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
---
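After running the script it is worth checking the subject and validity period of the generated certificates. To keep the commands runnable anywhere, the sketch below first creates a throwaway self-signed certificate with the same subject fields as ca-csr.json; against the real files you would point `openssl x509` at /ssl/Deploy/ca.pem or server.pem instead:

```shell
# Create a throwaway self-signed cert shaped like ca.pem (10-year expiry,
# same subject fields as ca-csr.json above).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=k8s/OU=System/CN=kubernetes" \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem 2>/dev/null
# Print the subject and validity window, as you would for the real ca.pem:
openssl x509 -noout -subject -dates -in /tmp/demo-ca.pem
```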

3.2.3 Deploying the etcd cluster

Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12

mkdir -p /opt/kubernetes

mkdir -p /opt/kubernetes/{bin,cfg,ssl}

cd /etcd/

tar -zxvf etcd-v3.2.12-linux-amd64.tar.gz

cd etcd-v3.2.12-linux-amd64/

mv etcdctl /opt/kubernetes/bin/

mv etcd /opt/kubernetes/bin/


vim /opt/kubernetes/cfg/etcd
---
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.17.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.17.100.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.17.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.17.100.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.17.100.11:2380,etcd02=https://172.17.100.12:2380,etcd03=https://172.17.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
---
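The file above is a plain shell environment file that the systemd unit consumes via `EnvironmentFile`, so a syntax error in it silently breaks startup. A quick sketch that sources a copy to confirm it parses (temp path assumed):

```shell
# Duplicate the etcd01 config into a temp path, source it, and echo a
# value -- if the file has a shell syntax error, sourcing fails loudly.
cat > /tmp/etcd-cfg-check <<'EOF'
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.17.100.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.17.100.11:2379"
EOF
. /tmp/etcd-cfg-check
echo "name=${ETCD_NAME} client=${ETCD_LISTEN_CLIENT_URLS}"
```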
vim /usr/lib/systemd/system/etcd.service
---
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

---
cd /ssl/pem/
cp -ap *pem /opt/kubernetes/ssl/
Start etcd:

systemctl start etcd 

ps -ef |grep etcd 


scp -r /opt/kubernetes  172.17.100.12:/opt/
scp -r /opt/kubernetes  172.17.100.13:/opt/

scp /usr/lib/systemd/system/etcd.service  172.17.100.12:/usr/lib/systemd/system/

scp /usr/lib/systemd/system/etcd.service  172.17.100.13:/usr/lib/systemd/system/

On 172.17.100.12, edit the config file:

vim /opt/kubernetes/cfg/etcd 

---

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.17.100.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.17.100.12:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.17.100.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.17.100.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.17.100.11:2380,etcd02=https://172.17.100.12:2380,etcd03=https://172.17.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

---

systemctl start etcd 

chkconfig etcd on

On 172.17.100.13, edit the config file:

vim /opt/kubernetes/cfg/etcd 

---

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.17.100.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.17.100.13:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.17.100.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.17.100.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.17.100.11:2380,etcd02=https://172.17.100.12:2380,etcd03=https://172.17.100.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

---
systemctl start etcd 

chkconfig etcd on


vim /etc/profile 

export K8S_HOME=/opt/kubernetes

PATH=$PATH:$HOME/bin:$K8S_HOME/bin
---

source /etc/profile

--

etcdctl --help 

etcdctl --help |grep ca



Check the cluster status:

cd /opt/kubernetes/ssl/

# /opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" \
cluster-health



3.2.4 Cluster deployment – deploying the Flannel network

Overlay network: a virtual network layered on top of the underlying physical network, in which hosts are connected by virtual links.
VXLAN: encapsulates the original packet in UDP, using the underlay network's IP/MAC addresses as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers the data to the target address.
Flannel: an overlay network implementation that likewise encapsulates the original packet inside another network packet for routing, forwarding, and communication; it currently supports UDP, VXLAN, AWS VPC, and GCE route-based forwarding, among other backends.
Other mainstream multi-host container networking options include tunnel-based solutions (Weave, Open vSwitch) and route-based solutions (Calico).



1) Write the allocated subnet range into etcd for flanneld to use:
# /opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" \
set /coreos.com/network/config '{ "Network": "10.0.0.0/16", "Backend": {"Type": "vxlan"}}'

2) Download the binary package:
# wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
3) Configure Flannel
4) Manage Flannel with systemd
5) Configure Docker to start with the assigned subnet

Deployment:
 mkdir /root/flannel
 tar -zxvf flannel-v0.9.1-linux-amd64.tar.gz
 cp -p flanneld mk-docker-opts.sh /opt/kubernetes/bin/
 scp flanneld mk-docker-opts.sh 172.17.100.12:/opt/kubernetes/bin/
 scp flanneld mk-docker-opts.sh 172.17.100.13:/opt/kubernetes/bin/

The flannel configuration script:

---
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF
---

Configure the flanneld options file on node 172.17.100.12:
---
vim /opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
---
Configure the flanneld systemd unit:

vim /usr/lib/systemd/system/flanneld.service

---
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

---
On 172.17.100.11:

Set up the VXLAN network:
cd /opt/kubernetes/ssl/

etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" \
set /coreos.com/network/config '{ "Network": "10.0.0.0/8", "Backend": {"Type": "vxlan"}}'

Verify on the nodes.
172.17.100.12:

cd /opt/kubernetes/ssl/

etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" get /coreos.com/network/config


172.17.100.13

cd /opt/kubernetes/ssl/

etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" get /coreos.com/network/config

ifconfig

A flannel.1 interface with the allocated subnet will appear.

cat /run/flannel/subnet.env
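subnet.env is a plain shell environment file written by flanneld at startup; mk-docker-opts.sh derives Docker daemon options from it. A rough sketch of that derivation, using sample values (the real file's subnet depends on the lease flanneld obtains):

```shell
# Reproduce the idea behind mk-docker-opts.sh: read flannel's leased
# subnet and MTU, and build the --bip/--mtu options Docker should use.
# Values below are examples only.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.0.0.0/16
FLANNEL_SUBNET=10.0.23.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
. /tmp/subnet.env
DOCKER_NETWORK_OPTIONS="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
echo "$DOCKER_NETWORK_OPTIONS"
```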



Configure Docker to load the flanneld network options at startup:

---
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
---

Restart Docker:

systemctl daemon-reload

service docker restart



Deploy 172.17.100.13:

---
Sync the files over from 172.17.100.12:

cd /usr/lib/systemd/system

scp /opt/kubernetes/cfg/flanneld 172.17.100.13:/opt/kubernetes/cfg/

scp flanneld.service 172.17.100.13:/usr/lib/systemd/system

scp docker.service 172.17.100.13:/usr/lib/systemd/system

Start flanneld and restart Docker:

service flanneld start 

service docker restart 

chkconfig flanneld on 

ifconfig |more 

Test:
From 172.17.100.12, verify that the flanneld network is reachable:

ping 10.0.23.1

On the master, list the flannel subnets:

etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" ls /coreos.com/network/subnets

Look up which node owns a given flannel subnet:

etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379" get /coreos.com/network/subnets/10.0.23.0-24


route -n 


3.2.5 Cluster deployment – creating the Node kubeconfig files

Upload the kubeconfig.sh file to the master node, 172.17.100.11.

1. Create the TLS bootstrapping token:
---
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
----
Inspect the token file:

cat token.csv

---

6a694cc8d6e025e97ea74c1a14cff8bf,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
---
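The token generation can be sanity-checked in isolation. A sketch writing to /tmp (path assumed) that verifies the token has the expected 32-hex-character length and the csv has the token,user,uid,"group" shape the API server expects:

```shell
# 16 random bytes rendered as hex -> a 32-character bootstrap token,
# then the single-line csv in the format used by --token-auth-file.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /tmp/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Token length check (should be 32):
echo -n "$BOOTSTRAP_TOKEN" | wc -c
```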


2. Generate the bootstrap.kubeconfig file.
Set KUBE_APISERVER:

export KUBE_APISERVER="https://172.17.100.11:6443"

Upload the kubectl binary to /opt/kubernetes/bin:

cd /opt/kubernetes/bin

chmod +x kubectl

# Set the cluster parameters

cd /opt/kubernetes/ssl/

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

This generates the bootstrap.kubeconfig file.


Set the client credentials:
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig


# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig


# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig


cat bootstrap.kubeconfig


3. Create the kube-proxy kubeconfig file:

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
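For reference, the four commands above produce a file of roughly the following shape. This is a sketch with placeholder data written to /tmp; with `--embed-certs=true`, kubectl embeds the real base64-encoded PEM material instead of the placeholders:

```shell
# Sketch of the kubeconfig structure generated by set-cluster,
# set-credentials, set-context, and use-context. Placeholder base64
# values stand in for the embedded certificates and keys.
cat > /tmp/kube-proxy.kubeconfig.sketch <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: PLACEHOLDER_BASE64_CA
    server: https://172.17.100.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
users:
- name: kube-proxy
  user:
    client-certificate-data: PLACEHOLDER_BASE64_CERT
    client-key-data: PLACEHOLDER_BASE64_KEY
EOF
grep -q "current-context: default" /tmp/kube-proxy.kubeconfig.sketch && echo ok
```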

cat kube-proxy.kubeconfig

Sync bootstrap.kubeconfig and kube-proxy.kubeconfig to the other nodes:

cp -p *kubeconfig /opt/kubernetes/cfg

scp *kubeconfig 172.17.100.12:/opt/kubernetes/cfg/

scp *kubeconfig 172.17.100.13:/opt/kubernetes/cfg/



4: Kubernetes cluster deployment – getting the K8s binary packages

Download the k8s packages:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md

Download the k8s 1.9.2 server package:
kubernetes-server-linux-amd64.tar.gz

4.1 Deploying the master node

Upload master.zip to /root/master on 172.17.100.11:

mkdir master
cd master 
unzip master.zip 

cp -p kube-controller-manager kube-apiserver kube-scheduler /opt/kubernetes/bin/

cd /opt/kubernetes/bin/

chmod +x * 



cp -p /root/token.csv /opt/kubernetes/cfg/

cd /root/master/
./apiserver.sh 172.17.100.11 https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379


cd /opt/kubernetes/cfg/

cat kube-apiserver

---
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.17.100.11:2379,https://172.17.100.12:2379,https://172.17.100.13:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.17.100.11 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.17.100.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
---

cd /usr/lib/systemd/system/
cat kube-apiserver.service
---
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

---

Start kube-apiserver:

service kube-apiserver start
ps -ef |grep apiserver 


Run the controller-manager script:
./controller-manager.sh 127.0.0.1

ps -ef |grep controller 


Start the scheduler:
./scheduler.sh 127.0.0.1 

ps -ef |grep scheduler


cd /opt/kubernetes/cfg/
cat kube-controller-manager
---
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
---

cat kube-scheduler

---

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

---
Check the component status:

kubectl get cs


4.2 Deploying the node components

On 172.17.100.12 and 172.17.100.13:

mkdir /root/node

Upload node.zip to /root/node:

cd /root/node

unzip node.zip


cp -p kube-proxy kubelet /opt/kubernetes/bin/

cd /opt/kubernetes/bin/
chmod +x * 

On node 172.17.100.12:

cd /root/node 
chmod +x *.sh 
./kubelet.sh 172.17.100.12 10.10.10.2

----
Note this error:
Jul 11 15:40:18 node-02 kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
-----

On the master (172.17.100.11), run:

kubectl create clusterrolebinding  kubelet-bootstrap  --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Then restart kubelet on the node:

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet


Configure kube-proxy:

./proxy.sh 172.17.100.12

ps -ef |grep proxy


On the master, check the certificate signing requests:

kubectl get csr 


kubectl certificate approve node-csr-3B70dKcCjJuitWcWTjqb2rjadH1ld4Tq0mU9QAd5j7I

kubectl get csr


The node has joined the cluster:
kubectl get node


Certificates now appear on 172.17.100.12:
cd /opt/kubernetes/ssl/
ls 



On node 172.17.100.13:

cd /root/node 
./kubelet.sh 172.17.100.13 10.10.10.2

./proxy.sh 172.17.100.13


Run on the master (172.17.100.11):

kubectl get csr

kubectl certificate approve node-csr-ubm9Uq4P7VhzB_zryLhH3WM5SbpaunS5sg9cYqG5wLA


kubectl get csr


kubelet certificates are generated automatically on 172.17.100.13:

cd /opt/kubernetes/ssl
ls


On the master, run:

kubectl get node



kubectl get cs

At this point the Kubernetes deployment is complete.


5: Running a test example

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx

# kubectl get all 


kubectl get pod -o wide 


# kubectl get svc nginx


Access over the flanneld network:

curl -i 10.10.10.235:88 


External access (NodePort):

172.17.100.12:40463


6: The Kubernetes UI

mkdir -p /root/ui

Upload the files
dashboard-deployment.yaml  dashboard-rbac.yaml  dashboard-service.yaml
to /root/ui:
cd /root/ui
ls


Create the resources:
# kubectl create -f dashboard-rbac.yaml
# kubectl create -f dashboard-deployment.yaml
# kubectl create -f dashboard-service.yaml

### View the pods and services

kubectl get pods --all-namespaces

kubectl get svc --all-namespaces


Open in a browser:

http://172.17.100.12:41389/ui

This completes the Kubernetes UI installation.