Summary:

mnt;securityContext;privileged;/usr/sbin/init;nslookup;spec.serviceName;headless Service;network.xml ip_a="172.29.115.138";application adaptation tips;IPPool;ClusterIP;spec.ports;port;targetPort;containerPort;initContainers;postStart;mid.xml;

Problems:

  • Starting the image with docker and running mnt by hand, the other processes auto-start fine; but when the same image runs as a k8s pod, everything except mid fails to auto-start.
  • Adding CMD ["/bin/sh","-c"," /usr/sbin/init "] to the Dockerfile changes nothing.
  • After adding chmod 6777 /home/cdatc/AirNet/bin/* to the Dockerfile it works; also make sure the container has sufficient resources.
With the securityContext below configured in the yaml, everything except mid still fails to auto-start:
          securityContext:
            runAsUser: 0     
            privileged: true  
            allowPrivilegeEscalation: true
            capabilities: 
              add: ["SYS_ADMIN","SYS_RESOURCE","SYS_TIME"]
  • Controlling process start/stop on the containerized MSDP2 server from the smc monitor fails; cause: besides a route to the pod's host node, the smc host also needs an address in the same subnet (e.g. 192.168.31.31).
  • hostname="FDP2" in the SMC host's network.xml must match that server's hostname exactly, case included, otherwise it fails as below:
[root@SMC1 log]# tailf SMC1_smc_20240222.log | grep  stop
2024-02-22 07:54:31 341 Info  qnh stop fail qnh stop fail(CDATC)         <--- fail when the hostname case does not match
2024-02-22 07:59:16 691 Info  qnh stop success qnh stop success(CDATC)   <--- success when the hostname matches
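The case-sensitivity rule above can be checked mechanically before rollout. A minimal sketch, using a hypothetical sample file (the path and `real_host` value are illustrative; on a real server the hostname would come from `hostname`):

```shell
#!/bin/sh
# Illustrative check: the hostname= attribute in network.xml must equal the
# server's hostname byte for byte, case included.
xml=/tmp/network_check.xml
cat > "$xml" <<'EOF'
<node hostname="FDP2" showname="FDP2" />
EOF
xml_host=$(sed -n 's/.*hostname="\([^"]*\)".*/\1/p' "$xml" | head -n1)
real_host=FDP2                 # on a real server: real_host=$(hostname)
if [ "$xml_host" = "$real_host" ]; then
  echo "hostname match: $xml_host"
else
  echo "MISMATCH: xml=$xml_host host=$real_host (smc reports stop fail)"
fi
```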


  • The k8s pod's command can only be init; how to auto-start init/redis/mnt in order (tip: privileged: true; /usr/sbin/init)
kind: Pod 
spec:
    containers:
          securityContext:
            runAsUser: 0     
            privileged: true  
          command: ["/bin/sh","-c"," /usr/sbin/init "]

1. Adding the redis startup commands to /etc/rc.local inside the container: they are not run after the docker/k8s container starts, which shows the container boot process does not invoke rc.local

more /etc/rc.local 
#auto start redis
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis.conf   1>/dev/null 2>&1
#auto start redis for maps 
/usr/local/redis/bin/redis-server /usr/local/redis/conf/redis_6378.conf 1>/dev/null 2>&1
nohup /home/cdatc/AirNet/bin/mnt  > /dev/null 2>&1 &
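A likely reason rc.local is ignored: on CentOS 7 (systemd), /etc/rc.d/rc.local is only executed when the file is executable and rc-local.service is enabled (`systemctl enable rc-local`); editing /etc/rc.local alone is not enough. A sketch of the permission check, demonstrated on a temp copy instead of the real path:

```shell
#!/bin/sh
# Freshly created file: not executable, so rc-local.service would skip it.
rc=/tmp/rc.local.demo
umask 022
: > "$rc"
[ -x "$rc" ] && echo "would run" || echo "ignored (not executable)"
chmod +x "$rc"                 # equivalent of: chmod +x /etc/rc.d/rc.local
[ -x "$rc" ] && echo "would run" || echo "ignored (not executable)"
```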

2. Running the image with --privileged=true and /usr/sbin/init, with redis-server packaged as a systemd service: tested OK

dockerfile:
ADD  redis.tar.gz /usr/local
COPY mnt6378.service  /usr/lib/systemd/system/mnt6378.service
RUN ln -s /usr/lib/systemd/system/mnt6378.service /etc/systemd/system/multi-user.target.wants/mnt6378.service 
COPY mnt.service  /usr/lib/systemd/system/mnt.service
RUN ln -s /usr/lib/systemd/system/mnt.service /etc/systemd/system/multi-user.target.wants/mnt.service
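The unit files themselves are not shown in these notes; a minimal sketch of what mnt.service could look like (the ExecStart path is taken from the Dockerfile; the Type/Restart options are assumptions, not the actual unit):

```ini
[Unit]
Description=AirNet mnt process
After=network.target redis.service

[Service]
# Type=forking is an assumption; use Type=simple if mnt stays in the foreground
Type=forking
ExecStart=/home/cdatc/AirNet/bin/mnt
Restart=always

[Install]
WantedBy=multi-user.target
```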

3. Auto-starting redis/mnt in sequence inside the container: because of mnt, the container must be started via /usr/sbin/init (--privileged=true grants host-level root; /usr/sbin/init is needed to start dbus-daemon). With systemctl then available, redis/mnt were packaged as self-starting services (redis6378.service, redis.service, mnt.service).

FROM centos:centos7.9.2009
USER root
RUN yum install libudisks2-devel sudo -y 
COPY lib64/*  /usr/lib64/
COPY ldconf/*  /etc/ld.so.conf.d/
ADD  AirNet.tar.gz /usr/
ADD  AirNet-cdatc.tar.gz   /home/cdatc/
ADD  redis.tar.gz /usr/local
COPY redis6378.service  /usr/lib/systemd/system/redis6378.service
RUN ln -s /usr/lib/systemd/system/redis6378.service /etc/systemd/system/multi-user.target.wants/redis6378.service 
COPY redis.service  /usr/lib/systemd/system/redis.service
RUN ln -s /usr/lib/systemd/system/redis.service /etc/systemd/system/multi-user.target.wants/redis.service 
COPY mnt.service  /usr/lib/systemd/system/mnt.service
RUN ln -s /usr/lib/systemd/system/mnt.service /etc/systemd/system/multi-user.target.wants/mnt.service 
RUN  ldconfig
RUN chmod 6777  /home/cdatc/AirNet/bin/*
WORKDIR  /home/cdatc/AirNet/bin/
ENV HOSTNAME=msdp2
ENTRYPOINT ["/bin/sh","-c"," /usr/sbin/init "]

4. Separately, the Deployment problem: the pod's hostname is msdp2-7545fffb69-k9n5k, so mnt fails at startup because the hostname is not found in network.xml. Switched to a StatefulSet for testing, but then the hostname format in network.xml must change to <statefulset name>-<ordinal>, i.e. the stateful msdp-1 / msdp-2 form.

  • Pods are created in order from ordinal 0 to N-1; k8s v1.26 added the spec.ordinals.start field to control the starting ordinal, but AirNet defaults to the msdp1 form, while the StatefulSet form msdp-1 has a "-" before the ordinal
spec:
  ordinals: 
    start: 1
  • msdp-1 and msdp-2 both need fixed IPs, but because of "cannot have more than one IPv4 address for "cni.projectcalico.org/ipAddrs" annotation" only msdp-1's IP can be specified, and msdp-2 fails to start without one. See section 6 below: solved via IPPools.
annotations:
        cni.projectcalico.org/ipAddrs: "[\"172.29.115.154\"]"   
---> Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox: plugin type="calico" failed (add): cannot have more than one IPv4 address for "cni.projectcalico.org/ipAddrs" annotation

5. Problems:

1) Problem 1 (cause: the hostname in mid.xml; OK after changing it): right after moving fdp from a VM to a container (StatefulSet, and also a standalone container), the symptoms (screenshots omitted): every track showed an extra plan track, and manual correlation on SDD did nothing; after a while everything on SDD displayed as non-controlled. (VM and container can ping each other.)


/home/AirNet/config/gconf/mid/mid.xml
 <!--Local-mode decision: multiple hostnames separated by commas; multiple program names may be configured; it only counts as down when ALL listed programs are down; the innermost tags are OR'ed against each other    -->
  <systemmode>
	<degradeds position="ACC">
		<degraded hostname="FDP-1,FDP2" proname="fdp"/>
		<degraded hostname="FDP-1,FDP2" proname="mdp"/>
		<degraded hostname="MSDP-1,MSDP-2" proname="snet"/>
	 </degradeds>
	<bypasses position ="ACC">
		<bypass hostname="SDFP-1,SDFP-2" proname="afp,rfp"/>
		<bypass hostname="MSDP-1,MSDP-2" proname="msdp"/>
		<bypass hostname="MSDP-1,MSDP-2" proname="adp,rdp"/>
	 </bypasses>
	<emergencys position ="ACC">
		<emergency hostname="SDFP-1,SDFP-2,BSFP1" proname="afp,rfp,brfp,bafp"/>
		<emergency hostname="SDFP-1,SDFP-2,BSDP1" proname="afp,rfp,brdp,badp"/>
		<emergency hostname="SDFP-1,SDFP-2,BSDP1" proname="afp,rfp,bsdp"/>
		<emergency hostname="MSDP-1,MSDP-2,BSDP1" proname="msdp,bsdp"/>
		<emergency hostname="MSDP-1,MSDP-2,BSFP1" proname="msdp,brfp,bafp"/>
		<emergency hostname="MSDP-1,MSDP-2,BSDP1" proname="adp,rdp,brdp,badp"/>
		<emergency hostname="MSDP-1,MSDP-2,BSFP1" proname="adp,rdp,brfp,bafp"/>
	 </emergencys>
  </systemmode>


  • Tested with a plain Pod and the hyphen-less hostname fdp2 as well; same symptom! So unlike the mdp/qnh processes (the LINKERROR cause), this is not about the hostname fdp-2/fdp2 having a hyphen or not.
  • !!! Running fdp in docker with --net=host as below works fine, including manual correlation
[root@k8s-master01 home]# docker run -itd  --privileged=true --net=host --name fdp2 airnet:v1.0
[root@k8s-master01 home]# hostname FDP2
[root@k8s-master01 home]# ip a
4: enp4s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.5.59/24 scope global enp4s0f0
       valid_lft forever preferred_lft forever
    inet 192.168.10.59/24 scope global enp4s0f0
       valid_lft forever preferred_lft forever
  • With the containerized fdp active, creating a plan on SDD works (capture shows multicast port 40072, and the fdp-2_fdp log records it); but manual correlation leaves no log entry at all (capture again shows multicast port 40072, yet fdp-2_fdp_20240212_0611.log records nothing)
  • 10006prodata->40006: manual correlate/de-correlate on SDD is logged here with "PAIR_COUPLED_MANUAL" / "PAIR_UNCOUPLED_MANUAL"
  • 10072prodata->40072: plan creation, correlate, de-correlate, ASSUME etc. on SDD all carry this 10072prodata;
---> creating a plan on TWR-1
[root@fdp-2 log]# tail -f fdp-2_fdp_20240212_0005.log  |grep CSZ5666
2024-02-12 03:14:40.039,308,info,0,# FdpCoreTrackManager::dealPlanEdit from[TWR-1] {{"planHead":{"hostName":"TWR-1","proName":"SDD","lpsName":"TWR","timestamp":"2024-02-12 03:14:38.194","dataType":1,"createKey":"{f627e724-60bc-47f6-abf2-01d4228fec1e}"},"act":["ACT_ADD","ACT_EDIT"],"editPlan":{"flightEdit":{"callSign":"CSZ5666",
---> with container fdp-2 active, manual correlation MCOU on TWR-1 leaves nothing in the fdp-2_fdp log!!! Capture on the container's host node below: multicast port 40006 carries no packets for this flight
[root@k8s-node04 home]# tcpdump -i enp4s0f0 -XX  -s0 -nn -vv  port 40006 |grep CSN5982
---> VM 192.168.5.58 works normally; capture below:
192.168.5.58.27417 > 225.1.0.1.40006
    0x0040:  3eb8 0000 ff00 0000 0000 0001 4b00 3130  >...........K.10
    0x0050:  3030 3670 726f 6461 7461 0010 e7ff 877b  006prodata.....{
	0x0100:  4644 5031 0000 0000 0000 6664 7000 0000  FDP1......fdp...
	0x0110:  0000 0000 0a04 496e 666f 1206 466c 6967  ......Info..Flig
	0x0120:  6874 1a07 4353 5a35 3737 3722 0d54 5752  ht..CSZ5777".TWR
	0x0130:  2f54 5752 2d31 2f53 4444 2a04 5a55 5446  /TWR-1/SDD*.ZUTF
	0x0140:  3204 5a42 594e 4209 3831 3736 3734 3538  2.ZBYNB.81767458
	0x0150:  325a 1163 6f75 706c 655f 6163 6e5f 7374  2Z.couple_acn_st
	0x0160:  6174 7573 6230 3831 3736 3734 3538 322c  atusb0817674582,
	0x0170:  4353 5a35 3737 372c 5a55 5446 2c5a 4259  CSZ5777,ZUTF,ZBY
	0x0180:  4e23 2050 4149 525f 434f 5550 4c45 445f  N#.PAIR_COUPLED_
	0x0190:  4d41 4e55 414c                           MANUAL
---> 10072prodata   // plan creation, correlate, de-correlate, ASSUME etc. on SDD all carry this 10072prodata
# tcpdump -i enp4s0f0 -XX  -s0 -nn -vv  ! arp and  src host 192.168.5.200 and dst host ! 192.168.5.248 and ! 192.168.5.244 and port ! 40001 and ! 40002 and ! 40003
    192.168.5.200.25355 > 225.1.0.1.40072: [bad udp cksum 0xa8b9 -> 0x567d!] UDP, length 301
        0x0000:  0100 5e01 0001 001b 7858 dbd8 0800 45b8  ..^.....xX....E.
        0x0010:  0149 cd58 4000 2011 e420 c0a8 05c8 e101  .I.X@...........
        0x0020:  0001 630b 9c88 0135 a8b9 0074 9c88 0400  ..c....5...t....
        0x0030:  8a7f a090 def2 1b74 0115 0000 002b 0000  .......t.....+..
        0x0040:  0023 0000 ff00 0000 0000 0001 0a00 3130  .#............10
        0x0050:  3037 3270 726f 6461 7461 0030 631d edfc  072prodata.0c...
        0x00e0:  0000 0000 0000 0001 0000 0041 4343 0000  ...........ACC..
        0x00f0:  0000 0030 0200 0000 0000 0036 3837 0a32  ...0.......687.2
        0x0100:  0a05 5457 522d 3112 0353 4444 1a03 5457  ..TWR-1..SDD..TW
        0x0110:  5222 1732 3032 342d 3032 2d31 3220 3134  R".2024-02-12.14
        0x0120:  3a30 303a 3130 2e33 3432 30d9 e1fe 8503  :00:10.3420.....
        0x0130:  5801 1201 0318 0120 af05 2a1b 1a19 1217  X.........*.....
        0x0140:  0a12 5457 5240 4143 4b54 5241 4e53 4645  ..TWR@ACKTRANSFE
        0x0150:  524f 5554 1201 54                        ROUT..T

2) Problem 2 (confirmed): the server-side network.xml must not use a domain name (ip_a="msdp-2.msdp.vm-airnet") and must not have spaces around the IP address (ip_a="172.29.115.138"); otherwise SMC cannot monitor the node and there are no radar targets. The program parses the value as a plain string and does not trim surrounding spaces. Presumably outgoing packets embed the source IP taken from this field; since no DNS resolution happens, the raw domain string is written as-is, which is why SMC loses monitoring and radar targets disappear.
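Because the parser does not trim the value, a small lint pass over network.xml can catch both failure modes (domain names and stray spaces) before deployment. A sketch using a hypothetical sample file:

```shell
#!/bin/sh
# Hypothetical lint: flag ip_a values that are not a bare dotted-quad
# (leading/trailing spaces or a domain name break the parser, which treats
# the value as a raw string).
xml=/tmp/network_lint.xml
cat > "$xml" <<'EOF'
<node hostname="msdp-2" ip_a=" 172.29.115.138" />
<node hostname="msdp-1" ip_a="172.29.115.137" />
EOF
result=$(grep -o 'ip_a="[^"]*"' "$xml" | while read -r a; do
  v=${a#ip_a=\"}; v=${v%\"}
  case "$v" in
    *[!0-9.]*|"") echo "BAD ip_a: [$v]" ;;
    *)            echo "OK  ip_a: [$v]" ;;
  esac
done)
echo "$result"
```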

3) (Unconfirmed) On the SMC side, with domain names in network.xml, monitoring of sdfp-1/sdfp-2 is normal, but active/standby switchover and process start/stop fail; with IPs they work. Sometimes process start/stop also works with domain names while switchover still fails; in testing, whenever the IP-based sdfp pair switched over normally, the domain-based fdp pair also switched normally; as long as one pair works, the rest do. In short, with IPs, switchover control and process start/stop are generally OK.

<node hostname="sdfp-1"  radar="A" showname="SDFP1" position="ACC" logic_position="ACC" stationno="1" bakenode="2"   grouptype="SERVER" ip_a="172.27.29.172" ip_b="192.168.6.50" ip_c="192.168.7.50" ip_interface_a="192.168.10.50" ip_interface_b="192.168.11.50"  netcardname="eth0,eth1,eth2"  interface_netcardname="eth4,eth5" monitor_switchid="1,2,3,5,6,11,12,13" connect_switchid="5,6" type="SDFP" groupname="SDFP" supportsync="1" binid="rfp,afp" />
<node hostname="sdfp-2"  radar="B" showname="SDFP2" position="ACC" logic_position="ACC" stationno="2" bakenode="1"   grouptype="SERVER" ip_a="172.27.29.130" ip_b="192.168.6.51" ip_c="192.168.7.51" ip_interface_a="192.168.10.51" ip_interface_b="192.168.11.51"  netcardname="eth0,eth1,eth2"  interface_netcardname="eth4,eth5" monitor_switchid="1,2,3,5,6,11,12,13" connect_switchid="5,6" type="SDFP" groupname="SDFP" supportsync="1" binid="rfp,afp" />
--->SMC
<node hostname="sdfp-1"  radar="A" showname="sdfp-1" position="ACC" logic_position="ACC" stationno="1" bakenode="2"   grouptype="SERVER" ip_a="sdfp-1.sdfp.vm-airnet" ip_b="192.168.6.50" ip_c="192.168.7.50" ip_interface_a="192.168.10.50" ip_interface_b="192.168.11.50"  netcardname="eth0,eth1,eth2"  interface_netcardname="eth4,eth5" monitor_switchid="1,2,3,5,6,11,12,13" connect_switchid="5,6" type="SDFP" groupname="SDFP" supportsync="1" binid="rfp,afp" />
<node hostname="sdfp-2"  radar="B" showname="sdfp-2" position="ACC" logic_position="ACC" stationno="2" bakenode="1"   grouptype="SERVER" ip_a="sdfp-2.sdfp.vm-airnet" ip_b="192.168.6.51" ip_c="192.168.7.51" ip_interface_a="192.168.10.51" ip_interface_b="192.168.11.51"  netcardname="eth0,eth1,eth2"  interface_netcardname="eth4,eth5" monitor_switchid="1,2,3,5,6,11,12,13" connect_switchid="5,6" type="SDFP" groupname="SDFP" supportsync="1" binid="rfp,afp" />
hosts file:
127.0.0.1 SMC1 localhost
172.27.29.186   sdfp-1.sdfp.vm-airnet
172.27.29.181   sdfp-2.sdfp.vm-airnet

6. Tips:

  • The k8s node running the container needs a route to the second site's 192.168.5.0/24 segment
ip route add 192.168.5.0/24 dev enp4s0f0
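The `ip route add` above does not survive a reboot; on CentOS 7 the network-scripts route file convention can persist it (a sketch; the interface name is site-specific):

```shell
# persist the route across reboots (CentOS 7 network-scripts convention)
echo "192.168.5.0/24 dev enp4s0f0" >> /etc/sysconfig/network-scripts/route-enp4s0f0
```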
  • Multicast routes: smcrouted must be re-run after a cali interface is recreated, because when a pod shuts down and its cali interface is deleted, that interface's multicast routes are removed automatically.
  • With a StatefulSet, the pod hostname pattern is <statefulset name>-<ordinal>; because of the extra "-" between name and ordinal, adjust the xml files accordingly;
  • Symptom: naming sdfp2 in the StatefulSet form sdfp-2 (<statefulset name>-<ordinal>) was the cause of SDD showing no radar signal! (monitoring status stays normal, neither yellow nor red); on the FDP server it shows up as the mdp/qnh processes (LINKERROR): set the server name in the config files, e.g. value="SDFP-2", with the hyphen, case-insensitive; also add an IP in the same 192.168.10.59/24 segment as the Nport device;
  • Also, Pods are created in order from ordinal 0 to N-1; k8s v1.26 added the spec.ordinals.start field to control the starting ordinal;
  • When the StatefulSet has replicas: 2, i.e. more than 1, pod IPs are auto-assigned and cannot be pinned, because "plugin type="calico" failed (add): cannot have more than one IPv4 address for "cni.projectcalico.org/ipAddrs" annotation"
  • Fixing multiple IPs is only possible via IPPools (see "Kubernetes 网络插件 Calico 完全运维指南" on zhihu.com)
SDFP:   SDFP1 SDFP2 MSDP1 MSDP2 FDP2
 /home/AirNet/config/atc_sfp_adsb_offline.xml
 /home/AirNet/config/atc_sfp_radar_offline.xml
 //Server name value="SDFP-2": add the hyphen; case-insensitive.
 <node param="HOSTNAME" tip="服务器名称" value="SDFP-2"/>
FDP:
 /home/AirNet/config/atc_qnh_offline.xml
 /home/AirNet/config/atc_mdp_offline.xml
  • Control the Pod starting ordinal:
   ordinals: 
    start: 1  
replicas: 2, i.e. more than 1: pod IPs are auto-assigned and cannot be specified
      # annotations:
        # cni.projectcalico.org/ipAddrs: "[\"172.29.115.154\"]"  
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox: plugin type="calico" failed (add): cannot have more than one IPv4 address for "cni.projectcalico.org/ipAddrs" annotation
  • start: 1 makes ordinals begin at 1, giving the names sdfp-1 / sdfp-2; IPs are fixed via IPPools (problem: IPs were not handed out in STS creation order, so addresses were still not fixed; using one IPPool per pod, "[\"pool-sdfp-1\",\"pool-sdfp-2\"]", tested OK); mountPath: /home/cdatc/AirNet/config is mounted so the container's config can be edited conveniently from the host.
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: pool-sdfp-1
spec:
  blockSize: 32
  cidr: 172.27.29.130/32
  ipipMode: Always
  natOutgoing: true        
---
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: pool-sdfp-2
spec:
  blockSize: 32
  cidr: 172.27.29.131/32
  ipipMode: Always
  natOutgoing: true 
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sdfp
  namespace: vm-airnet
  labels:   
    app: sdfp
    version: stable  
spec:
  serviceName: "sdfp" 
  replicas: 2
  ordinals: 
    start: 1  
  selector: 
    matchLabels:
      name: sdfp
  template:
    metadata:
      annotations:     
        cni.projectcalico.org/ipv4pools: "[\"pool-sdfp-1\",\"pool-sdfp-2\"]"        
      labels:
        name: sdfp
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-node07
      restartPolicy: Always 
      containers:
      - image: airnet:v1.0  
        name: sdfp
        imagePullPolicy: IfNotPresent
        securityContext:   
          privileged: true         
        workingDir: /home/cdatc/AirNet/bin/                      
        resources:
          requests:
            cpu: "4000m" 
            memory: "6Gi"
          limits:
            cpu: "6000m"
            memory: "8Gi"
        volumeMounts:
        - name: bin-log
          mountPath: /home/cdatc/AirNet/bin/log
        - name: config
          mountPath: /home/cdatc/AirNet/config  
      volumes:
      - name: bin-log
        emptyDir: {}       
      - name: config
        hostPath:
          path: /home/AirNet/config
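After applying the two IPPools and the StatefulSet, the assignment can be verified from the master (a sketch; assumes calicoctl is installed and kubectl has cluster access):

```shell
# check that each pod received its pinned address
kubectl -n vm-airnet get pod -o wide
# inspect the single-address pools
calicoctl get ippool -o wide
```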
  • During the build, non-essential output (e.g. redirected error output) can be kept out of the image or discarded outright, e.g. with >&/dev/null
  • The AirNet-cdatc.tar.gz package already ships the executables with permissions -rwsrwsrwx
FROM centos:centos7.9.2009
LABEL version="AirNet v1.0" description=" libudisks2-devel,sudo,mnt,entrypoint " by="mi_zy"
USER root
COPY lib64/*  /usr/lib64/
COPY ldconf/*  /etc/ld.so.conf.d/
ADD  AirNet.tar.gz /usr/
ADD  AirNet-cdatc.tar.gz   /home/cdatc/
ADD  redis.tar.gz /usr/local
COPY redis6378.service  /usr/lib/systemd/system/redis6378.service
COPY redis.service  /usr/lib/systemd/system/redis.service
COPY mnt.service  /usr/lib/systemd/system/mnt.service
RUN  ldconfig &&  yum install libudisks2-devel sudo -y && ln -s /usr/lib/systemd/system/redis6378.service /etc/systemd/system/multi-user.target.wants/redis6378.service && ln -s /usr/lib/systemd/system/redis.service /etc/systemd/system/multi-user.target.wants/redis.service && ln -s /usr/lib/systemd/system/mnt.service /etc/systemd/system/multi-user.target.wants/mnt.service 
WORKDIR  /home/cdatc/AirNet/bin/
ENTRYPOINT ["/bin/sh","-c"," /usr/sbin/init "]
  • initContainers: re-run smcrouted so the multicast routes take effect; later, add the sshpass package to airnet:v1.0 and restart smcrouted via postStart
initContainers: 
      - image: ictu/sshpass:latest  
        name: sshpass
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 0  
          privileged: true              
        command: ["/bin/sh","-c","--"]
        args: 
        - |
          sshpass -p 123456 ssh -o StrictHostKeyChecking=no root@$MY_NODE_IP "pkill smcrouted; sleep 2; smcrouted "   
        env:
        - name: MY_NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
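The follow-up idea in the bullet above (sshpass baked into airnet:v1.0, smcrouted restarted via postStart instead of an initContainer) might look like this; an untested sketch reusing the same MY_NODE_IP field ref:

```yaml
containers:
- name: sdfp
  image: airnet:v1.0            # assumes sshpass has been added to this image
  lifecycle:
    postStart:
      exec:
        command:
        - /bin/sh
        - -c
        - sshpass -p 123456 ssh -o StrictHostKeyChecking=no root@$MY_NODE_IP "pkill smcrouted; sleep 2; smcrouted"
  env:
  - name: MY_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```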

7. The StatefulSet must specify which service manages its DNS, e.g. serviceName: "msdp"; otherwise nslookup cannot resolve the sts pod IPs (it can still resolve the SVC IP). The StatefulSet itself needs no spec.ports. The Service reports spec.ports: Required value, but not every port has to be listed; it will run that way, though the application is affected: ports not listed cannot be reached via clusterIP:port.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: msdp
spec:
  serviceName: "msdp"
#  kv exec -ti tool-97f9cfd66-4j9b2 -- nslookup  fdp-2.fdp.vm-airnet.svc.cluster.local
  • (pod) StatefulSet for stateful applications: configure a headless Service (clusterIP: None) so that each Pod gets a unique network identity (hostname); note spec.ports: Required value
apiVersion: v1
kind: Service
metadata:
  name: msdp
  namespace: vm-airnet  
  labels:
    name: msdp
spec:
  type: ClusterIP
  clusterIP: None  
  selector:
    name: msdp
  ports:   
  - name: mnt
    port: 46006
    protocol: TCP   
#  kv exec -ti tool-97f9cfd66-4j9b2 -- nslookup  msdp-1.msdp.vm-airnet.svc.cluster.local
Server:         10.16.0.10
Address:        10.16.0.10#53
Name:   msdp-1.msdp.vm-airnet.svc.cluster.local
Address: 172.29.115.138
  • (workloadentry) for a Service whose selector matches a VM's WorkloadEntry, the Service cannot be a headless Service (clusterIP: None): the domain name resolves only to the ClusterIP, never to the VM's IP
apiVersion: v1
kind: Service
metadata:
  name: fdp2-istio
  labels:
    app: airnet-fdp2
  namespace: vm-airnet
spec:
  selector:
    app: airnet-fdp2
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080    
  type: ClusterIP
  • Why a Service must configure spec.ports: port is the port exposed on the cluster IP and is the entry point for in-cluster clients accessing the service, i.e. clusterIP:port. Tested OK from a VM joined to the mesh. With a headless Service, podIP:containerPort is used instead; the port and targetPort values are not used and spec.ports can be absent; also tested OK on the VM.
  • With type ClusterIP, the error The Service "helloworld" is invalid: spec.ports: Required value appears; spec.ports must be defined;
  • With a headless Service, spec.ports may be absent. Testing turned up an odd behavior:
  • Without spec.ports, the test from the VM fails! Testing from a Pod inside the cluster works.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: helloworld
    service: helloworld
  name: helloworld
  namespace: vm-airnet
spec:
  selector:
    app: helloworld
  type: ClusterIP
  clusterIP: None
[root@FDP2-vm ~]# curl helloworld:5000/hello
curl: (6) Could not resolve host: helloworld; Unknown error
---> testing from a Pod inside the cluster works
[root@k8s-master01 sts-ok]# kv exec -ti tool-97f9cfd66-4j9b2 -- curl helloworld:5000/hello
Hello version: v1, instance: helloworld-v1-7cb486975f-hqkdh
  • With spec.ports present, any port value works and targetPort can be commented out; the VM test passes! Tests from a Pod inside the cluster all pass.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: helloworld
    service: helloworld
  name: helloworld
  namespace: vm-airnet
spec:
  ports:
  - name: any
    port: 1234
    protocol: TCP
    targetPort: 5678
  selector:
    app: helloworld
  type: ClusterIP
  clusterIP: None
[root@FDP2-vm ~]# curl helloworld:5000/hello
Hello version: v1, instance: helloworld-v1-7cb486975f-hqkdh  
---> testing from a Pod inside the cluster works
[root@k8s-master01 sts-ok]# kv exec -ti tool-97f9cfd66-4j9b2 -- curl helloworld:5000/hello
Hello version: v1, instance: helloworld-v1-7cb486975f-hqkdh
  • containerPort: normally, as long as the pod's image starts a service on a port, pod-to-pod access already works and traffic can already leave the pod. The containerPort field mainly documents the port (tells the cluster what is open) and binds it to the cluster's service or to the host machine, so traffic knows how to enter the pod.
  • Accessing clusterIP:port through a service requires defining which targetPort the port binds to (matching the port EXPOSEd in the image's Dockerfile) <---> containerPort (the port inside the container);
  • With a headless Service, clusterIP is not used; the podIP is used directly, so the port need not be defined in the service: the containerPort exposed by the pod's containers (defined in the pod controller) is used directly. Below, spec.ports.port is set to an arbitrary value such as 7000, and access on port 5000 still tests OK!
#  kv exec -ti tool-97f9cfd66-4j9b2 -- curl helloworld:5000/hello
Hello version: v2, instance: helloworld-v2-7dc5dcdbb9-4rzt9
# kv edit svc helloworld
apiVersion: v1
kind: Service
metadata:
  labels:
    app: helloworld
    service: helloworld
  name: helloworld
  namespace: vm-airnet
spec:
  ports:
  - name: http
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: helloworld
  type: ClusterIP
  clusterIP: None
---> containerPort is optional, since on the internal network all ports are reachable
apiVersion: apps/v1
kind: Deployment  
spec:
  template:
    spec:
      containers:
      - image: docker.io/istio/examples-helloworld-v1
        ports:
        - containerPort: 5000
          protocol: TCP
// with a headless Service, set spec.ports.port to an arbitrary value such as 7000; access on port 5000 still tests OK!
[root@FDP2-vm ~]#  curl    helloworld:5000/hello   
Hello version: v1, instance: helloworld-v1-7cb486975f-hqkdh