"", "apps", "autoscaling", "batch"
• 2. Configurable values for resources
"services", "endpoints", "pods", "secrets", "configmaps", "crontabs", "deployments", "jobs", "nodes", "rolebindings", "clusterroles", "daemonsets", "replicasets", "statefulsets", "horizontalpodautoscalers", "replicationcontrollers", "cronjobs"
• 3. Configurable values for verbs
"get", "list", "watch", "create", "update", "patch", "delete", "exec"
• 4. Mapping between apiGroups and resources
- apiGroups: [""] # the empty string "" denotes the core API group
  resources: ["pods", "pods/log", "pods/exec", "pods/attach", "pods/status", "events", "replicationcontrollers", "services", "configmaps", "persistentvolumeclaims"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
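The mapping above drives how RBAC evaluates a request: a rule matches only when the apiGroup, the resource, and the verb all match. A minimal sketch of that evaluation logic (a simplified model with illustrative names, not the actual kube-apiserver code):

```python
# Simplified model of RBAC rule matching: a request is allowed when any
# rule matches its apiGroup, resource, and verb. "*" acts as a wildcard.
def is_allowed(rules, api_group, resource, verb):
    for rule in rules:
        if (api_group in rule["apiGroups"] or "*" in rule["apiGroups"]) \
           and (resource in rule["resources"] or "*" in rule["resources"]) \
           and (verb in rule["verbs"] or "*" in rule["verbs"]):
            return True
    return False

rules = [
    {"apiGroups": [""], "resources": ["pods", "services"], "verbs": ["get", "list"]},
    {"apiGroups": ["apps"], "resources": ["deployments"], "verbs": ["get", "list", "watch"]},
]

print(is_allowed(rules, "", "pods", "get"))                # True: core-group rule matches
print(is_allowed(rules, "apps", "deployments", "delete"))  # False: delete was not granted
print(is_allowed(rules, "", "deployments", "get"))         # False: deployments lives in "apps", not core
```

This is why getting the apiGroups/resources pairing right matters: a resource listed under the wrong group simply never matches.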

Role test notes

Creating a Role
  • Besides the method I use below, you can also write the Role as a manifest file:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # Restrict access to the minio namespace
  namespace: minio
  # Role name
  name: role-minio-service-minio
# Controls whether the namespace panels in the dashboard are visible
rules:
- apiGroups: [""] # the empty string "" denotes the core API group
  #resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status", "events", "replicationcontrollers", "services", "configmaps", "persistentvolumeclaims"]
  resources: ["namespaces","pods","pods/log","pods/exec", "pods/attach", "pods/status","services"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets","replicasets"]
  verbs: ["get", "list", "watch"]
• Generating the Role manifest
We can generate the YAML file directly with the command below; any later changes can then be made by editing the YAML file.
If you need to change parameters in the generated YAML, see the Role parameter notes above, which list the allowed values in detail.
[root@master ~]# kubectl create role role1 --verb=get,list --resource=pods --dry-run=client -o yaml > role1.yaml
[root@master ~]# cat role1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  # To grant several verbs at once, list them like get and list below.
  verbs:
  - get
  - list
[root@master ~]#

For example, let's add the create verb and the nodes resource:

[root@master ~]# mv role1.yaml sefe/
[root@master ~]# cd sefe/
[root@master sefe]# vi role1.yaml
[root@master sefe]# cat role1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - create
[root@master sefe]#
• Creating the Role
After later edits you can simply apply the file again and it overwrites the previous permissions; there is no need to delete and recreate.
[root@master ~]# kubectl apply -f role1.yaml
role.rbac.authorization.k8s.io/role1 created
[root@master ~]#
[root@master ~]# kubectl get role
NAME    CREATED AT
role1   2021-11-05T07:34:45Z
[root@master ~]#
• Viewing the details
You can see the permissions this Role now has:
[root@master sefe]# kubectl describe role role1
Name:         role1
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  jobs       []                 []              [get list create]
  nodes      []                 []              [get list create]
  pods       []                 []              [get list create]
[root@master sefe]#
Creating a RoleBinding [binding a user]
• Note: users do not belong to any namespace.
• Creating a RoleBinding requires an existing Role and the corresponding user name.
# rbind1 below is a user-chosen name
# --role= specifies a Role
# --user= specifies which user to grant to
[root@master ~]# kubectl create rolebinding rbind1 --role=role1 --user=ccx
rolebinding.rbac.authorization.k8s.io/rbind1 created
[root@master ~]#
[root@master ~]# kubectl get rolebindings.rbac.authorization.k8s.io
NAME     ROLE         AGE
rbind1   Role/role1   5s
[root@master ~]#
[root@master sefe]# kubectl describe rolebindings.rbac.authorization.k8s.io rbind1
Name:         rbind1
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  Role
  Name:  role1
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  ccx
[root@master sefe]#
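The same binding can also be written as a manifest. A sketch of what the command above produces (field values taken from the command; kubectl fills in the apiGroup fields automatically):

```yaml
# Equivalent manifest for: kubectl create rolebinding rbind1 --role=role1 --user=ccx
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbind1
subjects:
- kind: User
  name: ccx
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: role1
  apiGroup: rbac.authorization.k8s.io
```

Note that roleRef is immutable after creation; to point the binding at a different Role you must delete and recreate it.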
• Creating a binding between a ServiceAccount and a Role
I did this with the command form above; this is other material I found online that does it with a manifest file, if you want to try that approach:
[root@app01 k8s-user]# vim role-bind-minio-service-minio.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  #namespace: minio
  name: role-bind-minio-service-monio # user-chosen name
subjects:
- kind: ServiceAccount
  #namespace: minio
  name: username # which account to grant to
roleRef:
  kind: Role
  # role name
  name: rolename # name of the Role
  apiGroup: rbac.authorization.k8s.io
[root@app01 k8s-user]# kubectl apply -f role-bind-minio-service-minio.yaml
rolebinding.rbac.authorization.k8s.io/role-bind-minio-service-monio created
[root@app01 k8s-user]# kubectl get rolebinding -n minio -owide
NAME                            AGE   ROLE                            USERS   GROUPS   SERVICEACCOUNTS
role-bind-minio-service-monio   29s   Role/role-minio-service-minio                    minio/service-minio
[root@app01 k8s-user]#
Testing
  • Note: I have already granted the ccx user and set up its kubeconfig [see the steps in the kubeconfig verification article], so here I test directly from a host outside the cluster [that host was fully configured for this cluster in the kubeconfig article, so I can use it as-is].
  • Query test

We granted the ccx user pod permissions above, but using it directly still fails:

[root@master2 ~]# kubectl --kubeconfig=kc1 get pods
Error from server (Forbidden): pods is forbidden: User "ccx" cannot list resource "pods" in API group "" in the namespace "default"
# As explained earlier, a Role is namespace-scoped [ours is in the safe namespace]

# So we need to specify the namespace

[root@master2 ~]# kubectl --kubeconfig=kc1 get pods -n safe
No resources found in safe namespace.
[root@master2 ~]#

Of course, anyone with the kc1 config file can run this from anywhere [here, on the cluster master]:

[root@master sefe]# ls
ca.crt  ccx.crt  ccx.csr  ccx.key  csr.yaml  kc1  role1.yaml
[root@master sefe]# kubectl --kubeconfig=kc1 get pods -n safe
No resources found in safe namespace.
[root@master sefe]#
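The Forbidden error above falls out of the namespace scoping of Roles: a RoleBinding grants its Role's rules only inside the namespace that holds them. A small model of that check (simplified and illustrative, not the real apiserver logic):

```python
# A RoleBinding grants a Role's rules only within its own namespace.
# Model a binding as (user, namespace, allowed verbs on pods).
bindings = [{"user": "ccx", "namespace": "safe", "verbs": {"get", "list", "create"}}]

def can(user, namespace, verb):
    # Allowed only if some binding matches the user, the namespace, and the verb.
    return any(b["user"] == user and b["namespace"] == namespace and verb in b["verbs"]
               for b in bindings)

print(can("ccx", "safe", "list"))     # True: the binding lives in "safe"
print(can("ccx", "default", "list"))  # False: same user, wrong namespace
```

This matches the transcript: `get pods` in default is Forbidden, while `get pods -n safe` succeeds.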
• Pod creation test
To make the effect easier to see, I again work from the host outside the cluster.
# copy a pod manifest over
[root@master sefe]# scp ../pod1.yaml 192.168.59.151:~
root@192.168.59.151's password:
pod1.yaml                                     100%  431   424.6KB/s   00:00
[root@master sefe]#

Back on the test host, creating a pod succeeds:

[root@master2 ~]# kubectl --kubeconfig=kc1 get nodes -n safe
Error from server (Forbidden): nodes is forbidden: User "ccx" cannot list resource "nodes" in API group "" at the cluster scope
[root@master2 ~]# kubectl --kubeconfig=kc1 get pods -n safe
No resources found in safe namespace.
[root@master2 ~]#
[root@master2 ~]# export KUBECONFIG=kc1
[root@master2 ~]#
[root@master2 ~]# kubectl apply -f pod1.yaml -n safe
pod/pod1 created
[root@master2 ~]#
[root@master2 ~]# kubectl get pods -n safe
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          8s
[root@master2 ~]#

If you don't specify a namespace, the pod is created in the default namespace, where the image is not available, so its status stays Pending:

[root@master2 ~]# kubectl apply -f pod1.yaml
pod/pod1 created
[root@master2 ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   0/1     Pending   0          3s
[root@master2 ~]#

Now back on the cluster master, you can see the pod was created successfully:

[root@master sefe]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          32s
[root@master sefe]#
• Pod deletion test
Deletion fails, because we have not granted the delete verb:
[root@master2 ~]# kubectl delete -f pod1.yaml -n safe
Error from server (Forbidden): error when deleting "pod1.yaml": pods "pod1" is forbidden: User "ccx" cannot delete resource "pods" in API group "" in the namespace "safe"
[root@master2 ~]#

No problem: we just grant the delete verb.

# on the cluster master node
[root@master sefe]# cat role1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - pods
  - jobs
  verbs:
  - get
  - list
  - create
  - delete
[root@master sefe]# kubectl apply -f role1.yaml
role.rbac.authorization.k8s.io/role1 configured
[root@master sefe]#
# on the test node
[root@master2 ~]# kubectl delete -f pod1.yaml -n safe
pod "pod1" deleted
[root@master2 ~]#
[root@master2 ~]# kubectl get pods -n safe
No resources found in safe namespace.
[root@master2 ~]#
Handling the "Error from server (Forbidden)" error
• The error looks like this:
[root@master2 ~]# kubectl --kubeconfig=kc1 get pods -n safe
Error from server (Forbidden): pods is forbidden: User "ccx" cannot list resource "pods" in API group "" in the namespace "safe"
[root@master2 ~]#

  • In this case the problem is not the Role manifest but the authorization of the ccx user in kc1. To troubleshoot, go back to the kubeconfig article and walk through it step by step [focus on the CSR and authorization steps].


  • Summary: the kubeconfig authorization above probably has nothing to do with this problem. If the user was granted access with the earlier kubeconfig method, the Role seems to have no effect [Role is itself the authorization mechanism; that method granted admin outright, which overrides the Role], so the fix I described above is probably wrong [my instructor never corrected it either]. If you hit a situation where some permissions work and others don't, troubleshoot the Role configuration itself instead of taking my approach above. [I believe the Role configuration steps themselves are correct; most likely my cluster was just in a messy state from too much experimenting. I'll redo the Role experiments on a fresh cluster when I get the chance.]
Granting resources separately in a Role
  • Each apiGroups entry is an independent rule: one rule defines one set of permissions. So if we want different permissions for different components, we just add as many rules as needed.
  • For example, to grant pod and deployment permissions separately, we can write it like this [two apiGroups rules]:

With this rule set you can create deployments and manage their replica counts.

[root@master sefe]# cat role2.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: role1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - create
  - delete
- apiGroups:
  - apps
  resources:
  - deployments
  - deployments/scale
  verbs:
  - get
  - list
  - create
  - delete
  - patch
[root@master sefe]#
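Because each rule is evaluated independently, a verb granted in one rule does not leak into the other. A quick check with a simplified matcher (an illustrative model, not apiserver code):

```python
# Each rule is independent: patch is granted on deployments (apps group)
# but not on pods (core group), even though both rules sit in the same Role.
rules = [
    {"apiGroups": [""],     "resources": ["pods"],
     "verbs": ["get", "list", "create", "delete"]},
    {"apiGroups": ["apps"], "resources": ["deployments", "deployments/scale"],
     "verbs": ["get", "list", "create", "delete", "patch"]},
]

def is_allowed(api_group, resource, verb):
    return any(api_group in r["apiGroups"] and resource in r["resources"]
               and verb in r["verbs"] for r in rules)

print(is_allowed("apps", "deployments/scale", "patch"))  # True: the scale subresource rule
print(is_allowed("", "pods", "patch"))                   # False: patch appears only in the apps rule
```

Note also that subresources like deployments/scale must be listed explicitly; granting deployments alone does not include them.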
• That's it for Role usage; feel free to test more on your own.
Now we delete these two objects and move on to testing ClusterRole:
[root@master sefe]# kubectl delete -f role1.yaml
role.rbac.authorization.k8s.io "role1" deleted
[root@master sefe]# kubectl delete rolebindings.rbac.authorization.k8s.io rbind1
rolebinding.rbac.authorization.k8s.io "rbind1" deleted
[root@master sefe]#
ClusterRole test notes
Creating a ClusterRole
• Besides the method I use below, you can also write it as a manifest:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # ClusterRole is a cluster-scoped object, so no "namespace" field is needed here
  # Role name
  name: cluster-role-paas-basic-service-minio
# Controls whether the namespace panels in the dashboard are visible
rules:
- apiGroups: ["rbac.authorization.k8s.io",""] # the empty string "" denotes the core API group
  #resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status", "events", "replicationcontrollers", "services", "configmaps", "persistentvolumeclaims"]
  resources: ["pods","pods/log","pods/exec", "pods/attach", "pods/status","services"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["namespaces","deployments", "daemonsets", "statefulsets"]
  verbs: ["get", "list", "watch"]
• Note: this is configured exactly the same way as a Role; the only difference is that kind in the YAML changes from Role to ClusterRole.
• Generating the ClusterRole manifest
We can generate the YAML file directly with the command below; any later changes can then be made by editing the YAML file.
If you need to change parameters in the generated YAML, see the ClusterRole parameter notes above, which list the allowed values in detail.
[root@master sefe]# kubectl create clusterrole crole1 --verb=get,create,delete --resource=deploy,pod,svc --dry-run=client -o yaml > crole1.yaml
[root@master sefe]# cat crole1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: crole1
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - create
  - delete
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - create
  - delete
[root@master sefe]#
• Creating the ClusterRole
After later edits you can simply apply the file again and it overwrites the previous permissions; there is no need to delete and recreate.
[root@master sefe]# kubectl apply -f crole1.yaml
clusterrole.rbac.authorization.k8s.io/crole1 created
[root@master sefe]#
[root@master sefe]# kubectl get clusterrole crole1
NAME     CREATED AT
crole1   2021-11-05T10:09:04Z
[root@master sefe]#
• Viewing the details
[root@master sefe]# kubectl describe clusterrole crole1
Name:         crole1
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  pods              []                 []              [get create delete]
  services          []                 []              [get create delete]
  deployments.apps  []                 []              [get create delete]
Creating a ClusterRoleBinding [binding a user]
• Note: this takes effect in all namespaces.
• Creating a ClusterRoleBinding requires an existing ClusterRole and the corresponding user name.
# cbind1 below is a user-chosen name
# --clusterrole= specifies a ClusterRole
# --user= specifies which user to grant to
[root@master ~]#
[root@master sefe]# kubectl create clusterrolebinding cbind1 --clusterrole=crole1 --user=ccx
clusterrolebinding.rbac.authorization.k8s.io/cbind1 created
[root@master sefe]#
[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io cbind1
NAME     ROLE                 AGE
cbind1   ClusterRole/crole1   16s
[root@master sefe]#
As mentioned, this applies to all namespaces, so whichever namespace we query, the binding shows up [ClusterRoleBindings are cluster-scoped, so the -n flag is simply ignored here]:
[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io -n default cbind1
NAME     ROLE                 AGE
cbind1   ClusterRole/crole1   28s
[root@master sefe]#
[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io -n ds cbind1
NAME     ROLE                 AGE
cbind1   ClusterRole/crole1   35s
[root@master sefe]#
• Viewing the details
[root@master sefe]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io cbind1
Name:         cbind1
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  crole1
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  ccx
[root@master sefe]#
• Besides a custom ClusterRole like the one above, you can also grant the user admin rights directly:
[root@master sefe]# kubectl create clusterrolebinding cbind2 --clusterrole=cluster-admin --user=ccx
clusterrolebinding.rbac.authorization.k8s.io/cbind2 created
[root@master sefe]#
[root@master sefe]# kubectl get clusterrolebindings.rbac.authorization.k8s.io cbind2
NAME     ROLE                        AGE
cbind2   ClusterRole/cluster-admin   23s
[root@master sefe]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io cbind2
Name:         cbind2
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  ccx
[root@master sefe]#
• Creating a binding between a ServiceAccount and a ClusterRole
I did this with the command form above; this is other material I found online that does it with a manifest file, if you want to try that approach:
[root@app01 k8s-user]# vim cluster-role-bind-paas-basic-service-minio.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-role-bind-paas-basic-service-monio # user-chosen name
subjects:
- kind: ServiceAccount
  namespace: minio
  name: username # which account to grant to
roleRef:
  kind: ClusterRole
  # role name
  name: cluster-role # name of the ClusterRole
  apiGroup: rbac.authorization.k8s.io
[root@app01 k8s-user]# kubectl apply -f cluster-role-bind-paas-basic-service-minio.yaml
rolebinding.rbac.authorization.k8s.io/cluster-role-bind-paas-basic-service-monio created
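Note that the manifest above uses kind: RoleBinding with a ClusterRole in roleRef; that grants the ClusterRole's rules only inside the binding's own namespace. To grant them cluster-wide, a ClusterRoleBinding is used instead. A sketch (the subject and role names here are placeholders):

```yaml
# Cluster-wide variant: a ClusterRoleBinding has no namespace of its own and
# grants the referenced ClusterRole in every namespace. Names are placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-role-bind-example
subjects:
- kind: ServiceAccount
  namespace: minio
  name: username
roleRef:
  kind: ClusterRole
  name: cluster-role
  apiGroup: rbac.authorization.k8s.io
```

The RoleBinding + ClusterRole combination is a common pattern for defining a permission set once and reusing it per-namespace.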
Testing
A machine inside the cluster can access resources normally:
[root@master sefe]# kubectl --kubeconfig=kc1 get pod -n default
NAME                   READY   STATUS             RESTARTS   AGE
recycler-for-pv-nfs2   0/1     ImagePullBackOff   0          75d
[root@master sefe]# kubectl --kubeconfig=kc1 get pod -n ccx
NAME                          READY   STATUS             RESTARTS   AGE
centos-7846bf67c6-gj7v4       0/1     ImagePullBackOff   0          121d
nginx-test-795d659f45-6f8t9   0/1     ImagePullBackOff   0          121d
nginx-test-795d659f45-7bbmt   0/1     ImagePullBackOff   0          121d
pod1-node2                    1/1     Running            8          101d
[root@master sefe]#
A host outside the cluster can use the permissions as well:
[root@master2 ~]# kubectl --kubeconfig=kc1 get node -n safe
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   117d   v1.21.0
node1    Ready    <none>   117d   v1.21.0
node2    Ready    <none>   117d   v1.21.0
[root@master2 ~]#
Granting resources separately in a ClusterRole
• Allow deleting pods and exec-ing into their terminals, with read-only access to other resources:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-10-29T14:21:54Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: uki-view
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  verbs:
  - create
  - delete
  - deletecollection
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - pods
  - replicationcontrollers
  - replicationcontrollers/scale
  - serviceaccounts
  - services
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - controllerrevisions
  - daemonsets
  - deployments
  - deployments/scale
  - replicasets
  - replicasets/scale
  - statefulsets
  - statefulsets/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
  - patch
  - update
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - deployments/scale
  - ingresses
  - networkpolicies
  - replicasets
  - replicasets/scale
  - replicationcontrollers/scale
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - list
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - watch
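As an aside, the last three rules above each grant one verb on ingresses; since verbs within a single rule combine, they could be written as one rule (an equivalent, more compact form, not part of the original manifest):

```yaml
# Equivalent to the three separate ingresses rules above
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
```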
• Full create/read/update/delete permissions on cluster resources:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-10-29T14:21:54Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: uki-namespace-all
rules:
- apiGroups:
  - ""
  resources: