After Kerberos authentication is enabled on two Hadoop clusters, the clusters can no longer access each other. Cross-realm trust between the two Kerberos realms is required so that a client of cluster A can access services in cluster B (in essence, a ticket obtained in Kerberos Realm A is used to access services in Realm B).
Prerequisites:
1) Both clusters, hera.com and yoga.com, have Kerberos authentication enabled
2) Their Kerberos REALMs are set to hera.com and yoga.com respectively
The steps are as follows:

1 Configure trust tickets between the KDCs

To establish cross-realm trust between hera.com and yoga.com, for example so that a hera.com client can access services in yoga.com, both REALMs must share a principal named krbtgt/yoga.com@hera.com, and the two keys must have the same password, key version number, and encryption types. Trust is one-way by default; for yoga.com clients to access services in hera.com, both REALMs must also hold the principal krbtgt/hera.com@yoga.com.
Add the krbtgt principals to both clusters:
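The addprinc commands below are run from the kadmin.local prompt on each cluster's KDC host; a minimal sketch of opening that prompt (the host name is only illustrative):

[root@kdc-host ~]# kadmin.local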



#hera CLUSTER
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/hera.com@yoga.com
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/yoga.com@hera.com

#yoga CLUSTER
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/hera.com@yoga.com
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/yoga.com@hera.com

To verify that the two entries have matching kvnos and encryption types, inspect them with getprinc:

 

kadmin.local:  getprinc  krbtgt/yoga.com@hera.com
Principal: krbtgt/yoga.com@hera.com
Expiration date: [never]
Last password change: Wed Jul 05 14:18:11 CST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 30 days 00:00:00
Last modified: Wed Jul 05 14:18:11 CST 2017 (admin/admin@)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
kadmin.local:  getprinc  krbtgt/hera.com@yoga.com
Principal: krbtgt/hera.com@yoga.com
Expiration date: [never]
Last password change: Wed Jul 05 14:17:47 CST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 30 days 00:00:00
Last modified: Wed Jul 05 14:17:47 CST 2017 (admin/admin@)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
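As an extra sanity check after the principals are created, a client in one realm should be able to obtain a cross-realm ticket from its own KDC; a sketch from a yoga.com client, assuming the admin principal used later in this article:

[root@node1a141 ~]# kinit admin@yoga.com
Password for admin@yoga.com:
[root@node1a141 ~]# kvno krbtgt/hera.com@yoga.com
krbtgt/hera.com@yoga.com: kvno = 1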
2 Configure principal-to-user mapping RULES in core-site


Set the hadoop.security.auth_to_local parameter, which maps a principal to a local user name. One point to note is that the SASL RPC client requires the remote server's Kerberos principal to match the principal in the client's own configuration. The same principal name must therefore be assigned to the corresponding services in the source and destination clusters; for example, if the NameNode in the source cluster has the Kerberos principal name nn/h@yoga.com, the NameNode principal in the destination cluster must be set to nn/h@hera.com (it cannot be set to nn2/h***@hera.com).
Add the following to core-site.xml in both the yoga cluster and the hera cluster:

 

<property>
<name>hadoop.security.auth_to_local</name>
<value>
RULE:[1:$1@$0](^.*@yoga\.com$)s/^(.*)@yoga\.com$/$1/g
RULE:[2:$1@$0](^.*@yoga\.com$)s/^(.*)@yoga\.com$/$1/g
RULE:[1:$1@$0](^.*@hera\.com$)s/^(.*)@hera\.com$/$1/g
RULE:[2:$1@$0](^.*@hera\.com$)s/^(.*)@hera\.com$/$1/g
DEFAULT          
</value>
</property>

 

Use hadoop org.apache.hadoop.security.HadoopKerberosName to verify the mapping, for example:

[root@node1a141 ~]#  hadoop org.apache.hadoop.security.HadoopKerberosName hdfs/node1@yoga.com
 
Name: hdfs/node1@yoga.com to hdfs
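The same check can be repeated for a principal from the other realm to confirm the hera.com rules also strip the realm (node1 is just an illustrative host name):

[root@node1a141 ~]# hadoop org.apache.hadoop.security.HadoopKerberosName hdfs/node1@hera.com
Name: hdfs/node1@hera.com to hdfs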

 

3 Configure the trust relationship in krb5.conf

 

3.1 Configure capaths

The first approach is a shared hierarchy of names, which is the default and simpler option; the second is to define capaths in the krb5.conf file, which is more involved but more flexible. The second approach is used here.
Configure the realm trust path in /etc/krb5.conf on the nodes of both clusters. For example, in the yoga cluster:

 

[capaths]
    yoga.com = {
        hera.com = .
    }

In the hera cluster:

 

[capaths]
    hera.com = {
        yoga.com = .
    }

Setting the value to '.' means there are no intermediate realms.
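If the two realms could only reach each other through a third, intermediate realm, that realm's name would appear in place of '.'. A purely hypothetical sketch (CORP.COM is an invented intermediate realm, not part of this setup):

[capaths]
    yoga.com = {
        hera.com = CORP.COM
    }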

3.2 Configure realms

So that yoga can reach hera's KDC, hera's KDC server must be configured in the yoga cluster's krb5.conf, as shown below; the same applies in reverse:

[realms]
 yoga.com = {
   kdc = {host}.yoga.com:88
   admin_server = {host}.yoga.com:749
   default_domain = yoga.com
 }
 hera.com = {
   kdc = {host}.hera.com:88
   admin_server = {host}.hera.com:749
   default_domain = hera.com
 }
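A quick way to check that a yoga node can now reach the hera KDC is to request a ticket from the remote realm directly; a sketch assuming an admin principal exists in hera.com:

[root@node1a141 ~]# kinit admin@hera.com
Password for admin@hera.com: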

3.3 Configure domain_realm

In domain_realm, the mapping is normally written in the forms '.yoga.com' and 'yoga.com'; the '.yoga.com' prefix ensures that Kerberos maps every host whose name ends in yoga.com to the yoga.com realm. If the cluster's host names do not end in that suffix, the host-to-realm mapping must be configured explicitly in domain_realm; for example, to map yoga.nn.local to yoga.com, add yoga.nn.local = yoga.com.

 

[domain_realm]
dc07-daily-bigdata-yoga-cdh-bj01host-748167. = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748168. = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748169. = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748170. = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748171. = yoga.com
dc05-prod-bigdata-apollo-bj01host-614146. = hera.com
dc05-prod-bigdata-apollo-bj01host-614147. = hera.com
idc05-guoyu-hbase-22172. = hera.com
idc05-shunyi-bigdata-0333. = hera.com
idc05-shunyi-bigdata-0393. = hera.com
idc05-shunyi-bigdata-0502. = hera.com
idc05-shunyi-bigdata-0503. = hera.com
idc07-prod-guoyu-101614145. = hera.com
idc07-prod-guoyu-101620135. = hera.com
idc07-prod-guoyu-101622148. = hera.com

Restart the Kerberos services.
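On an MIT KDC managed by systemd (an assumption; adjust to however your KDC processes are run), restarting the services would look roughly like:

# run on each KDC host
systemctl restart krb5kdc
systemctl restart kadmin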

3.4 Configure hdfs-site.xml

In hdfs-site.xml, set the allowed realms by setting dfs.namenode.kerberos.principal.pattern to "*".
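A minimal hdfs-site.xml snippet along these lines (added on the client side of both clusters):

<property>
  <name>dfs.namenode.kerberos.principal.pattern</name>
  <value>*</value>
</property>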



This is a client-side matching rule that controls which realms are allowed for authentication; if this parameter is not configured, the following exception is thrown:

java.io.IOException: Failed on local exception: java.io.IOException:
java.lang.IllegalArgumentException:
       Server has invalid Kerberos principal: nn/ hera.com@ ;
       Host Details : local host is: "host1./10.181.22.130";
                        destination host is: "":8020;
4 Testing

1) Use hdfs commands to test data access between the yoga and hera clusters.
For example, in the yoga cluster run kinit admin@yoga.com, then use hdfs commands to list the HDFS directories of the local cluster and of the remote cluster.
If cross-realm trust is not enabled, listing the remote cluster's HDFS directories fails with an authentication error.

 

[root@node1a141 ~]# kdestroy
 
Log in as the admin user on a client of the local cluster and authenticate with Kerberos
[root@node1a141 ~]# kinit admin
Password for admin@yoga.com:
 
List the local cluster's HDFS
#hdfs dfs -ls /
Found 11 items
drwxrwxrwt   - yarn   hadoop          0 2021-03-08 15:26 /app-logs
drwxr-xr-x   - yarn   hadoop          0 2021-03-03 20:28 /ats
drwxr-xr-x   - hdfs   hdfs            0 2021-03-08 19:16 /atsv2
drwxr-xr-x   - root   hdfs            0 2021-03-17 17:12 /benchmarks
drwxr-xr-x   - hdfs   hdfs            0 2021-03-03 20:28 /hdp
drwxr-xr-x   - mapred hdfs            0 2021-03-03 20:30 /mapred
drwxrwxrwx   - mapred hadoop          0 2021-03-03 20:30 /mr-history
drwxr-xr-x   - hdfs   hdfs            0 2021-03-03 20:24 /services
drwxr-xr-x   - hdfs   hdfs            0 2021-03-18 15:00 /test
drwxrwxrwx   - hdfs   hdfs            0 2021-03-15 14:12 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2021-03-04 14:42 /user
 
List the other cluster's HDFS
[14:19:40root@idc05-shunyi-bigdata-0393 /root]
#hdfs dfs -ls hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/
Found 8 items
drwxrwxr-x+  - noops supergroup          0 2020-11-18 22:35 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/backup
drwxrwxr-x+  - hdfs  supergroup          0 2020-12-08 17:50 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/benchmarks
drwxrwxrwx+  - mars  supergroup          0 2020-07-10 10:41 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/data
-rw-r-xr--+  1 mars  supergroup       1550 2020-11-18 17:35 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/derby.log
drwxrwxr-x+  - hdfs  supergroup          0 2020-08-24 20:26 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/system
drwxrwxr-x+  - noops supergroup          0 2020-11-12 08:39 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/temp
drwxrwxrwt+  - hdfs  supergroup          0 2021-03-04 11:16 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/tmp
dr-xrwxrwx+  - hdfs  supergroup          0 2021-03-19 00:54 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.:8020/user