After Kerberos authentication is enabled on two Hadoop clusters, the clusters can no longer access each other. Cross-realm trust between the two Kerberos realms is required, so that a client of cluster A (hera) can access the services of cluster B (yoga). In essence, a ticket obtained in Kerberos Realm A is used to access services in Realm B.
Prerequisites:
1) Both clusters, hera.com and yoga.com, have Kerberos authentication enabled.
2) The Kerberos REALMs are set to hera.com and yoga.com respectively.
The steps are as follows:
To establish cross-realm trust between hera.com and yoga.com (for example, letting a hera.com client access services in yoga.com), both REALMs must share a principal named krbtgt/yoga.com@hera.com, and the two entries must have the same password, key version number (kvno), and encryption types. Trust is one-way by default: for a yoga.com client to access hera.com services, both REALMs also need the principal krbtgt/hera.com@yoga.com.
Add the krbtgt principals to both clusters:
```shell
# hera CLUSTER
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/hera.com@yoga.com
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/yoga.com@hera.com

# yoga CLUSTER
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/hera.com@yoga.com
kadmin.local: addprinc -e "aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal" krbtgt/yoga.com@hera.com
```
To verify that the two entries have matching kvno and encryption types, inspect them with getprinc:
```shell
kadmin.local: getprinc krbtgt/yoga.com@hera.com
Principal: krbtgt/yoga.com@hera.com
Expiration date: [never]
Last password change: Wed Jul 05 14:18:11 CST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 30 days 00:00:00
Last modified: Wed Jul 05 14:18:11 CST 2017 (admin/admin@yoga.com)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]

kadmin.local: getprinc krbtgt/hera.com@yoga.com
Principal: krbtgt/hera.com@yoga.com
Expiration date: [never]
Last password change: Wed Jul 05 14:17:47 CST 2017
Password expiration date: [none]
Maximum ticket life: 1 day 00:00:00
Maximum renewable life: 30 days 00:00:00
Last modified: Wed Jul 05 14:17:47 CST 2017 (admin/admin@yoga.com)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 7
Key: vno 1, aes128-cts-hmac-sha1-96
Key: vno 1, des3-cbc-sha1
Key: vno 1, arcfour-hmac
Key: vno 1, camellia256-cts-cmac
Key: vno 1, camellia128-cts-cmac
Key: vno 1, des-hmac-sha1
Key: vno 1, des-cbc-md5
MKey: vno 1
Attributes:
Policy: [none]
```

Set the hadoop.security.auth_to_local parameter, which maps Kerberos principals to local user names. Note that a SASL RPC client requires the remote server's Kerberos principal to match the principal configured on the client side, so the same principal name must be assigned to the corresponding services in the source and destination clusters. For example, if the NameNode in the source cluster has the Kerberos principal name nn/h***@yoga.com, then the NameNode principal in the destination cluster must be set to nn/h***@hera.com (it cannot be nn2/h***@hera.com). For example:
Add the following to core-site.xml in both the yoga cluster and the hera cluster:
```xml
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](^.*@yoga\.com$)s/^(.*)@yoga\.com$/$1/g
    RULE:[2:$1@$0](^.*@yoga\.com$)s/^(.*)@yoga\.com$/$1/g
    RULE:[1:$1@$0](^.*@hera\.com$)s/^(.*)@hera\.com$/$1/g
    RULE:[2:$1@$0](^.*@hera\.com$)s/^(.*)@hera\.com$/$1/g
    DEFAULT
  </value>
</property>
```
Use hadoop org.apache.hadoop.security.HadoopKerberosName to verify the mapping, for example:
```shell
[root@node1a141 ~]# hadoop org.apache.hadoop.security.HadoopKerberosName hdfs/node1@yoga.com
Name: hdfs/node1@yoga.com to hdfs
```
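The substitution part of each RULE is a sed-style `s/.../.../g` expression. A minimal sketch of what the first yoga.com rule does to a principal, reproduced with plain `sed` purely for illustration:

```shell
# Illustrative only: apply the same regex substitution the auth_to_local
# RULE uses, stripping the @yoga.com realm suffix from a principal.
principal="hdfs@yoga.com"
shortname=$(echo "$principal" | sed -E 's/^(.*)@yoga\.com$/\1/')
echo "$principal -> $shortname"   # prints: hdfs@yoga.com -> hdfs
```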
3 Configure the trust relationship in krb5.conf
3.1 Configure capaths
There are two approaches. The first is a shared hierarchy of names, which is the default and the simpler option; the second is to define capaths in the krb5.conf file, which is more complex but more flexible. The second approach is used here.
On the nodes of both clusters, configure the domain-to-realm mapping in /etc/krb5.conf. For example, in the yoga cluster configure:
```ini
[capaths]
yoga.com = {
    hera.com = .
}
```
In the hera cluster configure:
```ini
[capaths]
hera.com = {
    yoga.com = .
}
```
Setting the value to '.' means there are no intermediate realms.
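After editing, it is easy to sanity-check that the stanza landed in the file as intended. A minimal sketch (the sample stanza is written to a temp file here purely for illustration; on a real node you would grep /etc/krb5.conf):

```shell
# Illustrative only: write a sample capaths stanza to a temp file and
# confirm the direct ('.') trust path from yoga.com to hera.com is present.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[capaths]
yoga.com = {
    hera.com = .
}
EOF
if grep -q 'hera\.com = \.' "$conf"; then result="direct"; else result="missing"; fi
rm -f "$conf"
echo "$result"   # prints: direct
```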
3.2 Configure realms
So that yoga can reach hera's KDC, configure hera's KDC server in the yoga cluster as follows, and do the same in the opposite direction:
```ini
[realms]
yoga.com = {
    kdc = {host}.yoga.com:88
    admin_server = {host}.yoga.com:749
    default_domain = yoga.com
}
hera.com = {
    kdc = {host}.hera.com:88
    admin_server = {host}.hera.com:749
    default_domain = hera.com
}
```
3.3 Configure domain_realm
In domain_realm, entries are generally of the form '.yoga.com' and 'yoga.com'; the '.' prefix ensures that Kerberos maps every host under the yoga.com domain to the yoga.com realm. If a cluster's host names do not end with that suffix, however, the host-to-realm mapping must be listed explicitly in domain_realm. For example, to map the host yoga.nn.local to the yoga.com realm, add the entry yoga.nn.local = yoga.com.
```ini
[domain_realm]
dc07-daily-bigdata-yoga-cdh-bj01host-748167.yoga.com = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748168.yoga.com = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748170.yoga.com = yoga.com
dc07-daily-bigdata-yoga-cdh-bj01host-748171.yoga.com = yoga.com
dc05-prod-bigdata-apollo-bj01host-614146.yoga.com = hera.com
dc05-prod-bigdata-apollo-bj01host-614147.yoga.com = hera.com
idc05-guoyu-hbase-22172.yoga.com = hera.com
idc05-shunyi-bigdata-0333.yoga.com = hera.com
idc05-shunyi-bigdata-0393.yoga.com = hera.com
idc05-shunyi-bigdata-0502.yoga.com = hera.com
idc05-shunyi-bigdata-0503.yoga.com = hera.com
idc07-prod-guoyu-101614145.yoga.com = hera.com
idc07-prod-guoyu-101620135.yoga.com = hera.com
idc07-prod-guoyu-101622148.yoga.com = hera.com
```
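The resolution logic Kerberos applies here can be sketched as a match against explicit host entries first, then domain-suffix rules. A minimal shell illustration (the host names and realms are just the examples above, not a real resolver):

```shell
# Illustrative only: emulate domain_realm resolution for a host name.
# An explicit host entry wins; otherwise a '.suffix' rule applies.
resolve_realm() {
  case "$1" in
    yoga.nn.local) echo "yoga.com" ;;  # explicit host -> realm entry
    *.hera.com)    echo "hera.com" ;;  # '.hera.com' suffix rule
    *.yoga.com)    echo "yoga.com" ;;  # '.yoga.com' suffix rule
    *)             echo "UNKNOWN" ;;
  esac
}
resolve_realm yoga.nn.local   # prints: yoga.com
resolve_realm nn1.hera.com    # prints: hera.com
```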
Restart the Kerberos services.
3.4 Configure hdfs-site.xml
In hdfs-site.xml, configure the allowed realms by setting dfs.namenode.kerberos.principal.pattern to "*".
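The corresponding hdfs-site.xml entry:

```xml
<property>
  <name>dfs.namenode.kerberos.principal.pattern</name>
  <value>*</value>
</property>
```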

This is a client-side matching rule that controls which authentication realms are accepted. If this parameter is not set, the following exception occurs:
```
java.io.IOException: Failed on local exception: java.io.IOException:
java.lang.IllegalArgumentException: Server has invalid Kerberos principal:
nn/ hera.com@yoga.com; Host Details : local host is: "host1.yoga.com/10.181.22.130";
destination host is: "":8020;
```
1) Use hdfs commands to test data access between the yoga and hera clusters
For example, in the yoga cluster run kinit admin@yoga.com, then use hdfs commands to list the HDFS directories of both the local cluster and the remote cluster.
If cross-realm trust is not enabled, accessing the remote cluster's HDFS directory fails with an authentication error.
```shell
[root@node1a141 ~]# kdestroy
# Log in as the admin user on the local cluster and authenticate via Kerberos
[root@node1a141 ~]# kinit admin
Password for admin@yoga.com:
# List the local cluster's HDFS
[root@node1a141 ~]# hdfs dfs -ls /
Found 11 items
drwxrwxrwt   - yarn   hadoop          0 2021-03-08 15:26 /app-logs
drwxr-xr-x   - yarn   hadoop          0 2021-03-03 20:28 /ats
drwxr-xr-x   - hdfs   hdfs            0 2021-03-08 19:16 /atsv2
drwxr-xr-x   - root   hdfs            0 2021-03-17 17:12 /benchmarks
drwxr-xr-x   - hdfs   hdfs            0 2021-03-03 20:28 /hdp
drwxr-xr-x   - mapred hdfs            0 2021-03-03 20:30 /mapred
drwxrwxrwx   - mapred hadoop          0 2021-03-03 20:30 /mr-history
drwxr-xr-x   - hdfs   hdfs            0 2021-03-03 20:24 /services
drwxr-xr-x   - hdfs   hdfs            0 2021-03-18 15:00 /test
drwxrwxrwx   - hdfs   hdfs            0 2021-03-15 14:12 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2021-03-04 14:42 /user
# List the remote cluster's HDFS
[14:19:40 root@idc05-shunyi-bigdata-0393 /root]# hdfs dfs -ls hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/
Found 8 items
drwxrwxr-x+  - noops supergroup       0 2020-11-18 22:35 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/backup
drwxrwxr-x+  - hdfs  supergroup       0 2020-12-08 17:50 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/benchmarks
drwxrwxrwx+  - mars  supergroup       0 2020-07-10 10:41 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/data
-rw-r-xr--+  1 mars  supergroup    1550 2020-11-18 17:35 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/derby.log
drwxrwxr-x+  - hdfs  supergroup       0 2020-08-24 20:26 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/system
drwxrwxr-x+  - noops supergroup       0 2020-11-12 08:39 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/temp
drwxrwxrwt+  - hdfs  supergroup       0 2021-03-04 11:16 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/tmp
dr-xrwxrwx+  - hdfs  supergroup       0 2021-03-19 00:54 hdfs://dc07-daily-bigdata-yoga-cdh-bj01host-748169.yoga.com:8020/user
```
















