1. Configure the KDC service

  Since the machines are on an internal network with no internet access, we install from local rpm packages. The required rpms are:

  Server: krb5-server, krb5-workstation, krb5-libs, libkadm5

  Client: krb5-workstation, krb5-libs, libkadm5

  Download from: http://mirror.centos.org/centos/7/updates/x86_64/Packages

  Installation may fail with a missing dependency on the words rpm; install it first: rpm -ivh words-3.0-22.el7.noarch.rpm
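
  A minimal install sketch, assuming all rpms were downloaded into the current directory (the version numbers and globs are illustrative):

# Server side: install the words dependency first, then the Kerberos packages
rpm -ivh words-3.0-22.el7.noarch.rpm
rpm -ivh krb5-libs-*.rpm libkadm5-*.rpm krb5-workstation-*.rpm krb5-server-*.rpm

# Client side: the same, minus krb5-server
rpm -ivh krb5-libs-*.rpm libkadm5-*.rpm krb5-workstation-*.rpm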

  • KDC configuration (server side)
vim /var/kerberos/krb5kdc/kdc.conf

# Only the lines below need changing: set the realm to HADOOP.COM, the maximum ticket lifetime to 1 day, and the renewable lifetime to 7 days
[realms] 
 HADOOP.COM = { 
  #master_key_type = aes256-cts 
   max_life = 1d 
   max_renewable_life = 7d 
   acl_file = /var/kerberos/krb5kdc/kadm5.acl 
   dict_file = /usr/share/dict/words 
   admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab 
   supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal 
}
vim /etc/krb5.conf

[libdefaults]
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
default_realm = HADOOP.COM
udp_preference_limit = 1    # always use TCP; avoids UDP problems with large tickets
# default_ccache_name = ...

[realms]
HADOOP.COM = {
    kdc = hadoop001             # hostname of the KDC server
    admin_server = hadoop001    # hostname of the admin server
}

# Copy the krb5 config to every other node
scp /etc/krb5.conf root@hadoop00x:/etc/
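
  With more than a couple of nodes, a small loop saves typing (a sketch; hadoop002 and hadoop003 are placeholders for your actual node list):

for host in hadoop002 hadoop003; do
    scp /etc/krb5.conf root@${host}:/etc/
done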
  • Initialize the KDC database and admin account
# Create the KDC database (-s stashes the master key); set the master password (123456 here)
kdb5_util create -s

# Create the admin principal
kadmin.local -q "addprinc admin/admin@HADOOP.COM"

# Grant the Kerberos admin full privileges
vim /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *

# Enable the services at boot and start them
systemctl enable krb5kdc
systemctl enable kadmin
systemctl start krb5kdc
systemctl start kadmin
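
  To confirm the KDC and admin server are up, request a ticket for the admin principal (a quick check, using the admin password set above):

kinit admin/admin@HADOOP.COM    # enter the admin password
klist                           # should show a krbtgt/HADOOP.COM@HADOOP.COM ticket
kdestroy                        # discard the test ticket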

2. Enable Kerberos authentication in CDH

  1) Create the CM admin principal (remember the password; it is needed later)

    kadmin.local -q "addprinc cloudera-scm/admin"

   2) Open the CM web UI and launch the Enable Kerberos wizard

   3) Confirm that the prerequisite configurations listed by the wizard are all complete

  


   4) KDC type: MIT KDC; Kerberos encryption types: aes128-cts, des3-hmac-sha1, arcfour-hmac; enter the hostname of the server where the KDC runs

   5) Leave "Manage krb5.conf through Cloudera Manager" unchecked

   6) Enter the CM Kerberos admin account created above and click Continue until the wizard finishes; a quick verification sketch follows below
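
  A sanity check on the KDC host once the wizard completes (a sketch; the exact principal names CM generates depend on which services are installed):

# The CM account must be able to run admin commands remotely through kadmin
kadmin -p cloudera-scm/admin@HADOOP.COM -q "list_principals"
# The wizard should have generated per-service principals, e.g. hdfs/..., hive/..., HTTP/...
kadmin.local -q "list_principals" | grep -E "hdfs|hive|HTTP"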




3. Common Kerberos commands

  • Create a user and a keytab file
# Create the Linux user
useradd -m baron
echo "123456" | passwd --stdin baron

# Create the Kerberos principal
kadmin.local -q "addprinc -pw 123456 baron"

# Generate the keytab (-norandkey keeps the principal's password valid; only available in kadmin.local)
kadmin.local
ktadd -k /home/baron/baron.keytab -norandkey baron

# Inspect the keytab file
klist -kt /home/baron/baron.keytab
# List all existing principals
kadmin.local -q "list_principals"

# Create an hdfs superuser principal; typically one principal per service, and the matching Linux user must exist on every node
kadmin.local -q "addprinc hdfs" 
kadmin.local 
ktadd -k /home/hdfs/hdfs.keytab -norandkey hdfs

# Distribute the keytab to every node
scp -r /home/hdfs/hdfs.keytab root@hadoop00x:/home/hdfs/

# Run kinit on every node
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM    # or: kinit hdfs, then enter the password
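
  Distribution and initialization across nodes can be scripted (a sketch; hadoop002 and hadoop003 stand in for your node list, and the hdfs Linux user is assumed to exist on each node):

for host in hadoop002 hadoop003; do
    scp /home/hdfs/hdfs.keytab root@${host}:/home/hdfs/
    # kinit as the hdfs user so the ticket lands in that user's credential cache
    ssh root@${host} "chown hdfs:hdfs /home/hdfs/hdfs.keytab && su - hdfs -c 'kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM'"
done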

4. Configuring Kerberos in DataX and kinit in shell scripts

  • Usage in a shell script
#!/bin/bash

# kinit with the keytab first
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
# klist -s exits non-zero when there is no valid ticket in the cache
if ! klist -s
then  
    echo "kerberos init failed ----"
    exit 1
else 
    # run the actual job here
    echo "success"
fi
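
  Since tickets expire (max_life is 1d above), scheduled jobs usually re-run kinit on each invocation. A crontab sketch wrapping a script like the one above (the script path is hypothetical):

# run daily at 02:00; the script kinits before doing any HDFS work
0 2 * * * /home/hdfs/run_with_kinit.sh >> /home/hdfs/run_with_kinit.log 2>&1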
  • DataX configuration
{
    "job": {
        "setting": {
            "speed": {
                "channel": 1
            }
        },
        "content": [
            {
                "reader": {
                    "name": "hdfsreader",
                    "parameter": {
                        "path": "/workspace/*",
                        "defaultFS": "hdfs://hadoop001:8020",
                        "column": [
                               {
                                "index": 0,
                                "type": "long"
                               },
                               {
                                "index": 1,
                                "type": "string"
                               },
                               {
                                "index": 2,
                                "type": "double"
                               }
                        ],
                        "fileType": "text",
                        "encoding": "UTF-8",
                        "fieldDelimiter": ",",
                        "haveKerberos": true,
                        "kerberosKeytabFilePath": "/home/hdfs/hdfs.keytab",
                        "kerberosPrincipal": "hdfs@HADOOP.COM"
                    }

                },
                "writer": {
                    "name": "streamwriter",
                    "parameter": {
                        "print": true
                    }
                }
            }
        ]
    }
}
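
  To launch the job, pass the JSON file to DataX's launcher after making sure a valid ticket exists (a sketch; the DataX install path and job file name are placeholders):

kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
python /opt/datax/bin/datax.py /path/to/hdfs2stream.json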

  A second job: export from MySQL into the Kerberized HDFS; hdfswriter takes the same three Kerberos parameters (haveKerberos, kerberosKeytabFilePath, kerberosPrincipal):

{
    "job": {
        "setting": {
            "speed": {
                 "channel": 1
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "root",
                        "column": [
                            "uid",
                            "event_type",
                            "time"
                        ],
                        "splitPk": "uid",
                        "connection": [
                            {
                                "table": [
                                    "action"
                                ],
                                "jdbcUrl": [
     "jdbc:mysql://node:3306/aucc"
                                ]
                            }
                        ]
                    }
                },
               "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "defaultFS": "hdfs://hadoop001:8020",
                        "fileType": "text",
                        "path": "/workspace",
                        "fileName": "u",
                        "column": [
                            {
                                "name": "uid",
                                "type": "string"
                            },
                            {
                                "name": "event_type",
                                "type": "string"
                            },
                            {
                                "name": "time",
                                "type": "string"
                            }
                        ],
                        "writeMode": "append",
                        "fieldDelimiter": "\t",
                        "compress":"bzip2",
                        "haveKerberos": true,
                        "kerberosKeytabFilePath": "/home/hdfs/hdfs.keytab",
                        "kerberosPrincipal": "hdfs@HADOOP.COM"
                    }
                }
            }
        ]
    }
}
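
  Before running the writer job, make sure the target directory exists and the principal in the keytab can write to it (a sketch; /workspace matches the path in the job above, and the job file name is a placeholder):

kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
hdfs dfs -mkdir -p /workspace
python /opt/datax/bin/datax.py /path/to/mysql2hdfs.json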

 

5. Disabling Kerberos

  1. Stop all services in the cluster

  2. ZooKeeper:

    1) Uncheck enableSecurity (set it to false)

    2) Uncheck Enable Kerberos Authentication (set it to false)

  3. HDFS configuration:

    1) Set hadoop.security.authentication to simple

    2) Uncheck hadoop.security.authorization (set it to false)

    3) Change the data directory permission dfs.datanode.data.dir.perm back to 755

    4) Restore the DataNode ports: dfs.datanode.address from 1004 (Kerberos, privileged) back to 9866 (default); dfs.datanode.http.address from 1006 (Kerberos, privileged) back to 9864 (default)

  4. HBase configuration:

    1) Set hbase.security.authentication to simple

    2) Uncheck hbase.security.authorization (set it to false)

    3) Set hbase.thrift.security.qop to none

  5. HBase may still fail to start; if so, make ZooKeeper skip ACL checks so the znodes created under Kerberos remain accessible:

    In the ZooKeeper configuration, search for "Java Configuration Options for Zookeeper Server" and add -Dzookeeper.skipACL=yes
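
  Once Kerberos is disabled across the cluster, the KDC services set up in section 1 can be stopped and disabled as well, if nothing else uses them (optional):

systemctl stop kadmin krb5kdc
systemctl disable kadmin krb5kdc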