Table of Contents

  • 1. Environment
  • 1. Environment details
  • 2. Handling the kafka and zookeeper archives
  • 2. hosts file configuration
  • 1. Editing hosts
  • 2. Applying the change
  • 3. Installing and configuring Kerberos
  • 1. Installing the Kerberos server
  • 2. Installing the Kerberos client
  • 3. krb5.conf configuration
  • 4. kdc.conf configuration
  • 5. Creating the Kerberos database
  • 6. Enabling the services to start on boot
  • 7. Adding Kerberos principals
  • 8. Generating the keytab files
  • 9. Setting permissions on the keytab files
  • 4. Configuring the jaas.conf files
  • 1. kafka_server_jaas.conf
  • 2. zookeeper_jaas.conf
  • 3. kafka_client_jaas.conf
  • 5. Modifying the corresponding xxxx.properties files (under kafka/config)
  • 1. kafka server.properties
  • 2. kafka zookeeper.properties
  • 3. producer.properties and consumer.properties
  • 6. Under zookeeper/conf
  • 7. Writing the zookeeper and kafka startup script start.sh
  • 8. Writing producer.sh and consumer.sh
  • 9. Testing that production and consumption work


1. Environment

1. Environment details

  • kafka version: 2.12-2.3.0
  • zookeeper version: 3.6.1
  • Operating system: CentOS-7-x86_64-DVD-2009
  • User: root
  • Java: openjdk version "1.8.0_272" (the JDK that ships with the OS; see the quick check after this list)
  • zookeeper and kafka downloads: from the official sites, or via the Baidu Cloud links -> zookeeper and kafka (extraction code: ytig)
  • Kerberos: no particular version requirement
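A quick way to confirm the machine matches this environment (a minimal sketch; assumes the stock CentOS tooling):

cat /etc/redhat-release   # expect CentOS Linux release 7.9.2009 (Core)
java -version             # expect openjdk version "1.8.0_272"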

2. Handling the kafka and zookeeper archives

mkdir /opt/third/
cd /opt/third/
# Copy the two archives into this directory, then extract them (file names assumed from the versions above):
tar -zxvf apache-zookeeper-3.6.1-bin.tar.gz
tar -zxvf kafka_2.12-2.3.0.tgz
# Rename the extracted directories:
cp -r apache-zookeeper-3.6.1-bin zookeeper
cp -r kafka_2.12-2.3.0 kafka

2. hosts file configuration

1. Editing hosts

vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.235.139 stream.dt.local demo-db

2. Applying the change

service network restart
# Note: edits to /etc/hosts take effect immediately; the restart is just for good measure.
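To confirm the mapping works, resolve and ping the hostname (a quick check; the IP should match the entry above):

getent hosts stream.dt.local   # should print 192.168.235.139  stream.dt.local demo-db
ping -c 1 stream.dt.local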

3. Installing and configuring Kerberos

1. Installing the Kerberos server

yum install krb5-server -y

2. Installing the Kerberos client

yum install krb5-devel krb5-workstation -y
# Server and client are installed on the same machine here. CentOS 7 usually ships with krb5-workstation already; install it if it is missing.
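A simple check that all the Kerberos packages are in place:

rpm -qa | grep krb5
# expect krb5-server, krb5-libs, krb5-workstation and krb5-devel in the output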

3. Edit krb5.conf (/etc/krb5.conf) as follows

[logging]
	default = FILE:/var/log/krb5libs.log 
	#Kerberos authentication logs can be inspected in these files
	kdc = FILE:/var/log/krb5kdc.log  
	admin_server = FILE:/var/log/kadmind.log
[libdefaults]
	#use your own realm here
	default_realm = EXAMPLE.COM 
	dns_lookup_kdc = false
	dns_lookup_realm = false
	ticket_lifetime = 86400
	#renew_lifetime = 604800
	forwardable = true
	default_tgs_enctypes = rc4-hmac
	default_tkt_enctypes = rc4-hmac
	permitted_enctypes = rc4-hmac
	udp_preference_limit = 1
	kdc_timeout = 3000
[realms]
	EXAMPLE.COM = {
		#the hostname mapped in /etc/hosts (stream.dt.local)
		kdc = stream.dt.local  
		admin_server = stream.dt.local
	}
[domain_realm]
	kafka = EXAMPLE.COM
	zookeeper = EXAMPLE.COM
	clients = EXAMPLE.COM

4. Edit kdc.conf (/var/kerberos/krb5kdc/kdc.conf) as follows

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 EXAMPLE.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

5. Creating the Kerberos database

kdb5_util create -r EXAMPLE.COM -s   # use your own realm name; you will be prompted to enter and confirm a password
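If the command succeeds, the database files appear under the KDC directory (a quick check; the path is the CentOS default):

ls /var/kerberos/krb5kdc/
# expect files such as principal, principal.ok, principal.kadm5 and kadm5.acl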

6. Enabling the services to start on boot

chkconfig --level 35 krb5kdc on
chkconfig --level 35 kadmin on
service krb5kdc start
service kadmin start
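On CentOS 7 these legacy commands are redirected to systemd, so the services can also be checked with systemctl:

systemctl status krb5kdc
systemctl status kadmin
# both should report "active (running)"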

7. Adding Kerberos principals

# Run these inside the kadmin.local shell; they are listed bare here purely for easy copy-paste.
addprinc kafka/stream.dt.local@EXAMPLE.COM
addprinc zookeeper/stream.dt.local@EXAMPLE.COM
addprinc clients/stream.dt.local@EXAMPLE.COM

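To confirm the principals were created, list them from kadmin.local:

kadmin.local -q "listprincs"
# the output should include kafka/stream.dt.local@EXAMPLE.COM, zookeeper/stream.dt.local@EXAMPLE.COM and clients/stream.dt.local@EXAMPLE.COM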

8. Generating the keytab files

mkdir -p /opt/third/kafka/kerberos/
mkdir -p /opt/third/zookeeper/kerberos/
# ktadd is also run from the kadmin.local shell:
ktadd -k /opt/third/kafka/kerberos/kafka_server.keytab kafka/stream.dt.local@EXAMPLE.COM
ktadd -k /opt/third/zookeeper/kerberos/kafka_zookeeper.keytab zookeeper/stream.dt.local@EXAMPLE.COM
ktadd -k /opt/third/kafka/kerberos/kafka_client.keytab clients/stream.dt.local@EXAMPLE.COM

If the commands complete without errors, the keytabs have been generated; they can be verified as shown below.
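Each keytab can be inspected with klist to confirm it holds the expected principal:

klist -kt /opt/third/kafka/kerberos/kafka_server.keytab
klist -kt /opt/third/zookeeper/kerberos/kafka_zookeeper.keytab
klist -kt /opt/third/kafka/kerberos/kafka_client.keytab
# each listing should show entries for the corresponding service principal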

9. Setting permissions on the keytab files

chmod -R 777 /opt/third/kafka/kerberos/kafka_server.keytab
chmod -R 777 /opt/third/zookeeper/kerberos/kafka_zookeeper.keytab
chmod -R 777 /opt/third/kafka/kerberos/kafka_client.keytab
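As a final check, each keytab can be used to obtain a ticket (shown here for the server keytab; kdestroy clears the ticket again):

kinit -kt /opt/third/kafka/kerberos/kafka_server.keytab kafka/stream.dt.local@EXAMPLE.COM
klist      # should show a TGT for kafka/stream.dt.local@EXAMPLE.COM
kdestroy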

4. Configuring the jaas.conf files

1.kafka_server_jaas.conf

vim /opt/third/kafka/kerberos/kafka_server_jaas.conf

Note: the ZkClient context name below corresponds to the kafka broker startup parameter -Dzookeeper.sasl.client=ZkClient
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/opt/third/kafka/kerberos/kafka_server.keytab"
    principal="kafka/stream.dt.local@EXAMPLE.COM";
};
ZkClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/opt/third/kafka/kerberos/kafka_server.keytab"
    principal="kafka/stream.dt.local@EXAMPLE.COM";
};

2.zookeeper_jaas.conf

vim /opt/third/zookeeper/kerberos/zookeeper_jaas.conf

Server {
    com.sun.security.auth.module.Krb5LoginModule required debug=true
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/opt/third/zookeeper/kerberos/kafka_zookeeper.keytab"
    principal="zookeeper/stream.dt.local@EXAMPLE.COM";
};

3.kafka_client_jaas.conf

vim /opt/third/kafka/kerberos/kafka_client_jaas.conf

Note: the KafkaClient context name below corresponds to the kafka console producer/consumer startup parameter -Dzookeeper.sasl.client=KafkaClient
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/opt/third/kafka/kerberos/kafka_client.keytab"
   principal="clients/stream.dt.local@EXAMPLE.COM";
};

5. Modifying the corresponding xxxx.properties files (under kafka/config)

Append the following lines to each file (everything below is an addition, not a replacement).

1.kafka server.properties

listeners=SASL_PLAINTEXT://stream.dt.local:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka

2.kafka zookeeper.properties

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

3.producer.properties和consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka

6. Under zookeeper/conf

cp zoo_sample.cfg zoo.cfg   # important: zookeeper will not start without zoo.cfg
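For a single-node setup the sample defaults are sufficient. The effective (non-comment) settings can be checked quickly; on a stock 3.6.1 distribution they should look roughly like this (dataDir can be moved off /tmp if you want the data to survive reboots):

grep -v '^#' /opt/third/zookeeper/conf/zoo.cfg
# tickTime=2000
# initLimit=10
# syncLimit=5
# dataDir=/tmp/zookeeper
# clientPort=2181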

7. Writing the zookeeper and kafka startup script start.sh

# start zookeeper
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/third/zookeeper/kerberos/zookeeper_jaas.conf'

/opt/third/zookeeper/bin/zkServer.sh start >> /opt/third/kafka/start.log 2>&1

# start kafka
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/third/kafka/kerberos/kafka_server_jaas.conf -Dzookeeper.sasl.client=ZkClient'

JMX_PORT=9988 nohup /opt/third/kafka/bin/kafka-server-start.sh /opt/third/kafka/config/server.properties >> /opt/third/kafka/start.log 2>&1 &
# then run the script:
sh start.sh

If the script runs without errors, both services are up; a quick check follows.

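Both processes can be confirmed from the process list, and the log collects any startup errors (a simple check; the log path matches the script above):

ps -ef | grep -E "QuorumPeerMain|kafka.Kafka" | grep -v grep
tail -n 50 /opt/third/kafka/start.log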

8. Writing producer.sh and consumer.sh

producer.sh:

#!/bin/bash
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/third/kafka/kerberos/kafka_client_jaas.conf -Dzookeeper.sasl.client=KafkaClient"

sh bin/kafka-console-producer.sh --broker-list stream.dt.local:9092 --topic test --producer.config /opt/third/kafka/config/producer.properties

consumer.sh:

#!/bin/bash
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/third/kafka/kerberos/kafka_client_jaas.conf -Dzookeeper.sasl.client=KafkaClient"

sh bin/kafka-console-consumer.sh --bootstrap-server stream.dt.local:9092 --topic test --from-beginning --consumer.config /opt/third/kafka/config/consumer.properties
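The scripts assume a topic named test already exists. If it has not been created (and auto-creation is disabled), it can be created first with the same kerberized client settings; a sketch:

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/third/kafka/kerberos/kafka_client_jaas.conf"
sh bin/kafka-topics.sh --bootstrap-server stream.dt.local:9092 --command-config /opt/third/kafka/config/producer.properties --create --topic test --partitions 1 --replication-factor 1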

9. Testing that production and consumption work

Run producer.sh in one terminal and consumer.sh in another; messages typed into the producer console should appear in the consumer console. If they do, the Kerberos-secured Kafka and ZooKeeper setup is working end to end.