1. Downloading and Installing Kafka

1.1 Downloading Kafka

Go to the official site to download Kafka:
https://kafka.apache.org/downloads

Click the link for the binary release on the downloads page, and from the page it opens copy the download address:

https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz

On the Linux machine, download the file with wget; I put it in root's home directory:

wget https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz

The full transcript (my network speed was nothing special: 157 MB/s, a whole 0.5 seconds of my time):

root@ip-10-100-10-195:~# wget https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
--2021-12-08 08:10:32--  https://dlcdn.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
Resolving dlcdn.apache.org (dlcdn.apache.org)... 151.101.2.132, 2a04:4e42::644
Connecting to dlcdn.apache.org (dlcdn.apache.org)|151.101.2.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 86396520 (82M) [application/x-gzip]
Saving to: ‘kafka_2.13-3.0.0.tgz’

kafka_2.13-3.0.0.tgz             100%[========================================================>]  82.39M   157MB/s    in 0.5s

2021-12-08 08:10:32 (157 MB/s) - ‘kafka_2.13-3.0.0.tgz’ saved [86396520/86396520]

root@ip-10-100-10-195:~#

Check the downloaded file:

root@ip-10-100-10-195:~# ls -l
total 84380
drwxr-xr-x 2 root root     4096 Dec  8 06:47 Downloads
-rw-r--r-- 1 root root 86396520 Sep 20 08:46 kafka_2.13-3.0.0.tgz
drwxr-xr-x 4 root root     4096 Jan 12  2021 snap
root@ip-10-100-10-195:~#

1.2 Extracting and Installing Kafka

I chose to extract it under the /opt/module/ directory.
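Note that tar's -C option does not create the target directory, so create it first if it does not already exist:

mkdir -p /opt/module

Then extract: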

root@ip-10-100-10-195:~# tar -zxvf kafka_2.13-3.0.0.tgz -C /opt/module/

Check the result:

root@ip-10-100-10-195:/opt/module# ls -l
total 4
drwxr-xr-x 7 root root 4096 Sep  8 21:26 kafka_2.13-3.0.0

Create a logs folder under the kafka_2.13-3.0.0 directory to hold the logs. (The log.dirs setting below points here; note that in Kafka this directory holds the actual topic data segments, not just application logs.)

root@ip-10-100-10-195:~# mkdir /opt/module/kafka_2.13-3.0.0/logs

1.3 Editing the Configuration File

Go into the config directory under the Kafka installation:

root@ip-10-100-10-195:/opt/module/kafka_2.13-3.0.0# cd config/
root@ip-10-100-10-195:/opt/module/kafka_2.13-3.0.0/config# ls -l
total 76
-rw-r--r-- 1 root root  906 Sep  8 21:21 connect-console-sink.properties
-rw-r--r-- 1 root root  909 Sep  8 21:21 connect-console-source.properties
-rw-r--r-- 1 root root 5475 Sep  8 21:21 connect-distributed.properties
-rw-r--r-- 1 root root  883 Sep  8 21:21 connect-file-sink.properties
-rw-r--r-- 1 root root  881 Sep  8 21:21 connect-file-source.properties
-rw-r--r-- 1 root root 2103 Sep  8 21:21 connect-log4j.properties
-rw-r--r-- 1 root root 2540 Sep  8 21:21 connect-mirror-maker.properties
-rw-r--r-- 1 root root 2262 Sep  8 21:21 connect-standalone.properties
-rw-r--r-- 1 root root 1221 Sep  8 21:21 consumer.properties
drwxr-xr-x 2 root root 4096 Sep  8 21:21 kraft
-rw-r--r-- 1 root root 4674 Sep  8 21:21 log4j.properties
-rw-r--r-- 1 root root 1925 Sep  8 21:21 producer.properties
-rw-r--r-- 1 root root 6849 Sep  8 21:21 server.properties
-rw-r--r-- 1 root root 1032 Sep  8 21:21 tools-log4j.properties
-rw-r--r-- 1 root root 1169 Sep  8 21:21 trogdor.conf
-rw-r--r-- 1 root root 1205 Sep  8 21:21 zookeeper.properties

Edit the server.properties file. The main changes are the items below; note that some of these entries already exist in the file, so check carefully before adding duplicates:

#Globally unique broker ID; must not repeat across brokers
broker.id=0
#Enable topic deletion
delete.topic.enable=true
#Number of threads handling network requests
num.network.threads=3
#Number of threads handling disk I/O
num.io.threads=8
#Send buffer size of the socket server
socket.send.buffer.bytes=102400
#Receive buffer size of the socket server
socket.receive.buffer.bytes=102400
#Maximum size of a request the socket server will accept
socket.request.max.bytes=104857600
#Path where Kafka stores its log (data) files
log.dirs=/opt/module/kafka_2.13-3.0.0/logs
#Default number of partitions per topic on this broker
num.partitions=1
#Number of threads per data directory used for log recovery and cleanup
num.recovery.threads.per.data.dir=1
#Maximum time a segment file is kept before it becomes eligible for deletion
log.retention.hours=168
#ZooKeeper connection address
zookeeper.connect=localhost:2181

!!! Pay particular attention to the log path, the ZooKeeper address, and the topic-deletion switch.
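One more setting worth knowing about when clients connect from another machine: advertised.listeners is the address the broker hands back to clients after first contact, and if it is unreachable from the client, connections will stall even though port 9092 is open. As an example for this host (using the broker IP that the Java code below connects to), the entries would look like:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.100.10.195:9092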

Configure the environment variables:

root@ip-10-100-10-195:~# vim /etc/profile

# kafka_home
export KAFKA_HOME=/opt/module/kafka_2.13-3.0.0
export PATH=$PATH:$KAFKA_HOME/bin

root@ip-10-100-10-195:~# source /etc/profile
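With KAFKA_HOME on the PATH, the Kafka scripts can now be invoked from any directory. A quick sanity check:

kafka-topics.sh --version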

1.4 Starting Kafka
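Since this server.properties still uses ZooKeeper (zookeeper.connect=localhost:2181), ZooKeeper must be running before the broker starts. Kafka ships with a bundled ZooKeeper and a start script; the -daemon flag runs it in the background:

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties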

Following the official quickstart, you can start Kafka in one window and watch the log output:

# Start the Kafka broker service
bin/kafka-server-start.sh config/server.properties

Or use this command to start just the Kafka server, as a background daemon:

root@ip-10-100-10-195:/opt/module/kafka_2.13-3.0.0# bin/kafka-server-start.sh -daemon config/server.properties

To shut it down, use:

bin/kafka-server-stop.sh
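To confirm the broker is actually up and accepting connections, you can ask it for the API versions it supports (kafka-broker-api-versions.sh ships with Kafka):

bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092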

1.5 Creating a Topic

If we follow the official tutorial to create a topic, we hit an error:

root@ip-10-100-10-195:/opt/module/kafka_2.13-3.0.0# bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
Missing required argument "[partitions]"

It says we are missing an argument: partitions.

Let's check the built-in help:

--partitions:

--partitions <Integer: # of partitions> The number of partitions for the topic being created or altered (WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected). If not supplied for create, defaults to the cluster default.

After adding that argument, another error appears:

root@ip-10-100-10-195:/opt/module/kafka_2.13-3.0.0# bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092 --partitions 1

Missing required argument "[replication-factor]"

We are missing yet another argument: replication-factor.

--replication-factor:

--replication-factor <Integer: replication factor> The replication factor for each partition in the topic being created. If not supplied, defaults to the cluster default.

So we add that argument as well:

root@ip-10-100-10-195:/opt/module/kafka_2.13-3.0.0# bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
Created topic quickstart-events.

And with that, we have created a topic.
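You can verify its partition count and replication factor with --describe:

bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092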

To delete it, use:

bin/kafka-topics.sh --delete --topic quickstart-events --bootstrap-server localhost:9092

We won't go into other Kafka operations here; they are not the focus of this article.
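One thing that is worth doing before writing any Java, though, is a smoke test with the console clients that ship with Kafka; produce a few lines in one window and read them back in another:

# window 1: each line you type becomes a message; Ctrl-C to quit
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

# window 2: read the topic from the beginning
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092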

At this point Kafka is fully configured; next we will connect to it from Java for testing.


2. Testing the Connection from Java
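The examples below use the official Java client (kafka-clients). Assuming a Maven project, the dependency, matched to the broker version installed above, looks like this:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.0.0</version>
</dependency>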

2.1 Consumer code

package com.veeja.demo;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

/**
 * @author liuweijia
 */
public class ConsumerTest {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        // Replace with your broker's IP address
        props.put("bootstrap.servers", "10.100.10.195:9092");
        props.put("group.id", "1");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
        kafkaConsumer.subscribe(Arrays.asList("test")); // must match the topic the producer writes to
        while (true) {
            Thread.sleep(1000);
            System.out.println("poll start...");
            // the poll(long) overload is deprecated; pass a Duration instead
            ConsumerRecords<String, String> records = kafkaConsumer.poll(Duration.ofMillis(100));
            int count = records.count();
            System.out.println("the number of topic:" + count);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(),
                        record.key(),
                        record.value());
            }
        }
    }
}
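A note on this loop: with enable.auto.commit=true, the consumer commits offsets in the background roughly every auto.commit.interval.ms (1 second here), so each batch is marked as consumed shortly after poll() returns it. The Thread.sleep is only there to keep the console output readable; poll() itself already blocks for up to the duration you pass it.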

2.2 Producer code

package com.veeja.demo;

import org.apache.kafka.clients.producer.*;

import java.util.Properties;

/**
 * @author liuweijia
 */
public class ProducerTest {
    public static void main(String[] args) throws Exception {
        ProducerTest test1 = new ProducerTest();
        test1.execMsgSend();
    }

    public void execMsgSend() throws Exception {
        Properties props = new Properties();
        // Replace with your broker's IP address
        props.put("bootstrap.servers", "10.100.10.195:9092");
        props.put("ack", "1");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        String topic = "test";
        for (int i = 0; i < 10; i++) {
            String value = " this is another message_" + i;
            ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, i + "", value);
            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    if (e != null) {
                        // the send failed; the metadata fields are not meaningful here
                        e.printStackTrace();
                        return;
                    }
                    System.out.println("message sent to partition " + recordMetadata.partition() + ", offset: " + recordMetadata.offset());
                }
            });
            // send() is asynchronous; the callback above reports the actual result
            System.out.println(i + " ---- submitted");
            Thread.sleep(1000);
        }
        System.out.println("send message over.");
    }
}
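On the settings used here: acks=1 means the partition leader acknowledges the write without waiting for the followers, a middle ground between acks=0 (fire and forget) and acks=all (wait for the full in-sync replica set). With retries=0, a failed send is reported to the callback instead of being retried.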

2.3 Running It

Start the consumer instance first. It polls continuously, but since the topic is empty, every poll comes back empty-handed. Then start the producer, which sends one message per second for ten seconds; during that window the consumer picks up the messages.

Over.