I bookmarked a blog post about multi-threaded Kafka consumption with manual offset commits whose design is worth borrowing, so I am sharing it here as well:
My purpose in this post is to improve on the code from that recommended article and add the following features:
- Add an org.apache.kafka.clients.consumer.ConsumerRebalanceListener, so that when a partition rebalance occurs the consumer threads and their assigned partitions can be redistributed
- Allow key parameters to be set in a configuration file; for example, a starting offset can be specified for each partition, and the consumer threads will only start consuming from those offsets
- Allow a per-partition consumption limit to be configured, so that once the specified number of messages has been consumed from a partition, its consumer thread stops automatically (the current design handles only the simplest case: if any partition has a limit, every partition must have one. This feature is of limited practical value; Kafka is a message queue, and the usual pattern is to consume whenever messages arrive and park the consumer thread when there are none. Stopping the application after a fixed number of messages is rare)
- Add an interface through which users can define their own message-processing logic and wire the processing class in via the configuration file, which keeps the code generic (a minimal sketch follows this list)
- I originally meant to upload the source as an archive, but my company's security controls detect and block uploads to sites such as CSDN and GitHub from company machines, so I can only paste the code into this post (all code related to company business has been removed; what remains is a generic, runnable shell). Although that makes for a lot of pasted code, I guarantee it is complete: the project can be fully reconstructed from this post.
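As a quick preview of that extension point, here is a minimal sketch of a custom processor (the package and class name are invented for illustration); it is wired in through the consumer.data.processor configuration key shown below:
package com.example.demo;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.consumer.processor.Processor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
//prints the coordinates and payload of every record handed over by a Consumer thread
public class LoggingProcessor implements Processor<String, String> {
    @Override
    public void process(ConsumerRecord<String, String> record, Config config) {
        System.out.println("partition=" + record.partition()
                + " offset=" + record.offset()
                + " value=" + record.value());
    }
}
This works with topic.type=1 (String deserializers); exceptions thrown here are caught by the Consumer thread, so a faulty processor cannot kill consumption.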
Although I am posting all of the code, the file structure has been flattened, so here is a screenshot of the original project's file layout. The full project looks like this:
The application ships its configuration in two formats, properties and yml; the settings are identical, only the file format differs.
The properties-format configuration file is consumer-config.properties; its contents, with explanations, are:
#the consumer group this consumer belongs to
group.id=aaa-bbb-ccc
#Kafka cluster address
bootstrap.servers=10.33.25.131:9092
auto.offset.reset=earliest
#whether offsets are committed automatically
enable.auto.commit=false
# 1: message keys and values are deserialized with StringDeserializer
# 2: message keys and values are deserialized with ByteArrayDeserializer
topic.type=1
#Kafka topic name
topic.name=SMART_METADATA_TOPIC
#maximum number of records returned by a single poll
max.poll.records=1
#offset commit interval, meaning: offsets are committed once for every 20 consumed records
commit.length=20
# ./kafka-topics.sh --topic HIK_SMART_METADATA_TOPIC --describe --zookeeper 10.16.70.211:2181
#the first element is the starting offset of the first partition, the second element that of the second partition, and so on; there must be exactly as many elements as there are partitions, none may be skipped
# set the offsets of partition 0 through partition N to the positions consumption should start from; offsets of different partitions are separated by commas; -1 means leave the offset unchanged
# to consume from the very beginning, set the offsets to 0
partitions.offsets=[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#the first element is the number of records to consume from the first partition, the second element that of the second partition, and so on; there must be exactly as many elements as there are partitions, none may be skipped.
#either every partition gets a limit or none does; to leave it unspecified, comment this line out, which means unbounded consumption and the application never stops on its own
#fetch.length= [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
# this application resets partition offsets to the configured values by committing them through a separate, dedicated Kafka consumer.
# if the consumer threads started immediately after that reset, a violent rebalance could occur and the consumer threads might even fail to start, so the application waits a while before starting them;
# this parameter configures that wait interval
auto.commit.interval.ms=1000
#number of Kafka consumer threads; tune to your situation
kafka.consumer.count=6
#the class that actually processes polled records
consumer.data.processor=com.lhever.modules.consumer.processor.impl.DefaultProcessor
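To make the interplay of partitions.offsets and fetch.length concrete, consider a hypothetical two-partition topic (numbers invented for illustration): with partitions.offsets=[100, 200] and fetch.length=[5, 5], partition 0 is consumed from offset 100 through 104 (5 records) and marked complete once offset 105 is reached, and partition 1 likewise runs from 200 through 204. Internally, the Config class turns each length into an absolute upper bound by adding it to the partition's starting offset, and the Poller stops the partition's Consumer as soon as a polled record reaches that bound.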
The yml-format configuration file is consumer-config.yml; its contents are:
---
- group.id: aaa-bbb-ccc
bootstrap.servers: 10.66.191.30:9092,10.66.191.31:9092,10.66.191.29:9092
auto.offset.reset: earliest
enable.auto.commit: false
topic.type: 1
topic.name: SMART_METADATA_TOPIC
max.poll.records: 1
commit.length: 20
partitions.offsets: [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
fetch.length: [20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20]
auto.commit.interval.ms: 1000
kafka.consumer.count: 6
consumer.data.processor: com.lhever.modules.consumer.processor.impl.DefaultProcessor
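For reference, starting the application comes down to running the APP entry class (pasted below) with the configuration file path as the first argument and the format as the second ("yml" for the yml file, "resourceFile" for the properties file). A hypothetical invocation (the jar name and path are invented for illustration; note that main() additionally waits for one non-blank line on stdin before it proceeds):
java -cp kafka-consumer.jar com.lhever.modules.consumer.APP /opt/app/consumer-config.yml yml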
All of the implementation code is pasted below (the number of classes is small, so the project is easy to reconstruct from the layout screenshot above).
package com.lhever.modules.consumer;
import com.lhever.modules.consumer.setup.ResourceFileStarter;
import com.lhever.modules.consumer.setup.Starter;
import com.lhever.modules.consumer.setup.YmlStarter;
import com.lhever.modules.core.exception.IllegalParameterException;
import org.apache.commons.lang3.StringUtils;
import java.util.Scanner;
/**
* Entry point of the application
*
*/
public class APP {
public static void main(String... args) {
System.out.println("请输入任意非空内容并回车启动程序");
String line = null;
do {
Scanner scan = new Scanner(System.in);
line = scan.nextLine();
} while (StringUtils.isBlank(line));
if (args == null || args.length < 1 || StringUtils.isBlank(args[0])) {
throw new IllegalParameterException("未指定配置文件路径");
}
args[0] = args[0].trim();
Starter starter = null;
if (args.length >= 2 && StringUtils.isNotBlank(args[1])) {
if (args[1].trim().equals("yml")) {
starter = new YmlStarter();
} else if(args[1].trim().equals("resourceFile")){
starter = new ResourceFileStarter();
} else {
starter = new YmlStarter();
}
} else {
starter = new YmlStarter();
}
starter.run(args);
}
}
package com.lhever.modules.consumer.cogroup.manager;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.core.utils.CollectionUtils;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;
/**
 * When consumption limits are configured in the config file, this class records, per
 * partition, whether consumption has completed. Once every partition in the cluster has
 * been consumed up to its limit, the application stops automatically; without configured
 * limits the application never stops on its own.
 * @author lihong10 2018/10/13 11:58:00
 */
public class ConsumeState {
private AtomicBoolean[] states = null;
public ConsumeState(Config config) {
List<Integer> partitionOffsets = config.getPartitionOffsets();
if (CollectionUtils.isEmpty(partitionOffsets)) {
return;
}
int size = partitionOffsets.size();
states = new AtomicBoolean[size];
for (int i = 0; i < size; i++) {
states[i] = new AtomicBoolean(false);
}
}
/**
* Tells whether the given partition has been consumed up to the configured limit (when limits are configured in the config file)
* @param partition
* @return
*/
public boolean isCompleted(int partition) {
if (partition < 0 || partition >= states.length) {
System.out.println("unknown partition, treat as uncompleted" + partition);
return false;
}
return states[partition].get();
}
/**
* Tells whether every partition has been consumed up to the configured limit (when limits are configured in the config file)
* @return
*/
public boolean allCompleted() {
if (states == null) {
//no consumption limits configured, so the application never auto-stops
return false;
}
boolean allCompleted = true;
for (AtomicBoolean state : states) {
allCompleted = allCompleted & state.get();
}
return allCompleted;
}
/**
* Marks the given partition as consumed up to its configured offset limit
* @param partition
* @param isCompleted
*/
public void updateState(int partition, boolean isCompleted) {
if (partition < 0 || partition >= states.length) {
System.out.println("unknown partition: " + partition);
return;
}
if (!isCompleted) {
System.out.println("update unnecessary");
return;
}
AtomicBoolean state = states[partition];
if (state.get() == true) {
// StringUtils.println("partition:{} task complete already", partition);
return;
}
boolean success = false;
int i = 0;
while (!(success = state.compareAndSet(false, true))) {
if (state.get() == true) {
//another thread already updated it
break;
}
i++;
//give up after 10000 failed spins; this should be practically impossible
if (i > 10000) {
break;
}
}
}
}
package com.lhever.modules.consumer.cogroup.manager;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.consumer.cogroup.service.Poller;
import com.lhever.modules.core.utils.ThreadUtils;
/**
 * Manager of the Poller threads. If a poller thread that polls records from its
 * partitions has died, the PollerManager creates a replacement poller to fetch
 * records from those partitions.
 */
public class PollerManager {
private final Poller[] pollers;
private final Thread[] threads;
private final Config config;
private final ConsumeState consumeState;
public PollerManager(Config config, Poller[] pollers, Thread[] threads) {
this.config = config;
this.pollers = pollers;
this.threads = threads;
this.consumeState = new ConsumeState(config);
}
public void updateState(int partition, boolean isCompleted) {
consumeState.updateState(partition, isCompleted);
}
public boolean isCompleted(int partition) {
return consumeState.isCompleted(partition);
}
public boolean allCompleted() {
return consumeState.allCompleted();
}
public void manage() {
if (allCompleted()) {
//once all work is done, dead poller threads are no longer restarted
return;
}
for(int i = 0; i < pollers.length; i++) {
Poller poller= pollers[i];
if (poller != null && poller.isStop()) {
System.out.println("调度线程重置线程: poller-" + i);
poller.clean();
ThreadUtils.joinThread(threads[i]);
Poller newPoller = new Poller(config, null);
newPoller.setPollerManager(this);
Thread newPollThread = new Thread(newPoller);
newPollThread.setDaemon(true);
newPollThread.setName("poller-" + i);
newPollThread.start();
pollers[i] = newPoller;
threads[i] = newPollThread;
}
}
}
}
package com.lhever.modules.consumer.cogroup.parse;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.core.exception.BusinessException;
import com.lhever.modules.core.utils.CollectionUtils;
import com.lhever.modules.core.utils.PropertiesFile;
import com.lhever.modules.core.utils.YmlUtils;
import org.apache.commons.lang3.StringUtils;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
/**
 * Parser for the yml configuration file.
 * The project accepts both yml and .properties configuration files; the settings are
 * identical, only the file format differs.
 *
 * @author lihong10 2018/10/13 11:56:00
 */
public class YmlParser {
private String[] keys = new String[]{
"group.id",
"bootstrap.servers",
"auto.offset.reset",
"enable.auto.commit",
"topic.type",
"topic.name",
"max.poll.records",
"commit.length",
"partitions.offsets",
"fetch.length",
"auto.commit.interval.ms",
"kafka.consumer.count",
"consumer.data.processor"
};
public List<Config> getConfigs(String fileName, boolean fullName) {
InputStream inPutStream = getInPutStream(fileName, fullName);
JsonNode jsonNode = YmlUtils.getJsonNode(inPutStream);
List<Config> configs = getConfigs(jsonNode);
return configs;
}
private List<Config> getConfigs(JsonNode tree) {
List<Config> configList = new ArrayList<Config>();
if (tree instanceof ObjectNode) {
ObjectNode node = (ObjectNode) tree;
Config config = getConfig(node);
configList.add(config);
} else if (tree instanceof ArrayNode) {
ArrayNode arrayNode = (ArrayNode) tree;
Iterator<JsonNode> elements = arrayNode.elements();
while (elements.hasNext()) {
JsonNode next = elements.next();
List<Config> configs = getConfigs(next);
if (CollectionUtils.isNotEmpty(configs)) {
configList.addAll(configs);
}
}
} else {
throw new BusinessException("yml format error");
}
return configList;
}
private Config getConfig(ObjectNode node) {
Properties props = new Properties();
for (String key : keys) {
if (!node.has(key)) {
continue;
}
String value = null;
if (key.equals("partitions.offsets") || key.equals("fetch.length")) {
ArrayNode arrs = (ArrayNode) node.get(key);
Iterator<JsonNode> elements = arrs.elements();
value = "";
while (elements.hasNext()) {
JsonNode next = elements.next();
if (StringUtils.isBlank(next.asText())) {
continue;
}
value += "," + next.asText();
}
value = "[" + value.substring(1) + "]";
} else {
value = node.get(key).asText();
}
props.put(key, value);
}
Config config = new Config(new PropertiesFile(props));
return config;
}
private InputStream getInPutStream(String fileName, boolean fullName) {
InputStream inputStream = null;
try {
if (fullName) {
File file = new File(fileName);
if (file == null || !file.isFile() || !file.exists()) {
throw new IllegalArgumentException("不正确的配置文件");
}
inputStream = new FileInputStream(file);
} else {
inputStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(fileName);
}
} catch (FileNotFoundException e) {
throw new BusinessException("加载配置文件出错", e);
}
if (inputStream == null) {
throw new IllegalArgumentException("不正确的配置文件, inputStream null");
}
return inputStream;
}
}
package com.lhever.modules.consumer.cogroup.service;
import com.lhever.modules.consumer.processor.Processor;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.core.exception.SystemException;
import com.lhever.modules.core.utils.ReflectionUtils;
import com.lhever.modules.core.utils.StringUtils;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
/**
 * The processing thread. Records a poller fetches from a partition are delegated to a
 * Consumer thread; each partition a poller handles gets its own Consumer thread.
 * @param <K>
 * @param <V>
 */
public class Consumer<K, V> implements Runnable {
private static Logger logger = LoggerFactory.getLogger(Consumer.class);
//buffers records handed over by the Poller thread
private BlockingQueue<ConsumerRecord<K, V>> queue = new LinkedBlockingQueue<>();
private Config config;
//time of the last offset hand-off
private LocalDateTime lastTime = LocalDateTime.now();
//hand an offset back once every commitLength consumed records
private long commitLength;
//also hand an offset back once this much time has passed since the last hand-off
private Duration commitTime = Duration.ofSeconds(2);
//number of records this thread has consumed so far
private long completed = 0;
//the last consumed record whose offset has not been handed off yet
private ConsumerRecord<K, V> lastUncommittedRecord;
private Processor consumerRecordProcessor;
private Poller poller;
//offsets to commit are handed back to the owning Poller via its offset queue
public Consumer(Config config, Poller poller) {
this.config = config;
this.poller = poller;
this.consumerRecordProcessor = createProcessor(config);
this.commitLength = config.getCommitLength();
}
private volatile boolean stop = false;
@Override
public void run() {
while (!stop || queue.size() > 0) {
//time-bounded poll from this thread's own queue, which the Poller feeds
ConsumerRecord<K, V> record = poll();
if (record != null) {
//process the record
process(record);
//one more record completed
this.completed++;
//remember the last processed record
lastUncommittedRecord = record;
}
//hand the offset back to the Poller for committing
commitToQueue();
}
}
private Processor createProcessor(Config config) {
String processorClass = config.getDataProcessor();
if (StringUtils.isBlank(processorClass)) {
throw new SystemException("consumer.processor未配置");
}
Processor processor = ReflectionUtils.newInstance(processorClass.trim(), Processor.class,
Thread.currentThread().getContextClassLoader());
if (processor == null) {
throw new SystemException("系统无法加载类: " + processorClass);
}
return processor;
}
private void process(ConsumerRecord<K, V> record) {
try {
consumerRecordProcessor.process(record, config);
//shield the consumer thread from exceptions thrown by the custom processor
} catch (Exception e) {
System.out.println("processor error");
e.printStackTrace();
}
}
//enqueue the current consumption offset; the Poller performs the actual commit
private void commitToQueue() {
//nothing consumed yet, or the last consumed record's offset was already handed off
if (lastUncommittedRecord == null) {
return;
}
//true once another commitLength records have been consumed
boolean arrivedCommitLength = this.completed % commitLength == 0;
//check whether the commit interval has elapsed
LocalDateTime currentTime = LocalDateTime.now();
boolean arrivedTime = currentTime.isAfter(lastTime.plus(commitTime));
//if either the record count or the time threshold is reached, hand the offset to the Poller, which polls the offset queue non-blockingly and commits
if (arrivedCommitLength || arrivedTime) {
lastTime = currentTime;
long offset = lastUncommittedRecord.offset();
int partition = lastUncommittedRecord.partition();
String topic = lastUncommittedRecord.topic();
TopicPartition topicPartition = new TopicPartition(topic, partition);
logger.debug("partition: " + topicPartition + " submit offset: " + (offset + 1L) + " to consumer task");
Map<TopicPartition, OffsetAndMetadata> map = Collections.singletonMap(topicPartition, new OffsetAndMetadata(offset + 1L));
poller.putOffset(map);
//reset
lastUncommittedRecord = null;
}
}
public void stop() {
this.stop = true;
}
public boolean isStop() {
return stop;
}
private ConsumerRecord<K, V> poll() {
ConsumerRecord<K, V> record = null;
try {
record = queue.poll(50, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
return record;
}
//the Poller thread adds a record to this processing thread's queue
public boolean add(ConsumerRecord<K, V> record) {
boolean success = true;
try {
if (!stop) {
queue.put(record);
} else {
success = false;
}
} catch (InterruptedException e) {
success = false;
e.printStackTrace();
}
return success;
}
}
package com.lhever.modules.consumer.cogroup.service;
import com.lhever.modules.consumer.cogroup.manager.PollerManager;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.consumer.processor.Processor;
import com.lhever.modules.core.exception.IllegalParameterException;
import com.lhever.modules.core.exception.SystemException;
import com.lhever.modules.core.utils.ReflectionUtils;
import com.lhever.modules.core.utils.StringUtils;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
/**
 * The dispatcher of this project. Its responsibilities:
 * 1: reset the offsets of the topic's partitions in the cluster to the values given in the configuration file
 * 2: start the poller threads that poll records from the partitions (note: a poller thread only polls
 * records and commits offsets; processing is delegated to Consumer threads, and a Consumer thread's
 * concrete processing logic is supplied by a class implementing the Processor interface)
 * 3: start the PollerManager, which periodically restarts poller threads that died prematurely; in
 * addition, when consumption limits are configured and every partition has been consumed up to its
 * limit, the PollerManager lets the whole application terminate
 */
public class Dispatcher {
private volatile boolean stop = false;
private volatile CountDownLatch latch = null;
public Dispatcher() {
}
public void run(Config config) {
check(config);
//start a one-off consumer that sets the topic's offsets to the configured starting positions, then close it
commitOffsetConsumeFrom(config);
Phasers.SubscribePhaser commitInitialOffsetPhaser = new Phasers.SubscribePhaser();
int consumerCount = config.getKafkaConsumerCount();
for (int i = 0; i < consumerCount; i++) {
commitInitialOffsetPhaser.register();
}
Poller[] pollers = new Poller[consumerCount];
Thread[] pollerThreads = new Thread[consumerCount];
for (int i = 0; i < consumerCount; i++) {
Poller poller = new Poller(config, commitInitialOffsetPhaser);
Thread pollThread = new Thread(poller);
pollThread.setDaemon(true);
pollThread.setName("poller-" + i);
pollers[i] = poller;
pollerThreads[i] = pollThread;
}
PollerManager manager = new PollerManager(config, pollers, pollerThreads);
for(int i = 0; i < consumerCount; i++) {
pollers[i].setPollerManager(manager);
pollerThreads[i].start();
}
System.out.println("调度线程进入调度状态..............");
while (true) {
if (stop || manager.allCompleted()) {
System.out.println("任务完成或被销毁,bye bye !!!");
break;
}
System.out.println("调度线程进入检查状态..............");
manager.manage();
waitMillis(1000L * 30);
}
for(int i = 0; i < pollers.length; i++) {
pollers[i].clean();
}
}
private void waitMillis(long millis) {
System.out.println("调度线程进入休眠状态..............");
CountDownLatch latch = new CountDownLatch(1);
try {
latch.await(millis, TimeUnit.MILLISECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public void destroy() {
this.stop = true;
}
private Map<Integer, Map<TopicPartition, OffsetAndMetadata>> assignPartition(Config config) {
int consumerCount = config.getKafkaConsumerCount();
Map<Integer, Map<TopicPartition, OffsetAndMetadata>> pollerAndPartition =
new HashMap<Integer, Map<TopicPartition, OffsetAndMetadata>>();
List<Integer> partitionOffsets = config.getPartitionOffsets();
StringBuilder builder = new StringBuilder().append("\n\t");
int consumerIndex = 0;
for (int partionIndex = 0; partionIndex < partitionOffsets.size(); partionIndex++) {
Integer offset = partitionOffsets.get(partionIndex);
if (consumerIndex >= consumerCount) {
consumerIndex = 0;
}
Map<TopicPartition, OffsetAndMetadata> topicPartitionOffsetAndMetadataMap = pollerAndPartition.get(consumerIndex);
if (topicPartitionOffsetAndMetadataMap == null) {
topicPartitionOffsetAndMetadataMap = new HashMap<TopicPartition, OffsetAndMetadata>();
}
TopicPartition topicPartition = new TopicPartition(config.getTopicName(), partionIndex);
topicPartitionOffsetAndMetadataMap.put(topicPartition, new OffsetAndMetadata(offset));
pollerAndPartition.put(consumerIndex, topicPartitionOffsetAndMetadataMap);
builder.append("消费者: ").append(consumerIndex)
.append(" =>")
.append("分区: ").append(partionIndex)
.append(" : ")
.append("偏移量: ").append(offset).append("\n\t");
consumerIndex++;
}
System.out.println(builder.toString());
return pollerAndPartition;
}
private void commitOffsetConsumeFrom(Config config) {
OffsetReseter reseter = new OffsetReseter(config);
reseter.reset();
}
private void setStop() {
stop = true;
if (latch != null) {
latch.countDown();
}
}
private void check(Config config) {
int consumerCount = config.getKafkaConsumerCount();
if (consumerCount <= 0) {
throw new IllegalParameterException("kafka.consumer.count <= 0");
}
if (consumerCount > config.getPartitionOffsets().size()) {
throw new IllegalParameterException("error when consumer count > partition count");
}
checkProcessor(config);
}
private void checkProcessor(Config config) {
String processorClass = config.getDataProcessor();
if (StringUtils.isBlank(processorClass)) {
throw new SystemException("consumer.processor未配置");
}
Processor processor = ReflectionUtils.newInstance(processorClass.trim(), Processor.class,
Thread.currentThread().getContextClassLoader());
if (processor == null) {
throw new SystemException("系统无法加载类: " + processorClass);
}
}
}
package com.lhever.modules.consumer.cogroup.service;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.consumer.utils.Helper;
import com.lhever.modules.core.exception.SystemException;
import com.lhever.modules.core.utils.CollectionUtils;
import com.lhever.modules.core.utils.ParseUtil;
import com.lhever.modules.core.utils.StringUtils;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* ./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group aaa-bbb-ccc --topic HIK_SMART_METADATA_TOPIC --zookeeper hdfa70211:2181,hdfa167:2181,hdfa165:2181
Resets the offsets of the specified partitions to the configured values.
*/
public class OffsetReseter {
private Config config;
private Map<TopicPartition, OffsetAndMetadata> partionOffsetsConsumeFrom;
public OffsetReseter(Config config) {
this.config = config;
this.partionOffsetsConsumeFrom = getPartionOffsetsConsumeFrom(config);
}
public boolean reset() {
KafkaConsumer consumer = null;
try {
consumer = new KafkaConsumer(config.getKafkaConsumerProp());
if (CollectionUtils.mapNotEmpty(partionOffsetsConsumeFrom)) {
consumer.commitSync(partionOffsetsConsumeFrom);
StringUtils.println("重置topic: {}, group: {}偏移量成功, 偏移量信息: =>{}",
config.getTopicName(), config.getGroupId(), getInitialOffsetsInfo(config));
}
} catch (Exception e) {
String s = ParseUtil.parseString("重置topic: {}, group: {}偏移量失败, 偏移量信息: =>{}, " +
"请您稍等一会再重启, 或者修改配置项=> group.id 为另一个不同的值 ",
config.getTopicName(), config.getGroupId(), getInitialOffsetsInfo(config));
throw new SystemException(s, e);
} finally {
Helper.close(consumer);
}
silent(config);
return true;
}
private String getInitialOffsetsInfo(Config config) {
if (config.getPartitionOffsets() == null) {
return "";
}
List<Integer> partitionOffsets = config.getPartitionOffsets();
StringBuffer builder = new StringBuffer();
int size = partitionOffsets.size();
builder.append("\n\t").append("分区: => ");
for(int i = 0; i < size; i++) {
builder.append(String.format("%11d", i));
}
builder.append("\n\t").append("偏移: => ");
for(int i = 0; i < size; i++) {
builder.append(String.format("%11d", partitionOffsets.get(i)));
}
builder.append("\n\t");
return builder.toString();
}
private void silent(Config config) {
System.out.println("wait reset cunsumer session timeout....");
try {
Thread.sleep((config.getSessionTimeoutMs() + config.getAutoCommitIntervalMs()) / 10);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("reset cunsumer close totally, system continue...");
}
private Map<TopicPartition, OffsetAndMetadata> getPartionOffsetsConsumeFrom(Config config) {
List<Integer> partitionOffsets = config.getPartitionOffsets();
if (CollectionUtils.isEmpty(partitionOffsets)) {
return null;
}
Map<TopicPartition, OffsetAndMetadata> partionOffsetsConsumeFrom =
new HashMap<TopicPartition, OffsetAndMetadata>();
for (int i = 0; i < partitionOffsets.size(); i++) {
int offsetPosition = partitionOffsets.get(i).intValue();
if (offsetPosition >= 0) {
TopicPartition topicPartition = new TopicPartition(config.getTopicName(), i);
partionOffsetsConsumeFrom.put(topicPartition, new OffsetAndMetadata(offsetPosition));
}
}
return partionOffsetsConsumeFrom;
}
}
package com.lhever.modules.consumer.cogroup.service;
import com.lhever.modules.core.utils.StringUtils;
import com.lhever.modules.core.utils.ThreadUtils;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.Collection;
/**
 * Invoked when Kafka partitions rebalance: if one of this poller's partitions is revoked,
 * the corresponding Consumer thread is stopped; if the poller is assigned a new partition,
 * it also starts polling records from that partition.
 */
public class PartitionRebalanceListener implements ConsumerRebalanceListener {
private Poller poller;
//the owning poller must be passed in, otherwise the listener cannot reach the outer consumer
public PartitionRebalanceListener(Poller poller) {
this.poller = poller;
}
@Override
public void onPartitionsRevoked(Collection<TopicPartition> collection) {
//flush pending offsets, essentially via consumer.commitSync(toCommit)
poller.commitOffsetUrgently(true);
for (TopicPartition partition : collection) {
System.out.println("onPartitionsRevoked: " + Thread.currentThread().getName() + " 分区: " + partition.partition() + "被撤销!!!");
Consumer consumer = poller.removeByPartition(partition);
if (consumer != null) {
consumer.stop();
Thread thread = poller.removeThreadByPartition(partition);
if (thread != null) {
//wait for the revoked partition's processing thread to finish
ThreadUtils.joinThread(thread, 10 * 60 * 1000L);
StringUtils.println("thread: {} has been killed by thread: {}",
thread.getName(), Thread.currentThread().getName());
}
}
}
}
@Override
public void onPartitionsAssigned(Collection<TopicPartition> collection) {
//after a rebalance: for each newly assigned partition, fetch its latest committed offset and seek to it
for (TopicPartition partition : collection) {
boolean success = seekPartitionWithRetry(poller, partition);
if (!success) {
StringUtils.println("onPartitionsAssigned: {} 分配到: (分区: {} <-> 偏移量: {})",
Thread.currentThread().getName(), partition.partition(), null);
}
}
}
private boolean seekPartitionWithRetry(Poller poller, TopicPartition partition) {
//one initial attempt plus up to three retries
int attempts = 4;
while (attempts-- > 0) {
try {
seekPartition(poller, partition);
return true;
} catch (Exception e) {
e.printStackTrace();
}
}
return false;
}
private void seekPartition(Poller poller, TopicPartition partition) {
//fetch the committed offset by asking the group coordinator
OffsetAndMetadata offset = poller.getConsumer().committed(partition);
if (offset == null) {
//no offset has ever been committed for this partition; keep the current position
return;
}
//seek so the next poll starts from this offset
poller.getConsumer().seek(partition, offset.offset());
StringUtils.println("onPartitionsAssigned: {} assigned: (partition: {} <-> offset: {})",
Thread.currentThread().getName(), partition.partition(), offset.offset());
}
}
package com.lhever.modules.consumer.cogroup.service;
import java.util.concurrent.Phaser;
public class Phasers {
public static class SubscribePhaser extends Phaser {
//callback invoked once every registered party has arrived at the end of a phase
@Override
protected boolean onAdvance(int phase, int registeredParties) {
switch (phase) {
case 0:
return initialOffsetCommitted();
default:
return true;
}
}
private boolean initialOffsetCommitted() {
System.out.println("all consumers have subscribed to the topic");
return true;
}
}
}
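The SubscribePhaser is how the Dispatcher holds all pollers at the starting line: each poller is registered as a party, subscribes to the topic, and then calls arriveAndAwaitAdvance(), so none of them proceeds to polling until the last one has arrived. A minimal standalone sketch of that rendezvous (the party count is invented for illustration):
Phasers.SubscribePhaser phaser = new Phasers.SubscribePhaser();
int pollerCount = 3;
for (int i = 0; i < pollerCount; i++) {
    phaser.register();
}
for (int i = 0; i < pollerCount; i++) {
    new Thread(() -> {
        //... subscribe to the topic here ...
        phaser.arriveAndAwaitAdvance(); //blocks until all 3 parties have arrived
        //... start polling here ...
    }).start();
}
Because onAdvance returns true for phase 0, the phaser terminates after this first rendezvous, which is all the Dispatcher needs from it.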
package com.lhever.modules.consumer.cogroup.service;
import com.lhever.modules.consumer.cogroup.manager.PollerManager;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.consumer.utils.Helper;
import com.lhever.modules.core.exception.IllegalParameterException;
import com.lhever.modules.core.utils.StringUtils;
import com.lhever.modules.core.utils.ThreadUtils;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
public class Poller<K, V> implements Runnable {
private LinkedBlockingQueue<Map<TopicPartition, OffsetAndMetadata>> offsetQueue = new LinkedBlockingQueue<>();
private ConcurrentHashMap<TopicPartition, Consumer> partitionProcessorMap = new ConcurrentHashMap<>();
private ConcurrentHashMap<TopicPartition, Thread> partitionThreadMap = new ConcurrentHashMap<>();
private Config config;
private KafkaConsumer<K, V> consumer;
private String topicName;
private volatile boolean stop = false;
private Phasers.SubscribePhaser commitInitialOffsetPhaser;
private List<Boolean> pollStateQueue = new ArrayList<Boolean>();
private PollerManager pollerManager;
public Poller(Config config, Phasers.SubscribePhaser commitInitialOffsetPhaser) {
this.config = config;
this.topicName = config.getTopicName();
this.consumer = new KafkaConsumer(config.getKafkaConsumerProp());
this.commitInitialOffsetPhaser = commitInitialOffsetPhaser;
}
public void setPollerManager(PollerManager pollerManager) {
this.pollerManager = pollerManager;
}
public PollerManager getPollerManager() {
return pollerManager;
}
@Override
public void run() {
subscribeTopic(consumer, topicName);
if (commitInitialOffsetPhaser != null) {
commitInitialOffsetPhaser.arriveAndAwaitAdvance();
}
//loop until the stop flag is set, which signals that the outside world wants this task to end
try {
while (!stop) {
doRun();
}
} finally {
Helper.close(consumer);
}
}
private void doRun() {
//check (non-blockingly) whether this consumer has offsets waiting to be committed
Map<TopicPartition, OffsetAndMetadata> toCommit = offsetQueue.poll();
if (toCommit != null && toCommit.size() > 0) {
commitOffset(toCommit, consumer, true);
}
//poll for at most 100ms
ConsumerRecords<K, V> records = poll();
if (records == null) {
return;
}
/*if (records.count() > 0) {
StringUtils.println("线程: {} poll records size: " + records.count(), Thread.currentThread().getName());
}*/
for (final ConsumerRecord<K, V> record : records) {
String topic = record.topic();
int partition = record.partition();
long offset = record.offset();
TopicPartition topicPartition = new TopicPartition(topic, partition);
Consumer consumer = null;
//look up the Consumer responsible for this partition
//consumeRecord(consumer, record);
List<Integer> fetchLength = config.getFetchLength();
if ((fetchLength != null) && (partition <= fetchLength.size() - 1)) {
int index = fetchLength.get(partition);
if (index < 0) {//如果没有指定消费上限
consumer = findByPartition(topicPartition, true);
consumeRecord(consumer, record);
} else if (offset >= index) {//如果已经消费到了指定的偏移量
if (pollerManager != null) {
//标记对该分区的消费已经完成
pollerManager.updateState(partition, true);
consumer = findByPartition(topicPartition, false);
if (consumer != null) {
consumer.stop();
}
}
continue;
} else {
consumer = findByPartition(topicPartition, true);
consumeRecord(consumer, record);
}
} else {
consumer = findByPartition(topicPartition, true);
consumeRecord(consumer, record);
}
}
}
private boolean consumeRecord(Consumer consumer, ConsumerRecord record) {
if (consumer == null || record == null) {
return false;
}
//enqueue the record onto the target Consumer's task queue for asynchronous processing
boolean added = consumer.add(record);
if (!added) {
StringUtils.println("线程: {}提交记录给processor失败,记录: {}", Thread.currentThread().getName(), record);
}
return added;
}
/**
 * Looks up the Consumer handling the given partition.
 * @param topicPartition the topic partition
 * @param createIfNULL whether to create a new Consumer when none exists for the partition; true: create, false: don't
 * @return
 */
private Consumer findByPartition(TopicPartition topicPartition, boolean createIfNULL) {
Consumer consumer = partitionProcessorMap.get(topicPartition);
//if consumption of this partition has not started yet, there is no task for it in the map
if (consumer == null && createIfNULL) {
consumer = createByPartition(topicPartition);
}
return consumer;
}
private Consumer createByPartition(TopicPartition topicPartition) {
//create a new processing task and thread, then store them in the corresponding maps
Consumer consumer = new Consumer(config, this);
Consumer old = partitionProcessorMap.putIfAbsent(topicPartition, consumer);
if (old != null) {
//handles repeated rebalances where this poller lost the partition and then got it back
if (old.isStop()) {
startConsumer(topicPartition, consumer);
} else {
consumer = old;
}
} else {
startConsumer(topicPartition, consumer);
}
return consumer;
}
public void commitAndStop() {
commitOffsetUrgently(true);
stop();
}
public void stop() {
this.stop = true;
}
public boolean isStop() {
return stop;
}
private ConsumerRecords<K, V> poll() {
ConsumerRecords<K, V> records = null;
try {
records = consumer.poll(100);
pollStateQueue.add(true);
} catch (Exception e) {
pollStateQueue.add(false);
e.printStackTrace();
}
checkState();
return records;
}
private void checkState() {
int threshold = 500;
if (pollStateQueue.size() > threshold) {
int lastIndex = pollStateQueue.size() - 1;
boolean health = false;
int j = threshold;
while (j > 0 && !health) {
health = health || pollStateQueue.get(lastIndex);
lastIndex--;
j--;
}
if (j == 0 && !health) {
StringUtils.println("线程: {}最近的: {}次poll全部失败, 标记线程终止",
Thread.currentThread().getName(), threshold);
commitOffsetUrgently(true);
stop();
killConsumers();
}
pollStateQueue.remove(0);
}
while (pollStateQueue.size() > 0 && pollStateQueue.get(0)) {
pollStateQueue.remove(0);
}
}
private void startConsumer(TopicPartition topicPartition, Consumer consumer) {
partitionProcessorMap.put(topicPartition, consumer);
Thread thread = new Thread(consumer);
String name = "processor-for-partition-" + topicPartition.partition();
thread.setName(name);
System.out.println("start process Thread: " + name);
partitionThreadMap.put(topicPartition, thread);
thread.start();
}
private void commitOffset(Map<TopicPartition, OffsetAndMetadata> commitOffsets, KafkaConsumer consumer, boolean sync) {
//commit the given offsets, synchronously or asynchronously depending on the sync flag
if (commitOffsets == null || commitOffsets.size() == 0) {
return;
}
StringBuilder builder = new StringBuilder();
try {
if (sync) {
consumer.commitSync(commitOffsets);
} else {
consumer.commitAsync(commitOffsets, new OffsetCommitCallback() {
@Override
public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
if (exception != null) {
exception.printStackTrace();
System.out.println(Thread.currentThread().getName() + " 异步提交偏移量报错");
}
}
});
}
Set<TopicPartition> topicPartitions = commitOffsets.keySet();
for (TopicPartition topicPartition : topicPartitions) {
builder.append("(").append(topicPartition.partition()).append(" : ");
OffsetAndMetadata offsetAndMetadata = commitOffsets.get(topicPartition);
builder.append(offsetAndMetadata.offset()).append(") ");
}
System.out.println(Thread.currentThread().getName() + " 提交偏移量完毕: " + builder.toString());
} catch (Exception e) {
System.out.println(Thread.currentThread().getName() + " 提交偏移量报错!!!" + builder.toString());
}
}
/**
 * Drains the offsets backed up in the queue and commits them in one go.
 */
public void commitOffsetUrgently(boolean commitAll) {
Map<TopicPartition, OffsetAndMetadata> toCommit = null;
int count = (commitAll ? Integer.MAX_VALUE : 10000);
while((toCommit = offsetQueue.poll()) != null && (toCommit.size() > 0) && count > 0) {
commitOffset(toCommit, consumer, false);
count--;
}
}
private void subscribeTopic(KafkaConsumer consumer, String... topics) {
if (consumer == null) {
throw new IllegalParameterException("consumer is null");
}
if (topics == null || topics.length == 0) {
throw new IllegalParameterException("topic unknown, subscribe fail");
}
ArrayList<String> topicList = new ArrayList<>();
for (String topic : topics) {
topicList.add(topic);
}
try {
consumer.subscribe(topicList, new PartitionRebalanceListener(this));
} catch (Exception e) {
StringUtils.println("thread: {} failed to subscribe to topic: {}, stopping the thread", Thread.currentThread().getName(), config.getTopicName());
stop();
//do not fall through to the success message below
return;
}
System.out.println(Thread.currentThread().getName() + " successfully subscribed to topic: " + topicName);
}
public KafkaConsumer getConsumer() {
return consumer;
}
public Consumer removeByPartition(TopicPartition topicPartition) {
return partitionProcessorMap.remove(topicPartition);
}
public Thread removeThreadByPartition(TopicPartition topicPartition) {
return partitionThreadMap.remove(topicPartition);
}
public void putOffset(Map<TopicPartition, OffsetAndMetadata> offset) {
try {
offsetQueue.put(offset);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
public void clean() {
commitOffsetUrgently(true);
killConsumers();
}
public void killConsumers() {
Set<Map.Entry<TopicPartition, Consumer>> entries = partitionProcessorMap.entrySet();
List<Thread> remainedKilled = new ArrayList<Thread>();
for (Map.Entry<TopicPartition, Consumer> entry : entries) {
TopicPartition key = entry.getKey();
if (key != null) {
Consumer consumer = partitionProcessorMap.remove(key);
if (consumer != null) {
consumer.stop();
}
Thread thread = partitionThreadMap.remove(key);
if (thread != null) {
remainedKilled.add(thread);
}
}
}
//wait for the stopped partitions' processing threads to finish
for (Thread thread : remainedKilled) {
ThreadUtils.joinThread(thread, 10 * 60 * 1000L);
StringUtils.println("thread: {} has been killed by thread: {}", thread.getName(), Thread.currentThread().getName());
}
}
}
package com.lhever.modules.consumer.config;
import com.lhever.modules.core.exception.IllegalParameterException;
import com.lhever.modules.core.utils.ArrayUtils;
import com.lhever.modules.core.utils.CollectionUtils;
import com.lhever.modules.core.utils.PropertiesFile;
import com.lhever.modules.core.utils.StringUtils;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
public class Config {
private PropertiesFile props;
private String bootstrapServers;
private String topicName;
private String groupId;
private List<Integer> partitionOffsets;
private Properties kafkaConsumerProp;
private int kafkaConsumerCount;
private boolean enableAutoCommit;
private int autoCommitIntervalMs;
private int sessionTimeoutMs;
private int topicType;
private String autoOffsetReset;
private String dataProcessor;
private int maxPollRecords;
private List<Integer> fetchLength;
private long commitLength;
public Config(PropertiesFile props) {
this.props = props;
this.bootstrapServers = props.getStringProperty("bootstrap.servers");
this.topicName = props.getStringProperty("topic.name");
this.groupId = props.getStringProperty("group.id");
this.kafkaConsumerCount = props.getIntProperty("kafka.consumer.count", 1);
this.enableAutoCommit = props.getBooleanProperty("enable.auto.commit", true);
this.autoCommitIntervalMs = props.getIntProperty("auto.commit.interval.ms", 1000);
this.sessionTimeoutMs = props.getIntProperty("session.timeout.ms", 30000);
this.topicType = props.getIntProperty("topic.type", 2);
this.autoOffsetReset = props.getStringProperty("auto.offset.reset");
this.dataProcessor = props.getStringProperty("consumer.data.processor");
this.maxPollRecords = props.getIntProperty("max.poll.records", 50);
this.commitLength = props.getLongProperty("commit.length", 20L);
this.kafkaConsumerProp = getKafkaConsumerProp(props);
this.partitionOffsets = getPartitionOffsets(props);
this.fetchLength = getFetchLength(props);
if (CollectionUtils.isNotEmpty(this.fetchLength)) {
//with per-partition limits, poll and commit one record at a time so the bound cannot be overshot
this.maxPollRecords = 1;
this.commitLength = 1;
//kafkaConsumerProp was built above with the original value, so propagate the correction
this.kafkaConsumerProp.put("max.poll.records", maxPollRecords);
StringUtils.println("since {} is set, {} is forced to: {}", "fetch.length", "maxPollRecords", maxPollRecords);
StringUtils.println("since {} is set, {} is forced to: {}", "fetch.length", "commitLength", commitLength);
}
}
private List<Integer> getPartitionOffsets(PropertiesFile props) {
String offsets = props.getStringProperty("partitions.offsets");
if (StringUtils.isBlank(offsets)) {
return null;
}
offsets = offsets.trim();
if (offsets.length() <= 2) {
return null;
}
String[] ints = ArrayUtils.splitByComma(offsets.substring(1, offsets.length() - 1));
Integer[] integers = ArrayUtils.toIntegerArray(ints);
List<Integer> offsetList = Arrays.asList(integers);
return offsetList;
}
private List<Integer> getFetchLength(PropertiesFile props) {
String fetch = props.getStringProperty("fetch.length");
if (StringUtils.isBlank(fetch)) {
return null;
}
fetch = fetch.trim();
if (fetch.length() <= 2) {
return null;
}
String[] ints = ArrayUtils.splitByComma(fetch.substring(1, fetch.length() - 1));
Integer[] integers = ArrayUtils.toIntegerArray(ints);
List<Integer> fetchLength = Arrays.asList(integers);
for (int i = 0; i < fetchLength.size(); i++) {
int len = fetchLength.get(i);
if (len < 0) {
continue;
} else {
fetchLength.set(i, partitionOffsets.get(i) + len);
}
}
return fetchLength;
}
private Properties getKafkaConsumerProp(PropertiesFile file) {
Properties props = new Properties();
if (StringUtils.isBlank(bootstrapServers)) {
throw new IllegalParameterException("bootstrap.servers empty");
}
props.put("bootstrap.servers", bootstrapServers);
if (StringUtils.isBlank(groupId)) {
throw new IllegalParameterException("group.id empty");
}
props.put("group.id", groupId);
if (StringUtils.isBlank(autoOffsetReset)) {
throw new IllegalParameterException("auto.offset.reset empty");
}
props.put("auto.offset.reset", autoOffsetReset);
if (StringUtils.isBlank(topicName)) {
throw new IllegalParameterException("topic empty");
}
props.put("topic.name", topicName);
if (topicType == 2) {
props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
} else {
props.put("key.deserializer", StringDeserializer.class.getName());
props.put("value.deserializer", StringDeserializer.class.getName());
}
props.put("enable.auto.commit", enableAutoCommit);
props.put("auto.commit.interval.ms", autoCommitIntervalMs);
props.put("session.timeout.ms", sessionTimeoutMs);
props.put("max.poll.records", maxPollRecords);
return props;
}
public PropertiesFile getProps() {
return props;
}
public String getBootstrapServers() {
return bootstrapServers;
}
public String getTopicName() {
return topicName;
}
public String getGroupId() {
return groupId;
}
public List<Integer> getPartitionOffsets() {
return partitionOffsets;
}
public Properties getKafkaConsumerProp() {
return kafkaConsumerProp;
}
public int getKafkaConsumerCount() {
return kafkaConsumerCount;
}
public int getAutoCommitIntervalMs() {
return autoCommitIntervalMs;
}
public int getSessionTimeoutMs() {
return sessionTimeoutMs;
}
public String getDataProcessor() {
return dataProcessor;
}
public int getMaxPollRecords() {
return maxPollRecords;
}
public List<Integer> getFetchLength() {
return fetchLength;
}
public long getCommitLength() {
return commitLength;
}
}
package com.lhever.modules.consumer.processor.impl;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.consumer.processor.Processor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
/**
* Default record-processing implementation
*/
public class DefaultProcessor implements Processor<String, String> {
@Override
public void process(ConsumerRecord<String, String> record, Config config) {
System.out.println(Thread.currentThread().getName() + " get data: " + record);
}
}
package com.lhever.modules.consumer.processor;
import com.lhever.modules.consumer.config.Config;
import org.apache.kafka.clients.consumer.ConsumerRecord;
/**
 * Implementations of this interface define the actual processing logic for the records
 * the poller retrieves.
 * @author lihong10 2018/10/13 12:34:00
 */
public interface Processor<K, V> {
public void process(ConsumerRecord<K, V> record, Config config);
}
package com.lhever.modules.consumer.setup;
import com.lhever.modules.consumer.cogroup.service.Dispatcher;
import com.lhever.modules.consumer.config.Config;
import com.lhever.modules.core.exception.SystemException;
import com.lhever.modules.core.utils.PropertiesFile;
/**
* Starter that boots the application from a properties file
*/
public class ResourceFileStarter extends Starter {
@Override
public void run(String... args) {
Dispatcher dispatcher = new Dispatcher();
dispatcher.run(getConfig(args[0]));
}
private Config getConfig(String path) {
Config config = null;
Exception ex = null;
try {
config = new Config(new PropertiesFile(path, true));
} catch (Exception e) {
ex = e;
}
if (ex != null || config == null) {
throw new SystemException("无法获取配置信息", ex);
}
return config;
}
}
package com.lhever.modules.consumer.setup;
public abstract class Starter {
public abstract void run(String... args) ;
}
package com.lhever.modules.consumer.setup;
import com.lhever.modules.consumer.cogroup.parse.YmlParser;
import com.lhever.modules.consumer.cogroup.service.Dispatcher;
import com.lhever.modules.consumer.config.Config;
import java.util.List;
/**
* Starter that boots the application from a yml file
*/
public class YmlStarter extends Starter {
@Override
public void run(String... args) {
YmlParser parser = new YmlParser();
List<Config> configs = parser.getConfigs(args[0], true);
Dispatcher dispatcher = new Dispatcher();
dispatcher.run(configs.get(0));
}
}
package com.lhever.modules.consumer.utils;
import org.apache.kafka.clients.consumer.KafkaConsumer;
public class Helper {
public static void close(KafkaConsumer consumer) {
if (consumer == null) {
return;
}
try {
consumer.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
The utility classes (Utils) that the code above depends on are pasted together below:
package com.lhever.modules.core.concurrent.terminatable;
public abstract class AbstractTerminatableThread extends Thread implements Terminatable {
// token object marking whether the thread should terminate
public final TerminationToken terminationToken;
public AbstractTerminatableThread() {
this(new TerminationToken());
}
/**
* @param terminationToken the termination-flag instance shared between threads
*/
public AbstractTerminatableThread(TerminationToken terminationToken) {
super();
this.setDaemon(true);
this.terminationToken = terminationToken;
terminationToken.register(this);
}
/**
* Implemented by subclasses: the thread's processing logic.
*
* @throws Exception
*/
protected abstract void doRun() throws Exception;
/**
* Implemented by subclasses: cleanup actions to run after the thread stops.
*
* @param cause
*/
protected void doCleanup(Exception cause) {
// do nothing
}
/**
* Implemented by subclasses: actions required to stop the thread.
*/
protected void doTerminate() {
// do nothing
}
@Override
public void run() {
Exception ex = null;
try {
for (; ; ) {
// check the termination flag before running the processing logic
if (terminationToken.isToShutdown()
&& terminationToken.reservations.get() <= 0) {
break;
}
doRun();
}
} catch (Exception e) {
// lets the thread exit in response to an interrupt call
ex = e;
System.out.println("线程[" + this.getName() + "]发生异常");
e.printStackTrace();
} finally {
try {
doCleanup(ex);
} finally {
terminationToken.notifyThreadTermination(this);
}
}
}
@Override
public void interrupt() {
terminate();
}
/*
* Requests that the thread stop.
*/
@Override
public void terminate() {
terminationToken.setToShutdown(true);
try {
doTerminate();
} finally {
// if there are no pending tasks, try to force-terminate the thread
if (terminationToken.reservations.get() <= 0) {
super.interrupt();
}
}
}
public void terminate(boolean waitUtilThreadTerminated) {
terminate();
if (waitUtilThreadTerminated) {
try {
this.join();
} catch (InterruptedException e) {
// Thread.currentThread().interrupt();
System.out.println("线程" + this.getName() + "尚未处理完剩余任务便被终止");
e.printStackTrace();
} catch (Exception ex)
{
System.out.println("停止线程" + this.getName() + "时发生异常");
ex.printStackTrace();
}
}
}
}
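To show how AbstractTerminatableThread and TerminationToken fit together, here is a minimal usage sketch (the class is invented for illustration and is not part of the project): a worker drains a queue, and every submitted task takes a reservation, so terminate() only force-interrupts the thread once the backlog is empty.
package com.example.demo;
import com.lhever.modules.core.concurrent.terminatable.AbstractTerminatableThread;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class PrintingThread extends AbstractTerminatableThread {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    public void submit(String msg) {
        queue.offer(msg);
        //one more pending task: blocks forced termination until it is processed
        terminationToken.reservations.incrementAndGet();
    }
    @Override
    protected void doRun() throws Exception {
        String msg = queue.take();
        System.out.println(msg);
        //task done: release the reservation
        terminationToken.reservations.decrementAndGet();
    }
}
Calling terminate() sets the shutdown flag; once reservations reaches zero, the base class interrupts the blocking take() and the run loop exits.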
package com.lhever.modules.core.concurrent.terminatable;
public interface Terminatable {
void terminate();
}
package com.lhever.modules.core.concurrent.terminatable;
import java.lang.ref.WeakReference;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
/**
* Thread termination token.
*/
public class TerminationToken {
// volatile guarantees memory visibility of this flag without explicit locking
protected volatile boolean toShutdown = false;
public final AtomicInteger reservations = new AtomicInteger(0);
/*
 * When several terminatable thread instances share one TerminationToken, this queue
 * records those threads so that, with as little locking as possible, they can all
 * be stopped together.
 */
private final Queue<WeakReference<Terminatable>> coordinatedThreads;
public TerminationToken() {
coordinatedThreads = new ConcurrentLinkedQueue<WeakReference<Terminatable>>();
}
public boolean isToShutdown() {
return toShutdown;
}
protected void setToShutdown(boolean toShutdown) {
this.toShutdown = toShutdown;
}
protected void register(Terminatable thread) {
coordinatedThreads.add(new WeakReference<Terminatable>(thread));
}
/**
* Notifies this TerminationToken that one of the terminatable threads sharing it has stopped,
* so that it can stop the other, not-yet-stopped threads.
*
* @param thread the thread that has stopped
*/
protected void notifyThreadTermination(Terminatable thread) {
WeakReference<Terminatable> wrThread;
Terminatable otherThread;
while (null != (wrThread = coordinatedThreads.poll())) {
otherThread = wrThread.get();
if (null != otherThread && otherThread != thread) {
otherThread.terminate();
}
}
}
}
package com.lhever.modules.core.exception;
/**
* @description Authorization exception: thrown, for example, when access to a resource is denied or an API is called without a token
* @author lihong10 2018/4/16 11:34:00
*/
public class AuthorizationException extends BaseRuntimeException {
public AuthorizationException(int errorCode, String msg) {
super(errorCode, msg);
}
public AuthorizationException(int errorCode, String msg, Throwable cause) {
super(errorCode, msg, cause);
}
public AuthorizationException(String msg) {
super(msg);
}
public AuthorizationException(String msg, Throwable cause) {
super(msg, cause);
}
}
package com.lhever.modules.core.exception;
/**
* @description Base class of all runtime exceptions
* @author lihong10 2018/4/16 13:44:00
*/
public class BaseRuntimeException extends RuntimeException {
/**
* serialization ID
*/
private static final long serialVersionUID = 7830353921973771800L;
/*
* error code
*/
protected Integer errorCode;
/**
* Creates a new BaseRuntimeException instance
* @param errorCode
* @param msg
*/
public BaseRuntimeException(int errorCode, String msg) {
super(msg);
this.errorCode = errorCode;
}
/**
* Creates a new BaseRuntimeException instance
* @param errorCode
* @param msg
* @param cause
*/
public BaseRuntimeException(int errorCode, String msg, Throwable cause) {
super(msg, cause);
this.errorCode = errorCode;
}
public BaseRuntimeException(String msg, Throwable cause) {
super(msg, cause);
}
public BaseRuntimeException(String msg) {
super(msg);
}
public Integer getErrorCode() {
return errorCode;
}
public void setErrorCode(Integer errorCode) {
this.errorCode = errorCode;
}
}
package com.lhever.modules.core.exception;
/**
* @description Business exception: thrown for business-level problems such as a missing user, an expired identity, or insufficient payment
* @author lihong10 2018/4/16 11:34:00
*/
public class BusinessException extends BaseRuntimeException {
public BusinessException(int errorCode, String msg) {
super(errorCode, msg);
}
public BusinessException(int errorCode, String msg, Throwable cause) {
super(errorCode, msg, cause);
}
public BusinessException(String msg) {
super(msg);
}
public BusinessException(String msg, Throwable cause) {
super(msg, cause);
}
}
package com.lhever.modules.core.exception;
/**
* @description Illegal-parameter exception: thrown when a parameter is empty or badly formatted
* @author lihong10 2018/4/16 11:34:00
*/
public class IllegalParameterException extends BaseRuntimeException {
public IllegalParameterException(int errorCode, String msg) {
super(errorCode, msg);
}
public IllegalParameterException(int errorCode, String msg, Throwable cause) {
super(errorCode, msg, cause);
}
public IllegalParameterException(String msg, Throwable cause) {
super(msg, cause);
}
public IllegalParameterException(String msg) {
super(msg);
}
}
package com.lhever.modules.core.exception;
/**
* @description System or component exception: thrown, for example, when ES, HBase or Kafka configuration cannot be read at startup or a client fails to initialize
* @author lihong10 2018/4/16 11:34:00
*/
public class SystemException extends BaseRuntimeException {
public SystemException(int errorCode, String msg) {
super(errorCode, msg);
}
public SystemException(int errorCode, String msg, Throwable cause) {
super(errorCode, msg, cause);
}
public SystemException(String msg) {
super(msg);
}
public SystemException(String msg, Throwable cause) {
super(msg, cause);
}
}
package com.lhever.modules.core.utils;
import com.lhever.modules.core.exception.IllegalParameterException;
import java.lang.reflect.Array;
public class ArrayUtils {
public static boolean isEmpty(final Object[] array) {
return getLength(array) == 0;
}
public static int getLength(final Object array) {
if (array == null) {
return 0;
}
return Array.getLength(array);
}
public static String[] splitByComma(String string) {
String[] strings = splitBy(string, ",");
return strings;
}
public static String[] splitBy(String string, String regex) {
if (StringUtils.isBlank(string)) {
return new String[0];
}
return string.split(regex);
}
public static Integer[] toIntegerArray(String[] ints) {
if (ints == null || ints.length == 0) {
throw new IllegalParameterException("array empty");
}
Integer[] integers = new Integer[ints.length];
for (int i = 0; i < ints.length; i++) {
if (StringUtils.isBlank(ints[i])) {
throw new IllegalParameterException("array element(index: + " + i + ") empty");
}
integers[i] = Integer.parseInt(ints[i].trim());
}
return integers;
}
}
package com.lhever.modules.core.utils;
import java.lang.reflect.Field;
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.*;
/**
* Converts an object into a map and returns it.
*
* @author lihong10 2017-10-31 11:04:10 AM
* @version v1.0
*/
public class Bean2MapUtil {
/**
* Stores all of a bean's fields, including nulls, into a map.
*
* @author lihong10 2015-5-7 3:18:56 PM
* @param bean bean
* @return object map
*/
@SuppressWarnings("unused")
public static Map<String, Object> toMap(Object bean) {
Field[] fields = bean.getClass().getDeclaredFields();
List<String> includedAndAlias = new ArrayList<String>();
int i = 0;
for (Field f : fields) {
if ("this$0".equals(f.getName())) {//内部类默认拥有此属性, 不应该写入map
continue;
}
includedAndAlias.add(f.getName());
}
return toMap(bean, includedAndAlias.toArray(new String[includedAndAlias.size()]));
}
/**
* Converts a Java bean into a Map<String,Object>; the included fields and their aliases can be specified<br>
* e.g. Map<String, Object> objectMap = toMap(object, new String[]{"id:userId","name","age"})<br>
* maps the id, name and age fields of object into the map as key-value pairs, aliasing id to userId<br>
*
* @author lihong10 2015-4-20 7:45:38 PM
* @param bean java bean
* @param includedAndAlias fields to include[:alias]
* @return object map
*/
public static Map<String, Object> toMap(Object bean, String[] includedAndAlias) {
Field[] fields = bean.getClass().getDeclaredFields();
Map<String, Object> map = new HashMap<String, Object>();
for (String nameAndAlias : includedAndAlias) {
String[] names = nameAndAlias.split(":");
String fieldName = names[0];
String mapName = fieldName;
if (names.length > 1) {
mapName = names[1];
}
boolean hasField = false;
Field target = null;
for (Field f : fields) {
if (fieldName.equals(f.getName())) {
hasField = true;
target = f;
break;
}
}
if (!hasField) {
throw new RuntimeException(String.format("there is no field named '%s' declared in %s", fieldName, bean.getClass().getName()));
}
target.setAccessible(true);
try {
map.put(mapName, target.get(bean));
} catch (IllegalArgumentException | IllegalAccessException e) {
System.out.println(e);
}
}
return map;
}
/**
* Converts a list of entities into a list of maps.
*
* @author lihong10 2015-6-30 4:38:28 PM
* @param list
* @param includedAndAlias
* @return List<Map<String, Object>>
*/
@SuppressWarnings("rawtypes")
public static List<Map<String, Object>> toMap(List list, String[] includedAndAlias) {
List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
for (Object obj : list) {
data.add(toMap(obj, includedAndAlias));
}
return data;
}
/**
* Converts a list of entities into a list of maps.
*
* @author lihong10 2015-7-22 11:12:29 AM
* @param list
* @return List<Map<String, Object>>
*/
@SuppressWarnings("rawtypes")
public static List<Map<String, Object>> toMap(List list) {
List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
for (Object obj : list) {
data.add(toMap(obj));
}
return data;
}
/**
* Converts a Java bean into a Map<String,Object>, excluding the given fields<br>
* e.g. toMapByDifferenceSet(object, new String[]{"id"}) maps every declared field of object except id<br>
* (note: unlike toMap, differenceSet lists the fields to leave out)<br>
*
* @author lihong10 2015-4-20 7:45:38 PM
* @param bean java bean
* @param differenceSet fields to exclude
* @return object map
*/
public static Map<String, Object> toMapByDifferenceSet(Object bean, String[] differenceSet) {
Field[] fields = bean.getClass().getDeclaredFields();
Map<String, Object> map = new HashMap<String, Object>();
List<String> differenceSetList = new ArrayList<String>(Arrays.asList(differenceSet));
for (Field field : fields) {
boolean isInDifferenceSet = false;
for (Iterator<String> iterator = differenceSetList.iterator(); iterator.hasNext(); ) {
String difference = (String) iterator.next();
if (field.getName().equals(difference)) {
isInDifferenceSet = true;
iterator.remove();
break;
}
}
if (!isInDifferenceSet) {
field.setAccessible(true);
try {
map.put(field.getName(), field.get(bean));
} catch (IllegalArgumentException | IllegalAccessException e) {
System.out.println(e);
}
}
}
if (differenceSetList.isEmpty() == false) {
throw new RuntimeException(String.format("there is no field named '%s' declared in %s where runing method toMapByDifferenceSet(Object bean, String[] differenceSet)",
differenceSetList.toString(),
bean.getClass().getName()));
}
return map;
}
/**
* Converts a list of entities into a list of maps.
*
* @author lihong10 2015-6-30 4:38:28 PM
* @param list
* @param differenceSet
* @return List<Map<String, Object>>
*/
@SuppressWarnings("rawtypes")
public static List<Map<String, Object>> toMapByDifferenceSet(List list, String[] differenceSet) {
List<Map<String, Object>> data = new ArrayList<Map<String, Object>>();
for (Object obj : list) {
data.add(toMapByDifferenceSet(obj, differenceSet));
}
return data;
}
/**
* Gets the value of one of an object's fields.
*
* @author lihong10 2017-10-31 12:12:09 PM
* @param propertyName
* @param obj
* @return the field value
*/
@SuppressWarnings({"rawtypes", "unchecked"})
public static Object getProperty(final Object obj, String propertyName) {
Class cls = obj.getClass();
try {
final Field field = cls.getDeclaredField(propertyName);
return AccessController.doPrivileged(new PrivilegedAction() {
public Object run() {
boolean accessible = field.isAccessible();
field.setAccessible(true);
Object result = null;
try {
result = field.get(obj);
} catch (IllegalAccessException e) {
//error won't happen
System.out.println(e);
}
field.setAccessible(accessible);
return result;
}
});
} catch (Exception e) {
System.out.println(e);
return null;
}
}
/**
* Walks up the class hierarchy to find an object's declared Field.
* @author lihong10 2017-10-31 12:12:09 PM
* @param clazz the class
* @param propertyName the field name
* @return the matching Field
* @throws NoSuchFieldException if no such field exists.
*/
@SuppressWarnings("rawtypes")
public static Field getDeclaredField(Class clazz, String propertyName)
throws NoSuchFieldException {
for (Class superClass = clazz; superClass != Object.class;
superClass = superClass.getSuperclass()) {
try {
return superClass.getDeclaredField(propertyName);
} catch (NoSuchFieldException ex) {
// the field is not declared in this class; keep walking up the hierarchy
System.out.println(ex);
}
}
throw new NoSuchFieldException("No such field: " + clazz.getName()
+ '.' + propertyName);
}
/**
 * Forcibly set a field of an object, ignoring private/protected modifiers.
 * @author lihong10 2017-10-31 12:12:09 PM
 * @param object the target instance
 * @param propertyName the field name
 * @param newValue the value to assign
 * @throws NoSuchFieldException if no such field exists.
 */
@SuppressWarnings({"unchecked", "rawtypes"})
public static void forceSetProperty(final Object object,
final String propertyName, final Object newValue)
throws NoSuchFieldException {
final Field field = getDeclaredField(object.getClass(), propertyName);
AccessController.doPrivileged(new PrivilegedAction() {
/** run. */
public Object run() {
boolean accessible = field.isAccessible();
field.setAccessible(true);
try {
field.set(object, newValue);
} catch (IllegalAccessException e) {
//"Error won't happen
System.out.println(e);
}
field.setAccessible(accessible);
return null;
}
});
}
}
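To make the exclusion semantics of toMapByDifferenceSet concrete, here is a minimal, self-contained sketch (the User bean and the demo class are hypothetical and not part of the project):
package com.lhever.modules.core.utils;
import java.util.Map;
public class Bean2MapUtilDemo {
    // hypothetical bean, for demonstration only
    private static class User {
        private Long id = 1L;
        private String name = "tom";
    }
    public static void main(String[] args) {
        // every declared field except "id" is copied into the map
        Map<String, Object> map = Bean2MapUtil.toMapByDifferenceSet(new User(), new String[]{"id"});
        System.out.println(map); // prints {name=tom}
    }
}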
package com.lhever.modules.core.utils;
import java.util.Collection;
import java.util.Map;
public class CollectionUtils {
public static boolean isEmpty(Collection coll) {
return (coll == null || coll.isEmpty());
}
public static boolean isNotEmpty(Collection coll) {
return !CollectionUtils.isEmpty(coll);
}
@SuppressWarnings("Map generic type missing")
public static boolean mapEmpty(Map map) {
return (map == null || map.size() == 0);
}
@SuppressWarnings("Map generic type missing")
public static boolean mapNotEmpty(Map map) {
return !mapEmpty(map);
}
public static <T> T getValue(Map<String, Object> map, String key, Class<T> type) {
    if (mapEmpty(map)) {
        return null;
    }
    if (StringUtils.isBlank(key)) {
        System.out.println("key is blank");
        return null;
    }
    T value = null;
    try {
        // Class.cast is a checked cast; a ClassCastException lands in the catch block below
        value = type.cast(map.get(key));
    } catch (Exception e) {
        System.out.println("failed to convert the value of key [" + key + "] to type " + type);
        e.printStackTrace();
    }
    return value;
}
}
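A quick sketch of how the generic getValue is meant to be called (the key and value below are made up):
package com.lhever.modules.core.utils;
import java.util.HashMap;
import java.util.Map;
public class CollectionUtilsDemo {
    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<String, Object>();
        map.put("kafka.consumer.count", 6);
        // the target type is passed explicitly; a type mismatch yields null instead of an exception
        Integer count = CollectionUtils.getValue(map, "kafka.consumer.count", Integer.class);
        System.out.println(count); // prints 6
    }
}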
package com.lhever.modules.core.utils;
import com.lhever.modules.core.exception.IllegalParameterException;
import org.apache.commons.lang3.StringUtils;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormatter;
import org.joda.time.format.ISODateTimeFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
 * Date/time utility class
 */
public class DateFormatUtils {
protected static final Logger LOG = LoggerFactory.getLogger(DateFormatUtils.class);
// public static final String DATE_TIME_FORMATTER_DEFAULT = "yyyy-MM-dd'T'HH:mm:ss.SSSXXX";
public static final String DATE_TIME_FORMATTER_DEFAULT = "yyyy-MM-dd HH:mm:ss";
public static final String DATE_TIME_FORMATTER_STANDARD = "yyyy-MM-dd HH:mm:ss";
public static final String DATE_TIME_FORMATTER_STANDARD_TZ = "yyyy-MM-dd'T'HH:mm:ss'Z'";
public static final String DATE_TIME_FORMATTER_LONG = "yyyy-MM-dd HH:mm:ss.SSS";
public static final String DATE_TIME_FORMATTER_LONG_TZ = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
public static final String DATE_TIME_FORMATTER_NET = "yyyy-MM-dd HH:mm:ss.SSS";
public static final String DATE_TIME_FORMATTER_YEAR_MONTH = "yyyyMM";
public static final String DATE_TIME_FORMATTER_YMD = "yyyy-MM-dd";
public static final String DATE_TIME_FORMATTER_YEAR_MONTH_DAY = "yyyyMMdd";
public static final long TIMEMILLIS_FOR_ONE_DAY = 24L * 60 * 60 * 1000;
public static boolean matchRegex(String s, String regex) {
Pattern p = Pattern.compile(regex);
Matcher m = p.matcher(s);
return m.matches();
}
/**
 * Parse a date string: ISO 8601 strings containing 'T', several plain patterns, or epoch milliseconds.
 * @author lihong10 2018/6/28 10:11:00
 * @param dateStr the date string to parse
 * @return java.util.Date
 */
public static Date toDate(String dateStr) {
if (StringUtils.isBlank(dateStr)) {
return null;
}
if (dateStr.indexOf("T") > 0) {
DateTimeFormatter dateTimeFormatter = ISODateTimeFormat.dateTimeParser();
DateTime dateTime = null;
try {
dateTime = dateTimeFormatter.parseDateTime(dateStr);
} catch (Exception e) {
LOG.error("日期字符串是:" + dateStr + ", 日期转换错误", e);
}
if (dateTime != null) {
Date date = dateTime.toDate();
LOG.debug("日期字符串是:{}, 时间戳是:{}", dateStr, date.getTime());
return date;
}
LOG.error("日期字符串是:{}, 转换后是空值, 继续匹配.......", dateStr);
}
String[][] patterns = new String[][] {
new String[]{"^(\\d{4}-\\d{2}-\\d{2}[T]{1}\\d{2}:\\d{2}:\\d{2}[Z]{1})$", "yyyy-MM-dd'T'HH:mm:ss'Z'"},
new String[]{"(^(\\d{4}-\\d{2}-\\d{2}[T]{1}\\d{2}:\\d{2}:\\d{2}\\.\\d{3}[Z]{1})$)", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"},
new String[]{"(^(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2})$)", "yyyy-MM-dd HH:mm:ss"},
new String[]{"(^(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3})$)", "yyyy-MM-dd HH:mm:ss.SSS"},
};
Date date = null;
for (String[] pattern : patterns) {
if (!matchRegex(dateStr, pattern[0])) {
continue;
}
try {
SimpleDateFormat dateFormat = new SimpleDateFormat(pattern[1]);
date = dateFormat.parse(dateStr);
LOG.debug("date[{}] is match pattern[{}]", dateStr, pattern[1]);
} catch (ParseException e) {
LOG.debug("parse error, dateStr:{}, pattern:{}", dateStr, pattern, e);
throw new IllegalParameterException("日期" + dateStr + "格式转换错误", e);
}
return date;
}
try {
date = new Date(Long.parseLong(dateStr));
} catch (NumberFormatException e) {
LOG.debug("parse error, dateStr:{}", dateStr, e);
throw new IllegalParameterException("日期" + dateStr + "不是long型", e);
}
return date;
}
public static String unixToIso(Long timestamp) {
if (timestamp == null) {
return null;
}
Date date = new Date(timestamp);
String dateStr = getISO8601Timestamp(date);
LOG.debug("timestamp:{} => ISODATE:{}", timestamp, dateStr);
return dateStr;
}
public static String getISO8601Timestamp(Date date) {
// TimeZone tz = TimeZone.getTimeZone("UTC");
SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");
// df.setTimeZone(tz);
String nowAsISO = null;
try {
nowAsISO = df.format(date);
} catch (Exception e) {
LOG.info("to format yyyy-MM-dd'T'HH:mm:ss.SSSXXX error", e);
}
return nowAsISO;
}
}
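The three parsing branches of toDate (ISO 8601 strings containing 'T', the plain patterns, and epoch milliseconds) can be exercised as follows; the sample values are arbitrary:
package com.lhever.modules.core.utils;
public class DateFormatUtilsDemo {
    public static void main(String[] args) {
        System.out.println(DateFormatUtils.toDate("2018-06-28T10:11:00.123Z")); // ISO 8601 branch
        System.out.println(DateFormatUtils.toDate("2018-06-28 10:11:00"));      // regex/pattern branch
        System.out.println(DateFormatUtils.toDate("1530151860000"));            // epoch-millisecond branch
    }
}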
package com.lhever.modules.core.utils;
import com.lhever.modules.core.exception.IllegalParameterException;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.*;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;
public class FileUtils {
private static Logger LOG = LoggerFactory.getLogger(FileUtils.class);
/**
 * @param filePath path of the file to read
 * @param encoding character encoding; defaults to UTF-8 when blank
 * @return the non-blank lines of the file, trimmed
 */
public static List<String> readByPath(String filePath, String encoding) {
List<String> lines = new ArrayList<String>();
encoding = StringUtils.isBlank(encoding) ? "UTF-8" : encoding;
InputStreamReader reader = null;
try {
File file = new File(filePath);
if (file.isFile() && file.exists()) { // make sure the file exists and is a regular file
    reader = new InputStreamReader(new FileInputStream(file), encoding);// honor the requested encoding
BufferedReader bufferedReader = new BufferedReader(reader);
String line = null;
while ((line = bufferedReader.readLine()) != null) {
if (StringUtils.isNotBlank(line)) {
lines.add(line.trim());
}
}
} else {
throw new FileNotFoundException("file not found: " + filePath);
}
} catch (Exception e) {
LOG.error("读取文件内容出错", e);
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException e) {
LOG.error("IOException", e);
}
}
}
return lines;
}
public static List<String> readByName(String fileName) {
List<String> lines = new ArrayList<String>();
InputStreamReader reader = null;
try {
InputStream resourceAsStream = Thread.currentThread().getContextClassLoader().getResourceAsStream(fileName);
reader = new InputStreamReader(resourceAsStream, "UTF-8");// read the classpath resource as UTF-8
BufferedReader bufferedReader = new BufferedReader(reader);
String line = null;
while ((line = bufferedReader.readLine()) != null) {
if (StringUtils.isNotBlank(line)) {
lines.add(line.trim());
}
}
} catch (Exception e) {
LOG.error("读取文件内容出错", e);
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException e) {
LOG.error("IOException", e);
}
}
}
return lines;
}
public void copyByBio(File fromFile, File toFile) throws FileNotFoundException {
InputStream inputStream = null;
OutputStream outputStream = null;
try {
inputStream = new BufferedInputStream(new FileInputStream(fromFile));
outputStream = new BufferedOutputStream(new FileOutputStream(toFile));
byte[] bytes = new byte[1024];
int i;
//copy: read a block from the input stream, then write it to the output stream
while ((i = inputStream.read(bytes)) != -1) {
outputStream.write(bytes, 0, i);
}
} catch (Exception e) {
e.printStackTrace();
} finally {
try {
if (inputStream != null)
inputStream.close();
if (outputStream != null)
outputStream.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
/**
 * Copy a file using FileChannel
 *
 * @param fromFile the source file
 * @param toFile the destination file
 */
public static void copyByFileChannel(File fromFile, File toFile) {
FileInputStream fileInputStream = null;
FileOutputStream fileOutputStream = null;
FileChannel fileChannelInput = null;
FileChannel fileChannelOutput = null;
try {
fileInputStream = new FileInputStream(fromFile);
fileOutputStream = new FileOutputStream(toFile);
//obtain the channel of fileInputStream
fileChannelInput = fileInputStream.getChannel();
//obtain the channel of fileOutputStream
fileChannelOutput = fileOutputStream.getChannel();
//transfer the data of the fileChannelInput channel into the fileChannelOutput channel
fileChannelInput.transferTo(0, fileChannelInput.size(), fileChannelOutput);
} catch (IOException e) {
e.printStackTrace();
} finally {
close(fileInputStream);
close(fileOutputStream);
close(fileChannelInput);
close(fileChannelOutput);
}
}
public static void copyByFileChannelEx(File fromFile, File toFile) {
if (!fromFile.exists() || fromFile.isDirectory()) {
throw new IllegalParameterException("source file not exists nor source file is a directory");
}
try (FileInputStream fileInputStream = new FileInputStream(fromFile);
FileOutputStream fileOutputStream = new FileOutputStream(toFile);
FileChannel fileChannelInput = fileInputStream.getChannel();
FileChannel fileChannelOutput = fileOutputStream.getChannel()) {
fileChannelInput.transferTo(0, fileChannelInput.size(), fileChannelOutput);
} catch (IOException e) {
e.printStackTrace();
}
}
public static void copyByFileChannel(String from, String to) {
File fileFrom = new File(from);
if (!fileFrom.exists() || fileFrom.isDirectory()) {
throw new IllegalParameterException("source file not exists nor source file is a directory");
}
File fileTo = new File(to);
/*try {
if (!fileTo.exists()) {
fileTo.createNewFile();
} else if (fileTo.isDirectory()) {
fileTo.delete();
fileTo.createNewFile();
}
} catch (IOException e) {
throw new IllegalParameterException("dest file not exists ", e);
}
*/
copyByFileChannel(fileFrom, fileTo);
}
public static void close(Closeable closeable) {
if (closeable == null) {
return;
}
try {
closeable.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
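A short usage sketch for FileUtils; the paths below are hypothetical and assume the source files exist:
package com.lhever.modules.core.utils;
import java.io.File;
import java.util.List;
public class FileUtilsDemo {
    public static void main(String[] args) {
        // read the non-blank lines of a file with an explicit encoding
        List<String> lines = FileUtils.readByPath("/tmp/consumer-config.properties", "UTF-8");
        System.out.println(lines);
        // the try-with-resources variant closes the streams and channels automatically
        FileUtils.copyByFileChannelEx(new File("/tmp/a.txt"), new File("/tmp/b.txt"));
    }
}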
package com.lhever.modules.core.utils;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.List;
import java.util.Map;
import java.util.TimeZone;
public class JsonUtils {
private static final Logger LOG = LoggerFactory.getLogger(JsonUtils.class);
private static ObjectMapper objectMapper = new ObjectMapper();
public static String toJsonWithFormat(Object obj, String dateFormat) {
if (obj == null) {
return null;
}
String result = null;
try {
ObjectMapper objectMapper = new ObjectMapper();
//back up the old date format
//DateFormat oldFormat = objectMapper.getDateFormat();
if (StringUtils.isNotEmpty(dateFormat)) {
objectMapper.setDateFormat(new SimpleDateFormat(dateFormat));
//without an explicit time zone the output would be 8 hours off from the local (GMT+8) system time
TimeZone timeZone = TimeZone.getTimeZone("GMT+8");
objectMapper.setTimeZone(timeZone);
}
result = objectMapper.writeValueAsString(obj);
//restore the old date format
//objectMapper.setDateFormat(oldFormat);
} catch (IOException e) {
    LOG.error("failed to serialize the object to JSON with date format " + dateFormat, e);
}
return result;
}
public static String object2Json(Object obj) {
if (obj == null) {
return null;
}
String result = null;
try {
result = objectMapper.writeValueAsString(obj);
} catch (IOException e) {
e.printStackTrace();
LOG.error("对象转JSON字符串异常", e);
}
return result;
}
public static String object2Json(Object obj, boolean indented) {
if(obj == null) {
return null;
}
String result = null;
try {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, true);
if(indented) {
result = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(obj);
} else {
result = objectMapper.writeValueAsString(obj);
}
} catch (IOException e) {
LOG.error("error when object to json", e);
}
return result;
}
public static Map<?, ?> jsonToMap(String json) {
return json2Object(json, Map.class);
}
public static <T> T json2Object(String json, Class<T> cls) {
T result = null;
try {
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
result = objectMapper.readValue(json, cls);
} catch (IOException e) {
e.printStackTrace();
LOG.error("JSON字符串转对象异常", e);
}
return result;
}
public static <T> T conveterObject(Object srcObject, Class<T> destObjectType) {
String jsonContent = object2Json(srcObject);
return json2Object(jsonContent, destObjectType);
}
public static <T> List<T> fromJsonList(String json, Class<T> clazz) throws IOException {
return objectMapper.readValue(json, objectMapper.getTypeFactory().constructCollectionType(List.class, clazz));
}
}
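A minimal round-trip sketch for JsonUtils (the JSON literal below is made up):
package com.lhever.modules.core.utils;
import java.util.Map;
public class JsonUtilsDemo {
    public static void main(String[] args) {
        Map<?, ?> map = JsonUtils.jsonToMap("{\"name\":\"tom\",\"age\":20}");
        System.out.println(map.get("name")); // prints tom
        // indented = true pretty-prints the output
        System.out.println(JsonUtils.object2Json(map, true));
    }
}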
package com.lhever.modules.core.utils;
public class ObjectUtils {
public static <T> T checkNotNull(T arg, String text) {
if (arg == null) {
throw new NullPointerException(text);
}
return arg;
}
}
package com.lhever.modules.core.utils;
import java.util.*;
/**
* Created by lihong10 on 2017/10/28.
*/
public class ParseUtil {
private ParseUtil() {
}
/**
 * Replace, in order, the placeholders delimited by openToken (default "{") and closeToken (default "}")
 * in text with values taken from data or from the args array.
 * The method is borrowed from the MyBatis framework: it is MyBatis' default placeholder parser and performs extremely well.
 * @param openToken
 * @param closeToken
 * @param text
 * @param data a Map, Properties, List or bean that supplies named or positional values; may be null
 * @param args positional values, used when data is null
 * @return
 */
public static String parse(String openToken, String closeToken, String text, Object data, Object... args) {
if (data == null && (args == null || args.length <= 0)) {
return text;
}
int argsIndex = 0;
if (text == null || text.isEmpty()) {
return "";
}
char[] src = text.toCharArray();
int offset = 0;
// search open token
int start = text.indexOf(openToken, offset);
if (start == -1) {
return text;
}
final StringBuilder builder = new StringBuilder();
StringBuilder expression = null;
while (start > -1) {
if (start > 0 && src[start - 1] == '\\') {
// this open token is escaped. remove the backslash and continue.
builder.append(src, offset, start - offset - 1).append(openToken);
offset = start + openToken.length();
} else {
// found open token. let's search close token.
if (expression == null) {
expression = new StringBuilder();
} else {
expression.setLength(0);
}
builder.append(src, offset, start - offset);
offset = start + openToken.length();
int end = text.indexOf(closeToken, offset);
while (end > -1) {
if (end > offset && src[end - 1] == '\\') {
// this close token is escaped. remove the backslash and continue.
expression.append(src, offset, end - offset - 1).append(closeToken);
offset = end + closeToken.length();
end = text.indexOf(closeToken, offset);
} else {
expression.append(src, offset, end - offset);
offset = end + closeToken.length();
break;
}
}
if (end == -1) {
// close token was not found.
builder.append(src, start, src.length - start);
offset = src.length;
} else {
///copied from MyBatis; only a few lines in this else-branch were modified
String value = null;
String key = expression.toString().trim();
if(data == null) {
value = TokenHandler.handleToken(argsIndex, args);
argsIndex++;
} else {
if (data instanceof Map) {
Map<String, Object> map = (Map) data;
value = TokenHandler.handleToken(key, map);
} else if (data instanceof Properties) {
Properties prop = (Properties) data;
value = TokenHandler.handleToken(key, prop);
} else if(data instanceof List){
List list = (List) data;
value = TokenHandler.handleToken(argsIndex, list);
argsIndex++;
}else {
value = TokenHandler.handleToken(key, data);
}
}
builder.append(value);
offset = end + closeToken.length();
}
}
start = text.indexOf(openToken, offset);
}
if (offset < src.length) {
builder.append(src, offset, src.length - offset);
}
return builder.toString();
}
public static String parseMap(String text, Map<String, Object> data) {
return ParseUtil.parse("{", "}", text, data, null);
}
public static String parseString(String text, Object... args) {
return ParseUtil.parse("{", "}", text, null, args);
}
public static String parseProp(String text, Properties prop) {
return ParseUtil.parse("{", "}", text, prop, null);
}
public static String parseList(String text, List<Object> list) {
return ParseUtil.parse("{", "}", text, list, null);
}
public static String parseObj(String text, Object bean) {
return ParseUtil.parse("{", "}", text, bean, null);
}
private static class TokenHandler {
private TokenHandler() {
}
private static String handleToken(String key, Properties variables) {
if (variables != null) {
if (variables.containsKey(key)) {
return variables.getProperty(key);
}
return null;//the key is not present
}
return null; //variables is null
}
private static String handleToken(int index, Object... args) {
if (args == null || args.length == 0) {
return null;
}
String value = (index <= args.length - 1) ?
(args[index] == null ? null : args[index].toString()) : null;
return value;
}
private static String handleToken(int index, List<Object> list) {
if (list == null || list.size() == 0) {
return null;
}
String value = (index <= list.size() - 1) ?
(list.get(index) == null ? null : list.get(index).toString()) : null;
return value;
}
private static String handleToken(String key, Map<String, Object> map) {
String value = (map == null || map.get(key) == null) ?
null : map.get(key).toString();
return value;
}
private static String handleToken(String property, Object obj) {
if (obj == null) {
return null;
}
Object field = Bean2MapUtil.getProperty(obj, property);
if (field == null) {
return null;
}
return field.toString();
}
}
/**
 * Usage examples
 * @param args
 */
public static void main(String... args) {
Map data = new HashMap<String, Object>(){{
put("name", "雷锋");
put("result", true);
put("confidence", 99.9);
put("", this);
}};
List<Object> li = new LinkedList<Object>(){{
add("david");
add(1);
add(true);
add(null);
add(new Object());
add(123.4);
add(1000L);
}};
//an escaped {} is not replaced
System.out.println(ParseUtil.parseString("my name is \\{}, the result is {}, the confidence is {}%", "Lei Feng", true, 100));
System.out.println(ParseUtil.parseString("my name is {}, the result is {}, the confidence is {}%", "Lei Feng", true, 100));
System.out.println(ParseUtil.parseMap("my name is {name}, the result is {result}, the confidence is {confidence}%, the executing class is {}", data));
System.out.println(ParseUtil.parseList("{}-{}-{}-{}-{}-{}-{}-{}", li));
System.out.println(ParseUtil.parseObj("{}-{}-{}-{}-{}-{}-{}-{}", null));
System.out.println(ParseUtil.parseList(null, li));
}
}
package com.lhever.modules.core.utils;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.*;
import java.util.Properties;
public class PropertiesFile {
private final static Logger LOG = LoggerFactory.getLogger(PropertiesFile.class);
private Properties p;
private String fileName;
/**
 * @param fileName the name of the properties file to load; a path can be included if necessary
 * @param outside true to load the file from the file system, false to load it from the classpath
 * @author lihong10 2015-4-14 11:19:41 AM
 * @since v1.0
 */
public PropertiesFile(String fileName, boolean outside) {
this.p = new Properties();
this.fileName = fileName;
InputStream inputStream = null;
try {
if (outside) {
inputStream = getInputStreamByFile(fileName);
} else {
inputStream = getInputStream(Thread.currentThread().getContextClassLoader(), fileName);
if (inputStream == null) {
inputStream = getInputStream(PropertiesFile.class.getClassLoader(), fileName);
}
}
p.load(inputStream);
} catch (Exception ex) {
// LOG.error("configuration file not found: " + fileName, ex);
throw new RuntimeException("configuration file not found: " + fileName, ex);
} finally {
if (inputStream != null) {
try {
inputStream.close();
} catch (IOException e) {
LOG.error("关闭文件流失败", e);
}
}
}
}
public PropertiesFile(Properties props) {
this.p = props;
}
public static InputStream getInputStreamByFile(String path) {
File file = new File(path);
if (!file.isFile() || !file.exists()) {
throw new IllegalArgumentException("file " + path + " does not exist");
}
InputStream in = null;
try {
in = new FileInputStream(file);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
return in;
}
public static InputStream getInputStream(ClassLoader classLoader, String fileName) {
if (classLoader == null || StringUtils.isBlank(fileName)) {
LOG.info("classLoader is null or fileName is null");
return null;
}
fileName = fileName.trim();
InputStream stream = null;
try {
stream = classLoader.getResourceAsStream(fileName);
} catch (Exception e) {
LOG.error("read " + fileName + " error", e);
}
if (stream == null && !fileName.startsWith("/")) {
try {
stream = classLoader.getResourceAsStream("/" + fileName);
} catch (Exception e) {
LOG.error("read /" + fileName + " error", e);
}
}
return stream;
}
/**
 * @param propertyName the property key
 * @return property value
 * @author lihong10 2015-4-14 11:22:23 AM
 * @since v1.0
 */
public String getStringProperty(String propertyName) {
return p.getProperty(propertyName);
}
public String getStringProperty(String propertyName, String dft) {
String value = p.getProperty(propertyName);
if (StringUtils.isBlank(value)) {
return dft;
}
return value;
}
public Integer getIntProperty(String propertyName, Integer dft) {
String raw = p.getProperty(propertyName);
return getInt(raw, dft);
}
public Long getLongProperty(String propertyName, Long dft) {
String raw = p.getProperty(propertyName);
return getLong(raw, dft);
}
public Boolean getBooleanProperty(String propertyName, Boolean dft) {
String raw = p.getProperty(propertyName);
return getBoolean(raw, dft);
}
/**
 * @param propertyName the property key
 * @param propertyValue the value to set
 * @author lihong10 2015-6-15 4:16:54 PM
 * @since v1.0
 */
public void setProperty(String propertyName, String propertyValue) {
p.setProperty(propertyName, propertyValue);
}
/**
* @return the Properties
*/
public Properties getProps() {
return p;
}
/**
* @return the fileName
*/
public String getFileName() {
return fileName;
}
private Integer getInt(String str, Integer dft) {
try {
return Integer.parseInt(str.trim());
} catch (Exception e) {
LOG.error("error when parsing " + str + " to int, use default value: " + dft);
return dft;
}
}
private Long getLong(String str, Long dft) {
Long value = null;
try {
value = Long.parseLong(str.trim());
} catch (Exception e) {
LOG.error("error when parsing " + str + " to long, use default value: " + dft);
return dft;
}
return (value == null) ? dft : value;
}
private Boolean getBoolean(String str, Boolean dft) {
try {
return Boolean.parseBoolean(str.trim());
} catch (Exception e) {
LOG.error("error when parsing " + str + " to bool, use default value: " + dft);
return dft;
}
}
}
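A minimal sketch of reading keys of the consumer-config.properties shown earlier through PropertiesFile, assuming the file sits on the classpath (outside = false):
package com.lhever.modules.core.utils;
public class PropertiesFileDemo {
    public static void main(String[] args) {
        // false: load from the classpath; true would load from the file system
        PropertiesFile file = new PropertiesFile("consumer-config.properties", false);
        String groupId = file.getStringProperty("group.id");
        Integer consumerCount = file.getIntProperty("kafka.consumer.count", 1);
        System.out.println(groupId + ", " + consumerCount);
    }
}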
package com.lhever.modules.core.utils;
public class ReflectionUtils {
public static <T> T newInstance(String className, Class<?> instanceType, ClassLoader classLoader) {
T instance = null;
try {
Class<?> connectorClass = classLoader.loadClass(className).asSubclass(instanceType);
instance = (T) connectorClass.newInstance();
} catch (Throwable t) {
t.printStackTrace();
}
return instance;
}
}
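ReflectionUtils is presumably how the consumer.data.processor class name from the configuration file is turned into a live instance. The sketch below uses a JDK class instead of the project's processor so it runs standalone:
package com.lhever.modules.core.utils;
import java.util.List;
public class ReflectionUtilsDemo {
    public static void main(String[] args) {
        // in the real application the class name would come from consumer.data.processor
        List<String> list = ReflectionUtils.newInstance(
                "java.util.ArrayList", List.class, Thread.currentThread().getContextClassLoader());
        list.add("hello");
        System.out.println(list); // prints [hello]
    }
}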
package com.lhever.modules.core.utils;
import java.util.ArrayList;
import java.util.List;
public class StringUtils {
private static final String EMPTY = "";
public static boolean isBlank(final CharSequence cs) {
int strLen;
if (cs == null || (strLen = cs.length()) == 0) {
return true;
}
for (int i = 0; i < strLen; i++) {
if (!Character.isWhitespace(cs.charAt(i))) {
return false;
}
}
return true;
}
public static boolean isNotBlank(final CharSequence cs) {
return !isBlank(cs);
}
public static boolean isNoneBlank(final CharSequence... css) {
return !isAnyBlank(css);
}
public static boolean isAnyBlank(final CharSequence... css) {
if (ArrayUtils.isEmpty(css)) {
return false;
}
for (final CharSequence cs : css){
if (isBlank(cs)) {
return true;
}
}
return false;
}
public static List<Integer> stringToIntegerLst(List<String> inList) {
List<Integer> iList = new ArrayList<Integer>(inList.size());
for (int i = 0; i < inList.size(); i++) {
try {
iList.add(Integer.parseInt(inList.get(i)));
} catch (Exception e) {
    // skip entries that cannot be parsed as integers
}
}
return iList;
}
public static void println(String format, Object... objects) {
String str = ParseUtil.parseString(format, objects);
System.out.println(str);
}
public static String join(final Object[] array, final String separator) {
if (array == null) {
return null;
}
return join(array, separator, 0, array.length);
}
public static String join(final Object[] array, String separator, final int startIndex, final int endIndex) {
if (array == null) {
return null;
}
if (separator == null) {
separator = EMPTY;
}
// endIndex - startIndex > 0: Len = NofStrings *(len(firstString) + len(separator))
// (Assuming that all Strings are roughly equally long)
final int noOfItems = endIndex - startIndex;
if (noOfItems <= 0) {
return EMPTY;
}
final StringBuilder buf = new StringBuilder(noOfItems * 16);
for (int i = startIndex; i < endIndex; i++) {
if (i > startIndex) {
buf.append(separator);
}
if (array[i] != null) {
buf.append(array[i]);
}
}
return buf.toString();
}
}
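StringUtils.println delegates to ParseUtil's {} placeholders, so it formats much like slf4j logging; a short sketch with made-up values:
package com.lhever.modules.core.utils;
public class StringUtilsDemo {
    public static void main(String[] args) {
        StringUtils.println("partition {} starts at offset {}", 0, 100);
        System.out.println(StringUtils.join(new Object[]{"a", "b", "c"}, "-")); // prints a-b-c
    }
}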
package com.lhever.modules.core.utils;
public class ThreadUtils {
public static void joinThread(Thread thread) {
if (thread == null) {
return;
}
try {
thread.join();
} catch (InterruptedException e) {
    // restore the interrupt flag instead of swallowing the interruption
    Thread.currentThread().interrupt();
} catch (Exception e) {
e.printStackTrace();
}
}
public static void joinThread(Thread thread, long millis) {
if (thread == null) {
return;
}
try {
thread.join(millis);
} catch (InterruptedException e) {
    // restore the interrupt flag instead of swallowing the interruption
    Thread.currentThread().interrupt();
} catch (Exception e) {
e.printStackTrace();
}
}
}
package com.lhever.modules.core.utils;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.TreeTraversingParser;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;
import com.fasterxml.jackson.dataformat.yaml.YAMLParser;
import java.io.IOException;
import java.io.InputStream;
public class YmlUtils {
private static YAMLFactory yamlFactory;
private static ObjectMapper mapper;
static {
yamlFactory = new YAMLFactory();
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.enable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES);
mapper = objectMapper;
}
public static JsonNode getJsonNode(InputStream input) {
JsonNode node = null;
try {
YAMLParser yamlParser = yamlFactory.createParser(input);
node = mapper.readTree(yamlParser);
} catch (IOException e) {
e.printStackTrace();
}
return node;
}
public static <T> T build(InputStream input, Class<? extends T> clss) throws IOException {
T obj = null;
try {
JsonNode node = getJsonNode(input);
TreeTraversingParser treeTraversingParser = new TreeTraversingParser(node);
obj = mapper.readValue(treeTraversingParser, clss);
} catch (Exception e) {
e.printStackTrace();
throw e;
}
return obj;
}
}
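Because the top level of consumer-config.yml is a list (note the leading "- "), the parsed tree is an array node; here is a minimal sketch of reading it, assuming the file is on the classpath:
package com.lhever.modules.core.utils;
import com.fasterxml.jackson.databind.JsonNode;
import java.io.InputStream;
public class YmlUtilsDemo {
    public static void main(String[] args) {
        InputStream in = Thread.currentThread().getContextClassLoader()
                .getResourceAsStream("consumer-config.yml");
        JsonNode root = YmlUtils.getJsonNode(in);
        // element 0 is the single configuration block; the keys keep their literal dotted names
        System.out.println(root.get(0).get("group.id"));
    }
}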
Finally, the pom file used by the project is also included:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.lhever.modules</groupId>
<artifactId>lhever-main</artifactId>
<packaging>pom</packaging>
<version>1.0-SNAPSHOT</version>
<modules>
<module>kafka-sender</module>
<module>kafka-consumer</module>
<module>core</module>
<module>datastructure</module>
</modules>
<properties>
<kafka.version>0.10.0.1</kafka.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<junit.version>4.11</junit.version>
<protobuf.java.version>2.5.0</protobuf.java.version>
<protobuf.java.format.version>1.2</protobuf.java.format.version>
<!-- skipTests compiles the test classes (generates .class files) but does not run them; you can still run them manually. -->
<skipTests>true</skipTests>
</properties>
<dependencies>
<!-- unit tests -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-lang3 -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.7</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-cbor</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-smile</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-yaml</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.module</groupId>
<artifactId>jackson-module-paranamer</artifactId>
<version>2.6.6</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>${kafka.version}</version>
</dependency>
<!--<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.6</version>
</dependency>-->
<dependency>
<groupId>com.google.protobuf</groupId>
<artifactId>protobuf-java</artifactId>
<version>${protobuf.java.version}</version>
</dependency>
<dependency>
<groupId>com.googlecode.protobuf-java-format</groupId>
<artifactId>protobuf-java-format</artifactId>
<version>${protobuf.java.format.version}</version>
</dependency>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.9</version>
</dependency>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.10</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.2</version>
</dependency>
</dependencies>
<build>
<plugins>
<!-- when packaging the jar, configure the manifest so that the jars under lib/ are added to the classpath -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<classesDirectory>target/classes/</classesDirectory>
<archive>
<manifest>
<!-- the main class entry point -->
<mainClass>com.lhever.modules.consumer.APP</mainClass>
<!-- do not record timestamped snapshot versions in MANIFEST.MF when packaging -->
<useUniqueVersions>false</useUniqueVersions>
<addClasspath>true</addClasspath>
<classpathPrefix>lib/</classpathPrefix>
</manifest>
<manifestEntries>
<Class-Path>.</Class-Path>
</manifestEntries>
</archive>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy</id>
<phase>package</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>
${project.build.directory}/lib
</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
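With the manifest configuration above, a typical build-and-run sequence looks roughly like the following (the jar name is inferred from the module and version and may differ in your build; the config path is a placeholder):
mvn clean package
cd kafka-consumer/target
java -jar kafka-consumer-1.0-SNAPSHOT.jar /path/to/consumer-config.yml yml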