Getting to Know Apache Kafka
What Is Kafka
- Originally used as a message queue, typically forwarding log messages; it has since grown into a powerful distributed event-streaming platform
- Open-sourced by LinkedIn in 2011
Spring Cloud Stream Support for Kafka
Dependency
- Add the Spring Cloud starter: spring-cloud-starter-stream-kafka
Configuration
- spring.cloud.stream.kafka.binder.* — configuration for the binder abstraction
- spring.cloud.stream.kafka.bindings.<channelName>.consumer.* — per-binding configuration; requires the name of the bound channel
- spring.kafka.* — configuration related to the Kafka service itself
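The three configuration levels can be sketched together in a single application.properties fragment (a sketch: the channel name and values are placeholders, and autoCommitOffset is just one example of a binding-level consumer property):

```properties
# Binder level: applies to the Kafka binder as a whole
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092
# Binding level: applies to one named channel (here "newOrders")
spring.cloud.stream.kafka.bindings.newOrders.consumer.autoCommitOffset=true
# Spring Kafka itself
spring.kafka.consumer.group-id=barista-service
```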
Starting Kafka with Docker
Official guides
- https://hub.docker.com/r/confluentinc/cp-kafka
- https://docs.confluent.io/current/quickstart/cos-docker-quickstart.html
Running the image
- https://github.com/confluentinc/cp-docker-images • kafka-single-node/docker-compose.yml
- Use this YAML file as the configuration and run docker-compose up -d
Starting the components
- The concrete steps are walked through under Result Analysis below
Example
Composed of two parts: kafka-barista-service and kafka-waiter-service
kafka-barista-service
Code to modify:
OrderListener
@Component
@Slf4j
@Transactional
public class OrderListener {
    @Autowired
    private CoffeeOrderRepository orderRepository;
    @Autowired
    @Qualifier(Waiter.FINISHED_ORDERS)
    private MessageChannel finishedOrdersMessageChannel;
    @Value("${order.barista-prefix}${random.uuid}")
    private String barista;

    // Listen on Waiter.NEW_ORDERS with @StreamListener;
    // the return value is sent to Waiter.FINISHED_ORDERS via @SendTo
    @StreamListener(Waiter.NEW_ORDERS)
    @SendTo(Waiter.FINISHED_ORDERS)
    public Long processNewOrder(Long id) {
        // Note: getOne() returns a lazy reference and never yields null,
        // so use findById() for the null check to actually work
        CoffeeOrder o = orderRepository.findById(id).orElse(null);
        if (o == null) {
            log.warn("Order id {} is NOT valid.", id);
            throw new IllegalArgumentException("Order ID is INVALID!");
        }
        log.info("Receive a new Order {}. Waiter: {}. Customer: {}",
                id, o.getWaiter(), o.getCustomer());
        o.setState(OrderState.BREWED);
        o.setBarista(barista);
        log.info("barista {}", o.getBarista());
        orderRepository.save(o);
        log.info("Order {} is READY.", id);
        return id;
    }
}
application.properties
spring.application.name=barista-service
order.barista-prefix=springbucks-
server.port=8070
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.hibernate.show_sql=true
spring.jpa.properties.hibernate.format_sql=true
spring.datasource.url=jdbc:mysql://localhost:3306/springbucks?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=123456
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092
spring.cloud.stream.bindings.newOrders.group=barista-service
# Consumer group for the newOrders binding: with multiple barista-service instances running, each message is delivered to only one instance in the group
kafka-waiter-service
Code to modify:
application.properties
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.hibernate.show_sql=false
spring.jpa.properties.hibernate.format_sql=false
# After the first run, comment out the next line unless you want the database re-initialized
spring.datasource.initialization-mode=always
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
info.app.author=DigitalSonic
info.app.encoding=@project.build.sourceEncoding@
server.port=8080
spring.datasource.url=jdbc:mysql://localhost:3306/springbucks?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.password=123456
order.discount=95
resilience4j.ratelimiter.limiters.coffee.limit-for-period=5
resilience4j.ratelimiter.limiters.coffee.limit-refresh-period-in-millis=30000
resilience4j.ratelimiter.limiters.coffee.timeout-in-millis=5000
resilience4j.ratelimiter.limiters.coffee.subscribe-for-events=true
resilience4j.ratelimiter.limiters.coffee.register-health-indicator=true
resilience4j.ratelimiter.limiters.order.limit-for-period=3
resilience4j.ratelimiter.limiters.order.limit-refresh-period-in-millis=30000
resilience4j.ratelimiter.limiters.order.timeout-in-millis=1000
resilience4j.ratelimiter.limiters.order.subscribe-for-events=true
resilience4j.ratelimiter.limiters.order.register-health-indicator=true
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092
# Kafka broker host and port
spring.cloud.stream.bindings.finishedOrders.group=waiter-service
# Consumer group for the finishedOrders binding: with multiple waiter-service instances running, each message is delivered to only one instance in the group
docker-compose.yml
---
version: '2'
services:
  zookeeper:
    image: zookeeper:latest
    # A service named zookeeper, using the zookeeper:latest image
  kafka:
    image: confluentinc/cp-kafka:latest
    # A service named kafka, using the confluentinc/cp-kafka:latest image
    depends_on:
      - zookeeper
      # Kafka depends on the zookeeper service
    ports:
      - 9092:9092
      # Publish port 9092 on the local host
    environment:
      KAFKA_BROKER_ID: 1
      # Integer ID that uniquely identifies this broker within the Kafka cluster
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # ZooKeeper connection address
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      # Connection info used for broker-to-broker and client-to-broker communication,
      # i.e. the broker's listeners; multiple entries are comma-separated, each in the
      # format [security protocol]://hostname-or-IP:port
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      # Key/value map from listener name to security protocol; in most cases the
      # key serves as the listener's alias
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      # Which listener to use for inter-broker communication; set it to a key
      # defined in the listener.security.protocol.map above
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      # Replication factor for the internal consumer-offsets topic; it cannot
      # exceed the broker count, hence 1 for this single-node setup
Result Analysis
Go to the folder containing kafka-single-node/docker-compose.yml and run docker-compose up -d to start the components
Use docker ps to list the running containers
Once they show as up, use docker-compose ps for more detail
Kafka is now listening on port 9092; start BaristaServiceApplication and WaiterServiceApplication
Open Postman and create, pay for, and query an order
Scheduled Tasks in Spring
Spring's abstraction
- TaskScheduler / Trigger / TriggerContext
- All are check-based abstractions: after each run, the scheduler asks the Trigger for the next execution time, passing the previous run's timing in via a TriggerContext
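The check-based idea can be sketched without Spring at all. The following is a simplified mock of the Trigger/TriggerContext contract (NOT the real org.springframework.scheduling interfaces, which carry more context): the scheduler computes the next run by asking the trigger, which inspects the last completion time.

```java
import java.util.Date;

// Simplified sketch of Spring's check-based scheduling abstraction:
// after each run the scheduler "checks" with the Trigger for the next
// execution time, handing over the last run's timing via a TriggerContext.
public class TriggerSketch {

    interface TriggerContext {
        Date lastCompletionTime(); // null if the task has never run
    }

    interface Trigger {
        Date nextExecutionTime(TriggerContext ctx);
    }

    // A fixed-delay trigger: next run = last completion + delay
    static Trigger fixedDelay(long delayMillis) {
        return ctx -> {
            Date last = ctx.lastCompletionTime();
            long base = (last == null) ? System.currentTimeMillis() : last.getTime();
            return new Date(base + delayMillis);
        };
    }

    // What a scheduler loop would do after a task completes
    static long nextAfter(Trigger trigger, long lastCompletionMillis) {
        return trigger.nextExecutionTime(() -> new Date(lastCompletionMillis)).getTime();
    }

    public static void main(String[] args) {
        Trigger t = fixedDelay(1000);
        System.out.println(nextAfter(t, 5000)); // 6000
    }
}
```

The real TaskScheduler repeats this check after every completion, which is what lets a Trigger implement cron expressions, fixed rates, or any custom policy.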
Configuring scheduled tasks
- Enable scheduled-task support with the @EnableScheduling annotation
- Configure a scheduler with <task:scheduler />
- Mark methods with @Scheduled
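The annotation-driven route fits in a few lines (a minimal sketch; the class name and intervals are arbitrary, and it assumes Spring context configuration on the classpath):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Configuration
@EnableScheduling // switch on scheduled-task support
class SchedulingConfig {

    @Scheduled(fixedRate = 1000) // run every 1000 ms, measured start-to-start
    public void tick() {
        // periodic work goes here
    }

    @Scheduled(cron = "0 0 * * * *") // or a cron expression: top of every hour
    public void hourly() {
        // periodic work goes here
    }
}
```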
The Event Mechanism in Spring
Events in Spring
- Spring provides an event mechanism built around ApplicationEvent
- Spring itself publishes events internally as notifications; your code can do the same
Publishing events
- Implement the ApplicationEventPublisherAware interface to have a publisher injected
- Call ApplicationEventPublisher.publishEvent() to publish an event
Listening for events
- Implement the ApplicationListener<> interface to listen
- Or annotate a listener method with @EventListener
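Both sides of the mechanism can be sketched together (DemoEvent, DemoPublisher, and DemoListener are hypothetical names; this assumes a running Spring context):

```java
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// A custom event carries its source (plus any extra payload you add)
class DemoEvent extends ApplicationEvent {
    DemoEvent(Object source) {
        super(source);
    }
}

@Component
class DemoPublisher implements ApplicationEventPublisherAware {
    private ApplicationEventPublisher publisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher; // injected via the Aware callback
    }

    public void fire() {
        publisher.publishEvent(new DemoEvent(this));
    }
}

@Component
class DemoListener {
    @EventListener // annotation-based alternative to implementing ApplicationListener<DemoEvent>
    public void onEvent(DemoEvent event) {
        // handle the event
    }
}
```

The instance below uses exactly this pattern: CustomerController publishes an OrderWaitingEvent, and CoffeeOrderScheduler receives it with @EventListener.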
Example
scheduled-customer-service
Code to modify:
CustomerController
@RestController
@RequestMapping("/customer")
@Slf4j
public class CustomerController implements ApplicationEventPublisherAware {
    @Autowired
    private CoffeeService coffeeService;
    @Autowired
    private CoffeeOrderService coffeeOrderService;
    private CircuitBreaker circuitBreaker;
    private Bulkhead bulkhead;
    private ApplicationEventPublisher applicationEventPublisher;

    public CustomerController(CircuitBreakerRegistry circuitBreakerRegistry,
                              BulkheadRegistry bulkheadRegistry) {
        circuitBreaker = circuitBreakerRegistry.circuitBreaker("menu");
        bulkhead = bulkheadRegistry.bulkhead("menu");
    }

    @GetMapping("/menu")
    public List<Coffee> readMenu() {
        return Try.ofSupplier(
                Bulkhead.decorateSupplier(bulkhead,
                        CircuitBreaker.decorateSupplier(circuitBreaker,
                                () -> coffeeService.getAll())))
                .recover(CircuitBreakerOpenException.class, Collections.emptyList())
                .recover(BulkheadFullException.class, Collections.emptyList())
                .get();
    }

    @PostMapping("/order")
    @io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker(name = "order")
    @io.github.resilience4j.bulkhead.annotation.Bulkhead(name = "order")
    public CoffeeOrder createAndPayOrder() {
        NewOrderRequest orderRequest = NewOrderRequest.builder()
                .customer("Li Lei")
                .items(Arrays.asList("capuccino"))
                .build();
        CoffeeOrder order = coffeeOrderService.create(orderRequest);
        log.info("Create order: {}", order != null ? order.getId() : "-");
        // send the state-change PUT request to waiter-service
        order = coffeeOrderService.updateState(order.getId(),
                OrderStateRequest.builder().state(OrderState.PAID).build());
        log.info("Order is PAID: {}", order);
        applicationEventPublisher.publishEvent(new OrderWaitingEvent(order));
        return order;
    }

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }
}
CoffeeOrderService
@FeignClient(name = "waiter-service", contextId = "coffeeOrder")
public interface CoffeeOrderService {
    @GetMapping("/order/{id}")
    CoffeeOrder getOrder(@PathVariable("id") Long id);

    @PostMapping(path = "/order/", consumes = MediaType.APPLICATION_JSON_VALUE,
            produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
    CoffeeOrder create(@RequestBody NewOrderRequest newOrder);

    // update the order's state
    @PutMapping("/order/{id}")
    CoffeeOrder updateState(@PathVariable("id") Long id,
                            @RequestBody OrderStateRequest orderState);
}
OrderStateRequest
@Getter
@Setter
@ToString
@Builder
public class OrderStateRequest {
    private OrderState state;
}
CoffeeOrderScheduler
@Component
@Slf4j
public class CoffeeOrderScheduler {
    @Autowired
    private CoffeeOrderService coffeeOrderService;
    private Map<Long, CoffeeOrder> orderMap = new ConcurrentHashMap<>();

    @EventListener // listen for OrderWaitingEvent
    public void acceptOrder(OrderWaitingEvent event) {
        // keep the order, keyed by its id
        orderMap.put(event.getOrder().getId(), event.getOrder());
    }

    @Scheduled(fixedRate = 1000) // check the pending orders every second
    public void waitForCoffee() {
        if (orderMap.isEmpty()) {
            return;
        }
        log.info("I'm waiting for my coffee.");
        orderMap.values().stream()
                .map(o -> coffeeOrderService.getOrder(o.getId())) // fetch the latest state from waiter-service
                .filter(o -> OrderState.BREWED == o.getState())   // keep only BREWED orders
                .forEach(o -> {
                    log.info("Order [{}] is READY, I'll take it.", o);
                    // change the state to TAKEN
                    coffeeOrderService.updateState(o.getId(),
                            OrderStateRequest.builder()
                                    .state(OrderState.TAKEN).build());
                    orderMap.remove(o.getId()); // done with this order
                });
    }
}
OrderWaitingEvent
@Data
public class OrderWaitingEvent extends ApplicationEvent {
    private CoffeeOrder order;

    public OrderWaitingEvent(CoffeeOrder order) {
        // pass the order in as the event source and keep a reference to it
        super(order);
        this.order = order;
    }
}
CustomerServiceApplication
@SpringBootApplication
@Slf4j
@EnableDiscoveryClient
@EnableFeignClients
@EnableAspectJAutoProxy
@EnableScheduling // enable scheduled-task support
public class CustomerServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(CustomerServiceApplication.class, args);
    }

    @Bean
    public CloseableHttpClient httpClient() {
        return HttpClients.custom()
                .setConnectionTimeToLive(30, TimeUnit.SECONDS)
                .evictIdleConnections(30, TimeUnit.SECONDS)
                .setMaxConnTotal(200)
                .setMaxConnPerRoute(20)
                .disableAutomaticRetries()
                .setKeepAliveStrategy(new CustomConnectionKeepAliveStrategy())
                .build();
    }
}
Result Analysis
Start CustomerServiceApplication, BaristaServiceApplication, and WaiterServiceApplication
Test with Postman
The state-change PUT request is sent to the server through an event, and the server performs the corresponding update
The client checks its order map on a schedule and logs each completed order it takes