Preface

Alibaba's Double 11 sales dashboard has become something of a landmark. Real-time dashboards are being adopted by more and more companies to surface key data metrics promptly, and in practice they rarely compute just one or two dimensions. Because Flink performs true stream processing rather than micro-batching, it is a better fit for dashboard workloads than Spark Streaming. This article abstracts a simple model from the author's day-to-day work and briefly walks through the computation flow (which is, of course, mostly source code).


Data Format and Ingestion

A simplified sub-order message body looks like this.

{
    "userId": 234567,
    "orderId": 2902306918400,
    "subOrderId": 2902306918401,
    "siteId": 10219,
    "siteName": "site_blabla",
    "cityId": 101,
    "cityName": "北京市",
    "warehouseId": 636,
    "merchandiseId": 187699,
    "price": 299,
    "quantity": 2,
    "orderStatus": 1,
    "isNewOrder": ,
    "timestamp": 1572963672217
}

Since an order may contain multiple kinds of merchandise, it is split into sub-orders, and each JSON message represents one sub-order (for example, an order with two different items yields two sub-order messages but must still be counted as a single order). We now want to compute the following metrics per natural day and refresh them on the dashboard once per second:

  • Per-site totals (a site is identified by siteId): order count, sub-order count, sales quantity, and GMV;
  • The current top-N merchandise items by sales quantity (identified by merchandiseId), together with their quantities.

Since the dashboard's overriding requirement is real-time freshness, waiting for late data is clearly impractical, so we use processing time as the time characteristic and checkpoint at a 1-minute interval.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
// checkpoint every minute with exactly-once semantics; abort a checkpoint after 30 s
env.enableCheckpointing(60 * 1000, CheckpointingMode.EXACTLY_ONCE);
env.getCheckpointConfig().setCheckpointTimeout(30 * 1000);
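
A few more checkpoint knobs are often worth pinning down as well. The following is an optional sketch, not part of the original job; all three calls are standard methods on Flink's CheckpointConfig.

CheckpointConfig ckptConf = env.getCheckpointConfig();
// leave at least 10 s between the end of one checkpoint and the start of the next
ckptConf.setMinPauseBetweenCheckpoints(10 * 1000);
// never run two checkpoints concurrently
ckptConf.setMaxConcurrentCheckpoints(1);
// retain the last completed checkpoint if the job is cancelled, so it can still be restored
ckptConf.enableExternalizedCheckpoints(
    CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);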

Then subscribe to the Kafka order topic as the data source.

    Properties consumerProps = ParameterUtil.getFromResourceFile("kafka.properties");
    DataStream<String> sourceStream = env
      .addSource(new FlinkKafkaConsumer011<>(
        ORDER_EXT_TOPIC_NAME,                        // topic
        new SimpleStringSchema(),                    // deserializer
        consumerProps                                // consumer properties
      ))
      .setParallelism(PARTITION_COUNT)
      .name("source_kafka_" + ORDER_EXT_TOPIC_NAME)
      .uid("source_kafka_" + ORDER_EXT_TOPIC_NAME);

Assigning operator IDs to stateful operators (by calling the uid() method) is a good habit; it ensures that a Flink job restarted from a savepoint can restore its state correctly. To stay on the safe side, the Flink documentation also recommends explicitly assigning an ID to every operator; see: https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html#should-i-assign-ids-to-all-operators-in-my-job

Next, convert the JSON data into POJOs, using FastJSON as the JSON framework.

    DataStream<SubOrderDetail> orderStream = sourceStream
      .map(message -> JSON.parseObject(message, SubOrderDetail.class))
      .name("map_sub_order_detail").uid("map_sub_order_detail");

Since the JSON is pre-processed into a standardized format, the POJO class SubOrderDetail can be written very concisely with Lombok. If some JSON field names were irregular, we would have to hand-write the getters and setters and annotate them with @JSONField to spell out the mapping (see the sketch after the class below).

@Getter
@Setter
@NoArgsConstructor
@AllArgsConstructor
@ToString
public class SubOrderDetail implements Serializable {
  private static final long serialVersionUID = 1L;
  
  private long userId;
  private long orderId;
  private long subOrderId;
  private long siteId;
  private String siteName;
  private long cityId;
  private String cityName;
  private long warehouseId;
  private long merchandiseId;
  private long price;
  private long quantity;
  private int orderStatus;
  private int isNewOrder;
  private long timestamp;
}
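
To illustrate the @JSONField case: suppose, hypothetically, that the upstream JSON used a snake_case key such as order_status. The class name and key below are made up for this sketch; FastJSON would then need the mapping spelled out on the accessors.

public class SubOrderDetailWithAlias implements Serializable {
  private static final long serialVersionUID = 1L;

  private int orderStatus;

  // hypothetical irregular key: map the JSON key "order_status" onto this property
  @JSONField(name = "order_status")
  public int getOrderStatus() { return orderStatus; }

  @JSONField(name = "order_status")
  public void setOrderStatus(int orderStatus) { this.orderStatus = orderStatus; }
}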

Computing Site Metrics

Key the sub-order stream by site ID, apply a 1-day tumbling window, and attach a ContinuousProcessingTimeTrigger that fires every second. Mind the time zone when working with processing time; it is a perennial gotcha (hence the -8 hour offset below).

    WindowedStream<SubOrderDetail, Tuple, TimeWindow> siteDayWindowStream = orderStream
      .keyBy("siteId")
      // offset the 1-day window by -8 hours so it aligns with midnight in UTC+8 (Beijing time)
      .window(TumblingProcessingTimeWindows.of(Time.days(1), Time.hours(-8)))
      // fire every second, emitting a partial result for the current day
      .trigger(ContinuousProcessingTimeTrigger.of(Time.seconds(1)));

Next comes the aggregate function.

    DataStream<OrderAccumulator> siteAggStream = siteDayWindowStream
      .aggregate(new OrderAndGmvAggregateFunc())
      .name("aggregate_site_order_gmv").uid("aggregate_site_order_gmv");


  public static final class OrderAndGmvAggregateFunc
    implements AggregateFunction<SubOrderDetail, OrderAccumulator, OrderAccumulator> {
    private static final long serialVersionUID = 1L;

    @Override
    public OrderAccumulator createAccumulator() {
      return new OrderAccumulator();
    }

    @Override
    public OrderAccumulator add(SubOrderDetail record, OrderAccumulator acc) {
      if (acc.getSiteId() == 0) {
        acc.setSiteId(record.getSiteId());
        acc.setSiteName(record.getSiteName());
      }
      acc.addOrderId(record.getOrderId());
      acc.addSubOrderSum(1);
      acc.addQuantitySum(record.getQuantity());
      acc.addGmv(record.getPrice() * record.getQuantity());
      return acc;
    }

    @Override
    public OrderAccumulator getResult(OrderAccumulator acc) {
      return acc;
    }

    @Override
    @Override
    public OrderAccumulator merge(OrderAccumulator acc1, OrderAccumulator acc2) {
      if (acc1.getSiteId() == 0) {
        acc1.setSiteId(acc2.getSiteId());
        acc1.setSiteName(acc2.getSiteName());
      }
      // merging mirrors add(): union the order-ID sets and sum the counters
      acc1.addOrderIds(acc2.getOrderIds());
      acc1.addSubOrderSum(acc2.getSubOrderSum());
      acc1.addQuantitySum(acc2.getQuantitySum());
      acc1.addGmv(acc2.getGmv());
      return acc1;
    }
  }