The main problem the multi-level cache architecture solves: storing data along separate dimensions according to how fresh it has to be.

For data whose freshness requirement is low, such as basic product information, it is acceptable if a change only reaches the page, where users can see it, some 5 minutes later; for this we use an asynchronous cache update strategy.

For data whose freshness requirement is high, such as inventory, we use a database + cache double-write approach, which also solves the consistency problem that double-writing brings.

At first glance these two may look almost the same, but one explanation was skipped: for the page path, a static page has to be generated, which takes some time, whereas a double-write is much faster, because it does no page rendering and only has to update the cached data. A minimal sketch of the double-write follows.
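
The sketch below assumes a hypothetical InventoryDao (not part of this project) and the same JedisCluster-style redis client used later in this section; only the write ordering is the point here.

import redis.clients.jedis.JedisCluster;

// Database + cache double-write for high-freshness data (sketch).
public class InventoryService {

    // hypothetical DAO, not part of the source project
    public interface InventoryDao {
        void updateStock(long productId, long stock);
    }

    private final InventoryDao inventoryDao;
    private final JedisCluster jedisCluster;

    public InventoryService(InventoryDao inventoryDao, JedisCluster jedisCluster) {
        this.inventoryDao = inventoryDao;
        this.jedisCluster = jedisCluster;
    }

    // Write mysql first, then redis: if the cache write fails, readers see
    // slightly stale stock rather than stock the database never recorded.
    public void updateStock(long productId, long stock) {
        inventoryDao.updateStock(productId, stock);
        jedisCluster.set("stock_" + productId, String.valueOf(stock));
    }
}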

The cache data production service listens on a message queue; whenever the data source service (the product information management service) changes some data, it pushes a data change message onto the queue.

The cache data production service consumes that data change message, extracts the parameters it indicates, and then calls the corresponding data source service's interface to pull the data, usually from the mysql database. A sketch of the push side follows.
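
To make the message flow concrete, here is a hedged sketch of the push side inside the data source service, written against the plain kafka producer API; the wiring is an assumption, but the message format matches what the cache data production service parses later in this section.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical publisher inside the product information management service:
// after a product row changes, push a small change event to kafka.
public class DataChangePublisher {

    private final KafkaProducer<String, String> producer;

    public DataChangePublisher() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "master:9092,worker1:9092,worker2:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    public void publishProductChange(long productId) {
        // same format the cache data production service parses below
        String message = "{\"serviceId\":\"productInfoService\",\"productId\":\"" + productId + "\"}";
        producer.send(new ProducerRecord<>("eshop-message", message));
    }
}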

The message queue used here is kafka. Another reason for choosing kafka: later we will also use zookeeper to handle distributed concurrent cache updates (for example with a distributed lock; see the sketch below).
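
The distributed lock itself comes later in the project; as a preview, a minimal sketch with Apache Curator (the source does not name a lock library, so Curator is an assumption here):

import java.util.concurrent.TimeUnit;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Sketch only: serialize concurrent cache updates for one product with a zk lock.
public class ZkLockExample {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "master:2181,worker1:2181,worker2:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/eshop/locks/product_1");
        if (lock.acquire(5, TimeUnit.SECONDS)) {  // wait up to 5s for the lock
            try {
                // update the cache for product 1 here
            } finally {
                lock.release();
            }
        }
        client.close();
    }
}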

And since a kafka cluster depends on a zookeeper cluster, we build the zookeeper cluster first, then the kafka cluster.

Both the zookeeper and the kafka cluster should have at least three nodes.

Configuration file overview

[root@master config]# more server.properties 
   # Licensed to the Apache Software Foundation (ASF) under one or more
   # contributor license agreements.  See the NOTICE file distributed with
   # this work for additional information regarding copyright ownership.
   # The ASF licenses this file to You under the Apache License, Version 2.0
   # (the "License"); you may not use this file except in compliance with
   # the License.  You may obtain a copy of the License at
   #
   #    http://www.apache.org/licenses/LICENSE-2.0
   #
   # Unless required by applicable law or agreed to in writing, software
   # distributed under the License is distributed on an "AS IS" BASIS,
   # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   # See the License for the specific language governing permissions and
   # limitations under the License.
   
   # see kafka.server.KafkaConfig for additional details and defaults
   
   ############################# Server Basics #############################
   
   # The id of the broker. This must be set to a unique integer for each broker.
    ## Unique id of each broker in the cluster; must be a non-negative integer. Changing the IP address without changing broker.id does not affect consumers
   broker.id=0
   
   # Switch to enable topic deletion or not, default value is false
    ## Whether topic deletion is allowed; if false, topics can only be marked for deletion, not actually removed
   delete.topic.enable=true
   
   ############################# Socket Server Settings #############################
   
   # The address the socket server listens on. It will get the value returned from 
   # .InetAddress.getCanonicalHostName() if not configured.
   #   FORMAT:
   #     listeners = listener_name://host_name:port
   #   EXAMPLE:
   #     listeners = PLAINTEXT://:9092
   #listeners=PLAINTEXT://:9092
    ## Port on which the broker answers clients
    port=9092
    host.name=192.168.1.128
   
   # Hostname and port the broker will advertise to producers and consumers. If not set, 
   # it uses the value for "listeners" if configured.  Otherwise, it will use the value
   # returned from .InetAddress.getCanonicalHostName().
   #advertised.listeners=PLAINTEXT://:9092
   
   # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
   #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
   
   # The number of threads handling network requests
    ## Maximum number of threads the broker uses to process requests; usually there is no need to change this
   num.network.threads=3
   
   # The number of threads doing disk I/O
    ## Number of threads the broker uses for disk I/O; should be greater than the number of disks
    num.io.threads=8
   
   # The send buffer (SO_SNDBUF) used by the socket server
    ## Socket send buffer; the SO_SNDBUF socket tuning parameter
   socket.send.buffer.bytes=102400
   
   # The receive buffer (SO_RCVBUF) used by the socket server
    ## Socket receive buffer; the SO_RCVBUF socket tuning parameter
   socket.receive.buffer.bytes=102400
   
   # The maximum size of a request that the socket server will accept (protection against OOM)
    ## Maximum size of a socket request, protecting the server from OOM; message.max.bytes must be smaller than socket.request.max.bytes, and it can be overridden by a setting given when a topic is created
   socket.request.max.bytes=104857600
   
   
   ############################# Log Basics #############################
   
   # A comma seperated list of directories under which to store log files
    ## Where kafka stores its data; multiple directories are comma-separated, e.g. /data/kafka-logs-1,/data/kafka-logs-2
   log.dirs=/tmp/kafka-logs
   
   # The default number of log partitions per topic. More partitions allow greater
   # parallelism for consumption, but this will also result in more files across
   # the brokers.
    ## Default number of partitions per topic; the partition count given when a topic is created overrides this default
   num.partitions=1
   
   # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
   # This value is recommended to be increased for installations with data dirs located in RAID array.
    ## Segment files are kept for 7 days by default and are cleaned up once
    ## they expire; that cleanup is done by threads. This sets the number of
    ## threads used to recover and clean the data under each data dir
   num.recovery.threads.per.data.dir=1
   
   ############################# Log Flush Policy #############################
   
   # Messages are immediately written to the filesystem but by default we only fsync() to sync
   # the OS cache lazily. The following configurations control the flush of data to disk.
   # There are a few important trade-offs here:
   #    1. Durability: Unflushed data may be lost if you are not using replication.
   #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
   #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
   # The settings below allow one to configure the flush policy to flush data after a period of time or
   # every N messages (or both). This can be done globally and overridden on a per-topic basis.
   
   # The number of messages to accept before forcing a flush of data to disk
   #log.flush.interval.messages=10000
   
   # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000
   
   ############################# Log Retention Policy #############################
   
   # The following configurations control the disposal of log segments. The policy can
   # be set to delete segments after a period of time, or after a given size has accumulated.
   # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
   # from the end of the log.
   
   # The minimum age of a log file to be eligible for deletion due to age
    ## Longest time a segment file is retained, 7 days (168 hours) by default;
    ## expired segments are deleted, i.e. data older than 7 days is cleaned up.
   log.retention.hours=168
   
   # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
   # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
   #log.retention.bytes=1073741824
   
   # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    ### Size of each log segment file; 1 GB by default
   log.segment.bytes=1073741824
   
   # The interval at which log segments are checked to see if they can be deleted according
   # to the retention policies
    ## The settings above cap each segment at 1 GB and bound retention, so
    ## something has to check the segments periodically against the retention
    ## policies; this sets that check interval (in milliseconds).
    log.retention.check.interval.ms=300000
   
   ############################# Zookeeper #############################
   
   # Zookeeper connection string (see zookeeper docs for details).
   # This is a comma separated host:port pairs, each corresponding to a zk
   # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
   # You can also append an optional chroot string to the urls to specify the
   # root directory for all kafka znodes.
   #zookeeper.connect=localhost:2181
    ## Consumers find the brokers by connecting to Zookeeper.
    ## Zookeeper server connection addresses
   zookeeper.connect=master:2181,worker1:2181,worker2:2181
   
   # Timeout in ms for connecting to zookeeper
   zookeeper.connection.timeout.ms=6000
   
 Start Kafka
    [root@master kafka]# ./bin/kafka-server-start.sh config/server.properties &
   [2018-06-25 02:31:21,931] INFO KafkaConfig values: 
            advertised.host.name = null
           advertised.listeners = null
           advertised.port = null
            authorizer.class.name = 
           auto.create.topics.enable = true
           auto.leader.rebalance.enable = true
           background.threads = 10
           broker.id = 0
           broker.id.generation.enable = true
           broker.rack = null
           compression.type = producer
            connections.max.idle.ms = 600000
           controlled.shutdown.enable = true
           controlled.shutdown.max.retries = 3
            controlled.shutdown.retry.backoff.ms = 5000
           controller.socket.timeout.ms = 30000
 Create a topic  # create a topic named gilbert
   [root@master kafka]# ./bin/kafka-topics.sh --create --zookeeper master:2181,worker1:2181,worker2:2181 --replication-factor 3 --partitions 3 --topic gilbert
   Created topic "gilbert".
 Describe the topic
   [root@master kafka]# ./bin/kafka-topics.sh --describe --zookeeper master:2181,worker1:2181,worker2:2181 --topic gilbert
   Topic:gilbert   PartitionCount:3        ReplicationFactor:3     Configs:
           Topic: gilbert  Partition: 0    Leader: 2       Replicas: 2,0,1 Isr: 2,0,1
           Topic: gilbert  Partition: 1    Leader: 0       Replicas: 0,1,2 Isr: 0,1,2
           Topic: gilbert  Partition: 2    Leader: 1       Replicas: 1,2,0 Isr: 1,2,0
             
   [root@master kafka]# ./bin/kafka-topics.sh --list --zookeeper master:2181,worker1:2181,worker2:2181
   gilbert
   test
 Create a producer
    ./bin/kafka-console-producer.sh --broker-list master:9092 --topic gilbert
 Create consumers; run the consumer command on each of the 3 servers  # on server 192.168.1.128
    [root@master kafka]# ./bin/kafka-console-consumer.sh --zookeeper master:2181,worker1:2181,worker2:2181 --topic gilbert --from-beginning
   Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
     
    # on server 192.168.1.129
    [root@worker1 kafka_2.10-0.10.2.0]# ./bin/kafka-console-consumer.sh --zookeeper master:2181,worker1:2181,worker2:2181 --topic gilbert --from-beginning
   Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
   
    # on server 192.168.1.130
    [root@worker2 kafka_2.10-0.10.2.0]#  ./bin/kafka-console-consumer.sh --zookeeper master:2181,worker1:2181,worker2:2181 --topic gilbert --from-beginning
   Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
   
   
 On server 192.168.1.128, type hello kafka into the producer console to test; the consumers on all 3 servers receive the message normally
Delete a topic
  
   [root@master kafka]# ./bin/kafka-topics.sh --delete --zookeeper master:2181,worker1:2181,worker2:2181 --topic test
   Topic test is marked for deletion.
    Note: This will have no impact if delete.topic.enable is not set to true

Spring Boot integration with Kafka
1. Producer: kafka-producer
 a) pom file
  <dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-starter</artifactId>
   </dependency>
   <dependency>
     <groupId>org.springframework.kafka</groupId>
     <artifactId>spring-kafka</artifactId>
   </dependency>
   <dependency>
     <groupId>com.google.code.gson</groupId>
     <artifactId>gson</artifactId>
     <version>2.8.2</version>
   </dependency>
   <dependency>
     <groupId>org.projectlombok</groupId>
     <artifactId>lombok</artifactId>
     <optional>true</optional>
   </dependency>
   <dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-starter-test</artifactId>
     <scope>test</scope>
   </dependency>
 b) yml configuration file; this example uses a 3-node kafka cluster
   spring:
     kafka:
        bootstrap-servers: master:9092,worker1:9092,worker2:9092
       producer:
         retries: 0
         batch-size: 16384
         buffer-memory: 33554432
         key-serializer: org.apache.kafka.common.serialization.StringSerializer
         value-serializer: org.apache.kafka.common.serialization.StringSerializer
 c) Message entity class
   @Data
   public class Message {
        private Long id;        // id
        private String msg;     // message body
        private Date sendTime;  // timestamp
   }
 d) Producer
    @Component
   @Slf4j
   public class KafkaProducer {
   
       @Autowired
       private KafkaTemplate<String, String> kafkaTemplate;
   
       private Gson gson = new GsonBuilder().create();
   
        // method that sends a message
       public void send() {
           Message message = new Message();
           message.setId(System.currentTimeMillis());
           message.setMsg(UUID.randomUUID().toString());
           message.setSendTime(new Date());
           log.info("+++++++++++++++++++++  message = {}", gson.toJson(message));
            // topic-ideal is the topic name
           kafkaTemplate.send("topic-ideal", gson.toJson(message));
       }
   }
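
 Note that kafkaTemplate.send() is asynchronous, so the method above fires and forgets. If delivery should be confirmed, a callback can be attached to the future returned by the spring-kafka versions used here; a sketch:
    // attach success/failure callbacks instead of fire-and-forget
    // (requires org.springframework.util.concurrent.ListenableFuture
    //  and org.springframework.kafka.support.SendResult)
    ListenableFuture<SendResult<String, String>> future =
            kafkaTemplate.send("topic-ideal", gson.toJson(message));
    future.addCallback(
            result -> log.info("sent ok, offset = {}", result.getRecordMetadata().offset()),
            ex -> log.error("send failed", ex));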
 e) Test class; just run the kafkaProducer method
    @RunWith(SpringRunner.class)
   @SpringBootTest
   public class KafkaProducerApplicationTests {
   
       @Autowired
       private KafkaProducer kafkaProducer;
   
       @Test
       public void kafkaProducer(){
           this.kafkaProducer.send();
       }
   
       @Test
       public void contextLoads() {
       }
   
    }
2. Consumer: kafka-consumer
 a) pom file
  <dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-starter</artifactId>
   </dependency>
   <dependency>
     <groupId>org.springframework.kafka</groupId>
     <artifactId>spring-kafka</artifactId>
   </dependency>
   <dependency>
     <groupId>com.google.code.gson</groupId>
     <artifactId>gson</artifactId>
     <version>2.8.2</version>
   </dependency>
   <dependency>
     <groupId>org.projectlombok</groupId>
     <artifactId>lombok</artifactId>
     <optional>true</optional>
   </dependency>
 b) yml configuration file
   server:
     port: 9999
   spring:
     kafka:
        bootstrap-servers: master:9092,worker1:9092,worker2:9092
       consumer:
         group-id: ideal-consumer-group
         auto-offset-reset: earliest
         enable-auto-commit: true
         auto-commit-interval: 20000
         key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
         value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
 c) Consumer class
   @Component
   @Slf4j
   public class KafkaConsumer {
   
       @KafkaListener(topics = {"topic-ideal"})
       public void consumer(ConsumerRecord<?, ?> record){
           Optional<?> kafkaMessage = Optional.ofNullable(record.value());
           if (kafkaMessage.isPresent()) {
               Object message = kafkaMessage.get();
               log.info("----------------- record =" + record);
               log.info("------------------ message =" + message);
           }
       }
   }


Run the kafka-consumer application, then run the kafkaProducer() method in the kafka-producer project's test class KafkaProducerApplicationTests; you can see in the consumer's log that the messages are received normally.

 

Writing the business logic

  1. Two services send data change messages: the product information service and the product shop information service; every message carries the service name and the product id
  2. After receiving a message, pull the data from the corresponding service by product id
    For this step we take a simplified, simulated approach: the data to be "fetched" is hard-coded in our program, instead of actually writing and calling other services
  3. Product information: id, name, price, picture list, specification, after-sales service, color, size
  4. Product shop information: a different dimension
    We use this dimension to simulate splitting the cached data by dimension: id, shop name, shop level, shop positive-review rate
  5. After pulling each piece of data, serialize it into a JSON string and store it in both ehcache and the redis cache (the entity classes involved are sketched below)
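
The ProductInfo and ShopInfo entity classes used by the code below are not shown in the source; a minimal sketch, with the fields inferred from the simulated JSON strings, might look like this:

import lombok.Data;

// ProductInfo.java: fields inferred from the simulated product JSON below
@Data
public class ProductInfo {
    private Long id;
    private String name;
    private Double price;
    private String pictureList;
    private String specification;
    private String service;
    private String color;
    private String size;
    private Long shopId;
}

// ShopInfo.java (separate file): fields inferred from the simulated shop JSON below
@Data
public class ShopInfo {
    private Long id;
    private String name;
    private Integer level;
    private Double goodCommentRate;
}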

Because this scenario is simulated, the business logic here is implemented quite simply; the important classes are as follows.

After receiving an event, each kind of event is handled separately.


public class KafkaMessageProcessor implements Runnable {

    private KafkaStream kafkaStream;
    private CacheService cacheService;
    private Logger log = LoggerFactory.getLogger(getClass());

    public KafkaMessageProcessor(KafkaStream kafkaStream, CacheService cacheService) {
        this.kafkaStream = kafkaStream;
        this.cacheService = cacheService;
    }

    public void run() {
        ConsumerIterator<byte[], byte[]> it = kafkaStream.iterator();
        while (it.hasNext()) {
            String message = new String(it.next().message());

            // first parse the message into a JSON object
            JSONObject messageJSONObject = JSONObject.parseObject(message);

            // extract the identifier of the service this message belongs to
            String serviceId = messageJSONObject.getString("serviceId");

            // if it is the product information service
            if ("productInfoService".equals(serviceId)) {
                processProductInfoChangeMessage(messageJSONObject);
            } else if ("shopInfoService".equals(serviceId)) {
                processShopInfoChangeMessage(messageJSONObject);
            }
        }
    }

    /**
     * Handle a product info change message
     */
    private void processProductInfoChangeMessage(JSONObject messageJSONObject) {
        // extract the product id
        Long productId = messageJSONObject.getLong("productId");

        // call the product information service's interface
        // simulated here with comments: pass getProductInfo?productId=1 to it
        // the product information service would normally query the database for productId=1 and return the product info

        String productInfoJSON = "{\"id\": 1, \"name\": \"iphone7\", \"price\": 5599, \"pictureList\":\"a.jpg,b.jpg\", \"specification\": \"iphone7 specification\", \"service\": \"iphone7 after-sales service\", \"color\": \"red,white,black\", \"size\": \"5.5\", \"shopId\": 1}";
        ProductInfo productInfo = JSONObject.parseObject(productInfoJSON, ProductInfo.class);
        cacheService.saveProductInfo2LocalCache(productInfo);
        log.info("获取刚保存到本地缓存的商品信息:" + cacheService.getProductInfoFromLocalCache(productId));
        cacheService.saveProductInfo2ReidsCache(productInfo);
    }

    /**
     * Handle a shop info change message
     */
    private void processShopInfoChangeMessage(JSONObject messageJSONObject) {
        // extract the product id
        Long productId = messageJSONObject.getLong("productId");
        Long shopId = messageJSONObject.getLong("shopId");
        // here, too, we simulate fetching the info from the database

        String shopInfoJSON = "{\"id\": 1, \"name\": \"Xiao Wang's phone shop\", \"level\": 5, \"goodCommentRate\":0.99}";
        ShopInfo shopInfo = JSONObject.parseObject(shopInfoJSON, ShopInfo.class);
        cacheService.saveShopInfo2LocalCache(shopInfo);
        log.info("获取刚保存到本地缓存的店铺信息:" + cacheService.getShopInfoFromLocalCache(shopId));
        cacheService.saveShopInfo2ReidsCache(shopInfo);
    }
}
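
KafkaMessageProcessor is written against kafka's old high-level consumer API (KafkaStream / ConsumerIterator). The source does not show how the streams are obtained; a minimal wiring sketch, with the group id and thread count as assumptions, could look like this:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

// Sketch: create one KafkaStream per thread and hand each to a KafkaMessageProcessor.
public class KafkaConsumerBootstrap {

    public static void start(CacheService cacheService) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "master:2181,worker1:2181,worker2:2181");
        props.put("group.id", "eshop-cache-group");  // assumed group id

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("eshop-message", 3);  // 3 streams, matching 3 partitions

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(topicCountMap);

        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (KafkaStream<byte[], byte[]> stream : streams.get("eshop-message")) {
            pool.submit(new KafkaMessageProcessor(stream, cacheService));
        }
    }
}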


Cache reads and writes are encapsulated in a service; the code above operates on the cache through that service.


@Service
public class CacheServiceImpl implements CacheService {
    public static final String CACHE_NAME = "local";

    @Resource
    private JedisCluster jedisCluster;

    /**
     * Save product info to the local cache
     */
    @CachePut(value = CACHE_NAME, key = "'key_'+#productInfo.getId()")
    public ProductInfo saveLocalCache(ProductInfo productInfo) {
        return productInfo;
    }

    /**
     * Get product info from the local cache
     */
    @Cacheable(value = CACHE_NAME, key = "'key_'+#id")
    public ProductInfo getLocalCache(Long id) {
        return null;
    }

    /**
     * Save product info to the local ehcache cache
     */
    @CachePut(value = CACHE_NAME, key = "'product_info_'+#productInfo.getId()")
    public ProductInfo saveProductInfo2LocalCache(ProductInfo productInfo) {
        return productInfo;
    }

    /**
     * Get product info from the local ehcache cache
     */
    @Cacheable(value = CACHE_NAME, key = "'product_info_'+#productId")
    public ProductInfo getProductInfoFromLocalCache(Long productId) {
        return null;
    }

    /**
     * Save shop info to the local ehcache cache
     */
    @CachePut(value = CACHE_NAME, key = "'shop_info_'+#shopInfo.getId()")
    public ShopInfo saveShopInfo2LocalCache(ShopInfo shopInfo) {
        return shopInfo;
    }

    /**
     * Get shop info from the local ehcache cache
     */
    @Cacheable(value = CACHE_NAME, key = "'shop_info_'+#shopId")
    public ShopInfo getShopInfoFromLocalCache(Long shopId) {
        return null;
    }

    /**
     * Save product info to redis
     */
    public void saveProductInfo2ReidsCache(ProductInfo productInfo) {
        String key = "product_info_" + productInfo.getId();
        jedisCluster.set(key, JSONObject.toJSONString(productInfo));
    }

    /**
     * Save shop info to redis
     */
    public void saveShopInfo2ReidsCache(ShopInfo shopInfo) {
        String key = "shop_info_" + shopInfo.getId();
        jedisCluster.set(key, JSONObject.toJSONString(shopInfo));
    }
}
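
For the service above to work, the "local" ehcache cache and the JedisCluster bean have to be defined somewhere. A minimal configuration sketch follows; the cache sizing and the redis node addresses are placeholders, not values taken from the source:

import java.util.HashSet;
import java.util.Set;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import net.sf.ehcache.config.CacheConfiguration;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

@Configuration
@EnableCaching
public class CacheConfig {

    // Backs the "local" cache used by @CachePut / @Cacheable above.
    @Bean
    public EhCacheCacheManager cacheManager() {
        net.sf.ehcache.config.Configuration config = new net.sf.ehcache.config.Configuration();
        config.addCache(new CacheConfiguration()
                .name("local")
                .maxEntriesLocalHeap(1000)   // placeholder size
                .timeToLiveSeconds(300));    // placeholder TTL
        return new EhCacheCacheManager(net.sf.ehcache.CacheManager.newInstance(config));
    }

    // Redis cluster client injected into CacheServiceImpl; node list is a placeholder.
    @Bean
    public JedisCluster jedisCluster() {
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("192.168.99.170", 7001));
        nodes.add(new HostAndPort("192.168.99.171", 7001));
        nodes.add(new HostAndPort("192.168.99.172", 7001));
        return new JedisCluster(nodes);
    }
}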


Testing the business logic

 

1. Create a kafka topic  # create the topic; the name must match the one used in the program: eshop-message
bin/kafka-topics.sh --zookeeper 192.168.99.170:2181,192.168.99.171:2181,192.168.99.172:2181 --topic eshop-message --replication-factor 1 --partitions 1 --create
2. Start a kafka producer on the command line  # create a producer
bin/kafka-console-producer.sh --broker-list 192.168.99.170:9092,192.168.99.171:9092,192.168.99.172:9092 --topic eshop-message
3. Start the application; the consumer begins listening on the kafka topic
Note: with Boot 2.1.x nothing is logged when the connection fails; you need to map the VMs' hostnames locally in
C:\Windows\System32\drivers\etc\hosts
4. In the producer, send two messages: one for the product information service, one for the product shop information service
Since this simulation only branches on serviceId and the other data is hard-coded in the program, pushing two messages that carry a serviceId is enough:
{"serviceId":"productInfoService","productId":"1"}

{"serviceId":"shopInfoService","shopId":"1"}
  1. Check that both messages are received, that the two pieces of data are pulled (simulated), and that the data is written both to ehcache and to the redis cache
  2. Observe ehcache through the printed logs; check redis by connecting to it manually and querying
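
For the redis side, besides connecting by hand, a quick programmatic check of the keys written by CacheServiceImpl could look like this (assuming the JedisCluster bean from the configuration sketch above):

// key format is "product_info_" + id and "shop_info_" + id
String productJson = jedisCluster.get("product_info_1");
String shopJson = jedisCluster.get("shop_info_1");
System.out.println(productJson);
System.out.println(shopJson);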