Metrics: aggregate individual measurements along the time dimension and present them as numbers (second priority -> UI rendered via Prometheus)

Logging: describes discrete, non-continuous events, presented as text (most important -> implemented with ELK)

Tracing: traces what happened within a single request (lower priority -> implemented with Pinpoint)

Overall flow: (diagram not reproduced here)


All applications are deployed on Docker. Docker primer:

    Comparison with the Java ecosystem (diagram not reproduced here):


    For a detailed comparison see: docker

Main directory structure for this chapter:

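    Reconstructed from the configuration paths used later in this chapter (the original diagram is not reproduced; your layout may differ), the structure is roughly:

monitoring/
├── docker-compose.yml
├── prometheus/
│   └── prometheus.yml
├── grafana/
│   ├── config/grafana.ini
│   ├── config.monitoring
│   └── provisioning/datasources/datasource.yml
├── docker-elk/        (ELK stack, added later in this chapter)
└── kafka-docker/      (Kafka, added later; exact location may differ)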

Prometheus environment setup and usage:

    For an introduction see: Prometheus

    Create a monitoring folder, with grafana and prometheus subfolders that hold the corresponding configuration files.

    grafana folder:

config.grafana.ini:
########################## SMTP/Emailing #####################
# mail (SMTP) server configuration
[smtp]
enabled = true
# outgoing mail server
host = smtp.qq.com:465
# SMTP account
user = 294636185@qq.com
# SMTP password
password = xuyu
# sender address
from_address = 294636185@qq.com
# sender display name
from_name = xuyu

provisioning.datasources.datasource.yml:
apiVersion: 1

deleteDatasources:
- name: Prometheus
  orgId: 1

datasources:
- name: Prometheus
  type: prometheus
  access: proxy
  orgId: 1
  url: http://prometheus:9090
  basicAuth: false
  isDefault: true
  version: 1
  editable: true

config.monitoring:

# admin password
GF_SECURITY_ADMIN_PASSWORD=password
# disable user self-registration
GF_USERS_ALLOW_SIGN_UP=false

    prometheus folder:

prometheus.yml  // tells Prometheus which applications to scrape


global:  # global settings
    scrape_interval: 15s  # default scrape interval

scrape_configs:  # scrape targets; two jobs are defined below
- job_name: 'springboot-app'  # job that scrapes the springboot-app service
  scrape_interval: 10s  # scrape interval for this job
  metrics_path: '/actuator/prometheus'  # metrics path; the service must expose this endpoint (see the server-side changes below)
  static_configs:  # where to scrape from
  - targets: ['host.docker.internal:9082']  # scrape the Docker host machine
    labels:  # labels attached to the scraped series
      application: 'springboot-app'  # label value

- job_name: 'prometheus'  # job that scrapes Prometheus itself
  scrape_interval: 5s  # scrape interval

  static_configs:  # scrape the local port 9090
  - targets: ['localhost:9090']
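Optionally, the file can be validated before starting anything with the promtool utility that ships with Prometheus:

promtool check config prometheus/prometheus.yml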
docker-compose.yml  // starts the Prometheus and Grafana containers

version: "3"
services:
    prometheus:  //以map的形式配置
        image: prom/prometheus:v2.4.3  //使用哪个镜像
        container_name: 'prometheus'  //镜像对应容器名称
        volumes:  //挂载卷
        - ./prometheus/:/etc/prometheus/  //映射值  本地文件/:/镜像中地址->prometheus读取配置文件
        ports:  //端口映射
        - '9092:9090'  //映射到9092端口
    grafana:
        image: grafana/grafana:5.2.4  //镜像
        container_name: 'grafana'  //容器
        ports:
        - '3000:3000'
        volumes:  //挂在卷替换
        - ./grafana/config/grafana.ini:/etc/grafana/grafana.ini
        - ./grafana/provisioning/:/etc/grafgana/provisioning
        env_file:  //环境配置文件
        - ./grafana/config.monitoring
        depends_on:  //依赖上面的prometheus配置
        - prometheus

    Frontend access: open localhost:9092/graph to check the current state of the environment.
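    A minimal way to bring the stack up (assuming the files above sit under the monitoring folder as described):

cd monitoring
docker-compose up -d
# scrape targets and their health:  http://localhost:9092/targets
# Grafana UI:                       http://localhost:3000  (admin / the password from config.monitoring)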

    Changes to the Order service code:

pom.xml, add:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>2.1.6.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

application.yml, add:
server:
  port: 9082

management:
  endpoint:
    prometheus:
      enabled: true
  endpoints:
    web:
      exposure:
        include:
          - prometheus
          - info
          - health

ActuatorSecurityConfig.java  // security configuration for the actuator endpoints
/**
 * @author aric
 * @create 2021-05-25-19:07
 * @fun  accessing the Prometheus endpoint must not require authentication
 */
@Configuration
public class ActuatorSecurityConfig extends ResourceServerConfigurerAdapter {
    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()  // authorize requests
                .requestMatchers(EndpointRequest.toAnyEndpoint()).permitAll()  // request matcher: no authentication required for any actuator endpoint
                .anyRequest().authenticated();  // everything else still requires authentication
    }
}
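With the endpoints exposed and permitted as above, the scrape path can be checked directly once the Order service is running (port 9082 as configured earlier):

curl http://localhost:9082/actuator/prometheus
# should print plain-text metrics such as jvm_memory_used_bytes{...}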

Test case (sends requests in a loop; exceptions are swallowed):

public static void main(String[] args) throws InterruptedException {
    RestTemplate restTemplate = new RestTemplate();

    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
    headers.set("Authorization", "bearer token...");

    OrderInfo info = new OrderInfo();
    info.setProductId(1231);

    HttpEntity<OrderInfo> entity = new HttpEntity<>(info, headers);

    while (true) {
        try {
            restTemplate.exchange("http://order.imooc.com:9082/orders", HttpMethod.POST, entity, String.class);
        } catch (Exception e) {
            // ignore and keep looping
        }
        Thread.sleep(100);
    }
}

Alert e-mails can then be configured from the Grafana UI.

Main Metrics types:

    Counter: a counter that only ever increases

    Gauge: a value that can go up and down

    Histogram: distribution statistics with built-in buckets

    Summary: distribution statistics with quantiles computed on the client side
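    As a sketch of the other two types, using the same Prometheus simpleclient API as the Counter and Summary beans below (the metric names here are only illustrative):

// Gauge: e.g. the number of requests currently in flight
Gauge inProgress = Gauge.build("is_requests_in_progress", "requests currently being processed")
        .register(prometheusMeterRegistry.getPrometheusRegistry());
inProgress.inc();   // on request start
inProgress.dec();   // on request completion

// Histogram: request latency bucketed on the server side
Histogram latencyHistogram = Histogram.build("is_request_latency_histogram", "request latency distribution")
        .buckets(0.05, 0.1, 0.5, 1, 5)   // bucket upper bounds, in seconds
        .register(prometheusMeterRegistry.getPrometheusRegistry());
latencyHistogram.observe(0.23);          // record one observation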

    Example: monitor how an endpoint is being called:

PrometheusMetricsConfig.java  // defines a Counter and a Summary
@Configuration
public class PrometheusMetricsConfig {

    @Autowired
    private PrometheusMeterRegistry prometheusMeterRegistry;

    // define a Counter that counts requests; it is incremented in the interceptor below
    @Bean
    public Counter requestCounter(){
        return Counter.build("is_request_count","count_request_by_service")  // metric name and help text
                .labelNames("service","method","code")  // label dimensions carried by the metric
                .register(prometheusMeterRegistry.getPrometheusRegistry());  // register with the Prometheus registry
    }

    // define a Summary that tracks request latency
    @Bean
    public Summary requestLatency(){
        return Summary.build("is_request_latency","monitor request latency by service")
                .quantile(0.5,0.05)
                .quantile(0.9,0.01)
                .labelNames("service","method","code")
                .register(prometheusMeterRegistry.getPrometheusRegistry());
    }
}

PrometheusMetricsInterceptor.java  // interceptor that records the metrics
@Component
public class PrometheusMetricsInterceptor extends HandlerInterceptorAdapter {

    @Autowired
    private Counter requestCounter;

    @Autowired
    private Summary requestLatency;

    @Override  // record the start time before the request is handled, so the duration can be computed later
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler){
        request.setAttribute("startTime", new Date().getTime());
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, @Nullable Exception ex){
        String service = request.getRequestURI();  // request path

        requestCounter.labels(service, request.getMethod(), String.valueOf(response.getStatus()))  // set the label values for this observation
                .inc();  // inc() increments the counter by 1
        long duration = new Date().getTime() - (Long) request.getAttribute("startTime");
        requestLatency.labels(service, request.getMethod(), String.valueOf(response.getStatus())).observe(duration);  // record the request duration
    }

}

WebConfig.java  // registers the interceptor
/**
 * @author aric
 * @create 2021-05-26-11:08
 * @fun  makes the Prometheus interceptor take effect
 */
@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Autowired
    private PrometheusMetricsInterceptor prometheusMetricsInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry){
        registry.addInterceptor(prometheusMetricsInterceptor)
                .addPathPatterns("/**");  // all requests go through the Prometheus interceptor
    }
}

   With the above in place, dashboards can be built in Grafana to visualize these metrics.
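   Example PromQL queries for such a dashboard (sample queries only; depending on the client library version the Counter may be exposed with a _total suffix):

# requests per second, broken down by the labels defined above
rate(is_request_count[1m])

# 90th-percentile latency reported by the Summary
is_request_latency{quantile="0.9"}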

ELK:

        ELK overview: in a microservice architecture an error log line can end up on any server; without ELK you would have to check the servers one by one, which is inefficient. ELK collects the logs from every server in one place and displays them in Kibana.


    1. Download docker-elk: github.com/search?q=docker+elk

    2. Put it in the local monitoring folder

    3. In the docker-elk directory run: docker-compose up

    4. Ports exposed by the services:

        *5000: Logstash TCP input  // receives the log lines

        *9200: Elasticsearch HTTP  // the search engine that ingests data and builds indices; reachable at localhost:9200, user: elastic, password: changeme

        *9300: Elasticsearch TCP transport  // used for cluster communication

        *5601: Kibana  // the UI, reachable at localhost:5601, user: elastic, password: changeme
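    Quick checks that the stack is actually up (credentials as listed above):

curl -u elastic:changeme http://localhost:9200               # Elasticsearch cluster info
curl -u elastic:changeme http://localhost:9200/_cat/indices  # indices created so far
# Kibana UI: http://localhost:5601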

    Code walkthrough:

docker-compose.yml:

version: '3.2'

services:
    elasticsearch:  # Elasticsearch installation
        build:
            context: elasticsearch/
            args:
                ELK_VERSION: $ELK_VERSION
        volumes:  # local files mapped into the container
          - type: bind
            source: ./elasticsearch/config/elasticsearch.yml
            target: /usr/share/elasticsearch/config/elasticsearch.yml
            read_only: true
          - type: volume
            source: elasticsearch
            target: /usr/share/elasticsearch/data
        ports:  # port mapping
          - "9200:9200"
          - "9300:9300"
        environment:  # JVM settings
          ES_JAVA_OPTS: "-Xmx256m -Xms256m"
          ELASTIC_PASSWORD: changeme
        networks:
          - elk

    logstash:
        build:  # Logstash installation
            context: logstash/
            args:
                ELK_VERSION: $ELK_VERSION
        volumes:  # Logstash config mapped into the container
          - type: bind
            source: ./logstash/config/logstash.yml
            target: /usr/share/logstash/config/logstash.yml
            read_only: true
          - type: bind
            source: ./logstash/pipeline
            target: /usr/share/logstash/pipeline
            read_only: true
        ports:  # Logstash port mapping
          - "5000:5000"
          - "9600:9600"
        environment:  # JVM settings
          LS_JAVA_OPTS: "-Xmx256m -Xms256m"
        networks:
          - elk
        depends_on:
          - elasticsearch

    kibana:
        build:  # same pattern as above
            context: kibana/
            args:
                ELK_VERSION: $ELK_VERSION
        volumes:
          - type: bind
            source: ./kibana/config/kibana.yml
            target: /usr/share/kibana/config/kibana.yml
            read_only: true
        ports:
          - "5601:5601"
        networks:
          - elk
        depends_on:
          - elasticsearch

networks:
    elk:
        driver: bridge
docker-elk.kibana.config.kibana.yml:

server.name: kibana  # instance name
server.host: "0"  # listen on all interfaces
elasticsearch.hosts: ["http://elasticsearch:9200"]  # address of the Elasticsearch backend
xpack.monitoring.ui.container.elasticsearch.enabled: true  # security is enabled, so credentials are required

elasticsearch.username: elastic
elasticsearch.password: changeme
docker-elk.logstash.config.logstash.yml:

docker-elk.logstash.pipeline.logstash.conf:

# log lines arrive on TCP port 5000
input{
    tcp{
        port => 5000
    }
}

# implement your own filter conditions
filter{
    grok{  # splits the whole message field; the matching rules follow the grok patterns documented on GitHub
        match => {
            "message" => "%{TIMESTAMP_ISO8601:time}"
        }
    }
}

output{
    elasticsearch{
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}
pom.xml
// logback ships its log output to Logstash via this encoder
<dependency>
     <groupId>net.logstash.logback</groupId>
     <artifactId>logstash-logback-encoder</artifactId>
     <version>6.2</version>
</dependency>
resources.logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
<!--    forward the logs produced by Spring Boot to Logstash -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<!--        where Logstash is listening -->
        <destination>localhost:5000</destination>
<!--        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />  alternative: send the events directly as JSON -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- pull in Spring Boot's default logging configuration (defines the CONSOLE appender) -->
    <include
        resource="org/springframework/boot/logging/logback/base.xml" />

    <!-- INFO-level logs are written to both appenders -->
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
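A quick way to verify the pipeline, assuming the Order service uses this logback-spring.xml: start the application, trigger any request, then check that a logstash-* index has appeared (Logstash writes to a logstash-<date> index by default when no index is configured):

curl -u elastic:changeme http://localhost:9200/_cat/indices
# then create an index pattern logstash-* in Kibana (http://localhost:5601) to browse the log lines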

    Extension: with large log volumes Logstash can be overwhelmed, so Kafka is inserted as a buffering middle layer to smooth out peaks. (Diagram not reproduced here.)


    Install Kafka: github.com/search?q=docker+kafka

    In the kafka-docker directory run: docker-compose -f docker-compose-single-broker.yml up

    Walkthrough of the main configuration:

pom.xml

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
Change the Spring Boot logging so that log lines are written to Kafka
logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration>
<configuration>
<!--    forward the logs produced by Spring Boot to Kafka -->
    <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <topic>test</topic>  <!-- topic to send to -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />  <!-- partition keying strategy -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />  <!-- deliver asynchronously; when Kafka is unavailable, messages fall back to the appender referenced below (CONSOLE) -->
        <producerConfig>bootstrap.servers=192.168.0.10:9092</producerConfig>  <!-- Kafka broker address -->
        <appender-ref ref="CONSOLE"/>
    </appender>

    <!-- pull in Spring Boot's default logging configuration (defines the CONSOLE appender) -->
    <include
        resource="org/springframework/boot/logging/logback/base.xml" />

    <!-- INFO-level logs are written to both appenders -->
    <root level="INFO">
        <appender-ref ref="KAFKA" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
kafka-docker.test.docker-compose-single-broker.yml

version: '2'
services:
    zookeeper:  # Zookeeper installation
        image: wurstmeister/zookeeper
        ports:
          -  "2181:2181"
    kafka:  # Kafka installation
        build: .
        ports:
          -  "9092:9092"  # port mapping
        environment:
            KAFKA_ADVERTISED_HOST_NAME: 192.168.0.10  # change to your own machine's IP
            KAFKA_CREATE_TOPICS: "test:1:1"  # create the Kafka topic "test" (1 partition, 1 replica)
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181  # connect to Zookeeper
        volumes:
          -  /var/run/docker.sock:/var/run/docker.sock
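To confirm that log lines are actually reaching Kafka, a console consumer can be run inside the broker container (a sanity check; the container name may differ on your machine):

docker exec -it kafka-docker_kafka_1 \
  kafka-console-consumer.sh --bootstrap-server 192.168.0.10:9092 --topic test --from-beginning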
Change Logstash to consume messages from Kafka:
docker-elk.logstash.pipeline.logstash.conf

input{
##    tcp{
##       port => 5000
##    }

    kafka{
        id => "my_plugin_id"
        bootstrap_servers => "192.168.0.10:9092"
        topics => ["test"]
        auto_offset_reset => "latest"
    }
}

# implement your own filter conditions
filter{
    grok{  # splits the whole message field; the matching rules follow the grok patterns documented on GitHub
        match => {
            "message" => "%{TIMESTAMP_ISO8601:time}"
        }
    }
}

output{
    elasticsearch{
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
    }
}

 Pinpoint call-chain (tracing) monitoring

    A very detailed usage tutorial: Pinpoint tutorial

    Install Pinpoint: github.com/naver/pinpoint-docker

    In the pinpoint-docker directory run: docker-compose up

    The web UI is then available at localhost:8079

    Non-invasive (agent-based) usage:

    Download the pinpoint-agent from github.com/naver/pinpoint/releases; find pinpoint-agent.tar.gz under Assets

    Configuration walkthrough:

Edit the main parameters in pinpoint.config

profiler.collector.ip=127.0.0.1  # address of the Pinpoint collector; set to the local or remote collector as appropriate
profiler.sampling.rate=1  # sampling rate; 1 = 100%, which produces a lot of data (20 corresponds to 5%)
profiler.applicationservertype=SPRING_BOOT  # type of server being sampled

profiler.springboot.enable=true  # enable Spring Boot support
profiler.springboot.bootstrap.main=org.springframework.boot.loader.JarLauncher...  # main classes of the applications to monitor

profiler.logback.logging.transactioninfo=true  # writes the transaction id into the Logback MDC; see the [%X{PtxId}] placeholder in the log pattern of the ProductApi application.yml below. Because every log line of one request then carries the same PtxId across services, ELK can filter on PtxId to pull out the whole call chain's log lines.

 


Configure the agent parameters on the application's launch command:
First parameter: the location of the pinpoint-agent
Second parameter: agentId, which must be unique per instance
Third parameter: instances belonging to the same cluster must share the same applicationName
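For example, a sketch of the launch command (paths, ids and names below are placeholders):

java -javaagent:/path/to/pinpoint-agent/pinpoint-bootstrap-<version>.jar \
     -Dpinpoint.agentId=your-app-01 \
     -Dpinpoint.applicationName=your-app \
     -jar your-app.jar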

    Create a new product-api application to test the monitoring:

pom.xml

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

application.yml
server:
    port: 8064

logging:  # change the log format so the Pinpoint transaction id (PtxId) is printed
    pattern:
        console: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%X{PtxId}] [%thread] %-5level %logger{50} - %msg%n"
ProductApi:

@SpringBootApplication
@RestController
@Slf4j
public class ProductApi{

    private RestTemplate restTemplate = new RestTemplate();

    @GetMapping("/product")
    public String getProduct(){
        log.info("get product");
        restTemplate.getForObject("http://localhost:8080/users/13",String.class);
        return "hehe";
    }

    public static void main(String[] args){
        SpringApplication.run(ProductApi.class,args);
    }
}

Configure the Pinpoint agent parameters on the startup command, and add ProductApi to the application list in Pinpoint's configuration file.