ELK

First, prepare the ELK installation packages:

jdk-8u162-linux-x64.rpm
elasticsearch-6.2.4.rpm
kibana-6.2.4-x86_64.rpm
logstash-6.2.4.rpm
filebeat-6.3.0-x86_64.rpm
# Version 6.2.4 is used here (Filebeat is 6.3.0)

Install Elasticsearch

< 1  ELK-TEST - [root]: ~ > #  rpm -ivh jdk-8u162-linux-x64.rpm
< 2  ELK-TEST - [root]: ~ > #  rpm -ivh elasticsearch-6.2.4.rpm
< 3  ELK-TEST - [root]: ~ > #  systemctl start elasticsearch
< 4  ELK-TEST - [root]: ~ > #  vim /etc/elasticsearch/elasticsearch.yml
< 5  ELK-TEST - [root]: ~ > # grep -Pv "^(#|$)" /etc/elasticsearch/elasticsearch.yml 
cluster.name: elk
node.name: elk-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
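
This guide runs a single node. If you later add a second node to the cluster, its elasticsearch.yml would point discovery at the first node, roughly as below (a sketch: the second node's name is hypothetical, and the seed IP is taken from this setup):

```yaml
cluster.name: elk                                   # must match the existing cluster name
node.name: elk-node-2                               # hypothetical second node
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["192.168.1.41"]  # seed host: the first node
```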

Start and test

< 6  ELK-TEST - [root]: ~ > # systemctl restart elasticsearch
< 7  ELK-TEST - [root]: ~ > # lsof -i:9200
COMMAND    PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
java      5924 elasticsearch  119u  IPv6  59918      0t0  TCP *:wap-wsp (LISTEN)
java      5924 elasticsearch  136u  IPv6  69396      0t0  TCP 192.168.1.41:wap-wsp->192.168.1.41:54884 (ESTABLISHED)

< 8  ELK-TEST - [root]: ~ > # curl -X GET http://192.168.1.41:9200
{
  "name" : "elk-node-1",
  "cluster_name" : "elk",
  "cluster_uuid" : "i4h5DhHbSzyQ9o0bFzjLZg",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
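
The response can also be checked from a script. A minimal sketch that parses the JSON shown above (trimmed to the fields being checked):

```python
import json

# Response body as returned by `curl http://192.168.1.41:9200` above,
# trimmed to the fields we verify here.
body = '''
{
  "name" : "elk-node-1",
  "cluster_name" : "elk",
  "version" : { "number" : "6.2.4", "lucene_version" : "7.2.1" },
  "tagline" : "You Know, for Search"
}
'''

info = json.loads(body)
print(info["cluster_name"], info["version"]["number"])
```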

Insert a document

< 7  ELK-TEST - [root]: ~ > # curl -H "Content-Type: application/json" -XPOST '192.168.1.41:9200/customer/external/1?pretty' -d' {"name": "Fei Ba" }'
{
  "_index" : "customer",
  "_type" : "external",
  "_id" : "1",
  "_version" : 2,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 1,
  "_primary_term" : 2
}

Note: result "updated" and _version 2 here mean a document with id 1 already existed; on a first run you would see result "created" and _version 1.

Install the head plugin

(1) Install Node.js

wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.5.0-linux-x64.tar.gz
tar -zxvf node-v4.5.0-linux-x64.tar.gz  -C  /opt

Configure environment variables by adding the following to /etc/profile:
export NODE_HOME=/opt/node-v4.5.0-linux-x64
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules

source /etc/profile

(2) Install cnpm (npm itself ships with Node.js)

# npm install -g cnpm --registry=https://registry.npm.taobao.org

(3) Install grunt with npm

# npm install -g grunt
# npm install -g grunt-cli --registry=https://registry.npm.taobao.org --no-proxy

(4) Verify the versions

< 9  ELK-TEST - [root]: ~ > # node -v
v6.14.2
< 10  ELK-TEST - [root]: ~ > # npm -v
3.10.10
< 12  ELK-TEST - [root]: ~ > # grunt --version
grunt-cli v1.2.0

(5) Download the head plugin source

< 12  ELK-TEST - [root]: ~ > # wget https://github.com/mobz/elasticsearch-head/archive/master.zip       # or download it elsewhere and upload it to the server
< 12  ELK-TEST - [root]: ~ > # unzip master.zip -d /opt

(6) Install the dependencies

Enter the elasticsearch-head-master directory and run:

< 11  ELK-TEST - [root]: /opt/elasticsearch-head-master > # npm install

Make sure the following lines are present in the Elasticsearch config (added earlier):

< 12  ELK-TEST - [root]: ~ > # tail -3 /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"

< 12  ELK-TEST - [root]: ~ > # systemctl restart elasticsearch
< 12  ELK-TEST - [root]: /opt/elasticsearch-head-master > # grunt server &         # run in the background

Visit 192.168.1.41:9100 in a browser:

(screenshot: elasticsearch-head web UI)

Install Logstash

< 2  ELK-TEST - [root]: ~ > #  rpm -ivh logstash-6.2.4.rpm
< 40  ELK-TEST - [root]: ~ > # grep -Pv "^(#|$)" /etc/logstash/logstash.yml 
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash

By default, pipeline definition files live in /etc/logstash/conf.d (per path.config above). The directory is empty after installation, so create a pipeline to suit your needs. The example below tails an application log, tags SQL errors, and sends an alert email:

< 41  ELK-TEST - [root]: ~ > # cd /etc/logstash/conf.d/
< 42  ELK-TEST - [root]: /etc/logstash/conf.d > # vim sqlsendmail.conf 
input {
    file {
        type => "trip-service"
        path => "/home/elk/trip-service-*.log"
        start_position => "beginning"
    }
}

filter {
  grok {
     match => { "message" => "\s*\[impl\]\[in\] traceId=%{NUMBER:traceId},reqTime=%{NUMBER:reqTime} req=%{GREEDYDATA:req}" }
  }
  if [priority] == "SQLerror" {    # tag the event if it is an error
     mutate {
        add_tag => ["sqlerror"]
     }
  }
  # count events
  metrics {
      # reset the counters every 60 seconds
      clear_interval => 60
      # flush the counts every 60 seconds
      flush_interval => 60
      # field name for the counter; priority defaults to the log level
      meter => "events_%{priority}"
      # add "sqlerror" as a marker tag on the flushed metric events
      add_tag => "sqlerror"
      # only count events less than 3 seconds old, to avoid counting stale data
      ignore_older_than => 3
  }
  # metric events flushed by the metrics filter carry the "metrics" tag
  if "metrics" in [tags] {
      # run ruby code
      ruby {
          # drop the event (i.e. send nothing) if fewer than 3 WARN events were counted
          code => "event.cancel if event.get('[events_WARN][count]') < 3"
      }
  }
}
output {
  # events tagged "sqlerror" indicate an error
  if "sqlerror" in [tags] {
  # send an alert email
      email {
        to => "923483719@qq.com"
        via => "smtp"
        port => 25
        username => "923483719@qq.com"
        password => "dznxkqcnutnfbbji"
        subject => "[%{@timestamp} xxx server: anomaly found in logs!]"
        body => "new bug ! %{message}"
        htmlbody => "%{message}"
      }
  }
  stdout { codec => rubydebug } # also print to stdout
}

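The grok pattern above can be approximated with a plain regular expression to sanity-check it against a sample line (the sample line is hypothetical; NUMBER is approximated as an integer/decimal and GREEDYDATA as ".*"):

```python
import re

# Rough Python equivalent of the grok pattern in sqlsendmail.conf:
#   \s*\[impl\]\[in\] traceId=%{NUMBER:traceId},reqTime=%{NUMBER:reqTime} req=%{GREEDYDATA:req}
pattern = re.compile(
    r"\s*\[impl\]\[in\] traceId=(?P<traceId>\d+(?:\.\d+)?)"
    r",reqTime=(?P<reqTime>\d+(?:\.\d+)?) req=(?P<req>.*)"
)

# Hypothetical sample line, not taken from the real service logs.
line = '  [impl][in] traceId=10001,reqTime=35 req={"from":"PEK","to":"SHA"}'
m = pattern.search(line)
fields = m.groupdict()
print(fields["traceId"], fields["reqTime"], fields["req"])
```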
Pay attention to directory permissions and ownership:

< 42  ELK-TEST - [root]: /etc/logstash/conf.d > # cd ..
< 43  ELK-TEST - [root]: /etc/logstash > # chown -R logstash:logstash conf.d/
< 44  ELK-TEST - [root]: /etc/logstash > # chmod 644 /var/log/messages

Start and test

< 45  ELK-TEST - [root]: ~ > # cd /usr/share/logstash/
< 46  ELK-TEST - [root]: ~ > # bin/logstash -e 'input { stdin { } } output { stdout {} }'

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console
# Logstash starts, but this invocation prints the warnings above. To get rid of them, create a "config" directory under $LS_HOME, symlink the files from /etc/logstash/ into it, and run again:
< 47  ELK-TEST - [root]: ~ ># mkdir -p /usr/share/logstash/config/
< 48  ELK-TEST - [root]: ~ ># ln -s /etc/logstash/* /usr/share/logstash/config
< 49  ELK-TEST - [root]: ~ ># chown -R logstash:logstash /usr/share/logstash/config/
< 50  ELK-TEST - [root]: ~ ># bin/logstash -e 'input { stdin { } } output { stdout {} }'

Install Filebeat

< 33  ELK-TEST - [root]: ~ > # rpm -ivh filebeat-6.3.0-x86_64.rpm
< 34  ELK-TEST - [root]: /etc/filebeat > # grep -Pv "^( *#|$)" filebeat.yml 
filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.elasticsearch:
  hosts: ["192.168.1.41:9200"]
logging:
    to_syslog: false
    to_files: true
    files:
        rotateeverybytes: 10485760     # 默认的10MB
        level: info

Note: since Filebeat 6.0, "enabled" defaults to false for log inputs and must be set to true, or the input will not be collected. "paths" lists the log files you want to collect and analyze.


To send logs directly to Elasticsearch, edit the Elasticsearch output section; to send them to Logstash instead, edit the Logstash output section. Only one output may be enabled at a time, so comment out the others.
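
For example, to ship to Logstash instead of Elasticsearch (this assumes a Logstash beats input listening on port 5044, which is not configured elsewhere in this guide), the output section of filebeat.yml would look roughly like:

```yaml
# Comment out the Elasticsearch output...
#output.elasticsearch:
#  hosts: ["192.168.1.41:9200"]

# ...and enable the Logstash output instead.
output.logstash:
  hosts: ["192.168.1.41:5044"]
```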


Install Kibana

< 63  ELK-TEST - [root]: ~ > #  rpm -ivh kibana-6.2.4-x86_64.rpm 
< 64  ELK-TEST - [root]: ~ > #  rpm -ql kibana |grep '/etc/'
< 65  ELK-TEST - [root]: ~ > #  cd /etc/kibana/
< 66  ELK-TEST - [root]: ~ > #  vim /etc/kibana/kibana.yml 
< 67  ELK-TEST - [root]: ~ > #  grep -Pv "^(#|$)" /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.1.41"
elasticsearch.url: "http://192.168.1.41:9200"
< 68  ELK-TEST - [root]: ~ > # systemctl start kibana

(screenshot: Kibana showing the collected logs)

The screenshot above shows that logs are being pushed successfully.