Continued from the previous post: ELK Log Analysis System, the ELK Setup part.

1、Overall architecture

This hands-on post builds on the setup post: Elasticsearch and logstash are added on the 139 server, along with nginx and httpd services. The nginx and httpd logs are converted to JSON, and a logstash configuration is added to collect /var/log/messages plus the nginx and httpd logs from the 139 server. The overall architecture is shown below:

(architecture diagram)

2、Elasticsearch configuration

1) 192.168.171.129

(screenshot of the elasticsearch configuration)

2) 192.168.171.139 (with the head plugin already installed; same steps as on 129)

(screenshot of the elasticsearch configuration)

3) Access the head plugin and check the cluster

(screenshot of the head plugin page)


OK, indices can now be created. Next, ship the nginx, apache, messages, and secure logs to the front end for display (if Nginx is already installed, just edit its config; otherwise install it first).

3、Install nginx (to collect nginx logs)

1) Install
yum -y install bash-completion net-tools lrzsz gcc gcc-c++ make cmake openssl openssl-devel pcre pcre-devel zlib zlib-devel unzip zip      #install dependencies
tar -xf nginx-1.14.2.tar.gz                    #unpack
groupadd www                                   #create the group
useradd -g www www                             #create the user
./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module --with-stream --without-http_rewrite_module
                                               #configure
make && make install                           #build and install
cd /usr/local/nginx/  &&  ./sbin/nginx         #start


2) Default log format:

(screenshot of the default nginx log format)


Define the nginx log format as JSON ("XXXX" is a placeholder for your own domain; note the variable is $remote_addr):

log_format access_json   '{"timestamp":"$time_iso8601",'
                        '"hostname":"$hostname",'
                        '"ip":"$remote_addr",'
                        '"request_method":"$request_method",'
                        '"domain":"XXXX",'
                        '"size":$body_bytes_sent,'
                        '"status": $status,'
                        '"responsetime":$request_time,'
                        '"sum":"1"'
                        '}';
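Because $body_bytes_sent, $status, and $request_time are emitted without quotes, they arrive in Elasticsearch as numbers rather than strings. A quick way to sanity-check that a line in this format really is valid JSON (the sample values below are invented for illustration):

```shell
# A fabricated line in the access_json format above; the numeric fields
# (size, status, responsetime) are deliberately unquoted.
sample='{"timestamp":"2019-01-01T00:00:00+08:00","hostname":"web1","ip":"1.2.3.4","request_method":"GET","domain":"XXXX","size":612,"status": 200,"responsetime":0.003,"sum":"1"}'

# If python can parse it, downstream JSON handling should accept it too.
echo "$sample" | python3 -m json.tool > /dev/null && echo "valid JSON"
```

If a numeric variable ever comes through empty, the unquoted value produces broken JSON, which is a common reason individual log lines fail to index.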
3) Update the nginx log settings (/usr/local/nginx/conf)
1> Around line 21, add the log_format block defined above inside the http module.
2> Change "access_log logs/access.log main" to "access_log logs/nginx.access.log access_json" so the JSON format defined above is used.

Empty the original access.log, restart the nginx service, then follow the new log:

tailf /usr/local/nginx/logs/nginx.access.log


4、Install httpd (to collect Apache logs)

1) Install
yum -y install httpd         #install the httpd service
vim /etc/httpd/conf/httpd.conf
Listen 8080                  #nginx already occupies port 80, so change 80 to 8080
systemctl start httpd        #start the service


2) Default log format:

(screenshot of the default Apache log format)


Define the Apache log format as JSON (the nickname at the end, ls_apache_json, is what CustomLog will reference):

LogFormat "{ \
        \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
        \"@version\": \"1\", \
        \"tags\":[\"apache\"], \
        \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
        \"clientip\": \"%a\", \
        \"duration\": %D, \
        \"status\": %>s, \
        \"request\": \"%U%q\", \
        \"urlpath\": \"%U\", \
        \"urlquery\": \"%q\", \
        \"bytes\": %B, \
        \"method\": \"%m\", \
        \"site\": \"%{Host}i\", \
        \"referer\": \"%{Referer}i\", \
        \"useragent\": \"%{User-agent}i\" \
       }" ls_apache_json
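The @timestamp field uses Apache's strftime-style %{...}t syntax, so it renders like 2019-01-01T12:00:00+0800 (numeric timezone offset, no colon). The shell's date command accepts the same format string, which is a convenient way to preview it:

```shell
# Preview the timestamp Apache will write for %{%Y-%m-%dT%H:%M:%S%z}t
date +%Y-%m-%dT%H:%M:%S%z
```

Note that %z yields an offset like +0800 rather than +08:00; some strict ISO 8601 parsers reject the colon-less form, so check what your downstream consumers expect.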
3) Update the Apache log settings (/etc/httpd/conf/httpd.conf)
1> Around line 202, insert the LogFormat block defined above (be careful with the spaces and trailing backslashes); it rewrites the key fields of the default message into key:value pairs.
2> Change CustomLog "logs/access_log" combined to CustomLog "logs/access_log" ls_apache_json, i.e. swap the default combined format for the JSON format defined above (around line 217 in the original file).

Empty the original access_log, restart the httpd service, then follow the new log:

tailf /var/log/httpd/access_log


5、Edit the logstash configuration

Since my nginx and httpd both run on the 139 server, which has no logstash yet, install logstash on 139 as well so the logs can be collected locally. The installation is the same as on the 129 server, so it is skipped here; the configuration follows.

1) Host-specific configuration, i.e. the logstash config distinguishes which server the logs came from (index names carry the host prefix).
vim /u01/isi/application/logstash-5.6.16/config/full.conf
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }   
    file {
        path => "/var/log/secure"
        type => "secure"
        start_position => "beginning"
    }   
    file {
        path => "/var/log/httpd/access_log"
        type => "http"
        start_position => "beginning"
    }   

    file {
        path => "/usr/local/nginx/logs/nginx.access.log"
        type => "nginx"
        start_position => "beginning"
    }   

} 
output {
    if [type] == "system" { 
        elasticsearch {
            hosts => ["192.168.171.129:9200"]
            index => "139-system-%{+YYYY.MM.dd}"
        }       
    }   
    if [type] == "secure" {
        elasticsearch {
            hosts => ["192.168.171.129:9200"]
            index => "139-secure-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "http" {
        elasticsearch {
            hosts => ["192.168.171.129:9200"]
            index => "139-http-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx" {
        elasticsearch {
            hosts => ["192.168.171.129:9200"]
            index => "139-nginx-%{+YYYY.MM.dd}"
        }
    }
}
./bin/logstash -f ./config/full.conf   #start the service
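Each elasticsearch output above ends the index name with %{+YYYY.MM.dd}, so logstash writes to a new index per day, which keeps indices small and easy to expire. The expansion is simply the event's date:

```shell
# What the "139-nginx-%{+YYYY.MM.dd}" index name expands to for today's events
echo "139-nginx-$(date +%Y.%m.%d)"
```

Old daily indices can then be dropped with a plain DELETE request, e.g. curl -XDELETE 'http://192.168.171.129:9200/139-nginx-2019.01.01'.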


2) Host-agnostic configuration, i.e. the logstash config does not distinguish hosts

(screenshot of the merged logstash configuration)

Restart the service: nohup ./bin/logstash -f ./config/full.conf &


3) Create the index patterns and view the data

The patterns should match the indices defined above, for example:
139-http-*
139-nginx-*


4) MySQL slow-log configuration (cat /u01/isi/application/logstash-5.6.16/config/mysql.conf)
input {
    file {
        path => "/var/log/mysql/mysql.slow.log"
        type => "mysql"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => []
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => { "message" => "(?m)^# User@Host: %{USER:User}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:Client_IP})?\]\s.*# Query_time: %{NUMBER:Query_Time:float}\s+Lock_time: %{NUMBER:Lock_Time:float}\s+Rows_sent: %{NUMBER:Rows_Sent:int}\s+Rows_examined: %{NUMBER:Rows_Examined:int}\s*(?:use %{DATA:Database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<Query>(?<Action>\w+)\s+.*)\n# Time:.*$" }
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    if [type] == "mysql" {
        elasticsearch {
            hosts => ["192.168.171.129:9200"]
            index => "139-mysql-slow-%{+YYYY.MM.dd}"
        }
    }
}
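The multiline codec above folds every line that does not start with "# User@Host:" into the previous event, so one slow query (its header lines plus the SQL) becomes a single logstash event. The grouping logic can be mimicked in the shell against a fabricated two-entry slow log (the sample content below is invented for illustration):

```shell
# A fake slow log with two entries, four lines each.
cat > /tmp/slow.sample <<'EOF'
# User@Host: app[app] @ host1 [10.0.0.1]
# Query_time: 2.5  Lock_time: 0.0 Rows_sent: 1 Rows_examined: 5000
SET timestamp=1500000000;
SELECT * FROM t1;
# User@Host: app[app] @ host2 [10.0.0.2]
# Query_time: 4.1  Lock_time: 0.0 Rows_sent: 0 Rows_examined: 9000
SET timestamp=1500000100;
SELECT * FROM t2;
EOF

# Start a new event at each "# User@Host:" header, fold everything else
# into the current one; prints "event with 4 lines" twice.
awk '/^# User@Host:/{if (n) print "event with " n " lines"; n=0} {n++} END{print "event with " n " lines"}' /tmp/slow.sample
```

Each printed line corresponds to one event the codec would emit; the grok filter then pulls Query_time, Lock_time, and the query text out of the merged message.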