1、Architecture diagram:


Server name                 IP address
controller-node1 (master)   172.16.1.90
slave-node1 (slave)         172.16.1.91




2、Install filebeat:

filebeat does not need a JDK, so it is much lighter on server resources than logstash;

(1) Download the package:

mkdir -p /tools/ && cd /tools/

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.0.0-x86_64.rpm

(2) Install:

rpm -ivh filebeat-7.0.0-x86_64.rpm

(3) Start the filebeat service:

[root@slave-node1 tools]# systemctl start filebeat

[root@slave-node1 tools]# systemctl enable filebeat
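Optionally, filebeat 7 ships a self-test subcommand that can confirm the configuration parses and, once an output has been configured (section 6 below), that the output is reachable:

filebeat test config

filebeat test output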

3、Install MySQL:

(1) Download the MySQL Community GA release:



mkdir -p /tools/ && cd /tools/

wget https://dev.mysql.com/get/Downloads/MySQL-5.5/mysql-5.5.62-linux-glibc2.12-x86_64.tar.gz

(2) Install the dependency package MySQL needs:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libaio-0.3.109-13.el7.x86_64.rpm

rpm -ivh /tools/libaio-0.3.109-13.el7.x86_64.rpm

(3) Set the system locale to UTF-8:

localectl set-locale LANG=zh_CN.UTF-8

(4) Create the mysql service user (no home directory, no login shell):

useradd -Ms /sbin/nologin mysql

(5) Install:

tar -xzf mysql-5.5.62-linux-glibc2.12-x86_64.tar.gz

mkdir -p /application/

cp -a mysql-5.5.62-linux-glibc2.12-x86_64/ /application/mysql-5.5.62/

ln -s /application/mysql-5.5.62/ /application/mysql

chown -R mysql.mysql /application/mysql/

(6) Initialize the database:

/application/mysql/scripts/mysql_install_db --basedir=/application/mysql/ --datadir=/application/mysql/data/ --user=mysql

(7) Install the startup script and fix the built-in paths:

cp -a /application/mysql/support-files/mysql.server /etc/init.d/mysqld

chmod +x /etc/init.d/mysqld

cp -a /application/mysql/support-files/my-small.cnf /etc/my.cnf

sed -i 's#/usr/local/mysql#/application/mysql#g' /application/mysql/bin/mysqld_safe /etc/init.d/mysqld

(8) Start MySQL:

[root@slave-node1 tools]# /etc/init.d/mysqld start

Starting MySQL.. SUCCESS!

[root@slave-node1 tools]# netstat -tunlp | grep 3306

tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 18919/mysqld

(9) Link the MySQL binaries into the PATH and secure the installation:

ln -s /application/mysql/bin/* /usr/local/bin/

mysql_secure_installation

# Press Enter / answer yes at each prompt, then set the MySQL root password;

(10) Log in to MySQL:

[root@slave-node1 tools]# mysql -uroot -p123456

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 1

Server version: 5.5.62 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its

affiliates. Other names may be trademarks of their respective

owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

(11) Enable MySQL at boot:

chkconfig --add mysqld
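The registration and its runlevels can be confirmed with chkconfig's list mode:

chkconfig --list mysqld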

4、Install redis:

(1) Download redis:

mkdir -p /tools/

cd /tools/

wget http://download.redis.io/releases/redis-5.0.0.tar.gz

(2) Install:

cd /tools/

tar -xzf redis-5.0.0.tar.gz

cd redis-5.0.0

make

mkdir -p /application/

cp -a /tools/redis-5.0.0/ /application/

ln -s /application/redis-5.0.0/ /application/redis

ln -s /application/redis/src/redis-cli /usr/bin/redis-cli

(3) Configure the redis instance:

1) Create the instance directory:

mkdir -p /redis/6379/

cp -a /application/redis/redis.conf /redis/6379/

2) Edit the configuration file:

sed -ri "/#|^$/d" /redis/6379/redis.conf

Change the following settings in redis.conf:

bind 172.16.1.90

protected-mode yes

requirepass root

maxclients 10000

daemonize yes

pidfile /redis/6379/redis.pid

logfile /redis/6379/redis.log

#save 900 1

#save 300 10

#save 60 10000

(4) Start redis:

[root@controller-node1 ~]# /application/redis/src/redis-server /redis/6379/redis.conf

[root@controller-node1 ~]# netstat -tunlp | grep 6379

tcp 0 0 172.16.1.90:6379 0.0.0.0:* LISTEN 6184/redis-server 1

[root@controller-node1 ~]# redis-cli -h 172.16.1.90 -p 6379

172.16.1.90:6379> keys *

(error) NOAUTH Authentication required.

172.16.1.90:6379> auth root

OK

172.16.1.90:6379> keys *

(empty list or set)

172.16.1.90:6379>

(5) Start redis automatically at boot:

chmod +x /etc/rc.d/rc.local

echo '/application/redis/src/redis-server /redis/6379/redis.conf' >>/etc/rc.local

5、Install nginx:

(1) Download nginx:

cd /tools/

wget http://nginx.org/download/nginx-1.16.0.tar.gz

(2) Install:

yum install openssl openssl-devel gcc pcre pcre-devel -y

useradd -M -s /sbin/nologin www

tar -xzf nginx-1.16.0.tar.gz

cd /tools/nginx-1.16.0/

./configure --user=www --group=www --with-http_ssl_module --with-http_stub_status_module --prefix=/application/nginx-1.16.0/

echo $?

0

make install

echo $?

0

ln -s /application/nginx-1.16.0/ /application/nginx

(3) Start nginx:

/application/nginx/sbin/nginx

ps -ef | grep nginx

root 9045 1 0 14:55 ? 00:00:00 nginx: master process /application/nginx/sbin/nginx

www 9046 9045 0 14:55 ? 00:00:00 nginx: worker process

root 9052 1422 0 14:55 pts/0 00:00:00 grep --color=auto nginx

(4) Verify access in a browser:

(screenshot)

(5) Start nginx at boot:

[root@controller-node1 ~]# echo '/application/nginx/sbin/nginx' >>/etc/rc.local

6、Log collection pipeline:

(1) Collect logs with filebeat and ship them to logstash:

1) Edit the parameters:

[root@slave-node1 ~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["filebeat-system_log-1.91"]
  exclude_lines: ['^DBG','^$']
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/tomcat_access_log*.log
  tags: ["filebeat-tomcat_access_log-1.91"]
  exclude_lines: ['^DBG','^$']
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["172.16.1.91:5044"]
  enabled: true
  worker: 2
  compression_level: 3
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

2) Restart filebeat:

systemctl restart filebeat

systemctl status filebeat

3) Supplementary note:

To write the events filebeat collects to a local file instead, which is useful for inspecting the output while testing:

output.file:
  path: "/tmp"
  filename: "filebeat.txt"
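Note that filebeat only allows one output to be enabled at a time, so output.logstash has to be commented out while output.file is in use. Assuming at least one event has been collected, the written data can then be inspected with:

tail -f /tmp/filebeat.txt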

(2) logstash writes the logs received from filebeat into redis:

1) Create the configuration:

# Handles the logs filebeat collects from the 172.16.1.91 server;

[root@slave-node1 ~]# vim /etc/logstash/conf.d/filebeat-1.91.conf

input {
  beats {
    port => 5044
  }
}

output {
  if "filebeat-system_log-1.91" in [tags] {
    redis {
      data_type => "list"
      host => "172.16.1.90"
      db => "0"
      port => "6379"
      key => "filebeat-system_log-1.91"
      password => "root"
    }
  }
  if "filebeat-tomcat_access_log-1.91" in [tags] {
    redis {
      data_type => "list"
      host => "172.16.1.90"
      db => "0"
      port => "6379"
      key => "filebeat-tomcat_access_log-1.91"
      password => "root"
    }
  }
}

2) Check the configuration and restart logstash:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/filebeat-1.91.conf -t

Configuration OK

systemctl restart logstash

[root@slave-node1 ~]# netstat -tunlp | egrep "9600|5044"

tcp6 0 0 172.16.1.91:9600 :::* LISTEN 780/java

tcp6 0 0 :::5044 :::* LISTEN 780/java

(3) In redis, inspect the data logstash has pushed in:

[root@controller-node1 ~]# redis-cli -h 172.16.1.90 -p 6379

172.16.1.90:6379> auth root

OK

172.16.1.90:6379> keys *

1) "filebeat-system_log-1.91"

2) "filebeat-tomcat_access_log-1.91"

172.16.1.90:6379> llen filebeat-system_log-1.91

(integer) 1986

172.16.1.90:6379> llen filebeat-tomcat_access_log-1.91

(integer) 9

172.16.1.90:6379> lpop filebeat-tomcat_access_log-1.91

# Inspect the written data; every LPOP consumes an entry, one fewer each time, which is why redis needs no persistence settings here;

172.16.1.90:6379> llen filebeat-tomcat_access_log-1.91

(integer) 8

(4) logstash pulls the data back out of redis and writes it to elasticsearch:

1) Create the configuration:

# Reads the 172.16.1.91 server's logs out of redis;

[root@slave-node1 ~]# vim /etc/logstash/conf.d/redis-1.91.conf

input {
  redis {
    data_type => "list"
    host => "172.16.1.90"
    db => "0"
    port => "6379"
    key => "filebeat-system_log-1.91"
    password => "root"
  }
  redis {
    data_type => "list"
    host => "172.16.1.90"
    db => "0"
    port => "6379"
    key => "filebeat-tomcat_access_log-1.91"
    password => "root"
  }
}

output {
  if "filebeat-system_log-1.91" in [tags] {
    elasticsearch {
      hosts => ["172.16.1.90"]
      index => "filebeat-system_log-1.91-%{+YYYY.MM.dd}"
    }
  }
  if "filebeat-tomcat_access_log-1.91" in [tags] {
    elasticsearch {
      hosts => ["172.16.1.90"]
      index => "filebeat-tomcat_access_log-1.91-%{+YYYY.MM.dd}"
    }
  }
}

2) Validate the configuration file and restart logstash:

[root@slave-node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-1.91.conf -t

Configuration OK

[root@slave-node1 ~]# systemctl restart logstash

[root@slave-node1 ~]# netstat -tunlp | grep 9600

tcp6 0 0 172.16.1.91:9600 :::* LISTEN 5835/java

3) Use elasticsearch-head to check that the data has been written into the elasticsearch cluster:

(screenshot)

(5) Watch the list lengths rise and fall in redis:

1)

[root@controller-node1 ~]# redis-cli -h 172.16.1.90 -p 6379

172.16.1.90:6379> auth root

OK

172.16.1.90:6379> keys *

1) "filebeat-system_log-1.91"

2) "filebeat-tomcat_access_log-1.91"

172.16.1.90:6379> keys *

1) "filebeat-system_log-1.91"

172.16.1.90:6379> llen filebeat-system_log-1.91

(integer) 1846

172.16.1.90:6379> llen filebeat-system_log-1.91

(integer) 1924

172.16.1.90:6379> llen filebeat-system_log-1.91

(integer) 1832

172.16.1.90:6379> llen filebeat-system_log-1.91

(integer) 1745

172.16.1.90:6379> llen filebeat-system_log-1.91

(integer) 1907

172.16.1.90:6379>

Note: the list length above keeps rising and falling in an orderly way, which shows that logstash is consuming from redis and that the whole pipeline is fairly stable;

2) Write a python script that reports the length of the log list in redis, convenient as a Zabbix monitoring item (see the sketch after the run below):

# Install the python redis module:

[root@controller-node1 ~]# yum install python-pip

[root@controller-node1 ~]# pip install redis

# Write the monitoring script:

[root@controller-node1 ~]# mkdir -p /scripts/

[root@controller-node1 ~]# vim /scripts/redis-check.py

#!/usr/bin/env python
import redis

def redis_conn():
    # connect through a connection pool and print the list length
    pool = redis.ConnectionPool(host="172.16.1.90", port="6379", db="0", password="root")
    conn = redis.Redis(connection_pool=pool)
    data = conn.llen("filebeat-system_log-1.91")
    print(data)

redis_conn()

# Run the script:

[root@controller-node1 ~]# python /scripts/redis-check.py

2342

[root@controller-node1 ~]# python /scripts/redis-check.py

2296
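To wire the script into Zabbix, a custom agent item can be declared; a minimal sketch, assuming the agent configuration lives at /etc/zabbix/zabbix_agentd.conf and using redis.system_log.llen as an arbitrary item key name:

# register the script as a Zabbix item key, then restart the agent:
echo 'UserParameter=redis.system_log.llen,python /scripts/redis-check.py' >>/etc/zabbix/zabbix_agentd.conf

systemctl restart zabbix-agent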

(6) Add the elasticsearch indices to kibana:

Open http://172.16.1.90:5601 in a browser.

1)


(screenshot)

2)


(screenshot)

3) The tomcat log index can be added to kibana in the same way;

4) View the data:

(screenshot)

7、Proxy kibana through nginx with login authentication:

(1) Strip the commented-out configuration:

[root@controller-node1 ~]# sed -ri.bak "/#|^$/d" /application/nginx/conf/nginx.conf

(2) Create the nginx include directory:

[root@controller-node1 ~]# mkdir /application/nginx/conf/conf.d/ -p

(3) Edit the configuration files:

1) Main configuration file:

vim /application/nginx/conf/nginx.conf

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;
    include       conf.d/*.conf;
}

2) Included virtual-host file:

vim /application/nginx/conf/conf.d/kibana1.90.conf

upstream kibana_server {
    server 172.16.1.90:5601 weight=1 max_fails=3 fail_timeout=60;
}
server {
    listen 80;
    server_name www.kibana1.90.com;
    auth_basic "Restricted Access";
    auth_basic_user_file htpasswd.users;
    location / {
        proxy_pass http://kibana_server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

(4) Generate the authentication account/password file:

1) Install the password generation tool:

cd /tools/

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/httpd-tools-2.4.6-88.el7.centos.x86_64.rpm

rpm -ivh httpd-tools-2.4.6-88.el7.centos.x86_64.rpm

2) Create the login accounts:

htpasswd -bc /application/nginx/conf/htpasswd.users liuc1 123456

Adding password for user liuc1

htpasswd -b /application/nginx/conf/htpasswd.users liuc2 123456

Adding password for user liuc2

cat /application/nginx/conf/htpasswd.users

liuc1:$apr1$6xLiuu4L$EiYQY0gjuiFfZU0xIo83i/

liuc2:$apr1$ZQ90TjCg$O/eHsVOLyvh29fLZ7ORj9/

(5) Validate the nginx configuration:

/application/nginx/sbin/nginx -t

nginx: the configuration file /application/nginx-1.16.0/conf/nginx.conf syntax is ok

nginx: configuration file /application/nginx-1.16.0/conf/nginx.conf test is successful

(6) Reload nginx:

/application/nginx/sbin/nginx -s reload
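The basic-auth protection can also be checked from the shell before touching a browser; the Host header stands in for the DNS entry, a request without credentials should return 401 and one with valid credentials should reach kibana:

curl -sI -H "Host: www.kibana1.90.com" http://172.16.1.90/ | head -1

curl -sI -u liuc1:123456 -H "Host: www.kibana1.90.com" http://172.16.1.90/ | head -1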

(7) Log in from a browser on Windows:

1) Add the following line to C:\Windows\System32\drivers\etc\hosts on Windows:

172.16.1.90 www.kibana1.90.com

2) Open the site in a browser:

(screenshots)

8、Proxy with haproxy instead:

A drawback of this haproxy setup is that it cannot require a username and password at login;

(1) Download the source:

cd /tools/

wget https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-1.8.15.tar.gz/sha512/425e1f3a9ab2c2d09934c5d783ad986bd61a638ba6466dd15c20c5b6e7fc3dfad7c398e10bbd336a856ccad29bab0f23e4b9c3d0f17a54b86c8b917e4b974bcb/haproxy-1.8.15.tar.gz

(2) Install the build dependencies:

yum install openssl openssl-devel gcc pcre pcre-devel systemd-devel -y

(3) Compile and install haproxy:

tar -xzf haproxy-1.8.15.tar.gz

cd /tools/haproxy-1.8.15/

uname -r

3.10.0-862.el7.x86_64

make ARCH=x86_64 TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 PREFIX=/application/haproxy-1.8.15/

# The kernel version above means TARGET=linux310 would also work; any kernel newer than 2.6.28 can use TARGET=linux2628;

# USE_SYSTEMD=1 adds support for starting haproxy with -Ws (systemd-aware master-worker mode), i.e. a single master process with multiple worker processes;

echo $?

0

make install PREFIX=/application/haproxy-1.8.15/

echo $?

0

ln -s /application/haproxy-1.8.15/ /application/haproxy

[root@controller-node1 ~]# /application/haproxy/sbin/haproxy -v

HA-Proxy version 1.8.15 2018/12/13

Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

(4) Write the haproxy configuration:

useradd -M -s /sbin/nologin haproxy

mkdir -p /etc/haproxy/

vim /etc/haproxy/haproxy.cfg

global
    # Global settings, tied to the operating system;
    log 127.0.0.1 local6 info
    # haproxy log destination and level {err|warning|info|debug};
    # local6 sends the log to syslog; a rule such as "local6.* /var/log/haproxy.log"
    # writes the log to the /var/log/haproxy.log file;
    chroot /application/haproxy/
    # haproxy working (chroot) directory;
    pidfile /application/haproxy/haproxy.pid
    # pid file location when running as a daemon;
    maxconn 100000
    # maximum number of connections;
    user haproxy
    group haproxy
    # user and group the haproxy process runs as;
    daemon
    # run as a daemon;
    stats socket /application/haproxy/stats
    # where the statistics socket is created;
    nbproc 1
    # number of processes, normally the number of CPU cores;

defaults
    # Defaults that apply to the listen, frontend and backend sections below;
    # if a section repeats a setting, it overrides the value from defaults;
    mode http
    # {tcp|http|health}: tcp is layer 4, http is layer 7, health only returns OK;
    log global
    # use the log settings defined in global;
    option httplog
    # log in the http log format;
    option dontlognull
    # do not log empty (null) connections;
    option http-server-close
    # enable server-side http connection close, so keep-alive
    # sessions can be reused;
    option forwardfor except 127.0.0.0/8
    # pass the client's real IP to the backend servers;
    option redispatch
    # if a backend server goes down, redispatch the user's requests
    # to a healthy backend server;
    retries 3
    # maximum connection attempts to a backend server; beyond this the
    # server is considered unavailable;
    timeout http-request 10s
    # timeout for a client to send its http request to haproxy;
    timeout queue 1m
    # how long a request may sit in the queue when the backend
    # servers are responding under heavy load;
    timeout connect 10s
    # timeout for haproxy to connect to a backend server;
    timeout client 1m
    # inactivity timeout between the client and haproxy;
    timeout server 1m
    # inactivity timeout between haproxy and a backend server;
    timeout http-keep-alive 10s
    # keep tcp connections alive to avoid repeated tcp handshakes;
    timeout check 10s
    # health check timeout;
    maxconn 100000
    # maximum number of connections;

listen stats
    # haproxy status page;
    bind 172.16.1.90:9999
    stats enable
    stats uri /haproxy-status
    stats auth haproxy:123456

frontend web_port
    # frontend virtual node that accepts incoming requests;
    bind 0.0.0.0:8080
    ############################ACL Setting###############################
    acl pc hdr_dom(host) -i www.kibana1.90.com
    acl mobile hdr_dom(host) -i mobile.kibana1.90.com
    ############################USE ACL###################################
    use_backend pc_host if pc
    use_backend mobile_host if mobile

backend pc_host
    # backend server pool;
    balance source
    # roundrobin: weighted round-robin, the fairest option;
    # source: hashes the request's source IP; less fair, but solves the session problem;
    # leastconn: forwards new requests to the backend with the fewest
    # connections; suits long-lived connections such as databases;
    server kibana 172.16.1.90:5601 check inter 2000 rise 3 fall 2 weight 1
    # server: defines a real backend server; kibana: the backend server's name;
    # 172.16.1.90:5601: the backend server's IP and port;
    # check inter 2000: health-check the backend every 2000 ms;
    # rise: successful checks needed before a failed server is marked up again;
    # fall: failed checks needed before a healthy server is marked down;
    # weight: server weight, default 1, max 256; 0 removes it from load balancing;

backend mobile_host
    balance source
    server kibana 172.16.1.90:5601 check inter 2000 rise 3 fall 2 weight 1
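Before wiring haproxy into systemd, the configuration file can be syntax-checked with haproxy's -c flag:

/application/haproxy/sbin/haproxy -c -f /etc/haproxy/haproxy.cfg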

(5) Create the systemd unit:

vim /usr/lib/systemd/system/haproxy.service

[Unit]

Description=HAProxy Load Balancer

After=syslog.target network.target


[Service]

ExecStartPre=/application/haproxy/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q

ExecStart=/application/haproxy/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /application/haproxy/haproxy.pid

ExecReload=/bin/kill -USR2 $MAINPID


[Install]

WantedBy=multi-user.target

(6) Start haproxy:

systemctl start haproxy

netstat -tunlp | egrep "9999|8080"

tcp 0 0 172.16.1.90:9999 0.0.0.0:* LISTEN 46036/haproxy

tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 46036/haproxy

systemctl status haproxy

systemctl enable haproxy

(7) Access test:

1) Add the following entries to C:\Windows\System32\drivers\etc\hosts on Windows:

172.16.1.90 www.kibana1.90.com

172.16.1.90 mobile.kibana1.90.com
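The ACL routing can also be exercised from Linux without editing any hosts file, by setting the Host header with curl (kibana may answer with a redirect, which is enough to show the request reached a backend):

curl -sI -H "Host: www.kibana1.90.com" http://172.16.1.90:8080/ | head -1

curl -sI -H "Host: mobile.kibana1.90.com" http://172.16.1.90:8080/ | head -1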

2) Browse to the sites:

(screenshots)

(8) Open the haproxy status page (http://172.16.1.90:9999/haproxy-status, logging in as haproxy:123456 as configured above):

(screenshots)

9、Collect the haproxy logs with rsyslog:

(1) Check the system's rsyslog version (rsyslog is installed and enabled at boot by default):

[root@controller-node1 ~]# yum list rsyslog

…………

Installed Packages

rsyslog.x86_64 8.24.0-16.el7 @anaconda

Note: CentOS 7 ships rsyslog; CentOS 6 shipped syslog;

(2) Edit the rsyslog configuration:

[root@controller-node1 ~]# vim /etc/rsyslog.conf

$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
# Uncomment lines 15, 16, 19 and 20 of /etc/rsyslog.conf (the four directives above);

local6.* @@172.16.1.91:5160
# Append the line above at the end of the file; @@ forwards over TCP (a single @ would use UDP);

Note: a rule such as "local6.* /var/log/haproxy/haproxy.log" would instead write the collected log to a local file;
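Once logstash is listening on 5160 (section (4) below), the forwarding chain can be tested by writing a message to the local6 facility with logger:

logger -p local6.info "rsyslog-to-logstash test message"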

(3) Restart rsyslog:

[root@controller-node1 ~]# systemctl restart rsyslog.service

(4) Create the logstash configuration:

[root@slave-node1 ~]# vim /etc/logstash/conf.d/rsyslog-haproxy-1.90.conf

input {
  syslog {
    type => "rsyslog-haproxy-1.90"
    port => "5160"
  }
}

output {
  if [type] == "rsyslog-haproxy-1.90" {
    elasticsearch {
      hosts => ["172.16.1.90:9200"]
      index => "rsyslog-haproxy-1.90-%{+YYYY.MM.dd}"
    }
  }
}

(5) Validate and restart logstash:

[root@slave-node1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/rsyslog-haproxy-1.90.conf -t

Configuration OK

[root@slave-node1 ~]# systemctl restart logstash.service

[root@slave-node1 ~]# netstat -tunlp | egrep "9600|5160"

(6) Test:

1) Access kibana through the haproxy proxy:

http://www.kibana1.90.com:8080/app/kibana

2) Verify in elasticsearch:

(screenshot)

(7) Add the resulting elasticsearch index to kibana as before;

(8) Note: logs collected by rsyslog can only get into redis via logstash; rsyslog cannot write what it collects into redis directly;

10、Write the logs into a database:

(1) Background:

1) Once kibana is installed, the data in elasticsearch is deleted on a schedule; the default retention period is 7 days;

# The kibana configuration for the scheduled deletion of elasticsearch data:


(screenshots)

2) Writing to a database persists the important fields (status code, client browser version, and so on) for later use, such as monthly statistics;

(2) Create the database, grant and table in MySQL:

1) Create the database and grant privileges:

[root@slave-node1 ~]# mysql -uroot -p123456 -S /tmp/mysql.sock

mysql> create database elk character set utf8 collate utf8_bin;

Query OK, 1 row affected (0.02 sec)

mysql> grant all privileges on elk.* to elk@"172.16.1.%" identified by '123456';

Query OK, 0 rows affected (0.04 sec)

mysql> flush privileges;

Query OK, 0 rows affected (0.00 sec)

2) Create the table:

# Connect to the MySQL database from a Windows client with "Navicat for MySQL":


(screenshot)

# Look at a collected tomcat JSON-format log line:

tail -1 /var/log/tomcat/tomcat_access_log2019-06-13.log

{"clientip":"172.16.1.254","ClientUser":"-","authenticated":"-","AccessTime":"[13/Jun/2019:22:58:39 +0800]","method":"GET /test/ HTTP/1.1","status":"304","SendBytes":"-","Query?string":"","partner":"-","AgentVersion":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36"}

# Create the table:

(screenshot)

Note: the timestamp column's default value must be set to CURRENT_TIMESTAMP.


(screenshot)
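The table itself was built in Navicat (screenshots above). As a sketch of an equivalent statement, matching the columns the logstash jdbc output below inserts; the column types here are assumptions read off the JSON log, not taken from the screenshots:

mysql -uelk -p123456 -h172.16.1.91 elk -e "
CREATE TABLE logstash_tomcat_access_log_1_91 (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  clientip VARCHAR(64),
  AccessTime VARCHAR(64),
  AgentVersion VARCHAR(512),
  status VARCHAR(16),
  method VARCHAR(128),
  SendBytes VARCHAR(16),
  time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8;"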

(3) Install the MySQL JDBC driver:

1) Download the MySQL JDBC driver:

https://dev.mysql.com/downloads/connector/j/


(screenshot)

cd /tools/

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.16.tar.gz

# mysql-connector-java is MySQL's official JDBC (database connectivity) driver, a Java API for executing SQL
# statements; it provides uniform access to the MySQL relational database and consists of a set of classes and interfaces written in Java;

2) Install:

# Create the driver directory:

[root@slave-node1 tools]# mkdir -pv /usr/share/logstash/vendor/jar/jdbc/

mkdir: created directory "/usr/share/logstash/vendor/jar"

mkdir: created directory "/usr/share/logstash/vendor/jar/jdbc"

# Unpack the driver:

[root@slave-node1 tools]# tar -xzf mysql-connector-java-8.0.16.tar.gz

[root@slave-node1 tools]# cp -a mysql-connector-java-8.0.16/mysql-connector-java-8.0.16.jar /usr/share/logstash/vendor/jar/jdbc/

[root@slave-node1 tools]# chown -R logstash.logstash /usr/share/logstash/vendor/jar/

[root@slave-node1 tools]# ls -l /usr/share/logstash/vendor/jar/jdbc/

total 2240

-rw-r--r-- 1 logstash logstash 2293144 Mar 21 03:08 mysql-connector-java-8.0.16.jar

# Install the logstash-output-jdbc plugin:

[root@slave-node1 tools]# yum install ruby rubygems -y

[root@slave-node1 tools]# gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/

[root@slave-node1 tools]# gem sources -l

*** CURRENT SOURCES ***

https://gems.ruby-china.com/

[root@slave-node1 tools]# /usr/share/logstash/bin/logstash-plugin install logstash-output-jdbc

Validating logstash-output-jdbc

Installing logstash-output-jdbc

Installation successful

[root@slave-node1 tools]# /usr/share/logstash/bin/logstash-plugin list | grep jdbc #check that the jdbc plugin is installed;

logstash-filter-jdbc_static

logstash-filter-jdbc_streaming

logstash-input-jdbc

logstash-output-jdbc

(4) Write the logstash configuration:

[root@slave-node1 tools]# vim /etc/logstash/conf.d/logstash-tomcat-access-log-1.91.conf

input {
  file {
    path => "/var/log/tomcat/tomcat_access_log*.log"
    type => "logstash-tomcat_access_log-1.91"
    start_position => "beginning"
    stat_interval => "2"
    codec => "json"
  }
}

output {
  if [type] == "logstash-tomcat_access_log-1.91" {
    jdbc {
      connection_string => "jdbc:mysql://172.16.1.91:3306/elk?user=elk&password=123456&useUnicode=true&characterEncoding=UTF8"
      statement => ["INSERT INTO logstash_tomcat_access_log_1_91(clientip,AccessTime,AgentVersion,status,method,SendBytes) VALUES(?,?,?,?,?,?)","clientip","AccessTime","AgentVersion","status","method","SendBytes"]
    }
  }
}

(5) Validate the logstash configuration file:

[root@slave-node1 tools]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-tomcat-access-log-1.91.conf -t

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console

[WARN ] 2019-06-13 23:51:56.737 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified

Configuration OK

[INFO ] 2019-06-13 23:52:20.009 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

(6) Restart logstash:

[root@slave-node1 tools]# systemctl restart logstash

[root@slave-node1 tools]# tailf /var/log/logstash/logstash-plain.log

[2019-06-13T22:57:34,706][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

(7) Visit http://172.16.1.91:8080/test in a browser to hit the tomcat service and generate access-log entries;

(8) View the rows written into MySQL with "Navicat for MySQL":


(screenshot)

11、Charting with kibana:

kibana spaces: spaces are completely isolated from one another, useful for separating different teams;

machine learning: upload a local Excel or CSV file for analysis;

visualize: analyze the data and turn it into histograms, heat maps and other visualizations;

dashboard: filter the visualizations built in visualize and present them together;

Geo-location field mapping:

"location": {
  "type": "geo_point",
  "ignore_malformed": true
}
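A geo_point field has to be declared in the index mapping before documents are indexed. A minimal sketch using the elasticsearch 7 mapping API, assuming a hypothetical new index named nginx-access-log (in practice an index template for the index pattern would usually carry this):

curl -XPUT "http://172.16.1.90:9200/nginx-access-log" -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "location": { "type": "geo_point", "ignore_malformed": true }
    }
  }
}'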

12、Additional notes:

(1) kibana works by analyzing the data stored on elasticsearch. From elasticsearch 7.0 onward an index defaults to a single primary shard (plus one replica, i.e. two shards in total); the fewer shards an index has, the faster retrieval tends to be;