Table of Contents
- Kibana
- Concepts
- Installation
- logstash
- Concepts
- Installation
- codec plugins
- file input plugin
- filter grok plugin
- Parsing Apache logs
- Optimization
Kibana
Concepts
- A data visualization platform
- Features:
– A flexible analytics and visualization platform
– Real-time charts summarizing traffic and data
– Intuitive interfaces for different users
– Instantly shareable and embeddable dashboards
Installation
yum -y install kibana
rpm -qc kibana //lists the config file: /opt/kibana/config/kibana.yml
- Edit the configuration file
vim /opt/kibana/config/kibana.yml
2 server.port: 5601
//Changing the port to 80 lets Kibana start, but ss shows nothing listening on 80: Kibana runs as a non-root user and cannot bind a privileged port, so stick with 5601
5 server.host: "0.0.0.0" //address the server listens on
15 elasticsearch.url: http://192.168.1.51:9200
//which Elasticsearch to query; any node in the cluster will do
23 kibana.index: ".kibana" //the index Kibana creates for itself
26 kibana.defaultAppId: "discover" //page opened by default when Kibana loads
53 elasticsearch.pingTimeout: 1500 //ping timeout
57 elasticsearch.requestTimeout: 30000 //request timeout
64 elasticsearch.startupTimeout: 5000 //startup timeout
systemctl restart kibana
systemctl enable kibana
ss -antup | grep 5601 //check the listening port
- Access Kibana in a browser
firefox 192.168.1.56:5601
- Viewed through the head plugin, the .kibana index shows up
firefox http://192.168.1.55:9200/_plugin/head
logstash
Concepts
- A tool for collecting, processing, and transporting data
- Features
– Centralized processing of all data types
– Normalization of data in different schemas and formats
– Rapid extension for custom log formats
– Easy plugin additions for custom data sources
Installation
- Requires a Java runtime; install java-1.8.0-openjdk
- Ships with no default configuration file; one must be written by hand
- Installs to /opt/logstash by default
- For how to use the plugins, see the plugin docs: https://github.com/logstash-plugins
vim /etc/hosts
192.168.1.51 es1
192.168.1.52 es2
192.168.1.53 es3
192.168.1.54 es4
192.168.1.55 es5
192.168.1.56 kibana
192.168.1.57 logstash
yum -y install java-1.8.0-openjdk logstash
java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
touch /etc/logstash/logstash.conf
/opt/logstash/bin/logstash --version
logstash 2.3.4
/opt/logstash/bin/logstash-plugin list //list installed plugins
...
logstash-input-stdin //standard input plugin
logstash-output-stdout //standard output plugin
...
vim /etc/logstash/logstash.conf
input{
stdin{
}
}
filter{
}
output{
stdout{
}
}
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
//start and test
Settings: Default pipeline workers: 2
Pipeline main started
aa //logstash reads from standard input and writes each event back to standard output
2018-09-15T06:19:28.724Z logstash aa
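The output line above can be sketched in Python — a simplified illustration (not Logstash internals) of the default stdout format: ISO timestamp, host, then the message:

```python
from datetime import datetime, timezone

def stdout_line(message, host="logstash"):
    """Render an event the way the plain stdout output above did:
    '<ISO timestamp> <host> <message>'."""
    # Trim microseconds to milliseconds, append Z for UTC.
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return "%s %s %s" % (ts, host, message)

print(stdout_line("aa"))
```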
codec plugins
vim /etc/logstash/logstash.conf
input{
stdin{
codec => "json" //set the input codec to json
}
}
filter{
}
output{
stdout{
codec => "rubydebug" //set the output codec to rubydebug
}
}
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
Settings: Default pipeline workers: 2
Pipeline main started
{"a":1}
{
"a" => 1,
"@version" => "1",
"@timestamp" => "2019-03-12T03:25:58.778Z",
"host" => "logstash"
}
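What the json codec did here can be sketched in Python — the keys of the parsed object become top-level event fields, and the metadata fields are filled in (a simplified illustration, not Logstash internals):

```python
import json
from datetime import datetime, timezone

def decode_json_event(line, host="logstash"):
    """Like the json input codec: keys of the parsed JSON object become
    top-level event fields, then @version/@timestamp/host are added."""
    event = json.loads(line)
    event.setdefault("@version", "1")
    event.setdefault("@timestamp", datetime.now(timezone.utc).isoformat())
    event.setdefault("host", host)
    return event

print(decode_json_event('{"a":1}'))
```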
file input plugin
vim /etc/logstash/logstash.conf
input{
file {
path => [ "/tmp/a.log", "/tmp/b.log" ]
sincedb_path => "/var/lib/logstash/sincedb"
#records how far each file has been read
start_position => "beginning" # default is "end"
#where reading starts the first time a file is seen
type => "testlog"
#type name attached to each event
}
}
filter{
}
output{
stdout{
codec => "rubydebug"
}
}
touch /tmp/a.log
touch /tmp/b.log
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
In another terminal, write some data:
echo A_${RANDOM} > /tmp/a.log
echo B_${RANDOM} > /tmp/b.log
Back in the first terminal:
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
Settings: Default pipeline workers: 2
Pipeline main started
{
"message" => "A_8676",
"@version" => "1",
"@timestamp" => "2019-03-12T03:40:24.111Z",
"path" => "/tmp/a.log",
"host" => "logstash",
"type" => "testlog"
}
{
"message" => "B_15431",
"@version" => "1",
"@timestamp" => "2019-03-12T03:40:49.167Z",
"path" => "/tmp/b.log",
"host" => "logstash",
"type" => "testlog"
}
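The sincedb mechanism above can be sketched in Python — a minimal illustration (not Logstash's actual implementation) of persisting a byte offset so each pass returns only newly appended lines:

```python
import os

def read_new_lines(path, sincedb):
    """Sketch of the file input's sincedb idea: persist the byte offset
    already read, so each call returns only lines appended since then."""
    offset = 0
    if os.path.exists(sincedb):
        with open(sincedb) as f:
            offset = int(f.read() or 0)
    with open(path, "rb") as f:
        f.seek(offset)          # start where the previous pass stopped
        data = f.read()
    with open(sincedb, "w") as f:
        f.write(str(offset + len(data)))  # remember the new position
    return data.decode().splitlines()
```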
filter grok plugin
- The grok plugin:
– Parses all kinds of unstructured log data
– Uses regular expressions to turn unstructured data into structured data
– For group matching, the regular expressions must be written to fit the actual data
– Hard to write, but extremely widely applicable
Parsing Apache logs
192.168.1.254 - - [12/Mar/2019:11:51:31 +0800] "GET /favicon.ico HTTP/1.1" 404 209 "-" "
- Locate the directory of regex macros (patterns)
cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
vim grok-patterns #search for COMBINEDAPACHELOG
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
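What this macro expands to can be approximated with a plain Python regex — a simplified stand-in for illustration, not the exact grok definition:

```python
import re

# Simplified stand-in for grok's COMBINEDAPACHELOG (not the exact macro);
# named groups play the role of grok's %{...:name} captures.
APACHE_COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')

line = ('192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] '
        '"GET /noindex/css/open-sans.css HTTP/1.1" 200 5081 '
        '"http://192.168.1.65/" "Mozilla/5.0"')
fields = APACHE_COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["request"], fields["response"])
# 192.168.1.254 GET /noindex/css/open-sans.css 200
```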
input{
file {
path => [ "/var/log/httpd/access_log" ]
sincedb_path => "/var/lib/logstash/sincedb"
start_position => "beginning"
type => "testlog"
}
}
filter{
grok{
match => [ "message", "%{COMBINEDAPACHELOG}" ]
}
}
output{
stdout{
codec => "rubydebug"
}
}
- Parsed result
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
Settings: Default pipeline workers: 2
Pipeline main started
{
"message" => "192.168.1.254 - - [15/Sep/2018:18:25:46 +0800] \"GET /noindex/css/open-sans.css HTTP/1.1\" 200 5081 \"http://192.168.1.65/\" \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0\"",
"@version" => "1",
"@timestamp" => "2018-09-15T10:55:57.743Z",
"path" => "/tmp/a.log",
"host" => "logstash",
"type" => "testlog",
"clientip" => "192.168.1.254",
"ident" => "-",
"auth" => "-",
"timestamp" => "15/Sep/2018:18:25:46 +0800",
"verb" => "GET",
"request" => "/noindex/css/open-sans.css",
"httpversion" => "1.1",
"response" => "200",
"bytes" => "5081",
"referrer" => "\"http://192.168.1.65/\"",
"agent" => "\"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0\""
}
...
Optimization
logstash depends on a Java runtime and is very resource-hungry, so the much lighter filebeat can stand in for it on the web servers.
Install filebeat on the web server to collect the access_log data and ship it to logstash.
logstash accepts the filebeat data through its beats input plugin, processes it, writes it to the es cluster, and Kibana then presents it.
- On the web server:
yum -y install filebeat
vim /etc/filebeat/filebeat.yml
paths:
- /var/log/httpd/access_log #path to the log; "- " is YAML list syntax
document_type: apachelog #document type
elasticsearch: #comment this out
hosts: ["localhost:9200"] #comment this out
logstash: #uncomment this
hosts: ["192.168.1.57:5044"] #uncomment and set the IP of the logstash host
systemctl restart filebeat
systemctl enable filebeat
- On the logstash host:
vim /etc/logstash/logstash.conf
input{
stdin{ codec => "json" }
beats{
port => 5044
}
}
filter{
if [type] == "apachelog"{
grok{
match => ["message", "%{COMBINEDAPACHELOG}"]
}
}
}
output{
stdout{ codec => "rubydebug" }
if [type] == "apachelog"{
elasticsearch {
hosts => ["es1:9200", "es2:9200", "es3:9200"]
index => "apachelog"
flush_size => 2000
idle_flush_time => 10
}
}
}
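The flush_size / idle_flush_time pair above can be sketched in Python — a minimal buffer (an illustration, not the actual elasticsearch output code) that emits a batch when it is full or when enough time has passed since the last flush:

```python
import time

class BulkBuffer:
    """Sketch of flush_size / idle_flush_time: collect events and emit a
    batch when the buffer is full or the idle time has elapsed."""
    def __init__(self, flush_size=2000, idle_flush_time=10, now=time.monotonic):
        self.flush_size = flush_size
        self.idle_flush_time = idle_flush_time
        self.now = now                  # injectable clock, eases testing
        self.buffer = []
        self.last_flush = self.now()

    def add(self, event):
        """Buffer one event; return the flushed batch, or None."""
        self.buffer.append(event)
        full = len(self.buffer) >= self.flush_size
        idle = self.now() - self.last_flush >= self.idle_flush_time
        return self.flush() if full or idle else None

    def flush(self):
        batch, self.buffer = self.buffer, []
        self.last_flush = self.now()
        return batch

b = BulkBuffer(flush_size=3, idle_flush_time=999)
print(b.add("e1"), b.add("e2"), b.add("e3"))  # None None ['e1', 'e2', 'e3']
```

Batching like this is why events may not appear in Elasticsearch instantly: they sit in the buffer until either threshold is hit.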
/opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
Open another terminal and check that port 5044 came up
netstat -antup | grep 5044
firefox 192.168.1.58 #hit the web service to generate fresh access-log entries
Access Elasticsearch in a browser; the apachelog index is there
firefox http://192.168.1.55:9200/_plugin/head
Finally, build a pie chart in Kibana for an at-a-glance view. With that, the basic elk stack is deployed.