1. Download the software and plugins
http://www.elastic.co ---> Products ---> elasticsearch, logstash, kibana, filebeat, etc.
Downloads are available as tar, zip, rpm and other formats; pick whichever suits your needs.
or, for example (the URLs below point at older 2.x releases; the rest of this guide installs the 6.2.4 packages, downloaded the same way):
wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.4/elasticsearch-2.3.4.tar.gz
wget https://download.elastic.co/logstash/logstash/logstash-2.3.4.tar.gz
wget https://download.elastic.co/kibana/kibana/kibana-4.5.3-linux-x64.tar.gz
wget http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.tar.gz
2. Set up the installation environment
JDK 1.8 or later; either Oracle JDK or OpenJDK works (choose according to your situation)
# System tuning
vim /etc/security/limits.conf   # usually already done as part of initial OS tuning
* soft nofile 1024000
* hard nofile 1024000
# End of file
# vi /etc/sysctl.conf
vm.max_map_count = 655360
# sysctl -p
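After `sysctl -p`, the new values can be confirmed from the running system. A quick sanity check (the expected numbers assume the settings above were applied):

```shell
# Live kernel value set via vm.max_map_count (should print 655360 after sysctl -p)
cat /proc/sys/vm/max_map_count
# Soft open-files limit for the current shell (reflects limits.conf after re-login)
ulimit -n
```

Note that the limits.conf change only takes effect for new login sessions, so re-login (or restart the shell) before starting Elasticsearch.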
# Extract Elasticsearch
cd /opt/elk/
tar -xvf elasticsearch-6.2.4.tar.gz
mv elasticsearch-6.2.4 elasticsearch
cd elasticsearch/config
# Edit the configuration file
vim elasticsearch.yml
# Set the following options
cluster.name: zmkhua-es
node.name: sc-elk
path.data: /data/es/data
path.logs: /data/es/logs
network.host: 192.168.198.92
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
# Install the head plugin
ElasticSearch-Head is a web front end for interacting with an Elastic cluster.
Its main features:
It visualizes the cluster topology and supports index- and node-level operations
It offers a set of query APIs against the cluster and returns results as JSON or tables
It provides shortcut menus that surface various cluster states
Installing the head plugin on 5.x and later is more involved; it can no longer be done with a single command like `elasticsearch/bin/plugin install mobz/elasticsearch-head` as in 2.x.
# Install Node.js
# The head plugin is essentially a Node.js project, so Node must be installed and npm used to pull in its dependencies (think of npm as Maven). Node.js downloads: https://nodejs.org/en/download/
# wget https://nodejs.org/dist/v8.9.1/node-v8.9.1.tar.gz   # newer releases take too long to compile; use an older version
# tar zxf node-v8.9.1.tar.gz
#cd node-v8.9.1
# ./configure --prefix=/usr/local/node-8.9.1 && make -j 8 && make install   # the build takes quite a while, but CentOS 7 needs a newer Node.js than the distro ships
# ln -s /usr/local/node-8.9.1 /usr/local/node
# vim /etc/profile
############ nodejs ####################
export NODE_HOME=/usr/local/node
export PATH=$PATH:$NODE_HOME/bin
# source /etc/profile
# node -v
v8.9.1
# npm -v
5.5.1
Download the plugin package
# yum install git -y
# git clone https://github.com/mobz/elasticsearch-head.git   # download the head plugin source
# cd elasticsearch-head/
# npm install -g grunt --registry=https://registry.npm.taobao.org
# Install grunt from the Taobao npm mirror. Grunt is a Node.js-based build tool (packaging, minification, testing, task running); the head plugin is started through grunt.
npm WARN deprecated coffee-script@1.10.0: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
/usr/local/node-8.9.1/bin/grunt -> /usr/local/node-8.9.1/lib/node_modules/grunt/bin/grunt
+ grunt@1.0.1
added 92 packages in 10.604s
# ls -d node_modules/grunt
node_modules/grunt
# If that directory was not created, run: cd elasticsearch-head && npm install grunt --save
Install the head plugin
# npm install -g grunt-cli --registry=https://registry.npm.taobao.org
# npm install --registry=https://registry.npm.taobao.org   # install the head plugin's dependencies
Modify the configuration files
# mkdir /home/elk/plugin
# cp -rf /opt/elasticsearch-head /home/elk/plugin/head
# chown -R elk:elk /home/elk
$ vim /home/elk/plugin/head/Gruntfile.js
connect: {
        server: {
                options: {
                        port: 9100,
                        hostname: '*',        # add this line
                        base: '.',
                        keepalive: true
$ vim /home/elk/plugin/head/_site/app.js
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.14.60:9200";   # change this to the ES host and port
$ cd /home/elk/plugin/head/   # the start command must be run from this directory
$ grunt server   # the service is now running
>> Local Npm module "grunt-contrib-jasmine" not found. Is it installed?
(node:1446) ExperimentalWarning: The http2 module is an experimental API.
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
Fixing the head plugin being unable to connect to the cluster:
$ vim elasticsearch/config/elasticsearch.yml   # add the two lines below
http.cors.enabled: true
http.cors.allow-origin: "*"
$ /home/elk/elasticsearch/bin/elasticsearch -d   # restart the ES service
$ pwd
/home/elk/plugin/head
$ nohup grunt server &
# Create an elsearch user, because Elasticsearch cannot be started as root by default
useradd elsearch
su elsearch
# Start in the background
/opt/elk/elasticsearch/bin/elasticsearch -d
netstat -npltu
192.168.198.92:9200 0.0.0.0:* LISTEN 17755/java
192.168.198.92:9300 0.0.0.0:* LISTEN 17755/java
# Two ports should appear: 9200 is the HTTP/data port, 9300 is used for node-to-node communication within the ES cluster
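Once port 9200 answers, `curl -s http://192.168.198.92:9200/_cluster/health` returns a JSON health summary. A sketch of checking its `status` field without jq (the JSON below is a hypothetical sample of that response):

```shell
# Hypothetical /_cluster/health response; with a live node, fetch it with:
#   curl -s http://192.168.198.92:9200/_cluster/health
health='{"cluster_name":"zmkhua-es","status":"green","number_of_nodes":1}'
# Pull out the status field (green/yellow/red) with sed
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"    # prints green
```

A green or yellow status means the node is serving requests; red means some primary shards are unassigned.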
# Extract Logstash
cd /opt/elk
tar -xvf logstash-6.2.4.tar.gz
mv logstash-6.2.4 logstash
cd logstash/config
# Create a new configuration file
vim logstash-beats.conf
input {
  beats {
    port => 5044
  }
}
filter {
  if "nginx-accesslog" in [tags] {
    grok {
      match => { "message" => "%{HTTPDATE:timestamp}\|%{IP:remote_addr}\|%{IPORHOST:http_host}\|(?:%{DATA:http_x_forwarded_for}|-)\|%{DATA:request_method}\|%{DATA:request_uri}\|%{DATA:server_protocol}\|%{NUMBER:status}\|(?:%{NUMBER:body_bytes_sent}|-)\|(?:%{DATA:http_referer}|-)\|%{DATA:http_user_agent}\|(?:%{DATA:request_time}|-)\|"}
    }
    mutate {
      convert => ["status","integer"]
      convert => ["body_bytes_sent","integer"]
      convert => ["request_time","float"]
    }
    geoip {
      source => "remote_addr"
    }
    date {
      match => [ "timestamp","dd/MMM/YYYY:HH:mm:ss Z"]
    }
    useragent {
      source => "http_user_agent"
    }
  }
  if "sys-messages" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss" ]
    }
    #ruby {
    #  code => "event['@timestamp'] = event['@timestamp'].getlocal"
    #}
  }
}
output {
  elasticsearch {
    hosts => ["192.168.198.92:9200"]
    index => "logstash-%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
  }
  stdout { codec => rubydebug }
}
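The grok pattern above assumes nginx writes a pipe-delimited access log (hence the `\|` separators), so the nginx log_format must emit fields in exactly that order. A quick way to eyeball such a line before wiring it into Logstash is to split it on `|` in the shell; the sample line here is hypothetical:

```shell
# Hypothetical access-log line in the pipe-delimited layout the grok expects:
# timestamp|remote_addr|http_host|xff|method|uri|protocol|status|bytes|referer|agent|request_time|
line='07/May/2018:10:01:02 +0800|1.2.3.4|example.com|-|GET|/index.html|HTTP/1.1|200|512|-|curl/7.29.0|0.003|'
# Field 8 is the HTTP status that the grok captures as %{NUMBER:status}
status=$(printf '%s' "$line" | awk -F'|' '{print $8}')
echo "$status"    # prints 200
```

Before starting Logstash for real, the pipeline file itself can also be validated with `bin/logstash -f config/logstash-beats.conf --config.test_and_exit`.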
# The grok parsing rules live here
vim vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns
# Start Logstash in the background
nohup ./logstash -f ../config/logstash-beats.conf > nohup.log &
# Extract Kibana
cd /opt/elk/
tar -xvf kibana-6.2.4-linux-x86_64.tar.gz
mv kibana-6.2.4-linux-x86_64 kibana
cd kibana/config
vim kibana.yml
server.port: 5601
server.host: "192.168.198.92"
elasticsearch.url: "http://192.168.198.92:9200"
kibana.index: ".kibana"
cd /opt/elk/kibana/bin
# Start Kibana in the background
nohup ./kibana > nohup.log &
# Extract Filebeat
cd /opt/elk/
tar -xvf filebeat-6.2.4-linux-x86_64.tar.gz
mv filebeat-6.2.4-linux-x86_64 filebeat
cd filebeat
vim filebeat.yml
#filebeat.prospectors:
#- input_type: log
# paths:
# - /var/log/nginx/access.log
# document_type: nginx_access
# multiline.pattern: '^\['
# multiline.pattern: '^\sINFO|^\sERROR|^\sDEBUG|^\sWARN'   ## treat lines starting with INFO/ERROR/DEBUG/WARN as the start of a new event (for merging multi-line Java logs; a timestamp prefix also works)
# multiline.negate: true
# multiline.match: after
# exclude_lines: ['^DEBUG']
# include_lines: ["^ERROR", "^WARN","^INFO"]
#----------------------------- Logstash output --------------------------------
#output.logstash:
# # The Logstash hosts
# hosts: ["192.168.198.92:5044"]
#==================================================
#filebeat.prospectors:
#- input_type: log
# paths: /var/log/secure
# include_lines: [".*Failed.*",".*Accepted.*"]
#output.logstash:
# hosts: ["192.168.198.92:5044"]
#===========================================================
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx-accesslog"]
  document_type: nginx_access
output.logstash:
  hosts: ["192.168.198.92:5044"]
tail_files: when set to true, Filebeat starts watching at the end of each file and ships only newly appended lines as events, instead of re-sending the whole file from the beginning.
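tail_files is set per prospector; a minimal sketch of adding it to the config above (the path and tag are the same assumptions as before):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  tags: ["nginx-accesslog"]
  tail_files: true   # read from the end of the file; do not re-ship existing lines
```

This is mainly useful on first deployment, to avoid flooding Logstash with historical log content.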
# Start Filebeat in the background
nohup ./filebeat -c filebeat.yml > nohup.log &
Accessing Kibana through nginx
# Install nginx
yum install nginx -y
vim /etc/nginx/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen      19200;
        server_name apm.zmkhua.com;
        #root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        #include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass       http://192.168.198.92:9200/;
            proxy_redirect   off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }

    server {
        listen      15601;
        server_name apm.zmkhua.com;
        #root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        #include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass       http://192.168.198.92:5601/;
            proxy_redirect   off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
systemctl start nginx
## To access Kibana, browse to:
http://192.168.198.92:15601
curl -XDELETE '192.168.198.92:9200/logstash-finance_tomcat-47_log-2018.05.11?pretty'   # delete an index
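The `logstash-%{type}-%{+YYYY.MM.dd}` index pattern from the Logstash output expands to one index per type per day, so maintenance commands like the DELETE above can build the name from the current date (the type name here is an assumption for illustration):

```shell
# Build today's index name for a hypothetical nginx_access type
idx="logstash-nginx_access-$(date +%Y.%m.%d)"
echo "$idx"
# e.g. curl -XDELETE "192.168.198.92:9200/${idx}?pretty" would drop today's index
```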
# If Filebeat connections are dropped as idle, raise the timeout in the Logstash beats input (default is 60 seconds), e.g.:
client_inactivity_timeout => 300