1. Elasticsearch Installation and Configuration

1. Download and install the GPG key

[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2. Add the yum repository

[root@elk-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3. Install Elasticsearch

[root@elk-node1 ~]# yum install -y elasticsearch

4. Install supporting software

Install the EPEL repository (epel-release-latest-7.noarch.rpm) first; otherwise yum will fail with a "No Package ..." error for the packages below.

[root@elk-node1 ~]# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@elk-node1 ~]# rpm -ivh epel-release-latest-7.noarch.rpm

Install Redis

[root@elk-node1 ~]# yum install -y redis

Install Nginx

[root@elk-node1 ~]# yum install -y nginx

Install Java

[root@elk-node1 ~]# yum install -y java

After installing Java, verify the version:

[root@elk-node1 ~]# java -version
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)

5. Configuration and Deployment

5.1. Edit the configuration file
[root@elk-node1 ~]# mkdir -p /home/elk-data
[root@elk-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml

[Clear the file's existing contents and configure the following]

  • cluster.name: Accommate # cluster name (nodes in the same cluster must use the same name)
  • node.name: elk-Jbs # node name; best kept the same as the hostname
  • path.data: /home/elk-data # data directory
  • path.logs: /var/log/elasticsearch/ # log directory
  • bootstrap.mlockall: true # lock memory so it is never swapped out
  • network.host: 0.0.0.0 # listen address
  • http.port: 9200 # HTTP port
5.2. Start and verify
[root@elk-node1 ~]# chown -R elasticsearch.elasticsearch /home/elk-data/
[root@elk-node1 ~]# systemctl start elasticsearch
[root@elk-node1 ~]# systemctl status elasticsearch
CGroup: /system.slice/elasticsearch.service
└─3005 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSI...

Note: the output above shows Elasticsearch running with a minimum heap of 256m and a maximum of 1g.
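If you want a different heap size, the RPM package reads it from /etc/sysconfig/elasticsearch; a minimal sketch, assuming the standard 2.x RPM layout (the 1g value is just an example):

[root@elk-node1 ~]# vim /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=1g                  # heap size variable read by the 2.x startup scripts
[root@elk-node1 ~]# systemctl restart elasticsearch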

[root@linux-node1 src]# netstat -antlp |egrep "9200|9300"
tcp6 0 0 :::9200 :::* LISTEN 3005/java 
tcp6 0 0 :::9300 :::* LISTEN 3005/java

Then access it over the web (Chrome is the recommended browser): http://192.168.153.150:9200/

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
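
Strictly speaking, the JSON above (count plus _shards) is the response of a count query rather than the banner returned by the root URL; the same output can be fetched from the shell with curl (assuming curl is installed):

[root@elk-node1 ~]# curl -i -XGET 'http://192.168.153.150:9200/_count?pretty'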

Checking things this way on the command line feels clumsy, though.

5.3. Next, install plugins and use them for browsing instead
5.3.1. Install the head plugin

----------------------------------------------------------------------------------------------------

a) Installation method 1

[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

b) Installation method 2

First download the head plugin into the /usr/local/src directory.

Download URL: https://github.com/mobz/elasticsearch-head

----------------------------------------------------------------

[root@elk-node1 src]# unzip elasticsearch-head-master.zip
[root@elk-node1 src]# ls
elasticsearch-head-master elasticsearch-head-master.zip

Create a head directory under /usr/share/elasticsearch/plugins.

Then move everything extracted from elasticsearch-head-master.zip into /usr/share/elasticsearch/plugins/head.

Finally, restart the elasticsearch service.

[root@elk-node1 src]# cd /usr/share/elasticsearch/plugins/
[root@elk-node1 plugins]# mkdir head
[root@elk-node1 plugins]# ls
head
[root@elk-node1 plugins]# cd head
[root@elk-node1 head]# cp -r /usr/local/src/elasticsearch-head-master/* ./
[root@elk-node1 head]# pwd
/usr/share/elasticsearch/plugins/head

[root@elk-node1 head]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 head]# ll
total 40
-rw-r--r--. 1 elasticsearch elasticsearch 104 Sep 28 01:57 elasticsearch-head.sublime-project
-rw-r--r--. 1 elasticsearch elasticsearch 2171 Sep 28 01:57 Gruntfile.js
-rw-r--r--. 1 elasticsearch elasticsearch 3482 Sep 28 01:57 grunt_fileSets.js
-rw-r--r--. 1 elasticsearch elasticsearch 1085 Sep 28 01:57 index.html
-rw-r--r--. 1 elasticsearch elasticsearch 559 Sep 28 01:57 LICENCE
-rw-r--r--. 1 elasticsearch elasticsearch 795 Sep 28 01:57 package.json
-rw-r--r--. 1 elasticsearch elasticsearch 100 Sep 28 01:57 plugin-descriptor.properties
-rw-r--r--. 1 elasticsearch elasticsearch 5211 Sep 28 01:57 README.textile
drwxr-xr-x. 5 elasticsearch elasticsearch 4096 Sep 28 01:57 _site
drwxr-xr-x. 4 elasticsearch elasticsearch 29 Sep 28 01:57 src
drwxr-xr-x. 4 elasticsearch elasticsearch 66 Sep 28 01:57 test
[root@elk-node1 _site]# systemctl restart elasticsearch
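
To confirm that Elasticsearch actually picked the plugin up after the restart, the _cat API lists loaded plugins (a quick check, assuming curl is available):

[root@elk-node1 ~]# curl 'http://192.168.153.150:9200/_cat/plugins?v'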

Access the plugin at http://192.168.153.150:9200/_plugin/head/

(screenshot)

First insert a sample record to test it.

Open the "复合查询" (Any Request) tab, select POST, enter a path such as /index-demo/test, and type the document body underneath (do not miss the commas between the lines of the JSON).

After entering the data (here wangxiaolei and hello world), click "Validate JSON" and then "Submit Request". On success, the right-hand panel shows the index, type, version and so on, with failed: 0 indicating the write succeeded.

(screenshot)

Then read the test record back, as follows:

In the same "复合查询" tab, select GET, append the id returned by the POST above to /index-demo/test/, and leave the body empty (i.e. just {}).

Then click "Validate JSON" -> "Submit Request"; the right-hand panel now shows the data inserted above (wangxiaolei, hello world).

(screenshot)
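
The same insert-and-fetch can also be done from the command line; a minimal sketch (the index and type follow the example above, while the id 1 and the field names are illustrative choices, not values from the original test):

[root@elk-node1 ~]# curl -XPUT 'http://192.168.153.150:9200/index-demo/test/1' -d '{"user":"wangxiaolei","mesg":"hello world"}'
[root@elk-node1 ~]# curl -XGET 'http://192.168.153.150:9200/index-demo/test/1?pretty'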

5.3.2. Install the kopf monitoring plugin

--------------------------------------------------------------------------------------------------------------------

a) Installation method 1

[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

b) Installation method 2

First download the kopf monitoring plugin into the /usr/local/src directory.

Download URL: https://github.com/lmenezes/elasticsearch-kopf

----------------------------------------------------------------

[root@elk-node1 src]# unzip elasticsearch-kopf-master.zip
[root@elk-node1 src]# ls
elasticsearch-kopf-master elasticsearch-kopf-master.zip

Create a kopf directory under /usr/share/elasticsearch/plugins.

Then move everything extracted from elasticsearch-kopf-master.zip into /usr/share/elasticsearch/plugins/kopf.

Finally, restart the elasticsearch service.

[root@elk-node1 src]# cd /usr/share/elasticsearch/plugins/
[root@elk-node1 plugins]# mkdir kopf
[root@elk-node1 plugins]# cd kopf
[root@elk-node1 kopf]# cp -r /usr/local/src/elasticsearch-kopf-master/* ./
[root@elk-node1 kopf]# pwd
/usr/share/elasticsearch/plugins/kopf
[root@elk-node1 kopf]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 kopf]# ll
total 40
-rw-r--r--. 1 elasticsearch elasticsearch 237 Sep 28 16:28 CHANGELOG.md
drwxr-xr-x. 2 elasticsearch elasticsearch 22 Sep 28 16:28 dataset
drwxr-xr-x. 2 elasticsearch elasticsearch 73 Sep 28 16:28 docker
-rw-r--r--. 1 elasticsearch elasticsearch 4315 Sep 28 16:28 Gruntfile.js
drwxr-xr-x. 2 elasticsearch elasticsearch 4096 Sep 28 16:28 imgs
-rw-r--r--. 1 elasticsearch elasticsearch 1083 Sep 28 16:28 LICENSE
-rw-r--r--. 1 elasticsearch elasticsearch 1276 Sep 28 16:28 package.json
-rw-r--r--. 1 elasticsearch elasticsearch 102 Sep 28 16:28 plugin-descriptor.properties
-rw-r--r--. 1 elasticsearch elasticsearch 3165 Sep 28 16:28 README.md
drwxr-xr-x. 6 elasticsearch elasticsearch 4096 Sep 28 16:28 _site
drwxr-xr-x. 4 elasticsearch elasticsearch 27 Sep 28 16:28 src
drwxr-xr-x. 4 elasticsearch elasticsearch 4096 Sep 28 16:28 tests
[root@elk-node1 _site]# systemctl restart elasticsearch

-----------------------------------------------------------------------------------------------------

Access the plugin (install the plugins on the elk-node2 node in advance as well, otherwise the cluster will show up in a yellow warning state):

http://192.168.153.150:9200/_plugin/kopf/#!/cluster

(screenshot)
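
The cluster state that kopf displays can also be queried directly with the health API; yellow simply means replica shards have no second node to live on yet (assuming curl):

[root@elk-node1 ~]# curl 'http://192.168.153.150:9200/_cluster/health?pretty'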

2. Logstash Installation and Configuration

Logstash has to be installed on the client machines; install it on both elk-node1 and elk-node2.

Base environment setup (the clients run Logstash, the collected data is written into Elasticsearch, and it can then be browsed through the Elasticsearch web interface).

1. Download and install the GPG key

[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2. Add the yum repository

[root@hadoop-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3. Install Logstash

[root@elk-node1 ~]# yum install -y logstash

4. Start-up check (make sure Elasticsearch is running before testing Logstash)

[root@elk-node1 ~]# systemctl restart elasticsearch
[root@elk-node1 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2016-11-07 18:33:28 CST; 3 days ago
Docs: http://www.elastic.co
Main PID: 8275 (java)
CGroup: /system.slice/elasticsearch.service
└─8275 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFrac...
..........
..........

Data testing

4.1. Basic input and output
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                                      # typed input
2016-11-11T06:41:07.690Z elk-node1 hello                   # resulting output
wangshibo                                                  # typed input
2016-11-11T06:41:10.608Z elk-node1 wangshibo               # resulting output
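
For more structured console output, the stdout plugin also accepts the rubydebug codec, which prints each event as a full hash; this is optional but handy when debugging filters:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'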

4.2. Write the output into Elasticsearch

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.153.160:9200"]} }'
Settings: Default filter workers: 1
Logstash startup completed                       # type the test data below
123456 
wangxiaolei
accommate
hahaha

The data written into Elasticsearch can then be viewed in the Elasticsearch head interface, as shown below:

(screenshots)
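
Besides the head interface, the newly created logstash-* index can also be confirmed from the shell (assuming curl):

[root@elk-node1 ~]# curl 'http://192.168.153.160:9200/_cat/indices?v'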

5. Logstash configuration

5.1. Collect system logs
[root@elk-node1 ~]# vim  file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}
 
output {
    elasticsearch {
       hosts => ["192.168.153.160:9200"]
       index => "system-%{+YYYY.MM.dd}"
    }
}

Run the collection with the command below. It stays in the foreground as long as it runs, which means the log is being monitored and collected; if it is interrupted, collection stops. That is why it is pushed into the background.

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
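
Note that a bare & still ties the process to the current terminal; if the session may be closed, nohup (or a screen session, as used for Kibana later) keeps it alive. A sketch, with the log path being an arbitrary choice:

[root@elk-node1 ~]# nohup /opt/logstash/bin/logstash -f file.conf > /var/log/logstash_file.log 2>&1 &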

Log in to the Elasticsearch web interface to view this host's system log entries:

(screenshot)

3. Kibana Installation and Configuration

1. Install Kibana:

[root@elk-node1 ~]# cd /usr/local/src
[root@elk-node1 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# tar zxf kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# mv kibana-4.3.1-linux-x64 /usr/local/
[root@elk-node1 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana

2. Edit the configuration file:

[root@elk-node1 config]# pwd
/usr/local/kibana/config
[root@elk-node1 config]# cp kibana.yml kibana.yml.bak
[root@elk-node1 config]# vim kibana.yml 
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.153.160:9200"

kibana.index: ".kibana" # Note: the .kibana index stores Kibana's own data; never delete it. It is the key to presenting ES data through Kibana's web UI. Once this is configured, the .kibana index will appear in the ES web interface.

Because Kibana keeps running in the foreground, either keep a dedicated terminal window open for it or use screen.

Install screen and use it to start Kibana:

[root@elk-node1 ~]# yum -y install screen
[root@elk-node1 ~]# screen                          # this opens another virtual terminal session
[root@elk-node1 ~]# /usr/local/kibana/bin/kibana
log [18:23:19.867] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [18:23:19.911] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [18:23:19.941] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [18:23:19.953] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [18:23:19.963] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [18:23:19.995] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [18:23:20.004] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [18:23:20.010] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready

Then press Ctrl+a d to detach; the Kibana service started inside that screen session keeps running.

[root@elk-node1 ~]# screen -ls
There is a screen on:
15041.pts-0.elk-node1 (Detached)
1 Socket in /var/run/screen/S-root.
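
To get back into that detached session later, reattach with screen -r followed by the id shown by screen -ls:

[root@elk-node1 ~]# screen -r 15041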

3. Access Kibana:

http://112.110.115.10:15601/

As shown below, to add the system log collection configured above, enter system* as the index pattern, and likewise for other log types (the collected indices can be seen in the Elasticsearch head interface).

(screenshot)

Then click Discover at the top and inspect the data there:

(screenshot)

To view the log content, click "Discover" --> "message" and then the "add" button next to it.

Note:

Whatever fields you want displayed with the log entries on the right, click "add" next to the corresponding field on the left.

As shown below, the message and path fields have been added:

(screenshots)

Now the information we added can be seen.

(screenshot)

To add a new index pattern for a log collection, click Settings -> +Add New, e.g. for the system logs above. Do not forget the trailing *.

4. Collect Nginx access logs

Edit the Nginx configuration, adding the following to the http and server blocks of nginx.conf respectively:

In the http block:

log_format json '{"@timestamp":"$time_iso8601",'
                           '"@version":"1",'
                           '"client":"$remote_addr",'
                           '"url":"$uri",'
                           '"status":"$status",'
                           '"domain":"$host",'
                           '"host":"$server_addr",'
                           '"size":$body_bytes_sent,'
                           '"responsetime":$request_time,'
                           '"referer": "$http_referer",'
                           '"ua": "$http_user_agent"'
'}';

In the server block:

access_log /disk/logs/nginx/access_json.log json;

In this experiment, everything was placed in the http block.

(screenshot)

Restart the Nginx service

[root@localhost ~]# systemctl restart nginx

Write the file.conf file

[root@localhost ~]# vim file.conf
input{
        file{
                path => "/var/log/messages"
                type => "system"
                start_position => "beginning"
        }
}
input{
        file{
                path => "/disk/logs/nginx/access_json.log"
                type => "nginx_access"
                codec => json
                start_position => "beginning"
        }
}
output{
        if [type] == "system"{
                elasticsearch{
                        hosts => ["192.168.153.160:9200"]
                        index => "system-%{+YYYY.MM.dd}"
                }
        }
        if [type] == "nginx_access"{
                elasticsearch{
                        hosts => ["192.168.153.160:9200"]
                        index => "nginx_access-%{+YYYY.MM.dd}"
                }
        }
}

Add the --configtest flag to check the configuration file for syntax errors or bad settings; this step is important!

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf --configtest
Configuration OK

Then run the logstash command again (since it was already put in the background above, this is not strictly necessary; alternatively, kill the previous process first and restart it in the background), and then hit the Nginx site a few times to generate test traffic.

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &
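
If nobody is browsing the site yet, a few requests can be generated from the shell so the new index receives data (assuming Nginx is listening on its default port 80 on this host):

[root@elk-node1 ~]# for i in $(seq 1 10); do curl -s -o /dev/null http://192.168.153.160/; done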

Log in to the Elasticsearch interface to check:

(screenshot)

Integrate the Nginx logs into the Kibana interface, as follows:

(screenshots)