ELK is short for Elasticsearch, Logstash, and Kibana. These three are the core components of the stack, but not the whole of it.

Elasticsearch is a real-time full-text search and analytics engine that collects, analyzes, and stores data. It is a scalable, distributed system that exposes REST and Java APIs for efficient search, and it is built on top of the Apache Lucene search engine library.

Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (for example RabbitMQ), and JMX, and it can output data in many ways, including email, websockets, and Elasticsearch.

Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve the data, and it lets users not only build custom dashboard views of their own data but also query and filter that data in ad-hoc ways.



一. Environment preparation

Disable SELinux

Disable the firewall


CentOS 7.2 minimal


A: 192.168.1.241    es && kibana && nginx


B: 192.168.1.242    logstash

 

C: 192.168.1.221    Filebeat agent (client): the client server that ships its logs to Logstash



Install a Java environment (1.8 or later) on every server: jdk-8u131-linux-x64.rpm


rpm -ivh jdk-8u131-linux-x64.rpm


[root@logstach java]# java -version

java version "1.8.0_131"

Java(TM) SE Runtime Environment (build 1.8.0_131-b11)

Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)



[root@logstach java]# which java

/usr/bin/java


Note: when Java is installed from the rpm package, the default installation path is /usr/java. Keep that in mind.


vi  /etc/profile


export JAVA_HOME=/usr/java/jdk1.8.0_131

export JRE_HOME=/usr/java/jdk1.8.0_131/jre

export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib



source /etc/profile
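
A quick check that the variables took effect (paths as configured above):

echo $JAVA_HOME      # should print /usr/java/jdk1.8.0_131
java -version        # should report version 1.8.0_131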







二. Install Logstash


On server B


vim /etc/yum.repos.d/elasticsearch.repo                    # add the Elastic yum repository


[logstash-5.x]

name=Elastic repository for 5.x packages

baseurl=https://artifacts.elastic.co/packages/5.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md





yum makecache

yum install logstash -y        # logstash-5.5.1


cd /usr/share/logstash

bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'


[root@logstach logstash]# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs to console

09:36:20.791 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}

09:36:20.899 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

The stdin plugin is now waiting for input:

09:36:21.008 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

hello world

{

    "@timestamp" => 2017-08-12T01:36:29.687Z,

      "@version" => "1",

          "host" => "0.0.0.0",

       "message" => "hello world"

}


The errors printed in red can be ignored. Once the line "logstash.agent - Successfully started Logstash API endpoint {:port=>9600}" appears, type hello world and you should see the event echoed back as above.




Update the environment variables


vi /etc/profile.d/logstash.sh


export PATH=/usr/share/logstash/bin:$PATH



source /etc/profile



The logstash command can now be used directly:


logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'



Create a simple configuration file:



vi  /etc/logstash/conf.d/sample.conf


input  {

    stdin   {}


}


output {

    stdout  {

        codec  => rubydebug

    }


}



[root@logstach conf.d]# logstash -f /etc/logstash/conf.d/sample.conf    # start Logstash with this config
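
You can also have Logstash validate a config file without starting it; -t (--config.test_and_exit) is Logstash's built-in config check:

logstash -f /etc/logstash/conf.d/sample.conf -t      # exits after checking the config syntax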




Generate an SSL certificate

Since we will use Filebeat to ship logs from the client servers to the ELK stack, we need to create an SSL certificate and key pair. Filebeat uses this certificate to verify the identity of the Logstash server.

Use the following commands (substituting your server's FQDN) to generate the SSL certificate and private key in the appropriate location (/etc/pki/tls/):



[root@linuxprobe ~]# cd /etc/pki/tls


[root@linuxprobe tls]# openssl req -subj '/CN=ELK_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


With the FQDN used in this walkthrough filled in:


[root@linuxprobe tls]# openssl req -subj '/CN=kibana.aniu.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt




Edit /etc/pki/tls/openssl.cnf, find the [ v3_ca ] section, and add the following line, using the Logstash server's IP address:

subjectAltName = IP:192.168.1.242    (the Logstash IP)

After adding it, regenerate the certificate with the openssl command above so that the subjectAltName is actually embedded in it.
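
Put together, the sequence on the Logstash server looks roughly like this (reusing the same openssl command as above):

vi /etc/pki/tls/openssl.cnf

[ v3_ca ]
subjectAltName = IP:192.168.1.242

cd /etc/pki/tls
openssl req -subj '/CN=kibana.aniu.co/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt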





Otherwise Filebeat will fail to start with an error like:

filebeat x509: cannot validate certificate for 192.168.1.242 because it doesn't conta




The logstash-forwarder.crt file will later be copied to every server that ships its logs to Logstash (server C in this setup).
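
Note that this walkthrough does not show the Logstash pipeline that actually listens for Beats connections on port 5044. A minimal sketch of what it might look like on server B is given below; the filename and the filebeat-* index naming scheme are assumptions, not something defined earlier in this article:

vi /etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"    # the certificate generated above
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.1.241:9200"]                                   # Elasticsearch on server A
    index => "filebeat-%{[@metadata][beat]}-%{+YYYY.MM.dd}"           # assumed index naming scheme
  }
}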





三. Install Elasticsearch && Kibana


On server A


vim /etc/yum.repos.d/elasticsearch.repo


[elasticsearch-5.x]

name=Elasticsearch repository for 5.x packages

baseurl=https://artifacts.elastic.co/packages/5.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md


yum makecache


yum install elasticsearch -y


systemctl daemon-reload


systemctl enable elasticsearch.service



systemctl start elasticsearch.service





[root@es bin]# ./elasticsearch

Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config

Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config

at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)

at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)

at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)

at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)

at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)

at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)

at java.nio.file.Files.readAttributes(Files.java:1737)

at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)

at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)

at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)

at java.nio.file.Files.walkFileTree(Files.java:2662)

at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:150)

at org.elasticsearch.common.logging.LogConfigurator.configure(LogConfigurator.java:122)

at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:316)

at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:123)

at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:114)

at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:67)

at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122)

at org.elasticsearch.cli.Command.main(Command.java:88)

at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91)

at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84)

Refer to the log for complete error details.




This error is mainly because Elasticsearch cannot find its configuration. When you start elasticsearch directly from the installation directory, it only looks for a config folder relative to the current directory; starting it as a service should locate the configuration correctly, but that was not tested here.

Knowing the cause, we can simply copy the Elasticsearch configuration from /etc into place:

 cp -r /etc/elasticsearch /usr/share/elasticsearch/config

With that done, the previous error no longer appears. Try again:
bin/elasticsearch

As expected, it now fails with a different error:

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.1.2.jar:5.1.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:100) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:176) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:306) ~[elasticsearch-5.1.2.jar:5.1.2]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-5.1.2.jar:5.1.2]
        ... 6 more

The cause of this error is that Elasticsearch refuses to run as root, so we need to create a dedicated user to start it (reference: https://my.oschina.net/topeagle/blog/591451?fromerr=mzOr2qzZ).

The steps are as follows:

groupadd elsearch
useradd elsearch -g elsearch -p elsearch
cd /usr/share
chown -R elsearch:elsearch elasticsearch
su elsearch

Now start Elasticsearch as this user. In most cases it will come up successfully at this point, but you may still hit errors such as:

hcw-X450VC% ./elasticsearch
2017-01-17 21:03:31,158 main ERROR Could not register mbeans java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
    at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
    at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.logging.log4j.core.jmx.Server.register(Server.java:389)
    at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:167)
    at org.apache.logging.log4j.core.jmx.Server.reregisterMBeansAfterReconfigure(Server.java:140)
    at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:541)
    at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:258)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:206)
    at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:220)
    at org.apache.logging.log4j.core.config.Configurator.initialize(Configurator.java:197)
    at org.elasticsearch.common.logging.LogConfigurator.configureStatusLogger(LogConfigurator.java:125)
    at org.elasticsearch.common.logging.LogConfigurator.configureWithoutConfig(LogConfigurator.java:67)
    at org.elasticsearch.cli.Command.main(Command.java:85)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82)

This happens because Elasticsearch needs to read and write its configuration files. The newly created elsearch user does not have the required permissions on the config folder, so it still fails. The fix is to switch back to root and grant the permissions:

sudo -i
chmod -R 775 config        # run from /usr/share/elasticsearch

Now it starts up properly. Let's check the result:


(screenshot: output of a successful Elasticsearch startup)




[root@es ~]# curl 127.0.0.1:9200

{

  "name" : "tZhA-Rw",

  "cluster_name" : "elasticsearch",

  "cluster_uuid" : "OzC1IJd3Sg66bwDv7AAUHw",

  "version" : {

    "number" : "5.5.1",

    "build_hash" : "19c13d0",

    "build_date" : "2017-07-18T20:44:24.823Z",

    "build_snapshot" : false,

    "lucene_version" : "6.6.0"

  },

  "tagline" : "You Know, for Search"
}



To open it up for access from other machines over the network, so that the following works:

http://192.168.1.241:9200

{
  "name" : "qO-BHYV",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "7nmo0Io_SDOQ5Gt7AV7fjw",
  "version" : {
    "number" : "5.5.1",
    "build_hash" : "19c13d0",
    "build_date" : "2017-07-18T20:44:24.823Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}



You need to edit /etc/elasticsearch/elasticsearch.yml. Even though we copied the configuration into /usr/share/elasticsearch/config earlier, the file that actually takes effect for the service is /etc/elasticsearch/elasticsearch.yml.

Pay special attention to this:

cluster.name: ptsearch                            # cluster name (must be identical on every node in the same cluster)

node.name: yunwei-ts-100-70                       # node name; using the hostname is recommended

path.data: /data/elasticsearch                    # where the data is stored

path.logs: /var/log/elasticsearch/                # where the logs are stored

bootstrap.memory_lock: true                       # lock the heap in memory so it never gets swapped out (5.x name; pre-5.0 this was bootstrap.mlockall)

network.host: 0.0.0.0                             # network binding

http.port: 9200                                   # HTTP port

discovery.zen.ping.unicast.hosts: ["172.16.100.71","172.16.100.111"]    # unicast discovery: list the cluster nodes other than this machine

# discovery.zen.ping.multicast.enabled: false     # multicast discovery was removed in 5.x; leave this out on 5.5

The configuration above must be applied on every Elasticsearch node in the cluster. Note that node.name should be each host's own name, and discovery.zen.ping.unicast.hosts should list the cluster nodes other than the local machine.
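
After editing the file, restart the service and confirm that Elasticsearch is reachable on the new binding (standard Elasticsearch APIs):

systemctl restart elasticsearch.service

curl http://192.168.1.241:9200/_cluster/health?pretty      # status should come back green or yellow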




四. Install Kibana

On server A:

vi /etc/yum.repos.d/kibana.repo
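
The contents of kibana.repo are not spelled out above; presumably it points at the same Elastic 5.x yum repository used for Logstash and Elasticsearch, along these lines:

[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md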

yum makecache

yum install kibana -y

systemctl daemon-reload

 systemctl enable kibana.service

 systemctl start kibana.service

 vi /etc/kibana/kibana.yml

 set server.host: "192.168.1.241"

 systemctl restart kibana.service



 Visit:  http://IP:5601     # if the page just keeps loading, try a different browser


Install an Nginx reverse proxy

If Kibana is configured to listen only on localhost, a reverse proxy must be set up to allow external access to it. Here Nginx is used as that reverse proxy.


Create the official Nginx repository and install Nginx:

vi /etc/yum.repos.d/nginx.repo


[nginx]

name=nginx repo

baseurl=http://nginx.org/packages/centos/$releasever/$basearch/

gpgcheck=0

enabled=1


yum install nginx httpd-tools -y


[root@es kibana]# htpasswd -c -m /etc/nginx/htpasswd.users kibanaadmin

New password: 

Re-type new password: 

Adding password for user kibanaadmin



vi /etc/nginx/conf.d/kibana.conf 


server {

    listen       80;

    server_name  kibana.aniu.co;

    access_log  /var/log/nginx/kibana.aniu.co.access.log main;

    error_log   /var/log/nginx/kibana.aniu.co.error.log;

    auth_basic "Restricted Access";

    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {

        proxy_pass http://localhost:5601;

        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;

        proxy_set_header Connection 'upgrade';

        proxy_set_header Host $host;

        proxy_cache_bypass $http_upgrade;

    }

}
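
You can check the proxy configuration before starting Nginx:

nginx -t      # should report that the syntax is ok and the test is successful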



systemctl start nginx

systemctl enable nginx


Add a record to the hosts file on your local Windows machine:


192.168.1.241       kibana.aniu.co


Visit http://kibana.aniu.co/


Log in as kibanaadmin with the password you set above (tongbang123 in this example).




Load the Kibana dashboards

Elastic provides several sample Kibana dashboards and Beats index patterns to help you get started with Kibana. Although the dashboards themselves are not used in this tutorial, we load them anyway so that we can use the Filebeat index pattern they include.

First, download the sample dashboards archive:


  1. Download:  wget http://download.elastic.co/beats/dashboards/beats-dashboards-1.1.1.zip
  2. Unpack:    unzip beats-dashboards-1.1.1.zip
  3. Enter the directory:  cd beats-dashboards-1.1.1/
  4. Run:       ./load.sh    or    ./load.sh -url http://192.168.1.241:9200
  5. This loads the dashboard template configuration into Elasticsearch.


Load the Filebeat index template into Elasticsearch

Because we plan to use Filebeat to ship logs into Elasticsearch, we should load a Filebeat index template. The index template configures Elasticsearch to analyze the incoming Filebeat fields in an intelligent way.

First, download the Filebeat index template:


cd /usr/local/src
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json



[root@linuxprobe src]# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
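
If the template loads successfully, Elasticsearch should answer with an acknowledgement similar to:

{
  "acknowledged" : true
}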




Set up Filebeat (add client servers)

Perform the following steps on every CentOS or RHEL 7 server that will send logs to the ELK stack.

Copy the SSL certificate

On the Logstash server, copy the SSL certificate created earlier to the client server:


On server C:


mkdir -p /etc/pki/tls/certs




On server B:


scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.1.221:/etc/pki/tls/certs
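
On server C you can confirm that the certificate arrived intact with a plain openssl inspection (nothing Filebeat-specific):

openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates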


Install the Filebeat package

On server C:


vi /etc/yum.repos.d/elasticsearch.repo


[elasticsearch-5.x]

name=Elasticsearch repository for 5.x packages

baseurl=https://artifacts.elastic.co/packages/5.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md



[root@monitor certs]# yum makecache


[root@monitor locale]# yum install filebeat -y


systemctl enable filebeat

systemctl start filebeat




vi /etc/filebeat/filebeat.yml


filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/secure         # added
    - /var/log/messages       # added
    - /var/log/*.log

#output.elasticsearch:                            # disabled: logs go through Logstash instead
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["192.168.1.242:5044"]                   # Logstash on server B (the flattened original showed 192.168.1.241, but Logstash runs on 192.168.1.242 in this setup)
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]




less /var/log/filebeat/filebeat      # view the Filebeat log
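
Once events are flowing end to end, Filebeat data should show up in Elasticsearch. A quick way to check, assuming Logstash writes into a filebeat-* index as in the sketch earlier:

curl 'http://192.168.1.241:9200/filebeat-*/_search?pretty'      # should return hits from /var/log/secure and /var/log/messages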