Lab environment:
【node2】192.168.1.102: ELS
【node3】192.168.1.103: ELS + filebeat + nginx
【node1】192.168.1.104: logstash-server + kibana + redis
ELS cluster configuration procedure:
1. Start the nginx service and configure filebeat so that it collects the log entries and ships them to redis:
systemctl start nginx; ss -ntl | grep 80
yum install filebeat-5.1.1-x86_64.rpm
# Configure the filebeat service
cd /etc/filebeat/
cp filebeat.full.yml filebeat.yml
vim filebeat.yml
- input_type: log
  paths:
    - /var/log/nginx/access.log
output.redis:
  enabled: true
  hosts: ["192.168.1.104:6379"]
  port: 6379
  key: filebeat-nginxlog
Note: shipping directly to the ELS cluster is enabled by default; that output section must be commented out here.
systemctl start filebeat.service
On node1 (192.168.1.104), install and start redis:
yum install redis
vim /etc/redis.conf
bind 192.168.1.104
systemctl start redis; ss -tnl | grep 6379
To verify that filebeat is delivering to redis successfully:
1. Refresh http://192.168.1.103/ in a browser.
2. Connect to redis on the .104 host and keep refreshing the web page to generate new log entries, then check whether the length of the key changes:
root@node1 ~ # redis-cli -h 192.168.1.104
192.168.1.104:6379> KEYS *
1) "filebeat-nginxlog"
192.168.1.104:6379> LLEN filebeat-nginxlog
(integer) 10
192.168.1.104:6379> LINDEX filebeat-nginxlog 9    # shows the last log entry collected
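The manual check above can also be scripted. A minimal sketch, assuming this lab's hosts (nginx on 192.168.1.103, redis on 192.168.1.104) and that curl and redis-cli are available; it requires the services to be running and only illustrates the idea:

```shell
#!/usr/bin/env bash
# Generate a few requests against nginx, then confirm that the
# filebeat-nginxlog list in redis has grown. Adjust the hosts
# for your environment.
NGINX=192.168.1.103
REDIS=192.168.1.104
KEY=filebeat-nginxlog

before=$(redis-cli -h "$REDIS" LLEN "$KEY")
for i in 1 2 3 4 5; do curl -s -o /dev/null "http://$NGINX/"; done
sleep 5    # give filebeat a moment to ship the new entries
after=$(redis-cli -h "$REDIS" LLEN "$KEY")
echo "list length: $before -> $after"
```

If the length does not grow, check the filebeat output.redis section and whether /var/log/nginx/access.log is actually being appended to.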
2. Feed the data collected in redis to logstash-server and filter it there.
yum install logstash-5.1.1-1.noarch
vim /etc/logstash/jvm.options    # defines the minimum and maximum heap size
#-Xms256m
#-Xmx1g
Note: every file under the /etc/logstash/conf.d directory is read, not only those ending in .conf.
vim /etc/logstash/conf.d/test.conf
input {
    redis {
        port      => "6379"
        host      => "192.168.1.104"
        data_type => "list"
        key       => "filebeat-nginxlog"
    }
}
filter {
    grok {
        match => { "message" => "%{NGINXACCESS}" }
    }
}
output {
    elasticsearch {
        hosts => "192.168.1.103"
        index => "logstash-%{+YYYY.MM.dd}"
    }
    stdout {                   # standard output is convenient while testing:
        codec => rubydebug     # newly collected log entries are printed straight to the screen
    }
}
Matching nginx log entries: append the following to the end of the
/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.2/patterns/grok-patterns file:
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
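As an offline sanity check of the fields the NGINXACCESS pattern is meant to extract (clientip, verb, request, response, bytes), the same match can be approximated with a plain bash regex against one combined-format access log line. This is only an illustration; the real pipeline uses logstash's grok:

```shell
#!/usr/bin/env bash
# Simplified stand-in for the NGINXACCESS grok pattern: extract the main
# fields from a sample nginx combined-format log line using bash's ERE match.
line='192.168.1.100 - - [03/Jan/2017:11:09:43 +0800] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0" "-"'

re='^([0-9.]+) - ([^ ]+) \[([^]]+)\] "([A-Z]+) ([^ ]+) HTTP/([0-9.]+)" ([0-9]+) ([0-9-]+)'
if [[ $line =~ $re ]]; then
    clientip=${BASH_REMATCH[1]}      # %{IPORHOST:clientip}
    remote_user=${BASH_REMATCH[2]}   # %{NOTSPACE:remote_user}
    timestamp=${BASH_REMATCH[3]}     # %{HTTPDATE:timestamp}
    verb=${BASH_REMATCH[4]}          # %{WORD:verb}
    request=${BASH_REMATCH[5]}       # %{NOTSPACE:request}
    response=${BASH_REMATCH[7]}      # %{NUMBER:response}
    bytes=${BASH_REMATCH[8]}         # %{NUMBER:bytes}
    echo "clientip=$clientip verb=$verb request=$request response=$response bytes=$bytes"
else
    echo "no match" >&2
    exit 1
fi
```

If grok later tags events with _grokparsefailure, comparing the raw message against a simplified match like this helps narrow down which field breaks the pattern.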
Verification: since the ELS cluster is not set up yet at this point, first comment out the elasticsearch block in output, then run:
logstash -t -f /etc/logstash/conf.d/test.conf    # check whether the syntax is valid
Configuration OK
logstash -f /etc/logstash/conf.d/test.conf       # start the input, filter and output plugins (runs in the foreground, not as a daemon)
11:09:27.901 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
11:09:27.977 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
============ Now refresh the page in the browser; after a few seconds logstash prints output such as:
{
    "@timestamp" => 2017-01-03T03:09:51.175Z,
        "offset" => 36486,
          "beat" => {
        "hostname" => "node3",
            "name" => "node3",
         "version" => "5.1.1"
    },
    "input_type" => "log",
      "@version" => "1",
        "source" => "/var/log/nginx/access.log",
       "message" => "192.168.1.100 - - [03/Jan/2017:11:09:43 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0\" \"-\"",
          "type" => "log",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
(A _grokparsefailure tag means the grok filter did not match the message; check that the NGINXACCESS pattern was appended to the grok-patterns file correctly.)
3. Configure the elastic cluster:
# Time must be synchronized across the nodes
java-1.8.0-openjdk-src.x86_64
java-1.8.0-openjdk-devel-1.8.0.65-3.b17.el7.x86_64
# java -version    # confirm the version is 1.8 or later
yum install ./elasticsearch-5.1.1.rpm    # latest version at the time of writing
mkdir -pv /els/{data,logs}
chown -R elasticsearch.elasticsearch /els/*
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels    # nodes decide whether they belong to the same cluster by comparing the cluster name
node.name: node1       # node name (unique per node); must be resolvable on the cluster network
path.data: /els/data
path.logs: /els/logs
network.host: 0.0.0.0  # if the host has several NICs it is better to set a fixed IP
discovery.zen.ping.unicast.hosts: ["node1IP", "node2IP", "node3IP"]
discovery.zen.minimum_master_nodes: 2   # more than half the number of nodes
vim /etc/sysconfig/elasticsearch   # maximum and minimum memory usage can be set here
# e.g. ES_JAVA_OPTS="-Xms512m -Xmx512m"  (both minimum and maximum set to 512M)
systemctl daemon-reload
systemctl start elasticsearch.service   # make sure ports 9200 and 9300 are open
# Apply the same configuration on the .102 and .103 nodes
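The minimum_master_nodes value of 2 follows the usual split-brain quorum formula, floor(master-eligible nodes / 2) + 1. A quick sketch of the arithmetic:

```shell
#!/usr/bin/env bash
# Quorum for a cluster of master-eligible nodes: floor(n/2) + 1.
# For this lab's 3-node cluster the result is 2, matching the
# discovery.zen.minimum_master_nodes setting above.
nodes=3
quorum=$(( nodes / 2 + 1 ))
echo "minimum_master_nodes for $nodes nodes: $quorum"
```

With 3 nodes, a quorum of 2 means the cluster stays writable if one node fails, but two isolated partitions can never both elect a master.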
Verifying the cluster after configuration: the elasticsearch-head-latest.zip plugin can also be installed to manage the cluster.
root@node3 ~ # curl -XGET 'http://192.168.1.103:9200/_cat/nodes?v'
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.102           40          93   0    0.06    0.40     0.43 mdi       *      node2
192.168.1.103           38          93   2    0.13    0.75     0.72 mdi       -      node3
root@node3 ~ # curl -XGET 'http://192.168.1.103:9200/_cluster/health?pretty'
{
  "cluster_name" : "myels",
  "status" : "green",
  ...
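When scripting the deployment it can be useful to wait until the cluster reports green before moving on. A sketch, assuming a node is reachable at 192.168.1.103:9200 and that the cluster will eventually converge:

```shell
#!/usr/bin/env bash
# Poll the cluster health API until the status field reports green.
# Without ?pretty the response is compact JSON, so a plain grep works.
until curl -s 'http://192.168.1.103:9200/_cluster/health' | grep -q '"status":"green"'; do
    sleep 2
done
echo "cluster is green"
```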
4. Configure kibana
yum install kibana
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.103:9200"   # point at one of the ELS nodes; localhost only works if elasticsearch runs on this host
kibana.index: ".kibana"
systemctl start kibana
Open http://192.168.1.104:5601 in a browser.
5. End-to-end test
Uncomment the elasticsearch output from step 2 and run logstash -f /etc/logstash/conf.d/test.conf on the logstash host.
Open http://192.168.1.104:5601 in a browser; once the kibana interface appears, configure the index pattern (the key), after which you can type the queries you need to search the logs.
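Before searching in kibana it is worth confirming that logstash actually created the daily index in the cluster. A sketch, assuming an ELS node is reachable at 192.168.1.103:9200:

```shell
# List indices and look for the logstash-YYYY.MM.dd index written by the
# elasticsearch output; an entry should appear once events start flowing.
curl -s 'http://192.168.1.103:9200/_cat/indices?v' | grep logstash-
```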