Since Docker officially recommends it, this walkthrough uses fluentd as the log collection component.

Here is how to use it:

         



Install Elasticsearch


1. Initialize the environment



[root@salt-node1 src]# vim /etc/sysctl.conf
vm.max_map_count = 290000

[root@salt-node1 src]# vim /etc/security/limits.conf
*        hard  nproc           20000
*        soft  nproc           20000
*        soft  nofile          290000
*        hard  nofile          290000
 
[root@salt-node1 src]# cat /etc/security/limits.d/20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
 
*          soft    nproc    290000
root       soft   nproc     unlimited
 
[root@salt-node1 src]# sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 290000
[root@salt-node1 src]# su - java
Last login: Sat Mar 18 00:40:54 CST 2017 on pts/1

Check that the settings took effect
[java@salt-node1 ~]$ ulimit -n
290000
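
If you want to double-check everything in one go before moving on, a quick sanity check of the kernel and per-user limits set above might look like this (expected values are simply the ones configured earlier):

sysctl vm.max_map_count          # should print vm.max_map_count = 290000
su - java -c 'ulimit -n'         # open files for the java user, should be 290000
su - java -c 'ulimit -u'         # max user processes for the java user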




2. Download the Elasticsearch package and install it



[root@salt-node1 src]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.tar.gz
[root@salt-node1 src]# tar zxf elasticsearch-5.2.2.tar.gz && mv elasticsearch-5.2.2 /usr/local/elasticsearch
[root@salt-node1 src]# vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: docker_log
network.host: 0.0.0.0
[root@salt-node1 local]# useradd java
[root@salt-node1 local]# chown -R java:java elasticsearch
[java@salt-node1 ~]$ nohup  /usr/local/elasticsearch/bin/elasticsearch &
[java@salt-node1 elasticsearch]$ curl localhost:9200
{
  "name" : "t3_jC3D",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "ylQzD_hERziK29YF6O5bYg",
  "version" : {
    "number" : "5.2.2",
    "build_hash" : "f9d9b74",
    "build_date" : "2017-02-24T17:26:45.835Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.1"
  },
  "tagline" : "You Know, for Search"
}
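
Before wiring fluentd to it, it may also be worth confirming the node reports a healthy state, not just that it answers on 9200; a minimal check against the standard cluster health API:

[java@salt-node1 elasticsearch]$ curl 'localhost:9200/_cluster/health?pretty'
# "status" should be green (it may drop to yellow later, once a single-node index with replicas exists)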



Install fluentd

  1. Download and install
    The configuration below listens for logs on port 24224 on this host and forwards them to Elasticsearch on the 192.168.198.116 node, with an index named docker_YYYY_mm.
rpm -ivh http://packages.treasuredata.com.s3.amazonaws.com/2/redhat/7/x86_64/td-agent-2.1.3-0.x86_64.rpm
 
[root@salt-node1 elasticsearch]# chmod 777 /var/log/messages
[root@salt-node1 elasticsearch]# cat /etc/td-agent/td-agent.conf
<source>
  type forward
  bind 0.0.0.0
  port 24224
  linger_timeout 0
  log_level info
</source>

# The filter below takes every event whose tag starts with docker. and parses its log field separately; those log fields are all JSON
<filter docker.**>
  @type parser
  format json
  key_name log
</filter>


<match docker.**>
  type elasticsearch
  host 192.168.198.116
  port 9200
  logstash_format true
  logstash_prefix docker
  logstash_dateformat %Y_%m
  index_name docker_log
  flush_interval 5s
  type_name docker
  include_tag_key true
</match>
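
One thing to note: the elasticsearch output used in the <match> block above ships as a separate plugin, not with td-agent itself. If td-agent logs an unknown output type error, install the plugin first; the path below assumes the stock td-agent 2.x layout:

[root@salt-node1 elasticsearch]# /usr/sbin/td-agent-gem install fluent-plugin-elasticsearch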


 

2. Start the fluentd service

systemctl start td-agent
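
Before pointing any containers at it, you can check that td-agent came up cleanly and is accepting events on 24224. The fluent-cat path below assumes the stock td-agent 2.x install, and the test record deliberately wraps JSON inside the log field so the filter above has something to parse:

systemctl status td-agent                          # should be active (running)
tail /var/log/td-agent/td-agent.log                # watch for config or connection errors
echo '{"log":"{\"test\":\"hello\"}"}' | /opt/td-agent/embedded/bin/fluent-cat docker.test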


Start the container

1. First, change the nginx log format to JSON


 

log_format  main  '{"remote_addr": "$remote_addr", "remote_user": "$remote_user","time_local": "$time_local","request": "$request", '
                      '"status": "$status", "body_bytes_sent":"$body_bytes_sent","http_referer": "$http_referer" ,'
                      '"http_user_agent": "$http_user_agent", "http_x_forwarded_for":"$http_x_forwarded_for"}';



2. Start the container

Use Docker's built-in logging driver to ship the container's logs to fluentd.


docker run -dit -p 8080:80 \
             -v /test/nginx.conf:/etc/nginx/nginx.conf \
             --log-driver=fluentd \
             --log-opt fluentd-address=localhost:24224 \
             --log-opt tag="docker.{{.Name}}" \
             nginx
# Test that nginx in the container responds normally
[root@test-node3 ~]# curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
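
After a few requests like this, it is worth confirming the records really landed in Elasticsearch. With logstash_prefix docker and logstash_dateformat %Y_%m from the td-agent config, the index name should come out as docker-YYYY_MM; a rough check against the ES node used above:

curl '192.168.198.116:9200/_cat/indices?v'                   # a docker-YYYY_MM index should be listed
curl '192.168.198.116:9200/docker-*/_search?pretty&size=1'   # fields such as remote_addr and status should be parsed out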


Install Kibana

Download Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-5.2.2-linux-x86_64.tar.gz


[root@salt-node1 src]# tar zxf kibana-5.2.2-linux-x86_64.tar.gz 
[root@salt-node1 src]# mv kibana-5.2.2-linux-x86_64 /usr/local/kibana
[root@salt-node1 src]# cd /usr/local/kibana/
[root@salt-node1 kibana]# vim config/kibana.yml 
server.host: "192.168.198.116"
elasticsearch.url: "http://localhost:9200"



Start Kibana



[root@salt-node1 kibana]# ./bin/kibana
  log   [17:06:42.720] [info][status][plugin:kibana@5.2.2] Status changed from uninitialized to green - Ready
  log   [17:06:42.831] [info][status][plugin:elasticsearch@5.2.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [17:06:42.877] [info][status][plugin:console@5.2.2] Status changed from uninitialized to green - Ready
  log   [17:06:42.970] [info][status][plugin:elasticsearch@5.2.2] Status changed from yellow to green - Kibana index ready
  log   [17:06:43.551] [info][status][plugin:timelion@5.2.2] Status changed from uninitialized to green - Ready
  log   [17:06:43.559] [info][listening] Server running at http://192.168.198.116:5601
  log   [17:06:43.562] [info][status][ui settings] Status changed from uninitialized to green - Ready
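
Running ./bin/kibana in the foreground like this ties it to the terminal. For anything longer-lived you would background it the same way Elasticsearch was started earlier (nohup is just a quick sketch here; a systemd unit or supervisor would be cleaner):

[root@salt-node1 kibana]# nohup ./bin/kibana > /var/log/kibana.log 2>&1 &
[root@salt-node1 kibana]# curl -sI http://192.168.198.116:5601 | head -1   # any 2xx/3xx response means the UI is up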




Now configure index searching in the Kibana web UI.

Add an index pattern



Now you can go and take a look at your logs:

  1. Open Discover
  2. Set the time range
  3. Show the logs from the last 15 minutes
  4. Start searching





You can now search your logs through Kibana. There is one problem with this architecture, though: if Elasticsearch restarts, some logs will be lost while it is down. To cope with that you need a reliable message broker in front of it, and I recommend Kafka.

Official site: http://kafka.apache.org/
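
The Kafka setup itself is out of scope here, but on the fluentd side it would roughly mean replacing the elasticsearch match with a Kafka output via fluent-plugin-kafka. The broker address, topic name and parameters below are assumptions for illustration only; check the plugin documentation for your version:

# /usr/sbin/td-agent-gem install fluent-plugin-kafka
<match docker.**>
  @type kafka_buffered
  brokers 192.168.198.116:9092      # assumed Kafka broker address
  default_topic docker_log          # assumed topic name
  output_data_type json
  flush_interval 5s
</match>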


Here are some other guides I have written:

 0. Installing Docker

  1. Accelerate Docker with Alibaba Cloud
  2. Run your first image instance - a Docker container
  3. Docker command parameters
  4. Using Docker registries and committing images
  5. Building Docker images and parameter details
  6. Importing and exporting Docker images
  7. Docker networking
  8. Docker networking test examples
  9. Docker data volumes
  10. The Docker API and Python SDK
  11. Single-host multi-container orchestration - compose
  12. Managing a docker-swarm cluster - swarm
  13. docker-swarm cluster management in detail
  14. Removing nodes and services from a docker-swarm cluster
  15. Self-hosted Docker registry - Registry


Tips and recommendations

  1. Web UI tools for managing a Docker cluster
  2. Stop installing an SSH server in your Docker containers
  3. Quickly create a ZooKeeper cluster with compose



Reposted from: https://blog.51cto.com/nginxs/1922519