(I assume the Java environment is already set up, so I won't cover it here. The address 172.30.194.180 that appears below is my internal test server's IP; replace it with your own machine's IP or domain name.)

1. Installing logstash:

wget https://download.elastic.co/logstash/logstash/logstash-2.2.0.tar.gz 
tar zxvf logstash-2.2.0.tar.gz


(We use version 2.2.0 for both logstash and the elasticsearch install that follows.)

Go into the logstash root directory:

Start logstash:

bin/logstash -e 'input { stdin { } } output { stdout {} }'


Then type some text on the command line:

hello
2017-04-21T02:32:57.582Z 172_30_194_180 hello



Seeing your input echoed back in the log format above means logstash is running correctly.

Next, let's try a different output format:

bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'


Type some more text:


jetty
{
       "message" => "jetty",
      "@version" => "1",
    "@timestamp" => "2017-04-21T02:36:01.834Z",
          "host" => "172_30_194_180"
}



The output format is now different. Next we install elasticsearch.

2. Installing elasticsearch:

wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.2.0/elasticsearch-2.2.0.zip
unzip elasticsearch-2.2.0.zip


After downloading, start it from the elasticsearch root directory (hereafter simply "es"):

./bin/elasticsearch




This fails with an error, because es refuses to start with root privileges. Add a dedicated user:

# add a user: elasticsearch
useradd elasticsearch
# set a password for elasticsearch (you will be prompted to enter it twice)
passwd elasticsearch
# create a group: es
groupadd es
# add the user elasticsearch to the es group
# (note the argument order: usermod takes the group list first, then the user name)
usermod -G es elasticsearch
# from the elasticsearch root directory, hand the files over to the new user.
# -R recurses through all subdirectories; * matches every file
chown -R elasticsearch:es *
# switch to the elasticsearch user
su elasticsearch




If you skip granting the elasticsearch user ownership of these directories, you will hit the error below.

java.io.FileNotFoundException: /home/es/elasticsearch-2.2.0/logs/elasticsearch.log (Permission denied)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:142)
        at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
        at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:223)
        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
        at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
        at org.apache.log4j.PropertyConfigurator.configure(PropertyConfigurator.java:440)
        at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:128)
        at org.elasticsearch.bootstrap.Bootstrap.setupLogging(Bootstrap.java:204)
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:258)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
log4j:ERROR Either File or DatePattern options are not set for appender [file].
log4j:ERROR setFile(null,true) call failed.




Edit the config file:

$ vi config/elasticsearch.yml
# cluster name
cluster.name: sojson-application
# node name
node.name: node-1
# bind IP and port
network.host: 172.30.194.180
http.port: 9200




Install the head plugin.

From the bin directory:
$ cd elasticsearch/bin
$ ./plugin install mobz/elasticsearch-head
Then, back in the es root directory, start es:
./bin/elasticsearch

Then open http://172.30.194.180:9200/_plugin/head/ in a browser.

If you see a page like the one below, everything is working.


(screenshot: the elasticsearch-head plugin overview page)



3. Configuring logback in the Java code

First, add the maven dependency:

<dependency> 
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.7</version>
</dependency>




Also make sure the following dependencies, which it relies on, are already in place:
<!-- jackson -->
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.6.3</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.5</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>1.7.12</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>1.7.12</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.0.13</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.0.13</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-access</artifactId>
    <version>1.0.13</version>
</dependency>
<dependency>
    <groupId>org.logback-extensions</groupId>
    <artifactId>logback-ext-spring</artifactId>
    <version>0.1.2</version>
    <scope>compile</scope>
</dependency>






Add an appender to the logback.xml file:

<!--logstash-->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>172.30.194.180:9250</destination>
        <!-- an encoder is required; several implementations are available -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>






Attach it to the loggers you want to monitor:

<!-- logstash-->
    <logger name="com.qccr.nebula.biz.facade.FileUploadFacadeImpl" level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </logger>
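
For reference, any class whose slf4j logger name matches that <logger> entry will have its INFO-and-above events shipped to logstash. A minimal sketch of the call site, assuming the class simply mirrors the logger name configured above (substitute your own classes):

package com.qccr.nebula.biz.facade;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FileUploadFacadeImpl {

    // the logger name derived from this class matches the <logger> entry above,
    // so its events are routed to the LOGSTASH appender
    private static final Logger LOG = LoggerFactory.getLogger(FileUploadFacadeImpl.class);

    public void upload(String fileName) {
        // serialized to a JSON event by LogstashEncoder and pushed over TCP
        // to 172.30.194.180:9250
        LOG.info("uploading {}", fileName);
    }
}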






That is all that changes in the project. Next, under bin in the logstash installation directory from earlier, create a config file:

vim bin/logstash.conf




input {
  tcp {
    ## host:port is the destination from the appender above; here logstash acts
    ## as the server, listening on port 9250 for the messages logback sends
    host => "172.30.194.180"
    port => 9250
    # run in server mode
    mode => "server"
    tags => ["tags"]
    ## decode each incoming line as JSON
    codec => json_lines
  }
}
output {
  stdout { codec => rubydebug }
  # the es address
  elasticsearch { hosts => "172.30.194.180:9200" }
}
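
If you want to check this TCP input on its own before touching the application, you can hand it a single JSON line by hand. A minimal sketch, assuming logstash is already running with the config above (TcpInputProbe is a made-up name, and the fields only imitate what LogstashEncoder emits):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TcpInputProbe {
    public static void main(String[] args) throws Exception {
        // json_lines expects one JSON object per newline-terminated line
        String event = "{\"message\":\"probe\",\"@version\":\"1\"}\n";
        try (Socket socket = new Socket("172.30.194.180", 9250)) {
            OutputStream out = socket.getOutputStream();
            out.write(event.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}

If everything is wired up, the event shows up on logstash's stdout via the rubydebug codec and is indexed into es.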






After saving, start logstash from bin with this config file:

./logstash -f logstash.conf






You probably stopped es earlier, so start it again from the es root directory:

./bin/elasticsearch






4. Testing

Now write a test case in Java. Ours ran inside a spring project; write yours however you like, but make sure the logback configuration above is actually picked up.

(screenshots: the spring test case and its console output)

The test only needs to print "imptest" through the logger. Run it, and once it finishes without errors, open the es head plugin: the log entry is already there. Done. A minimal sketch of such a test follows.
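
(A rough, self-contained equivalent of the test in the screenshots, as a plain main method rather than a spring test; LogstashSmokeTest is a made-up name, and the logger name must match the <logger> configured in logback.xml:)

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogstashSmokeTest {

    // must match (or fall under) the <logger> name in logback.xml,
    // otherwise the event never reaches the LOGSTASH appender
    private static final Logger LOG =
            LoggerFactory.getLogger("com.qccr.nebula.biz.facade.FileUploadFacadeImpl");

    public static void main(String[] args) throws InterruptedException {
        LOG.info("imptest");
        // LogstashTcpSocketAppender writes asynchronously; give it a moment
        // to flush before the JVM exits
        Thread.sleep(2000);
    }
}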




(If you didn't install the head plugin, you can check the es results with the command below:)

curl http://172.30.194.180:9200/_search?pretty






References:


http://www.sojson.com/blog/81.html

http://www.jianshu.com/p/db2196991a00