1、Install Elasticsearch
1.) Disable the firewall and SELinux
service iptables stop
chkconfig iptables off
chkconfig iptables --list
vim /etc/sysconfig/selinux
SELINUX=disabled
setenforce 0
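A quick way to confirm SELinux is actually off (getenforce is part of the standard SELinux tooling):
getenforce   # Permissive right after setenforce 0, Disabled after a reboot with the config change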

2.) Configure the JDK environment
vim /etc/profile.d/java.sh
export JAVA_HOME=/home/admin/jdk1.8.0_172/
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
source /etc/profile.d/java.sh
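A quick check that the new JDK is picked up:
java -version   # should report java version "1.8.0_172"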

3.) Install Elasticsearch 6.x
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.tar.gz
tar -zxvf elasticsearch-6.2.4.tar.gz -C /home/admin/project/elk
cd /home/admin/project/elk/elasticsearch-6.2.4
vim config/elasticsearch.yml
cluster.name: elasticsearch
node.name: node-1
network.host: 10.2.151.203
http.port: 9200
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
http.cors.enabled: true
http.cors.allow-origin: "*"
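Optionally, the JVM heap can be adjusted to fit the host's memory in config/jvm.options (Elasticsearch 6.x ships with a 1g default; 512m below is only an example value, not something this guide requires):
vim config/jvm.options
-Xms512m
-Xmx512m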

4.) Start Elasticsearch

useradd elk

chown -R elk.elk /home/admin/project/elk/elasticsearch-6.2.4

./bin/elasticsearch -d

netstat -lntp # check that ports 9200 and 9300 are listening

curl 10.2.151.203:9200

5.) Common startup errors
uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
Cause: Elasticsearch cannot be started as the root user.
Fix: switch to a non-root user (e.g. the elk user created above) and start it again.

unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable:
Cause: this is only a warning, caused by an older Linux kernel.
Fix: the warning does not affect operation and can be ignored.

ERROR: bootstrap checks failed
memory locking requested for elasticsearch process but memory is not locked
Cause: memory locking failed.
Fix: switch to root and edit the limits.conf configuration file:
vim /etc/security/limits.conf

* hard nproc 65536
* soft nproc 65536
* hard nofile 65536
* soft nofile 65536
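
The new limits only apply to new login sessions; after logging in again as the elk user you can verify them, for example:
ulimit -n   # max open files, should now be 65536
ulimit -u   # max user processes, should now be 65536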

max number of threads [1024] for user [es] is too low, increase to at least [2048]
Cause: the per-user limit on the number of processes/threads is too low, so new native threads cannot be created.
Fix: switch to root, go to the limits.d directory, and edit the 90-nproc.conf configuration file:
vim /etc/security/limits.d/90-nproc.conf

* soft nofile 65536
* soft nproc 65536

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Cause: the maximum number of virtual memory areas is too small.
Fix: switch to root and edit the sysctl.conf configuration file:
vim /etc/sysctl.conf
vm.max_map_count=655360
sysctl -p

system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Cause: CentOS 6 does not support SecComp.
Fix: in elasticsearch.yml set bootstrap.system_call_filter to false, placed below the memory setting:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

2、Install the elasticsearch-head plugin
Provides a web UI for viewing Elasticsearch cluster status information.

1.) Download and install Node.js
wget https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.xz
tar -xJvf node-v8.11.3-linux-x64.tar.xz -C /home/admin/project/elk/
cd /home/admin/project/elk/
mv node-v8.11.3-linux-x64/ node-v8.11.3
# configure Node.js environment variables
vim /etc/profile.d/node.sh
export NODE_HOME=/home/admin/project/elk/node-v8.11.3
export PATH=$NODE_HOME/bin:$PATH
export NODE_PATH=$NODE_HOME/lib/node_modules
source /etc/profile.d/node.sh
# verify that Node.js works
[admin@localhost node-v8.11.3]$ node -v
v8.11.3
[admin@localhost node-v8.11.3]$ npm -v
5.6.0

2.) Install grunt
npm config set registry https://registry.npm.taobao.org
vim ~/.npmrc
registry=https://registry.npm.taobao.org
strict-ssl = false
npm install -g grunt-cli
# add grunt to the system path
ln -s /home/admin/project/elk/node-v8.11.3/lib/node_modules/grunt-cli/bin/grunt /usr/bin/grunt

3.) Download the head package
wget https://codeload.github.com/mobz/elasticsearch-head/zip/master -O elasticsearch-head-master.zip
unzip elasticsearch-head-master.zip
cd elasticsearch-head-master
npm install
# If the install is slow or fails, install cnpm and use the Taobao mirror:
npm install --ignore-scripts -g cnpm --registry=https://registry.npm.taobao.org

4.) Modify the Elasticsearch configuration file
vi ./config/elasticsearch.yml
# add these parameters so the head plugin can access ES
http.cors.enabled: true
http.cors.allow-origin: "*"

5.) Modify the Gruntfile.js configuration
vim Gruntfile.js
# add the hostname setting above the port: 9100 line
hostname: "0.0.0.0",

6.) Modify the _site/app.js configuration
vim _site/app.js
# replace localhost with the server's IP address
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.2.151.203:9200";

7.) Start grunt
grunt server
# If it starts successfully, you can run it in the background instead so the shell stays usable (to stop it, you must kill the process yourself)
grunt server &
nohup grunt server & exit # start in the background
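
If grunt started successfully, the head UI should answer on port 9100 (a quick check; replace the IP with your own host):
curl -I http://10.2.151.203:9100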

# Startup reports that a module is missing

Local Npm module "grunt-contrib-jasmine" not found. Is it installed?
npm install grunt-contrib-jasmine # install the missing module


3、Install Kibana

1.) Download and install

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-linux-x86_64.tar.gz
tar -zxvf kibana-6.2.4-linux-x86_64.tar.gz -C /home/admin/project/elk/

cd /home/admin/project/elk/kibana-6.2.4-linux-x86_64

2.) Modify the configuration

vim config/kibana.yml

server.port: 5601

server.host: "IP"

elasticsearch.url: http://IP:9200

3.) Start Kibana

./bin/kibana
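
Kibana can also be run in the background like the other services in this guide, and you can then confirm it is listening on port 5601:
nohup ./bin/kibana &
netstat -lntp | grep 5601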


4、Install Logstash
1.) Download and install
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.tar.gz
tar -zxvf logstash-6.2.4.tar.gz -C /home/admin/project/elk/
cd /home/admin/project/elk/logstash-6.2.4
2.) Create a pipeline configuration

vim config/test.conf
input
{
  kafka
  {
    bootstrap_servers => "10.7.1.112:9092"
    topics => "nethospital_2"
    codec => "json"
  }
}
output
{
  if [fields][tag] == "nethospital_2"
  {
    elasticsearch
    {
      hosts => ["10.7.1.111:9200"]
      index => "nethospital_2-%{+YYYY-MM-dd}"
      codec => "json"
    }
  }
}
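
Before starting Logstash, the pipeline syntax can optionally be validated (the --config.test_and_exit flag is available in Logstash 6.x):
./bin/logstash -f config/test.conf --config.test_and_exit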

3.) Start Logstash
nohup ./bin/logstash -f config/test.conf & # -f specifies the config file
5、Install Kafka
1.) Download and install
wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz
wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
tar -zxvf kafka_2.11-1.0.0.tgz -C /home/admin/project/elk/
tar -zxvf zookeeper-3.4.12.tar.gz -C /home/admin/project/elk/
cd /home/admin/project/elk/kafka_2.11-1.0.0/

2.) Modify the ZooKeeper parameters and start ZooKeeper
vim config/zookeeper.properties
dataDir=/tmp/zookeeper/data # data persistence path
clientPort=2181 # client connection port
maxClientCnxns=100 # maximum number of client connections
dataLogDir=/tmp/zookeeper/logs # log directory
tickTime=2000 # ZooKeeper heartbeat interval, in milliseconds
initLimit=10 # time (in ticks) allowed for followers to connect and sync during startup/leader election
# start ZooKeeper
./bin/zookeeper-server-start.sh config/zookeeper.properties
# start in the background
nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties &
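
To confirm ZooKeeper is up, check that it is listening on port 2181; the built-in ruok four-letter command is another option if nc/netcat is installed:
netstat -lntp | grep 2181
echo ruok | nc 127.0.0.1 2181   # should answer imok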

3.) Modify the Kafka parameters and start Kafka

vim config/server.properties
broker.id=0
port=9092
host.name=10.2.151.203
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/logs/kafka
num.partitions=2
num.recovery.threads.per.data.dir=1
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000

# start Kafka

./bin/kafka-server-start.sh config/server.properties

# start in the background

nohup bin/kafka-server-start.sh config/server.properties &

4.) Test Kafka

# create a topic (test)

bin/kafka-topics.sh --create --zookeeper 10.2.151.203:2181 --replication-factor 1 --partitions 1 --topic test

# list topics

bin/kafka-topics.sh --list --zookeeper 10.2.151.203:2181

# start a console producer for testing

bin/kafka-console-producer.sh --broker-list 10.2.151.203:9092 --topic test
# start a console consumer

bin/kafka-console-consumer.sh --zookeeper 10.2.151.203:2181 --topic test --from-beginning

6、Install Filebeat
1.) Download and install
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-linux-x86_64.tar.gz
tar -zxvf filebeat-6.2.4-linux-x86_64.tar.gz -C /home/admin/project/elk
cd /home/admin/project/elk/filebeat-6.2.4-linux-x86_64

2.) Configure Filebeat
vim filebeat.yml

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/admin/project/other_project/nh-interface/nh-interface.log
  fields:
    tag: nh-interface
  multiline:
    pattern: '^[0-9]{4}-[0-9]{2}.*'
    negate: true
    match: after

output.kafka:
  enabled: true
  hosts: ["AppElk1:9092","AppElk2:9092","AppElk3:9092"]
  topic: 'hospital'
  compression: gzip
  max_message_bytes: 100000000
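
Filebeat 6.x also ships test subcommands that can serve as an optional sanity check before starting:
./filebeat test config -c filebeat.yml    # validate filebeat.yml syntax
./filebeat test output -c filebeat.yml    # check connectivity to the Kafka hosts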

3.) Start Filebeat
nohup ./filebeat -e -c filebeat.yml &
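
To verify that log lines are actually reaching Kafka, consume the topic configured above (topic 'hospital' per the output.kafka section; run this from the Kafka directory and adjust the address to your environment):
bin/kafka-console-consumer.sh --zookeeper 10.2.151.203:2181 --topic hospital --from-beginning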

Check the cluster status
curl -XGET 'http://10.2.151.203:9200/_cat/nodes'
curl -XGET 'http://10.2.151.203:9200/_cat/nodes?v'
curl -XGET 'http://10.2.151.203:9200/_cluster/state/nodes?pretty'

Check the cluster master
curl -XGET 'http://10.2.151.203:9200/_cluster/state/master_node?pretty'
or: curl -XGET 'http://10.2.151.203:9200/_cat/master?v'

Check the cluster health
curl -XGET 'http://10.2.151.203:9200/_cluster/health?pretty'

curl -XGET 'http://10.2.151.203:9200/_cat/health?v'

7、Install the cerebro plugin
cerebro is the replacement for kopf on ES 5+, providing a web UI for managing and monitoring Elasticsearch cluster status.

1.) Download and install

wget https://github.com/lmenezes/cerebro/releases/download/v0.8.1/cerebro-0.8.1.tgz
tar -zxvf cerebro-0.8.1.tgz -C /home/admin/project/elk
cd /home/admin/project/elk/cerebro-0.8.1
vim conf/application.conf
hosts = [
  {
    host = "http://10.2.151.203:9200"
    name = "my-elk"
  },
]

2.) Start / access

nohup ./bin/cerebro & # run in the background

http://10.2.151.203:9000

8、Install the bigdesk plugin
bigdesk provides statistical analysis and charts of Elasticsearch cluster status information.
1.) Download and install
wget https://codeload.github.com/hlstudio/bigdesk/zip/master -O bigdesk-master.zip
unzip bigdesk-master.zip
mv bigdesk-master /home/admin/project/elk/elasticsearch-6.2.4/plugins/
cd /home/admin/project/elk/elasticsearch-6.2.4/plugins/bigdesk-master/_site

2.) Use python -m SimpleHTTPServer to quickly serve the page over HTTP
Serve it on port 8000:
nohup python -m SimpleHTTPServer 8000 & # run in the background

http://10.2.151.203:8000/

Reposted from: https://blog.51cto.com/11291014/2298694