First, a note on compatibility: before Elasticsearch 5.0, Elasticsearch and Kibana releases with different version numbers had compatibility problems with each other; starting with 5.0 the version numbers were unified, and releases with the same version number are compatible.

I. Node Planning

| host | elasticsearch | kibana |
| --- | --- | --- |
| hadoop01 (192.168.174.20) | installed | installed |
| hadoop02 (192.168.174.21) | installed | not installed |
| hadoop03 (192.168.174.22) | installed | not installed |
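If hostname resolution goes through /etc/hosts rather than DNS, a mapping along these lines would be present on every machine (a sketch; the hostnames hadoop02 and hadoop03 are inferred from the planning table):

```
# /etc/hosts on every node (IPs from the table above)
192.168.174.20  hadoop01
192.168.174.21  hadoop02
192.168.174.22  hadoop03
```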

 

II. Installing Elasticsearch

First install and configure Elasticsearch on node hadoop01; once that is done, copy it to the other two machines.

1. Download one of the past Elasticsearch releases, elasticsearch 2.4.6:

Past releases can be downloaded from https://www.elastic.co/downloads/past-releases

2. After unpacking, edit the config/elasticsearch.yml file

Note that the .yml format is very strict: every line must start in the first column (no leading spaces), and the value must be preceded by a space, e.g. name: value (there must be a space between the ':' and the value).
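For example, using cluster.name to illustrate the rule:

```
# correct: key starts in the first column, one space after the colon
cluster.name: escluster

# incorrect: leading space before the key, no space after the colon
 cluster.name:escluster
```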

The main settings to change are the following:

```
# cluster name
cluster.name: escluster
# name of this node (different on each node)
node.name: node-2
# data and log directories
path.data: /home/hadoop/data/elasticsearch/data
path.logs: /home/hadoop/data/elasticsearch/logs
# IP or hostname of this node (different on each node)
network.host: 192.168.174.22
# HTTP port
http.port: 9200

# --------------------------------- Discovery ----------------------------------
# cluster node discovery settings
# do not use the default multicast discovery; use unicast (point to point) instead
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
# IPs or hostnames of the unicast target nodes
discovery.zen.ping.unicast.hosts: ["192.168.174.20", "192.168.174.21", "192.168.174.22"]
```

3. Copy to the other two machines

Copy the entire configured elasticsearch directory to the other two machines in the cluster with scp, then adjust the configuration file on each of them.
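A minimal sketch of this step, assuming the install directory is /home/hadoop/elasticsearch-2.4.6 (the path is an assumption) and SSH access between the nodes is already set up:

```
# copy the configured directory from hadoop01 to the other two nodes
scp -r /home/hadoop/elasticsearch-2.4.6 hadoop@hadoop02:/home/hadoop/
scp -r /home/hadoop/elasticsearch-2.4.6 hadoop@hadoop03:/home/hadoop/

# then edit config/elasticsearch.yml on each node and change node.name and network.host
```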

4. Install plugins:

Run the following commands on every node:

Run bin/plugin install license --verbose to install the license plugin (note the spelling: license, not licence).

 

Run bin/plugin install marvel-agent --verbose. (Marvel makes it easy to monitor Elasticsearch through Kibana: you can watch the health and performance of your cluster in real time, and also analyze past cluster, index, and node metrics. Marvel consists of two parts: the Marvel agent, installed on every node in your cluster, and the Marvel application, installed in Kibana. The Marvel agent collects and indexes metrics from Elasticsearch, and the data is then displayed through the Marvel dashboards in Kibana.)

Run bin/plugin install mobz/elasticsearch-head (a cluster management, data visualization, and CRUD tool).
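Putting the three installs together (a sketch; the install directory path is an assumption, and bin/plugin list is the standard ES 2.x way to check the result):

```
cd /home/hadoop/elasticsearch-2.4.6   # assumed install directory
bin/plugin install license --verbose
bin/plugin install marvel-agent --verbose
bin/plugin install mobz/elasticsearch-head

# verify what got installed
bin/plugin list
```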

 

5. Start the three nodes with bin/elasticsearch (run this on every node)

To start in the background, add the -d flag:

```
[hadoop@hadoop01 bin]$ ./elasticsearch -d
```
```
[2017-09-28 17:01:49,887][INFO ][marvel.agent.exporter    ] [node-0] skipping exporter [default_local] as it isn't ready yet
[2017-09-28 17:01:59,888][INFO ][marvel.agent.exporter    ] [node-0] skipping exporter [default_local] as it isn't ready yet
[2017-09-28 17:02:09,889][INFO ][marvel.agent.exporter    ] [node-0] skipping exporter [default_local] as it isn't ready yet
[2017-09-28 17:02:10,146][WARN ][discovery                ] [node-0] waited for 30s and no initial state was set by the discovery
[2017-09-28 17:02:10,163][INFO ][http                     ] [node-0] publish_address {192.168.174.20:9200}, bound_addresses {192.168.174.20:9200}
[2017-09-28 17:02:10,163][INFO ][node                     ] [node-0] started
```
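After a background start, you can confirm the process is up, for example (jps requires a JDK; either command works):

```
# check that the Elasticsearch JVM is running
jps | grep Elasticsearch
ps -ef | grep elasticsearch | grep -v grep
```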

Versions 5.x and later may report errors such as:

```
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max number of threads [1024] for user [hadoop] is too low, increase to at least [2048]
[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```

Solutions:

1. max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

  The per-process limit on open files is too small. Check the current limits with these two commands:

ulimit -Hn
ulimit -Sn

  Edit /etc/security/limits.conf and add the following lines; the change takes effect after the user logs out and back in:

*               soft    nofile          65536
*               hard    nofile          65536


 

2. max number of threads [3818] for user [es] is too low, increase to at least [4096]

  Same kind of problem as above: the maximum number of threads is too low. Edit /etc/security/limits.conf and add:

*               soft    nproc           4096
*               hard    nproc           4096


  The current limits can be checked with:

ulimit -Hu
ulimit -Su


 

3. max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

  Edit /etc/sysctl.conf and add vm.max_map_count=262144:

vi /etc/sysctl.conf
sysctl -p

  Run sysctl -p for the change to take effect.
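For example, the setting can be applied immediately and also persisted across reboots (run as root):

```
# apply immediately without rebooting
sysctl -w vm.max_map_count=262144

# persist the setting
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
```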


4. Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elasticsearch/elasticsearch-6.2.2-1/config/jvm.options

  The elasticsearch user does not have permission on that directory; fix the ownership with:

chown -R es:es /usr/local/elasticsearch/

6. Check node and cluster status over the web:

Check the node status on hadoop01:

http://192.168.174.20:9200 

{
  "name" : "node-0",
  "cluster_name" : "escluster",
  "cluster_uuid" : "FQBrEEN3SOiq6z8gNI5tHQ",
  "version" : {
    "number" : "2.4.6",
    "build_hash" : "5376dca9f70f3abef96a77f4bb22720ace8240fd",
    "build_timestamp" : "2017-07-18T12:17:44Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}
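The same information can be fetched from the command line with curl, for example (the _cat/nodes endpoint exists in ES 2.x):

```
# node info from hadoop01
curl http://192.168.174.20:9200

# one line per node in the cluster
curl 'http://192.168.174.20:9200/_cat/nodes?v'
```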

 

Check the cluster status. Note that the cluster is only healthy when the status is green; if it is red or yellow, you need to track down what is wrong.

All nodes in an Elasticsearch cluster are equal, so the following request can be sent to any of the machines (as long as the plugins are installed):

http://192.168.174.20:9200/_cluster/health?pretty 

{
  "cluster_name" : "escluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 4,
  "active_shards" : 8,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
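For a quick check from the shell, the _cat APIs (available in ES 2.x) give a condensed view; any node's address works, e.g.:

```
# one-line cluster health
curl 'http://192.168.174.20:9200/_cat/health?v'

# shard allocation per index
curl 'http://192.168.174.20:9200/_cat/shards?v'
```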

 

You can also look at the cluster through the visual web management UI provided by the head plugin:

http://hadoop01:9200/_plugin/head/

 

7. Startup command options:

bin/elasticsearch -d (start in the background)

bin/elasticsearch -d -p ../pid (write the process id to a pid file)
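With -p, the recorded pid can later be used to stop the node, for example:

```
# start in the background and record the process id
bin/elasticsearch -d -p ../pid

# stop the node using the recorded pid
kill $(cat ../pid)
```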

III. Installing Kibana

1. Download Kibana 4.6.0

2. After unpacking, set the elasticsearch.url option in config/kibana.yml:

elasticsearch.url: http://hadoop01:9200
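A minimal sketch of the relevant part of config/kibana.yml (server.port and server.host are shown for context with typical Kibana 4.x values; they are not part of the original step):

```
# port Kibana listens on
server.port: 5601
# address to bind to (0.0.0.0 accepts connections from other machines)
server.host: "0.0.0.0"
# the Elasticsearch node Kibana connects to
elasticsearch.url: "http://hadoop01:9200"
```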

3. Install plugins

Run bin/kibana plugin --install elasticsearch/marvel/latest

After the plugin has been installed successfully, it can be found under the installedPlugins directory.

4. Start Kibana:

Run bin/kibana from the Kibana directory.

5. Use Kibana to view the detailed status of every node in the Elasticsearch cluster and the index data: http://192.168.174.20:5601

If Marvel reports "no data ...", the node clocks may be out of sync; synchronize them with ntpdate and check again.
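For example, assuming the nodes have outbound network access, the clocks can be synchronized against a public NTP pool (run as root on every node; the server choice is only an example):

```
# one-off time sync
ntpdate pool.ntp.org
```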