Error 1

[2019-01-15T12:36:59,779][ERROR][o.e.b.Bootstrap          ] Exception
java.lang.IllegalStateException: failed to obtain node locks, tried [[/mnt/elasticsearch/data/my-application]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

Solution: add the following at the end of elasticsearch.yml:
node.max_local_storage_nodes: 2
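
Before (or in addition to) raising the limit, it may be worth confirming that the lock is not simply held by a leftover Elasticsearch process, and that the data path really is writable. A quick check, using the data path from the error message above:

# is another Elasticsearch process already running on this host?
ps -ef | grep -v grep | grep elasticsearch

# is the data path writable by the user Elasticsearch runs as?
ls -ld /mnt/elasticsearch/data/my-application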

Error 2

[2019-03-14T19:33:31,092][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[2019-03-14T19:33:34,819][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}

Solution: 1. Call the ES service with curl http://192.168.0.166:9200; the response is:

{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

This is an authentication failure (HTTP 401). 2. Edit logstash.conf and add the following inside the output section:

user => 'elastic'
password => 'changeme'
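
For context, a minimal sketch of the whole output block with the credentials in place (the hosts value and the elastic/changeme credentials are just the examples used in this article; adjust to your environment):

output {
  elasticsearch {
    hosts    => ["http://192.168.0.166:9200"]
    user     => "elastic"
    password => "changeme"
  }
}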

3. Edit logstash.yml and add:

xpack.monitoring.elasticsearch.url: "http://192.168.0.166:9200" 
xpack.monitoring.elasticsearch.username: "logstash_system" 
xpack.monitoring.elasticsearch.password: "changeme"

Restart the service and the error is gone.
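
After the restart, the credentials can also be double-checked directly against Elasticsearch with an authenticated request (same example host and credentials as above); it should now return cluster information instead of the 401 error:

curl -u elastic:changeme http://192.168.0.166:9200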

Error 3

Watcher:Error 400 Bad Request:Bad Request

Solution:

# vim elasticsearch.yml
xpack.watcher.enabled: true

Restart the service and the error is gone.
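
To confirm Watcher is actually enabled after the restart, a stats call can help (the endpoint below is the 7.x form; on 6.x it is /_xpack/watcher/stats, and the credentials are placeholders):

curl -u elastic:changeme http://localhost:9200/_watcher/stats?pretty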

Error 4

ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Solution:

# vim /etc/sysctl.conf

# Add the following line:
vm.max_map_count=655360

# Then apply it:
# sysctl -p
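
To confirm the kernel picked up the new value:

# sysctl vm.max_map_count
vm.max_map_count = 655360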

Error 5

ERROR: [1] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

Solution:

Switch to the root user.

# ulimit -Hn    # check the current hard limit

# vim /etc/security/limits.conf 

## Add the following at the end:

* soft nofile 655350   
* hard nofile 655350

Log out and log back in for the new limits to take effect.

Run ulimit -Hn again; the hard limit should now show 655350 instead of 4096.

# vim /etc/security/limits.d/90-nproc.conf 

# Find the following line:

soft nproc 1024
# and change it to:
soft nproc 2048
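
If the bootstrap check still fails after re-login, it can help to look at the limits the running Elasticsearch process actually inherited (the PID lookup below is just one way to find it):

# find the Elasticsearch PID and show its open-files limit
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n1)
grep "Max open files" /proc/$ES_PID/limits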

Error 6

org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/data/elasticsearch/data/elasticsearch]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?

Solution: when starting multiple Elasticsearch instances on the same host, add a new setting to config/elasticsearch.yml:

[elsearch@Elk_Server elasticsearch]$ vim config/elasticsearch.yml

node.max_local_storage_nodes: 256  
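
As with Error 1, it is also worth ruling out a stale instance that still holds the lock; a quick check against the data path from the error message (the nodes/0/node.lock path assumes the default data-directory layout):

lsof /data/elasticsearch/data/elasticsearch/nodes/0/node.lock 2>/dev/null
ps -ef | grep -v grep | grep elasticsearch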

Error 7

[ERROR][o.e.b.Bootstrap] [qsh-test-node01] node validation exception
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked

Solution 1 (turn off bootstrap.memory_lock; this can hurt performance):

# vim /etc/elasticsearch/elasticsearch.yml    # with the setting below set to false the node starts normally

bootstrap.memory_lock: false

Solution 2 (keep bootstrap.memory_lock enabled):

  1. In /etc/elasticsearch/elasticsearch.yml, keep bootstrap.memory_lock: true (this is the setting that produced the error above); with it enabled, the system configuration files below also need to be changed.

  2. In /etc/security/limits.conf, append the following:

* hard memlock unlimited
* soft memlock unlimited
  3. In /etc/systemd/system.conf, set the following values:
DefaultLimitNOFILE=65536
DefaultLimitNPROC=32000
DefaultLimitMEMLOCK=infinity

After making these changes, reboot the system and start Elasticsearch again; the error should be gone.
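
Whether memory locking actually took effect can be verified once the node is up (host and credentials are placeholders; mlockall should be reported as true):

curl -s -u elastic:changeme 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'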

Error 8

Nov 22 18:26:13 filebeat.sh.com elasticsearch[9775]: ERROR: [1] bootstrap checks failed
Nov 22 18:26:13 filebeat.sh.com elasticsearch[9775]: [1]: the default discovery settings are unsuitable for production use;...gured

**Solution:** enable the discovery settings, which are commented out by default:

# vim /etc/elasticsearch/elasticsearch.yml

discovery.seed_hosts: ["host1", "host2"]
cluster.initial_master_nodes: ["node-1", "node-2"]

# systemctl restart elasticsearch
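
For a single-node test environment, another way to satisfy this bootstrap check (instead of listing seed hosts and initial master nodes) is single-node discovery, assuming ES 7.x or later:

# /etc/elasticsearch/elasticsearch.yml
discovery.type: single-node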

Error 9

ERROR: [1] bootstrap checks failed
[1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[2019-03-01T16:20:49,025][INFO ][o.e.n.Node ] [node-data1] stopping ...
[2019-03-01T16:20:49,081][INFO ][o.e.n.Node ] [node-data1] stopped
[2019-03-01T16:20:49,081][INFO ][o.e.n.Node ] [node-data1] closing ...
[2019-03-01T16:20:49,100][INFO ][o.e.n.Node ] [node-data1] closed

Solution: CentOS 6 does not support seccomp, while ES 5.2.0 enables bootstrap.system_call_filter by default, so the bootstrap check fails; the system call filter has to be disabled on such systems.
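
Concretely, the usual fix for this cause is to turn the filter off in the node's configuration and restart:

# vim /etc/elasticsearch/elasticsearch.yml
bootstrap.system_call_filter: false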

Error 10

Elasticsearch "maximum shards open" problem

Problem: "reason"=>"Validation Failed: 1: this action would add [2] shards, but this cluster currently has [3000]/[3000] maximum normal shards open;"

Cause: the cluster has reached its maximum number of shards. Starting with Elasticsearch 7.0, each node in the cluster is limited to 1,000 shards by default.
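
Before raising the limit, the current usage can be checked, for example (host and credentials are placeholders):

curl -s -u elastic:password 'http://localhost:9200/_cluster/health?pretty&filter_path=active_shards,number_of_nodes'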

Solution (official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/modules-cluster.html#cluster-shard-limit):

Option 1: set it in elasticsearch.yml

cluster.max_shards_per_node: 10000

Option 2: run the following in the Kibana Dev Tools console

PUT /_cluster/settings
{
  "transient": {
    "cluster": {
      "max_shards_per_node":10000
    }
  }
}

Option 3: run the following from a Linux shell

curl -XPUT http://localhost:9200/_cluster/settings \
-u elastic:password \
-H "Content-Type: application/json" \
-d '{"transient":{"cluster":{"max_shards_per_node":10000}}}' 

A response of {"acknowledged":true,"persistent":{},"transient":{"cluster":{"max_shards_per_node":"10000"}}} means the change was applied successfully.

Error 11

The following error appears when adding or updating documents; according to the message, the index is read-only and only allows deletes:

{ "error": { "root_cause": [ { "type": "cluster_block_exception", "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];" } ], "type": "cluster_block_exception", "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];" }, "status": 403 }

Solution: 1. In Postman, run GET http://127.0.0.1:9200/my_index/_settings; the response looks like this:

    "my_index": {
        "settings": {
            "index": {
                "codec": "best_compression",
                "refresh_interval": "15s",
                "number_of_shards": "1",
                "blocks": {
                    "read_only_allow_delete": "true"
                },
                "provided_name": "my_index-2021.01",
                "creation_date": "1609459200118",
                "number_of_replicas": "1",
                "uuid": "sukLydyCRI6m_tkOH9MQtA",
                "version": {
                    "created": "6080299"
                }
            }
        }
    }
}

2. Fix it with the following command:

curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/<index_name>/_settings -d '{"index.blocks.read_only_allow_delete": null}'

# Alternatively, set read_only_allow_delete to false when creating the index.
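
If many indices were switched to read-only at the same time, the same setting can be cleared for every index in one call (_all targets all indices; adjust host and credentials as needed):

curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'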

3. If the same error comes back a few minutes after running the command above, and the index settings still show read_only_allow_delete as true:

This is caused by low disk space. The official documentation explains that once disk usage exceeds 95% (the flood-stage watermark), Elasticsearch automatically switches indices to read-only to keep the node from running out of disk space.

Solution:

1. The simplest and most direct fix is to free up disk space.

2. Edit the configuration file: add the following line to config/elasticsearch.yml

cluster.routing.allocation.disk.watermark.flood_stage: 99%

This raises the flood-stage watermark to 99% (the default is 95%); any other percentage can be used.

3. Edit the configuration file: add the following line to config/elasticsearch.yml

cluster.routing.allocation.disk.threshold_enabled: false

The default is true; setting it to false disables the disk allocation decider entirely.

Whichever of these you choose, restart Elasticsearch afterwards, then clear the index's read_only_allow_delete flag again with the command shown earlier. When you re-check the index settings and the read_only_allow_delete entry is gone, the change has taken effect.
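
To see how close each node is to the watermarks, per-node disk usage can be checked with the cat allocation API (host is a placeholder):

curl -s http://127.0.0.1:9200/_cat/allocation?v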