FIELDDATA Data is too large

Submitted by 两盒软妹~` on 2019-12-17 07:31:34

Question


I open Kibana and do a search and I get an error saying the shards failed. I looked in the elasticsearch.log file and saw this error:

org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [622775500/593.9mb]

Is there any way to increase that limit of 593.9mb?


Answer 1:


You can try to increase the fielddata circuit breaker limit to 75% (default is 60%) in your elasticsearch.yml config file and restart your cluster:

indices.breaker.fielddata.limit: 75%

Or, if you prefer not to restart your cluster, you can change the setting dynamically:

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "75%"
  }
}'

Give it a try.
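To verify that the dynamic setting took effect, you can read the cluster settings back via the standard _cluster/settings endpoint:

curl -XGET 'localhost:9200/_cluster/settings?pretty'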




Answer 2:


I met this problem too, so I checked the fielddata memory.

Use the request below:

GET /_stats/fielddata?fields=*

The output shows:

"logstash-2016.04.02": {
  "primaries": {
    "fielddata": {
      "memory_size_in_bytes": 53009116,
      "evictions": 0,
      "fields": {

      }
    }
  },
  "total": {
    "fielddata": {
      "memory_size_in_bytes": 53009116,
      "evictions": 0,
      "fields": {

      }
    }
  }
},
"logstash-2016.04.29": {
  "primaries": {
    "fielddata": {
      "memory_size_in_bytes":0,
      "evictions": 0,
      "fields": {

      }
    }
  },
  "total": {
    "fielddata": {
      "memory_size_in_bytes":0,
      "evictions": 0,
      "fields": {

      }
    }
  }
},

You can see that my indices are named by date, and evictions are all 0. In addition, the 2016.04.02 index uses 53009116 bytes of fielddata memory, while 2016.04.29 uses 0.

So I can conclude that the old data has occupied all of the memory, the new data cannot use it, and when I run an aggregation query on the new data it raises the CircuitBreakingException.

You can set this in config/elasticsearch.yml:

indices.fielddata.cache.size:  20%

This lets ES evict old fielddata entries when the cache reaches the memory limit.

But maybe the real solution is to add more memory in the future. Monitoring fielddata memory use is also a good habit (see the node stats request below).

More detail: https://www.elastic.co/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
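If you also want to see fielddata usage per node rather than per index, the node stats API exposes the same counters:

GET /_nodes/stats/indices/fielddata?fields=*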




Answer 3:


An alternative solution for the CircuitBreakingException: [FIELDDATA] Data too large error is to clean up the old/unused fielddata cache.

I found out that the fielddata limit is shared across indices, so clearing the cache of an unused index/field can solve the problem.

curl -X POST "localhost:9200/MY_INDICE/_cache/clear?fields=foo,bar"

For more info https://www.elastic.co/guide/en/elasticsearch/reference/7.x/indices-clearcache.html
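If you are not sure which index is holding on to the cache, the same clear-cache API can drop fielddata across all indices at once (a minimal sketch using its fielddata flag):

curl -X POST "localhost:9200/_cache/clear?fielddata=true"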




Answer 4:


I think it is important to understand why this is happening in the first place.

In my case, I had this error because I was running aggregations on "analyzed" fields. If you really need your string field to be analyzed, you should consider using multi-fields, making it analyzed for searches and not_analyzed for aggregations, as in the mapping sketch below.
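For example, a multi-field mapping in the pre-5.x string/not_analyzed syntax this answer refers to might look roughly like this (my_index, my_type, and title are hypothetical names used only for illustration):

curl -XPUT 'localhost:9200/my_index/_mapping/my_type' -d '{
  "properties": {
    "title": {
      "type": "string",
      "index": "analyzed",
      "fields": {
        "raw": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

You would then search on title but aggregate on title.raw, so the aggregation does not have to load fielddata for every analyzed token.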




Answer 5:


I ran into this issue the other day. In addition to checking the fielddata memory, I'd also consider checking the JVM and OS memory. In my case, the admin had forgotten to modify ES_HEAP_SIZE and left it at 1 GB; you can check each node's heap as shown below.
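A quick way to check each node's heap is the _cat/nodes API (the columns listed here are just the heap-related ones):

curl -XGET 'localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,heap.max'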




Answer 6:


Just use:

ES_JAVA_OPTS="-Xms10g -Xmx10g" ./bin/elasticsearch

Since the default heap is 1 GB, if your data is big you should set it higher.
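On Elasticsearch 5.x and later the heap is normally set in config/jvm.options instead of on the command line; a sketch assuming you want the same 10 GB heap:

# config/jvm.options
-Xms10g
-Xmx10g

Keep -Xms and -Xmx equal, and as a rule of thumb give the heap no more than about half of the machine's RAM.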



Source: https://stackoverflow.com/questions/30811046/fielddata-data-is-too-large
