Logstash/Elasticsearch/Kibana resource planning


Question


How do I plan resources (Elasticsearch instances, I suspect) according to load?

By load I mean ≈500K events/min, each event containing 8-10 fields.

What are the configuration knobs I should turn? I'm new to this stack.


Answer 1:


500,000 events per minute is 8,333 events per second, which should be pretty easy for a small cluster (3-5 machines) to handle.

The problem will come with keeping 720M daily documents open for 60 days (43B documents). If each of the 10 fields is 32 bytes, that's 13.8TB of disk space (nearly 28TB with a single replica).
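To make the arithmetic explicit, here is a quick back-of-the-envelope sketch in Python, using the same assumptions as above (10 fields averaging 32 bytes each, 60 days of retention):

```python
# Back-of-the-envelope sizing for ~500K events/min retained for 60 days.
events_per_min = 500_000
fields_per_event = 10
avg_bytes_per_field = 32      # assumed average field size, as above
retention_days = 60

events_per_sec = events_per_min / 60            # ~8,333
docs_per_day = events_per_min * 60 * 24         # 720,000,000
total_docs = docs_per_day * retention_days      # ~43.2 billion

primary_tb = total_docs * fields_per_event * avg_bytes_per_field / 1e12  # ~13.8 TB
with_replica_tb = primary_tb * 2                                         # ~27.6 TB

print(f"{events_per_sec:,.0f} events/sec")
print(f"{docs_per_day:,} docs/day, {total_docs:,} docs over {retention_days} days")
print(f"~{primary_tb:.1f} TB primary storage, ~{with_replica_tb:.1f} TB with one replica")
```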

For comparison, I have 5 maxed-out nodes (64GB of RAM, 31GB heap each) holding 1.2B documents that consume 1.2TB of disk space (double that with a replica). This cluster could not handle the load with only 32GB of RAM per machine, but it's happy now with 64GB. That represents 10 days of data for us.

Roughly, you're expecting to have 40x the number of documents consuming 10x the disk space of my cluster.

I don't have the exact numbers in front of me, but our pilot project for using doc_values is giving us something like a 90% heap savings.
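For reference, the snippet below is a minimal sketch of how doc_values can be enabled for string fields through an index template, so field data lives on disk rather than in the JVM heap. It assumes an Elasticsearch 1.x-era cluster (where doc_values had to be turned on explicitly; it became the default for not_analyzed fields later) and the elasticsearch-py client; the host, template name, and logstash-* pattern are placeholders:

```python
from elasticsearch import Elasticsearch

# Placeholder host; point this at your own cluster.
es = Elasticsearch(["http://localhost:9200"])

# Dynamic template that maps every string field as not_analyzed with
# doc_values enabled, so field data is stored on disk instead of heap.
template = {
    "template": "logstash-*",
    "mappings": {
        "_default_": {
            "dynamic_templates": [
                {
                    "strings_as_doc_values": {
                        "match_mapping_type": "string",
                        "mapping": {
                            "type": "string",
                            "index": "not_analyzed",
                            "doc_values": True,
                        },
                    }
                }
            ]
        }
    },
}

es.indices.put_template(name="logstash-doc-values", body=template)
```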

If all of that math holds, and doc_values is that good, you could be OK with a similar cluster as far as the actual bytes indexed are concerned. I would seek additional information on the overhead of having so many individual documents.

We've done some amount of Elasticsearch tuning, but there's probably more that could be done as well.

I would advise you to start with a handful of 64GB machines. You can add more as needed. Toss in a couple of (smaller) client nodes as the front-end for index and search requests.
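As a small illustration (hypothetical hostnames, using the elasticsearch-py client), applications would then point only at the client nodes, which coordinate index and search requests and fan them out to the data nodes behind them:

```python
from elasticsearch import Elasticsearch

# Hypothetical client-node hostnames; data nodes are never contacted directly.
es = Elasticsearch(["http://client-node-1:9200", "http://client-node-2:9200"])

# Search requests hit the client nodes, which route the query to the
# data nodes and merge the results.
resp = es.search(index="logstash-*", body={"query": {"match_all": {}}}, size=10)
print(resp["hits"]["total"])
```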



Source: https://stackoverflow.com/questions/30331768/logstash-elasticsearch-kibana-resource-planning
