Is there a limit on the number of indexes that can be created on Elasticsearch?

Happy的楠姐 2021-01-19 00:00

I'm using AWS-provided Elasticsearch.

I have a signup page on my website, and on each signup a new index for the new user gets created (to be used later by his wo…

3 Answers
  • 2021-01-19 00:03

    If I'm not mistaken, the only hard limit is the disk space of your servers, but if your indexes are growing too fast you should think about adding more nodes to spread the replicas across. I recommend reading this page: Indexing Performance Tips
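
    As a minimal sketch of checking that disk headroom, here is the `_cat/allocation` API called over HTTP with Python's `requests`; the endpoint URL is an assumption, so substitute your own AWS domain:

    ```python
    import requests

    ES_URL = "http://localhost:9200"  # assumed endpoint; use your AWS domain URL

    # _cat/allocation reports shard count and disk used/total per data node.
    resp = requests.get(f"{ES_URL}/_cat/allocation",
                        params={"format": "json", "bytes": "gb"})
    resp.raise_for_status()
    for node in resp.json():
        print(node["node"], "shards:", node["shards"],
              "disk:", node.get("disk.used"), "/", node.get("disk.total"), "GB")
    ```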

  • 2021-01-19 00:07

    Note: I haven't used AWS Elasticsearch, so your mileage may vary; they have started using Open Distro for Elasticsearch, a fork of the main project, but a lot of the principles are the same. Also, this question doesn't have a definitive answer, since it depends on various factors, but I hope this answer helps the thought process.

    One of the factors is the number of shards and replicas per index, as that determines the total number of shards per node. Each shard consumes heap memory, so you have to keep the number of shards per node limited; with the recommended maximum heap of 30 GB, 600 to 1000 shards per node should be reasonable (as per this comment), and you can scale your cluster accordingly.
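
    As an illustration (the endpoint URL is an assumption, and AWS-managed domains restrict some settings), you could check the cluster-wide shard count and cap shards per node via the `cluster.max_shards_per_node` setting, which Elasticsearch 7.x enforces with a default of 1000 per data node:

    ```python
    import requests

    ES_URL = "http://localhost:9200"  # assumed endpoint

    # _cluster/health includes the number of active shards cluster-wide.
    health = requests.get(f"{ES_URL}/_cluster/health").json()
    print("active shards:", health["active_shards"])

    # Reject new shards beyond ~600 per node, in line with the figure above.
    requests.put(
        f"{ES_URL}/_cluster/settings",
        json={"persistent": {"cluster.max_shards_per_node": 600}},
    ).raise_for_status()
    ```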

    Also, you have to monitor the number of open file descriptors and make sure they don't become a bottleneck for the nodes.
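
    For example (again assuming a reachable endpoint), the node stats API exposes open versus maximum file descriptors per node:

    ```python
    import requests

    ES_URL = "http://localhost:9200"  # assumed endpoint

    # /_nodes/stats/process reports file-descriptor usage for each node.
    stats = requests.get(f"{ES_URL}/_nodes/stats/process").json()
    for node in stats["nodes"].values():
        proc = node["process"]
        print(node["name"], "open:", proc["open_file_descriptors"],
              "max:", proc["max_file_descriptors"])
    ```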

    HTH!

  • 2021-01-19 00:13

    Indexes themselves have no limit; shards, however, do. The recommended number of shards per GB of JVM heap is 20 (you can check heap usage on the Kibana Stack Monitoring tab), which means that with 5 GB of JVM heap the recommended maximum is 100 shards.

    Remember that one index can take from 1 to x shards (1 primary and x replicas); normally people have 1 primary and 1 replica, and if that is your case you would be able to create 50 indexes with those 5 GB of heap (see the sketch below).
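
    The arithmetic behind that estimate, as a small sketch (the 20-shards-per-GB figure is the rule of thumb above, not a hard limit):

    ```python
    def max_indexes(heap_gb: float, shards_per_index: int = 2,
                    shards_per_gb: int = 20) -> int:
        """Rough capacity: 20 shards per GB of JVM heap, divided by the
        shards each index needs (1 primary + 1 replica = 2 here)."""
        return int(heap_gb * shards_per_gb) // shards_per_index

    print(max_indexes(5))  # 5 GB heap, 1 primary + 1 replica -> 50
    ```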
