The memory consumption of Hadoop's NameNode?

南方客 2021-01-30 18:42

Can anyone give a detailed analysis of the memory consumption of the NameNode? Or is there some reference material? I cannot find any material online. Thank you!

4 Answers
  •  小鲜肉 (OP) 2021-01-30 19:38

    There are several technical limits on the NameNode (NN), and hitting any of them will limit your scalability.

    1. Memory. The NN consumes about 150 bytes per block. From this you can calculate how much RAM you need for your data. There is a good discussion in: Namenode file quantity limit.
    2. IO. The NN performs one IO operation for each change to the filesystem (create file, delete block, etc.), so your local IO must keep up. This is harder to estimate. Since the number of blocks is already limited by memory, you will not hit this limit unless your cluster is very big. If it is, consider an SSD.
    3. CPU. The NameNode carries a considerable load keeping track of the health of all blocks on all datanodes. Each datanode periodically reports the state of all its blocks. Again, unless the cluster is very big, this should not be a problem.
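    The memory estimate in point 1 is simple arithmetic. A minimal sketch, assuming the ~150 bytes/block figure from the answer and an illustrative 128 MB block size (the function name and workload numbers are hypothetical, for illustration only):

    ```python
    def namenode_heap_bytes(total_data_bytes,
                            block_size=128 * 1024 * 1024,
                            bytes_per_block=150):
        """Rough NameNode heap needed for block objects alone.

        Only counts the ~150 bytes/block figure; file/directory
        objects and replica-location overhead are not modeled.
        """
        num_blocks = -(-total_data_bytes // block_size)  # ceiling division
        return num_blocks * bytes_per_block

    # Example: 1 PB of data in 128 MB blocks -> ~8.4 million blocks,
    # roughly 1.2 GiB of NameNode heap for the block objects.
    print(namenode_heap_bytes(1024**5) / 1024**2, "MiB")
    ```

    Note that small files make this much worse: many files far below the block size means far more blocks (and file objects) per byte of actual data.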
