The memory consumption of Hadoop's NameNode?

Pitt

I suppose the memory consumption depends on your HDFS setup, i.e. on the overall size of the HDFS relative to the block size. From the Hadoop NameNode wiki:

Use a good server with lots of RAM. The more RAM you have, the bigger the file system, or the smaller the block size.

From https://twiki.opensciencegrid.org/bin/view/Documentation/HadoopUnderstanding:

Namenode: The core metadata server of Hadoop. This is the most critical piece of the system, and there can only be one of these. This stores both the file system image and the file system journal. The namenode keeps all of the filesystem layout information (files, blocks, directories, permissions, etc) and the block locations. The filesystem layout is persisted on disk and the block locations are kept solely in memory. When a client opens a file, the namenode tells the client the locations of all the blocks in the file; the client then no longer needs to communicate with the namenode for data transfer.

The same site recommends the following:

Namenode: We recommend at least 8GB of RAM (minimum is 2GB RAM), preferably 16GB or more. A rough rule of thumb is 1GB per 100TB of raw disk space; the actual requirement is around 1GB per million objects (files, directories, and blocks). The CPU requirement is any modern multi-core server CPU. Typically, the namenode will only use 2-5% of your CPU. As this is a single point of failure, the most important requirement is reliable hardware rather than high performance hardware. We suggest a node with redundant power supplies and at least 2 hard drives.
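To make those two rules of thumb concrete, here is a minimal Python sketch; the cluster figures in it are made-up illustrations, not taken from the quoted recommendation:

    # Rough NameNode heap sizing from the two rules of thumb quoted above.
    # The cluster figures below are illustrative assumptions.

    def heap_by_raw_disk(raw_disk_tb):
        """Rough rule: ~1 GB of heap per 100 TB of raw disk space."""
        return raw_disk_tb / 100.0  # GB

    def heap_by_object_count(num_objects):
        """Closer to the actual requirement: ~1 GB per million objects
        (files, directories, and blocks)."""
        return num_objects / 1_000_000  # GB

    raw_disk_tb = 4800          # e.g. 200 nodes x 24 TB each (assumed)
    num_objects = 13_000_000    # files + directories + blocks (assumed)

    print(f"heap by raw disk:     ~{heap_by_raw_disk(raw_disk_tb):.0f} GB")
    print(f"heap by object count: ~{heap_by_object_count(num_objects):.0f} GB")

Either estimate only gives an order of magnitude; the quoted text treats the per-object figure as the closer approximation of the actual requirement.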

For a more detailed analysis of memory usage, check this link out: https://issues.apache.org/jira/browse/HADOOP-1687

You also might find this question interesting: Hadoop namenode memory usage

David Gruzman

There are several technical limits to the NameNode (NN), and facing any of them will limit your scalability.

  1. Memory. The NN consumes about 150 bytes per block. From that you can calculate how much RAM you need for your data (a short sketch just after this list inverts this figure into a maximum block count per heap size). There is a good discussion here: Namenode file quantity limit.
  2. IO. The NN performs one IO operation for each change to the filesystem (create, delete block, etc.), so your local storage must keep up. It is harder to estimate how much you need; since memory already caps the number of blocks, you will not hit this limit unless your cluster is very big. If it is, consider an SSD.
  3. CPU. The namenode carries a considerable load keeping track of the health of all blocks on all datanodes; each datanode periodically reports the state of all its blocks. Again, unless the cluster is very big, this should not be a problem.
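As a rough illustration of the memory limit in point 1, this small Python sketch inverts the 150-bytes-per-block figure to estimate how many blocks a given heap can track (the heap sizes are assumptions chosen for illustration):

    # Approximate number of blocks a NameNode heap can track at ~150 bytes per block.
    # Heap sizes below are illustrative assumptions.

    BYTES_PER_BLOCK = 150  # approximate in-memory cost per block, from the answer

    def max_blocks(heap_gb):
        return int(heap_gb * 2**30 / BYTES_PER_BLOCK)

    for heap_gb in (8, 16, 32):
        print(f"{heap_gb:>3} GB heap -> ~{max_blocks(heap_gb):,} blocks")

Note that this counts raw per-block consumption only; in practice you would leave substantial headroom for files, directories, and the rest of the JVM heap.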
user166555

Example calculation

200 node cluster
24TB/node
128MB block size
Replication factor = 3

How much space is required?

# blocks = 200*24*2^20/(128*3)
~13 million blocks
~13,000 MB of memory (at ~1 GB of heap per million blocks).
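For what it's worth, the same arithmetic as a short Python sketch, using only the figures given above:

    # Reproduces the example calculation above.
    nodes = 200
    tb_per_node = 24
    block_size_mb = 128
    replication = 3

    raw_mb = nodes * tb_per_node * 2**20             # total raw capacity in MB
    blocks = raw_mb / (block_size_mb * replication)  # unique (non-replicated) blocks
    heap_gb = blocks / 1_000_000                     # rule of thumb: ~1 GB per million blocks

    print(f"blocks: ~{blocks / 1e6:.1f} million")    # ~13.1 million
    print(f"heap:   ~{heap_gb:.1f} GB")              # ~13.1 GB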

I guess we should make the distinction between how namenode memory is consumed by each namenode object and general recommendations for sizing the namenode heap.

For the first case (consumption), AFAIK, each namenode object holds an average of 150 bytes of memory. Namenode objects are files, blocks (not counting the replicated copies), and directories. So a file occupying 3 blocks accounts for 4 objects (1 file and 3 blocks) x 150 bytes = 600 bytes.

For the second case, the recommended heap size for a namenode, it is generally suggested that you reserve 1GB per 1 million blocks. If you calculate this (150 bytes per block) you get only 150MB of actual consumption per million blocks, much less than the recommended 1GB, but you should also account for the number of files and directories, not just blocks.
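To illustrate that difference, the sketch below applies the 150-bytes-per-object figure to a hypothetical namespace (the file and directory counts are assumptions, not from the answer) and compares the result with the 1GB-per-million-blocks rule of thumb:

    # Per-object NameNode memory at ~150 bytes per object (file, directory, block).
    # Namespace figures below are hypothetical, for illustration only.

    BYTES_PER_OBJECT = 150

    files = 1_000_000
    dirs = 100_000
    blocks_per_file = 3                   # as in the 3-block file example above

    blocks = files * blocks_per_file
    objects = files + dirs + blocks
    consumed_mb = objects * BYTES_PER_OBJECT / 2**20
    recommended_gb = blocks / 1_000_000   # 1 GB of heap per million blocks

    print(f"objects:          {objects:,}")              # 4,100,000
    print(f"consumed memory:  ~{consumed_mb:.0f} MB")    # ~587 MB
    print(f"recommended heap: ~{recommended_gb:.0f} GB") # ~3 GB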

I guess it is a safe-side (conservative) recommendation. Check the following links for a more general discussion and examples:

Sizing NameNode Heap Memory - Cloudera

Configuring NameNode Heap Size - Hortonworks

Namenode Memory Structure Internals
