How can I increase HDFS capacity?

北海茫月 2021-01-03 03:14

How can I increase the configured capacity of my hadoop DFS from the default 50GB to 100GB?

My present setup is Hadoop 1.2.1 running on a CentOS 6 machine with a 120GB disk.

2 Answers
  • 2021-01-03 03:34

    Stop all the services: stop-all.sh

    Then add these properties to hdfs-site.xml to increase the storage capacity:


        <property>
            <name>dfs.disk.balancer.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.storage.policy.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.blocksize</name>
            <value>134217728</value>
        </property>
        <property>
            <name>dfs.namenode.handler.count</name>
            <value>100</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:///usr/local/hadoop_store/hdfs/namenode</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:///usr/local/hadoop_store/hdfs/datanode,[disk]file:///hadoop_store2/hdfs/datanode</value>
        </property>
    

    Also remember to put [disk] before a path that adds an extra disk and [ssd] before a dedicated extra SSD drive, and always double-check the triple slash ("///") in the file:/// URIs that point to the directories. A sketch for preparing the new directory follows below.
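    Before pointing dfs.datanode.data.dir at a new location, that directory has to exist and be owned by the user that runs the DataNode. A minimal sketch, assuming the extra disk is mounted at /hadoop_store2 and Hadoop runs as user hduser in group hadoop (both names are assumptions, adjust to your setup):

        # Create the new data directory on the extra disk (assumed mount point)
        sudo mkdir -p /hadoop_store2/hdfs/datanode
        # Hand it to the (assumed) Hadoop user and group
        sudo chown -R hduser:hadoop /hadoop_store2/hdfs/datanode
        # Permissions should match the dfs.datanode.data.dir.perm setting (commonly 755)
        sudo chmod 755 /hadoop_store2/hdfs/datanode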

    After that, format the NameNode so that the new settings take effect in the Hadoop cluster, by running:

    hadoop namenode -format

    then start the services again from the beginning: start-all.sh

    "/* remember without formating the hdfs the setting will not be activated as it will search for the Blockpool Id (BP_ID) in dfs.datanode.data.dir, and for the new location it will not found any BP_ID. "/*

  • 2021-01-03 03:51

    Set the location of HDFS to a partition with more free space. For hadoop-1.2.1 this can be done by setting hadoop.tmp.dir in hadoop-1.2.1/conf/core-site.xml:

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:9000</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/myUserID/hdfs</value>
            <description>base location for other hdfs directories.</description>
        </property>
    </configuration>
    

    Running

    df

    showed that my /home partition was the rest of my hard disk, minus 50GB for my / (root) partition. The default location for HDFS is /tmp/hadoop-myUserId, which is in the / partition; this is where my initial 50GB HDFS size came from.
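    To see the same thing on your own machine, you can compare the free space on / and /home and check how much data already sits under the default hadoop.tmp.dir (the Hadoop 1.x default is /tmp/hadoop-<your user name>):

        # Free space on the root and home partitions
        df -h / /home
        # Size of any existing HDFS data under the default hadoop.tmp.dir
        du -sh /tmp/hadoop-$(whoami) 2>/dev/null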

    Creating a directory for HDFS and confirming which partition it lives on was done with:

    mkdir ~/hdfs
    df -P ~/hdfs | tail -1 | cut -d' ' -f 1
    

    The change was successfully applied by running:

    stop-all.sh
    start-dfs.sh
    hadoop namenode -format
    start-all.sh
    hadoop dfsadmin -report
    

    which reports the size of the HDFS as the size of my /home partition.

    Thank you jtravaglini for the comment/clue.
