EC2 Can't resize volume after increasing size

青春惊慌失措 2020-11-30 16:30

I have followed the steps for resizing an EC2 volume

  1. Stopped the instance
  2. Took a snapshot of the current volume
  3. Created a new volume out of the snapshot with a bigger size
14 Answers
  • 2020-11-30 17:09

    In case anyone ran into this issue with the disk at 100% use and no space left even to run the growpart command (it creates a file in /tmp):

    Here is a command I found that works even while the EBS volume is in use, and even if your EC2 instance has no space left and is at 100%:

    /sbin/parted ---pretend-input-tty /dev/xvda resizepart 1 yes 100%
    

    see this site here:

    https://www.elastic.co/blog/autoresize-ebs-root-volume-on-aws-amis
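
    After the partition has been resized, the filesystem on it still needs to be grown. A minimal follow-up, assuming the root partition is /dev/xvda1 as in the command above (use xfs_growfs instead if the filesystem is XFS):

    # grow an ext2/3/4 filesystem to fill the enlarged partition
    sudo resize2fs /dev/xvda1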

  • 2020-11-30 17:12

    I don't have enough rep to comment above, but also note, per the comments above, that you can corrupt your instance if you start the new partition at 1. If you hit 'u' after starting fdisk, before you list your partitions with 'p', it will in fact show you the correct start number so you don't corrupt your volumes. For the CentOS 6.5 AMI, as mentioned above, 2048 was correct for me.
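
    A minimal sketch of that check, assuming the root device is /dev/xvda (keystrokes typed at the fdisk prompt are shown as comments):

    sudo fdisk /dev/xvda
    # u  -> switch display units to sectors (newer fdisk already defaults to sectors)
    # p  -> print the partition table and note the 'Start' sector (e.g. 2048)
    # reuse that same start sector when recreating the partition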

  • 2020-11-30 17:15

    There's no need to stop the instance and detach the EBS volume to resize it anymore!

    13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"

    The process works even if the volume to extend is the root volume of a running instance!


    Say we want to increase the boot drive of an Ubuntu instance from 8G up to 16G "on-the-fly".

    step-1) Log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button
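
    The same change can also be made from the AWS CLI; a short sketch, assuming a placeholder volume ID vol-0123456789abcdef0:

    # request the size change (no stop/detach required)
    aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16
    # optionally watch the modification progress
    aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0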


    step-2) ssh into the instance and resize the partition:

    let's list block devices attached to our box:
    lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  16G  0 disk
    └─xvda1 202:1    0   8G  0 part /
    

    As you can see, /dev/xvda1 is still an 8 GiB partition on a 16 GiB device, and there are no other partitions on the volume. Let's use "growpart" to resize the 8G partition up to 16G:

    # install "cloud-guest-utils" if it is not installed already
    apt install cloud-guest-utils
    
    # resize partition
    growpart /dev/xvda 1
    

    Let's check the result (you can see /dev/xvda1 is now 16G):

    lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0  16G  0 disk
    └─xvda1 202:1    0  16G  0 part /
    

    Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when changing the boot drive.


    step-3) Resize the file system so it grows all the way and fully uses the new partition space
    # Check before resizing ("Avail" shows 1.1G):
    df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      7.8G  6.3G  1.1G  86% /
    
    # resize filesystem
    resize2fs /dev/xvda1
    
    # Check after resizing ("Avail" now shows 8.7G!-):
    df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1       16G  6.3G  8.7G  42% /
    

    So we have zero downtime and lots of new space to use.
    Enjoy!

    Update: use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
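
    If you are not sure which filesystem the partition uses, a quick way to check (device name as above):

    # show the filesystem type of the root partition
    lsblk -f /dev/xvda1
    # or
    df -Th /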

  • 2020-11-30 17:15

    [SOLVED]

    This is what had to be done:

    1. Stop the instance
    2. Create a snapshot from the volume
    3. Create a new volume based on the snapshot increasing the size
    4. Check and remember the current volume's mount point (e.g. /dev/sda1)
    5. Detach current volume
    6. Attach the recently created volume to the instance, setting the exact mount point
    7. Restart the instance
    8. Access the instance via SSH and run fdisk /dev/xvde
    9. Hit p to show the current partitions
    10. Hit d to delete the current partitions (if there is more than one, you have to delete them one at a time) NOTE: Don't worry, data is not lost
    11. Hit n to create a new partition
    12. Hit p to set it as primary
    13. Hit 1 to set the first cylinder
    14. Set the desired new space (if left empty, the whole available space is used)
    15. Hit a to make it bootable
    16. Hit 1 and then w to write the changes
    17. Reboot the instance
    18. Log in via SSH and run resize2fs /dev/xvde1
    19. Finally, check the new space by running df -h (a condensed sketch of the whole session follows this list)
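
    A condensed sketch of the same session, using the device names assumed in the steps above (keystrokes at the fdisk prompt shown as comments):

    sudo fdisk /dev/xvde
    # p -> print partitions, d -> delete, n -> new, p -> primary, 1 -> partition number,
    # <Enter> -> accept defaults to use the whole space, a -> toggle bootable flag,
    # 1 -> partition 1, w -> write changes and exit
    sudo reboot
    # after the reboot:
    sudo resize2fs /dev/xvde1
    df -h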

    This is it

    Good luck!

  • 2020-11-30 17:17

    Just in case anyone is here for GCP (Google Cloud Platform), try this:

    sudo growpart /dev/sdb 1
    sudo resize2fs /dev/sdb1
    
  • 2020-11-30 17:18

    Did you make a partition on this volume? If you did, you will need to grow the partition first.
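
    A quick way to check whether the volume has a partition on it, e.g.:

    # if a partition such as xvdf1 is listed under the disk, grow the partition first
    lsblk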
