Question
I had some unknown issue with my old EC2 instance that left me unable to SSH into it anymore. Therefore I created a new EBS volume from a snapshot of the old volume and tried to attach and mount it to a new instance. Here is what I did:
- Created a new volume from a snapshot of the old one.
- Created a new EC2 instance and attached the volume to it as /dev/xvdf (or /dev/sdf).
- SSHed into the instance and attempted to mount the old volume with:
$ sudo mkdir -m 000 /vol
$ sudo mount /dev/xvdf /vol
And the output was:
mount: block device /dev/xvdf is write-protected, mounting read-only
mount: you must specify the filesystem type
Now, I know I should specify the filesystem as ext4, but since the volume contains a lot of important data, I cannot format it with $ sudo mkfs -t ext4 /dev/xvdf. Still, I know of no other way of preserving the data and specifying the filesystem at the same time. I've searched a lot about this and I'm currently at a loss.
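For what it's worth, the filesystem type can usually be identified without writing anything to the volume; file and lsblk are standard Linux tools, and /dev/xvdf is the device name from above:
$ sudo file -s /dev/xvdf   # prints the detected filesystem, or just 'data' if none is found
$ sudo lsblk -f            # lists disks and partitions with their FSTYPE and UUID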
By the way, the mounting as 'read-only' also worries me, but I haven't looked into it yet since I can't mount the volume at all.
Thanks in advance!
Edit:
When I run sudo mount /dev/xvdf /vol -t ext4 (without formatting) I get:
mount: wrong fs type, bad option, bad superblock on /dev/xvdf,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
And dmesg | tail gives me:
[ 1433.217915] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.222107] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.226127] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.260752] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.265563] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.270477] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
[ 1433.274549] FAT-fs (xvdf): bogus number of reserved sectors
[ 1433.277632] FAT-fs (xvdf): Can't find a valid FAT filesystem
[ 1433.306549] ISOFS: Unable to identify CD-ROM format.
[ 2373.694570] EXT4-fs (xvdf): VFS: Can't find ext4 filesystem
Answer 1:
The One Liner
🥇 Mount the partition (if disk is partitioned):
sudo mount /dev/xvdf1 /vol -t ext4
Mount the disk (if not partitioned):
sudo mount /dev/xvdf /vol -t ext4
where:
- /dev/xvdf is changed to the EBS volume device being mounted
- /vol is changed to the folder you want to mount to
- ext4 is the filesystem type of the volume being mounted
Common Mistakes & How-Tos:
✳️ Attached Devices List
Check your mount command for the correct EBS Volume device name and filesystem type. The following will list them all:
sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,UUID,LABEL
If your EBS volume is listed with an attached partition, mount the partition, not the disk.
✳️ If your volume isn't listed
If it doesn't show up, you didn't attach your EBS volume in the AWS web console.
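If you prefer the command line, the same attach step can be done with the AWS CLI; the volume ID, instance ID, and device name below are placeholders to substitute with your own:
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf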
✳️ Auto Remounting on Reboot
These devices become unmounted again if the EC2 Instance ever reboots.
A way to make them mount again upon startup is to add the volume to the server's /etc/fstab file.
🔥 Caution:🔥
If you corrupt the /etc/fstab file, it will make your system unbootable. Read AWS's short article so you know how to check that you did it correctly:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html#ebs-mount-after-reboot
First:
With the lsblk command above, find your volume's UUID and FSTYPE.
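An alternative way to read the UUID and filesystem type is blkid (another standard util-linux tool); /dev/xvdf1 is just the example partition from earlier:
sudo blkid /dev/xvdf1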
Second:
Keep a copy of your original fstab file.
sudo cp /etc/fstab /etc/fstab.original
Third:
Add a line for the volume in sudo nano /etc/fstab.
The fields of fstab are tab-separated, and each line has the following fields:
<UUID> <MOUNTPOINT> <FSTYPE> defaults,discard,nofail 0 0
Here's an example to help you; my own fstab reads as follows:
LABEL=cloudimg-rootfs / ext4 defaults,discard,nofail 0 0
UUID=e4a4b1df-cf4a-469b-af45-89beceea5df7 /var/www-data ext4 defaults,discard,nofail 0 0
That's it, you're done. Check for errors in your work by running:
sudo mount --all --verbose
You will see something like this if things are 👍:
/ : ignored
/var/www-data : already mounted
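On newer systems, util-linux also ships a dedicated fstab checker; if your distribution has it, this is another way to sanity-check /etc/fstab before a reboot:
sudo findmnt --verify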
Answer 2:
I noticed that for some reason the volume was located at /dev/xvdf1, not /dev/xvdf.
Using sudo mount /dev/xvdf1 /vol -t ext4 worked like a charm.
Answer 3:
I encountered this problem too after adding a new 16GB volume and attaching it to an existing instance. First of all, you need to know what disks are present. Run:
sudo fdisk -l
You'll get output like the one shown below, detailing information about your disks (volumes):
Disk /dev/xvda: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders, total 25165824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/xvda1 * 16065 25157789 12570862+ 83 Linux
Disk /dev/xvdf: 17.2 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvdf doesn't contain a valid partition table
As you can see, the newly added disk /dev/xvdf is present. To make it available, you need to create a filesystem on it and mount it at a mount point. You can achieve that with the following commands:
sudo mkfs -t ext4 /dev/xvdf
Making a new file system clears everything in the volume, so only do this on a fresh volume without important data.
Then mount it, for example in a directory under the /mnt folder:
sudo mount /dev/xvdf /mnt/dir/
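Note that the mount point directory has to exist before mounting; /mnt/dir is just the example path used above:
sudo mkdir -p /mnt/dir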
Confirm that you have mounted the volume to the instance by running
df -h
This is what you should have
Filesystem Size Used Avail Use% Mounted on
udev 486M 12K 486M 1% /dev
tmpfs 100M 400K 99M 1% /run
/dev/xvda1 12G 5.5G 5.7G 50% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 497M 0 497M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdf 16G 44M 15G 1% /mnt/ebs
And that's it; you now have the volume attached to your existing instance and ready for use.
Answer 4:
I encountered this problem too, and I've figured it out now:
[ec2-user@ip-172-31-63-130 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 8G 0 disk
└─xvdf1 202:81 0 8G 0 part
You should mount the partition /dev/xvdf1 (whose TYPE is part), not the disk /dev/xvdf (whose TYPE is disk).
Answer 5:
I had a different issue. When I checked the dmesg logs, the problem was that the UUID of the volume (the root volume of another EC2 instance) was the same as the UUID of the existing root volume. To fix this, I mounted it on an EC2 instance of a different Linux type, and it worked.
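For reference, a snapshot copy that shares its UUID with the already-mounted root filesystem can also be dealt with on the same instance; which command applies depends on the filesystem type, and /dev/xvdf1 is an assumed device name here:
# XFS: mount while ignoring the duplicate UUID, or give the unmounted copy a fresh UUID
sudo mount -o nouuid /dev/xvdf1 /vol
sudo xfs_admin -U generate /dev/xvdf1
# ext2/3/4: give the copy a new random UUID
sudo tune2fs -U random /dev/xvdf1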
Answer 6:
You do not need to create a file system on a volume newly created from a snapshot. Simply attach the volume and mount it to the folder you want. I attached the new volume to the same location as the previously deleted volume and it worked fine.
[ec2-user@ip-x-x-x-x vol1]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 10G 0 disk /home/ec2-user/vol1
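Using the device and folder shown in that lsblk output, the attach-and-mount this answer describes comes down to something like:
sudo mkdir -p /home/ec2-user/vol1
sudo mount /dev/xvdb /home/ec2-user/vol1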
Source: https://stackoverflow.com/questions/28792272/attaching-and-mounting-existing-ebs-volume-to-ec2-instance-filesystem-issue