
Sharing EBS Volumes Among Instances

In this post I share an experiment: create an EBS volume, attach it to an EC2 instance, mount it inside the instance, put a file on it, unmount it, and detach it. Afterwards the volume is mounted in another instance (the first one having been terminated, because a volume cannot be attached to more than one instance at a time).

I followed the instructions given in the Elastic Block Storage Feature Guide.

Starting an Instance

Let’s see which AMIs are available:

$ ec2-describe-images -o self
IMAGE ami-c6c622af dehonk-gettingstarted/image.manifest.xml 190912652296 available private i386 machine

I launch ami-c6c622af with Elasticfox. Let’s check the status of the instance with the command line tools:

$ ec2-describe-instances
RESERVATION r-9f3deef6 190912652296 default
INSTANCE i-520faf3b ami-c6c622af pending gettingstarted-keypair 0 m1.small 2008-09-25T09:50:01+0000 us-east-1c

Note the availability zone in which the instance is running; we will need it later, because a volume can only be attached to an instance in the same availability zone.
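The zone can be pulled straight out of the describe output. A minimal sketch, using the INSTANCE line from above as embedded sample data; against a live account you would pipe in the real ec2-describe-instances output instead:

```shell
# Sample INSTANCE line copied from the ec2-describe-instances output above.
instance_line='INSTANCE i-520faf3b ami-c6c622af pending gettingstarted-keypair 0 m1.small 2008-09-25T09:50:01+0000 us-east-1c'

# The availability zone is the last whitespace-separated field.
zone=$(echo "$instance_line" | awk '/^INSTANCE/ {print $NF}')
echo "$zone"
```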

Create the Volume

Create a volume of 1 GB in the same availability zone in which the instance resides:

$ ec2-create-volume --size 1 -z us-east-1c
VOLUME vol-4001e429 1 us-east-1c creating 2008-09-25T09:51:48+0000

Check the status of the volume:

$ ec2-describe-volumes vol-4001e429
VOLUME vol-4001e429 1 us-east-1c available 2008-09-25T09:51:48+0000

The volume is available now. Time to use it!
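Rather than re-running ec2-describe-volumes by hand, a script can poll until the status flips from creating to available. A sketch of that loop; describe_volume is a stub that fakes the CLI output (reporting creating for the first two polls) so the loop can be demonstrated without an AWS account:

```shell
# Stub standing in for `ec2-describe-volumes vol-4001e429`; a real run
# would call the CLI instead. Reports "creating" for the first two polls.
describe_volume() {
  if [ "$1" -lt 3 ]; then
    echo 'VOLUME vol-4001e429 1 us-east-1c creating 2008-09-25T09:51:48+0000'
  else
    echo 'VOLUME vol-4001e429 1 us-east-1c available 2008-09-25T09:51:48+0000'
  fi
}

# Poll until the fifth field (the status) reads "available".
attempt=0
status=creating
while [ "$status" != "available" ]; do
  attempt=$((attempt + 1))
  status=$(describe_volume "$attempt" | awk '{print $5}')
  # sleep 5   # against the real API, pause between polls
done
echo "volume became $status after $attempt polls"
```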

Attaching the Volume

Attach the newly created volume as device /dev/sdh to the running instance:

$ ec2-attach-volume vol-4001e429 -i i-520faf3b -d /dev/sdh
ATTACHMENT vol-4001e429 i-520faf3b /dev/sdh attaching 2008-09-25T09:59:14+0000

The command returns saying that the volume is attaching. Let’s check the status:

$ ec2-describe-volumes
VOLUME vol-4001e429 1 us-east-1c in-use 2008-09-25T09:51:48+0000 ATTACHMENT vol-4001e429 i-520faf3b /dev/sdh attached 2008-09-25T09:59:14+0000

While the volume was available and attaching before, now it is in-use and attached.

Formatting the Volume

Open another terminal. Connect to the instance via ssh:

$ ssh -i id_rsa-gettingstarted-keypair root@ec2-75-101-254-227.compute-1.amazonaws.com

Looking at the contents of /dev reveals that the volume is available as device sdh:

# ls /dev
MAKEDEV port ptyc1 ptye6 ptyqb ptyt0 ptyv5 ptyxa ptyzf ttya2 ttyc7 ttyec ttyr1 ttyt6 ttyvb ttyy0
--- cut here for brevity ---
loop7 ptyb3 ptyd8 ptypd ptys2 ptyu7 ptywc ptyz1 sdh ttyb9 ttyde ttyq3 ttys8 ttyud ttyx2 ttyz7
--- cut here for brevity ---

A new volume comes without a filesystem, so we create one first:

# yes | mkfs -t ext3 /dev/sdh
mke2fs 1.38 (30-Jun-2005)
/dev/sdh is entire device, not just one partition!
Proceed anyway? (y,n) Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
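Note that mkfs will happily overwrite an existing filesystem, which matters once the volume is reused: only a brand-new volume should be formatted. A sketch of a guard based on the output of file -s on the device; the two sample strings are assumptions about what file prints for a blank and a formatted volume, not captured output:

```shell
# has_filesystem <output of `file -s /dev/sdh`>: succeed if the string
# mentions an ext2/ext3 filesystem, fail for a blank device (plain "data").
has_filesystem() {
  case "$1" in
    *"ext3 filesystem"*|*"ext2 filesystem"*) return 0 ;;
    *) return 1 ;;
  esac
}

fresh='/dev/sdh: data'                                   # assumed: new volume
formatted='/dev/sdh: Linux rev 1.0 ext3 filesystem data' # assumed: after mkfs

if has_filesystem "$fresh"; then echo "skip mkfs"; else echo "safe to format"; fi
if has_filesystem "$formatted"; then echo "skip mkfs"; else echo "safe to format"; fi
```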

Mounting the Volume

Finally, the volume is ready to be mounted in the instance:

# mkdir /mnt/data-store
# mount /dev/sdh /mnt/data-store

Let’s check whether everything is as expected:

# ls /mnt
data-store lost+found
# ls /mnt/data-store/
lost+found

That looks okay.

Putting a File on the Volume

Using vi, I created a file named readme with the following content:

This is an example file to show that a file persists on an EBS volume after unmounting and detaching.

Unmounting the Volume

Before we stop the instance, we have to unmount the volume. From the Elastic Block Storage Feature Guide:

A volume must be unmounted inside the instance before being detached. Failure to do so will result in damage to the file system or the data it contains.

# umount /mnt/data-store

Remember to cd out of the mount point first; otherwise you will get the error umount: /mnt/data-store: device is busy.
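The busy/retry dance can be scripted. A sketch; try_umount is a stub for umount /mnt/data-store that fails on the first call (simulating the busy error) so the retry logic can run anywhere:

```shell
cd /   # step out of the mount point: the usual cause of "device is busy"

tries=0
try_umount() {          # stub standing in for: umount /mnt/data-store
  tries=$((tries + 1))
  [ "$tries" -ge 2 ]    # simulate: busy on the first attempt, then fine
}

until try_umount; do
  echo "device is busy, retrying"
  # fuser -m /mnt/data-store   # on the instance: list processes holding it
done
echo "unmounted after $tries attempts"
```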

Detach the Volume

From the Feature Guide:

An Amazon EBS volume can be detached from an instance by either explicitly detaching the volume or terminating the instance.

Let’s do it by explicitly detaching it:

$ ec2-detach-volume vol-4001e429 -i i-520faf3b -d /dev/sdh
ATTACHMENT vol-4001e429 i-520faf3b /dev/sdh detaching 2008-09-25T09:59:14+0000

Soon the status of the volume changes from detaching to available:

$ ec2-describe-volumes
VOLUME vol-4001e429 1 us-east-1c available 2008-09-25T09:51:48+0000

Mounting the Volume in Another Instance

Now repeat the steps to launch a new instance and mount the volume. Because the volume resides in availability zone us-east-1c and a volume can only be attached to an instance in the same zone, the instance has to be launched in us-east-1c.
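To avoid hard-coding the zone, the launch command can derive it from the volume itself. A sketch using the VOLUME line from above as sample data; the commented-out ec2-run-instances call shows where the derived zone would go:

```shell
# Sample line from ec2-describe-volumes (field 4 is the availability zone).
volume_line='VOLUME vol-4001e429 1 us-east-1c available 2008-09-25T09:51:48+0000'

zone=$(echo "$volume_line" | awk '/^VOLUME/ {print $4}')
echo "launching in $zone"
# ec2-run-instances ami-c6c622af -k gettingstarted-keypair -z "$zone"
```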

$ ec2-run-instances ami-c6c622af -k gettingstarted-keypair -z us-east-1c
RESERVATION r-eb22f182 190912652296 default
INSTANCE i-5a12b233 ami-c6c622af pending gettingstarted-keypair 0 m1.small 2008-09-25T11:23:01+0000 us-east-1c
$ ec2-describe-instances
RESERVATION r-eb22f182 190912652296 default
INSTANCE i-5a12b233 ami-c6c622af ec2-72-44-53-70.compute-1.amazonaws.com domU-12-31-39-01-5C-76.compute-1.internal running gettingstarted-keypair 0 m1.small 2008-09-25T11:23:01+0000 us-east-1c
$ ec2-describe-volumes
VOLUME vol-4001e429 1 us-east-1c available 2008-09-25T09:51:48+0000
$ ec2-attach-volume vol-4001e429 -i i-5a12b233 -d /dev/sdh
ATTACHMENT vol-4001e429 i-5a12b233 /dev/sdh attaching 2008-09-25T11:25:46+0000
$ ec2-describe-volumes
VOLUME vol-4001e429 1 us-east-1c in-use 2008-09-25T09:51:48+0000
ATTACHMENT vol-4001e429 i-5a12b233 /dev/sdh attached 2008-09-25T11:25:46+0000

Now start another terminal to connect to the instance:

$ ssh -i id_rsa-gettingstarted-keypair root@ec2-72-44-53-70.compute-1.amazonaws.com
Last login: Tue Sep 9 14:48:20 2008 from 213.49.236.209
--- EC2 welcome banner cut for brevity ---
[root@domU-12-31-39-01-5C-76 ~]# mkdir /mnt/data-store
[root@domU-12-31-39-01-5C-76 ~]# mount /dev/sdh /mnt/data-store
[root@domU-12-31-39-01-5C-76 ~]# cd /mnt/data-store
[root@domU-12-31-39-01-5C-76 data-store]# ls
lost+found readme
[root@domU-12-31-39-01-5C-76 data-store]# cat readme
This is an example file to show that a file persists on an EBS volume after unmounting and detaching.
[root@domU-12-31-39-01-5C-76 ~]# umount /mnt/data-store
[root@domU-12-31-39-01-5C-76 ~]# exit

The file we created earlier was still on the volume and we could read it. This shows that volumes can be shared among instances, albeit only by one instance at a time. To clean up:

$ ec2-detach-volume vol-4001e429 -i i-5a12b233
ATTACHMENT vol-4001e429 i-5a12b233 /dev/sdh detaching 2008-09-25T11:25:46+0000
$ ec2-describe-volumes
VOLUME vol-4001e429 1 us-east-1c available 2008-09-25T09:51:48+0000
$ ec2-terminate-instances i-5a12b233
INSTANCE i-5a12b233 running shutting-down
$ ec2-describe-instances
RESERVATION r-eb22f182 190912652296 default
INSTANCE i-5a12b233 ami-c6c622af ec2-72-44-53-70.compute-1.amazonaws.com domU-12-31-39-01-5C-76.compute-1.internal shutting-down gettingstarted-keypair 0 m1.small 2008-09-25T11:23:01+0000 us-east-1c
$ ec2-describe-instances
RESERVATION r-eb22f182 190912652296 default
INSTANCE i-5a12b233 ami-c6c622af terminated gettingstarted-keypair 0 m1.small 2008-09-25T11:23:01+0000