Resizing my Ubuntu Server AWS Boot Disk

AKA: Building a Bigger GeoSandbox

(Note: This article has been updated to make it clear that expanded EBS volumes mean additional charges from AWS, something that is not clearly stated in the AWS documentation.)
If you’ve been reading my last few blog posts, you know I’ve been experimenting with various Ubuntu server configurations using Amazon Web Services (AWS) to serve web maps and spatial data. As my procedures have evolved, the micro instances I started working with have outgrown their usefulness. Lately, I’ve been testing GeoWebCache, and seeing how it works with GeoServer and the rest of the OpenGeo Suite. As anyone who’s ever delved into the map-tile world knows, tile caches start gobbling up disk space pretty quickly once you start seeding layers at larger scales. I had to figure out a way to expand my storage space if I wanted to really test GeoWebCache’s capabilities without bringing my server to its knees.
The Ubuntu AMIs I’ve been using all start out with an 8GB EBS volume as the root drive, plus an additional instance-store volume that can be used for “ephemeral” storage. “Ephemeral” means whatever is in there is lost every time the instance is stopped. Supposedly, a reboot will not clear out the ephemeral storage, but a stop followed by a start will. There are procedures you can set up to save the contents of the instance-store volume before you stop it, but I was looking for something a bit easier.
A medium instance AMI includes a 400GB instance-store volume, but it still starts out with the same 8GB root drive that a micro instance has. So, what to do? How do I expand that 8GB disk so I can save more data without losing it every time I stop the system?
A little searching led to a couple of articles that described what I wanted to do. As usual, though, I ran into a couple of glitches. So, for my future reference and in case it might be of some help to others, the following paragraphs describe my procedure.
The two articles this post was compiled/aggregated/paraphrased from are:

The standard “Out of the Box” Ubuntu AMI disk configuration

First, connect to the server using WinSCP, SecPanel, or some other means as described in one of my previous posts. Then open a terminal (or PuTTY) window, and enter:
df -h
You should see something like this:
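(The listing below is illustrative rather than an exact copy of my output; your figures, devices, and mount points will differ a bit.)
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            8.0G  3.1G  4.6G  41% /
udev                  1.9G   12K  1.9G   1% /dev
tmpfs                 751M  180K  751M   1% /run
/dev/xvdb             394G  199M  374G   1% /mnt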

The first line (/dev/xvda1) is the EBS root disk, and it shows 8.0 GB, with about 3.1 GB being used. The last line (/dev/xvdb) is the instance-store “ephemeral” space that’s wiped clean on every stop.

Note: The Ubuntu AMIs use “xvda1” and “xvdb” as device identifiers for the attached disks and storage space, while the AWS console uses “sda1” and “sdb”. In this case, “xvda1” equals “sda1”. Keep this in mind as you’re navigating back and forth between the two.

Step One: Shut It Down

First, look in the AWS console and make a note of which availability zone your server is running in; you will need it later on. The one I’m working on is in “us-east-1d”. Then, using the AWS console, stop the EC2 instance (do not terminate it, or you will wind up rebuilding your server from scratch). Then move to the “Volumes” window, choose the 8GB volume that’s attached to your server, and under the “More…” drop-down button, choose “Detach Volume”. It will take some time for the detach action to complete.
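If you’d rather script this than click through the console, a rough AWS CLI sketch of the same stop-and-detach steps looks like this (the instance and volume IDs are placeholders; swap in your own):
# Stop the instance (stop, not terminate!)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
# After it reports "stopped", detach the 8GB root volume
aws ec2 detach-volume --volume-id vol-0123456789abcdef0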

Step Two: Make A Copy

Next, with the same volume chosen, and using the same “More…” button, create a “Snapshot” of the volume. I recommend you give this (and all your volumes) descriptive names so they’re easier to keep track of.
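The command-line version of this step is a one-liner (again, the volume ID is a placeholder, and the description is just an example):
# Snapshot the detached root volume, with a descriptive note attached
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "GeoSandbox 8GB root, pre-resize"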

Step Three: Make It Bigger

Once the snapshot is done processing, it will show up in the “Snapshot” window. Again, giving the snapshot a name tag helps tremendously with organization. Choose this snapshot, and then click on the “Create Volume” button.

In the Create Volume dialog, enter the size you want the new root disk to be. Here, I’ve entered 100 GB, but I could enter anything up to the nearly 400GB of storage space I have left in my Medium Instance. Also in this dialog, choose the availability zone to create the volume in. Remember earlier in this post when I said to note the availability zone your server is running in? This is where that little piece of information comes into play. You MUST use the same availability zone for this new, larger volume as your original server volume used. Click the “Yes, Create” button, and a new larger volume will be placed in your list of volumes.
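For the command-line inclined, a sketch of the same step (the snapshot ID is a placeholder; note that the zone matches my server’s):
# Build a 100GB volume from the snapshot, in the SAME availability zone as the instance
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 100 --availability-zone us-east-1d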

Step Four: We Can Rebuild It

Next, attach the new larger EBS volume to the original Ubuntu server instance. Go back to the Volume window, choose the newly created larger volume, click the “More…” button, and choose “Attach Volume”.

In this dialog box, make sure the correct instance is showing in the “Instance” drop-down. In the “Device” text box, enter “/dev/sda1”. Note: This will not be the default when the dialog opens. You must change it!
Clicking on the “Yes, Attach” button will begin the attachment process, which will take a minute or two to complete. Once it’s done, you can spin up the server with the new root drive and test it out.
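Or, roughly, from the command line (placeholder IDs again; note the “/dev/sda1” device name):
# Attach the new, larger volume as the root device
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sda1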

Step Five: Start It Up Again

Choose the server, and under “Instance Actions”, choose “Start”. Once started, connect to the server using your preferred client. Open a terminal or PuTTY window, and once again enter:
df -h
You should now see something like this:
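(As before, an illustrative listing rather than an exact copy:)
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1             99G  3.1G   91G   4% /
udev                  1.9G   12K  1.9G   1% /dev
tmpfs                 751M  180K  751M   1% /run
/dev/xvdb             394G  199M  374G   1% /mnt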

Notice the differences from the first df command. Now the root disk (/dev/xvda1) will show a size of 99GB, or whatever size you might have made your new volume.
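One caveat: the Ubuntu AMIs I’ve been using took care of growing the filesystem to fill the new volume on boot, but not every system does (see James’s comment below about CentOS). If df still reports the old 8GB size and your root filesystem is ext3/ext4, you can grow it by hand:
sudo resize2fs /dev/xvda1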

More Room To Play

Now I can adjust my root disk size to suit the task at hand. I can store more spatial data in my GeoServer data directory, and seed my map tiles down to ever larger scales. Knowing how to shuffle and adjust these volumes opens up a slew of other possibilities, too. I can imagine setting up a separate volume just to hold spatial data and/or tiles, and using that to copy or move that data back and forth between test servers.
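As a rough sketch of that separate-volume idea, assuming the extra volume appears in the instance as /dev/xvdf and the data will live under /data (both names are just examples):
# Put a filesystem on the new, empty volume
sudo mkfs.ext4 /dev/xvdf
# Create a mount point and mount the volume there
sudo mkdir -p /data
sudo mount /dev/xvdf /data
# Then point GeoServer (or symlink its data directory) at /data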
Be mindful, though: this extra space is not free. The larger EBS volume does not replace the space on the ephemeral instance-store volume; it is an addition to it. There will be additional charges to your AWS account for the larger EBS volume based on its size, a fact that is not made clear in the AWS documentation. So, I recommend you increase the size of the EBS root disk as much as you need, but no more.
Oh the possibilities…

Reader Comments

  1. Helpful article, thanks!
    Not sure if it is different because I’m running a CentOS instance; however, I had to run the following to resize the filesystem for ‘df -h’ to correctly display the size:
    resize2fs /dev/sda1
    Cheers,
    James

  2. Good reference article. However, you realize that you didn’t “expand that 8GB disk into that untapped 400+GB empty storage space”? Rather, you expanded the root volume of your EBS-backed instance? That has nothing to do with the instance storage provided by AWS.
    The instance storage (located at /dev/xvdb) is ephemeral (read: temporary). That volume gets cleared out when your instance stops for any reason. It’s reasonable for disposable data, like temp and swap files, etc.
    Increasing your EBS volume can be done to a seemingly infinite size. Keep in mind that you *are* paying for that storage, unlike the freebie instance volume…

    1. ThePhantom –
      Yes, I do get that. The 400+GB is not really empty, it is usable, but as you say, ephemeral. The purpose of expanding my root volume above 8GB is to make more of that space permanent, so any data there will stick around for the next time I spin it up. Since I’m now working with a Reserved Medium Instance, I can grow my root volume up to the 400GB allotted space without incurring additional charges.
      A quick check of the AWS docs says you can increase the size of an EBS volume up to 1TB!

      1. Hi Don,
        I don’t believe that the EBS volume expansion is without charge. As you can see right from your ‘df’ screenshots, the increase of your EBS volume does not take anything away from your allocated ephemeral volume.
        AFAIK, EBS-backed AMIs (barring the free tier for new users) are billed at $0.10 per GB per month from byte 1. You may want to check your account activity page (under “Amazon EC2 EBS”) for validation. It would be nice if the generous instance storage allotment *could* be used for EBS, but doing so would open up potential abuse scenarios — such as people creating a seldom-run instance to make a poor-man’s free S3 bucket, etc…

      2. Yes, after a lot of searching, I believe you’re right. The great thing about AWS: It offers a lot of options and fine-grained control over what you use. The bad thing about AWS: It’s complicated, and the pricing for all of these options is not always clear.
        I have a few AMIs and Snapshots saved in my account, so I wasn’t sure if that’s what was being billed, or if my “expanded” root drive was the culprit. The charges are small, though, so I’m not complaining. The extra convenience of the larger-than-8GB root drive is well worth it. I believe the trick is to make the “expanded” root drive only as big as you really need it to be. EBS is cheaper than S3, as long as you take that into account.

      3. All of what you’ve said is wildly correct. The product offerings are certainly complex, which makes deciphering your actions and subsequent financial effect a game of trial and error…
        One thing to keep in mind. I recently had a client that needed quite a bit of additional persistent storage, and wanted their root volume expanded. I talked them (and myself) out of it after explaining that expanded volumes can’t reasonably be shrunk if you’ve made an error in your estimates (or your needs change in the future). Instead, I ended up creating a large, empty volume, fdisk/mkfs’ing it, attaching the additional volume to the running instance, mounting and symlinking accordingly. In the future, when they come to their senses and realize that they don’t need the additional space, disposing of the additional EBS volume will be trivial.
        I know in reality we’re talking a potential expense of, perhaps, tens of cents — but a reasonably sized root volume may be more prudent in the long run…

Comments are closed.