My latest personal project (still in progress) is to get a true cloud-based map server up and running, serving maps from a free-tier Amazon Web Services (AWS) Ubuntu server. This has not been easy. I’ve looked at AWS a number of times over the last year, and a few things have made me shy away from trying it out. Mainly, it’s incredibly hard to decipher all the jargon on the AWS website. And it’s not your everyday jargon. It’s jargon that’s unique to the AWS website. It’s jargon². Amazon has been sending me multiple emails over the last few weeks warning me that my free-tier account status is about to expire. That, plus a few days free of pressing work, spurred me on to dive in and give it a try. I knew this was going to be a complicated process, so I wanted to document it for future reference. That’s what led to this post.
As the title says, this is part 1 of what will most likely be a 2-part post. (Update: it wound up being a 3-part series.) At this point I have the server up and running. I’m able to download, edit, and upload files to the directories I need to. I have an Apache server running on the instance, and the OpenGeo Suite installed. However, I am having some problems with the OpenGeo Suite. As soon as I get them ironed out, I’ll either update this post, or add a part 2.
So, here we go…
(If you’re already familiar with the AWS management console and AMIs, you can scroll down to the “How do I connect to this thing…” section)
Wading through the AWS setup
The first step in the process is to sign up for an AWS account which allows you to run a free Amazon EC2 Micro Instance for one year. These free-tier instances are limited to Linux operating systems. You can see the details and sign up here: http://aws.amazon.com/free/.
The next thing I did was to sign into the AWS Management Console and take a look around.
Gobbledygook. I needed some help translating this foreign language into something closer to English.
There are a lot of websites out there that try to explain what’s what in AWS, and how to use it. One such example is “Absolute First Step Tutorial for Amazon Web Services”, and what follows here is largely based on what I found there. The easiest way to get started is by using an “AMI” (Amazon Machine Image), which is a pre-built operating system image that can be copied and used as a new instance. A little more searching ensued, and I found a set of Ubuntu server AMIs at Alestic – http://alestic.com/. The tabs along the top let me choose the region to run the new server from (for me, us-east-1). I picked an Ubuntu release (Ubuntu 11.10 Oneiric), made sure it was an “EBS boot” AMI, and chose a 64-bit server.
This brought up the Amazon Management Console – Request Instances Wizard. The first screen held the details about the AMI I was about to use.
(You can enlarge any of the following screen shots by clicking on them)
- I made sure the instance type was set to Micro (t1.micro, 613 MB) and clicked continue.
- I kept all the defaults on the Advanced Options page and clicked continue.
- I added a value to the “Name” tag to make it easier to keep track of the new instance and clicked continue.
- I chose “Create a new Key Pair” using the same name for the key pair as I used for the instance.
- I clicked “Create & Download your Key Pair”, and saved it in an easy to get to place.
There are some differences in where you should save this key depending on what operating system you’re using, which I’ll explain later in this post.
On the next screen, I chose “Create a new Security Group”, again naming it the same as I did the instance. Under Inbound Rules, I chose the ports to open:
- 22 (SSH)
- 80 (HTTP)
- 443 (HTTPS)
- 8080 (HTTP alternate, used by Tomcat/GeoServer)
…clicking “Add Rule” to add each one, one at a time. If you’re following along, it should look something like this:
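As an aside, the same four inbound rules can be expressed with Amazon’s command-line tools. This is only a sketch: the security-group name “my-server” is a placeholder, and the commands are written to a file for review rather than executed, since actually running them requires configured AWS credentials.

```shell
# Sketch only: generate (don't run) AWS CLI commands matching the four rules.
# "my-server" is a placeholder security-group name.
rules=/tmp/sg-rules.sh
: > "$rules"
for port in 22 80 443 8080; do
  echo "aws ec2 authorize-security-group-ingress --group-name my-server --protocol tcp --port $port --cidr 0.0.0.0/0" >> "$rules"
done
cat "$rules"   # review before running any of these for real
```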
The last screen showed a summary of all of the settings, and a button to finally launch the instance.
Once launched, it shows up in the AWS Management Console, under the EC2 tab.
The good news: After all that, I finally have a real cloud-based server running Ubuntu on AWS.
The bad news: That was the easy part.
Now the question is:
How do I connect to this thing, and get some real work done?
The default settings on AWS lock things down pretty tight. And that’s how it should be for any server, really. The thing is, this is more of a test bed than a production server. I want to be able to easily navigate around, experiment with settings, and see how things work. Having some kind of a GUI really helps me out when I want to learn where things are, and how they work together. Long story short – I settled on setting up an FTP client to view the directory structure and files on the AWS server, and used the command line to change settings and permissions and to do some file editing (yes, I’m talking vi). It’s a bit harder to find info on how to set things up on a Linux box, so I’ll start there. Windows will follow.
For Linux (Ubuntu/Mint) users
If you’re an experienced, or even a novice, Linux user, you’re familiar with Secure Shell (SSH), or have at least heard the term before. Most websites explaining how to access a new Ubuntu AWS instance from a Linux box suggest using SSH: put the downloaded key file in the ~/.ssh folder (or the /etc/ssh folder), then change its permissions so it’s not publicly viewable by running the following command in a terminal:
chmod 400 ~/.ssh/<yourkeyfilename>.pem
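If you want to sanity-check that the permission change took, here’s a throwaway-file sketch (“demo-key.pem” is a placeholder; substitute your real key file name):

```shell
# Throwaway sketch: create a dummy key file, lock it down, and verify.
# "demo-key.pem" is a placeholder for your real key file name.
mkdir -p ~/.ssh
touch ~/.ssh/demo-key.pem
chmod 400 ~/.ssh/demo-key.pem   # read-only for you, no access for anyone else
ls -l ~/.ssh/demo-key.pem       # permissions column should read -r--------
```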
If you’re going to be doing all your work through the command line using only SSH, that is the way to go. However, I wanted to connect to my new cloud server through FTP so I can upload, download, and otherwise manage files with some kind of GUI. After many hours of searching and testing and beating my head against the wall, I settled on using SecPanel and FileZilla.
The major hurdle I had to overcome in order to use FTP on a Linux (Ubuntu/Mint) box to connect to my AWS server is AWS’s use of key pairs instead of passwords. There were no FTP clients I could find that allow using key pairs for authentication. Yes, I vaguely remember managing to set up an SSH tunnel at one point, but that seemed overly complicated to me, and not something I want to go through every time I have to update a webpage. To get around this, I used two pieces of software: SecPanel and FileZilla. If you’re familiar with FTP at all, you should be familiar with FileZilla, so I won’t explain how to use it here, except to reiterate: it does not allow using key pairs to authenticate user sign-in to a server. To get around that, SecPanel comes to the rescue. The problem with SecPanel? There is absolutely no documentation on the website, nor any help file in the software. Needless to say, much hacking ensued.
To get right to the point, here’s what I did to get things working:
- I copied my key file out of the hidden folder (~/.ssh) and into a new “/home/<user>/ssh” folder, keeping the same “400” file permissions.
- In SecPanel, I entered the following values in the configuration screen:
- Entered a Profile Name and a Title in the appropriate boxes.
- Copied the Public DNS string from the AWS management console
(which looks something like “ec2-50-17-117-199.compute-1.amazonaws.com”)
and pasted that into the “Host:” box.
- Entered User: “ubuntu” and Port: “22”
- Entered the complete path to my key file into the “Identity:” box
- Everything else I kept at the default settings.
- Clicked on the “Save” button
Here’s what it looks like:
Going back to the Main screen in SecPanel, there should be a profile listed that links to the profile just set up. Highlighting that profile, and clicking on the SFTP button then starts up FileZilla, and connects to the AWS server, allowing FTP transfers… as long as the folders and files being managed have access permission by the user entered in SecPanel.
So, how do we allow the “ubuntu” user to copy, edit, upload, and download all the files and folders necessary for maintaining the server?
- Open a terminal window and SSH into the Ubuntu server
(ssh -i <PathToKeyFile>.pem ubuntu@<UniqueAWSinstance>.compute-1.amazonaws.com).
- Get to know the chown, chgrp, and chmod commands.
- Use them in Terminal.
- Make them your friends.
You can also perform all the other server maintenance tasks using this terminal window, e.g. sudo apt-get update, sudo apt-get upgrade, sudo apt-get autoclean, and installing whatever other software you want to use on the new server.
Really, it’s not that hard once you dive into it. And, the fact that you can now SEE the files you’re modifying, SEE the paths that lead to them, and SEE what the permissions are before and after changing them, makes things a whole lot easier. For example, the following command:
sudo chgrp ubuntu /var/www
will change the /var/www “Group” to “ubuntu”, which will then allow the ubuntu user (you) to upload files to that directory using FTP.
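One caveat: changing the group only helps if the group also has write permission on the directory, so a chmod g+w may be needed as well. Here’s the same recipe rehearsed on a scratch directory (no sudo needed here, since we own it; /tmp/www-demo and the current user’s group stand in for /var/www and “ubuntu”):

```shell
# Scratch-directory rehearsal of the /var/www recipe.
demo=/tmp/www-demo
mkdir -p "$demo"
chgrp "$(id -gn)" "$demo"   # stand-in for: sudo chgrp ubuntu /var/www
chmod g+w "$demo"           # make sure the group can actually write
stat -c '%A %G' "$demo"     # group triad should now include "w"
```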
For Windows Users
Windows access was much easier to set up than it was in Ubuntu/Mint. For this I used PuTTY and WinSCP. As in Linux, I copied the key file to a new SSH folder under my user name. A couple of differences here: there are no access permissions to worry about in Windows; however, the key file has to be converted to a different format before WinSCP and PuTTY can use it. Both the WinSCP and PuTTY downloads include the PuTTYgen Key Generator, which can convert the <keyname>.pem file to the appropriate <keyname>.ppk format. In PuTTYgen, click on “Load”, set the file type to “*” to see all files, and make your way to the <keyname>.pem file. Once it’s loaded in PuTTYgen, click the “Save private key” button, and save the file wherever you want. I saved mine to my new SSH folder (without adding a passphrase).
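For what it’s worth, the same conversion can be scripted with the command-line puttygen that ships in the Linux putty-tools package. Here’s a sketch using a throwaway RSA key generated on the spot; all the file names are placeholders:

```shell
# Sketch: make a throwaway RSA key, then convert it to .ppk the same way the
# PuTTYgen GUI does. File names are placeholders; needs puttygen installed.
rm -f /tmp/demo-key /tmp/demo-key.pub /tmp/demo-key.ppk
ssh-keygen -t rsa -b 2048 -m PEM -f /tmp/demo-key -N '' -q
if command -v puttygen >/dev/null 2>&1; then
  puttygen /tmp/demo-key -O private -o /tmp/demo-key.ppk
  echo "wrote /tmp/demo-key.ppk"
else
  echo "puttygen not found (on Debian/Ubuntu: sudo apt-get install putty-tools)"
fi
```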
Next it’s just a matter of opening WinSCP, setting the “Host name:” to the AWS Public DNS string, “Port number:” to 22, “User name:” to “ubuntu”, “Private key file:” to the path to the key file, and “File protocol:” to SFTP.
Clicking the “Save…” button will save these settings so they don’t have to be entered every time you want to log in. The “Login” button will open an FTP like window where files and folders can be managed.
And there’s an “Open session in PuTTY” button on the toolbar that will open a PuTTY terminal where commands can be entered just like in an Ubuntu terminal window.
File permissions can be set by entering chown, chgrp, and chmod commands in PuTTY just like using SSH in Ubuntu.
Next up, getting my OpenGeo Suite running
As I said at the beginning of this post, I have the OpenGeo Suite installed, and have been able to serve maps from it for short periods of time. However, I still need to iron out some wrinkles. It’s been suggested that my problems might be due to the lack of swap space on AWS micro instances. It might not even be possible to run the entire suite on a micro instance; I don’t know. If that’s the case, I might have to strip it down to just running GeoServer. But that will have to wait for another day.
Update – 12/21/2011
Link to part 2
Update – 12/22/2011
Link to part 3