Serving Maps – in the Cloud – for Free (part 1)

My latest personal project (still in progress) is to get a true cloud-based map server up and running, serving maps from a free-tier Amazon Web Services (AWS) Ubuntu server. This has not been easy. I’ve looked at AWS a number of times over the last year, and a few things have made me shy away from trying it out. Mainly, it’s incredibly hard to decipher all the jargon on the AWS website. And it’s not your everyday jargon. It’s jargon that’s unique to the AWS website. It’s jargon². Amazon has been sending me multiple emails over the last few weeks warning me that my free-tier account status is about to expire. That, and a few days free of pressing work, spurred me to dive in and give it a try. I knew this was going to be a complicated process, so I wanted to document it for future reference. That’s what led to this post.

As the title says, this is part 1 of what will most likely be a two-part post. (Update: it wound up being a three-part series.) At this point I have the server up and running. I’m able to download, edit, and upload files to the directories I need to. I have an Apache server running on the instance, and the OpenGeo Suite installed. However, I am having some problems with the OpenGeo Suite. As soon as I get them ironed out, I’ll either update this post or add a part 2.

So, here we go…
(If you’re already familiar with the AWS management console and AMIs, you can scroll down to the “How do I connect to this thing…” section)

Wading through the AWS setup

The first step in the process is to sign up for an AWS account which allows you to run a free Amazon EC2 Micro Instance for one year. These free-tier instances are limited to Linux operating systems. You can see the details and sign up here: http://aws.amazon.com/free/.

The next thing I did was to sign into the AWS Management Console and take a look around.
https://console.aws.amazon.com/ec2/home
Gobbledygook. I needed some help translating this foreign language into something closer to English.

There are a lot of websites out there that try to explain what’s what in AWS, and how to use it. One such example is “Absolute First Step Tutorial for Amazon Web Services”, and what follows here is largely based on what I found there. The easiest way to get started is by using an AMI, a pre-built operating system image that can be copied and used as a new instance. A little more searching ensued, and I found a set of Ubuntu server AMIs at Alestic – http://alestic.com/. The tabs along the top let me choose the region to run the new server from (for me, us-east-1). I picked an Ubuntu release (Ubuntu 11.10 Oneiric), made sure it was an “EBS boot” AMI, and chose a 64-bit server.

This brought up the Amazon Management Console – Request Instances Wizard. The first screen held the details about the AMI I was about to use.
(You can enlarge any of the following screen shots by clicking on them)

  • I made sure the instance type was set to Micro (t1.micro, 613 MB) and clicked continue.
  • I kept all the defaults on the Advanced Options page and clicked continue.
  • I added a value to the “Name” tag to make it easier to keep track of the new instance and clicked continue.
  • I chose “Create a new Key Pair” using the same name for the key pair as I used for the instance.
  • I clicked “Create & Download your Key Pair”, and saved it in an easy-to-get-to place.

There are some differences in where you should save this key depending on what operating system you’re using, which I’ll explain later in this post.

On the next screen, I chose “Create a new Security Group”, again naming it the same as I did the instance. Under Inbound Rules, I chose the ports to open:

  • 22 (SSH)
  • 80 (HTTP)
  • 443 (HTTPS)
  • 8080 (HTTP)

…clicking “Add Rule” to add each one, one at a time. If you’re following along, it should look something like this:

The last screen showed a summary of all of the settings, and a button to finally launch the instance.

Once launched, it shows up in the AWS Management Console, under the EC2 tab.
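
As an aside, the same provisioning steps can be scripted. I did everything through the console, so treat the following as an untested Python sketch using the boto library: the AMI ID, key pair name, security group name, and save path are all placeholders, and boto needs your AWS access keys configured before it will connect.

import boto.ec2

# Connect to the same region chosen in the console (credentials come from
# environment variables or a ~/.boto config file)
conn = boto.ec2.connect_to_region('us-east-1')

# Create the key pair and save the .pem file locally
key_pair = conn.create_key_pair('map-server')
key_pair.save('/home/me/ssh')

# Create a security group and open the same ports as in the wizard
group = conn.create_security_group('map-server', 'Ports for the map server')
for port in (22, 80, 443, 8080):
    group.authorize(ip_protocol='tcp', from_port=port, to_port=port,
                    cidr_ip='0.0.0.0/0')

# Launch a free-tier micro instance from the chosen Ubuntu EBS AMI
reservation = conn.run_instances('ami-xxxxxxxx', key_name='map-server',
                                 instance_type='t1.micro',
                                 security_groups=['map-server'])
print reservation.instances[0].id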

The good news: After all that, I finally have a real cloud-based server running Ubuntu on AWS.
The bad news: That was the easy part.
Now the question is:

How do I connect to this thing, and get some real work done?

The default settings on AWS lock things down pretty tight. And that’s how it should be for any server, really. The thing is, this is more of a test bed than a production server. I want to be able to easily navigate around, experiment with settings, and see how things work. Having some kind of a GUI really helps me out when I want to learn where things are and how they work together. Long story short: I settled on setting up an FTP client to view the directory structure and files on the AWS server, and used the command line to change settings and permissions, and to do some editing of files (yes, I’m talking about vi). It’s a bit harder to find info on how to set things up on a Linux box, so I’ll start there. Windows will follow.

For Linux (Ubuntu/Mint) users

If you’re an experienced, or even a novice, Linux user, you’re familiar with Secure Shell (SSH), or have at least heard the term before. Most websites explaining how to access a new Ubuntu AWS instance from a Linux box suggest using SSH: put the downloaded key file in the ~/.ssh folder (or the /etc/ssh folder), then change its permissions so it’s not publicly viewable by running the following command in a terminal:

sudo chmod 400 ~/.ssh/<yourkeyfilename>.pem

If you’re going to be doing all your work through the command line using only SSH, that is the way to go. However, I wanted to connect to my new cloud server through FTP so I can upload, download, and otherwise manage files with some kind of GUI. After many hours of searching and testing and beating my head against the wall, I settled on using SecPanel and FileZilla.

The major hurdle I had to overcome in order to use FTP on a Linux (Ubuntu/Mint) box to connect to my AWS server is AWS’s use of key pairs instead of passwords. I couldn’t find any FTP clients that allow using key pairs for authentication. Yes, I vaguely remember managing to set up an SSH tunnel at one point, but that seemed overly complicated to me, and not something I want to go through every time I have to update a webpage. To get around this, I used two pieces of software: SecPanel and FileZilla. If you’re familiar with FTP at all, you should be familiar with FileZilla, so I won’t explain how to use it here, except to reiterate: on its own, it does not let you use a key pair to authenticate with a server. That’s where SecPanel comes to the rescue. The problem with SecPanel? There is absolutely no documentation on its website, nor any help file in the software. Needless to say, much hacking ensued.

To get right to the point, here’s what I did to get things working:

  • I copied my key file out of the hidden folder (~/.ssh) and into a new “/home/<user>/ssh” folder, keeping the same “400” file permissions.
  • In SecPanel, I entered the following values in the configuration screen:
      • Entered a Profile Name and a Title in the appropriate boxes.
      • Copied the Public DNS string from the AWS management console
        (which looks something like “ec2-50-17-117-199.compute-1.amazonaws.com”)
        and pasted that into the “Host:” box.
      • Entered User: “ubuntu” and Port: “22”.
      • Entered the complete path to my key file into the “Identity:” box.
      • Kept everything else at the default settings.
      • Clicked the “Save” button.

Here’s what it looks like:

Going back to the Main screen in SecPanel, there should be a profile listed that links to the profile just set up. Highlighting that profile, and clicking on the SFTP button then starts up FileZilla, and connects to the AWS server, allowing FTP transfers… as long as the folders and files being managed have access permission by the user entered in SecPanel.

So, how do we allow the “ubuntu” user to copy, edit, upload, and download all the files and folders necessary for maintaining the server?

  • Open a terminal window and SSH into the Ubuntu server
    (sudo ssh -i <PathToKeyFile>.pem ubuntu@<UniqueAWSinstance>.compute-1.amazonaws.com).
  • Get to know the chown, chgrp, and chmod commands.
  • Use them in Terminal.
  • Make them your friend.

You can also perform all the other server maintenance tasks using this terminal window, e.g., apt-get update, apt-get upgrade, apt-get autoclean, and installing whatever other software you want to use on the new server.

Really, it’s not that hard once you dive into it. And, the fact that you can now SEE the files you’re modifying, SEE the paths that lead to them, and SEE what the permissions are before and after changing them, makes things a whole lot easier. For example, the following command:
sudo chgrp ubuntu /var/www
will change the /var/www “Group” to “ubuntu”, which will then allow the ubuntu user (you) to upload files to that directory using FTP.
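
As an aside, the key-pair authentication that trips up most FTP clients is easy enough to script if you ever need to automate a transfer. This isn’t part of my SecPanel/FileZilla setup, just a rough Python sketch using the paramiko library; the host name and file paths are placeholders.

import paramiko

# Load the .pem key pair downloaded from AWS
key = paramiko.RSAKey.from_private_key_file('/home/me/ssh/mykey.pem')

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com',
               username='ubuntu', pkey=key)

# Open an SFTP session over the SSH connection
sftp = client.open_sftp()
print sftp.listdir('/var/www')                   # list the web root
sftp.put('index.html', '/var/www/index.html')    # upload a file
sftp.close()
client.close()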

For Windows Users

Windows access was much easier to set up than it was in Ubuntu/Mint. For this I used PuTTY and WinSCP. As in Linux, I copied the key file to a new SSH folder under my user name. A couple of differences here: there are no file permissions to worry about in Windows; however, the key file has to be converted to a different format before WinSCP and PuTTY can use it. Both the WinSCP and PuTTY downloads include the PuTTYgen key generator, which can convert the <keyname>.pem file to the required <keyname>.ppk format. In PuTTYgen, click “Load”, set the file type to “*” to see all files, and make your way to the <keyname>.pem file. Once it’s loaded in PuTTYgen, click the “Save private key” button, and save the file wherever you want. I saved mine to my new SSH folder (without adding a passphrase).

Next it’s just a matter of opening WinSCP, setting the “Host name:” to the AWS Public DNS string, “Port number:” to 22, “User name:” to “ubuntu”, “Private key file:” to the path to the key file, and “File protocol:” to SFTP.

Clicking the “Save…” button will save these settings so they don’t have to be entered every time you want to log in. The “Login” button will open an FTP-like window where files and folders can be managed.

And there’s an “Open session in PuTTY” button on the toolbar that will open a PuTTY terminal where commands can be entered just like in an Ubuntu terminal window.

File permissions can be set by entering chown, chgrp, and chmod commands in PuTTY just like using SSH in Ubuntu.

Next up, getting my OpenGeo Suite running

As I said at the beginning of this post, I have the OpenGeo Suite installed, and have been able to serve maps from it for short periods of time. However, I still need to iron out some wrinkles. It’s been suggested that my problems might be due to the lack of swap space on AWS micro instances. It might not even be possible to run the entire suite on a micro instance, I don’t know. If that’s the case, I might have to strip it down to just running GeoServer. But that will have to wait for another day.

Update – 12/21/2011

Link to part 2

Update – 12/22/2011

Link to part 3

Not Just Another MacBook

An account of my recent adventures in repairing and setting up a tri-boot MacBook

Step one – Obtain the MacBook

Dead Mac

I do not recommend obtaining a MacBook the way I did (or wish it upon anyone), but I do believe in making lemonade from lemons whenever possible. Last week I got a call from one of my daughters explaining that her sister, who attends the same college as her, had spilled “something” on her computer, rendering it inoperable. Luckily my girls attend a college that’s not far from home, so the next day was spent bouncing between the Apple store and Best Buy analyzing our options. I’ll leave out the gory details here, but the end result was: I chipped in the cost of a replacement PC, the daughters split the cost of an upgrade to a new MacBook Pro, and I wound up with the old, wet, borked MacBook to use however I wanted.

Here are the two culprits in the dead-Mac fiasco. Daughter on the right is the former dead-Mac owner. Daughter on the left is the Mac-killer.

The Two Culprits

Step Two – Rebuild the MacBook

The tech at the Apple store said there was a significant amount of “liquid” inside the Mac when he opened it up. The estimate for repairs was $750, with no guarantees that it would work properly in the end. He offered us some info for an off-site computer repair person who might be able to fix it for less. But his recommendation was to sell it for parts. With this in mind, I decided to take a chance, open it up, and see what I could do with it. I had nothing to lose.

Upon removing the base plate, it was obvious that the “liquid” spilled was something a bit stronger than water. My guess was some combination of beer and margarita mix, but daughter 1 insisted it was nothing stronger than wine. I read in a few places that it’s possible to actually wash the logic board in various ways (using water, alcohol, and a few other concoctions) but that seemed a bit extreme to me. Since the “liquid” involved did not contain a lot of extraneous solids and/or sugary substances, I decided a simple drying out was the best route to take.

Mac disassembled

A quick search led me to the fantastic iFixit site, which walked me through the entire disassembly process. I found the Mac a lot easier to take apart without ruining anything than the few PC laptops I’ve worked on. I disassembled the entire bottom half of the Mac, wiped off the “water” droplets, hit everything with compressed air, heated it up with a hair dryer, and put everything back together again. This whole process took about 5 hours. No one was more surprised than I was when I plugged the thing in and it started right up. That felt good.

Step Three – Improve the MacBook

Both my daughters use Macs, so I’ve had a chance to work on them a few times. I’ve never been a fan of the Mac operating system. There’s something about the user interface, with all those expanding and bouncing icons, that just turns me off. That, and a few other things I won’t go into here, leave me preferring Windows over OS X. Still, I saw this as a chance to give the Mac another try. One of the best computing decisions I ever made was to turn my work PC into a Win7/Ubuntu dual-boot machine. I’ve tried virtual machines in the past, but I think if you’re serious about trying out and learning a new OS, it deserves to be installed on good hardware and run natively on its own partition. Since doing that, I’ve found I enjoy learning and using Ubuntu much more.

So, this time around, I decided to up the ante and go for three. The big three: Mac OS X, Windows 7, and Ubuntu 11.04.


My process follows very closely the steps outlined in the Ubuntu Community Documentation MacBookTripleBoot page. This page was created using a 2008 MacBook, so I’ve pointed out a few specific steps below where I had problems, or deviated from that process.

Bootcamp

The first thing to do is get the Mac Boot Camp drivers onto a disc or USB drive. I highly recommend getting the Boot Camp drivers assembled before doing anything else. I did not use Boot Camp to set up the partitions, but the drivers are needed after installing Windows in order to use all of the Mac hardware from within Windows. A few forum posts mention that the Boot Camp drivers are available from within the Boot Camp Assistant app itself. For example:

Solution: Control-click on the Boot Camp Assistant program, choose Show Package Contents, and then navigate into Contents » Resources. Inside there you’ll find DiskImage.dmg. Just burn that disk image, and you’ll have your Windows drivers CD.

However, I was unsuccessful in finding these on my system. You can also download the drivers by running the Boot Camp Assistant. However, you must do this BEFORE partitioning your hard drive, or it won’t work. Something I learned the hard way. A third option I did not get to try is simply using your original Mac OS X disc. Of course, the two culprits in this Mac fiasco could not find said discs, so I was SOL there. I wound up waiting for daughter 2 to come home with her new MacBook Pro, and I downloaded the Boot Camp drivers from that, using the burn-directly-to-disc option.

One last note: when you get to the point in the Boot Camp Assistant where it asks you to partition the hard drive, cancel out. You want to use the Mac Disk Utility app to do that instead.

rEFIt

The next step is to download, install, and run rEFIt. This is a pretty straightforward process, but I will point out one thing: I suggest you run the rEFIt partitioning tool once before actually partitioning the hard drive. This syncs up your partition tables, and it fixed some errors I was getting before I did that.


Disk Partitions

Use the Mac OS X Disk Utility to create partitions for the three operating systems. As outlined in the MacBookTripleBoot page, I used the command line, and entered:

diskutil resizeVolume disk0s2 100G "JHFS+" 4-Linux 60G "MS-DOS FAT32" 4-Windows 0b

…in order to divide my drive up into a 100GB OS X partition, a 60GB Ubuntu partition, and a 90GB Windows partition. (The 0b for the Windows partition assigns whatever’s left to that partition.) Then use rEFIt to sync the new partition tables again.

Install the OSs

Install Windows on the last partition, as outlined in the MacBookTripleBoot page. I can confirm that using an XP disk and a Win7 upgrade disc does work just fine. No need for a single full install disk.

Install Ubuntu on the middle (third) partition as outlined in the MacBookTripleBoot page. The installation screens appear to be different in the 11.04 edition of Ubuntu. Just make sure you use the correct partition for the root mount point, and install the GRUB boot loader to that partition’s boot sector, not the MBR.

Clean Up

Once you have Windows installed, use the Bootcamp setup program to install the necessary Windows drivers so you can use all the Mac hardware. I had to do this in order to get my wifi, bluetooth, ethernet, and a few other things working.

Ubuntu was able to recognize most of the Mac hardware, but I had to use an ethernet cable to hook up to my router in order to access the internet. Ubuntu then automatically identified the drivers needed for the Mac’s wireless and graphics adapters.

That’s It

Everything seems to be working smooth as silk. If it weren’t for the white case and MacBook logo staring me in the face, I wouldn’t know I was using a Mac. I made one change to the rEFIt configuration file, refit.conf, setting the timeout to 0 in order to disable automatic booting. This loads the rEFIt boot menu on every start, letting me choose whichever OS I feel like using at the time. The only glitches I’ve noticed: Ubuntu sometimes hangs during a restart, requiring a push of the power button, and Windows occasionally reboots straight back into Windows instead of to the rEFIt boot menu. That’s easily fixed by holding down the Option key during the reboot.


ArcGIS vs QGIS Clipping Contest Rematch

Round 2 in which ArcGIS throws in the towel.

(Please note: This post is about clipping in ArcGIS version 10.0. The functionality has been improved, and problems mentioned have been fixed in later versions of ArcGIS)

This is a follow-up to my previous post where I matched up ArcGIS and QGIS in a clipping contest. One of the commenters on that post expressed some concern that there might be “…something else going on…” with my test, and I agreed. It was unfathomable to me that an ESRI product could be outdone by such a wide margin. Knowing that ArcGIS often has problems processing geometries that are not squeaky clean, I began my investigation there. I ran the original contour layer through ArcToolbox’s Check Geometry routine, and sure enough, came up with 5 “null” geometries. I deleted those bad boys, ran the layer through ArcToolbox’s “Repair Geometry” routine, and then through ET GeoWizards’ “Fix Geometry” routine for good measure (these may or may not be identical tools, I do not know). No new problems were found with either tool.
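
For reference, the geometry check, the repair, and the clip itself can all be run from Python as well. Here’s a rough arcpy sketch; the file paths are just examples.

import arcpy

contours = r"C:\GISdata\contours.shp"
clip_area = r"C:\GISdata\study_area_buffer.shp"

# Write any problem geometries to a table, then repair
# (null geometries are deleted by default)
arcpy.CheckGeometry_management(contours, r"C:\GISdata\geom_problems.dbf")
arcpy.RepairGeometry_management(contours, "DELETE_NULL")

# The clip that ran for over an hour and a half on this dataset
arcpy.Clip_analysis(contours, clip_area, r"C:\GISdata\contours_clip.shp")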

I wanted to give ArcGIS a fighting chance in this next round, but also wanted to level the playing field a bit. I did a restart of my Dell m2400 (see the specs in the previous post), exited out of all my desktop widgets, and turned off every background process I could find. I also turned off Background Processing in the Geoprocessing Options box. The only thing running on this machine was ArcGIS 10, and the only layers loaded were the contour lines and the feature I wanted to clip them to. I ran the “Arc Toolbox > Analysis Tools > Extract > Clip” tool and watched as it took 1 hour, 35 minutes, and 42 seconds for ArcGIS to go through the clipping process before ending with the message:
ERROR 999999: Error executing function
Invalid Topology [Topoengine error.]
Failed to execute (Clip)

ArcClipFail

Now granted, this is much better than the 12 hours it took the first time I ran it, but still, no cigar in the end.

Giving QGIS a chance to show its stuff, I used the Windows version of QGIS 1.5.0 to run a clip on the same files, on the same machine. QGIS took all of 6 minutes and 27 seconds to produce a new, clean contour layer.

QGIS - Contours v2

I ran this through the same geometry checks as the original contour layer, and came up with no problems.

My goal here is not to jump all over ESRI and do a dance in the end zone. I would really like to figure out what’s going on. As I’ve said before, I’ve had problems in the past with ArcGIS producing bad geometries with its Clipping process (and other tools, too). But the fact that another product can handle the same set of circumstances with such ease baffles me.

I’ve put about as much time as I can into this test, and taken it as far as I can. If you would like to give it a go, feel free to download the files I used through this link:
www.donmeltz.com/_files/ContourClipTest.zip

(Note: This is an 878MB file, and it was not completely uploaded as of this posting. Check back later if the link does not work for you right now)

If any of you have better results than I did, or find any faults with my files or process, please let me know and I WILL make a note of them here. Thank you.

ArcGIS–QGIS Faceoff

Is QGIS a viable alternative to ArcGIS?

(Please note: This post is about clipping in ArcGIS version 10.0. The functionality has been improved, and problems mentioned have been fixed in later versions of ArcGIS)

I’ve never enjoyed working with contours. They seem to bog down my system more than any other layer type I work with. However, most of my clients are so used to looking at USGS topo maps that they expect to see contours on at least one of the maps I produce for them. I recently worked on a project covering a five-town area in the Catskill Mountain region. The large area covered and the ruggedness of the topography were proving exceptionally troublesome when processing the contours. So much so that I decided to look at other options to get the work done. I’ve used a variety of GIS tools over the years, but do most of my paying work exclusively in ArcGIS. It’s what I’m most familiar with, it does (nearly) everything I need it to do, and it therefore provides my clients with the most efficient use of my time. However, in this situation that was not the case.

The one geoprocessing operation that frustrates me most often (in ArcGIS) is the Clip operation. It seems to take more time than most other geoprocessing tools, and often results in bad geometries. This happens so often, I usually resort to doing a union, and then deleting the unwanted areas of the Union results. For some reason this works much faster, and with more reliable results than doing a Clip.

Since what I wanted to do here was a clip on a contour layer, I was in for double trouble. Yes, I could have clipped the original DEM I wanted to produce the contours from first, then generated contours from the clipped DEM. But that wouldn’t have led to anything to write about. So, here’s a short comparison of how ArcGIS handled the process versus QGIS:

The hardware and software used:

ArcGIS 10, SP2

  • Windows 7, 64 bit
  • Dell Precision m2400 laptop
  • Intel Core 2 Duo CPU, 3.06GHz
  • 8 GB RAM

QGIS 1.4.0

  • Ubuntu 11.04
  • Dell Inspiron 600m laptop
  • Intel Pentium M CPU, 1.60 GHz
  • 1GB RAM

A fair fight?

I started out with ArcGIS, and loaded up my 20’ contour lines and a 1 mile buffer of the study area to which I wanted to clip them. I began the clip operation 3 times. The first two times I had to cancel it because it was taking too long, and I needed to get some real work done. Curious to see how long it would really take, I let the process run overnight. The progress bar kept chugging away “Clip…Clip…Clip…Clip…”, and the Geoprocessing results window kept updating me with its progress, so I assumed it would complete eventually. In the morning, I looked in the Geoprocessing results window and found it had run for over 12 hours before throwing an error, never completing the clip operation. The error message said something about a bad geometry in the output. Really, no surprise there.

ArcGIS - ClipContour1

(Yes, those are lines in the picture above, not polygons. They’re very densely packed)

QGIS gets to play

The next day I decided to give QGIS a shot at it. I copied the two shapefiles over to my 6-year-old lappy (the contour.shp file was 1.3GB), fired up QGIS, and ran the Clip operation on the two files.

QGIS - Contours Screenshot-Clip

This time it took all of 17 minutes and 21 seconds to get a new contour layer.

Clip Results - QGIS

So, who’s the winner here? Was it a fair contest?

My take-away is, ESRI really needs to do some work on its Clip geoprocessing tool. As I said earlier, it is slow, and results in bad geometries more often than any of their other geoprocessing tools I use.

Addendum June 11, 2011: See the follow-up post here:
http://bit.ly/k0Fm7D

Find duplicate field values in ArcGIS using Python

As ESRI makes its move away from VBScript and toward Python, I’m also slowly updating my stash of code snippets along the way. One of those little pieces of code I use quite often is one that identifies duplicate values in a field of a layer’s attribute table. I find this particularly helpful when I’m cleaning up tax parcel data, looking for duplicate parcel ID numbers or SBL strings. Since I’ve been working a lot with parcel data lately, I figured it was time to move this code over to Python, too. So, here it is in step-by-step fashion…

1 – Add a new “Short Integer” type field to your attribute table (I usually call mine "Dup").

2 – Open the field calculator for the new field

3 – Choose "Python" as the Parser

4 – Check "Show Codeblock"

5 – Insert the following code into the "Pre-Logic Script Code:" text box making sure you preserve the indents:


uniqueList = []

def isDuplicate(inValue):
  # Return 1 if this value was already seen in an earlier row;
  # otherwise remember it and return 0.
  if inValue in uniqueList:
    return 1
  else:
    uniqueList.append(inValue)
    return 0


6 – In the lower text box, insert the following text, replacing "!InsertFieldToCheckHere!" with whichever field you want to check for duplicates:


isDuplicate( !InsertFieldToCheckHere! )


This will populate your new field with 0’s and 1’s, with the 1’s flagging the records whose values are duplicates of an earlier record.
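
The same logic can also be run outside the Field Calculator as a small standalone script using an update cursor, which is handy if the flagging needs to be repeated. This is just a sketch; the table path and field names are example values.

import arcpy

table = r"C:\GISdata\TaxParcels.shp"
checkField = "SBL"   # field to check for duplicate values
flagField = "Dup"    # short integer field to hold the 0/1 flag

uniqueList = []
rows = arcpy.UpdateCursor(table)
for row in rows:
    value = row.getValue(checkField)
    if value in uniqueList:
        row.setValue(flagField, 1)   # duplicate of an earlier record
    else:
        uniqueList.append(value)
        row.setValue(flagField, 0)   # first time this value appears
    rows.updateRow(row)
del row, rows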

 

FindDups-Python

Generating Vertical Buffers

One of the more popular analyses I’m asked to perform for my clients is a viewshed analysis. Beyond simply identifying what areas of a town are visible from roads or other public viewpoints, I’m often asked to help identify, and sometimes rank, areas that are most worthy of protection. One way to help a town identify and evaluate these high priority vistas, is to identify prominent ridgelines and the areas around them that are susceptible to inappropriate development.

One way to mitigate the impact of development on highly visible ridgelines is to make sure new buildings do not break the horizon line – the point where the ridge visibly meets the sky. Since most local zoning codes restrict building heights to 35-40 feet (in my client towns, anyway), producing a vertical buffer of 40 feet helps to identify the areas susceptible to such an intrusion.

The steps to produce such a vertical buffer are not overly complex, but I have not found them readily available online. So, for your benefit (and my easy reference) I outline the process here.
(Note: I outline these steps specifically using ArcGIS 10. I’m sure it’s possible using other tools, but this is what I use most often in my daily work)

Begin with:

A DEM of your study area

This DEM must be an integer raster for one of the following steps, so start off by using the raster calculator.
(Arc Toolbox > Spatial Analyst Tools > Map Algebra > Raster Calculator)
Use the INT( “DEM” ) expression, where “DEM” is the elevation raster that you want to convert from a floating point to an integer raster.

1-DEM

Prominent ridgelines

Ridgelines are often generated using watershed boundaries, with extensive field checking to identify the more prominent features. For these calculations, the ridgelines must be in a raster format that includes the elevation of each raster cell. To convert a line-type ridgeline to a raster, first buffer the ridgeline to produce a polygon feature (I typically use a 20 foot buffer) and then use Extract by Mask.
(Arc Toolbox > Spatial Analyst Tools > Extraction > Extract by Mask)
Use the Integer DEM produced in the first step as the input raster, and the polygon buffer of the ridgelines for the feature mask data.

2-Ridgelines

Generate the Euclidean Allocation of the Ridgeline Raster

This is where the need for an integer raster comes into play.
(Arc Toolbox > Spatial Analyst Tools > Distance > Euclidean Allocation)
Simply use the Ridgeline raster generated in the previous step as the input raster, and choose the elevation value as the Source field. This process will generate a new raster that covers the entire study area, with each cell holding the elevation value of the ridgeline raster that is nearest to it.

3-EucoAllo

Generate the Vertical Buffer

Use the Raster Calculator to generate a vertical buffer of the ridgeline.
(Arc Toolbox > Spatial Analyst Tools > Map Algebra > Raster Calculator)
The expression will look something like this: “IntegerDEM” >= (“RidgelineEucoAllo” - 12), where “IntegerDEM” is the DEM produced in step 1, “RidgelineEucoAllo” is the Euclidean Allocation raster produced in step 3, and “12” is the height of the buffer you want to produce. In this case, the raster measurements are in meters, so using a buffer value of 12 results in about a 39-foot vertical buffer. This allows me to identify areas where a new building built to the maximum allowable height has the potential to break the horizon line.

4-VerticalBuffer

Once the buffer is generated, there is usually some cleanup required. As you can see from these results, buffers are generated in areas not connected to the ridgeline areas we want to focus on, so I’ll delete these before moving on to the next steps in this town’s viewshed analysis.
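
For easy reference, here is the whole workflow rolled up into one arcpy sketch. The dataset names are examples, and the 20-foot ridgeline buffer and 12-meter vertical buffer height are the same values used in the steps above; treat it as an outline rather than a polished script.

import arcpy
from arcpy.sa import Int, ExtractByMask, EucAllocation

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\GISdata\Viewshed"   # example workspace
arcpy.env.overwriteOutput = True

# Step 1: convert the floating-point DEM to an integer raster
int_dem = Int("dem")
int_dem.save("int_dem")

# Step 2: buffer the ridgelines (20 feet) and extract the DEM beneath them
arcpy.Buffer_analysis("ridgelines.shp", "ridge_buff.shp", "20 Feet")
ridge_ras = ExtractByMask(int_dem, "ridge_buff.shp")
ridge_ras.save("ridge_ras")

# Step 3: Euclidean Allocation spreads each ridgeline cell's elevation
# across the whole study area
euc_allo = EucAllocation(ridge_ras, source_field="Value")
euc_allo.save("euc_allo")

# Step 4: cells within 12 meters (about 39 feet) of the nearest
# ridgeline elevation form the vertical buffer
vert_buf = int_dem >= (euc_allo - 12)
vert_buf.save("vert_buf")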

I hope this is helpful, and as always, I welcome any comments and feedback.

LOGAS–Tweaked

Over the past week I’ve probably generated enough material for a half-dozen blog posts. However, since I have to get some billable hours in and invoices sent out this week, I’ll just post a short update for now on one aspect of project LOGAS. In my prior post I described the steps I took to get an Ubuntu based Apache/GeoServer up and running. Since then I’ve tweaked the process, and with a lot of help, been able to get everything in working order.

One of the roadblocks I faced was how to get Apache/Tomcat and GeoServer working together. Getting them working individually was easy; together was not. As I progressed with my education in JavaScript, I ran into some cross-domain issues due to the fact that my website and GeoServer were hosted on different machines. Even when I put them both on the same machine, the problems persisted because the web server and the map server use different ports (Apache on 80, GeoServer on 8080). The solution turned out to be setting up a proxy. Sounds easy. It was not.

After banging my head on the problem to the point of near concussion, a few online cohorts came to my rescue; in particular, @spara. She was generous enough to write up a script that installs Apache and the OpenGeo Suite, and configures Tomcat and the rest in a way that makes them all work together. She’s posted the code on github for anyone to use. Note: My previous post installed a full LAMP system. This one is limited to just Apache and the OpenGeo Suite, with some tweaks to the Tomcat servlet configuration.

I’ve tested her script on a few clean Ubuntu 10.10 setups, but I haven’t been able to get it to work as a single copy/paste yet. For me, it’s still a multi-step process. But, it does work and it is about as painless as it gets. What works consistently for me, is performing the OpenGeo Suite install first, and then running the Apache install/Tomcat configuration separately.

OpenGeo Suite

So far, the only process that works consistently for me is to first sudo to root, then set up the repository and install the OpenGeo Suite. Start by typing the following into a terminal window:


sudo su


… hit enter, and enter your password.

Then copy and paste the following into terminal:


wget -qO- http://apt.opengeo.org/gpg.key | apt-key add -

echo "deb http://apt.opengeo.org/ubuntu lucid main" >> /etc/apt/sources.list

apt-get update

apt-cache search opengeo

apt-get install opengeo-suite


After OpenGeo finishes installing, it will ask for a proxy URL (just leave it blank and hit enter unless you know what that means), a user name, and a password. Set these up as you wish. When all that’s done, move on to the next step.

Apache/Tomcat

Here’s the script for the Apache install/Tomcat setup, just copy and paste this into a terminal window:



sudo apt-get install -y apache2
sudo ln -s /etc/apache2/mods-available/proxy.conf /etc/apache2/mods-enabled/proxy.conf
sudo ln -s /etc/apache2/mods-available/proxy.load /etc/apache2/mods-enabled/proxy.load
sudo ln -s /etc/apache2/mods-available/proxy_http.load /etc/apache2/mods-enabled/proxy_http.load
sudo chmod 666 /etc/apache2/sites-available/default
sudo sed -i '$d'  /etc/apache2/sites-available/default
sudo sh -c "echo ' ' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyRequests Off' >> /etc/apache2/sites-available/default"
sudo sh -c "echo '# Remember to turn the next line off if you are proxying to a NameVirtualHost' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPreserveHost On' >> /etc/apache2/sites-available/default"
sudo sh -c "echo ' ' >> /etc/apache2/sites-available/default"
sudo sh -c "echo '<Proxy *>' >> /etc/apache2/sites-available/default"
sudo sh -c "echo '    Order deny,allow' >> /etc/apache2/sites-available/default"
sudo sh -c "echo '    Allow from all' >> /etc/apache2/sites-available/default"
sudo sh -c "echo '</Proxy>' >> /etc/apache2/sites-available/default"
sudo sh -c "echo ' ' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPass /geoserver http://localhost:8080/geoserver' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPassReverse /geoserver http://localhost:8080/geoserver' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPass /geoexplorer http://localhost:8080/geoexplorer' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPassReverse /geoexplorer http://localhost:8080/geoexplorer' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPass /geoeditor http://localhost:8080/geoeditor' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPassReverse /geoeditor http://localhost:8080/geoeditor' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPass /geowebcache http://localhost:8080/geowebcache' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPassReverse /geowebcache http://localhost:8080/geowebcache' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPass /dashboard http://localhost:8080/dashboard' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPassReverse /dashboard http://localhost:8080/dashboard' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPass /recipes http://localhost:8080/recipes' >> /etc/apache2/sites-available/default"
sudo sh -c "echo 'ProxyPassReverse /recipes http://localhost:8080/recipes' >> /etc/apache2/sites-available/default"
sudo sh -c "echo ' ' >> /etc/apache2/sites-available/default"
sudo sh -c "echo '</VirtualHost>' >> /etc/apache2/sites-available/default"
sudo chmod 644 /etc/apache2/sites-available/default


This installs Apache, and configures it and Tomcat so it appears as though your website and all of the OpenGeo Suite apps are using the same port. The last thing to do is test everything as I outlined in my previous post.

Test the Apache2 default website:

on the host computer – http://localhost – It works!
and via a remote computer (substitute your server’s domain here) – http://24.105.210.45/ – It works!

Test the OpenGeo Suite:

Open the dashboard – http://localhost:8080/dashboard/
Launch GeoExplorer
Save the default map and exit GeoExplorer

Test GeoServer by loading a GeoExplorer map:

Open the default map on a remote computer (again, substitute your server’s domain here) – http://24.105.210.45:8080/geoexplorer/viewer#maps/1

If everything is set up correctly, you should see something like this:

OpenGeoExplorerView
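
If you’d rather not click through each app by hand, a quick scripted check of the proxied paths works too. This is just a Python sketch to run on the server itself; all it confirms is that each path answers with an HTTP 200.

import urllib2

# Check the Apache site and the proxied OpenGeo apps through port 80
for path in ('/', '/geoserver/web/', '/geoexplorer/', '/dashboard/'):
    url = 'http://localhost' + path
    try:
        print url, urllib2.urlopen(url).getcode()
    except urllib2.HTTPError, err:
        print url, 'failed with HTTP', err.code
    except urllib2.URLError, err:
        print url, 'failed:', err.reason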

There are probably a few more tweaks that can be made to this process, but I don’t know enough about how Ubuntu works yet to make those changes. I do know that this process works most of the time. A restart between processes helps sometimes, and trying them in a different order occasionally works, too. If anyone can come up with a better, more streamlined approach, please feel free to let me know about it. For now, I’ll just have to live with a little complexity.

And in honor of the day, a little more cow bell, too –

LOGAS–Linux OpenGeo Apache Server

Moving the GeoSandbox to Full OpenSource

I’ve run into a couple of roadblocks recently regarding my experiments with my GeoSandbox. I want to be able to play with some of the JavaScript libraries available, so I can continue my education on those fronts. Some cross-domain issues arose when I started putting JavaScript in my web pages, as my website is on a different server than my GeoServer. So, that means learning some more about setting up web servers and how proxy servers work. Also, my old Dell 600m is starting to feel the effects of the increased demands placed on it as the GeoSandbox becomes more popular, and more complex. That means an eventual move onto a cloud server. Since cloud servers running an open source OS are much less expensive than those running Windows, I felt the need to begin transferring my entire setup over to open source tools.

For now, I’m just going to do a short outline of what it took to get a basic Ubuntu/Apache/OpenGeo Suite operating on my home server. Once I get a better handle on how Ubuntu and Apache work I’ll add some posts about that.

Starting with a clean install of Ubuntu 10.10

Download and install Ubuntu 10.10 Desktop

I downloaded Ubuntu 10.10 Desktop Edition from their website:

http://www.ubuntu.com/desktop/get-ubuntu/download

Yes, I could have used the Server edition, but, coming from a Windows world, it’s easier for me to work with some kind of GUI than to go 100% command line. After downloading the .iso, I burned it to a CD, installed it on a clean hard drive, and then installed all updates.

Turn Ubuntu Desktop into an Apache Server

Install LAMP with a single command

In case you didn’t know, LAMP stands for “Linux Apache MySQL PHP”. This may or may not be more than I need for my purposes, but a one-line install looked like the easiest way to go, so:

http://nick.onetwenty.org/index.php/2010/12/02/installing-lamp-server-on-ubuntu-10-10-desktop/

Then I opened a terminal window and entered:

sudo apt-get install lamp-server^

Add in the ability to serve maps

Install OpenGeo Suite

This was the most difficult part for me to figure out. Instructions can be found here:

http://projects.opengeo.org/suite/wiki/LinuxPackages

Being somewhat new to Ubuntu, I missed what the first step under “Repository Setup” meant (sudo to root). Once I figured that out, things went smoothly. The steps are:

In terminal, sudo to root:

sudo su

Import the OpenGeo gpg key:

wget -qO- http://apt.opengeo.org/gpg.key | apt-key add -

Add the OpenGeo repository:

echo "deb http://apt.opengeo.org/ubuntu lucid main" >> /etc/apt/sources.list

Update the package list:

apt-get update

Search for OpenGeo packages:

apt-cache search opengeo

Install the OpenGeo Suite package:

apt-get install opengeo-suite

Restart Ubuntu

See if everything works

The last step was to see if everything was working properly, which it was.

Test the Apache2 default website:

on the host computer – http://localhost – It works!
and via a remote computer (substitute your server’s domain here) – http://24.105.210.45/ – It works!

Test the OpenGeo Suite:

Open the dashboard – http://localhost:8080/dashboard/
Launch GeoExplorer
Save the default map and exit GeoExplorer

Test GeoServer by loading a GeoExplorer map:

Open the default map on a remote computer (again, substitute your server’s domain here) – http://24.105.210.45:8080/geoexplorer/viewer#maps/1

In my case, everything worked as expected. After this, I continued trying to set up a custom website to serve my GeoServer maps, but ran into a few problems. I’ll be switching the GeoSandbox back and forth between the Windows and Apache server as I continue my climb up the learning curve, so don’t be surprised if some of the links appear to be broken on occasion.

GeoSandbox 2.0

This Week’s Project

I decided it was time to take the GeoSandbox to the next level. As usual, I had multiple goals in mind for this project. Primarily, I wanted to learn how to use and write JavaScript. It’s been a long time since I’ve tried to write any kind of meaningful code, but I figured if I wanted to keep up with the big kids, it was time to take the plunge. I also wanted to get my map viewer into a state where I could make it useful enough to display my GIS projects, and help promote my business.

My Experience with Writing Code

My first exposure to computer programming was back in 1980, when I took a computer language class as an undergrad. I remember trying to write some code for a fictional car dealership inventory system, and saving it on an 8” floppy disc. I believe it was in BASIC, but I really can’t remember. I never did get it to work. Since then I’ve played around with various versions of Visual Basic, but never got very far past the “Hello World” types of programs. I took a GIS class as a grad student back in 2001 where I used MapObjects to make a functional GIS program. That was fun, and I did get that to work, but I haven’t written any real code since then. Yes, I do read, use, and modify scripts in ArcGIS. That keeps me in tune with the basics of programming, but I don’t think it’s the same as writing real code.

The Goal

I wanted to make the following three enhancements to the GeoSandbox:

  • Change the buttons in the toolbar
  • Make the map window dynamically change size with changes to the browser window size
  • Have the map window change its contents through links in order to showcase different projects

The first two items were fairly easy. I was able to read through the GeoExplorer.js script, and just delete the buttons I didn’t want showing on the toolbar. I also made some changes to the About box, and figured out how to generate a proper Google API key that allows the embedded 3D viewer to show. Dynamically resizing the map window was just a matter of doing a search, copying some code, and pasting it in the right place on the web page. Yes, I had to modify the <iframe> ID in the code, but that was simple enough.
I had it in my head that I wanted to make four maps available on the page, and to allow users to switch between these maps using a set of links along the top of the map. Seemed simple enough.

No, it wasn’t.

Setting up the webpage and the GeoServer maps was easy. Here’s what the page looks like:

GeoSandbox2-0

And you can access the page using this link: www.donmeltz.com/maps

Changing iFrames

What I found out along the way was, banging your head against an <iframe> can really hurt.

I started out by putting a single <iframe> on the page. Since the src attribute of said <iframe> specifies the URL of the document to show in the <iframe>, I figured I’d just change the src attribute to point to the various GeoServer maps I wanted to show. That didn’t seem to work for me. What I found was, there are many different ways to change the <iframe> src. I must have tried them all, and I got most of them to work. The problem was, they didn’t work in every browser I tried. Here’s a list of the browsers I used in my tests:

  • Firefox 3.1.15
  • Internet Explorer 9 beta
  • Chrome 9.0.597.107
  • Safari 5.0.3
  • Opera 10.10

Learning About iFrames

First, I’d like to mention the Dynamic Web Coding website has the most comprehensive and understandable information on scripting <iframes> that I’ve found: http://www.dyn-web.com/tutorials/iframes/
What I haven’t figured out yet is – Why do all of the examples used on that website (and many others I visited) work for them, but not for me? It just boggles my mind. In a nutshell, here are the various things I tried.

I started out by setting up some bare-bones test sites where I stripped out all of the css styling, and deleted all but two links in order to make it easy to modify and test the code, and see what I was doing. I have links to all of these test pages below each of the following paragraphs in case you want to check them out. You can right-click on each page and view the source if you want. Click on the upper part of the page, NOT on the <iframe> map.
(Please note, I have greatly simplified the syntax in these examples so they fit on one line. If you want to see the actual code, please visit the Dynamic Web Coding site, or right click on my test maps and view the source.)

I began by trying to set the <iframe> target directly in the link like this:

<a href="http://www.Maps.html#maps/2" target="mapiframe">link</a>

This worked in IE and Firefox, but only if I hit the browser refresh button after I clicked the link. Yes, I also went through many iterations of adding a variety of refresh functions to the code, but that only worked in one or two instances. And in some cases it made things even worse. Here’s a link to my test page for this trial: http://www.donmeltz.com/mapstest1.html

Next up was an attempt to set the iframe source (src attribute) using some JavaScript code, as in:

document.getElementById('MapFrame').src="http://Maps.html#maps/2"

I tried using both a function and embedding the code in the link using the onClick event. Results were similar to the previous attempt, and again, inserting a refresh function did not help. There are other variations of this, such as using window.getElementById, etc., which I also tried, but to no avail. Here’s the link to my test page for this trial:
http://www.donmeltz.com/mapstest2.html

I then moved on to trying to assign the contents of the iframe using code like this:

contentWindow.location.assign("http://Maps.html#/2")

which worked in IE (again, with a browser refresh) but not in anything else. The trial page: http://www.donmeltz.com/mapstest3.html

What came closest to getting the task done was working with the <div> that holds the <iframe> instead of manipulating the <iframe> directly, as in:

<div id="MapContainer">
MapContainer.innerHTML = '<iframe id="MapFrame" src="http://Maps.html#maps/2"></iframe>';

This one worked for me in every browser except Firefox: http://www.donmeltz.com/mapstest4.html

In the end, what I wound up doing was placing four different iframes on the page. I then set the style.display property of each iframe to “none” (hide) or “block” (show), based on which link is clicked on the web page. The code looks something like this:

document.getElementById("MapFrame").style.display="block";

Yes, “block” means “show”. Intuitive, I know. Here’s a link to my test page that shows how this one works: http://www.donmeltz.com/mapstest_adinfinitum.html
This works in every browser I’ve tried, but it does slow down the load times significantly. All four maps load when the page loads. It does make switching between maps faster once everything is loaded, though.

I Got It

So, where did all of this hacking get me? I got what I wanted: a workable map viewer that shows a few samples of my work, and an education in JavaScript. Even though I spent many hours, days even, working on this, I still think it was worthwhile. I’ve learned a lot, even though it doesn’t seem like I got very far. I can read JavaScript code, and understand it a lot better now than I did when I started. I’d still like to figure out if I’m doing something wrong. It seems like these options should work, as it looks like they work on other people’s pages. Yes, I’m still banging my head against the <iframe> once in a while :-)

For now though, I think I’ll just kick back with a bottle of homebrew and a little Jimmy Cliff.

Mapping My Twitter Followers – No Code Needed

The Challenge

I’ve been pondering what new data sets to add to the GeoSandbox for a while now. It’s been a couple of weeks since the last addition, and I felt things were getting a little stale. Once again, a single tweet set my mind in motion. This time it was @briantimoney who said “2011 is the year tweet maps replace coffeeshop-finding as the go-to demo scenario in geo.” An RT by @billdollins cemented the statement in my head, and there was no going back after that. Obviously if the GeoSandbox is to remain the cutting-edge tool that it is, I’d have to move quickly.

The Process

All I wanted to do was to make a push-pin style map of the people who follow me on Twitter. Simple as that. I did not want to get into learning APIs and writing any kind of code, and I didn’t want to get into auto-updating, or anything like that. KISS. A little searching led to the following workflow:

MyTweeple – Export to CSV – Import to Fusion Table – Visualize on map –
Export to KML – Convert to shapefile – Put it into GeoServer.

More extensive searching might reveal a more efficient process, but this is what I came up with. From the time I first read the tweet to the point the data was in the GeoSandbox took about one hour.

My Tweeple to CSV

Once I signed into the MyTweeple site, I chose the Tools tab, and then the Export All (csv) option. This allowed me to save a csv spreadsheet containing up to 5,000 of my followers (I only have ~560). There are a lot of extraneous columns that can be done away with. All I really wanted was the name and address columns. Some sorting and editing of the address column can help out, too. Not everyone includes an address in their bio, and some have addresses that obviously won’t geocode properly. These were deleted.
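
If hand-editing the spreadsheet gets tedious, the same cleanup can be scripted. The column names in this Python sketch are guesses at what the MyTweeple export uses, so check the header row of your own file and adjust to match.

import csv

# Trim the MyTweeple export down to name and location, skipping followers
# with no location. Column names are assumptions; match them to the export.
with open('mytweeple_export.csv', 'rb') as infile, \
     open('followers_clean.csv', 'wb') as outfile:
    reader = csv.DictReader(infile)
    writer = csv.writer(outfile)
    writer.writerow(['name', 'location'])
    for row in reader:
        location = (row.get('location') or '').strip()
        if location:
            writer.writerow([row.get('name', ''), location])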

MyTweeple

Fusion Table to KML

I’ve heard a lot of talk on the inter-tubes lately about Google Fusion Tables, so I’ve been looking for something to give this new tool a test drive. This was my chance, and it did not disappoint. All I had to do was go to the Fusion Table page, select New Table>Import Table, and browse to the MyTweeple.csv file I saved earlier.

FusionTable

I double checked a few addresses, and then clicked on the Visualize>Map link. The geocode routine ran for a few minutes, and produced a map showing all of the people who follow me on Twitter. It was like magic.

KMLmap

Clicking the Export to KML link allowed me to save the KML file to my computer, where I then converted it to a shapefile (I used the KML to Layer tool in ArcToolbox, but I’m sure there are many other ways to do this). From there it was just a matter of adding the shapefile to my GeoServer as I’ve outlined in previous posts.
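
The KML-to-layer conversion can also be scripted with arcpy if you need to repeat it. A rough sketch follows; the paths are examples, KML to Layer writes its output into a file geodatabase, and the name and location of the point feature class it creates depend on the KML contents, so the script just walks the geodatabase looking for points. The output shapefile folder must already exist.

import arcpy
import os

# Convert the Fusion Tables KML export into a file geodatabase + layer file
arcpy.KMLToLayer_conversion(r"C:\GISdata\followers.kml", r"C:\GISdata",
                            "followers")

# Export whatever point feature classes were created to shapefiles
gdb = r"C:\GISdata\followers.gdb"
arcpy.env.workspace = gdb
for dataset in (arcpy.ListDatasets() or [""]):
    for fc in arcpy.ListFeatureClasses("", "Point", dataset):
        full_path = os.path.join(gdb, dataset, fc)
        arcpy.FeatureClassToShapefile_conversion(full_path,
                                                 r"C:\GISdata\shapefiles")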

The Results

GeoSandboxTwitterers

So there you have it. My somewhat convoluted way of showing my Twitter followers in the GeoSandbox.