3D Prince George Buildings Map


I’ve been working through some of the demos for Cesium JS (a JavaScript library for creating 3D maps), and came across this one demonstrating how to display the buildings of Nanaimo.

I was curious to see whether the data exists for Prince George, BC, and it turns out it does (PG Open Data Catalogue, buildings download)! I’m not 100% sure whether this dataset was created manually or through automation, but I’m impressed with the level of detail.

The method I used to make this map* is actually shockingly simple. I downloaded the shapefile, saved it as GeoJSON in QGIS, and uploaded it to my server. Then the JavaScript on the webpage incorporates Cesium, loads the geometry data from the GeoJSON, adds heights from the buildings’ attribute list, and sets the camera view. That’s how easily you can make a pretty neat-looking 3D map of Prince George.
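For the curious, here’s a minimal sketch of that pipeline using Cesium’s GeoJsonDataSource. The file name and the HEIGHT attribute are placeholders of mine (check the actual field names in the PG dataset), and the camera coordinates are approximate:

var viewer = new Cesium.Viewer('cesiumContainer');

// Load the building footprints exported from QGIS
// ('pg_buildings.geojson' and 'HEIGHT' are hypothetical names)
Cesium.GeoJsonDataSource.load('pg_buildings.geojson').then(function (dataSource) {
  viewer.dataSources.add(dataSource);
  var entities = dataSource.entities.values;
  for (var i = 0; i < entities.length; i++) {
    if (entities[i].polygon) {
      // Extrude each footprint up to the height stored in its attributes
      entities[i].polygon.extrudedHeight = entities[i].properties.HEIGHT;
    }
  }
  // Aim the camera at Prince George (approximate coordinates)
  viewer.camera.flyTo({
    destination: Cesium.Cartesian3.fromDegrees(-122.75, 53.92, 4000.0)
  });
});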

*note: there are over 30,000 buildings in this dataset. It takes about 30 seconds to load for me, and it may not work on mobile devices.

what3words Poetry


I’ve been seeing lots of buzz about what3words (w3w), an intriguing addressing system that assigns a combination of three words to every 3m x 3m square on Earth. For example, you can find the center of LSU Tiger Stadium at upward.searcher.superstar. w3w’s API converts between 3-word addresses and lat/long coordinates, so you can actually locate an address (one of the, I assume necessary, annoyances of w3w addresses is that the words are random, so a known address tells you nothing about the addresses adjacent to it).
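For a taste of what the conversion looks like, here’s a rough sketch. The endpoint and response shape follow w3w’s v3 REST API as I understand it, and the key is a placeholder, so treat the details as assumptions:

var apiKey = 'YOUR_W3W_KEY'; // placeholder

// Convert a 3-word address to lat/long via the w3w REST API
function wordsToCoords(words) {
  return fetch('https://api.what3words.com/v3/convert-to-coordinates' +
               '?words=' + encodeURIComponent(words) + '&key=' + apiKey)
    .then(function (response) { return response.json(); })
    .then(function (json) { return json.coordinates; }); // { lat: ..., lng: ... }
}

wordsToCoords('upward.searcher.superstar').then(function (coords) {
  console.log(coords.lat, coords.lng); // the center of Tiger Stadium
});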

As a completely useless example of how you can use the w3w API, I present w3w Poems. Following the rules found here for creating a 3-word poem for Ms. Guillory’s 5th grade classroom at Gardner Pilot Academy in Allston, Massachusetts, the user is presented with a random w3w address, which becomes the first line of the poem. The last two words become the first two words of the next line, and the user enters a third word. If it completes a valid w3w address, the user is whisked away to that location, the poem line is completed, and a new partial line is presented. This repeats until the user decides the poem is complete.
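The core mechanic is just validating the user’s candidate line against the API. A hypothetical sketch, reusing the wordsToCoords() helper from above (I’m assuming the API simply returns no coordinates for an invalid address):

// Build the next candidate line from the previous line plus the user's word,
// then check whether it names a real w3w square
function tryNextLine(prevLine, userWord) {
  var words = prevLine.split('.');
  var candidate = words[1] + '.' + words[2] + '.' + userWord;
  return wordsToCoords(candidate).then(function (coords) {
    return coords ? candidate : null; // null => ask the user for another word
  });
}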


FYI: There’s a whole blog about w3w-inspired poetry.

SEC Cross-divisional Games Diagram

While watching some uninspiring NCAA football games today (I’m looking at you, OSU/Michigan and Georgia/Vanderbilt), I had a chance to wonder about this year’s SEC cross-divisional games. Each year, each SEC team plays two games against teams in the opposite division (i.e. each SEC West team plays two SEC East opponents).

Here is a diagram showing who’s playing who in 2015.

Cesium the weekend!


Spent today looking into Cesium, the WebGL JavaScript library for creating 3D globes as well as 2.5D and 2D maps.

So far, so good. The tutorials are great, the documentation is relatively well populated, and installation was painless. The major quirk so far has been a funny interaction between Windows Chrome and DirectX, which I won’t pretend to understand, but which apparently leaves outlineWidth unsupported, meaning all polygon outlines are permanently set to 1px. Not a huge deal, but I’m guessing I’ll run into more technical stumbling blocks as I delve further in.

Here’s a quick example, which draws a 10-degree grid over the Earth and places an LSU beacon above Tiger Stadium.
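Since the live example sits on its own page, here’s a rough sketch of the same idea (Tiger Stadium’s coordinates and the beacon styling are approximations of mine):

var viewer = new Cesium.Viewer('cesiumContainer');

// Meridians every 10 degrees (geodesics through the poles trace the meridian)
for (var lon = -180; lon < 180; lon += 10) {
  viewer.entities.add({
    polyline: {
      positions: Cesium.Cartesian3.fromDegreesArray([lon, -85, lon, 0, lon, 85]),
      width: 1
    }
  });
}

// Parallels every 10 degrees, sampled along each latitude so the lines hug the circle
for (var lat = -80; lat <= 80; lat += 10) {
  var degs = [];
  for (var lon2 = -180; lon2 <= 180; lon2 += 10) {
    degs.push(lon2, lat);
  }
  viewer.entities.add({
    polyline: { positions: Cesium.Cartesian3.fromDegreesArray(degs), width: 1 }
  });
}

// The "LSU beacon": a vertical line and label above Tiger Stadium (approx. coords)
viewer.entities.add({
  position: Cesium.Cartesian3.fromDegrees(-91.187, 30.412, 300000.0),
  label: { text: 'LSU' },
  polyline: {
    positions: Cesium.Cartesian3.fromDegreesArrayHeights([
      -91.187, 30.412, 0.0,
      -91.187, 30.412, 300000.0
    ]),
    width: 3,
    material: Cesium.Color.PURPLE
  }
});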

Note: you must use a WebGL-enabled browser, which means Chrome or a newer version of Firefox/IE/etc. You can check whether your browser is compatible here.

GeoNet Prime Answering Times


ESRI runs a biannual contest to encourage participation on their help forum, GeoNet. Prizes go to the top ten point-getters. I found out during the last contest that it takes a great deal of time and persistence to keep up with the top of the pack (I ended up 5th). There are several tips for optimizing your effort (some of which I outlined, somewhat sarcastically, here).

One such tip is: get on GeoNet during times when there are the greatest number of fresh, unanswered questions. If you’ve spent any time on GeoNet, you have likely noticed that questions are generally asked during North American working hours, when people are struggling to get through their work-related GIS tasks. I wanted to put some better numbers to this idea, so I set about gathering the data myself.

All the information is there: each post has the date/time it was asked written right there in the posting. You could click on each post and record that date/time into an Excel spreadsheet and be done with it, but that would be awfully tedious. This is where screen scraping comes in. Screen scraping is the direct equivalent of having your computer control your web browser: click here, find this part of the HTML code, read it, and do something with it. Luckily, your computer doesn’t care if it has to spend all day doing the same thing over and over and over…

I chose to use Python, but you can do this in other languages, as well. Useful libraries to download are Requests and lxml. I use Requests for making the, you guessed it, “requests”, which are similar to typing a URL in the address bar of your browser. I use lxml for parsing and traversing the returned HTML code, which you can look at on any web page by pressing Ctrl+u (at least, in Chrome).

from lxml import html
import requests, time, csv

with open('C:/junk/geonet.csv', 'w', newline='') as csvfile: # create and/or open a CSV file (Python 3: newline='' prevents blank rows on Windows)
  csvWriter = csv.writer(csvfile, quoting=csv.QUOTE_MINIMAL) # writer (default comma delimiter)
  dateList = []
  baseUrl = 'https://geonet.esri.com/content' # store the URL prefix

  for i in range(10): # loop through the first 10 'Content' pages
    page = requests.get(baseUrl + '?start=' + str(i*20)) # navigate to page
    tree = html.fromstring(page.text) # parse the HTML
    linkList = tree.iterlinks() # find all the links on the page
    threads = []

    for link in linkList: # loop through the links
      if link[2].startswith('/thread/'): # find those starting with "thread"
        threads.append(link[2]) # add the link to the list

    threadBase = 'https://geonet.esri.com' # store the URL prefix
    for thread in threads: # loop through the threads listed on the 'Content' page
      page = requests.get(threadBase + thread) # navigate to the correct thread page
      tree = html.fromstring(page.text) # parse the HTML
      dates = tree.find_class('j-post-author') # find the post-author block, which holds the date
      date = dates[0].text_content().strip() # extract the date/time text
      dateList.append(date) # write to list
      csvWriter.writerow([date]) # write to CSV (writerow expects a sequence, not a bare string)
      time.sleep(5) # wait 5s to give the server a chance to handle someone else's requests

Anyhow, the graph at the start of this post shows pretty much what I expected: people on the East Coast get confused, then people on the West Coast get confused, then everyone goes home.