# Mapbox Visual Center Algorithm: arcpy (attempt #2)

Yesterday, I attempted to replicate the Mapbox Visual Center algorithm using ArcGIS arcpy (Python). Today, I actually looked at the source code (code, license) and attempted to fully port it over from JavaScript. I think it works properly, but you tell me.

    import math
    import arcpy

    def polylabel(polygon, centroid, precision, debug):
        cells = []
        precision = precision or 1.0
        extent = polygon.extent
        minX, minY, maxX, maxY = (extent.XMin, extent.YMin, extent.XMax, extent.YMax)
        width = extent.width
        height = extent.height
        cellSize = min(width, height)
        h = cellSize / 2

        # cover the polygon's extent with a starting grid of cells
        cellQueue = []
        x = minX
        while x < maxX:
            y = minY
            while y < maxY:
                cell = Cell(x + h, y + h, h, polygon)
                cells.append(cell.geom)
                cellQueue.append(cell)
                y += cellSize
            x += cellSize

        # take the centroid as the first best guess
        bestCell = Cell(centroid[0], centroid[1], 0, polygon)
        numProbes = len(cellQueue)

        while len(cellQueue):
            # probe the most promising cell (greatest distance to the boundary)
            cellQueue.sort(key=lambda cell: cell.d)
            cell = cellQueue.pop()
            cells.append(cell.geom)
            if cell.d > bestCell.d:
                bestCell = cell
            # skip cells that can't possibly beat the current best
            if cell.max - bestCell.d <= precision:
                continue
            # otherwise, split the cell into four children
            h = cell.h / 2
            cellQueue.append(Cell(cell.x - h, cell.y - h, h, polygon))
            cellQueue.append(Cell(cell.x + h, cell.y - h, h, polygon))
            cellQueue.append(Cell(cell.x - h, cell.y + h, h, polygon))
            cellQueue.append(Cell(cell.x + h, cell.y + h, h, polygon))
            numProbes += 4

        return [bestCell.x, bestCell.y, cells]

    class Cell(object):
        def __init__(self, x, y, h, polygon):
            self.x = x  # cell center x
            self.y = y  # cell center y
            self.h = h  # half the cell size
            self.d = pointToPolygonDist(x, y, polygon)  # signed distance from cell center
            self.max = self.d + self.h * math.sqrt(2)   # max distance possible within the cell
            self.geom = arcpy.Polygon(arcpy.Array([
                arcpy.Point(x - h, y - h), arcpy.Point(x + h, y - h),
                arcpy.Point(x + h, y + h), arcpy.Point(x - h, y + h)]))

    def pointToPolygonDist(x, y, polygon):
        # signed distance from a point to the polygon outline
        # (negative when the point falls outside the polygon)
        point_geom = arcpy.PointGeometry(arcpy.Point(x, y), sr)
        polygon_outline = polygon.boundary()
        min_dist = polygon_outline.queryPointAndDistance(point_geom)
        sign_dist = min_dist[2] * ((min_dist[3] - 0.5) * 2)
        return sign_dist

    fc = 'BEC_POLY selection'  # feature class/layer
    sr = arcpy.Describe(fc).spatialReference

    centroids = []
    label_points = []
    cells = []
    with arcpy.da.SearchCursor(fc, ['SHAPE@', 'SHAPE@XY']) as cursor:
        for row in cursor:
            best_point = polylabel(row[0], row[1], 0.75, '#')
            centroids.append(arcpy.PointGeometry(row[0].centroid, sr))
            label_points.append(arcpy.PointGeometry(arcpy.Point(best_point[0], best_point[1]), sr))
            for cell in best_point[2]:
                cells.append(cell)

    arcpy.CopyFeatures_management(centroids, r'in_memory\centroids')
    arcpy.CopyFeatures_management(label_points, r'in_memory\label_points')
    arcpy.CopyFeatures_management(cells, r'in_memory\cells')
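
If you want to poke at the search logic without an ArcGIS license, here is a dependency-free sketch of the same quadtree idea. A ray-casting point-in-polygon test plus point-to-segment distances stand in for `queryPointAndDistance`, a plain sorted list stands in for the priority queue, and the `ring`/`precision` interface is my own simplification:

```python
import math

def point_to_polygon_dist(x, y, ring):
    """Signed distance from (x, y) to a polygon ring (positive inside)."""
    inside = False
    min_dist_sq = float('inf')
    n = len(ring)
    for i in range(n):
        ax, ay = ring[i]
        bx, by = ring[(i + 1) % n]
        # ray casting: toggle for every edge the eastward ray crosses
        if (ay > y) != (by > y) and x < (bx - ax) * (y - ay) / (by - ay) + ax:
            inside = not inside
        # squared distance from the point to segment a-b
        dx, dy = bx - ax, by - ay
        if dx or dy:
            t = max(0.0, min(1.0, ((x - ax) * dx + (y - ay) * dy) / (dx * dx + dy * dy)))
            px, py = ax + t * dx, ay + t * dy
        else:
            px, py = ax, ay
        min_dist_sq = min(min_dist_sq, (x - px) ** 2 + (y - py) ** 2)
    return math.sqrt(min_dist_sq) * (1 if inside else -1)

def polylabel(ring, precision=0.1):
    """Quadtree search for the pole of inaccessibility of a simple ring."""
    min_x, min_y = min(p[0] for p in ring), min(p[1] for p in ring)
    max_x, max_y = max(p[0] for p in ring), max(p[1] for p in ring)
    cell_size = min(max_x - min_x, max_y - min_y)
    h = cell_size / 2
    # (signed distance, x, y, half-size) tuples stand in for Cell objects
    queue = []
    x = min_x
    while x < max_x:
        y = min_y
        while y < max_y:
            queue.append((point_to_polygon_dist(x + h, y + h, ring), x + h, y + h, h))
            y += cell_size
        x += cell_size
    # vertex average as a cheap first guess
    cx = sum(p[0] for p in ring) / len(ring)
    cy = sum(p[1] for p in ring) / len(ring)
    best = (point_to_polygon_dist(cx, cy, ring), cx, cy, 0)
    while queue:
        queue.sort()           # most promising cell (largest distance) last
        d, x, y, h = queue.pop()
        if d > best[0]:
            best = (d, x, y, h)
        if d + h * math.sqrt(2) - best[0] <= precision:
            continue           # this cell can't beat the current best
        h /= 2
        for nx, ny in ((x - h, y - h), (x + h, y - h), (x - h, y + h), (x + h, y + h)):
            queue.append((point_to_polygon_dist(nx, ny, ring), nx, ny, h))
    return best[1], best[2]
```

On a unit square it converges to the center, (0.5, 0.5), as you'd hope.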

# Mapbox Visual Center Algorithm: arcpy

On Monday, Mapbox published a JavaScript implementation of a fast algorithm for finding the visual center of a polygon using quadtrees (blog, code). I took a stab at it using Python/arcpy in ArcGIS:

    import arcpy

    def quadify(poly):  # split the polygon's extent into four quads around its centroid
        ext = poly.extent
        TM = arcpy.Point((ext.XMin + ext.XMax) / 2, ext.YMax)  # top middle
        LM = arcpy.Point(ext.XMin, (ext.YMin + ext.YMax) / 2)  # left middle
        BM = arcpy.Point((ext.XMin + ext.XMax) / 2, ext.YMin)  # bottom middle
        RM = arcpy.Point(ext.XMax, (ext.YMin + ext.YMax) / 2)  # right middle
        TL = arcpy.Polygon(arcpy.Array([ext.upperLeft, LM, poly.centroid, TM]), sr)
        TR = arcpy.Polygon(arcpy.Array([TM, poly.centroid, RM, ext.upperRight]), sr)
        BL = arcpy.Polygon(arcpy.Array([LM, ext.lowerLeft, BM, poly.centroid]), sr)
        BR = arcpy.Polygon(arcpy.Array([poly.centroid, BM, ext.lowerRight, RM]), sr)
        return [TL, TR, BL, BR]

    fc = 'wetlands_select'  # feature class/layer
    sr = arcpy.Describe(fc).spatialReference  # spatial reference
    out_polys = []
    out_points = []
    with arcpy.da.SearchCursor(fc, 'SHAPE@', spatial_reference=sr) as cursor:  # loop polygons
        for row in cursor:
            poly_boundary = row[0].boundary()
            ext = row[0].extent
            center = arcpy.PointGeometry(arcpy.Point((ext.XMin + ext.XMax) / 2,
                                                     (ext.YMin + ext.YMax) / 2), sr)
            quads = quadify(row[0])
            for i in range(10):  # quadify 10x
                max_dist = 0
                dists = {}
                for quad in quads:
                    # signed distance from the quad's center to the polygon boundary
                    q_dist = poly_boundary.queryPointAndDistance(
                        arcpy.PointGeometry(quad.centroid, sr))
                    cur_dist = q_dist[2] if q_dist[3] else -q_dist[2]
                    dists[cur_dist] = quad
                    max_dist = cur_dist if cur_dist > max_dist else max_dist
                quads = []
                for k, v in dists.items():
                    if k > max_dist * 0.90:  # precision = 90%
                        out_polys.append(v)
                        quads += quadify(v)
                center = arcpy.PointGeometry(dists[max_dist].centroid, sr)
            out_points.append(center)
    arcpy.CopyFeatures_management(out_polys, r'in_memory\polys')
    arcpy.CopyFeatures_management(out_points, r'in_memory\points')
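
Stripped of arcpy, the split itself is just bisecting a box. Here's a plain-tuple sketch (note it pivots on the box midpoint, whereas the arcpy version pivots on the polygon centroid):

```python
def quadify(extent):
    """Split a bounding box (min_x, min_y, max_x, max_y) into four equal quads."""
    min_x, min_y, max_x, max_y = extent
    mid_x = (min_x + max_x) / 2.0
    mid_y = (min_y + max_y) / 2.0
    return [(min_x, mid_y, mid_x, max_y),   # top-left
            (mid_x, mid_y, max_x, max_y),   # top-right
            (min_x, min_y, mid_x, mid_y),   # bottom-left
            (mid_x, min_y, max_x, mid_y)]   # bottom-right

# ten rounds of quadification shrink the winning quad by 2**10 per side
extent = (0.0, 0.0, 1024.0, 1024.0)
for _ in range(10):
    extent = quadify(extent)[0]  # pretend the top-left quad always wins
print(extent)  # (0.0, 1023.0, 1.0, 1024.0)
```

That exponential shrink is why a fixed iteration count works as a crude precision knob.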

Notes:

I’m not 100% sure that this emulates the base Mapbox algorithm, and I didn’t attempt to implement the “priority queue” enhancement.

The results are somewhat sensitive to the precision factor, but I suppose that's to be expected with an algorithm built for speed.

I realize that you can calculate a label point much more easily within ArcGIS using out-of-the-box tools. This was just me exploring the algorithm.

# 3D Grande Prairie Buildings (LiDAR)

Sometime in the last year, the County of Grande Prairie (Alberta) started providing bare earth and full feature LiDAR through their Open Data Catalogue (LiDAR download map here). Here’s how you can use the data to visualize 3D buildings, using a combination of ArcGIS for Desktop (with Spatial Analyst and LAStools), QGIS, and Cesium JS.

1. Find and download the bare earth (digital terrain model = DTM) and full feature (digital surface model = DSM) LiDAR zip files for a quarter section of your choice from the LiDAR viewer (I used the files here and here). To familiarize yourself with the concepts of DSMs and DTMs, read this answer.
2. Run the las2dem tool within the LAStools toolbox for each of your downloaded .las files. You can accept all default parameters.
3. The tool should output a .tif image for each surface, projected to UTM Zone 11. If not, run Define Projection to assign the projection.
4. By default, the names of each layer will be the same (e.g. “710622NW.tif”). Change them to be unique, so geoprocessing tools (and you) can easily tell them apart (e.g. “710622NW_BE.tif” and “710622NW_FF.tif”).
5. The DTM and DSM each report elevations in meters above sea level. Since we are interested in the difference between those elevations (i.e. the difference between rooftop and ground pixels), we need to subtract the two images. You can do this using the Minus tool or, more generally, Raster Calculator. I used the expression Int(“710622NW_FF.tif”-“710622NW_BE.tif”) to return the difference between the DSM and DTM as integers (this will simplify things later on).
6. Eventually, we are going to want polygons to load into Cesium (on the order of hundreds or perhaps a few thousand), not individual pixels (there are 700,448 pixels in my image). At this point, we have what is called a classified image, classified by height in one-meter increments. We want to group this classified image by similar height values, and we can do that by processing our classified output. Following the instructions in this link, we filter, smooth, and generalize the output.
7. Run Raster to Polygon on your generalized output to produce polygons.
8. Cesium is going to want lat/long coordinates, so you may as well run the Project tool on your polygons to switch the data to WGS84.
9. As far as I know, ArcGIS does not support GeoJSON (which we are going to use in Cesium). Conveniently, QGIS does. So, load your polygons in QGIS and “Save As…” GeoJSON.
10. Switching to Cesium, create a map and load your GeoJSON (general example here). Cycling through the GeoJSON polygons, set each polygon’s height to 0 and its extruded height to match the elevation attribute in your data (for me, that was stored in the “GRIDCODE” field).
11. That’s it! You can see my interactive example here, using a “modern” browser. Here’s a sample, showing the Paradise Inn (pictures from my example, and Tom Shield Realty):
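
The raster subtraction step is easy to sanity-check outside ArcGIS. With a couple of hypothetical 2x2 elevation grids (the values are made up), the per-pixel math looks like this:

```python
# hypothetical 2x2 elevation grids, in meters above sea level
dsm = [[650.2, 652.8], [651.1, 649.9]]  # full feature (rooftops, treetops)
dtm = [[649.8, 649.7], [649.9, 649.9]]  # bare earth
# per-pixel Int(DSM - DTM): height above ground, truncated to whole meters
height = [[int(ff - be) for ff, be in zip(ff_row, be_row)]
          for ff_row, be_row in zip(dsm, dtm)]
print(height)  # [[0, 3], [1, 0]]
```

The second pixel reads as a 3 m structure; everything under a meter truncates to ground, which is what makes the integer cast handy for the later classification step.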

Note: I tried and failed to add a link to the LiDAR Open Data License in my example. Hopefully, this will be sufficient.

# Garden Placement Using Publicly Available LiDAR

Here’s a quick method for finding the sunniest spot on your property in Prince George, BC (and elsewhere if you can find the data). Unfortunately, this method is not free from start to finish (it requires ArcGIS Spatial Analyst), but there are a fair number of free tools and data used.

1. Get the data. Specifically, you need the raw LiDAR (.las files) for your area, which are not available for download from the City of PG’s Open Data Catalogue. While you’re at it, also ask for the orthophotos (aerial photos) and cadastral data (parcel boundaries, road files, etc.). Get in touch with the City of Prince George’s GIS department to make your data request: gisinfo@city.pg.bc.ca
2. Load your orthophoto into your GIS program (I’m using ArcGIS, but you can use QGIS for the time being [it’s free]).
3. Create a new polygon feature class and draw a polygon to narrow down your area of interest.
4. Download LAStools (it’s free for the tools you need). Bring the LAStools toolbox into your GIS (there are ArcGIS and QGIS versions).
5. Run the lasclip tool to ignore the millions of LiDAR points that you’re not interested in.
6. Run las2dem to create an elevation raster of your LiDAR data.
7. Consult your favourite source, like the Farmer’s Almanac, to determine the timing of your growing season.
8. Here is where I used a Spatial Analyst tool called Area Solar Radiation, which is not free (in fact, it’s darn expensive). Run the tool, using your latitude (PG is about 53.914) and frost-free start and end dates (for PG, June 4 to Sept. 3). You should end up with something like this, which my legend tells me ranges from blue (353 Wh/m^2) to red (883,061 Wh/m^2):
9. Now, it’s a matter of overlaying parcel boundaries, finding your property, and seeing what kind of sunlight you can expect. I’ve circled a few good candidates on this block with lots of sun in the backyard, and (surprise!) some of them are existing gardens.
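
Area Solar Radiation accounts for the horizon, sky sectors, and shadows cast by the surface model, but the sun geometry underneath it is simple to sketch. Here's a back-of-the-envelope solar-noon elevation for the frost-free window, using the common sine approximation for declination (the function names and day numbers are my own):

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination (degrees) for a day of the year."""
    return 23.44 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))

def max_sun_elevation(latitude, day_of_year):
    """Sun's elevation angle (degrees) at solar noon for the given latitude."""
    return 90.0 - abs(latitude - solar_declination(day_of_year))

# Prince George is at about 53.914 N; June 4 and Sept. 3 are days 155 and 246
for day in (155, 246):
    print(round(max_sun_elevation(53.914, day), 1))  # 58.5, then 43.0
```

The noon sun drops roughly 15 degrees between the start and end of the season, which is why south-facing spots clear of tall shadows score so much higher in the radiation raster.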

Bonus: another cool thing you can do (for free) is load your monochrome elevation model in Blender, extrude the terrain heights, drape the aerial imagery overtop, and create a 3D animation like this one.