Archive for category Web Maps
Adventures in Plant Hardiness Zone Cartography
Yesterday I spent some time looking at plant hardiness zone maps in pursuit of vegetable seeds and starter sets that may actually germinate this year. My historic gardening flubs aside, I was astonished by the poor quality of the maps at the top of the Google search results.
The first result for “Colorado hardiness zone map” is a site called Plant Maps that looks to be “optimized” (this word is used very loosely here) for ad revenue. It is also de-optimized for anything resembling a pleasurable experience, including the fact that you can’t actually figure out what hardiness zone you are in. Oh wait, yes you can. After much perusal, a viewer may finally realize that the colored squares on the left-hand side of the screen are, in fact, a legend of sorts for the map that resides in the middle of the screen (with a nice vertical ad bar in between).
Part of the problem with this site is the obtrusive ads, but another is what appears to be feature creep: the insidious and rightfully maligned process that tends to get engineers into trouble. For example, clicking on their last blog entry (notably from 2011) shows that the most recent addition was an interactive map of drought conditions across the United States, when their effort might have been better spent raising the quality of the features already on the site.
If you google “plant hardiness zone map” without the Colorado bit, you are pointed to the much more official USDA Plant Hardiness Zone page, which by comparison looks downright cutting-edge. But even this one could be so much better.
For starters, I’m willing to bet that their analytics will tell them that most visitors to the site want to view the map rather than read about it. They probably know this, which is why they’ve allowed the map to take up a little more than 50% of the screen space. In reality, though, it should fill 75% or more. The USDA logo, the text about how it’s the first time they’ve published a hardiness zone map, the Stay Connected buttons, the six tabs including the Help* tab? They all must go!
Clicking on the individual state pages brings up a new map, as expected, though zooming the existing map to the state would be better. And here we see that a bad symbology choice has left Colorado looking like bloodshot eyes. Are rivers a necessary addition to the map, or could people know what they are looking at without them? I’m going to wager a guess on the latter. Additionally, I wonder whether the choice of blue works here, or does it suggest that Colorado will be inundated when the glaciers melt? Just wondering.
So when I read the first sentence on Stephen Few’s blog today, it seemed apropos:
We are overwhelmed by information, not because there is too much, but because we don’t know how to tame it.
In the case of the Plant Map website, I think we can conclude that in some ways it must be functional…for the owner of the website, who is apparently collecting enough revenue from its egregious number of ads to make it worth hosting despite the fact that the last update was in 2011. Perhaps this space is ripe for some disruption. A pared-down site along the lines of the wind map**, for example, with a few ads to sustain the maintenance costs and perhaps even provide a modicum of profit for the host would be just the thing.
Maybe I’ll just stick to indoor gardening.
*A map meant for the general public that requires a Help section means it’s not good enough.
**The Wind Map gets a higher Google ranking (#1) than Weather Underground’s (#4), which is not too bad but has quite a bit of feature bloat.
Edited to add some additional hardiness zone mapping resources from readers:
@PetersonGIS Related, if the source data is of any use: http://t.co/ByJ5Cxvw6Z
— Bill Morris (@vtcraghead) April 29, 2015
@PetersonGIS @vtcraghead here are a couple the story map http://t.co/LFRmXVsyZF and the source rest for the USDA map http://t.co/3CggWcf8Td
— Damian (@spangrud) April 29, 2015
2015 GeoHipster Calendar
At Boundless, we put together a nice and subtle world-wide basemap for our new product: Versio. It’s meant to be a basemap that shows you where your data is but doesn’t get in the way, thus the quiet color scheme coupled with ample data from OpenStreetMap.
A stitched together series of screenshots at about zoom level 14 in the San Francisco region provided a good entry for the 2015 GeoHipster Calendar and I’m pleased to announce that it has made the cover.
While I was the main designer for the map, we all know that cartography is only as good as its underlying data, and in the case of dynamic maps, as good as its underlying infrastructure. That’s why the map was really a team effort by the whole Versio team at Boundless.
A short background on the map in case you’re interested: we used imposm3 to load a world-wide OpenStreetMap dataset with a customized mapping.json file, which gave us generalized data for roads and other features at the lower zoom levels while still retaining the non-generalized data for the higher zooms. We also used quite a bit of Natural Earth data for the lower zooms, including a raster hillshade for the ocean overlaid with a semi-transparent ocean layer to make it more subdued. Most of the labels are not cached; they are dynamic, so we don’t have any issues with double labels or labels cut off at tile edges. Because we aren’t using too many labels in the dynamic label layer, this doesn’t seem to affect performance. The map was made with most of the OpenGeo Suite components, including–yes, I’ll say it–SLDs that I basically edited by hand. GeoServer serves up the data + SLDs, PostGIS holds the OSM data, the Natural Earth data are kept in shapefile format, GeoWebCache cuts the tiles, and OpenLayers shows them off on the webmap.
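For the curious, an imposm3 mapping fragment that produces both a full-resolution roads table and a generalized one looks roughly like this. This is an illustrative sketch, not our production file; the table names, tag list, filter, and simplification tolerance are all made up for the example:

```json
{
  "tables": {
    "roads": {
      "type": "linestring",
      "columns": [
        {"type": "id", "name": "osm_id"},
        {"type": "geometry", "name": "geometry"},
        {"type": "string", "name": "name", "key": "name"},
        {"type": "mapping_value", "name": "type"}
      ],
      "mapping": {
        "highway": ["motorway", "trunk", "primary", "secondary"]
      }
    }
  },
  "generalized_tables": {
    "roads_gen": {
      "source": "roads",
      "sql_filter": "type IN ('motorway', 'trunk')",
      "tolerance": 200.0
    }
  }
}
```

The generalized table is what gets drawn at the low zooms; the full table takes over once you zoom in far enough for the detail to matter.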
More TileMill Tutorials
I’ve produced two more tutorials for the TileMill series. We’re still covering the basics here. If you missed the first (badly produced) one, you can look at it here:
That’s One Short Tutorial: TileMill
I promise better production quality in these two follow-ups, produced with CamStudio and my nice studio mic. Transcript summaries follow!
While you’re watching, please consider subscribing.
Transcript Summary for Intro to TileMill 2: Adding Layers
Click on the Add Layers box in the lower left of the project window. There are lots of different file types supported by TileMill, including CSV, shapefile, GeoJSON, KML, GeoTIFF, SQLite, and PostGIS. In this tutorial we use a GeoTIFF of hypsometric tinting (elevation tinting) from www.naturalearthdata.com. Natural Earth is a good place to get data if you don’t have any map data yet or if you need to add some data layers to your collection. The data is free.
In the ID box of the Add Layer dialog you name the dataset that you’re adding. (Note: if you don’t put anything in that box, TileMill will put something in there for you.) We’re going to call our layer “Tinting.” That’s how it’s going to be referenced in the code. You aren’t required to put anything in the Class box. The only reason you would use it is if you have several data layers that are all describing the same sort of thing. For example, you could have a U.S. roads layer, a European roads layer, and a world roads layer. You could give all three the Class name of “roads” and then use “.roads” to reference them all at the same time in the code if you want them all to have the same styling (e.g. they all would be black and width=1). We won’t use the Class box today.
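As a quick sketch of that class idea, the shared styling would look something like this in CartoCSS (the layers themselves are hypothetical; any layer tagged with the “roads” class picks up the rule):

```css
/* Applies to every layer whose Class box was set to "roads" */
.roads {
  line-color: #000;
  line-width: 1;
}
```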
In the Datasource box, the only required box, you put the path to the dataset. You can use the browse button to find it; here I’ve just copied and pasted it in. (Incidentally: when I was doing a production-level TileMill project a few months ago, I kept a notepad file open where I would put all these parameters so I could easily copy and paste them in while troubleshooting. That’s especially nice for the PostGIS tab, which has more complex info to put in.)
SRS stands for Spatial Reference System. This is basically the projection that your data is in. You can use Autodetect, which works most of the time. I happen to know that my data is in WGS84 because the ReadMe file that came with the Natural Earth data I downloaded told me so.
We’re going to leave the Advanced box empty and click Save & Style. If you just click Save, TileMill won’t style your data for you in the code, so it’s a little easier when you’re just starting out to press Save & Style. Then, magically, you’ll see that the layer has been added to the layer list and is also referenced in the code with some default styling. It’s also covered up the rest of my data with a greenish color. You can see the data a lot better if I zoom out; then you can see the hypsometric tinting dataset that I downloaded.
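For a raster layer like this one, the default styling that Save & Style drops into the stylesheet looks something like the following (a sketch from memory; the #tinting selector matches the layer ID we typed into the ID box):

```css
/* Default style generated for a new raster layer */
#tinting {
  raster-opacity: 1;
}
```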
Transcript Summary for Intro To TileMill 3: Adding Comments
In this tutorial we’re going to start off with the hypsometric tinting layer that we added in the previous tutorial showing, and learn how to add comments to the code. The code on the right is called CartoCSS. Knowing how to comment things is pretty important.
Let’s start with just a basic comment. In part of this code I have a “marker-allow-overlap” parameter set to “true,” which is an uncommon thing to do, so you might want to make a comment explaining why you overrode it. It’s pretty easy to do: you just use a forward-slash star combination at the beginning of the comment and a star forward-slash combination at the end, so it looks like this:
/*Override the default layer*/
Another reason that you might want to comment the CartoCSS code is that these stylesheets can get very long. You can visually separate different sections of the stylesheets by putting in section separators like this:
/*------------ LABELS ------------*/
Whatever you put in between the /* and */ is just a comment. Another reason you might want to comment is to put a nice looking header at the top so that people understand who wrote it, when, and whatever other metadata information you want up there.
/***********************
* PetersonGIS
* 12/4/2013
************************/
A fourth, very commonly used, reason to comment is to get rid of data in the map without actually deleting the styling. So if I want to make the hypsometric tinting dataset not show up in the map but still have the CartoCSS code for it in the stylesheet, I can go ahead and put that /* and */ to comment it out. This is great for debugging when you aren’t sure what is breaking the map. The comments can help you figure out what’s going wrong. Also, if you’re going to be exporting slightly different maps to different customers, you can comment out map layers prior to rendering the tiles. (Note: you could also do this another way, by clicking the eye icon next to the layer name in the layers dialog, which effectively makes it disappear without actually deleting it or the code.)
That’s One Short Tutorial: TileMill
Everyone and their brother has been asking for TileMill tutorials. I have zero time for a full-on, production level, quality video series. Sorry. But I do have time for a 1 minute long, cell phone attached to a tripod made of Legos, weird screen glare, introduction! Hooray!
I went for the non-screencast format in favor of the friendlier, “I’m actually here” format. If we produce a whole series then we’ll have to switch to a screencast where you can see the screen better. And I’ll have to take that bad*%( microphone down from the top of the bookshelf. Until then…
Digital Map Wrongs and Rights
Murphy’s Law: Anything that can go wrong will go wrong.
Let’s count the ways that your digital map can bite the dust.
1. Labels get cut off at tile seams. (Increase map buffer area.)
2. Effects at land/water boundary don’t work if the two datasets lack topology. Mouths of large rivers and canals? (Use data that was built to work together like Natural Earth.)
3. Layering isn’t correct. Trailhead symbols underneath park shading. (Do lots of testing and moving code and stylesheets around as needed.)
4. Symbols not appearing. (Remember to include them in the right folder and call correctly. Hello.)
5. You forgot to normalize thematic, population-related data. (Normalize.)
6. Line widths aren’t changing incrementally. When you zoom in they just get bigger and bigger until pretty soon the entire map is one road. (Specify a different line-width for every zoom level or groups of 2-3 zoom levels.)
7. Lines are jumbled or too thick at low zooms. (Generalize the low-zoom data. Simplify the lines using a simplifying algorithm.)
8. All features show up but with no styling: all black, Arial, and width = 1. (Re-check the code; the nesting isn’t working right.)
9. You published it but nobody cares. (Remove half the functionality and increase the prominence of the central purpose.)
10. You published it and everybody cares but in the wrong way. (Remember the most vocal voices are not always the majority opinion.)
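For item 6, the fix looks something like this in CartoCSS (a minimal sketch; the layer name, zoom breakpoints, and widths are made up for the example):

```css
/* Step the width up per zoom group instead of letting one width blow up */
#roads {
  [zoom <= 10] { line-width: 0.5; }
  [zoom >= 11][zoom <= 13] { line-width: 1.5; }
  [zoom >= 14] { line-width: 3; }
}
```

Grouping two or three zoom levels per width, as here, keeps the stylesheet manageable while still letting roads grow gradually.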
Zoom Levels, Pixel Sizes, and Scales, Oh My
Ever wondered what the map scale is at each zoom level in a digital map? Well, it’s not an easy answer, since almost all webmaps are in the Web Mercator projection (some are in Web Mercator Auxiliary Sphere), which has a greatly varying scale depending on your latitude.
The varying scales of a Mercator projection:
Assuming a 96 DPI*, we can translate the zoom levels to map scales and pixel sizes, and for the sake of simplicity they have been recorded only for locations at the equator. Here’s the very useful Esri chart for this. Obviously these will change depending on how far from the equator your features are. But it is a good start to get a handle on, say, what datasets will look good at what zoom levels. For example, you might want to use some Natural Earth large scale data that comes in 1:10m resolution. You’ll see from the chart that it will look good through zoom 6.
The Esri chart is, like I said before, quite handy. However, you might want to see just approximate scale equivalents as you go about looking at various datasets and plotting which ones are good at which zoom levels. Incidentally, if you’re doing an exercise like that, I highly recommend making a data chart in a spreadsheet and putting these values across the top, horizontally, and then x-ing out or coloring the squares that correspond to each dataset’s resolution. This gives a handy visual way to scroll through 100+ datasets to see what resolution they have.
REMEMBER these are just APPROXIMATE scales for each zoom level. Not only does it vary by latitude but these have been significantly rounded:
zoom approx. scale
0 1:500m
1 1:250m
2 1:150m
3 1:70m
4 1:35m
5 1:15m
6 1:10m
7 1:4m
8 1:2m
9 1:1m
10 1:500,000
11 1:250,000
12 1:150,000
13 1:70,000
14 1:35,000
15 1:15,000
16 1:8,000
17 1:4,000
18 1:2,000
19 1:1,000
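If you’d rather compute these than memorize them, the approximate equator scale for any zoom level falls out of the Web Mercator tile math. A quick sketch in Python (assuming the standard 256-pixel tiles and 96 DPI, which is why the chart rounds zoom 0 to roughly 1:500m):

```python
# Approximate map scale at the equator for Web Mercator zoom levels.
# The world fits in one 256px tile at zoom 0; each zoom halves pixel size.
EQUATOR_M = 40075016.686   # WGS84 equatorial circumference, meters
INCHES_PER_M = 39.37
DPI = 96

def meters_per_pixel(zoom: int) -> float:
    """Ground size of one pixel at the equator for a given zoom level."""
    return EQUATOR_M / (256 * 2 ** zoom)

def scale_denominator(zoom: int) -> float:
    """The N in 1:N — ground distance divided by screen distance."""
    return meters_per_pixel(zoom) * INCHES_PER_M * DPI

for z in range(20):
    print(f"zoom {z:2d}  ~1:{scale_denominator(z):,.0f}")
```

Zoom 0 comes out near 1:591,000,000, which rounds to the 1:500m in the table above, and each subsequent zoom level halves the denominator.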
You’ll note that in the Esri zoom level document they also wrote out how many meters, at the equator, one pixel is. Why would this be at all important? Simply put, if your features are less than one pixel in size, this may have implications for how much of that data you show and how you style it. For more on that and some other very interesting dot styling information, see Eric Fischer’s excellently illustrated Mapping Millions of Dots article.
*See this post on why I/we/everyone shouldn’t be assuming 96 dpi. Required reading!