Archive for September, 2012
My First TileMill – MapBox Map
When François* claimed he had a TileMill – MapBox map up and running in 30 minutes, I had to try it out for myself. So, I spent a half-day yesterday and a bit of time this morning downloading it, going through the tutorial, and then messing around with it. If you just wanted to slap some simple point data on a map, you could certainly have it done in close to 30 minutes. For those users, the TileMill – MapBox setup works quite well. However, I got immersed in trying to figure out as many of the capabilities as I could, so it took a bit longer. Oh, and there was also the slight problem with the export, explained below.
So first off, for those who don’t know, TileMill is an open source project by MapBox**. It allows you to upload your own spatial data, style it using CartoCSS, and then export it to formats like PDF and SVG, or upload it straight to MapBox so that MapBox can serve it up for you in a dynamic webmap.
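To give a flavor of the styling side, here’s a minimal sketch of what CartoCSS looks like (the layer name, colors, and zoom cutoff are invented for illustration, not taken from any real project):

```css
/* Style a polygon layer named #harvests (an invented layer name). */
#harvests {
  polygon-fill: #4a8f4a;    /* fill color for each polygon */
  polygon-opacity: 0.6;     /* let the basemap show through */
  line-color: #2d5e2d;      /* outline color */
  line-width: 0.5;
}

/* Thicken the outlines once the reader zooms in far enough to see them. */
#harvests [zoom >= 12] {
  line-width: 1;
}
```

As I note below, the “save & style” option drops a starter block much like the first one into your stylesheet for you.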
There are a lot of very enthused GIS folks who love TileMill for its great cartographic capabilities. I didn’t quite get to the point where I could enable a cool carto effect, but it does seem doable, maybe just not within a 30-minute learning time frame (or even a half-day time frame). All in all, my experience was a good one, and I was sufficiently impressed to want to continue using it.
Some notes:
- If you want to do geographic manipulation on a dataset, such as buffering, selecting by location, or other GIS tasks, you still have to use a GIS. TileMill is not a GIS.
- When you start a new project, the program asks you if you want to import their world dataset. This world dataset is great for small- and medium-scale maps. At a local, large scale, it mocks me in cartoonish fashion. You will need your own background layer for large-scale maps, or you can wait until you export to MapBox and then add in their streets data (which is what I did).
- There are quirks you have to get used to, just as in any program. Don’t let anyone say there isn’t a learning curve; there always is. For example, what’s the difference between “save” and “save & style”? Well, after some trial and error I realized that “save & style” adds the CartoCSS code to the stylesheet for you, whereas “save” just adds a layer to the map without the corresponding code help.
- It still has some developer-speak in the text. For example, the Open Streets, DC example project states, “OpenStreetMap shapefile extracts provided by…” I believe the word “extracts” is meant just to show off. But I nitpick. Similarly, it’ll help if you’re used to language like this (found in the support area): “Does adding a ‘text-min-padding’ style to your text help out at all? I would start experimenting with values in the 10-50 range. This could also be coupled with a reduction in your buffer size.” (See the first snippet after this list for roughly what that kind of tweak looks like.) So if you are used to point-and-click buttons in your GIS and you aren’t used to open source software, this will be a new way of talking, thinking, and writing for you. That’s okay. I’m just sayin’.
- The export dialog was telling me it would take 6 days to export my map. Thankfully, Dane Springmeyer (@springmeyer on Twitter) pointed out that you have to set the maximum zoom level to something lower than the default of 22, with each lower zoom max representing a marked decrease in the number of tiles needing to be exported (see the arithmetic after this list). When I lowered it to 12 it exported within a few seconds. Much better.
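Two asides on those last two notes. First, here’s roughly what the support-forum suggestion looks like in practice. This is a sketch with an invented layer and field name; text-min-padding and the other properties are real CartoCSS, but the values are just the sort of experiment the quote describes:

```css
/* Invented label layer; [NAME] stands in for whatever field holds your label text. */
#labels {
  text-name: [NAME];
  text-face-name: 'DejaVu Sans Book';
  text-size: 10;
  text-min-padding: 30;   /* keep labels this far (in pixels) from the tile edge; try the 10-50 range */
  text-halo-radius: 1;    /* a thin halo keeps labels readable over the basemap */
}
```

Second, the export arithmetic. The number of tiles roughly quadruples with every zoom level, so, as a back-of-the-envelope sketch (this assumes a whole-world export; a smaller bounding box scales everything down by the same factor):

```latex
\text{tiles}(z) = 4^{z}, \qquad
\text{total}(z_{\max}) = \sum_{z=0}^{z_{\max}} 4^{z} = \frac{4^{z_{\max}+1}-1}{3}
```

Dropping the maximum zoom from 22 to 12 therefore divides the tile count by roughly 4^10, about a million, which is how six days becomes a few seconds.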
*In comments section of The TileMill Map Gallery post.
**It’s a mystery why the two aren’t integrated into one product.
Here’s my first mapping attempt. It uses the MapBox Streets background with the colors changed somewhat, and some forest permit harvest data that I built for the Hood Canal Coordinating Council using existing state data and a custom algorithm for teasing out specific harvest areas by date. You can get information about individual harvests by hovering over the polygons. The dataset itself represents a large amount of effort in getting usable information out of a public dataset, and it is nice to be able to show it off in webmap form. I can see a lot of other GIS analysts wanting to do this with their data quickly and easily.
Hover over the green polygons. These tooltips were a breeze to implement in TileMill.
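For anyone wondering how the tooltips work under the hood: TileMill’s interactivity is driven by Mustache-style templates that pull fields from your data. A minimal sketch of a hover (“teaser”) template, with invented field names, might look like this:

```html
<!-- Each {{{...}}} token is replaced with that field’s value when you hover over a feature. -->
<strong>Harvest area</strong><br/>
Permit date: {{{PERMIT_DATE}}}<br/>
Acres: {{{ACRES}}}
```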
GIS Practitioners Vs. Graphic Designers
Posted by Gretchen in Cartography Profession on September 26, 2012
There are two types of professionals that routinely make maps: GISers and graphic designers. They come from very different backgrounds and often come up with very different map solutions. But first, what do maps do?
(1) Maps are powerful to the extent that they convey useful information. The more useful the information, the more powerful they are.
(2) Maps are entertainment, to the extent that they appeal to the aesthetic sensibilities in the map reader. The better they are designed, the more entertaining they are.
GIS practitioners are traditionally focused on how powerful their cartographic products are, in the information-conveying sense above. Whether or not they succeed is based on how well they also understand the entertainment needs of their map readers.
Graphic designers are traditionally focused on the level of entertainment that their maps provide. Whether or not they succeed in creating interesting maps does not always depend on how much useful information they employ.
Not all GIS practitioners have embraced what is really a critical part of their workflow: the aesthetic qualities of the finished map. And certainly there are graphic designers who couldn’t care less about the information behind their designs.
A cartographer, however, must be proficient in both ways of thinking, and it is what sets us apart.
Tuesday Random Roundup
A few things to explore in the world of geo today:
- Brian Timoney does it again over on his MapBrief blog in The Delusional Job Ad That Reveals What’s Wrong With GIS. In this post, Timoney points out the futility, and downright ridiculousness, of a recent job ad for a senior GIS person who would be required to be everything to all people. It’s a great critique along with some great visuals. A must-read. Along that same line of thinking is a passage from a book I was just reading called The Dream, in which author Gurbaksh Chahal states, “A lot of people look for perfection right out of the gate, and I think this is a big mistake. Nothing is perfect, so for me it’s really about the future and about whether I think the candidate and I are a good fit.”
- There’s a new do-it-yourself globe download and tutorial by graphic designer Joachim Robert. Check it out.
- If you have a copy of Cartographer’s Toolkit and would like to write a review of it on its Amazon page, I’d be very grateful. Since it is a small-press publication it’s going to need all the reviews it can get. Thank you!
- The 2012 Esri Map Book is available and definitely worth a flip-through. Some of the maps are a bit disappointing, though: most don’t have the kind of WOW factor I’m looking for these days. Oh, they are all examples of good cartography, to be sure, just not the kind of totally amazing &&*%#O$ that you want to buy and put on your wall.
Dawn Wright’s Talk Today: The Age of Science and Big Data
Dawn Wright, Chief Scientist at Esri, presented a one-hour talk today on The Age of Science and Big Data at Colorado State University. I was in attendance due to my current interest in the topic of Big Data and, of course, how it relates to spatial scientists like ourselves. She delivered a great talk with many takeaways, some of which I will try to enumerate here.
The first interesting thing I took note of was the fact that the journal Nature actually ran a cover story on Big Data back in 2008, a full three years before most people started discussing it in the spatial community. The second interesting thing was a thought that crossed my mind as Wright said the word “interoperability.” My thought? That this word, interoperability, is really one of those self-limiting words, in that the minute things actually become interoperable we will no longer have a need for the word.
Wright mentioned GeoDesign, which in my mind is really just urban planning with GIS, but she emphasized that it is about both how we see the world and how we manipulate the world to be the way we want to see it. Just think on that for a bit.
The main learning points of the talk centered on the three traditional characteristics of Big Data, as well as two additional characteristics that they think about at Esri:
- Volume: Big Data is big, often in the petabytes.
- Velocity: near real-time or real-time data from automated sensors, which forces questions such as: what do we keep and what do we discard?
- Variety: the characteristic she argues geospatial experts are most interested in, since we like to combine different datasets to derive novel conclusions.
- The two Esri add-ons: veracity and values.
Examples of Big Data sets that Wright mentioned are: air and ship traffic, Yahoo! Finance data (# of nodes = 42,000!), critical zone observatories, and GEOSS. She also mentioned the US NSF EarthCube, which seems to be an attempt to organize and hold this data. Of course, we’ve seen attempts like this before that never got off the ground, so we’ll see if this one is any different. Wright went on to emphasize that if you are at all interested in data-intensive science then you had better read The Fourth Paradigm, which she asserts is the text to have on the subject.
The only thing I was disappointed in was that Wright did not discuss the importance of Big Data visualization, which I posit (as long as we are all adding “v” words to the list of Big Data tenets) is going to be the make-it-or-break-it aspect of whether the results of these analyses make any difference in the world. In other words, without a good way to show off your Big Data results, the public won’t listen in the first place, let alone try to understand. So that’s what I propose: visualization needs to be the 6th Big Data tenet.
The TileMill Map Gallery
There’s a lot of buzz surrounding the open source web mapping platform called TileMill, mainly for its massive map styling capabilities and apparent* ease of use. If nothing else, you at least have to visit their map gallery and browse some of the innovative mapping techniques on display.
Also, take a look at the help file Compositing Operations for a guide to all the ways that TileMill can change the textures and styles of your webmaps. Hat tip: Bill Morris.
Running Routes built with TileMill, by Tom MacWright
*I say “apparent” because I haven’t actually used it yet. But from what I can tell, it is easy in the sense that a lot of other webmap platforms are difficult by comparison. That, of course, doesn’t mean that TileMill is easy compared to, say, Illustrator or ArcGIS, if drag-and-drop is what you’re used to.
Big Data in the News Again
I’ve talked about big data on this blog before (Demand for GIS Analysts on the Rise? and Big Data Articles Everywhere) and also mentioned it at the end of my recent talk with James Fee. So it was with much interest that I read Harvard Business Review’s headline article this month titled “Big Data: The Management Revolution” by Andrew McAfee and Erik Brynjolfsson.
In it, the authors discuss how Big Data is different from regular data and report on the results of their study of 330 public North American companies. They sought to find out if businesses that use Big Data analytics actually perform better financially than businesses that don’t. I highly recommend getting the article (you could read it at your local library if you don’t have a subscription, though I find the Kindle version to be quite worth the cost) to find out what the results of the study are.
Now, I realize that those of us who specialize in spatial analytics and mapping are interested in making a difference in all kinds of ways, not just in big-business financials, but we can assume that the outcomes of this research are applicable to increasing performance in whatever field of expertise you currently work in, whether it’s local government, natural resources, utilities, or something else.
In natural resources, which is the field I work in, I’ve thought about this in the context of one of the longest-standing analytical subjects I’ve been involved in: salmon habitat in the Pacific Northwest. For over 10 years the approach has been to work with the salmon scientists to determine what factors are important in the salmon habitat equation, where those resources are available on the ground, how those resources might fare in the future, and where the potential risks are, to name a few relevant metrics.
And yes, those datasets can be quite large (think individual tree ages based on LiDAR; more on that here and here), but they are not real-time. One of the differences between regular data and Big Data is the real-time nature of Big Data. What I foresee making a big impact along these lines is seeing data on fecal coliform levels or septic system failures in real time, for example, so that measures can be taken to immediately ameliorate their impacts on salmon. This is only one way in which Big Data could affect my work; there are, I am sure, a large number of other things that could be done once this concept takes hold that I haven’t even begun to be cognizant of. The possibilities!