Archive for July, 2011
Simple and Effective Mapping
Think about your audience when designing your maps. For example, at the California Academy of Sciences yesterday I noticed that the maps in the rainforest exhibit were both extremely simple and extremely effective. The rainforest exhibit comprises several stories, each representing the typical species you would find in the rainforests of three or four regions (Borneo and Costa Rica, for example). Each level had a map at the beginning to show you where, for example, Borneo is.
The maps were just the landmasses of the earth and a simple square around the general region. There was no need for labels on major countries, hillshading, aerial photography, or anything else that would take away from the very simple message. Now, there are a lot of times when a map needs A LOT more than landmasses and a locator box, but here the designer adroitly regarded anything beyond the basics as superfluous and, quite likely, distracting from the exhibits.
Now, mind you, these maps are not pieces of art that someone’s going to take a picture of themselves next to. But then again, I noticed they were used quite a bit for actual spatial-knowledge purposes!
Cartography: Traditional Vs. Digital
Lately the posts here have been more about the latest goings-on in the GIS world and GIS analysis/data than about cartographic design principles. Thinking about this today, I realized it is because I’m a bit perplexed about how to adapt traditional cartography to digital media. But that’s not the right way to think about it. Even those who design maps for digital presentations (e.g., slide shows), interactive web maps, small-device apps, and the like still need to know the fundamentals of:
- Color
- Text
- Arrangement
- Figure-ground
- Hierarchy
- Placement
- Purpose
- Audience
- Peer review
Furthermore, more specialized subjects such as isolines, terrain depiction, and road casing remain important as well. Only now you have to know how to program these things rather than draw them.
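To give a sense of what “programming rather than drawing” looks like, here is a minimal sketch of one of those items, road casing, done in Python with matplotlib. The coordinates and styling values are invented for illustration; the point is simply that the cased look comes from drawing the same line twice.

```python
import matplotlib.pyplot as plt

# Hypothetical road centerline coordinates (x, y pairs).
xs = [0, 1, 2, 3, 4, 5]
ys = [0, 0.3, 0.2, 0.6, 0.5, 0.9]

fig, ax = plt.subplots()

# Casing: a wider, darker stroke drawn underneath...
ax.plot(xs, ys, color="#555555", linewidth=8, solid_capstyle="round", zorder=1)
# ...and a narrower, lighter fill stroke drawn on top, producing the outlined-road effect.
ax.plot(xs, ys, color="#ffd966", linewidth=5, solid_capstyle="round", zorder=2)

ax.set_aspect("equal")
ax.axis("off")
plt.show()
```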
We saw this in that great Google Maps analysis: despite being an online map, it is extremely sophisticated. Think about all the things they are able to do, like dynamically “clearing” an area of labels around a major city to provide a visual distinction between cities and their surroundings. If anything, the art and science of cartography has only grown more complex.
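I don’t know how Google actually implements that clearing effect, but it can be approximated with something as simple as suppressing minor labels that fall within a buffer distance of a major city. A rough sketch, with invented place names and distances:

```python
import math

# (name, x, y, is_major) -- invented example points in map units.
places = [
    ("Metropolis", 10.0, 10.0, True),
    ("Suburbville", 10.5, 10.4, False),
    ("Smalltown",   14.0,  3.0, False),
]

CLEAR_RADIUS = 1.5  # map units to keep clear around a major city's label

def labels_to_draw(places, radius=CLEAR_RADIUS):
    """Drop minor labels that fall inside the clear zone of any major city."""
    majors = [(x, y) for _, x, y, major in places if major]
    keep = []
    for name, x, y, major in places:
        if major:
            keep.append(name)
            continue
        near_major = any(math.hypot(x - mx, y - my) < radius for mx, my in majors)
        if not near_major:
            keep.append(name)
    return keep

print(labels_to_draw(places))  # ['Metropolis', 'Smalltown']
```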
Your thoughts???
A Picture of GIS Cartography: A Guide to Effective Map Design at the EsriUC
As I scanned the Esri User Conference tweets last week (I wasn’t there), I got to wondering if my book was being offered for sale anywhere at the conference. Sometimes my publisher is represented at these conferences but my editor tells me that book sellers aren’t allowed on the exhibition floor at the EsriUC.
However, the conference does feature a geo store, so it occurred to me that someone on twitter could probably find out if it was there. Thus was launched my first twitter contest with the tweet: “First person to send me a pic of my book at #EsriUC gets a free Colors For Maps or Type For Maps.”
About a half hour to an hour later @GISTweet responded with this picture:
So @GISTweet won but about one minute after their submission, @albertda sent a picture too, telling me that he had been standing next to @GISTweet, “opening my phone, as I watched him take his picture. :)”
So that was my first twitter contest. Very fun. And an ego boost at that – to see my book displayed right next to a Tufte book! I also wasn’t following those two twitter-folks until then so it helped expand my circle* just a little bit.
*Using the term “circle” reminds me to mention that I joined Google+ this week, thanks to an invite from @GeoDawg and am still figuring out how it works.
Crowd Sourcing the Esri User Conference Plenary
This is great: Dave Bouwman is jotting down notes from the Esri UC plenary and we get to read them as he types…
Just visit his Google Docs page to read along, ask your own questions, etc. He may move it over to Google+ for later sessions, but not everyone is on Google+ yet (I’m not, but apparently I’ve got an invite headed my way).
*Updated to say: Okay, now I’m confused. Perhaps it’s James Fee who is writing these plenary notes, as I see he’s got the same stuff written over on his blog. Either way, check it out.
Reading Conference Tweets
Today is the first day of the 2011 Esri International User Conference. Since I’m not able to be there, I’ll be keeping up with the chatter on twitter from those who are.
There was a tweetup last night (#esriUCtu), which @gletham and @Merrick_Geo attended, among others.
@johnjreiser is apparently promising lots of snarky esriUC tweets.
@geoparadigm is there and excited about the spatial outlet (wonder if my book is there?!)
@EsriUC is the official Esri twitter account covering the conference.
@dbouwman will probably have a lot to say. If you are at the conference, you should check out Dave’s BackChannel website to participate in the unofficial photo scavenger hunt.
There are definitely others on twitter who are attending. If you follow other good twitter folks who are covering the conference, please let us know! You can also search for the hashtag #esriUC.
What Data Have You Been Working With Lately?
Some of the datasets I’ve been working with lately include:
- NAIP, 1 meter, 4 band imagery – A colleague classified 3.5 counties’ worth of NAIP images into between 4 and 7 categories and handed them to me to reclassify into “trees” and “not trees” pixels. Though I was not asked to do an error analysis, I loathe using classified imagery without a formal one, so I did it anyway. With 20 randomly chosen pixels in each county (since the counties were classified separately) checked by eye to see whether they were correctly identified, we got a 94% concurrence, which is an excellent result. Another check that should be done, however, is to randomly choose 20 non-forest pixels in each county and determine concurrence for those, since the original error analysis was heavily weighted toward “tree” pixels given the huge percentage of trees in the study area. That will be one of my next tasks if I have the time to undertake it. (A rough sketch of this kind of spot-check appears after this list.)
- NOAA CCAP, 30 meter, landcover – This dataset covers the coastal regions of the U.S. but was problematic for my project’s needs in that it has a “Palustrine Forested” category, whereas we wanted to know specifically what type of forest (coniferous, deciduous, mixed) those pixels represented. The NOAA people were very responsive and sent me the Landsat mosaics that were used to produce each of the four CCAP years of data (1992, 1996, 2001, and 2006) so that I could mask out the palustrine forested pixels and reclassify them using a supervised classification. (A sketch of that mask-and-reclassify step also appears after this list.) While there is little way to error-test the results because the data are at least 5 years old, a visual assessment of the 2006 results showed a decent amount of concurrence with what we know to be true on the ground right now.
- Regions – I’m currently working on a fun project: digitizing by eye, at a high resolution, some logically drawn regions (some might call these “territories”) based on demographics and existing political boundaries, but weighted more toward demographics and travel corridors when they cross political boundaries. This is a rewarding exercise in the sense that it gives a level of geographic awareness that is only possible when immersed in such a task.
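For what it’s worth, the tree/not-tree spot-check described in the NAIP item above is easy to script. This is only a rough sketch: an invented array stands in for one county’s classified raster, and a faked set of reference values stands in for the by-eye calls you would actually make against the imagery.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for one county's classified raster: 1 = "trees", 0 = "not trees".
classified = rng.integers(0, 2, size=(1000, 1000))

N_SAMPLES = 20  # pixels checked by eye per county

# Draw random pixel locations to inspect.
rows = rng.integers(0, classified.shape[0], size=N_SAMPLES)
cols = rng.integers(0, classified.shape[1], size=N_SAMPLES)

# In practice these reference values come from looking at the imagery by eye;
# here they are faked by copying the classified values and flipping one of them.
reference = classified[rows, cols].copy()
reference[0] = 1 - reference[0]

concurrence = np.mean(classified[rows, cols] == reference)
print(f"Concurrence: {concurrence:.0%}")  # e.g. 95% when 19 of 20 samples match
```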
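And a rough sketch of the CCAP step: mask the “Palustrine Forested” pixels and assign them a forest type from the Landsat bands. The class code, band values, and training means below are all invented, and the toy minimum-distance classifier merely stands in for whatever supervised classifier your GIS or remote sensing package provides.

```python
import numpy as np

PALUSTRINE_FORESTED = 9  # hypothetical CCAP class code, for illustration only

rng = np.random.default_rng(0)

# Stand-ins for the CCAP landcover grid and a 3-band Landsat mosaic.
ccap = rng.integers(0, 20, size=(500, 500))
landsat = rng.random((3, 500, 500))

# Mean band values per target forest type, as would come from training areas.
class_means = {
    "coniferous": np.array([0.20, 0.40, 0.30]),
    "deciduous":  np.array([0.30, 0.60, 0.40]),
    "mixed":      np.array([0.25, 0.50, 0.35]),
}
names = list(class_means)
means = np.stack([class_means[n] for n in names])  # shape: (classes, bands)

# Mask: only the palustrine forested pixels get reclassified.
mask = ccap == PALUSTRINE_FORESTED
pixels = landsat[:, mask].T  # shape: (n_pixels, bands)

# Minimum-distance-to-means classification of the masked pixels.
dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
assigned = dists.argmin(axis=1)

for i, name in enumerate(names):
    print(name, int((assigned == i).sum()), "pixels")
```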
So…what data have you been working with lately?