Archive for category Best Practices

A Common Map Design Error: The Bold Boundary Line

My posts are coming out less frequently lately. This is because I’m coming up with cooking metaphors for my book such as:


The box’s contents would ultimately provide all the ingredients and recipes for you to make informed and inspired decisions that could ultimately be whipped up into a delicious cartographic concoction.

I’m still trying to find a good spot to put that one. :)

So today’s post is just a quick reminder to everyone out there to watch out for this common cartographic catastrophe…The Bold Boundary Line. Don’t let The Bold Boundary Line happen to your maps!

What is The Bold Boundary Line? It’s a situation where your map is completely overwhelmed by boundaries of one sort or another. For some reason that I can’t figure out, beginning cartographers make this error all the time. In fact, I made this mistake quite a bit in my early career too. I certainly was not immune.

It seems we sometimes fool ourselves into thinking that political boundaries, watershed boundaries, ecosystem boundaries, geologic boundaries, and so on are the most important thing on the map. In reality, the most important thing is usually what is inside those boundaries, such as the quantity or type of something, and of course that is what should be highlighted. The boundaries may still be of some importance, but they ought to be lightened in order to lower their place in the visual hierarchy.

The best way to describe this map problem is with an illustration. Here are good / bad / worse maps of my own creation.

To start, let’s look at a GOOD map. This is a little map of Colorado that I made for the post prior to this one:

Now, here’s an example of a map with The Bold Boundary Line problem:

And here’s an even worse one, because it adds in The No Figure-Ground Sin (to be discussed in a later post?):


Ambiguity Aversion Applied to Map Making

Have you heard of ambiguity aversion? This is the theory that when faced with an unknown probability, people are less willing to take a risk than when they are faced with a known probability. It was first defined by Daniel Ellsberg in 1961 (HT Peter L. Bernstein, Against the Gods: The Remarkable Story of Risk). In other words, if you know a lot about a subject, you will be more willing to take a risk on it than if you know little about it. Bernstein’s example is “People who play dart games…would rather play darts than games of chance, although the probability of success at darts is vague while the probability of success at games of chance is mathematically predetermined.”

Reading about this the other day reminded me of a hypothesis that I proposed over a year ago at Ignite Spatial Northern Colorado. The hypothesis is that a professional cartographer probably creates twice as many drafts of a map prior to publication as a novice does.

There are a couple of big risks involved with creating twice as many map drafts. For one, there is a time risk: will putting in twice the time and effort produce a much better map or only a marginally better map? For two, there is an emotional risk: will allowing the map to be peer-reviewed (peer-review being a likely contributing factor toward increasing the number of drafts required) be too upsetting? Don’t take that second risk too lightly. A map is much like a piece of art; without a doubt, at least part of the map maker’s soul has been poured into it.

A professional cartographer has learned that these risks are worth taking. A novice cartographer may go to final publication before putting enough time into making the map better via extra drafts and before doing any peer-review. Those risks create an ambiguity in terms of potential return for a novice.

Now, the parallel I’m drawing here isn’t completely clear. With ambiguity aversion, both outcomes could have equal chances for success, or the more ambiguous one, as in Bernstein’s example, may even have a better chance for success than the known risk. With my map example, the ambiguity lies mostly with the novice, who doesn’t know that making more drafts can make the map better and therefore stops before a truly great map is produced.

However, the analogy falls apart a bit when considering the professional cartographer: the professional knows the risks of both actions (more drafts or finalizing prematurely) and, with experience, is better able to choose between them. In this way I can’t draw a completely accurate example of ambiguity aversion.

The hope is that this discussion still helps to further your understanding of the risks associated with making extra map drafts, and persuades you that those risks are worth the extra effort.


Mapping Secrets from the New York Times Graphics Department: The AAG Talk

On Monday of this week the New York Times graphics department gave a talk at the AAG conference to a standing-room-only audience. The talk, titled Mapping the News in the Age of Visualizations: the Art and Science of the NY Times Graphics Department, was obviously a popular one at the conference, so I’ll attempt to summarize some of the things that I got out of it in this post.

Of the 22 people in the NYT graphics department, four presented during this talk: Matthew Bloch, Matthew Ericson, Archie Tse, and Jeremy White. They started by going through a case-study of some of the decision making that has to go on when presenting data visually for the news by describing various ways that the 2008 elections results could be mapped.

First, there’s the typical blue/red by state mapping technique that most everyone is familiar with where states that voted primarily for Obama were colored blue and states that voted primarily for McCain were colored red.

Then they went through slides showing other ways of mapping the data to better reflect the population distribution. For example, removing all counties with fewer than 3 persons per square mile (colored white) created a map with a more balanced red/blue scheme than the typical one. They also tried extruding cities as 3D bars reflecting population, but the New England cities and Los Angeles overwhelmed that map. They tried cartograms as well, but these distorted the geography in the middle of the country so much that you couldn’t tell which color belonged to which city in, say, Texas. One of the more successful maps was one that depicted the party shift from the 2004 election to the 2008 election, shown here:

Now, things were moving quickly in the talk and I’m not sure if I’ve got this down right, but there was a map that got a lot of critical acclaim, and I believe it was this party shift map. However (and again, I hope this is the map that they referred to) some did not like the map because it was not the typical election map, supposedly causing some confusion. Thus we arrive at learning point #1:

If you do something differently when people are expecting something that has always been done a certain way, make sure you make it very clear that the map is DIFFERENT. There was a remark that perhaps some day we will teach people that all maps require a certain amount of time for interpretation by the reader, but until then, people may draw the wrong conclusions if they read the map as though it were a standard election map.

Before this post gets too long, what I’m going to do is just summarize the rest of what I thought were the major learning points. A lot of these came from the question/answer part of the talk. If you were at that session and want to chime in with other bits that I missed please do so. This is certainly not a comprehensive report on the talk since that wasn’t what I set out to do.

  • They don’t keep a strict style book. It did eventually come out that they do, indeed, have a color book and a typography book with standard color schemes and 15-20 typefaces, but it sounded like they may not adhere (at least to the color styles – I’d guess they don’t have a lot of leeway with fonts) to these very strictly. Regarding the common style found in all their graphics from print maps to interactive maps and data-displays, it sounds like they do quite a bit of review and eventually everyone winds up with about the same aesthetic. It was noted that they strive for a clear and simple presentation.
  • Someone asked what tools they use. I jotted down as many as I heard, though I may not have caught them all; here are a few: ArcGIS, Illustrator (the endpoint for all print maps), MAPublisher, R, various APIs, OSM, and TileMill.
  • They recommend that those who are in college or new to the field learn JavaScript. They also recommend that whatever language you learn, you know enough of the fundamentals of programming to be able to learn a new language in 2 weeks.
  • Their design process consists of a lot of preliminary sketching. TAKE NOTE!!!
  • When asked how they deal with data of high uncertainty, they jokingly answered that they don’t know. Their real answer was something along the lines of omitting that kind of data from the display. What they mean by this, I assume, is if parts of the country (for example) have uncertain election returns, those areas are colored white instead of red or blue.
  • There was some discussion on how they serve up their interactive maps. Not being a complete expert in this arena, what I got out of their answer was that they can’t do it the normal way because they have too much traffic. Therefore, they heavily cache the maps so that not every user triggers a fresh call to the database; perhaps only 1 out of 20 users does, while the rest get snapshots.
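That caching idea is just the classic cache-aside pattern. Here's a minimal sketch in Python; the class and function names (`TileCache`, `render_from_database`) are my own illustrative choices, not anything from the NYT's actual stack:

```python
import time

class TileCache:
    """Tiny cache-aside sketch: serve cached snapshots so that most
    requests never touch the database."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (timestamp, payload)

    def get(self, key, render_fresh):
        entry = self.store.get(key)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]          # cache hit: serve the stored snapshot
        payload = render_fresh(key)  # cache miss: hit the database once
        self.store[key] = (time.time(), payload)
        return payload

# Simulate 20 users requesting the same map tile.
calls = []
def render_from_database(key):
    calls.append(key)  # track how often the "database" is actually hit
    return f"tile for {key}"

cache = TileCache(ttl_seconds=60)
for _ in range(20):
    cache.get("election-2008/z4/x3/y5", render_from_database)
print(len(calls))  # the database was hit once, not 20 times
```

In a real deployment the cache would sit in a CDN or a shared store rather than in-process memory, but the principle is the same: the expiry window controls how stale a snapshot users might see.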

For a few examples of great New York Times maps, see this post on Map Elegance: Putting the Data First.


Introduction to Classifying Map Data

Step 1) Determine if there is a standard for data classification that you want to use. For example, analyses of impervious surface in the Northwest using 30-meter-resolution data are often split at class breaks of 5%, 10%, and 20%, because these are important breakpoints for environmental degradation in the Northwest*. Likewise, if there is a set of intuitive classes that makes sense for the visualization, use those. Otherwise proceed to Step 2.

Step 2) Graph the data values. Determine if the data are skewed or normally distributed.

Step 3) Consult this chart as a starting point.

Step 4) Read more about classifications in a GIS text. Other considerations when classifying data include whether or not to normalize the data and whether or not the data might be suitable for classification by spatial proximity.

*However, when using finer resolution data, we’ve found that these values may not be applicable.
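To make Steps 2 and 3 a little more concrete, here's a minimal sketch in Python using NumPy. The function name and the crude skew test are my own illustrative choices, not a cartographic standard: it uses quantile (equal-count) breaks for skewed data and equal-interval breaks otherwise.

```python
import numpy as np

def class_breaks(values, n_classes=5, skewed=None):
    """Suggest class breaks: quantiles for skewed data, equal intervals otherwise."""
    values = np.asarray(values, dtype=float)
    if skewed is None:
        # Crude skew check: is the mean far from the median, relative to the spread?
        skewed = abs(values.mean() - np.median(values)) > 0.25 * values.std()
    if skewed:
        # Quantile breaks put roughly the same number of features in each class,
        # so a long tail doesn't leave most classes nearly empty.
        qs = np.linspace(0, 1, n_classes + 1)
        return np.quantile(values, qs)
    # Equal-interval breaks divide the data range evenly.
    return np.linspace(values.min(), values.max(), n_classes + 1)

# Example: right-skewed values (e.g., impervious-surface percentages)
data = [1, 2, 2, 3, 4, 5, 7, 9, 15, 40]
print(class_breaks(data, n_classes=4))
```

In practice you would graph the data first (Step 2) rather than rely on an automatic test, and you would round the resulting breaks to legend-friendly numbers.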


Lessons From the Road

We just got back from a road trip to Washington State by way of CO, WY, UT, ID, and OR. Here is the accumulated mapping wisdom from the experience:

  • The paper road atlas is easier to pass around the car than the smartphone GPS navigator, and less likely to incite parental admonishment if sticky fingers land on it.
  • Goes without saying though: GPS is a great thing to have at night when you need a hotel and everything in your current location is booked.
  • I wish I had gotten a picture of the I-80 road sign maps. These are gigantic maps that show way too much information for a driver going 75 to be able to understand. However, I suppose if you were turning around due to a winter road closure then these are useful.
  • When you’ve been driving a while, you and your spouse can come up with such gems as, “You need to know where the polygon is in order to be able to think outside of it.”
  • Don’t forget to enjoy the view:


My Other Articles

If you’d like to read some of my older articles, going back to 2007, take a look at my other site’s publications page. I was posting articles to that site for a long time before starting this blog.
