Pairwise Primer


Note: This is a really basic primer that completely leaves out all the math behind this technique. Several years ago I built a spreadsheet that does all the calculations, which you could modify. Ask me if you want it…

Creating a GIS decision model often involves weighting criteria in order to reflect their relative contribution to the model or their effect on the variable being measured. To do this, we usually start by ranking the inputs to the model in order of importance, then we try to set some weights according to that ranking. The ranking and the weights can be decided by one person or by a group of people, but either way the process often winds up being a “whoever shouts the loudest wins” kind of thing as opposed to a disciplined, scientific ranking based on facts.

For example, let’s say you have a simple erosion model with some inputs: aspect, slope, and soil type. It’s tempting to use your subjective reasoning, based on intuition and prior experience, to give a weight of, say, 40% to slope, 10% to aspect, and 50% to soil type (I’m completely making these up). But do we know for sure that slope should be 40% and not perhaps 45%? While it isn’t possible to get around some subjectivity, it is possible to do this in a more rigorous manner.
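Just to make the arithmetic concrete, here’s a minimal sketch of that kind of weighted overlay in Python. The tiny NumPy arrays are stand-ins for real, reclassified criterion layers, and the weights are the made-up 40/10/50 split from above; none of this is tied to any particular GIS package.

```python
import numpy as np

# Tiny stand-in rasters for the three criteria, already reclassified
# onto a common 0-1 scale (values invented for illustration).
slope_score  = np.array([[0.2, 0.8], [0.5, 0.9]])
aspect_score = np.array([[0.6, 0.3], [0.7, 0.1]])
soil_score   = np.array([[0.4, 0.5], [0.2, 0.8]])

# The made-up weights from the text; they should sum to 1.
weights = {"slope": 0.40, "aspect": 0.10, "soil": 0.50}

erosion_potential = (weights["slope"]  * slope_score
                     + weights["aspect"] * aspect_score
                     + weights["soil"]   * soil_score)
print(erosion_potential)
```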

I’ll go over the basics of the pairwise comparison method here (for a very thorough discussion and implementation plan, you must read GIS and Multicriteria Decision Analysis by Malczewski). Essentially, you take all of the model’s criteria that you want to weight, put them in a matrix where they are repeated on both the horizontal and vertical axes, then fill in the cells where they meet with numbers representing their relative importance. The brilliance of this is that you are only comparing two criteria at a time instead of trying to rank the whole list at once.

For each comparison you ask yourself or your team: is criterion X way more important than criterion Y, somewhat more important, or equally important? That’s essentially the question, though when you run an actual pairwise comparison you use 9 gradations of importance running the gamut from equal importance to extreme importance, with 1 meaning the two criteria are equally important and 9 meaning the horizontal criterion is extremely more important than the vertical criterion.
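Here’s one way that matrix might look in code for the erosion example. The judgments themselves are invented purely for illustration; the point is that you only supply the upper triangle, and the lower triangle is just the reciprocals.

```python
import numpy as np

criteria = ["slope", "aspect", "soil"]

# Judgments for the upper triangle, on the standard 1-9 scale
# (1 = equal importance, 9 = extreme importance). A value below 1
# means the second criterion of the pair is the more important one.
# These particular numbers are invented for the sake of the example.
judgments = {
    ("slope", "aspect"): 5,      # slope matters quite a bit more than aspect
    ("slope", "soil"):   1 / 2,  # soil matters somewhat more than slope
    ("aspect", "soil"):  1 / 7,  # soil matters far more than aspect
}

n = len(criteria)
matrix = np.ones((n, n))  # the diagonal stays 1: everything equals itself
for (a, b), value in judgments.items():
    i, j = criteria.index(a), criteria.index(b)
    matrix[i, j] = value
    matrix[j, i] = 1 / value  # the lower triangle holds the reciprocals

print(matrix)
```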

Once all of the numbers are filled into the matrix you run a handful of calculations on them and at the end of it all you get an ordered list of criteria with a weight for each. But wait, there’s more! You also get to run a consistency check on the numbers to make sure your judgments don’t contradict one another. If, for example, you said slope is much more important than aspect and aspect is much more important than soil type, but then also rated soil type as more important than slope, the check will fail and you’ll know to revisit your comparisons. And if every pair comes out as essentially equal, the weights themselves will come out essentially equal, which is a sign that all the criteria might as well go into the model without being weighted at all.
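For the curious, here’s a rough sketch of those calculations using the standard AHP recipe (which Malczewski covers): weights from the principal eigenvector of the matrix, and a consistency ratio to check the judgments. The matrix is the same illustrative one from the sketch above, and the random-index values are Saaty’s published constants.

```python
import numpy as np

criteria = ["slope", "aspect", "soil"]
# The same illustrative matrix built in the previous sketch.
matrix = np.array([
    [1.0,   5.0, 1 / 2],
    [1 / 5, 1.0, 1 / 7],
    [2.0,   7.0, 1.0],
])

# Weights come from the principal eigenvector of the matrix, normalized
# so they sum to 1 (a row geometric mean gives nearly the same answer).
eigenvalues, eigenvectors = np.linalg.eig(matrix)
principal = np.argmax(eigenvalues.real)
weights = eigenvectors[:, principal].real
weights = weights / weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# where RI is Saaty's random index for a matrix of this size.
# A CR below about 0.10 is the usual rule of thumb for acceptable judgments.
n = matrix.shape[0]
lambda_max = eigenvalues.real[principal]
ci = (lambda_max - n) / (n - 1)
random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}
cr = ci / random_index[n] if random_index[n] else 0.0

for name, weight in sorted(zip(criteria, weights), key=lambda pair: -pair[1]):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {cr:.3f}")
```

With these particular invented judgments the consistency ratio comes out well under the 0.10 rule of thumb, so the comparisons hang together; a spreadsheet version of the same arithmetic works just as well.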

If this sounds like the path you want to follow for your next modeling endeavor, check out the Malczewski book mentioned above and follow his calculations. I recommend putting all of the calculations in a spreadsheet so that you can go back and change your comparisons if you find that you need to.

  1. #1 by Parker Wittman on February 11, 2014 - 11:28 am

    Thanks for the post Gretchen. I remember a talk you gave on this subject about 5 years ago at WAURISA (in Bellevue). I love hounding my colleagues about this stuff. It’s such an important intellectual/analytical exercise that gets overlooked in hasty spatial analysis.

    This also reminds me that I need to get Malczewski’s book.
