Archive for May, 2011
Remote Sensing Reveals Ancient Egyptian Cities
When I saw @cubitplanning’s tweet about the use of infrared imagery to detect Egyptian pyramids, I wasn’t sure what the linked article would be about. This is a MAJOR thing! It’s just absolutely amazing what you can see on those images – the ancient streets are very apparent in what was once a densely occupied city. As the article states:
Ancient Egyptians built their houses and structures out of mud brick, which is much denser than the soil that surrounds it, so the shapes of houses, temples and tombs can be seen.
Definitely check out the other image shown in the article, and the movie too, and see what you think. What an amazing discovery.
As a very incidental side note, also check out the use of not one but four zoom levels in the article’s opening illustration, indicated by three location boxes. Ambitious but effective.
Stavanger Maps Just Launched
The Stavanger Guide Maps website, featuring some beautiful maps of the Stavanger Region of Norway, launched today. The maps can be viewed in many different languages and are the culmination of what must have been A TREMENDOUS AMOUNT of work. Take a look at some of the details on these, as well as the overall design aesthetics, and they will serve as great inspiration pieces for your next work. They were made by Kevin-Paul Scarrott.
Using LiDAR to Calculate Tree Height Part II
In an earlier post I went over how I was using LiDAR to calculate tree height in the Pacific Northwest. Since that post there have been a lot of new developments on this project. The first thing to note is that, though there was some trepidation over the sheer quantity of data, it turned out to be a needless worry. As mentioned in the earlier post, one tile was processed rather quickly, but processing the 100 tiles that covered the whole study area (the Hood Canal watershed and on up to Port Townsend, for those familiar with the area) could have been a nightmare. Thankfully, the Puget Sound Consortium LiDAR data that I was using was already in DEM format as opposed to raw point data. That was key, since converting the raw point data to a surface grid is, from what I hear, a very processing-intensive task.*
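As a side note for the curious: the simplest version of that point-to-grid conversion just bins the returns into cells and keeps the highest one per cell. Here’s a rough sketch using the laspy library and a hypothetical tile name; a real gridding routine would also interpolate empty cells and filter noise, which is part of what makes it so processing-intensive:

```python
import numpy as np
import laspy  # reads .las point files

# Load raw LiDAR returns (hypothetical tile name).
las = laspy.read("tile_001.las")
x = np.asarray(las.x)
y = np.asarray(las.y)
z = np.asarray(las.z)

cell = 6.0  # 6-foot cells, matching the consortium DEMs
cols = ((x - x.min()) / cell).astype(int)
rows = ((y.max() - y) / cell).astype(int)

# Keep the highest return in each cell as the top surface.
grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
np.fmax.at(grid, (rows, cols), z)
```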
So basically there was the task of subtracting the ground surface from the top surface. The ground surface was already mosaicked into a single dataset while the top surface was in tiles. I went ahead and mosaicked the top surface tiles, which did take some time but was not completely onerous – somewhere in the range of 4 to 8 hours from start to finish, though I didn’t time it exactly and was doing other things between batches. With that done, the subtraction calculation did not take too much time either.
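I did all of this in desktop GIS, but conceptually it’s just raster algebra. For those who prefer scripting, here’s a minimal sketch of the same two steps in Python with rasterio; the file and directory names are hypothetical, and it assumes the mosaicked top surface and the ground surface share the same grid:

```python
import glob

import rasterio
from rasterio.merge import merge

# Mosaic the top-surface (DSM) tiles into one raster.
tiles = [rasterio.open(p) for p in sorted(glob.glob("dsm_tiles/*.tif"))]
dsm, transform = merge(tiles)  # shape: (bands, rows, cols)

# Read the already-mosaicked ground surface, assuming it lines up
# on the same grid as the merged top surface.
with rasterio.open("ground_dem.tif") as src:
    dem = src.read(1)
    profile = src.profile

# Tree height = top surface minus ground surface.
height = dsm[0] - dem
height[height < 0] = 0  # clamp small negative artifacts

profile.update(dtype=rasterio.float32, count=1, transform=transform)
with rasterio.open("tree_height.tif", "w", **profile) as dst:
    dst.write(height.astype(rasterio.float32), 1)

for t in tiles:
    t.close()
```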
Now for the more interesting stuff: the results. Here’s an image showing the Lower Dosewallips River. The blue line is the current hydrology layer (you can see it isn’t perfect, of course), shown along with a buffer of that line extending 200′ on either side. The tree height results are transparent so that you can see through them to a high-resolution NAIP 2006 image underneath. Dark green represents trees 100′ – 200′ tall and light green represents trees 30′ – 100′ tall.
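For completeness, the classification breaks and the 200′ corridor are simple to express in code as well. A hedged sketch, again in Python, with hypothetical file names and the assumption that everything is in a feet-based projected coordinate system:

```python
import geopandas as gpd
import numpy as np
import rasterio

# Reclassify the height raster into the two display classes.
with rasterio.open("tree_height.tif") as src:  # output of the sketch above
    height = src.read(1)

classes = np.zeros_like(height, dtype=np.uint8)
classes[(height >= 30) & (height < 100)] = 1    # light green: 30'-100'
classes[(height >= 100) & (height <= 200)] = 2  # dark green: 100'-200'

# Buffer the hydrology lines 200 feet on either side. Assumes the
# layer uses a feet-based projected CRS (e.g., WA State Plane).
streams = gpd.read_file("hydrology.shp")  # hypothetical file name
corridor = streams.buffer(200)  # buffer units follow the CRS, here feet
```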
A problem with the data, visible in this picture, became quite apparent to anyone who knows this area: the analysis did not pick up any of the deciduous trees. Since this area is dominated by coniferous forest – which shows up remarkably well when compared to the image – this wouldn’t be a problem except for the fact that the river corridors are one of the major analysis sub-units. And in the river corridors, patches of deciduous trees, such as alder, can be more prominent and are important to measure in this project. The reason for the problem is simple: all of the LiDAR was flown during the winter, i.e., leaf-off, making the deciduous trees more or less disappear.
The great thing is, there is some LiDAR data available for a few of the study area stream corridors that was commissioned separately and flown during the summer, when the deciduous trees should show up. I just got that data a few days ago, processed it for height, and used the same symbology as for the other data to show a comparison of the same area as the last screenshot:
That data is also higher resolution than the original LiDAR I was working with (3 feet versus 6 feet). You can see that this summer LiDAR was much better at picking up all the trees. There are a few patches of green on the imagery that don’t seem to show up in the LiDAR height analysis, but perhaps those trees haven’t reached 30′ yet, since only trees 30′ and taller are shown in green in this classification.
Unfortunately, the leaf-on data isn’t available for the entirety of the study area. This brings us to one of the age-old GIS analyst’s questions: do we cobble together better data with worse data even though it means the error is highly spatially variable, or do we stick with the worse data for the whole study area since it enables an apples-to-apples comparison across the landscape? The answer is: it depends. It depends on how the data will ultimately be used. Right now we don’t know the exact questions we are trying to answer, though we do know that some comparisons from basin to basin will be made. So for now this question, for this particular project, can be left unanswered.
*For those in the know, I’d welcome your comments on this.
Workflow and the Experienced Professional
I’ve been thinking lately about how my workflow has changed since I started out in GIS 12 years ago. One obvious change is that I no longer take copious workflow notes in spiral-bound notebooks and Word documents like I used to. It used to be very important that every step was recorded for every intermediate dataset created and every analysis conducted, so that going back to a previous iteration after going down a wrong path would be less painful.
In basic terms, this meant that if I was currently working on ForestConversionStats6_clip (!) and discovered that I needed to go back to the non-clipped, unresampled data from yesterday, I could do so by looking in the notes, finding the dataset labeled, say, ForestConversionStats2, and starting again from there. I’m not kidding, I had MANY notebooks with dataset names and processes.
I still take notes, usually just in Notepad and saved to the project directory that I’m working in, but they focus more on specific parameters or perhaps long selection strings that I can just copy and paste when needed. Over the last five years or so it’s been nice to finally feel that the steps to create a good analytical result are second nature (not for every single possible thing that can be done, but for many of the common tasks), enough that I don’t need to backtrack or write everything down.
With specific regard to cartography, this type of professional growth manifests differently. The cartographic workflow is very often not written down and not revisited. In fact, it can be difficult to tell whether the end result (the map) is even meeting the original needs – or at least it is harder to discern than the end result of an analysis. Either an analysis works or it doesn’t, and if you are a good scientist you might even run some error tests to prove it. But with cartography there are two barriers: #1, you don’t know if the map is as good as it could be, and #2, even if you realize it isn’t as good as it could be, you don’t feel like going back and starting from a previous step.
The seasoned cartography professional is much more adept at overcoming those two barriers than the novice. The seasoned map maker, though this may come as a surprise, is more critical of the end result and more knowledgeable about how that product should look. They will do the equivalent of error-testing an analysis: they will send the map out for peer review, whether formal or informal. They will also take the time to change things based on that self-critical assessment and that peer review, even if it means completely reversing one of the key decisions made at the start.
For example, let’s say that a map needs to be made for display at a public meeting about a proposed nature area. The cartographer makes a key decision at the beginning of the project to zoom out quite a bit from the nature area so that a nearby city is shown, reasoning that the overall context of the nature area in relation to the city is important. However, when the map is done it is apparent that it is too busy and the nature area has wound up not being the central focus it should be. The experienced cartographer will go back and start over. The novice may just keep it as-is.
I suppose you could argue that a seasoned cartographer doesn’t make bad choices to begin with, and it may be that design decisions like that become easier and better with experience. However, no matter how experienced we are, there is always a chance that we will blunder. It’s whether we fix our blunders that matters.
My Other Articles
Posted by Gretchen in Best Practices on May 19, 2011
If you’d like to read some of my older articles, going back to 2007, take a look at my other site’s publications page. I was posting articles to that site for a long time before starting this blog.