A Cartographer’s Gifkit
- When the colors, typography, labels, and features harmonize perfectly on each page in the map atlas you just created.
- When one small change in the cartography causes a cascade of unintended consequences.
- Your reaction when someone in a frog suit tells you to make that color a little more peachy.
- When your first d3.js interactive map actually works.
- When someone tries to rein in your cartographic profligacy but you badass onward.
A Cartographer's Gifkit. http://t.co/stX1SHdV4u
— Gretchen Peterson (@PetersonGIS) September 18, 2015
TGIF! #gistribe https://t.co/4rrpFU8c2G
— Kiri Carini (@tomatopurl) September 18, 2015
This is awesome: @PetersonGIS “When your first d3.js map actually works” and other GIFs http://t.co/UrRBzUZ6cB pic.twitter.com/o0YIuJEjpQ
— Alberto Cairo (@albertocairo) September 18, 2015
A bit of humour #cartographyhumour https://t.co/DRONgbXId3
— Geomatics UCT (@GeomaticsUCT) September 18, 2015
Huge Increase in Sharability by Combining Git and QGIS
After tweeting today about the Unmitigated Amazingness that is a QGIS + Git workflow, someone suggested that I write a blog about my experiences in this regard. Unfortunately today is a deadline day for a portion of what will become my next book,* so I can’t put a lot of time into a full-blown explanation of how this workflow will CHANGE your life. But I can give you a taste.
To that end, here’s the workflow in a nutshell. I realize I might be leaving out some important bits of information, and because I suspect there are a lot of people out there who’ve never used this workflow before in their life, I’m deliberately not using the technical Git terms pull, push, etc., just to keep it simple:
- You begin by installing Git on your machine
- Unless you want to use command-line Git, the two choices that I’m familiar with are the following combinations: Bitbucket for your online stuff** with SourceTree to manage updating that stuff, OR GitHub for your online stuff with GitHub Desktop to manage updating that stuff
- You create a project (aka “repo”) on Bitbucket or GitHub
- Copy it locally via SourceTree or GitHub Desktop
- (alternatively you can create it locally and then create it in the cloud)
- I suggest that all the geodata you’ll use goes in one folder within the repo while all the QGIS projects you create go in another; any images or other odd things that you need in your QGIS projects could go in a Misc folder
- You do your work normally: create a QGIS project, add data, but do it all within that repo folder on your machine
- Open SourceTree or GitHub Desktop on your machine and it’ll tell you that you made changes, such as data you added or a project you created. You can choose whether you want all of that to be put in your Bitbucket or GitHub cloud; if you do, you use one of those programs to sync it up with your cloud repo
- Your collaborators simply use their own SourceTree or GitHub Desktop programs to put that project and its data on their machines exactly as you uploaded it. If they make changes that they want you to see, they can sync those up too, and then your SourceTree or GitHub Desktop alerts you about the changes
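For the curious, the steps above can be sketched in plain command-line Git, which is roughly what SourceTree and GitHub Desktop are doing behind the scenes. This is a minimal, local-only sketch: every name in it (map-project, geodata, parcels.geojson, atlas.qgs) is a made-up example, and the collaborator clones from a local path here instead of from a Bitbucket or GitHub URL.

```shell
# What the GUI tools do under the hood, sketched with local paths only.
mkdir map-project && cd map-project
git init -q

# Suggested repo layout: geodata in one folder, QGIS projects in another
mkdir geodata qgis-projects misc
touch geodata/parcels.geojson qgis-projects/atlas.qgs

# "Sync it up": stage and record your changes (the GUI's commit step)
git add .
git -c user.name="Demo" -c user.email="demo@example.com" \
    commit -q -m "Add parcels data and atlas project"

# A collaborator "copies it locally" (the GUI's clone step); with a real
# Bitbucket/GitHub repo the path below would be the repo's URL instead
cd ..
git clone -q map-project collaborator-copy
ls collaborator-copy/qgis-projects   # prints: atlas.qgs
```

Because the clone reproduces the repo’s folder structure exactly, relative data source paths saved in the QGIS project resolve on the collaborator’s machine without any repair.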
And guess what?! In this way your QGIS project and all the files it uses are easily synced with other people. You don’t have to zip anything up. You don’t have to locate all the places where you put your data because you’ve already put it all in that repo/geodata folder. There is NO repairing of data source paths on your collaborator’s end! Think of the possibilities! It is truly a wonderful thing.
Now, I really am sure that I’ve left a whole lot of info out while trying to create this simple bird’s-eye view of the process but hopefully this provides a taste of the possibilities so that you can go learn more. After using both the Bitbucket/SourceTree workflow and the GitHub/GitHub Desktop workflow I personally find the GitHub/GitHub Desktop workflow to be a bit easier. Its desktop program is a little more streamlined as it “exposes” less of the advanced capabilities.
————Edited 9/1/2015 to add: Soon after posting this a reader pointed out that James Fee and I had coincidentally written about similar topics on our blogs yesterday. His topic was spatial DATA versioning while mine was spatial PROJECT versioning. To be clear, the project-sharing that I’m talking about in this post doesn’t really involve changing data at all. In fact, what I’ve been doing is collaborating with someone else on cartography designs using QGIS, and we needed a way to see each other’s designs (i.e., QGIS projects), tweak them, and send them back and forth. So yes, while we do store spatial data in our git repos, we aren’t concerned about that data changing, just the styling of the data within the QGIS projects themselves. Fee explains it much better in his follow-up post, GIS and Git. ————
*First public hint about my next book: it will be about cartography! 😉
**Highly technical here
Very simple overview of how to use a QGIS-Git workflow to dramatically increase sharability: http://t.co/WIC9hkshSl
— Gretchen Peterson (@PetersonGIS) August 31, 2015
@PetersonGIS it’s a great workflow. How do you deal with changes in binary data? Potentially huge repo if you commit many changes to .shp’s
— Kristian Evers (@kbevers) August 31, 2015
@kbevers @NickBearmanUK @PetersonGIS Put your map data in a “geogig” repo: https://t.co/GxAgrioqX3
— Barry Rowlingson (@geospacedman) August 31, 2015
.@erikfriesen @PetersonGIS being doing this for almost a year now, no regrets. easier than geogig albeit the granularity of diffs missed
— Antonio Locandro (@antoniolocandro) August 31, 2015
Simple version control for #QGIS with Git: http://t.co/vBt7YdZWWT /by @PetersonGIS
— zanols (@zanols) August 31, 2015
@PetersonGIS how large of files do you keep under version control?
— Nick Swanson-Hysell (@polarwander) August 31, 2015
@antoniolocandro @PetersonGIS I'm wondering if maybe a hybrid of the two would be manageable. Git for map project files, geogig for data.
— Erik Friesen (@erikfriesen) August 31, 2015
@erikfriesen if you aren't using a database for example using no shapefiles using GML or JSON in GitHub could be enough @PetersonGIS
— Antonio Locandro (@antoniolocandro) August 31, 2015
@erikfriesen we actually manage map doc in git and version control in postgis through custom solution @PetersonGIS
— Antonio Locandro (@antoniolocandro) September 1, 2015
@antoniolocandro @erikfriesen @PetersonGIS Stupid question: what if collaborator changes dir path in a repo, does it break everything?
— Christopher Rice (@colocarto) September 1, 2015
@polarwander The repo I'm working with right now has 1.2 GB in it and 200 files (there's an additional 400 small git files too).
— Gretchen Peterson (@PetersonGIS) September 1, 2015
@colocarto no stupid question, hmm it might for map project not sure for data itself @erikfriesen @PetersonGIS
— Antonio Locandro (@antoniolocandro) September 1, 2015
@PetersonGIS @antoniolocandro @erikfriesen Slick, can it do rollbacks too? Postgres support also?
— Christopher Rice (@colocarto) September 1, 2015
@antoniolocandro @PetersonGIS absolutely. I was just thinking of the case of using postgis for a datastore
— Erik Friesen (@erikfriesen) September 1, 2015
@erikfriesen for that you need geogig but project left boundless to eclipse nothing new in a while @colocarto @PetersonGIS
— Antonio Locandro (@antoniolocandro) September 1, 2015
@antoniolocandro @colocarto @PetersonGIS damn 
— Erik Friesen (@erikfriesen) September 1, 2015
@erikfriesen @antoniolocandro @PetersonGIS Agreed. If I'm using QGIS it feels strange not utilizing PostGIS
— Christopher Rice (@colocarto) September 1, 2015
@PetersonGIS and it's smooth sailing when there are changes to big files?
— Nick Swanson-Hysell (@polarwander) September 1, 2015
@PetersonGIS I ask as I was just at a workshop where they advised against using git for quite large files.
— Nick Swanson-Hysell (@polarwander) September 1, 2015
@PetersonGIS but as you say it is great to just have everything up on Github.
— Nick Swanson-Hysell (@polarwander) September 1, 2015
@PetersonGIS I was working on project where the .git directory of the repo was ballooning in size due to changes to graphic files
— Nick Swanson-Hysell (@polarwander) September 1, 2015
@polarwander @PetersonGIS Did they say why?
— Cian Dawson (@cbdawson) September 1, 2015
@cbdawson @PetersonGIS but it seems to work pretty well to have large files in repos and it is nice to treat all file types the same way
— Nick Swanson-Hysell (@polarwander) September 1, 2015
@spatialadjusted @cageyjames It sounds like @PetersonGIS is thinking along the same lines http://t.co/OffokmVufL
— Phil Knight (@PhilipWhere) September 1, 2015
@PhilipWhere @spatialadjusted GeoGit = versioning for data, whereas I was speaking more to versioning a project.
— Gretchen Peterson (@PetersonGIS) September 1, 2015
@PetersonGIS great blog [again] Gretchen, what format data are you using though? Is it all shapefile?
— Nicholas Duggan (@Dragons8mycat) September 1, 2015
@Dragons8mycat In our case we have shapefiles and a SpatialLite osm db of the Seattle area.
— Gretchen Peterson (@PetersonGIS) September 1, 2015
@PetersonGIS Just curious if your tried any branching/merging of your binary data files.
— Bill Dollins (@billdollins) September 1, 2015
@PetersonGIS Thx. Specifically, I want to see if this works: http://t.co/e37RfyMXcX
— Bill Dollins (@billdollins) September 2, 2015
Foundational Education vs Pinnacle Education
Posted by G.P. in Cartography Profession on August 20, 2015
In education there’s a debate over whether a student should start with foundational knowledge and build from there or whether they should start at a place (i.e., framework) that’s further along so that they can reach even higher at an earlier age than those who have to, for example, spend time memorizing addition and subtraction problems when they could have just relied on a calculator. (Apparently this is something they do in Sweden?)
Recently we in GIS land have brought up the question of whether a 12-week course that promises to teach you to be a data scientist is snake oil or not. Granted: an advanced statistics degree is a huge accomplishment, and we need some people who know those fundamentals.
The question is, though, what if we taught the tools (but not the theory, thus saving people time) to people who are experts in other disciplines? I think there’s definitely a place for modularized education like this. And providing this 12-week option–if the quality of the teaching is good–could enable more and better advances.
It’s kind of like saying: “we shouldn’t enable people without expertise in cartography or GIS analysis to make maps by providing shortened educational opportunities.” Well guess what? We didn’t provide shortened educational opportunities and still our entire profession has been up-ended over the past 10 years by people who have had virtually no expertise in GIS analysis or cartography.
The gut tells us that too much territoriality never leads to new thinking. If the 12-week course turns out to be a disappointment due to poor teaching, that’s certainly something to complain about. But to complain about the spirit of the course? Perhaps that’s sour grapes.
Disclaimer Idea
I’m thinking something like this for your next cartographic disclaimer? Adjusted for mappiness of course.
We DID NOT walk 500 mile.
And we WOULD NOT walk 500 more.
~ The Disclaimers.
— Ollie Garch (@ojedge) November 30, 2014
Too Much New Software, Too Many New Libraries, Even for the Nerds
Posted by G.P. in Cartography Profession, Design on July 28, 2015
I’ve been learning d3 and it’s been a fantastic learning experience. It’s a low-level JavaScript library for making interactive visualizations–including maps–inside SVGs, with nifty client-side rendering of datasets that can be updated easily with new data. Incidentally, Scott Murray’s book Interactive Data Visualization for the Web is a fantastic first resource, full of the most important bits to know and written in an accessible style.
So that’s been great, yes. But even the nerdiest among us have limits to how much of their day, their week, or even their year, they can jam with brand-new material. And that leads me to my thesis: when you are designing a new product, a new library, or a new piece of software, keep in mind that it isn’t just computer laypersons who have to leap a mental hurdle to even begin working with your product. It’s also people who have simply saturated the number of “new things” they can handle that week or that month, even if they are supremely computer-savvy individuals who would otherwise have very little trouble with your product.
In product design teams, the engineers and managers all have a tendency to think about the lowest common denominator when designing, and that’s certainly not a bad thing. But the computer scientists among us hit their limits from time to time too. So you’re not just in the business of making things simple for the newbies; you’re making your product easy to adopt by everyone.
My forays into d3 came at a time when I was granted a few hours of paid time to work on it, and that helped. It also came at a time when I was ready to really dive into something new. Don’t count on that being the case for all your users.
As usual I’ve reached the end of my little thesis statement with a feeling that it could be argued in the opposite direction as well (the downside of being analytical). So if you’d like to argue the opposite please go for it. And keep in mind the goal here: giving the best advice for teams building brand-new products as well as individuals who are building brand-new tools and libraries.
@PetersonGIS ok, now I get your point, there is a limit to the time one can spend on learning new stuff, and there is more stuff than ever
— Atle Frenvik Sveen (@atlefren) July 28, 2015
@PetersonGIS hardest thing about Unix philosophy. Do you think of ls, cp, mkdir as different programs? how do we package 2 work better?
— David Bitner (@bitnerd) July 28, 2015
@PetersonGIS and as a js-developer this is real: there are a new framework/library/build tool/package manager each week
— Atle Frenvik Sveen (@atlefren) July 28, 2015
@PetersonGIS so, I'll say my limit to learning new ways to do stuff I know is low, but the limit to new stuff (ie 3d, vector tiles) is not
— Atle Frenvik Sveen (@atlefren) July 28, 2015
@PetersonGIS Absolutely! I've ignored stuff like d3, too much to learn and work won't adopt it if I'm the bottleneck anyway.
— Adrian Lee (@mapsnstats) July 28, 2015
@atlefren @PetersonGIS see https://t.co/SkaWvicDeM
— Sean Gillies (@sgillies) July 28, 2015
Before even writing a single line of code, determining the best js lib/fw option is far more challenging than doing https://t.co/sDKgj09aaY
— Christopher Rice (@colocarto) July 28, 2015
@PetersonGIS Agreed. My approach 1) Choose a framework 2) Get proficient with it 3) Build apps with it 4) Ignore the cool kids
— Bill Dollins (@billdollins) July 28, 2015
@billdollins @PetersonGIS About #1, what is the deciding factor(s)?
— Christopher Rice (@colocarto) July 28, 2015
@billdollins @PetersonGIS 5) Don't reference the show "Friends" in blog post.
— Jonah Adkins (@jonahadkins) July 28, 2015
@jonahadkins @billdollins You secretly loved it.
— Gretchen Peterson (@PetersonGIS) July 28, 2015
@colocarto @PetersonGIS I wish I could say I had a scientific approach. Depends a lot on the connection I feel with the framework authors.
— Bill Dollins (@billdollins) July 28, 2015
New.js is now Old.js because you turned your back for a second https://t.co/0d68TdLcDn
— Silas Toms (@silasmerlin) July 28, 2015
@colocarto @PetersonGIS There are few better ways to make a psychological connection with someone than using their code.
— Bill Dollins (@billdollins) July 28, 2015
@jonahadkins @PetersonGIS The one with the Javascript framework.
— Bill Dollins (@billdollins) July 28, 2015
@billdollins @PetersonGIS Yes, and of course, the level of success that f/w as evidenced by case studies is a big one.
— Christopher Rice (@colocarto) July 28, 2015
@colocarto @PetersonGIS They by and large all do the same stuff, it's "how" that varies for the most part.
— Bill Dollins (@billdollins) July 28, 2015
@DonMeltz I prefer the term 'augmented human.' @colocarto @PetersonGIS
— Bill Dollins (@billdollins) July 28, 2015
.@PetersonGIS re your blog post http://t.co/QCbXCosJUn pic.twitter.com/EeroQn0wsN
— Brian Timoney (@briantimoney) July 28, 2015
@PetersonGIS learning stuff takes lot of mental power and you need downtime to interanlize all that,going deeper wins&new makes us fresh
— Kuba Kończyk (@KubaKonczyk) July 28, 2015
More Mark Twain, on His Paris Map
Posted by G.P. in Uncategorized on July 17, 2015
I previously wrote about Mark Twain’s Paris Map but I don’t believe I had come across this particular account of it in his autobiography at the time. I’ve been re-reading the autobiography lately and when I read the bit shown below I figured I should pass it along. It sets up the reasons why he made the map–in a fit of creativity resulting from the somberness of having just taken care of two people who ultimately died–and also goes into what the map’s effects were (or what he imagines they were) on those who saw it.
It seems as though Twain forgot to mention in this part of his autobiography the fact that the map had been printed in reverse. Elsewhere he says:
By an unimportant oversight I have engraved the map so that it reads wrong end first, except to left-handed people.