It is with great pleasure that I announce the award of the 2016 “Best Map” to Bernhard Jenny (RMIT University), Johannes Liem (City University London), Bojan Savric (Esri Inc) and William M. Putman (Goddard Space Flight Center) for their animated map visualising a year of changes to Earth’s CO2, titled “Interactive video maps: A year in the life of Earth’s CO2”. When first loaded it appears as an animated map of the world, showing just how dynamic this part of the Earth system is. But interact with the map - you find it’s pannable and zoomable - and all other ways of interacting with 4D data seem mundane in comparison.
The awards committee noted the remarkable interactive animation: something that both tells a story and allows you to investigate. It is a big leap forward for interactive cartography, drawing the viewer in and allowing them to formulate potential global implications. For these reasons it is a deserving winner of this year’s award.
As you will see with this Editorial, it has been a year of intense activity at the Journal of Maps (JoM). The most important announcement is the move of JoM back to an Open Access (OA) publishing model, which was effective from 1st September 2016.
Art-geoscience: exploring interdisciplinary representations of space and place
We would like to invite contributions to a special issue of the Journal of Maps devoted to interdisciplinary collaborations between the arts and sciences, with a specific focus upon an exploration of a location using, at least in part, some form of mapping and ideally involving the collaboration of artists and scientists.
PURPOSE The fundamental basis for this special issue is the growing interest in interdisciplinary collaboration, and in particular the crossover between the arts and sciences. Art is seen as an important component in exploring and explaining science, whilst science offers new avenues for creative investigation and the recording of phenomena. This is a general call for a special issue entitled ‘art-geoscience: exploring interdisciplinary representations of space and place’ and provides an opportunity for collaborative researchers to present their work.
BACKGROUND Recent years have seen increased collaboration between the arts and sciences, with conferences, exhibitions and residencies devoted to exploring the inspirations and mutual benefits that can arise from activities that bridge the two spheres. Subjects such as biology, chemistry, and global climate change commonly feature prominently in such collaborations, but many of the geosciences (e.g. geomorphology, geology, geophysics) are less well represented.
Despite rapid movements towards global connectedness, with people, goods, services and scientific data now moving at speed over vast distances, space and place still retain great power in shaping the world. Many visual art forms can help to document and represent such themes, especially when combined with various forms of mapping.
TOPICS Without constraining the range of topics that are potentially suitable for inclusion in the special issue, we offer the following as examples:
use of scientific methods or techniques specifically for an artistic investigation of a location;
scientific data already collected for a location-based project that are re-used or re-purposed for artistic means;
artistic data or outputs that are re-purposed and re-used for a location-based, scientific project;
use of artistic techniques to investigate phenomena and/or enhance presentation and communication of scientific data.
The artistic medium can be anything that can be reasonably explained or presented within the journal. Beyond the inclusion of traditional mapping products (see below), we are keen to see submissions that may also use 3D models, video or audio to enable space- and place-based representations, or videos that present and explore the artistic work itself.
SUBMISSION All papers are expected to consist of a map or series of maps (loosely and broadly defined to include various forms of spatial representation) accompanied by brief explanatory text. Papers should be bespoke, and the mapping of good quality. All papers in this special issue will be peer reviewed. To submit a paper, authors should do the following:
1. Submit a short draft (500 word limit) outlining the key themes and scope of the paper, where possible including example mapping, by 28 February 2017.
Abstract selection will be by the special issue editorial team. You will receive a notification by 31 March 2017.
2. Submit a completed paper (4000 word limit) by 30 June 2017.
3. The special issue will be published in 2018.
Ideally, the work would involve the collaboration of artists and scientists.
The special issue editorial team are happy to discuss ideas for papers and their suitability with potential contributors prior to the short draft submission stage. Please email Mike Smith (firstname.lastname@example.org) or Stephen Tooth (email@example.com) in the first instance.
All submissions should be made via the Journal of Maps website (http://www.tandfonline.com/toc/tjom20/current) where further guidance on all aspects of submission can be found. Please note the journal is open access, with an article processing charge of £400.
Stephen Tooth, Aberystwyth University, UK; Mike Smith, Kingston University, UK; Heather Viles, University of Oxford, UK; Flora Parrott, Tintype, London, UK
I was recently teaching a class on introductory cartography where we were using a range of different socio-economic datasets, including the 2011 counties and middle super output areas (MSOAs) of the UK from the UK Data Service. These are (helpfully) made available in a range of different formats, including the ubiquitous shapefile. They are useful for choropleth mapping of socio-economic (census) data, for use as location maps, and when clipping other datasets to include topographic data on maps (e.g. Meridian 2).
One student wanted to generalise the polygons for the location map - thinking this would be easy, he went ahead and ran the toolbox tool but ended up with lots of sliver polygons. Crucially, as a shapefile doesn’t store topological relationships, the tool was generalising each polygon separately, resulting in very poor output. This was exacerbated by the fact that the borders were provided pre-generalised.
The obvious solution is to use a topological version of the data - which isn’t provided. The next step is therefore to create the topology in ArcGIS before generalising. Whilst not difficult, it is a little convoluted to achieve! I found this page particularly helpful and it provided the core of the processing (and remember, as with all computing instructions, you need to follow it to the letter!), which can be carried out in ArcCatalog. In short, the steps are:
1. Create a new geodatabase (either file or personal)
2. Create a new feature dataset within that
3. Import the shapefile into the feature dataset
4. Create a new topology in the feature dataset
4a. For the topology you will need to use two rules: (a) no gaps and (b) no overlap
4b. This will throw an error where you have coastlines because (obviously) you have a gap!
5. At this point you have built topology for the dataset and can proceed to simplify/generalise the borders. Note that there will be multipart polygons present, and if (like me) you want to delete any small islands to clean up the data for use as a location map then you will need to run the “multipart to singlepart” toolbox tool.
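For those who prefer scripting, the same sequence can be driven from Python with arcpy. The sketch below is purely illustrative - the paths, dataset names and simplification tolerance are placeholders of my own, and it obviously needs an ArcGIS install to actually run (hence the import inside the function):

```python
def build_topology_and_simplify(shp, workdir, spatial_ref):
    """Hypothetical arcpy version of the ArcCatalog steps above.

    shp: input shapefile; workdir: output folder; spatial_ref: an
    arcpy.SpatialReference for the feature dataset.
    """
    import arcpy  # only available with an ArcGIS install

    # 1. Create a new file geodatabase
    gdb = arcpy.CreateFileGDB_management(workdir, "topo.gdb").getOutput(0)
    # 2. Create a feature dataset within it
    fds = arcpy.CreateFeatureDataset_management(
        gdb, "areas", spatial_ref).getOutput(0)
    # 3. Import the shapefile into the feature dataset
    arcpy.FeatureClassToFeatureClass_conversion(shp, fds, "msoa")
    fc = fds + "/msoa"
    # 4. Build the topology with the two rules: no gaps, no overlaps
    topo = arcpy.CreateTopology_management(fds, "msoa_topo").getOutput(0)
    arcpy.AddFeatureClassToTopology_management(topo, fc, 1, 1)
    arcpy.AddRuleToTopology_management(topo, "Must Not Have Gaps (Area)", fc)
    arcpy.AddRuleToTopology_management(topo, "Must Not Overlap (Area)", fc)
    arcpy.ValidateTopology_management(topo)  # coastlines will flag gap errors
    # 5. Simplify the borders, then explode multipart polygons
    arcpy.SimplifyPolygon_cartography(fc, fds + "/msoa_simple",
                                      "POINT_REMOVE", "100 Meters")
    arcpy.MultipartToSinglepart_management(fds + "/msoa_simple",
                                           fds + "/msoa_single")
```

As with the manual route, the point is that the feature class lives inside a feature dataset with a topology before any generalisation happens.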
This all proved a little more long-winded than I was expecting, but such is the price of topology! That made me wonder if I could (easily) do this in QGIS, and my initial research suggests not. Yes, the latest versions of QGIS have the built-in Topology Checker plugin which checks topology (doh!), but as far as I’m aware there isn’t an open source file format that supports topology. The grown-up solution would be to use a PostGIS/PostgreSQL database, but this isn’t particularly useful when you want to distribute data. If anyone knows better (or can correct me) then please do get in touch!
… and the stories they tell. The Washington Post ran a nice story earlier this month mapping the extent of infrastructure in the US. This is in response to Donald Trump’s (sketchy) plans to invest in infrastructure projects. It was subsequently followed up with a nice blog post on how the maps were created and, in particular, the sources of data and an idea of the data wrangling going on behind the scenes. What’s telling here is the simplicity of the rendering and that journalists use QGIS because it’s free, but that Photoshop and Illustrator (rather than GIMP and Inkscape) are still the graphic artists’ tools of choice. I wonder if this would be any different if there was GIS expertise on their teams to support the graphic designers…
My big lesson was the importance of a simple message, and saying it the same way over and over. If you’re going to change it, change it in a big way, and make sure everyone knows it’s a change. Otherwise keep it static.
…and that goes for any type of communication. Keep the core message simple to understand because whilst the implications may be profound, your target audience needs to be able to take it in and interpret it unequivocally.
Sense About Science kicked off their Evidence Matters campaign earlier this year and this month held a meeting in parliament to push the importance of policy decisions based upon factual evidence. That is, making decisions that have an impact upon society for the benefit of all, not simply to push a political agenda or because of what politicians believe when it is not what the evidence shows. And the corollary: once evidence has been collected and presented, don’t dismiss it just because you don’t like it (the so-called “post-truth society”). It’s critically important for the community we share and the environment we inhabit. There shouldn’t be elitist strongholds on decision making, but egalitarian approaches that value all.
Way back in 2009 I published a paper on the Cookie Cutter which outlined a method (and accompanying script) for calculating the volume of drumlins. This worked in ArcGIS 9.2 using the Python interface to a number of ArcGIS toolbox tools. Fast forward 9 years since I first wrote the script and, not too surprisingly, it doesn’t work (thanks for telling me Arturs!).
I finally sat down a few weeks ago to bug-fix the script, which was actually easier than I thought it would be. It comprises two scripts - the first sets up some working directories and takes an input shapefile, splitting it into a number of new shapefiles (one per drumlin). The second script then performs the volume calculation on each drumlin. It turns out (given I’m pretty much only calling toolbox tools) that there wasn’t much to fix… a third-party script splitting the initial shapefile had to be removed, along with a bug in the command adding a new field and references to ArcGIS 10.4 paths. For those wanting to use it, please download the attached files and follow the notes below.
I use WinPython 2.7 and then the excellent Spyder IDE to run the scripts
in Spyder you need to change the Python console to the interpreter that ArcGIS has installed. Go to Tools -> Preferences -> Console -> Advanced Settings then change “Use the following Python Interpreter”. It should be something like: C:\Python27\ArcGIS10.4\python.exe
at the top of the cookie_setup script set the project_directory to the location of the main input shapefile. For outlines, set the name of the input shapefile
Press F5 to run the setup script, creating the working directories and adding a new field to the shapefile
the next part needs to be performed manually (I haven’t had time to add in and test the Toolbox call)… add a new text field to the outlines attribute table called “split” and consecutively number each row from A1 to An (i.e. your last row). In QGIS the expression in the field calculator is concat('A', @row_number)
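If you’d rather generate the labels outside QGIS, the A1…An numbering scheme is trivial in plain Python (this is just a sketch of the labelling, not part of the original scripts):

```python
def split_labels(n):
    """Consecutive split labels A1..An, matching concat('A', @row_number)."""
    return ["A{}".format(i) for i in range(1, n + 1)]

# e.g. four rows in the attribute table
print(split_labels(4))  # ['A1', 'A2', 'A3', 'A4']
```

The labels just need to be unique per row, as they drive the Split tool in the next step.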
save the file then use the ArcGIS Toolbox tool Analysis Tools->Extract->Split
use this to split the outlines shapefile based upon the “split” field you just created. Specify the “Target Workspace” as the “input” directory that has been created in your project directory
now load Cookie_Cutter into Spyder and again specify the following 5 inputs:
project_directory : the project directory
nextmap_dsm_img : the input DEM
gp.cellSize : the DEM cellsize (in metres)
tension_parameter : leave this as it is
buffer_parameter : the distance to buffer your drumlins (in metres). The example shows 20m, for a 5m DEM
on lines 66-68 you might need to change the paths to the listed toolboxes. These are specified for 10.4 at the moment
RUN IT! The console pane in Spyder should show you a whole load of information as it processes each drumlin. There will be a counter showing you which drumlin you are on
The key output is the Volume_Master.dbf table, which you can open in Excel. It contains the ArcGIS zonal stats for each individual drumlin (subtracted from the cookie-cut DEM). The critical value is the SUM column, which gives the total height of all pixels within the drumlin. Multiply this by 25 (for a 5x5m pixel) to give the drumlin volume.
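The arithmetic behind that last step, sketched in Python (the SUM value below is illustrative, not from the paper):

```python
def drumlin_volume(height_sum, cellsize):
    """Volume in cubic metres from the zonal-stats SUM column.

    height_sum: SUM from Volume_Master.dbf, the total residual height (m)
    of all pixels inside the outline; cellsize: DEM cell size in metres.
    """
    return height_sum * cellsize ** 2  # pixel area = cellsize squared

# e.g. a SUM of 1200 m on a 5 m DEM: 1200 * 25
print(drumlin_volume(1200.0, 5.0))  # 30000.0 cubic metres
```

The factor of 25 in the text is simply the pixel area for a 5m DEM; for a different cell size, square it accordingly.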
UPDATE: If you can’t (or don’t want to) use Spyder you can just run the py script directly from the command line using the interpreter that ships with ArcGIS.
This is one of those actions that is incredibly useful every so often… you have a tab open and you actually want to duplicate it (so you have an active copy) and then carry on working in the current tab. Except there is no “duplicate tab” (or clone) option when you right-click in the window or on the tab. This is actually one of those Unix-esque daisy-chainings of functions to achieve the same end result. So… middle-clicking on a link will load that link in a new tab (very useful in itself). Solution:
middle click on the reload page icon (next to the address bar)
Yes, one of those manuscript-writing moments where I was using EndNote and wanted to cite a webpage for an organisation. Enter the organisation name and the European Geosciences Union gets turned into…
This is one of those annoying diversions where you either go and work out the syntax for citing it or… do it manually.
In this instance I Googled it and found that all that was needed was the humble comma
at the end of the author field. It’s always easy when you know how!
This week, a selection (well, list!) of two relatively recent resources which struck a chord.
1. cartographic-design: this is hosted over at GitHub and is a series of links to cartography sources that supported Maptime Boston’s May 2016 meetup. It’s a relatively short but extremely useful set of resources for those wanting slightly more detail on a range of carto/design topics. One to refer back to - often.
2. Beyond the Core Knowledge: a blog post from Gretchen Peterson that looks at some important topics that sit outside (for example) The GIS&T Body of Knowledge. It’s interesting because it takes a concept, a dataset and the hoops to jump through to get (more or less) through to the end. And it’s nice because she covers all those inner decisions you end up making as a designer to get to the final product.
James and I were in Norfolk a few weekends back completing data collection for his PhD studies (his blog has at least one post relating to this) and the whole topic of data transfer speeds came back to haunt me. Amongst a number of cameras, we had been shooting with the Nikon D700 using a SanDisk 16GB Ultra card, which has read speeds of 30MB/s. We actually filled the card on day 1 (1000 shots) and needed to offload the data. I had brought with me my cheap and cheerful Integral CF-to-USB card reader, which works fine. Except it took the best part of 30 minutes to copy the data off the card, at under 5MB/s. Painfully slow.
When we got back I thought I’d dig into data transfer speeds again. Remember that the firmware (and hardware) in a digital camera will only be able to use cards up to a certain specification. The D700 can take at least 64GB cards rated up to 90MB/s (although it might not be able to utilise the full speed of the card); my Fuji XM1 can take 32GB cards at 50MB/s. Now, to achieve these speeds during data transfer to a PC, all parts of the chain need to be quick - card, card reader, USB port, bus and hard disk. In this instance the card reader was the limiting factor, as it was plugged into a USB3 port on a new laptop. And just as a reminder, USB3 has a throughput of (depending upon what you read) somewhere above 400MB/s (doubling for USB3.1), whilst USB2 is somewhere around 35MB/s.
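A quick sanity check on those numbers: the slowest link in the chain sets the effective rate, and the copy time follows directly. A sketch with illustrative figures (and the usual 1GB = 1000MB simplification):

```python
def copy_minutes(size_gb, link_rates_mb_s):
    """Minutes to copy size_gb when the chain runs at its slowest link."""
    bottleneck = min(link_rates_mb_s)  # MB/s of the weakest component
    return size_gb * 1000.0 / bottleneck / 60.0

# 16GB card: a 30MB/s card through a ~5MB/s reader on a 400MB/s USB3 port -
# the reader dominates, so the bus speed is irrelevant
print(round(copy_minutes(16, [30, 5, 400]), 1))  # 53.3
```

Swap the 5MB/s reader for one that can keep up with the card and the same copy drops to a few minutes.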
So, one low-cost USB3 card reader and a new (low-cost!) 50MB/s SD card later, plugged directly into the USB3 port, and read speeds race along. Getting this kind of throughput is both cheap and easy, but it’s not hard to accidentally put a weak link in the chain and see those rates plummet.
James O’Connor and Mike J. Smith (2016) GIM International
With the boom in the use of consumer-grade cameras on unmanned aerial vehicles (UAVs) for surveying and photogrammetric applications, this article seeks to review a range of different cameras and their critical attributes. Firstly, it establishes the most important considerations when selecting a camera for surveying. Secondly, the authors make a number of recommendations at various price points.
So, a nice piece of environmental marketing below from Sainsbury’s, but it does frustrate me as a scientist when you see qualifying statements that introduce uncertainty. OK, so, the original statement:
We Harvest Rain. That’s a good statement - up front, some environmental credentials. A positive message.
By saving water… Saving? I assume they mean harvesting, but at least they are trying to use a different word. Collecting, maybe?
…we halve the amount of mains water we use… Two qualifications here. Halve the mains water: we don’t know if they used anything other than mains water, and of course we don’t know the base. What are they comparing this to? Last year? Ten years ago? And is it just for this store? All Sainsbury’s stores? All Sainsbury’s owned/operated buildings, including warehouses and offices?
… per sq ft… COME ON PEOPLE!! Sainsbury’s, you moved to grams and litres, as much as some of your customers didn’t like it. WE ARE METRIC!!
…of sales area Ermmm… another qualification. I’m guessing, if this is across ALL Sainsbury’s facilities, then warehouses will make up the vast amount of the floor area. Taking into account what the base is actually measured against, how does this change the “halve”?
Sorry Sainsbury’s, BIG fail on the marketing front. Be honest - because every little helps.
The Victorian era was the age of invention, although the discovery of photography just pre-dates it with Niépce’s famous View from the Window at Le Gras in 1826. His early collaboration with Louis Daguerre led to the announcement of the daguerreotype in 1839 and its subsequent commercialisation, alongside Talbot’s calotype. These early photos now appear very rudimentary alongside their modern film and digital counterparts; however, it never ceases to amaze me how these early pioneers pushed the limits of possibility. I wanted to highlight how two of their inventions continue to have a profound impact.
Our recent fascination with 3D will most likely have come from the movies through the use of polarised glasses, although some of us may well remember using filtered red/blue glasses to view a dinosaur or shark in a kid’s magazine. However an understanding of binocular vision and exploiting this to view images in 3D (stereoscopy) goes back to Sir Charles Wheatstone in 1833 with his invention of the stereoscope.
Whilst Wheatstone used pencil drawings for his stereoscope, photography was the obvious companion for it and was immensely popular with a Victorian society eager to consume new technologies. Photographers experimented with stereo through the 1840s, however it was the Great Exhibition in 1851 that was the catalyst for its exposure to an international audience. Brian May’s (yes, that Brian May!) sumptuously illustrated photobook is a prime example (May, B. and Vidal, E. (2009) A Village Lost and Found, Frances Lincoln), showcasing T.R. Williams’ wonderful stereophotos of an undisclosed village. The book identifies the village as Hinton Waldrist in Oxfordshire, rephotographs the same scenes and includes a stereoscope (designed by Brian May). Viewing examples such as this demonstrates that there is something magical about stereo vision - it’s a window on ‘a world that was’ and we view it as if we were actually there.
The second, and at the time, unrelated technology was aerial photography. Whilst we might think of this being inextricably linked with the invention of the aircraft and its rapid development in the First World War, there had been a range of creative methods for lofting a camera off the ground. The very first aerial photo was taken by Nadar in 1858 and whilst this hasn’t survived, James Black’s 1860 photo of Boston does. It may look a little passé now, however pause for a moment to consider what was involved. The 1850s saw the dominance of the collodion wet-plate process that produced a high quality negative on a glass plate. This had to be prepared on the spot as it was light sensitive only as long as it was wet and then needed to be developed straight after exposure. That meant Black had a full darkroom in his tethered balloon that was likely swaying 365m above the Boston streets. I don’t imagine there was a detailed risk assessment completed before the trip!
Probably the most successful alternative to balloons has been kites, with the first successful photo taken by Batut over Labruguiere, France, in 1888. However, it is George Lawrence’s photos of San Francisco in the aftermath of the 1906 earthquake and fire that are astonishing (see below). He used up to 17 large kites to lift an enormous 22kg panoramic camera (my Nikon D700 with 70-200mm lens “only” weighs 2.5kg!) with a 19” focal length and 20x48” plate. This was serious kite flying!
These Victorian inventions may seem distant now that stereoscopy is a key component of movie production, something movie-goers have become very familiar with. Aerial photography is equally important in map making and, when combined with stereoscopy, allows us to extract 3D features from the landscape. Kite photography is the direct ancestor of drones, a rapidly burgeoning area. Everything that was learnt about near-Earth imaging is now being re-learnt for a new generation.
Well the Windows 10 Anniversary Update has landed and, after the big download, it comes with quite an array of tweaks and new features. To get the skinny on some of these head to your favourite IT site for their run down… for example cnet or How-to-Geek.
Perhaps the most interesting for techies out there is the arrival of a Linux Ubuntu subsystem. How-to-Geek has a great rundown of the installation (and note it’s not really Linux as such: it’s the bash shell running GNU apps). Anyway, think of this as the reverse of WINE. It opens up a world of command line scripting.
And a final note on usability and interface. Yes, really yes, the start menu has changed AGAIN. Supposedly simpler, cleaner, nicer, fresher - pick your superlative. Except… usable?? My Mum went through the update, it all installed perfectly, no errors and then… she couldn’t work out how to shut down the machine. Sorry Microsoft, FAIL on that count.