I had been looking through my box of “spare stuff” to find a slightly ageing Transcend Wifi SD Card which I could use in my Nikon D800. To cut a long story short, I wanted to upload a few selected photos from the card to my smartphone and this seemed like the easiest way. OK, so the card is a little slow, but for a few photos that’s fine. The first task was to upgrade the firmware of the card to the latest version, install the WiFi SD App and then connect to the camera. It didn’t work. In fact, the smartphone couldn’t find the card at all which suggested that the card wasn’t being powered. I tried scanning at the same time as I was taking a photo and the card would briefly appear before disappearing.
Clearly the card is not continually powered by the camera and after some slightly long-winded Googling I found this page. In short, there are two modes where the card is constantly powered:
- Live View
- “Auto-Meter Off Delay” switched off
The setting to change is “Auto-Meter Off Delay” in your Custom Settings (it’s worth adding it to My Menu for quick access). Once you set this to infinity the camera keeps the card powered and you can then access it via the smartphone app.
If you are using something like Snapseed on your phone to edit, then it is a whole lot quicker to shoot in “RAW+JPEG Basic” (the 36MP resolution means “Basic” is actually pretty detailed!), before uploading just the JPEG.
Designing SSRS reports in Visual Studio is liberating in how easy it is to get them up and running, but every so often you come across a gotcha you’d think should be straightforward. One of them is copying a “solution” (VS’s name for a container of related project files) to a new location. You might want to do this to back it up, duplicate it for another related project, or just run some tests against a demo version. What’s missing in VS is a “Save As” for the whole solution (you can do it for individual reports). If you copy the folder containing all the files you can create a new version in a new location; however, all of the hard-coded file locations will be incorrect and the solution will then fail to load.
So what is the solution?! Well you could create a new solution, then add in copies of all the existing reports, but then you still have to set it all up again which is just a little self-defeating. Surprisingly, the simplest thing is to copy the solution folder, but keep it within the same directory as the original, just changing the name. You can then open the copied solution from within this folder and all the reports load correctly (as new copies). If you are deploying this to SSRS then you will need to change the name of the solution in the solution properties, but then you are good to go.
Whilst designing a report for deployment to SSRS from Visual Studio 2015, I received this error message when entering a SQL query I knew worked into the New Report wizard:
An error occurred while the query design method was being saved. An item with the same key has already been added.
This is a classic Microsoft error message that is both specific and vague at the same time… and also shouldn’t happen. There are scant details online as to where this comes from, but it is a result of Microsoft SQL Server Report Designer requiring unique column names (even if the underlying SQL query legitimately returns distinct columns that share a name). This is a stupid limitation and, whilst the error message is accurate, it is sufficiently vague to obfuscate what is going on.
The solution - unsurprisingly - is to make sure there is no repetition in the column names: alias any duplicates so that every column in the result set is uniquely named.
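To see the difference, here is a small sketch using Python’s sqlite3 module; the table and column names are hypothetical stand-ins for a real report query, but the behaviour is the point: SQL is perfectly happy returning two columns both called “id”, and it is only the designer that objects.

```python
import sqlite3

# Hypothetical tables standing in for a real report query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10);
    INSERT INTO customers VALUES (10, 'Alice');
""")

# SQL itself allows a result set with duplicate column names...
dup = conn.execute(
    "SELECT orders.id, customers.id FROM orders "
    "JOIN customers ON customers.id = orders.customer_id"
)
print([col[0] for col in dup.description])    # ['id', 'id'] - rejected by the designer

# ...so alias each one to give the unique names Report Designer insists on.
fixed = conn.execute(
    "SELECT orders.id AS order_id, customers.id AS customer_id FROM orders "
    "JOIN customers ON customers.id = orders.customer_id"
)
print([col[0] for col in fixed.description])  # ['order_id', 'customer_id']
```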
Grouping objects should be one of those things that is - well - easy to do! In Microsoft Word you Ctrl-select each object, then right-click and select “Group”. Easy. In Visual Studio 2012, not so. You would have thought that, in Microsoft’s prime programming environment, these simple layout tasks would be easy, but they’re not and it’s not documented anywhere. In my particular instance I was creating a SQL Server Reporting Services report where images in the template were moving depending on the number of rows in the output. The solution was to group the images together.
The grouping concept is sensible and well implemented, it’s just that working out how to do it is difficult! You actually have to insert a new rectangle object and then drag-and-drop the objects you want to group in to it. Once you’ve done this, the properties of each contained object should show the “Parent” attribute (under “Other”) as “Rectangle”. Now if you move the rectangle, they all move. Job done!
ISO 3166-1 just trips off the tongue, however it’s one of those standards that underpins a fair amount of the geospatial data exchange undertaken on a daily basis. Yes, I’m talking about country codes, which Wikipedia helpfully defines as:
ISO 3166-1… defines codes for the names of countries, dependent territories, and special areas of geographical interest
This is important because it is used in so much analogue and digital data exchange between countries, although don’t for a moment think the ISO is the only organisation that defines country codes… but that’s a whole other blog post!
What gets included in the list is interesting… the criteria for inclusion are being a member state of the United Nations, a member of a UN specialized agency, or a party to the Statute of the International Court of Justice. Becoming a member state of the UN is clearly helpful, although what makes a country is interesting in itself, as well as highly politicised. Palestine is an obvious example, but just look at the UK. The UK is a country, but should Wales, Scotland, and Northern Ireland also be included? They are for FIFA, for example. The UN loosely uses Article 1 of the Montevideo Convention, which outlines four qualities a state should have: a permanent population, a defined territory, a government, and the capacity to enter into relations with other states.
Anyway, once you are on the ISO 3166-1 list you get two-letter (alpha-2) and three-letter (alpha-3) codes, along with a three-digit numeric code. These are maintained by the ISO 3166 Maintenance Agency and, given the above, change regularly. You can view the current list here and subscribe to official updates.
At the RGS we are a membership organisation and take online international payments, so having up-to-date country codes is important. Rather than subscribe to the ISO, we use the UK government Country Register, which includes an update service. It uses the ISO alpha-2 codes, although it isn’t necessarily identical to the ISO list (as it covers the countries the UK recognises).
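As a toy illustration of the three code forms, here is a lookup hardcoding a handful of real ISO 3166-1 entries; in practice you would of course consume the maintained list or an update service rather than embed the codes yourself.

```python
# A few real ISO 3166-1 entries: alpha-2 -> (alpha-3, numeric, short name).
# Hardcoded purely for illustration - the live list changes over time.
ISO_3166_1 = {
    "GB": ("GBR", "826", "United Kingdom"),
    "FR": ("FRA", "250", "France"),
    "US": ("USA", "840", "United States"),
}

def alpha3(alpha2: str) -> str:
    """Convert a two-letter (alpha-2) code to its three-letter (alpha-3) equivalent."""
    return ISO_3166_1[alpha2][0]

print(alpha3("GB"))  # GBR
```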
EGU 2020 Short Course: UAV Data Collection and Analysis: operating procedures, applications and standards
Conveners: Paolo Paron; Co-conveners: Mike James, Michael Smith
UAVs have reached a tipping point in geoscience research such that they are near-ubiquitous and commonly used in data collection. They are opening new ways to study and understand landforms, sediments, processes and other landscape properties at spatial and temporal scales close to those of the processes operating. However, this means that non-experts are entering the fields of photography, image interpretation, photogrammetry and 3D modelling, often without a solid grounding in the principles of surveying. This course aims to provide a solid foundation for UAV users in order to avoid simple mistakes that can lead to legal restrictions, UAV loss, operational problems and poor-quality data.
We will introduce pre-flight, in-flight, and post-flight procedures that aim to optimize the collection of high-quality imagery for subsequent downstream processing. We will also demonstrate the analysis of data by means of existing state-of-the-art commercial software, such as Pix4D and Metashape for point cloud analysis, and eCognition for object-based image analysis, as well as the use of open source/open access software such as CloudCompare and Orfeo Toolbox.
This blog has been offline for a little while as the original Blosxom implementation had been hacked. Blosxom was a wonderful CGI script that was elegant in its simplicity yet eminently extensible through the many plugins which existed and made it moderately feature-rich. Best of all, it used plain text files to store all its entries, which makes backup and conversion much simpler than a database. With my implementation of Blosxom decommissioned, I needed to find a replacement. Google “flat-file blogging engines” and there are a lot. However, many of the projects have been orphaned, like Blosxom, and are no longer in active development. What I wanted was an engine that was simple, had some good features and an active community. FlatPress seems to fit the bill, with a new maintainer - and active FlatPresser - Arvid Zimmerman.
The next step was to convert my archive of over 1000 blosxom blog entries to Flatpress. Big shout out to James O’Connor who wrote the Python script to convert the files. The process is broadly this:
- download your Blosxom files, including all the sub-directories for categories, but make sure to maintain the date/time stamps of the individual files - these are used to timestamp the entries for FlatPress. WinSCP does this (FileZilla doesn’t)
- make sure the categories are only ONE DIRECTORY DEEP: move any sub-sub-directories up to the top level
- rename all the directories to numbers. These are used to tag the entries and can then be recreated within FlatPress
- copy the script.py and template files to the directory the folders are stored in
- edit the template file to have the header/footer you want. The content, date and categories will be changed for the entries
- run the script
- a new fp-content directory will be created with all your entries
- upload this to your flatpress site and rebuild the index
The script does the following:
- renames the file to entry‹date›-‹time›.txt based upon its last-modified date
- copies the file to a new subfolder in FlatPress /content folder based upon year and month
- deletes the first line from the file (and deletes the first line break)
- prefixes the file with:
- suffixes with:
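For anyone wanting to roll their own, the steps above can be sketched roughly as follows. This is not James’s actual script - the PREFIX/SUFFIX placeholders stand in for the header/footer taken from the template file, and the demo tree at the bottom is a throwaway example - but it shows the shape of the conversion.

```python
import os
import tempfile
import time

# Placeholders for the real header/footer, which come from the template file.
PREFIX = "[header from template]\n"
SUFFIX = "\n[footer from template]"

def convert(source: str, dest: str) -> list:
    """Walk category directories one level deep and emit FlatPress-style entries."""
    written = []
    for category in sorted(os.listdir(source)):
        cat_dir = os.path.join(source, category)
        for name in sorted(os.listdir(cat_dir)):
            path = os.path.join(cat_dir, name)
            mtime = time.localtime(os.path.getmtime(path))
            # entry<date>-<time>.txt, named from the file's last-modified date
            entry = time.strftime("entry%y%m%d-%H%M%S.txt", mtime)
            # one subfolder per year and month
            out_dir = os.path.join(dest, time.strftime("%Y", mtime),
                                   time.strftime("%m", mtime))
            os.makedirs(out_dir, exist_ok=True)
            with open(path, encoding="utf-8") as f:
                parts = f.read().split("\n", 1)  # drop the Blosxom title line
            body = parts[1] if len(parts) > 1 else ""
            out_path = os.path.join(out_dir, entry)
            with open(out_path, "w", encoding="utf-8") as f:
                f.write(PREFIX + body + SUFFIX)
            written.append(out_path)
    return written

# Demo on a throwaway tree: one numeric category ("1") containing one post.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src", "1"))
with open(os.path.join(root, "src", "1", "post.txt"), "w") as f:
    f.write("My title\nBody of the post")
files = convert(os.path.join(root, "src"), os.path.join(root, "fp-content"))
print(len(files))  # 1
```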
This Special Issue of the Journal of Maps is devoted to highlighting contemporary examples of interdisciplinary collaborations between the arts and the geosciences (e.g. geomorphology, geology, Quaternary studies), with a specific focus upon the exploration of locations using, at least in part, some form of mapping. As previous contributions to the journal have exemplified, mapping is essential for the exploration of locations, particularly by supplying visual representation to help with the characterisation of three core geographical concepts (Matthews & Herbert, 2008): space (e.g. distances, directions), place (e.g. boundaries, territories), and environment (e.g. biophysical characteristics).
FREE EPRINT: Testing and application of a model for snow redistribution (Snow_Blow) in the Ellsworth Mountains, Antarctica, Journal of Glaciology
Wind-driven snow redistribution can increase the spatial heterogeneity of snow accumulation on ice caps and ice sheets, and may prove crucial for the initiation and survival of glaciers in areas of marginal glaciation. We present a snowdrift model (Snow_Blow), which extends and improves the model of Purves et al. (1999). The model calculates spatial variations in relative snow accumulation that result from variations in topography, using a digital elevation model (DEM) and wind direction as inputs. Improvements include snow redistribution using a flux routing algorithm, DEM resolution independence and the addition of a slope curvature component. This paper tests Snow_Blow in Antarctica (a modern environment) and reveals its potential for application in palaeo-environmental settings, where input meteorological data are unavailable and difficult to estimate. Specifically, Snow_Blow is applied to the Ellsworth Mountains in West Antarctica where ablation is considered to be predominantly related to wind erosion processes. We find that Snow_Blow is able to replicate well the existing distribution of accumulating snow and snow erosion as recorded in and around Blue Ice Areas. Lastly, a variety of model parameters are tested, including depositional distance and erosion vs wind speed, to provide the most likely input parameters for palaeo-environmental reconstructions.
FREE EPRINT: Quantification of Hydrocarbon Abundance in Soils using Deep Learning with Dropout and Hyperspectral Data, Remote Sensing
Terrestrial hydrocarbon spills have the potential to cause significant soil degradation across large areas. Identification and remedial measures taken at an early stage are therefore important. Reflectance spectroscopy is a rapid remote sensing method that has proven capable of characterizing hydrocarbon-contaminated soils. In this paper, we develop a deep learning approach to estimate the amount of hydrocarbon (HC) mixed with different soil samples using a three-term backpropagation algorithm with dropout. The dropout was used to avoid overfitting and reduce computational complexity. A HySpex SWIR-384 camera measured the reflectance of the samples obtained by mixing and homogenizing four different soil types with four different HC substances, respectively. The datasets were fed into the proposed deep learning neural network to quantify the amount of HCs in each dataset. Individual validation of all the datasets shows excellent prediction of the HC content with an average mean square error of ~2.2×10⁻⁴. The results with remotely sensed data captured by an airborne system validate the approach. This demonstrates that a deep learning approach coupled with hyperspectral imaging techniques can be used for rapid identification and estimation of HCs in soils, which could be useful in estimating the quantity of HC spills at an early stage.