I had been looking through my box of “spare stuff” to find a slightly ageing Transcend Wifi SD Card which I could use in my Nikon D800. To cut a long story short, I wanted to upload a few selected photos from the card to my smartphone and this seemed like the easiest way. OK, so the card is a little slow, but for a few photos that’s fine. The first task was to upgrade the firmware of the card to the latest version, install the WiFi SD App and then connect to the camera. It didn’t work. In fact, the smartphone couldn’t find the card at all which suggested that the card wasn’t being powered. I tried scanning at the same time as I was taking a photo and the card would briefly appear before disappearing.
Clearly the card is not continually powered by the camera and after some slightly long-winded Googling I found this page. In short, there are two modes where the card is constantly powered:
“Auto-Meter Off Delay” switched off
The “Auto-Meter Off Delay” from your Custom Settings is the one to change (and select it as an option on your MyMenu). Once you set this to infinity the camera powers the card and you can then access it via the smartphone app.
If you are using something like Snapseed on your phone to edit, then it is a whole lot quicker to shoot in “RAW+JPEG Basic” (the 36MP resolution means “Basic” is actually pretty detailed!), before uploading just the JPEG.
The only words to describe this are awe-inspiring, spectacular, mesmerising - this is a visual tour de force matched by wonderful, evocative music. Just the basic stats behind it say it all… completed over three months, with 27 days of filming, travel across 10 states, 28,000 miles of driving and over 90,000 time-lapse frames. Produced in 4K, this movie is simply begging to be viewed on a MASSIVE screen. Now if only my local cinema would show it as a short.
Utterly stunning timelapse from Julian Tryba of the New York skyline - just when you thought you had seen it all, this takes a giant leap into the future. It makes the landscape a palette which you can draw upon and integrate into a piece of music. It is wonderful. Sit back, watch, marvel (and then go and read how he did it).
The Victorian era was the age of invention, although the discovery of photography just pre-dates it with Niépce’s famous View from the Window at Le Gras in 1826. His early collaboration with Louis Daguerre led to the announcement of the daguerreotype in 1839 and its subsequent commercialisation, alongside Talbot’s calotype. These early photos now appear very rudimentary alongside their modern film and digital counterparts; however, the ability of these early pioneers to push the limits of possibility never ceases to amaze me. I wanted to highlight how two of these inventions continue to have a profound impact.
Our recent fascination with 3D will most likely have come from the movies through the use of polarised glasses, although some of us may well remember using filtered red/blue glasses to view a dinosaur or shark in a kid’s magazine. However, an understanding of binocular vision and exploiting this to view images in 3D (stereoscopy) goes back to Sir Charles Wheatstone in 1833 with his invention of the stereoscope.
Whilst Wheatstone used pencil drawings for his stereoscope, photography was the obvious companion for it and was immensely popular with a Victorian society eager to consume new technologies. Photographers experimented with stereo through the 1840s, however it was the Great Exhibition in 1851 that was the catalyst for its exposure to an international audience. Brian May’s (yes, that Brian May!) sumptuously illustrated photobook is a prime example (May, B. and Vidal, E. (2009) A Village Lost and Found, Frances Lincoln), showcasing T.R. Williams’ wonderful stereophotos of an undisclosed village. The book identifies the village as Hinton Waldrist in Oxfordshire, rephotographs the same scenes and includes a stereoscope (designed by Brian May). Viewing examples such as this demonstrates that there is something magical about stereo vision - it’s a window on ‘a world that was’ and we view it as if we were actually there.
The second, and at the time, unrelated technology was aerial photography. Whilst we might think of this being inextricably linked with the invention of the aircraft and its rapid development in the First World War, there had been a range of creative methods for lofting a camera off the ground. The very first aerial photo was taken by Nadar in 1858 and whilst this hasn’t survived, James Black’s 1860 photo of Boston does. It may look a little passé now, however pause for a moment to consider what was involved. The 1850s saw the dominance of the collodion wet-plate process that produced a high quality negative on a glass plate. This had to be prepared on the spot as it was light sensitive only as long as it was wet and then needed to be developed straight after exposure. That meant Black had a full darkroom in his tethered balloon that was likely swaying 365m above the Boston streets. I don’t imagine there was a detailed risk assessment completed before the trip!
Probably the most successful alternative to balloons has been kites, with the first successful photo taken by Batut over Labruguière, France, in 1888. However it is George Lawrence’s photos of San Francisco in the aftermath of the 1906 earthquake and fire that are astonishing (see below). He used up to 17 large kites to lift an enormous 22kg panoramic camera (my Nikon D700 with 70-200mm lens “only” weighs 2.5kg!) with a 19” focal length and 20x48” plate. This was serious kite flying!
These Victorian inventions may seem distant now that stereoscopy is a key component in movie production, something movie-goers have become very familiar with. Aerial photography is equally important in map making and, when combined with stereoscopy, allows us to extract 3D features from the landscape. Kite photography is the direct ancestor of drones, a rapidly burgeoning area. Everything that was learnt about near-Earth imaging is now being re-learnt for a new generation.
Just back from an afternoon at the Tour of Britain where Steve Cummings took the overall general classification. The route itself was 16 laps of a 3-point star centred on Trafalgar Square going out to Aldwych, down to Downing Street and then up Regent Street almost to Oxford Circus. We positioned ourselves just around from Aldwych on the Strand at St Mary le Strand. The road snakes slightly and almost all the riders come within centimetres (literally!) of the railing. It makes for a quite exhilarating position as the headwind from the peloton hits you, followed by the incredible noise.
We moved to a couple of different positions over the course of the 16 laps and Ryan shot some slow-motion footage (240fps high frame rate) on his iPhone. High frame rates are great fun, with some models pushing 1000fps which is pretty amazing. Anyway, see the video below and watch the start carefully to see how fast they are really going!
For all you photography lovers out there, a great announcement from Google to say that the Nik Collection is to be made freely available. These are a great set of processing applications (and plugins for Lightroom) to process your imagery. Specifically Analog Efex Pro, Color Efex Pro, Silver Efex Pro, Viveza, HDR Efex Pro, Sharpener Pro and Dfine, which basically deal with analogue special effects, colour correction, B&W processing, selective colour enhancement, HDR, sharpening and noise reduction respectively. My go-to for nearly ALL my B&W processing is Silver Efex Pro. Really, really powerful.
Nik’s applications focus on ease of use and accessibility, and are compatible with Photoshop, Lightroom and Aperture. The company was purchased by Google in 2012, and prior to that each program cost around $100, for a total of up to $500 for the software suite. Google opted to offer the whole bundle for $150, and made it available for all of its supported applications via a single installer.
My worry is that the Nik Collection might suffer the same fate as Snapseed which Google also bought. This is a wonderful Android application (also developed by Nik) which has been killed as a desktop application. We shall see…
It was a great honour this week to hear that my entry for the EGU 2016 Photo Contest (below) made the final cut. Just look at the past finalists to see the quality of the photos that are submitted.
Imaggeo, which hosts all the photos, is a worthy cause in and of itself (and the photo contest was in part started to promote this) as it’s “the open access geosciences image repository of the European Geosciences Union” and is part of their outreach in terms of science photography and highlighting all aspects of that visually. It grows year on year and is an invaluable resource. So I’m happy to use a Creative Commons license for my image.
The final vote is up to attendees at the conference (as per below)… so to anyone attending, go and see the photos as past experience shows you will get to see some great entries. AND VOTE!!
The finalist photographs are printed in large format and exhibited during the General Assembly. Each participant of the Assembly can then vote for up to three of their favourite exhibited photos using voting terminals set up next to the exhibition area. The public voting takes place from 8 am on Monday to midnight on Thursday. The votes are counted automatically and the three photographs with the highest number of votes are the winners. The winning photos are awarded during the lunch break on Friday.
I quite often experiment with neutral density (ND) filters for long exposure photography as part of images I produce and, over the last couple of years, have had two particular problems (neither of which I have examples of any more so I’ve pulled in some links from Google Images):
1. Cross-banding: this occurs with many variable ND filters which comprise stacked circular and linear polarisers. When you rotate the outer ring you increase the density of the glass and so reduce the amount of light. However when you overcook the density then you can get this cross-banding. Easy to fix (reduce the effect!) and easy to work out where it comes from.
2. Linear Banding: this has occurred on and off on occasion, usually when I am shooting in sunlight. My initial reaction was that this was caused by leakage of light through the filter system itself (I use Lee Filters), however no amount of fiddling with that helped. More by accident (and then some subsequent Googling) I covered the eyepiece on my Nikon D700 (in fact most Nikon DSLRs have an eyepiece shutter) which instantly solved it. Clearly a small amount of light was entering through the eyepiece and, even though the mirror was up, leaking onto the sensor. As David du Chemin notes, it’s a very strange (and irregular) phenomenon!
My Nikon D700 has what are called “Custom Banks” - on face value these look like ways of saving your settings (for example if you have a preferred setup for a type of shooting) which you can then use later. Indeed you can use them this way, except the way they are set up is somewhat counterintuitive (as Ken Rockwell explains)…. whenever you change a setting it is automatically stored in the current bank; there is no save option. So by all means use bank “B” for a studio portrait range of settings, but don’t change anything otherwise the changes are automatically stored! Given you can’t lock your settings, you’ll probably end up needing to check them, which kind of negates the point of using a custom bank in the first place!
The newly released Pentax K3 II is a top end APS-C DSLR in a similar vein to the Canon 70D and Nikon D7200, with a BIG however….. Ricoh have introduced in-camera image stabilisation (like other manufacturers, notably Olympus) which allows several degrees of movement on the sensor. As a result of this they have added what they call “Pixel Shift Resolution” mode. With the camera immobilised on a tripod and imaging a stationary subject, the sensor is moved by one pixel in each direction producing four images that are then merged back into one pixel-shift image. Given that the sensor has a Bayer filter, the one pixel shift allows full colour information to be recorded for each and every pixel, so removing the need for demosaicing (the interpolation of RGB pixel values). It gives the benefit of no Bayer filter yet with full colour (so detail should be similar to the Leica Monochrom). It also has the benefit of reducing noise.
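To see why one-pixel shifts are enough, here is a minimal sketch of the compositing idea in Python/NumPy. It assumes an RGGB Bayer pattern and a simple shift convention (the four frames offset by (0,0), (0,1), (1,0) and (1,1) pixels); under a 2x2-periodic filter those four offsets mean every scene pixel gets sampled through red once, blue once and green twice, so no interpolation is needed. This is only an illustration of the principle, not Ricoh's actual processing.

```python
import numpy as np

# RGGB Bayer pattern, repeating every 2x2 pixels
BAYER = np.array([["R", "G"],
                  ["G", "B"]])

def channel_at(y, x, dy, dx):
    """Which colour the Bayer filter passes at scene pixel (y, x)
    when the sensor has been shifted by (dy, dx)."""
    return BAYER[(y + dy) % 2, (x + dx) % 2]

def composite(frames):
    """frames: dict mapping a (dy, dx) shift -> 2-D greyscale array.
    Returns an H x W x 3 RGB image built with no demosaicing: each
    channel is averaged over the frames that recorded it directly."""
    shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
    h, w = frames[(0, 0)].shape
    rgb = np.zeros((h, w, 3))
    counts = np.zeros((h, w, 3))
    idx = {"R": 0, "G": 1, "B": 2}
    for dy, dx in shifts:
        for y in range(h):
            for x in range(w):
                c = idx[channel_at(y, x, dy, dx)]
                rgb[y, x, c] += frames[(dy, dx)][y, x]
                counts[y, x, c] += 1
    return rgb / counts
```

Because green is sampled twice per pixel across the four frames, averaging the two green readings is also where part of the noise reduction comes from.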
So… you’re not going to use this on a UAV for capturing aerial imagery (which you could do with the Monochrom) but it could be very interesting for terrestrial capture of static objects.
The Ordnance Survey is currently mid-way through their competition to source (cheaply?!) new photographs for the covers of the iconic series of maps - the competition is called Photofit with the first two rounds now having closed (the next covers the iconic 1:50k Landranger series). If you have some good photos of iconic landmarks and locations, as well as people “active” in the outdoors then submit your photos. They are particularly short on the remoter parts of Scotland. Initial selection is through public voting, followed by a panel that selects the final photos.
And…. I’ve submitted a range of photos which I’ve (helpfully!) shown below with click through links to the OS page where you can vote for them. So, if you like, please click through and vote!!!
It’s a strange situation now - in the past monochrome (or B&W or panchromatic) photos were the standard images to be produced. You had a choice of…. B&W film and that was it!! B&W remained the mainstay of (particularly professional) photography right the way through to the 1970s. Whilst the idea for colour projection (and photography) dates back to James Clerk Maxwell in 1855, it wasn’t until the launch of Kodachrome in 1935 that there was a viable commercial product available… at a price. The 1970s was when colour finally dropped to “consumer” prices.
Since the 1980s we have had the rise of digital, which works completely differently. Whilst colour film has three layers, each sensitive to different wavelengths of light, a digital sensor is inherently monochromatic….. it only records light (a greyscale value) incident upon the sensor. On top of the sensor sits a colour filter array (CFA) which filters either red, green or blue light. The arrangement of filters in the array is critical and typically a Bayer arrangement is used. This has 50% green, 25% red and 25% blue filters, meaning that the sensor records a matrix (or patchwork) of values for different wavelengths of light - this single layer is then demosaiced into three new layers representing red, green and blue light.
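The demosaicing step can be sketched very simply. The toy Python/NumPy example below assumes an RGGB Bayer mosaic and fills each channel's gaps by averaging the recorded values in a pixel's 3x3 neighbourhood - real raw converters use far more sophisticated edge-aware interpolation, so treat this purely as an illustration of the sub-sampling problem.

```python
import numpy as np

def box_sum(a):
    """Sum over each pixel's 3x3 neighbourhood (zero-padded edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(mosaic):
    """mosaic: 2-D array of raw sensor values under an RGGB filter.
    Returns an H x W x 3 RGB image: for each channel, every pixel is
    the average of the directly recorded values nearby."""
    h, w = mosaic.shape
    ys, xs = np.mgrid[0:h, 0:w]
    masks = [
        (ys % 2 == 0) & (xs % 2 == 0),   # red photosites (25%)
        (ys % 2) != (xs % 2),            # green photosites (50%)
        (ys % 2 == 1) & (xs % 2 == 1),   # blue photosites (25%)
    ]
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate(masks):
        num = box_sum(np.where(mask, mosaic, 0.0))   # recorded values
        den = box_sum(mask.astype(float))            # how many were recorded
        rgb[:, :, c] = num / den
    return rgb
```

Note that two thirds of every output pixel is interpolated rather than measured - which is exactly the waste if all you wanted was a monochrome image.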
The obvious point is that, if you only want a monochrome image, what can you do? The digital sensor is inherently recording in a single layer. The sub-sampling employed by using the CFA requires interpolation to a colour image which, if you produce a B&W, means you then convert back to a single layer! Crazy!! One solution is to buy a monochrome camera - yes, at least one manufacturer now makes a B&W camera and that’s the Leica M Monochrom. Nice camera, but a little pricey at £4,500. One alternative is to remove the Bayer array (debayer) from an existing camera - a few people appear to have done this but there are (as far as I’m aware) no commercial services as it’s a risky business. The array is bonded to the sensor and you need to scrape it off, but clearly people have successfully completed the task.
Besides shooting in monochrome, what are the advantages? Well with no de-mosaicing process to go through the images should be sharper, a fact critical for photogrammetry where colour is less important. I think we’ll see more of this over the next few years.
Now this article really brings new meaning to the phrase!!! I REALLY gotta get myself one of these - imagining doing a portrait session in a studio!!!! Or going out shooting some street photography - me thinks you might get in to a little bother. So cool though!
Another photography note to self…. many cameras allow you to directly measure the white balance in a scene and set it prior to shooting. This can be very useful, particularly if you are shooting JPEGs. Nikons use what they call Preset White Balance. If you set the white balance to “PRE” (rather than auto or any other setting), you have 6 (on my D700) memory banks which can store a white balance setting. Long press the WB button to put the camera into measurement mode and fire the shutter. The measurement is then stored in the memory bank and all subsequent photos then use that white balance setting.
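Conceptually, what the camera stores from that measurement is a set of per-channel gains. Here is a hypothetical Python/NumPy sketch of the idea, assuming you photograph a neutral grey/white target, work with floating-point RGB in [0, 1], and normalise the gains to the green channel - the camera of course does all this internally.

```python
import numpy as np

def measure_wb(reference):
    """reference: H x W x 3 shot of a neutral (grey/white) target.
    Returns per-channel gains that make the reference neutral on
    average, normalised so green is left untouched."""
    means = reference.reshape(-1, 3).mean(axis=0)
    return means[1] / means

def apply_wb(image, gains):
    """Apply stored gains to a subsequent shot."""
    return np.clip(image * gains, 0.0, 1.0)
```

Storing several banks, as the D700 does, just means keeping several such gain triplets around for different lighting setups.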
Shot using a Nikon D700 with an 85mm f/1.8 prime lens in landscape orientation and then stitched together using Hugin. Hugin has got substantially better over the years, moving on from manual control points for stitching imagery to a SIFT (scale invariant feature transform) algorithm, as used in Structure from Motion photogrammetry.
ZOOM IN - the detail is astonishing even with this straightforward setup!!