This post is slightly out of sequence with the earlier ones (I had meant to cover our first trial in West London scanning Paolozzi's Krazy Kat archive), but it does follow neatly from the last post on the technical setup for our external camera. In the following I want to cover the workflow we used to take the RAW imagery off the Nikon D810 (hired from Calumet) all the way through to the production of cubemaps for import into Cyclone. Remember, there are 48 images per scanner location and seven locations, making a total of 336 images, each weighing in at 40MB. Deep breath… here goes:
1. Import: all import of the RAW imagery was performed using Adobe Lightroom (LR). This maintains all the original camera data and allows us to manipulate the image prior to export.
2. Image Optimisation: once in LR, the main optimisations are to correct the white balance, boost contrast, sharpen edges (through both sharpening and clarity) and remove any chromatic aberrations. Given that the studio remained under the same internal lighting and the camera had fixed settings, we could copy the corrections to all photos (Ctrl-Shift-C in LR!). The copy settings are shown in the image below. What this did highlight was that the strip light placed under the cabin bed had a different colour temperature to the spot lights on the ceiling, which adds a colour cast to the photographs.
3. Focus Stacking: with the images optimised, we then used the open-source Enfuse (good summary) to merge the exposures. It is part of the Hugin development community (which we were more familiar with, although PTGui also uses it). The manual on the Enfuse website provides much more detail and shows the range of options available to blend images together. When looking at the two overlapping images we shot for each location, we realised that the overlap was actually very small and the default settings for Enfuse produced very bad results. We therefore had to experiment with a range of settings to find a viable alternative. Enfuse uses three primary blending options: exposure, saturation or contrast. For focus stacking it's contrast that needs to be weighted fully, and the blending then uses a window size (in pixels) to perform this. The manual suggests 3-5 pixels, but we realised that the difference in focus between our two images produces a "halo" of unfocused area around objects. Experimentation showed that a 31-pixel window worked well. The other key setting was to opt for "hard mask", which avoids averaging pixels, giving a sharper image (but at the expense of noise). We also found that changing the method used to convert the colour image to greyscale (the "grey-projector") improved the results (in this case selecting luminance). To process the 336 images we did this directly within LR using LR/Enfuse, an LR plugin that can batch automate the whole process. It also allows you to align the images prior to blending - this can take account of any minor shifts of the camera on the tripod between exposures. Processing the 48 photos for a single scanner location took about 1 hour. For those using a command line the options are:
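A minimal sketch of an equivalent enfuse invocation with the settings described above (the input/output file names are placeholders):

```shell
# Focus-stack two overlapping exposures with enfuse: weight contrast only,
# use a 31-pixel contrast window, a hard mask, and the luminance grey-projector.
# File names below are placeholders, not our actual files.
enfuse \
  --exposure-weight=0 \
  --saturation-weight=0 \
  --contrast-weight=1 \
  --hard-mask \
  --contrast-window-size=31 \
  --gray-projector=luminance \
  -o stacked.tif \
  shot_near.tif shot_far.tif
```

LR/Enfuse exposes these same options through its dialog, so the batch runs inside LR used the identical settings.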
4. Panorama Stitching: with the optimisation and focus stacking of the individual photos complete, we now needed to stitch them together into a single panorama. We experimented with Hugin, the open-source image stitching/blending software. In the past you had to manually pick control points across all your images, a laborious and not necessarily accurate (!) process. The Hugin team have subsequently implemented the SIFT algorithm to detect these automatically - it generally works well but, for whatever reason, our panoramas were often disjointed. We therefore switched to PTGui (which originally was a GUI for the open-source PanoTools but has since implemented many of its own algorithms). The default options for import, control point acquisition and pano generation worked well with no specific requirements.
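For those wanting to stay fully open source, Hugin's individual command-line tools can be scripted; a sketch of the pipeline (project and file names are placeholders):

```shell
# Scripted Hugin stitch: build a project, find control points (SIFT-based
# cpfind), optimise, then remap and blend. File names are placeholders.
pto_gen -o studio.pto stacked_*.tif                  # create project from images
cpfind --multirow -o studio.pto studio.pto           # automatic control points
cpclean -o studio.pto studio.pto                     # drop outlier points
autooptimiser -a -m -l -s -o studio.pto studio.pto   # optimise positions and lens
pano_modify --canvas=AUTO --crop=AUTO -o studio.pto studio.pto
nona -m TIFF_m -o remapped_ studio.pto               # remap each image
enblend -o panorama.tif remapped_*.tif               # blend into the final pano
```

The cpfind step is where our disjointed panoramas came from, so it is worth inspecting the control points in the Hugin GUI before blending.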
5. Cubemap Generation: PTGui also allows you to create cubemaps (noted in Leica’s materials) from the “Tools->Convert to QTVR/cubic” menu. As per the Leica instructions, we selected cube faces (6 files) as the output, increased the JPG quality and changed the cube face sizes to 4096.
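PTGui handles this in its GUI, but if you would rather script this step too, the Panotools::Script package ships an erect2cubic helper which, combined with Hugin's nona, performs the same equirectangular-to-cube-face conversion (a sketch, assuming that package is installed; file names are placeholders):

```shell
# Convert an equirectangular panorama into six 4096px cube faces for Cyclone.
# erect2cubic writes a Hugin project describing the six faces; nona renders them.
erect2cubic --erect=panorama.tif --ptofile=cube.pto --face=4096
nona -o cube_face_ cube.pto   # writes cube_face_0000.tif ... cube_face_0005.tif
```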
Below is a low-res example of a pano from Paolozzi’s bed - once I find a hosting service I’ll post a full res pano.
As reported at Mapperz and Google Earth Blog, Google Earth Pro is now available at no cost (just sign up for an extended licence key). Whilst this makes no material difference for users simply consuming imagery, there are some key features that will be of interest to a number of people.
(from Mapperz)
- Advanced Measurements: measure parking lots and land developments with polygon area measure, or determine affected radius with circle measure.
- High-resolution printing: print images up to 4800x3200 px resolution.
- Exclusive Pro data layers: demographics, parcels, and traffic count.
- Spreadsheet Import: ingest up to 2,500 addresses at a time, assigning placemarks and style templates in bulk.
- GIS import: visualise ESRI shapefiles (.shp) and MapInfo (.tab) files.
- Also: viewshed and map-making tools.
These functions bring much more “GIS-like” functionality to the party making it another tool in the arsenal.
Following on from the last blog post (Paolozzi's Studio of Objects) I wanted to touch upon the external camera setup that we used to integrate photographic imagery with the laser scans from the Leica P20. It's worth referring to the two "manuals" that Leica have produced on using an external camera - they are helpful and give some good points on hardware, setup, software, processing and integration. The first (which is older) is titled Cyclone External Camera Workflow-Nodal Ninja bracket, with the second (newer) titled Spherical Panoramas for HDS Point Clouds. In short though, the following setup is required:
1. We want to replace the internal camera on the scanner with a better quality external camera and then import the imagery into Cyclone, allowing you to texture the point cloud.
2. In order to best achieve this, the focal centre of the lens must match the optical centre of the scanner - this enables accurate image-to-point cloud texture mapping.
3. Once the laser scan is complete, the scanner is removed from the tribrach and a tripod head is attached to mount the camera on. This is actually two pieces of equipment:
a. Nodal camera bracket: this allows you to precisely position the focal centre of the lens and is a standard camera accessory. Leica have recommended the Nodal Ninja 3 Mk2 in the past.
b. Spacer: this positions the Nodal Ninja in exactly the right location and is essentially a spacer with a tribrach adaptor. We used Red Door for both the adaptor (L2010 Tribrach Adapter for Leica Geosystems) and the spacer itself (L-N3-C-10 ScanStation Camera Adapter for Leica Geosystems). You (fairly obviously) need both bits (don't make that mistake like I did!).
4. Select the lens you want to use, with two aspects to consider:
a. Resolution: there is a trade-off between resolution and time. A wide-angle lens will capture a panoramic sphere with relatively fewer photos (the reason Leica use a fisheye lens) but at the expense of resolution. There is a BIG caveat here: you need to convert the final panorama into a cubemap for import into Cyclone. Each cubemap "face" (or image) can have a maximum dimension of 4096x4096 pixels, the limit in Cyclone. With the six faces laid out in a cross that is 16,384 pixels wide and 12,288 pixels high, meaning the whole image is limited to ~96MP (six faces of ~16MP each). Given the scanning resolution of the P20 this is disappointing, as it limits the resolution benefits you can achieve from an external camera (although the dynamic range will be significantly better). I expect this to change in the future, not least because Leica's internal cameras will almost certainly improve. For the record, our panoramas with the 24mm lens were ~235MP! You need to choose a lens suitable for your camera: this will almost certainly have either an FX (35mm) or DX (24mm) sensor, which applies a crop-factor multiplier to the image: 1x for the former and 1.5x for the latter. We selected 16mm fisheye and 24mm rectilinear lenses. When shooting with these lenses, both had a 30-degree horizontal rotation between shots (on the tripod) to make sure there was a large overlap between images. In addition, the 24mm lens required vertical rotations of +30 degrees and -15 degrees to make sure there was sufficient vertical coverage. For a full sphere you would also want to shoot vertically upwards (i.e. pointing at the ceiling) with a 30-degree horizontal rotation, although we didn't in this instance.
b. Focus: objects "acceptably in focus" lie within the depth of field, which is determined by the focal length, aperture and focus setting. Each lens has a hyperfocal distance at which, when focused, everything from half that distance to infinity is acceptably in focus, typically at very small apertures (which increases softening in the image from diffraction). Where hyperfocal focusing isn't practical or possible, an alternative is to use image-processing techniques to stack multiple photos with different depths of field - this is known as focus stacking. Where you use a longer focal length for greater resolution, the depth of field is smaller and so focus stacking becomes more important. We used this technique with the 24mm lens - DoFMaster allows you to calculate your depths of field for objects at different distances, where you probably want a good overlap in DoF.
c. Shooting: with the Nikon D810 mounted in the Nodal Ninja, we shot in RAW-only mode to capture the "at sensor" data. The camera was put into manual focus and manual mode. With the aperture set at f/8 (based upon the DoF calculations) and ISO 100, metering gave an optimum shutter speed of 1/3s. The lens was pre-focused and then a 2s timer used to release the shutter (essential at this slow shutter speed). For the 24mm lens the focus settings were 0.8m and 2m - so 2 photos were shot at -15 degrees for each of the 12 horizontal rotations (i.e. every 30 degrees) and then repeated for +30 degrees, giving a total of 48 photos for each scan location.
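The DoF figures above can be sanity-checked without DoFMaster using the standard hyperfocal formulas; a quick sketch (the 0.030mm circle of confusion for the FX sensor is my assumption, as are the rounded outputs):

```shell
# Sanity-check the depth-of-field figures for the 24mm lens at f/8.
# Assumption (mine, not from the post): FX circle of confusion c = 0.030mm.
# Hyperfocal H = f^2/(N*c) + f; near/far limits follow for each focus distance s.
awk 'BEGIN {
  f = 24; N = 8; c = 0.030              # focal length (mm), f-number, CoC (mm)
  H = f*f / (N*c) + f                   # hyperfocal distance (mm)
  printf "hyperfocal: %.2fm\n", H/1000
  n = split("800 2000", s)              # the two focus distances used (mm)
  for (i = 1; i <= n; i++) {
    near = s[i]*(H-f) / (H + s[i] - 2*f)
    far  = s[i]*(H-f) / (H - s[i])
    printf "focus %.1fm: in focus %.2fm to %.2fm\n", s[i]/1000, near/1000, far/1000
  }
}'
# prints:
# hyperfocal: 2.42m
# focus 0.8m: in focus 0.60m to 1.18m
# focus 2.0m: in focus 1.10m to 11.32m
```

The two ranges overlap (1.10m falls inside 0.60-1.18m), which is the "good overlap in DoF" the two focus settings were chosen to give.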
5. Find the nodal point for your camera!! This is fiddlier than you might think, as you need to find the point of no parallax when taking an image - the point at which the focal centre of the lens remains fixed. It varies between camera body and lens combinations. Nodal Ninja have a set of resources which include specific settings for different cameras - some are in date and some out of date; however, this page was very helpful, showing traditional and rapid techniques to find it. We used the rapid technique, utilising an empty ballpoint pen as our sighting device instead of a piece of card. It worked well!
This covers the hardware - I’ll talk about the processing side in a later post!
In this post I briefly want to outline the technical requirements for the Paolozzi Studio of Objects project. With a requirement to "laser scan the studio", the latitude for requirements is quite wide. At Kingston we have a ScanStation 2, which is a good all-round scanner (although the same-specification C10 is much smaller/lighter with a touch screen interface) with a range of ~300m and up to 50,000 points-per-second (pps).
For the studio interior we knew we wanted to over-sample, capturing an excessive amount of detail - this would allow us to nail objectives (3) and (4), giving data redundancy in the rendering of point clouds on iOS devices, whilst also ensuring the maximum "archival" of the space itself. We therefore wanted a relatively small and fast scanner for the tight spaces. For the limited time we would have in the museum space, the Leica P20 seemed to fit the bill, offering scanning at up to an impressive 1M pps. After our initial test scan (more in the next post), Leica Geosystems UK kindly donated 2 days of P20 use and fielded a number of support calls!
OK, that covered the laser scanning aspect of the project, but this was in part about immersive visualisation of the studio space, and for that we also need photographic imagery. The P20 integrates a 5MP camera, but we wanted much better than this, so went for an external solution that we could then integrate at a later stage. For those familiar with this technical setup, it involved placing the optical axis of the camera in exactly the same position as that of the laser scanner, photographing a panorama of the studio (around the nodal point of the lens) and then latterly "stitching" the photos together. We will cover the details of this workflow in a later post, but wanted to note the equipment we used here. Specifically we want a high resolution camera with excellent dynamic range - for this we chose the Nikon D810. This has an impressive 36MP sensor which has had the low-pass filter removed, meaning that there is more fine detail in the imagery. DxOMark testing shows it has an impressive 14+ EV dynamic range at ISO 100. In all, it should offer the best imagery possible of this interior space - we also wanted to think carefully about the focal lengths and resulting output imagery, but more on this in the next post. However, for the record, we took Nikon 24mm f/2.8D and 16mm Fisheye f/2.8D lenses.
The next two posts will look at the production of the panoramic photos and the initial test scan.
It is with great pleasure that I am able to announce the award of the 2014 "Best Map" to Jerzy Zasadni (AGH University of Science and Technology) and Piotr Klapyta (Jagiellonian University) for their map of the Tatra Mountains during the Last Glacial Maximum (LGM). It reconstructs the extent and surface geometry of all 55 glacier systems that were active during the LGM, using existing published evidence as well as incorporating new fieldwork and analysis of remotely sensed data. The awards committee noted the elegant layout, good design and attractive inset maps. For these reasons it is a deserving winner of this year's award and will be available through the Journal of Maps website as a limited print run.
Check out the great movies the authors have put together as well.
Following on from my (very much earlier!!) blog on the "Studio of Objects" project I'm involved in, I wanted to flesh out the project itself a little further and also talk about some of our initial experiences. There's no better introduction than to quote from the project homepage:
“The project uses a 360-degree archaeological laser scan to capture the preserved studio of artist Eduardo Paolozzi. The 3D scans will be coded for tablets and used in workshops at Pallant House Gallery to explore how users interact and navigate the studio with this innovative technology…. [and] create new ways for organisations to capture and store their archives digitally, resulting in wider public access.”
For those of you who haven't heard of Eduardo Paolozzi, take a look at the Wikipedia page. He produced extraordinary large-scale sculptures (Vulcan, Newton, Head of Invention, A Maximis Ad Minima) that comprise a mixture of tactile objects ("junk"!) and traditional sculpting (I'm no artist or art historian though!!). He is also seen as a precursor (i.e. he did it before anyone else!) of the pop-art movement. His studio was almost entirely left to the Scottish National Gallery of Modern Art, and the gallery is worth visiting just to see the studio laid out in fine detail - having spent two days stood in this amazing location, you get a real sense of his working life and the creations that gave him motivation. It is this (largely unique) ability to see art in varying stages of creation, and precursors and scale models of the final artworks, that makes this a remarkable insight into the workings of Paolozzi.
Which neatly brings us back to the rationale for the research project - this is not about traditional art history, but rather four closely allied strands of recording Paolozzi's working environment:
1. Record the 3D interior of the studio using a state-of-the-art laser scanner
2. Develop iOS software to allow full immersion and interaction with the 3D scene
3. Trial and review the interaction of (young) users with the virtual environment as part of understanding and interpreting the work of Paolozzi
4. Document a workflow for other organisations to store their archives
These are printed to the following specifications:
Flat Size: 297 × 420mm (A3)
Colours: 4-colour process
Paper: 170gsm matt (350gsm matt cover)
Print Method: HP Indigo digital print
Supply: 10-page atlas, wiro bind on the left-hand edge
If you would like a copy please leave your contact details in the comments below - winners will be drawn after Friday 13th February (it will be a lucky day for 20 people!)
Just purchased the Kingston MobileLite MLW221… this is potentially very useful as it helps (partially) solve the problem of reading storage media from mobile devices when a computer isn't available. Whilst some tablets/smartphones allow bi-directional use of the micro-USB port to mount USB sticks through an OTG cable (see the relevant part of my comments on the Nexus 7), not all devices can do this… and it usually needs the device to be rooted. Hence the wifi storage reader… it creates a 2.4GHz 802.11n wireless network which you can connect your device to. Sporting both a USB2 and an SD card slot, you can then access the contents of these media. Given the requirement to run a wifi hotspot, it has a moderately meaty 1800mAh battery which allows it to power external USB devices and also charge other battery-powered devices. As the review suggests, it's not the fastest performer (i.e. don't use it to transfer GBs of data!!) but for access to media when you are on the go it's pretty good. And with Amazon selling it for £20 it's a low-cost purchase.
Nice piece at the BBC reporting on SpaceX's attempt at a controlled landing of the first stage of their Falcon 9 rocket on their offshore platform. It looks like a partial success, which is a big leap forward for the potential reuse of rocket components.
Following on from my previous update, I had a serious problem with the audio syncing on just some DVDs. I'm not sure why, and I tried various solutions to re-sync the audio, none of which worked.
That sent me back to looking at alternatives to DVD Shrink to extract the VOB files from the disc - and the solution lay in the granddaddy, and original, DVD Decrypter (Wikipedia). This worked perfectly (although remember to switch to Mode->IFO), although all the VOBs are separate files. My preference is to merge them (using VOBMerge) before transcoding in TEncoder. So far it's worked seamlessly.
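For command-line users, the merge-then-transcode step can be approximated with cat and ffmpeg instead of VOBMerge/TEncoder (a sketch; the file names and encoder settings are my assumptions, not what TEncoder uses):

```shell
# VOB files for one title are a contiguous MPEG stream, so they can simply be
# concatenated, then transcoded to H.264/AAC. File names are placeholders.
cat VTS_01_1.VOB VTS_01_2.VOB VTS_01_3.VOB > merged.vob
ffmpeg -fflags +genpts -i merged.vob \
  -c:v libx264 -crf 20 -preset medium \
  -c:a aac -b:a 192k \
  movie.mp4
```

The -fflags +genpts option regenerates timestamps, which can also help with the audio-sync problems mentioned above.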
Now this article really brings new meaning to the phrase!!! I REALLY gotta get myself one of these - imagine doing a portrait session in a studio!!!! Or going out shooting some street photography - methinks you might get into a little bother. So cool though!