Studio of Objects: External Camera Image Processing

Saturday, January 31, 2015

This post is slightly out-of-sequence with the earlier ones (I had meant to cover our first trial in West London scanning Paolozzi’s Krazy Kat archive) but it does follow neatly from the last post on the technical setup for our external camera… so in the following I want to cover the workflow we used to take the RAW imagery off the Nikon D810 (hired from Calumet) all the way through to the production of cubemaps for import into Cyclone. Remember, there are 48 images per scanner location and seven locations, making a total of 336 images, each weighing in at 40MB. Deep breath… here goes:

1. Import: all import of the RAW imagery was performed using Adobe Lightroom (LR). This maintains all the original camera data and allows us to manipulate the image prior to export.

2. Image Optimisation: once in LR, the main optimisations are to correct the white balance, boost contrast, sharpen edges (through both sharpening and clarity) and remove any chromatic aberrations. Given that the studio remained under the same internal lighting and the camera had fixed settings, we could copy the corrections to all photos (Ctrl-Shift-C in LR!). The copy settings are shown in the image below. What this did highlight was that the strip light placed under the cabin bed had a different colour temperature to the spot lights on the ceiling, which adds a colour cast to the photographs.

3. Focus Stacking: with the images optimised, we then used the open source Enfuse (good summary) to merge the exposures; it is part of the Hugin development community (which we were more familiar with, although PTGui can also use it). The manual on the Enfuse website provides much more detail and shows the range of options available to blend images together. When looking at the two overlapping images we shot for each location we realised that the in-focus overlap was actually very small and the default settings for Enfuse produced very poor results. We therefore had to experiment with a range of settings to find a viable alternative. Enfuse uses three primary blending options: exposure, saturation or contrast. For focus stacking it's contrast that needs to be weighted fully, and the blending then uses a window size (in pixels) to perform this. The manual suggests 3-5 pixels, but we realised that the difference in focus between our two images produces a “halo” of unfocused area around objects. Experimentation showed that a 31 pixel window worked well. The other key setting was to opt for “hard-mask”, which avoids averaging pixels, providing a sharper image (but at the expense of noise). We also found that changing the method used to convert the colour image to greyscale (the “grey-projector”) improved the results (in this case selecting luminance). To process the 336 images we used LR/Enfuse, an LR plugin that can batch automate the whole process directly within LR. It also allows you to align the images prior to blending - this can take account of any minor shifts of the camera on the tripod between exposures. Processing the 48 photos for a single scanner location took about 1 hour. For those using the command line, the options are:

enfuse.exe -o --no-ciecam --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 --contrast-window-size=31 --depth=16 --gray-projector=luminance --compression=95 --hard-mask
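LR/Enfuse did all of this for us, but the same batch could be scripted directly. A minimal sketch, assuming the exposures are exported as TIFFs named so that consecutive files form each near/far focus pair, and that an `enfuse` binary is on the PATH - both are my assumptions for illustration, not our actual setup:

```python
import subprocess
from pathlib import Path

# Enfuse settings from the text: contrast-only weighting, 31px window,
# hard mask, luminance grey projector.
ENFUSE_ARGS = [
    "--no-ciecam", "--exposure-weight=0", "--saturation-weight=0",
    "--contrast-weight=1", "--contrast-window-size=31",
    "--depth=16", "--gray-projector=luminance",
    "--compression=95", "--hard-mask",
]

def build_stack_commands(folder):
    """Pair consecutive TIFFs and build one enfuse command per
    near/far focus pair. Returns the commands without running them."""
    tiffs = sorted(Path(folder).glob("*.tif"))
    commands = []
    for near, far in zip(tiffs[0::2], tiffs[1::2]):
        out = near.with_name(near.stem + "_stacked.tif")
        commands.append(["enfuse", "-o", str(out),
                        *ENFUSE_ARGS, str(near), str(far)])
    return commands

# To actually run the batch for one scanner location:
# for cmd in build_stack_commands("location_01/"):
#     subprocess.run(cmd, check=True)
```

Separating command construction from execution also makes it easy to eyeball the full batch before committing an hour of processing to it.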

4. Panorama Stitching: with the optimisation and focus stacking of the individual photos complete, we now needed to stitch them together into a single panorama. We experimented with Hugin, the open source image stitching/blending software. In the past you had to manually place control points across all your images, a laborious and not necessarily accurate (!) process. The Hugin team have subsequently implemented the SIFT algorithm to automatically detect these - it generally works well but, for whatever reason, our panoramas were often disjointed. We therefore switched to PTGui (which originally was a GUI for the open source PanoTools but has since implemented many of its own algorithms). The default options for import, control point acquisition and pano generation worked well with no specific requirements.

5. Cubemap Generation: PTGui also allows you to create cubemaps (noted in Leica’s materials) from the “Tools->Convert to QTVR/cubic” menu. As per the Leica instructions, we selected cube faces (6 files) as the output, increased the JPG quality and changed the cube face sizes to 4096.
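As an aside, the geometry behind a cube face export is easy to sketch: each face pixel defines a 3D ray, and that ray corresponds to a longitude/latitude in the spherical panorama, which is how a stitcher resamples the pano into six faces. A minimal version follows - the face names and axis conventions are my own illustrative choices, not PTGui's internals:

```python
import math

def face_pixel_to_lonlat(face, u, v, size=4096):
    """Map pixel (u, v) on one cube face to (longitude, latitude)
    in degrees in the source spherical panorama."""
    # normalised coordinates in [-1, 1], taken at the pixel centre
    a = 2.0 * (u + 0.5) / size - 1.0
    b = 2.0 * (v + 0.5) / size - 1.0
    # one unit-cube direction vector per face (illustrative convention)
    directions = {
        "front": (a, -b, 1.0),
        "back":  (-a, -b, -1.0),
        "right": (1.0, -b, -a),
        "left":  (-1.0, -b, a),
        "up":    (a, 1.0, b),
        "down":  (a, -1.0, -b),
    }
    x, y, z = directions[face]
    lon = math.degrees(math.atan2(x, z))
    lat = math.degrees(math.asin(y / math.sqrt(x * x + y * y + z * z)))
    return lon, lat
```

Each face spans a 90 degree field of view, which is why six of them tile the full sphere - and why a 4096 pixel face caps the usable panorama resolution.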

Below is a low-res example of a pano from Paolozzi’s bed - once I find a hosting service I’ll post a full res pano.


in close association with hijack and Dacapo

Google Earth Pro available at no cost

Saturday, January 31, 2015

As reported at Mapperz and Google Earth Blog, Google Earth Pro now appears to be available at no cost (just sign up for an extended licence key). Whilst for users simply consuming imagery this makes no material difference, there are some key features that will be of interest to a number of people.

(from Mapperz)
Advanced Measurements: Measure parking lots and land developments with polygon area measure, or determine affected radius with circle measure.
High-resolution printing: Print Images up to 4800×3200 px resolution.
Exclusive Pro data layers: Demographics, parcels, and traffic count.
Spreadsheet Import: Ingest up to 2500 addresses at a time, assigning placemarks and style templates in bulk.
GIS import: Visualize ESRI shapefiles (.shp) and MapInfo (.tab) files.
Also: viewshed and map making tools

These functions bring much more “GIS-like” functionality to the party making it another tool in the arsenal.

Studio of Objects: External Camera Panoramic Spheres

Wednesday, January 28, 2015

Following on from the last blog post (Paolozzi’s Studio of Objects) I wanted to touch upon the external camera setup that we used to integrate photographic imagery with the laser scans from the Leica P20. It's worth referring to the two “manuals” that Leica have produced on using an external camera - they are helpful and give some good pointers on hardware, setup, software, processing and integration. The first (older) is titled Cyclone External Camera Workflow-Nodal Ninja bracket, and the second (newer) is titled Spherical Panoramas for HDS Point Clouds. In short though, the following setup is required:

1. We want to replace the internal camera on the scanner with a better quality external camera and then import the imagery into Cyclone, allowing you to texture the point cloud.

2. In order to best achieve this, the focal centre of the lens must match the optical centre of the scanner - this enables accurate image-to-point cloud texture mapping.

3. Once the laser scan is complete, the scanner is removed from the tribrach and a tripod head is then attached, on which to mount the camera. This is actually two pieces of equipment:
a. Nodal Camera bracket: this allows you to precisely position the focal centre of the lens and is a standard camera accessory. Leica have recommended the Nodal Ninja 3 Mk2 in the past.
b. Spacer: this positions the Nodal Ninja in exactly the right location and is essentially a spacer with a tribrach adaptor. We used Red Door for both the adaptor (L2010 Tribrach Adapter for Leica Geosystems) and the spacer itself (L-N3-C-10 ScanStation Camera Adapter for Leica Geosystems). You (fairly obviously) need both bits (don’t make that mistake like I did!!!).

4. Select the lens you want to use, with two aspects to consider:
a. Resolution: there is a trade-off between resolution and time. A wide angle lens will capture a panoramic sphere with relatively fewer photos (the reason Leica use a fisheye lens) but at the expense of resolution. There is a BIG caveat here: you need to convert the final panorama into a cubemap for import into Cyclone. Each cubemap “face” (or image) can have a maximum dimension of 4096×4096 pixels, the limit in Cyclone. That is roughly 16MP per face, so across six faces the whole cubemap is limited to about 96MP. Given the scanning resolution of the P20 this is disappointing as it limits the resolution benefits you can achieve from an external camera (although the dynamic range will be significantly better); I expect this to change in the future, not least because Leica's internal cameras will almost certainly improve. For the record, our panoramas with the 24mm lens were ~235MP! You also need to choose a lens suitable for your camera: this will almost certainly have either an FX (full frame, 35mm format) or DX (APS-C) sensor, which applies a multiplier to the effective focal length - 1x for the former and 1.5x for the latter. We selected 16mm fisheye and 24mm rectilinear lenses. When shooting, both lenses had a 30 degree horizontal rotation between shots (on the tripod) to make sure there was a large overlap between images. In addition, the 24mm lens required vertical rotations of +30 degrees and -15 degrees to make sure there was sufficient vertical coverage. For a full sphere you would also want to shoot vertically upwards (i.e. pointing at the ceiling) with a 30 degree horizontal rotation, although we didn't in this instance.
b. Focus: objects “acceptably in focus” fall within the depth of field, which is determined by the focal length, aperture and focus setting. Each lens has a hyperfocal distance: focused there, everything from roughly half that distance to infinity is acceptably in focus, although typically only at very small apertures (which increase softening in the image from diffraction). Where hyperfocal focusing isn't practical or possible, an alternative is to use image processing techniques to combine multiple photos with different depths of field - this is known as focus stacking. Where you use a longer focal length for greater resolution, the depth of field is smaller and so focus stacking becomes more important. We used this technique with the 24mm lens - DoFMaster allows you to calculate the depth of field for objects at different distances, where you want a good overlap in DoF between shots.
c. Shooting: with the Nikon D810 mounted in the Nodal Ninja, we shot in RAW-only mode to capture the “at sensor” data. The camera was put into manual focus and manual mode. With the aperture set at f/8 (based upon the DoF calculations) and ISO 100, metering gave an optimum shutter speed of 1/3s. The lens was pre-focused and then a 2s timer used to release the shutter (essential at such a slow shutter speed to avoid camera shake). For the 24mm lens the focus settings were 0.8m and 2m - so 2 photos were shot at -15 degrees for each of the 12 horizontal rotations (i.e. every 30 degrees), then repeated for +30 degrees, giving a total of 48 photos for each scan location.
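The 30 degree rotation step in (a) can be sanity-checked against the horizontal field of view of the 24mm lens. A quick back-of-envelope sketch (pure thin-lens geometry for a rectilinear lens in landscape orientation on an FX sensor; the fisheye needs a different projection model, so this applies to the 24mm only):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of a rectilinear lens on an FX (36mm wide) sensor."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def overlap_fraction(focal_mm, step_deg):
    """Fraction of each frame shared with its neighbour for a given rotation step."""
    fov = horizontal_fov_deg(focal_mm)
    return (fov - step_deg) / fov

fov_24mm = horizontal_fov_deg(24)   # ~74 degrees across the frame
overlap = overlap_fraction(24, 30)  # ~0.59: each shot shares ~59% with the next
```

A shade under 60% overlap between neighbouring frames is comfortably more than stitchers need for reliable control point detection, which is consistent with the large-overlap aim described above.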
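The depth-of-field reasoning in (b) follows from the standard hyperfocal formulas that tools like DoFMaster implement. A sketch using those formulas - the circle-of-confusion value (0.030mm, a common full-frame default) is my assumption, not necessarily what DoFMaster uses:

```python
def hyperfocal_m(f_mm, N, coc_mm=0.030):
    """Hyperfocal distance in metres: focused here, everything from about
    half this distance to infinity is acceptably sharp."""
    return (f_mm ** 2 / (N * coc_mm) + f_mm) / 1000.0

def dof_limits_m(f_mm, N, focus_m, coc_mm=0.030):
    """Near and far limits of acceptable sharpness for a given focus distance."""
    H = hyperfocal_m(f_mm, N, coc_mm)
    f = f_mm / 1000.0
    near = focus_m * (H - f) / (H + focus_m - 2 * f)
    far = focus_m * (H - f) / (H - focus_m) if focus_m < H else float("inf")
    return near, far
```

Under these assumptions, the 24mm lens at f/8 has a hyperfocal distance of about 2.4m, and focus distances of 0.8m and 2m give in-focus zones of roughly 0.60-1.18m and 1.10-11.3m - overlapping, which is exactly the property wanted for focus stacking.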
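The 48 photos per location in (c) fall straight out of the shooting plan; enumerating it makes the combinatorics explicit (a trivial sketch of the schedule above):

```python
from itertools import product

tilts_deg = [-15, 30]              # vertical rotations used with the 24mm lens
rotations_deg = range(0, 360, 30)  # 12 horizontal stops, every 30 degrees
focus_m = [0.8, 2.0]               # the two focus-stacking distances

# 2 tilts x 12 rotations x 2 focus distances = 48 shots per scanner location
shot_plan = list(product(tilts_deg, rotations_deg, focus_m))
```

With seven scanner locations that is the 336 images the processing post starts from.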

5. Find the nodal point for your camera!! This is fiddlier than you might think, as you need to find the no-parallax point when taking an image - the point about which the camera can rotate without near and far objects shifting relative to one another in the frame. It varies between camera body and lens combinations. Nodal Ninja have a set of resources which include specific settings for different cameras - some are current and some out of date - however this page was very helpful, showing traditional and rapid techniques to find it. We used the rapid technique, utilising an empty ball point pen as our sighting device instead of a piece of card. It worked well!

This covers the hardware - I’ll talk about the processing side in a later post!



Studio of Objects: Technical Setup

Sunday, January 25, 2015

In this post I briefly want to outline the technical requirements for the Paolozzi Studio of Objects project. With a brief simply to “laser scan the studio”, the latitude in requirements is quite wide. At Kingston we have a ScanStation 2, which is a good all-round scanner (although the same-specification C10 is much smaller/lighter with a touch screen interface), with a range of ~300m and up to 50,000 points-per-second (pps).

For the studio interior we knew we wanted to over-sample to an excessive level of detail - this would allow us to nail objectives (3) and (4), giving data redundancy in the rendering of point clouds on iOS devices, whilst also ensuring the maximum “archival” of the space itself. We therefore wanted a relatively small and fast scanner for the tight spaces. For the limited time we would have in the museum space, the Leica P20 seemed to fit the bill, offering scanning at up to an impressive 1M pps. After our initial test scan (more in the next post) Leica Geosystems UK kindly donated 2 days of P20 use and fielded a number of support calls!

OK, that covers the laser scanning aspect of the project, but this was in part about immersive visualisation of the studio space, and for that we also needed photographic imagery. The P20 integrates a 5MP camera; however, we wanted much better than this, so went for an external solution that we could then integrate at a later stage. For those familiar with this technical setup, it involved placing the optical centre of the camera lens in exactly the same position as that of the laser scanner, photographing a panorama of the studio (rotating around the nodal point of the lens) and then “stitching” the photos together. We will cover the details of this workflow in a later post, but I wanted to note the equipment we used here. Specifically, we wanted a high resolution camera with excellent dynamic range - for this we chose the Nikon D810. This has an impressive 36MP sensor with the low pass filter removed, meaning that there is more fine detail in the imagery. DxOMark testing shows it has an impressive 14+Ev dynamic range at ISO100. In all, it should offer the best imagery possible of this interior space. We also wanted to think carefully about the focal lengths and resulting output imagery, but more on this in the next post. For the record, we took Nikon 24mm f/2.8D and 16mm Fisheye f/2.8D lenses.

The next two posts will look at the production of the panoramic photos and the initial test scan.



As easy as 1-2-3

Monday, January 19, 2015

My PhD student James O’Connor (and by the way, check out his cool 3D model generated from video) reminded me of my (unknown until this point!) mantra/advice for a PhD:

1. Complete ONE PhD thesis
2. Write TWO publishable papers
3. Any talk/presentation will (nearly) always have THREE main points

Can’t go wrong!