Ardusat for Schools

Wednesday, February 25, 2015 at 13:19:56

A good article over at EdSurge on how Ardusat have raised money in a funding round to support the use of micro-satellites by schools for learning. Real science for real kids - inspiring.

Amazon Prime Drone

Monday, February 16, 2015 at 17:34:38

Clearly Amazon are very keen on the whole drone/UAV delivery thing….. the BBC reports on a recent FAA draft ruling that requires operators to have “line of sight” with their kit. This pretty much puts paid to delivery for the time being. However, expect this (air!)space to become crowded, as a number of delivery operators will push for it. I guess to start with we might see extended trials in some regions…. one to watch, as it will clearly affect the wider UAV debate. And take a look at the Small UAV Coalition…. the only thing “small” is the UAV. Some big backers (see the members) behind this advocacy group… smiley happy website though!!

Studio of Objects: post-processing (Guest Post: Adam Goddard)

Sunday, February 15, 2015 at 16:13:40

With the scan complete for the Paolozzi Studio of Objects project, our starting point was the 41.7GB of compressed data from the P20, in the form of .bin files which required post-processing. The tasks required sounded fairly simple: import the data into Cyclone, import and attach the cube map images using the ‘texture map browser’, register the scanworlds to produce one point cloud, then export and send to Touchpress as a PTX file (a fairly standard point cloud format). However, for a number of reasons, it was not as simple as we had envisaged!

The first problem related to the multiple returns issue noted earlier. Information on optimal import settings and post-processing techniques to help remove these points had been requested from Leica before the Edinburgh scan, but it was a while before they were able to respond.

With post-processing starting, it was discovered that Cyclone v8 (installed on the network at Kingston) couldn’t import the P20 data; v9 was required. This had not been an issue during testing, as post-processing had been carried out on a single install of v9; however, this was not possible following the Edinburgh scan due to the amount of data collected. Cyclone was finally up and running just after New Year!

More problems were to follow… after numerous unsuccessful attempts to import the data, it was discovered that the existing workstations within the university did not meet the system requirements for Cyclone 9.0, so a new workstation was built. Originally this was accessed remotely, but the amount of data being transferred over the network made this impossible, so a new desk was arranged in order to access the workstation locally. However, the problems continued.

During this time we were in constant contact with Leica’s helpdesk, who provided faultless assistance for what must have seemed like an endless series of issues! Leica offered to import the data themselves and return it to us as an IMP file, ready to be accessed by Cyclone. This offer was duly accepted and a flash drive dispatched to Leica’s Milton Keynes offices. Leica’s second-tier support team (in Germany) then became involved once again. After remotely accessing the workstation, they established that the import was failing because the Cyclone software had been installed by the IT department with the settings used for previous Cyclone versions. These allowed the log and data files to be mapped over a network; however, in Cyclone 9.0 these files have to be mapped locally. With the settings altered the import was finally successful - and the flash drive duly arrived from Leica just as the issue had been resolved!

Once the scans could be viewed it was evident that the multiple return issue was still a problem despite adopting the import settings recommended by Leica. Furthermore, the tools Leica suggested could potentially eliminate this issue had little effect. This was the advice received:

“As you have seen, applying some of the filters - have slightly improved the noise levels but when applying full filters - this removes too much information. Our second support have gone through the workflows and have confirmed that the only way to remove these type of extraneous data, is to do manual fencing. This obviously means more manual work. There is currently no automated functionality that could solve your issue.”

Following this confirmation, and given the amount of manual work required to remove the points, it was decided to produce a point cloud immediately, and then to review potential automated solutions and assess the time the manual clean-up would take.

So mapping the cube map images began. The images are imported by right-clicking the image folder found under the ‘Known Coordinates’ section of each project. In order to align the cube maps to the point clouds, matching points have to be manually picked from the image and the point cloud. This process is carried out in the modelspace for each scan using the ‘texture map browser’ found under ‘Edit Object - Appearance’. Only three matching picks are required for each cube map, but the more picks you have, the better the alignment is likely to be. After selecting matching points, Cyclone computes the picks to provide an estimated pixel error for each pair of picks and an average error across all of the picks. Any pair of picks with a large error can then be removed and the errors recomputed, until there is a satisfactory number of picks with a low average pixel error.

The texture map is then confirmed by selecting ‘create multi-image from cube map’ in the texture map browser, which adds the images under the correct scanworld in the navigator window. Right-clicking on the MultiImage folder within the scanworld provides the option to ‘Apply MultiImage’, which burns the texture map to the point cloud. This can also be completed in batch mode by right-clicking on the project folder and selecting ‘Batch Apply MultiImages’. The original images can then be deleted from the ‘Known Coordinates’ folder so the next cube map can be imported in order to texture map the next point cloud. This is a time-consuming process, but worth the effort to produce a well-aligned result.
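As a rough sketch of that pick-error bookkeeping - assuming (this is my guess at the metric, not something stated in Cyclone’s dialogs) that the per-pick error is the 2D pixel distance between the picked image point and where the matching cloud point reprojects - the remove-and-recompute loop looks like this:

```python
import math

def average_pixel_error(picks):
    """picks: list of ((u_picked, v_picked), (u_reproj, v_reproj)) pairs.
    Returns the per-pick errors and their average, in pixels."""
    errors = [math.hypot(up - ur, vp - vr)
              for (up, vp), (ur, vr) in picks]
    return errors, sum(errors) / len(errors)

def prune_worst(picks, max_error):
    """Drop any pick pair whose individual error exceeds max_error."""
    errors, _ = average_pixel_error(picks)
    return [p for p, e in zip(picks, errors) if e <= max_error]

picks = [((100.0, 200.0), (101.0, 201.0)),   # ~1.4 px error
         ((400.0, 120.0), (400.5, 120.0)),   # 0.5 px error
         ((250.0, 300.0), (258.0, 306.0))]   # 10 px error: a bad pick
kept = prune_worst(picks, max_error=3.0)
print(len(kept), round(average_pixel_error(kept)[1], 2))  # 2 0.96
```

The point of the sketch is the prune-then-recompute cycle that drives the average pixel error down; Cyclone of course performs the actual reprojection itself.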

Once all of the point clouds had been texture mapped, they were ‘stitched’ or registered together to produce one point cloud containing the data from all the scans. This is a process made easy by our use of targets during the scanning process in Edinburgh. Once the required scanworlds had been selected, the ‘Auto Add Constraints’ function was used which produced a registered point cloud with only a 2mm RMS error.
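The 2mm RMS figure Cyclone reports can be read as the root mean square of the 3D residuals between matched target coordinates once the scanworlds have been aligned. A minimal sketch (the target coordinates below are invented for illustration, with millimetre-scale residuals of the kind reported above):

```python
import math

def registration_rms(targets_a, targets_b):
    """RMS of the 3D residuals between matched HDS target coordinates
    (in metres) after two scanworlds have been registered."""
    sq = [(ax - bx)**2 + (ay - by)**2 + (az - bz)**2
          for (ax, ay, az), (bx, by, bz) in zip(targets_a, targets_b)]
    return math.sqrt(sum(sq) / len(sq))

# hypothetical target coordinates seen from two registered scanworlds
a = [(0.0, 0.0, 0.0), (5.0, 0.0, 1.0), (2.0, 3.0, 0.5)]
b = [(0.002, 0.0, 0.0), (5.0, 0.001, 1.001), (2.0, 3.0, 0.503)]
print(round(registration_rms(a, b) * 1000, 2), "mm")  # 2.24 mm
```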

The registered and texture mapped point cloud was then exported as a PTX file, which includes the RGB data as well as XYZ coordinates and intensity for each point. The export process is a long one (allowing time to write blog entries!) with the resulting file 140GB uncompressed and 27GB compressed, containing around 2 billion points! Posted via USB to Touchpress….
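For the curious, a PTX file is plain ASCII: per scan, a small header (column/row counts, scanner position and axes, a 4x4 transform) followed by one line per point. The writer below is a sketch of that layout from memory - check the exact header ordering against Leica’s Cyclone documentation before relying on it:

```python
import os
import tempfile

def write_ptx(path, n_cols, n_rows, points):
    """Minimal single-scan ASCII PTX writer (layout from memory, not a
    verified spec): column/row counts, scanner position, scanner axes,
    a 4x4 transform, then one 'x y z intensity r g b' line per point
    with intensity normalised to 0..1."""
    with open(path, "w") as f:
        f.write(f"{n_cols}\n{n_rows}\n")
        f.write("0 0 0\n")                  # scanner position
        f.write("1 0 0\n0 1 0\n0 0 1\n")    # scanner axes
        # 4x4 transformation matrix (identity here), row by row
        for row in ("1 0 0 0", "0 1 0 0", "0 0 1 0", "0 0 0 1"):
            f.write(row + "\n")
        for x, y, z, i, r, g, b in points:
            f.write(f"{x} {y} {z} {i} {r} {g} {b}\n")

pts = [(1.0, 2.0, 3.0, 0.5, 255, 128, 0)]
path = os.path.join(tempfile.gettempdir(), "scan.ptx")
write_ptx(path, 1, 1, pts)
print(open(path).read().splitlines()[-1])  # 1.0 2.0 3.0 0.5 255 128 0
```

With around 2 billion such lines, it is easy to see where the 140GB uncompressed figure comes from.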

in close association with hijack and Dacapo

Studio of Objects: The BIG One

Monday, February 9, 2015 at 09:51:44

(please view the accompanying gallery to this post)

I’ve already talked about the initial objectives of the Studio of Objects project - and I say objectives, but if we could boil it down it would simply be to “recreate Paolozzi’s studio.” I’ve already outlined the technical requirements and how we tested them - so on a cold and bright weekend in October we headed up to the Scottish National Gallery of Modern Art. I wasn’t involved in any of the preparation of the studio itself - that fell to our partners at hijack and Dacapo (Gilly and Ceri), who worked very closely with SNGoMA. In practice this meant liaising with the gallery to give us access to the studio on the Saturday and Sunday, installing significantly brighter lighting both in the roof and under the cabin bed, renting the Nikon D810 (and lens), arranging for delivery of the Leica P20 (Leica very kindly donated scanner time to the project), arranging the (airbnb) accommodation and panicking about any last-minute glitches! All I can say is that when Adam and I turned up on Saturday morning everything looked perfect!

Given that the studio is being preserved…. its preservation (i.e. that it remains undamaged!) is paramount, so both lighting installation and laser scanning are higher-risk activities. We’d really like to thank Kirstie Meehan at SNGoMA, who spent the whole weekend making sure we didn’t damage anything! If you look at the studio space you will see that it is very cramped, and it was important to minimise the number of people in it - we only ever had a maximum of two people (myself and Adam), and only when needed. Otherwise it was just myself.

One of the most important things to do when running a scan is - DOCUMENT EVERYTHING. The mists of time will change what you remember, so it is critical that you note down every decision, step and procedure undertaken. Below is the field sketch Adam created for the studio, showing the position (and name) of each of our scan locations. An accompanying table then notes down each individual scan at each location and the settings for that scan. We also noted down the Nikon camera settings and the exact measurements on the Nodal Ninja.

This setup process took much longer than we anticipated, partly because we had to check everything was there and working, and then decide exactly where we would put the “scan stations” and the HDS targets within the studio. We were also concerned about minimising the multiple returns problem noted in the scan testing. Our rationale was therefore for excessive, redundant scanning, but at the lowest “quality” setting, which is significantly faster but didn’t (for our purposes) affect the data collected. Each full dome 360 degree scan took about 3 minutes to complete, and the workflow became pretty slick - move and level the tripod, attach the scanner (which is “always on”), identify the targets in the studio (and scan them), then set the scanner to a full dome scan (rapidly exiting the studio!). At this point we switched to the D810, attaching the Nodal Ninja and taking the 48/12 photos for the 24mm/16mm panoramas. By the end of the day we were pretty tired and had only finished two scan stations - we retired to basecamp and left Adam’s laptop importing the scan data from the first scan, which took 2 hours. We were satisfied that it had been collected satisfactorily, although the multiple return issue still remained….

Day Two dawned bright and early, and a brisk walk to SNGoMA set us on our way - with the workflow optimised and the scan stations decided, we very rapidly worked our way through the remaining locations. The trickiest element was scanning from the top of the bed. This is about 2m high and contains a fairly small bed with very little space around it. We could only have one person on the platform (me), which meant very carefully running through the workflow above in a constrained space with very limited access…. whilst 2m off the ground!! Safety was the priority, which meant consciously being aware of where the edge of the platform was at all times. Obtaining scans of the HDS targets was the hardest part, as in both scanner locations the on-board screen was very difficult to access and had poor viewing angles. Then, when it came to the scan itself, I hid behind the bed or under the tripod whilst on the platform!!

With the main scans complete, we then looked at “in-filling” the data we had. When you look at the studio you realise there are many nooks-and-crannies. With so much “stuff” there will always be shadows with line-of-sight scanning - the more scan locations there are, the more in-filling you can do. With that in mind we did four “fast” scans (noted as (F) on the sketch) at slightly lower resolution, designed to add a slightly different perspective. In total we had 11 scan locations and 20 separate scans, at a mix of 1.6mm and 50mm spacing.
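As a back-of-envelope check on what those spacing settings imply: if the quoted point spacing is taken at a 10m reference range, and the scanner sweeps a roughly 360 x 270 degree dome (both assumptions for illustration, not figures from the post or the P20 spec sheet), the per-dome point counts fall out of a little trigonometry:

```python
import math

def dome_point_estimate(spacing_m, ref_range_m=10.0,
                        h_fov=2 * math.pi, v_fov=1.5 * math.pi):
    """Back-of-envelope point count for one full-dome scan. Assumes the
    point spacing is quoted at a 10 m reference range and a ~360 x 270
    degree field of view -- illustrative assumptions, not spec values."""
    step = spacing_m / ref_range_m              # angular step in radians
    return int((h_fov / step) * (v_fov / step))

print(f"{dome_point_estimate(0.0016):.2e}")     # 1.6 mm setting
print(f"{dome_point_estimate(0.050):.2e}")      # 50 mm setting
```

At the 1.6mm setting this lands around a billion points per dome - which makes clear why only some scans could be run at the finest spacing, given the roughly 2 billion points in the whole registered cloud.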

We didn’t think we’d be stressed for time, but with everyone leaving at different times via different transport it ended up being slightly rushed….. we had to make sure that all rental equipment was accounted for, packed and ready to go back on Monday morning. We also wanted to make sure that all data was backed up, and in particular that all data was off the scanner - in the end we collected 40GB of compressed scan data on the P20 and about 40GB of RAW camera imagery from the D810 (split between the spherical panoramas and the photogrammetry James undertook). As a final note, the first thing we did once back was to coalesce all the data (from different media) into one location as a master copy and then mirror that to network storage at Kingston as part of the archival strategy.

With data collection complete, processing then began….. I’ve already noted the processing workflow for the spherical panoramas. Next up will be the laser scan data processing!

(Kirstie, Adam, Gilly, James, Mike, Chris)

in close association with hijack and Dacapo

FSP Viewer

Monday, February 9, 2015 at 07:54:34

In an earlier post I talked about the spherical panoramas we have created for the Studio of Objects project. These are humungous 295MP images which, even when compressed, are pretty big files. Obviously with a (spherical) panorama, rather than just looking from left to right, it would be good to be able to rotate fully around (as if you were standing in the middle), in the same way you can in a point cloud. Well, there are a few viewers for spherical panos, my favourite being FSP Viewer, which is fast and easy to use.
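Under the hood, a spherical pano viewer maps each screen ray to a pixel of the panorama. Assuming an equirectangular projection - a 2:1 image covering 360 x 180 degrees, which at 295MP would be roughly 24,300 x 12,150 pixels (my assumption about the format, not stated in the post) - the inverse mapping from pixel to view direction is a couple of lines of trigonometry:

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit view
    direction (x, y, z), assuming the image spans 360 x 180 degrees,
    with +z up and the image centre looking along +x."""
    lon = (u / width) * 2 * math.pi - math.pi      # -pi .. +pi
    lat = math.pi / 2 - (v / height) * math.pi     # +pi/2 (up) .. -pi/2
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# the centre pixel of a ~295MP (24300 x 12150) pano looks straight ahead
print(pixel_to_direction(12150, 6075, 24300, 12150))  # (1.0, 0.0, 0.0)
```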

(and once I’ve set up the online image viewing I’ll post a pano there)

in close association with hijack and Dacapo

Studio of Objects: point clouds from photogrammetry (Guest Post: James O’Connor)

Monday, February 2, 2015 at 17:48:12

(Ed: this is a guest post by James O’Connor)

Whilst the main aim of the technical side of the Studio of Objects project was the use of laser scanning to reconstruct and archive the 3D geometry of Paolozzi’s studio, as conserved at the Scottish National Gallery of Modern Art, we were interested in, at least partially, trialling the use of photogrammetry to achieve the same goal - and in particular structure from motion.

Structure from motion (SfM) is a strategy for image matching and 3D point cloud reconstruction based on minimal input information - in particular, no prior information about camera settings or shooting positions. Whilst these extra unknowns add uncertainty to the solutions of the (notoriously non-linear!) equations, for a well-designed survey the image matching can minimise errors in the solution stage if things are kept as consistent as possible. Another nuance of SfM, as opposed to traditional photogrammetry, is that the more angles the better. Furukawa’s dense matching algorithm has demonstrated its versatility in this regard, and so we set out to capture a mixture of orthogonal imagery with some oblique photos worked in.

Having a Nikon D810 to hand isn’t something you could hope for every day, so for the few hours I was let loose in Paolozzi’s studio I knew a well-designed survey was key. Whilst we discussed some strategies that were post-processing and data heavy, such as focus-stacking and bracketing (High Dynamic Range Imaging is something I’ve become fascinated by!), the majority of the imagery was captured using predefined settings with two different prime lenses (Nikon 24mm and 50mm), in order to ensure every part of the room was in focus from every position imagery was captured from. This meant shooting from 15 different positions facing in either direction using the 50mm lens, and focusing at shorter distances for near field objects from 40 or so positions using the 24mm lens.
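That “every part of the room in focus” requirement is essentially a hyperfocal-distance calculation: focus at the hyperfocal distance and everything from half that distance to infinity is acceptably sharp. A sketch (the f/11 aperture and the conventional 0.030mm full-frame circle of confusion are my assumptions, not values from the shoot):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.030):
    """Hyperfocal distance in mm: H = f^2 / (N * c) + f. Focusing here
    keeps everything from H/2 to infinity acceptably sharp. The 0.030 mm
    circle of confusion is the conventional full-frame value, an
    assumption rather than a figure from the post."""
    return focal_mm**2 / (f_number * coc_mm) + focal_mm

# e.g. the 24 mm prime stopped down to f/11 (aperture assumed)
h = hyperfocal_mm(24, 11)
print(round(h / 1000, 2), "m")   # 1.77 m; sharp from ~0.88 m to infinity
```

The longer 50mm lens has a hyperfocal distance roughly four times further out, which is consistent with using it for the far-field shots and the 24mm for near-field objects.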

Imagery was captured in both RAW and JPG formats, RAW for its higher dynamic range and JPG for its ease of use. Some early results were produced using the very easy to use VisualSfM (see image below), a free to use SfM solution that was born out of work done by Noah Snavely in the late noughties. It’s my first port of call to exhibit the versatility of modern photogrammetry, as simple models conserving geometry can be captured from something as user-friendly as a video with a handheld camera and built into usable products (see this example). This also means historical archives could be data mined to generate meaningful 3D models of various culturally and scientifically important areas.

(image: SfM point cloud of the Paolozzi studio)

An example of one corner of the room has been processed and can be interactively viewed here (using the fabulous Potree). The imperfections were deliberately left in, to show the product without any manual intervention, and one can imagine that with further processing (outlier removal, surface meshing) you would get a pretty good idea of what this part of the studio was like. Unfortunately some areas remain without points, as appropriate imagery couldn’t be obtained due to both time and physical constraints - but one exciting prospect is that images of Paolozzi’s studio could be mined from the internet to fill in the gaps, which would be very interesting to see!
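The outlier removal mentioned above is often done statistically: drop points whose mean distance to their nearest neighbours is anomalously large compared with the rest of the cloud. A self-contained (and deliberately brute-force) sketch:

```python
import math
import statistics

def remove_outliers(points, k=3, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours is more than std_ratio standard
    deviations above the cloud-wide mean. Brute-force O(n^2) -- fine
    for a sketch, hopeless for a 2-billion-point cloud."""
    def knn_mean(i):
        d = sorted(math.dist(points[i], p)
                   for j, p in enumerate(points) if j != i)
        return sum(d[:k]) / k
    means = [knn_mean(i) for i in range(len(points))]
    mu, sigma = statistics.mean(means), statistics.pstdev(means)
    return [p for p, m in zip(points, means) if m <= mu + std_ratio * sigma]

# a tidy 5x5 grid of points plus one obvious stray
cloud = [(x * 0.01, y * 0.01, 0.0) for x in range(5) for y in range(5)]
cloud.append((5.0, 5.0, 5.0))
print(len(remove_outliers(cloud)))   # 25: the stray point is removed
```

Production tools use spatial indices (k-d trees, octrees) for the neighbour search, but the filtering criterion is the same.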

Watch this space in the future for developments and other applications of this technology: whilst competing with laser scanning is tricky, the low cost and accessibility of photogrammetry means it won’t be going anywhere for a while, and with cameras becoming so ubiquitous in modern society (BIG data!) one can only imagine where we’ll be in ten years!

in close association with hijack and Dacapo

Studio of Objects: Testing Times

Sunday, February 1, 2015 at 22:49:32

(please view the accompanying gallery to this post)

The initial objectives of the Studio of Objects project led us to define a set of technical requirements, and with this a plan to rent the Leica P20 rather than use our own Leica ScanStation 2, purely from a perspective of speed and size. In fact, prior to this decision, we undertook a test scan at Kingston University in one of the photographic studios (read: white backdrop) using a range of “objects” placed within the space (some of them from a skip… thanks Chris!!). We took scans at 1.0cm point spacing, as well as photos using the integrated camera, from two locations with survey targets placed in the scene - in Cyclone we registered the scenes and then exported the point cloud along with the RGB values from the overlain colour photos. Nothing new here - really just to demonstrate the workflow and get some realistic data over to Touchpress so they could work on the 3D import and rendering for the Apple iPad Air 2.

However, with the decision to move to the P20 and an external camera solution, we wanted to test this both in the lab and in a similar “studio archive” setup, with lighting similar to what we would expect to use in Edinburgh. What could be better than the Krazy Kat archive, a collection of over 20,000 objects amassed by Paolozzi over his lifetime and a source of inspiration for much of his work? This is housed at Blythe House, now the store and archive of the Victoria and Albert, Science and British Museums. As they note:

“The over-arching theme of the collection is ‘The Image of the Hero in Industrial Society’, which can be seen clearly in our display of 3D objects where models of military heroes co-mingle with Mickey Mouse figures, robots, astronauts and characters such as Frankenstein to create some weird and wonderful juxtapositions.”

We duly ordered the camera nodal setup, rented the scanner from Leica and arranged for lighting. Turning up on the day was seamless, and whilst Nick arranged the lighting and the archivists unpacked a number of objects from the archive, Adam and I set to work with the P20 (see the accompanying gallery). Neither of us had used this scanner before, but it is very similar in operation to the ScanStation 2, so a quick read of the manual and we were away.

This time we scanned from three locations, using scanner targets in the image. We also used Gilly’s Canon 5D Mk2 with a 24mm lens to take the external photography - it was at this point that we realised we had not ordered both parts of the adaptor, missing the tribrach mount. However, we knew that we only needed one shot at each location (as we weren’t doing a full dome scan), and that as long as we made up a spacer we could hold the bracket in exactly the right position. This worked well and we acquired the needed external photos.

Duly completed, we had three scans around our objects at a variety of different density and accuracy settings. Nick also experimented with the lighting - it was gelled to give it a very white “daylight” colour; however, he was intrigued as to whether it had any impact upon the scan itself. We duly obliged, but actually laser scanners are active devices, so we could have happily scanned in a lightless room. What this did highlight in post-processing is that if you do more than one scan under one scanworld, they will automatically be merged, so there is no opportunity to separate them for registration (something we wanted to do here for testing purposes). When we undertook the real scan in Edinburgh we generated a new scanworld for each scan (to avoid having to go through this palaver).

Viewing the point clouds produced from the scanning (see video below) it was evident that there was an issue with multiple returns. Lots of points were located in the spaces between objects which was causing a “dragging/smearing effect” to occur, moulding objects together (see the accompanying gallery).

Leica were initially unsure of the cause but suggested vibration and lack of calibration as possibilities; however, neither of these was an issue. The multiple returns only occurred in the direction directly away from the scan location - if vibration were the cause, the points would have been found in all directions. Leica finally advised that “the scanner is finding it difficult to distinguish between the end and beginning of the front object to the object to the rear. Also the object surfaces do play a vital role in the reflectivity of the scan data.” We requested more details on this issue (e.g. whether there was a minimum/maximum distance at which it was known to occur, and whether there was anything we could do to reduce the effect) and it was escalated to Leica’s tier 2 support.

Leica also advised that the effect could be reduced by using lower resolution scans, although this defeats the purpose of having a high resolution scanner! In addition, import settings could be altered to reduce the effect, including ‘remove low quality pixels’ (although this was found to significantly reduce the resolution), ‘remove intensity overloaded pixels’ and ‘remove mixed pixels’. Leica also suggested using the ‘area filter’ to remove data noise. In terms of post-processing, ‘trim edges’ and ‘cut by intensity’ were suggested to alleviate the problem (as well as, obviously, manually removing points).
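An intensity cut of the kind suggested is conceptually simple: mixed pixels straddling two surfaces tend to come back with unusually weak or saturated returns, so trimming the extremes of the intensity range removes some of them. A sketch (the 0.05/0.95 thresholds are illustrative, not Leica’s values):

```python
def cut_by_intensity(points, low=0.05, high=0.95):
    """Sketch of a 'cut by intensity'-style filter: keep only points
    whose return intensity (normalised 0..1, stored here as the fourth
    field) falls inside a band. Thresholds are illustrative defaults,
    not values from Cyclone."""
    return [p for p in points if low <= p[3] <= high]

pts = [(0.0, 0.0, 1.0, 0.80),   # solid surface return
       (0.0, 0.0, 1.5, 0.02),   # weak return, likely a mixed pixel
       (0.0, 0.0, 2.0, 0.99)]   # intensity-overloaded return
print(len(cut_by_intensity(pts)))   # 1: only the solid return survives
```

The trade-off, as found with ‘remove low quality pixels’, is that legitimate points on dark or shiny surfaces also return extreme intensities and get cut along with the noise.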

A huge amount was learnt from the test scans, and we took that knowledge with us on the main scan in Edinburgh - we also spent several more sessions in the lab making sure the setup of the external camera was correct, something we’ve covered in an earlier post. Next post - the big show itself: scanning Paolozzi’s studio at the Scottish National Gallery of Modern Art.

(Thanks to Adam Goddard who worded the last few paragraphs from his experience in the processing)

in close association with hijack and Dacapo
