Group assignment objects that required recording.

Group Assignment Part 3: The Figurines (AFF-622)

This blog is the third in a series; the first can be found here and the second here.

Technology 3: Photogrammetry 

As mentioned in my first blog in this series, we decided as a group that the best way to accurately capture and record the three figurines was photogrammetry. Photogrammetry is a means by which we can make measurements from photographs and recover the exact positioning of surface points on an object. Using photogrammetry, we were able to create photorealistic 3D models of each of the figurines in Agisoft PhotoScan. Similar to the RTI method outlined in my last post, there are two stages to creating each of the models: capture and post-capture processing. Although this was undoubtedly the most laborious recording method for the team, the results were impressive.

Setting Up

We first set up our rig for the capturing process. We decided that a lightbox would best light the figurines for optimal capture, so we borrowed one from the imaging lab and went about constructing a lighting system around it. We planned to light the objects with three adjustable photography studio lights fitted with low-heat bulbs so as not to damage the artefacts. The bulbs themselves were 36-watt, 2,100-lumen low-intensity bulbs. Two shorter light stands lit the box from the sides and one larger stand lit it from above. After a minor incident in which the larger bulb was broken, we initially tried to light the box from behind with an LED torch, before opting for two clamped lights that sufficiently lit the box from in front and behind. We used a Canon EOS camera with a regular lens fixed to a tripod to capture the images.

Experimenting with the LED torch to find the best lighting.

The camera on the tripod, facing into the lightbox from a high angle. Note the clamped light at the front of the box.
The camera was connected to our laptops, which had Canon’s imaging software installed, allowing us to adjust settings and capture images live from the desktop. Having taken all of the necessary steps in setting up the capturing environment, we set about adjusting the camera to capture the figurines at optimal settings. We set the camera to auto-focus, generally sticking to an f-stop of f/32 and an ISO of 100, and proceeded with capturing the bull. These settings tended to vary from object to object as we decided the best way to focus and capture each angle.
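For readers wondering why such a narrow aperture helps, a quick depth-of-field calculation illustrates the trade-off. This is only a sketch: the 50 mm focal length, 60 cm subject distance and 0.019 mm circle of confusion below are assumed example values, not measurements from our actual rig.

```python
# Rough depth-of-field estimate for a close-up shot at a narrow aperture.
# All input values are assumptions for illustration, not our real measurements.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.019):
    """Return (near_limit, far_limit, total_dof) in millimetres,
    using the standard thin-lens depth-of-field approximations."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        return near, float("inf"), float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

# Assumed: 50 mm lens, subject roughly 60 cm from the sensor.
near, far, dof = depth_of_field(focal_mm=50, f_number=32, subject_mm=600)
print(f"In focus from ~{near:.0f} mm to ~{far:.0f} mm (about {dof:.0f} mm of depth)")
```

With these example numbers the zone of sharp focus is well over ten centimetres deep, whereas at a wider aperture such as f/8 it shrinks to only a few centimetres; a small aperture makes it far easier to keep a whole figurine sharp in every frame.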

The Capturing Process

We began by setting the bull up in the lightbox atop an oasis on a cardboard box. We initially decided to shoot the figurines from two camera heights, one high and one low, to capture every available angle, turning the bull slightly on the oasis for each image and taking two of every shot in case some were unsatisfactory. Later, we decided to capture the two other objects from three heights. After amassing a sufficient number of photos for the first angle of the bull on his feet, we realised that once we moved him onto his side we would have trouble keeping him balanced for the shots. At this point we tried a different surface, opting for a black square of foam padding, and resumed the capturing process. This seemed to work until we realised, early in the image-processing stage, that the black padded foam interfered with masking. We replaced the foam padding with a piece of black board, which solved the issue.
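To give a sense of the volume of photographs involved, here is a quick back-of-the-envelope count of a capture plan like ours. The 15-degree turn between shots is an assumed figure for illustration; the two or three heights and the duplicate shot per position are as described above.

```python
# Rough count of shots for a turntable-style capture plan.
# The rotation step is an assumed value; heights and duplicates follow the text above.

def shots_required(rotation_step_deg, heights, duplicates_per_position=2):
    """Number of photographs needed for one full rotation of the object."""
    positions_per_ring = 360 // rotation_step_deg   # stops per full turn
    return positions_per_ring * heights * duplicates_per_position

bull = shots_required(rotation_step_deg=15, heights=2)        # two camera heights
later_objects = shots_required(rotation_step_deg=15, heights=3)  # three heights
print(f"Bull: {bull} shots; stone head / water bearer: {later_objects} shots each")
```

Even before duplicates are discarded, a single object easily runs to well over a hundred frames, which is why the capturing sessions were so time-consuming.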

The stone head statue on the black padded foam.

The bull model placed on the oasis, with a better view of the lighting set-up.
Once we had captured a sufficient number of photos of the bull, we decided to streamline the process by having one member work on capturing, one member process images in PhotoScan, and one member continue the hyperspectral scanning. In terms of work management, I think this was one of our more successful moments: we multi-tasked well as a team to cover more ground faster in a very time-consuming process.

Over the course of our next two sessions, we successfully captured our dataset of images, while also completing most of the hyperspectral imaging and beginning the next phase of creating the 3D models of the figurines.

Image Processing and Creating the Models

We began processing the images for the bull while the stone head statue was being photographed. To process the images and create the models, we used Agisoft PhotoScan, both for its general power as software and for our own familiarity with the interface. The first step was to transfer the images of the bull from the camera onto a hard drive and import them into PhotoScan. From here, the quality of the images was estimated and all of the duplicates were eliminated. PhotoScan can estimate an image’s quality, assigning each a value between 0 and 1. We opted for a minimum threshold of 0.5 when selecting the images we would use, after which we selected the remaining images by eye. For the bull dataset we had two sets of images, one from each of our two camera heights; by the time we began the stone head and the ‘water bearer’, as it became known, we were taking images from three heights. The only issue with general image quality in the dataset occurred when one of the lights turned on without us realising, resulting in a slight blue discoloration on the stone head statue. This showed us the difference that small environmental factors can make to a finished project. Generally, each model ended up using between 120 and 130 images, or ‘cameras’. From here we used PhotoScan’s masking function to isolate the models from their backgrounds. Masking tells PhotoScan which points of an image are to be used and separates the desired model from the background; PhotoScan provides tools such as the Magic Wand and the Intelligent Scissors to achieve this.
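The 0.5 threshold above refers to PhotoScan’s own built-in quality estimate, which is what we actually used. As a rough stand-in for readers without the software, a similar blur screen can be run on a folder of photographs before import using the variance of the Laplacian, a common sharpness measure. The folder path and cut-off value below are illustrative assumptions, and this is not the same metric PhotoScan computes.

```python
# A stand-alone sharpness screen, analogous to (but not the same as) PhotoScan's
# 0-1 image quality estimate. Variance of the Laplacian is a common blur measure:
# low variance means few sharp edges, i.e. a likely soft or shaken frame.
# The folder path and the cut-off value are illustrative assumptions.
import glob
import cv2

BLUR_CUTOFF = 100.0   # assumed threshold; tune it against a few known-good frames

for path in sorted(glob.glob("bull_photos/*.JPG")):
    grey = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(grey, cv2.CV_64F).var()
    verdict = "keep" if sharpness >= BLUR_CUTOFF else "re-shoot or discard"
    print(f"{path}: {sharpness:.1f} -> {verdict}")
```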

Estimating image quality in PhotoScan. The images shown here sit at the lower end of our 0.5 threshold; values typically ranged up to around 0.8. The green check mark indicates successful photo alignment.

The masking process, shown on the stone head. The outline around the head has been traced using the ‘magic wand’ tool.
It was at this point in the process that we could begin actually building the models. Once the software had aligned the photos (the process of discerning each camera’s point of perspective), the next stage was to build the dense point cloud, which essentially takes all of the angles captured by the cameras and plots the object’s surface points in space. Had we not masked the images, the software would have included the backgrounds and been unable to differentiate the model from them.
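For anyone who prefers to script these stages rather than work through the menus, a rough sketch of the alignment and dense cloud steps in the PhotoScan 1.x Python console might look like the following. The exact keyword arguments differ between PhotoScan versions (and again in the later Metashape releases), and the folder and project paths are just example assumptions.

```python
# Sketch of the alignment and dense cloud stages via the PhotoScan Python console.
# Argument names vary by version; the paths below are assumed examples.
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("D:/figurines/bull/*.JPG")))

# Align photos: find matching points between images and recover camera positions.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# Build the dense point cloud from the aligned (and masked) photographs.
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)

doc.save("D:/figurines/bull/bull.psz")
```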

The initial point cloud.

The wire-frame of the bull.
The next step was to build a mesh, whereby the software joins the points of the dense cloud into a wire-frame and from this creates a solid surface. This was a lengthy process, taking several hours. The final stage was to add textures to the mesh model. The software knew where each colour belonged on the model from the alignment phase, and since the mesh had been built from the point cloud, it could now imprint the detail of the photographs onto the model’s surface. This took far less time than the previous stages.
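Continuing the same hedged sketch, the mesh and texture stages can be scripted in a similar way; the project path and keyword arguments are again example assumptions that vary between versions.

```python
# Sketch of the mesh and texture stages, picking up the project saved earlier.
# Paths and keyword arguments are assumed examples and differ between versions.
import PhotoScan

doc = PhotoScan.app.document
doc.open("D:/figurines/bull/bull.psz")   # project saved after the dense cloud stage
chunk = doc.chunks[0]

# Join the dense cloud points into a triangulated wire-frame surface (the mesh).
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 interpolation=PhotoScan.EnabledInterpolation)

# Project the original photographs back onto the mesh as a texture map.
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save("D:/figurines/bull/bull.psz")
```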

The completed mesh.

Overall, the bull model took around 12 hours in total to process at high settings. We were able to make use of PhotoScan’s ‘batch process’ function after this, whereby we could set our desired settings in advance and leave the machines to complete the process.
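We used the Batch Process dialog itself, but the same idea can be expressed as a script that runs every stage for each figurine unattended, which may be useful for readers automating a similar workflow. As before, the folder names are assumed examples and the keyword arguments vary by PhotoScan version.

```python
# Scripted equivalent of a batch run: process all three figurines unattended.
# Folder names are assumed examples; keyword arguments vary by version.
import glob
import PhotoScan

doc = PhotoScan.app.document

for name in ("bull", "stone_head", "water_bearer"):
    chunk = doc.addChunk()
    chunk.label = name
    chunk.addPhotos(sorted(glob.glob("D:/figurines/%s/*.JPG" % name)))
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
    chunk.alignCameras()
    chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
    chunk.buildModel(surface=PhotoScan.Arbitrary)
    chunk.buildUV(mapping=PhotoScan.GenericMapping)
    chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save("D:/figurines/all_figurines.psz")
```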

Textures added to the completed model.

While the bull model was processing, we were able to begin different stages of masking and processing for the stone head statue and the water bearer. This staggered approach was instrumental in getting us to the end of this stage of the project, and in general the other two models took less time to process once we had a better flow of work. After each model was completed, we cleaned it up by removing any stray points and uploaded it to Sketchfab, which allows the models to be viewed in an isolated 3D space.

This series ends in part 4, which can be viewed here.

The completed models can be viewed below.

 

Bull Zip by richardbreen1 on Sketchfab

Stone Head by richardbreen1 on Sketchfab

Water Bearer by richardbreen1 on Sketchfab

Published by

rbreen

MA Digital Humanities student in An Foras Feasa, Maynooth University. BA Music from Maynooth University.
