3D artefact recording part two

This post continues a series; the first instalment may be found here.


Photogrammetry

Based on the experience gained in the R.T.I. process described in my previous post, we recognised how much time can be spent preparing the conditions and rigging needed for a successful photographic capture session. On that basis, we set aside quite a bit of time to prepare for the photogrammetry process.

Capturing

This method requires the artefact to be evenly and well lit. The software used to create the 3D representation needs to recognise and link common points across the photos, so discrepancies in the lighting of the object in particular can adversely affect the software's understanding of the scene and the model it produces.

Equally, the camera settings need to ensure the environment is captured consistently and that the artefact itself is kept in focus. Both of these factors, lighting and camera setup, took considerable time. A white, wire-frame linen basket was used to diffuse the light from our three light sources evenly over the artefacts. This was certainly a case of trial and error.

The camera was mounted on a tripod and a laptop was used as a remote trigger to avoid blur from camera shake. Again, manually setting the ISO and f-stop for each artefact proved an arduous process of trial and error.

Once the conditions were agreed upon, shooting began, but this too was quite slow. The artefact needs to be moved so that every surface and angle can be captured. We initially rotated the artefacts while leaving the oasis, which we had been using as a platform, static. This made it difficult to keep as many points as possible in each shot, as parts of the artefacts could overhang the platform and fall out of frame. This was not necessarily a big problem, since a significant portion (over 60%) of each artefact was always in shot, but we later switched to a larger piece of card and rotated that with the artefact on top.


Processing

Processing images for photogrammetry involves a number of stages. First, the dataset of images needs to be filtered for quality. The 'estimate image quality' function in Agisoft PhotoScan Professional Edition grades the images on a scale of 0.0 to 1.0. A minimum threshold of 0.5 was adopted and all images below this standard were disregarded.
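We ran this step through the PhotoScan interface, but it can also be scripted through PhotoScan's Python API. The sketch below is illustrative only: the module name, the estimateImageQuality call, and the 'Image/Quality' metadata key follow the PhotoScan 1.x API and vary between versions.

```python
import PhotoScan  # available inside PhotoScan's embedded Python console

# Work on the active chunk of the open project.
chunk = PhotoScan.app.document.chunk

# Grade every image on PhotoScan's 0.0-1.0 quality scale.
chunk.estimateImageQuality(chunk.cameras)

# Disable any camera whose image falls below the 0.5 threshold we adopted,
# so it is ignored by alignment and every later stage.
THRESHOLD = 0.5
for camera in chunk.cameras:
    # The metadata key (and whether it sits on camera.photo or a frame)
    # is version-dependent.
    quality = float(camera.photo.meta["Image/Quality"])
    if quality < THRESHOLD:
        camera.enabled = False
```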

The remaining images were then inspected individually by eye, and most were deemed suitable. One notable exception was a portion of the images for the 'head' artefact, which were tinted blue because of light interference: the imaging lab's own lights turned on automatically during part of the capture, casting a blue colouration over some of the dataset. This appears to have distorted the colour in the model, and it demonstrated to us the importance of controlling the environment when employing this method.

Once the dataset was agreed upon, the masking process began. This involves indicating to the software which portions of each image should be considered when aligning the 'cameras' (photos) and ascertaining the common points, by masking out any parts of the image that should not form part of the model, such as the foreground and background. This task also taught us several retrospective lessons. A grey foam base which had been used as a platform during shooting proved difficult to select in the software because of its coarse and varied surface. A plain grey/black card used in later shots proved far easier to mask out because of its consistent colour and matt finish. Because Richard had begun masking while capturing was still taking place, he could report this issue before it affected all of the artefact datasets, which demonstrated the value of staggering the two processes.

[Image: Masking in PhotoScan]
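Masks can also be imported or generated in bulk rather than drawn entirely by hand. The sketch below uses the PhotoScan 1.x importMasks call with background-difference masking, which works best against exactly the kind of consistent, matt backdrop described above; the file name, constant names, and tolerance value are assumptions and differ between versions.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Derive a mask for each photo by differencing it against an empty shot of
# the backdrop (the plain grey/black card). A consistent, matt background
# keeps the colour difference clean and the tolerance low.
chunk.importMasks(
    path="backdrop_empty.jpg",                     # hypothetical background shot
    source=PhotoScan.MaskSourceBackground,         # background-difference masking
    operation=PhotoScan.MaskOperationReplacement,  # overwrite any existing masks
    tolerance=10,                                  # colour tolerance; tune per dataset
)
```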

The masked datasets were then aligned with the 'constrain features by mask' option enabled, to make use of the masking already done. This step effectively allows the software to work out each image's perspective by plotting common points.
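In the Python API, the same step is a matchPhotos call with mask filtering enabled, followed by alignCameras. This is again a sketch against the PhotoScan 1.x names; the accuracy and preselection settings are placeholders, not necessarily the settings we used.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Detect and match common points across the photos, restricting feature
# detection to the unmasked regions ('constrain features by mask').
chunk.matchPhotos(
    accuracy=PhotoScan.HighAccuracy,
    preselection=PhotoScan.GenericPreselection,
    filter_mask=True,
)

# Recover each camera's position and orientation from the matched points.
chunk.alignCameras()
```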

The next step is to build a dense point cloud, which expands on the sparse points ascertained in the alignment stage. The dense cloud is then built upon using the build mesh command, which links neighbouring points on the basis of the dense cloud. These linked points form lines, then triangles as a wireframe, and finally a three-dimensional mesh of connected points.
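These two stages correspond to two further API calls, sketched below. The quality and face-count settings are placeholders: higher values trade processing time for detail.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Densify the sparse cloud produced at alignment into a dense point cloud.
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# Triangulate the dense cloud into a connected wireframe mesh of faces.
chunk.buildModel(
    surface=PhotoScan.Arbitrary,      # suited to closed objects such as artefacts
    source=PhotoScan.DenseCloudData,  # build from the dense cloud, not the sparse one
    face_count=PhotoScan.HighFaceCount,
)
```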

 

The colours derived from each detected point in the alignment phase are added to the mesh through the build texture stage.

[Image: The textured product]

This essentially maps the surfaces detected in the photos onto their related points in the newly created 3D space.
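In the API, texturing is a UV unwrap followed by texture generation. The mapping mode, blending mode, and texture size below are assumed defaults rather than our exact settings.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Lay the mesh surface out into 2D texture coordinates.
chunk.buildUV(mapping=PhotoScan.GenericMapping)

# Blend colour from the source photos into a single texture image
# and attach it to the mesh.
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# Save the project so the textured model is kept.
PhotoScan.app.document.save()
```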

As a group we were quite pleased with the resulting models. We committed a great deal of time and effort to the photogrammetry process in particular, and many lessons were learned the hard way; as novices we often spent effort where better planning could have saved it. The learning curve was steep, but the results were rewarding.

More details of the photogrammetry capturing and processing methods employed can be found on Richard Breen’s blog here.

The next post in this series, covering hyperspectral imaging, is here.

Please view the finished models below. Thanks.

Delivered Bull Model

Delivered Head Model

Delivered Water Bearer Model

 
