Explorations in Photogrammetry – Part 4

[Screenshot: the sepulchre model created without masks]

For part 4 of my series on photogrammetry, I will discuss the creation of a 3D model of the sepulchre, the object I’ve chosen for the second part of my 3D recording assignment (for more information regarding the sepulchre and the process of photographing it, see Explorations in Photogrammetry – Part 3). As with the bowl from the National Museum of Ireland (see Part 1 and Part 2 of Explorations in Photogrammetry), the construction of the model relied on Agisoft’s Photoscan Pro and Adobe Photoshop.

Creating the 3D Model

The process of editing the pictures in Photoshop was no different than it was for the bowl from the National Museum. As I covered this in Part 2 of my blog series, I will simply refer you to that post for more information on the Photoshop workflow.

The creation of the 3D model in Photoscan was somewhat different this time. The initial process was still the same as it was for the bowl: I imported the 178 photos into Photoscan and used the programme’s image quality assessment tool to evaluate them. To my surprise, I found the quality of the images to be considerably higher than that of the bowl photos:

Number of Images   Quality          % of Total
4                  0.50 – 0.59      2%
174                0.60 and above   98%
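My filtering was done through Photoscan’s own quality tool rather than by script, but the cut-off logic can be sketched in a few lines of Python. The scores and the 0.5 threshold below are illustrative, following Agisoft’s general guideline that images scoring under 0.5 are usually too blurry to be worth keeping:

```python
# Sketch of quality-based photo filtering (illustrative, not Photoscan's code).
# Scores mimic the 0-1 quality values Photoscan's assessment tool reports.

def filter_by_quality(scores, threshold=0.5):
    """Return (kept, rejected) filename lists given {filename: quality}."""
    kept = [name for name, q in scores.items() if q >= threshold]
    rejected = [name for name, q in scores.items() if q < threshold]
    return kept, rejected

# Example with made-up scores:
scores = {"IMG_001.jpg": 0.82, "IMG_002.jpg": 0.55, "IMG_003.jpg": 0.41}
kept, rejected = filter_by_quality(scores)
print(kept)      # → ['IMG_001.jpg', 'IMG_002.jpg']
print(rejected)  # → ['IMG_003.jpg']
```

Note that a threshold like this only catches measurable blur; as described below, glare and overexposure still had to be weeded out by eye.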

However, I knew from my experience taking the photos that some would need to be manually filtered out. These were primarily photos where the sun was at the wrong angle and caused bright spots to appear, or where the photos looked overexposed. This led to a further 37 images being filtered out of the set.

I then began to apply a mask to each photo. However, this process proved to be much more involved than with the bowl. When masking the photos of the bowl, I was able to use the magic wand tool to select the background (which was a uniform colour) and, with a few simple clicks and a reasonably high success rate, filter out everything but the bowl itself. The photos of the sepulchre proved much more difficult.

Due to the lack of uniformity in the background (which contained grass, trees, and other objects in the cemetery), I was unable to utilise the magic wand tool. Thus, I found I had to draw the mask around each view of the sepulchre in every picture individually, using the lasso tool. This was not only time consuming but oftentimes difficult to do precisely, as objects from the background would occasionally blend into the sepulchre, making it difficult to determine where the sepulchre began and other objects ended.
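To illustrate why a uniform background makes automatic masking so much easier, here is a toy Python sketch of the idea behind a magic-wand-style selection: with a single known backdrop colour, a per-pixel comparison suffices. This is not Photoshop’s actual algorithm (which also considers contiguity), just the underlying principle; my sepulchre masks were drawn entirely by hand with the lasso tool:

```python
# Toy colour-threshold mask: why a uniform backdrop is easy to mask.
# (Illustrative only; my actual masking was manual, in Photoshop.)

def mask_uniform_background(pixels, background, tolerance=10):
    """Return a 2D mask: True where a pixel belongs to the object.

    pixels: 2D grid of (r, g, b) tuples; background: the backdrop colour.
    """
    def is_background(px):
        return all(abs(c - b) <= tolerance for c, b in zip(px, background))
    return [[not is_background(px) for px in row] for row in pixels]

# Tiny example: a green backdrop with one brown "object" pixel.
green, brown = (20, 200, 20), (120, 80, 40)
image = [[green, brown],
         [green, green]]
print(mask_uniform_background(image, green))  # → [[False, True], [False, False]]
```

With grass, trees, and stonework all in frame, no single background colour (or tolerance) exists, which is exactly why this shortcut was unavailable for the sepulchre.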

Once I finished the masking, I then began to run the images through the standard processing workflow in Photoscan Pro. For some reason, however, these images were taking much longer to process. While aligning the photos took about an hour and a half (which was to be expected), building the dense point cloud proved to be a challenge. I initially kicked off this process on my Sony Vaio. This is a relatively powerful laptop with 8GB of RAM, a dedicated video card, and an Intel i7 1.80GHz chip. After running for almost 40 hours and being only 40% complete with the creation of the point cloud, I decided to cancel the operation and switch to my MacBook Pro, which has 16GB of RAM and a 2.3GHz Intel i7 chip. As of the writing of this post, the process to build the dense point cloud has been running for 17 hours and is approximately 50% complete.

When I saw the amount of time it was taking to process this model, I decided to try another method. Rather than applying a mask to every photo and building the dense point cloud from the masked images, I cropped out only those areas of a photo that clearly didn’t belong (such as a few frames where I managed to capture my book bag or another person visiting the cemetery). I then aligned the photos using Photoscan Pro’s built-in alignment tool, manually cut out the extraneous information the software had pulled in, and created a dense point cloud. From there I continued to manually remove the points I did not want the application to include in the final model (such as some of the surrounding objects), then built a mesh and applied textures to create the model. This method was much less time consuming, as I didn’t need to apply an individual mask to each photo. Building the dense point cloud from all of the photos took about 20 hours: still a very intensive process, but much better than the time it took to process the masked photos.
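The manual point-cloud cleanup in this second workflow amounts to keeping only points near the object and discarding everything else. A minimal sketch of that idea, assuming a hypothetical axis-aligned bounding box around the sepulchre (in practice I selected and deleted points interactively in Photoscan Pro, and these coordinates are made up):

```python
# Sketch of point-cloud cleanup as a bounding-box crop (illustrative only;
# the real cleanup was interactive selection in Photoscan Pro).

def crop_point_cloud(points, lo, hi):
    """Keep points p = (x, y, z) with lo[i] <= p[i] <= hi[i] on every axis."""
    return [p for p in points
            if all(lo[i] <= p[i] <= hi[i] for i in range(3))]

cloud = [(0.1, 0.2, 0.3),   # on the sepulchre
         (5.0, 0.1, 0.2),   # stray point from a background tree
         (0.4, 0.5, 0.1)]
kept = crop_point_cloud(cloud, lo=(0, 0, 0), hi=(1, 1, 1))
print(len(kept))  # → 2
```

A simple box like this would not separate background objects that sit right against the monument, which is why some of the trimming still had to be done point by point.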

Assessment

Overall, I am very happy with the model itself. It turned out rather well given the lighting conditions and the difficulties I had with background objects and processing. As learning outcomes, I would note the following:

  1. Try to take the pictures on an overcast day. While this is not always within your control, pictures taken when the sun is hidden behind cloud tend to be easier to process, especially since they lack harsh shadows.
  2. Consider your environment. One thing I did not account for was the surroundings. Had I thought of it, I might have brought some old sheets to drape over nearby objects, which would have made applying the masks easier.
  3. Don’t always rely on the creation of masks. For this object, I found the model that did not rely on masks, but instead required me to manually edit the point cloud, much easier to create. With this type of object and background, I highly recommend that approach.

Final Model

As I mentioned earlier, the model utilising the mask approach is still processing. Once it finishes, I may post it here as an update. However, the model that I created without the use of masks turned out rather well. You can view it below.

Model Created Without the Use of Masks

Coming Up Next…

In my final post, I will discuss the process of 3D printing. I will evaluate available 3D printing services and explain which one I chose.