Photogrammetry Project – Part 2

The first step in creating my 3D model was to upload my photos to PhotoScan; I had 97 images in total. Once they were uploaded, the next task was to check whether any of them were of too poor a quality to use in the alignment phase. All of my images were above the recommended quality level, so it was not necessary to discard any just yet.
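For reference, the same quality check can be scripted through PhotoScan's Python console (available in the Professional edition). This is only a rough sketch, assuming the 1.x API and the 0.5 quality threshold that Agisoft suggests; the project itself was done through the GUI.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Estimate per-image sharpness for every photo in the chunk
chunk.estimateImageQuality(chunk.cameras)

for camera in chunk.cameras:
    quality = float(camera.photo.meta["Image/Quality"])
    if quality < 0.5:  # Agisoft's suggested cut-off for blurry images
        camera.enabled = False
        print("Disabled {} (quality {:.2f})".format(camera.label, quality))
```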

The next step was to draw masks on my images. Because the background behind the vessel was uniformly white, the software had little trouble identifying the vessel’s edges. This meant the “magic wand” tool worked quite well, with only a few slightly dark areas outside the vessel causing any confusion for the software.

Using the ‘magic wand’ masking tool to mark off the vessel’s boundary lines

As a workaround, I could use another masking tool, the “intelligent scissors”, to cut away any superfluous areas of the mask that I didn’t need. Using the “magic wand” first and then the “intelligent scissors” to tidy up the mask borders worked really well and gave me clean, neat results.
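Just to illustrate the idea behind that kind of tolerance-based selection on a white backdrop (this is not how PhotoScan implements its tools), here is a small standalone Python sketch that writes a black-and-white mask by treating every near-white pixel as background. The file names and tolerance value are made up for the example.

```python
import numpy as np
from PIL import Image

def background_mask(photo_path, mask_path, tolerance=30):
    """Write a binary mask: white = keep (vessel), black = masked background."""
    img = np.asarray(Image.open(photo_path).convert("RGB")).astype(np.int16)
    # A pixel counts as background if all three channels are within
    # `tolerance` of pure white (255, 255, 255).
    dist_from_white = (255 - img).max(axis=2)
    keep = dist_from_white > tolerance
    Image.fromarray((keep * 255).astype(np.uint8)).save(mask_path)

# Hypothetical file names, purely for illustration
background_mask("IMG_0001.JPG", "IMG_0001_mask.png", tolerance=30)
```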

Intelligent scissors tool was used to cut just above the base of the bowl, which was too dark and unnecessary to use in alignment, since the upside-down images of the bowl would capture the base anyway

The first time I attempted the masking process I mistakenly used the marker tool, which meant I had unknowingly carried the background information in my images through to the later phases of processing the model.

Image of vessel before any masking had been carried out

The next step was to align the photos. I had to redo this process several times, in a trial-and-error sort of way. The first time, I noticed there were many stray white points around the exterior of my aligned model. To fix this problem, I chose the “constrain features by mask” option in the pre-alignment settings. I also noticed that part of the bowl’s shape, particularly at the rim, was becoming stretched or distorted, with a section of the rim trailing off like the arm of a spiral galaxy. This clearly needed correcting, so I decided, partly out of curiosity, to disable all the images of the vessel taken from the highest point: those used to capture the inside of the vessel and those that caught most of the base from high up.
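As a rough guide to how those alignment settings map onto PhotoScan’s Python API (the GUI’s “constrain features by mask” option becomes filter_mask=True), here is a sketch assuming the 1.x module name and default key/tie point limits rather than the exact settings used here:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Match features with masked areas excluded, then align the cameras
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection,
                  filter_mask=True,          # GUI: "Constrain features by mask"
                  keypoint_limit=40000,
                  tiepoint_limit=4000)
chunk.alignCameras()
```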

Image of bowl from highest point

I felt that the difference between the rest of the images and the ones taken from high up might have confused the software in some way, though I’m still not entirely sure why. I also realised that the base was more or less captured in the images taken from the “second-highest” position, so the ones I had disabled probably weren’t giving the software any information it didn’t already have about the vessel anyway.

Image of vessel’s base

 

In disabling these images, however, I knew I would be sacrificing the inside of the vessel in order to get a proper, undistorted model of its base, sides and rim. At least this way I would have a model that looked like the vessel I captured in the museum, albeit one without an interior.
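Scripted, disabling a whole ring of shots before re-aligning is just a matter of flipping the enabled flag on those cameras; the “top_” label prefix below is a made-up naming convention standing in for the highest-point images.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Disable the highest-point shots (hypothetical "top_" label prefix),
# then re-run alignment using only the remaining, enabled images
for camera in chunk.cameras:
    if camera.label.startswith("top_"):
        camera.enabled = False

chunk.alignCameras()
```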

 

The next stage was to build the dense point cloud, which I ran on the “high” quality setting, and then the mesh.
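For completeness, the equivalent calls in the Python API look roughly like this (a sketch assuming PhotoScan 1.2-style calls; later versions split the dense cloud step into buildDepthMaps() followed by buildDenseCloud()):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Dense point cloud on the "High" quality setting
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                      filter=PhotoScan.AggressiveFiltering)

# Mesh built from the dense cloud
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation)
```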

 

Creating dense point cloud

In hindsight, I feel that 97 images was not quite enough for the processing stage. If I were to capture the images again at the museum I would probably take between 125 and 150, as it is better to have more images than you need: you can always discard the ones that are not aligning properly and keep the ones that help the processing. Although I captured the vessel from four different heights, I would use five next time, as I feel the ‘jump’ between the second-highest point and the highest confused the system somewhat, or at least made aligning all the points difficult.

3D image of bowl with surface pattern

In terms of processing, it seems I have yet to find the magic formula that makes these images come out clear and well aligned. I am still getting ‘debris’: bits of unaligned or misaligned bowl scattered about the rendered model. I then realised I could remove these bits of debris with a cutting tool, so I went back to the dense cloud, cut them away and redid the mesh procedure.
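One scripted analogue of that clean-up (not the cutting tool itself) is to tighten the reconstruction region so stray points fall outside the bounding box before rebuilding the mesh; the 0.8 shrink factor below is purely illustrative.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Shrink the bounding box so floating debris around the vessel is
# excluded from reconstruction (0.8 is an arbitrary example value)
region = chunk.region
region.size = PhotoScan.Vector([region.size.x * 0.8,
                                region.size.y * 0.8,
                                region.size.z * 0.8])
chunk.region = region

# Rebuild the mesh from the dense cloud within the tightened region
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation)
```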

Pot ‘debris’

 

Issue when aligning with images captured from high position

 

Food Bowl by alexghughes on Sketchfab
