Experimenting with Photogrammetry

As part of my master's, I undertook a module which placed us in a project with the National Science Museum, Maynooth. The experience benefited me greatly, not only in developing photogrammetry skills but also in working as part of a team with set deadlines and goals. My intended role was digitising coordinator: setting up the capturing on the day and making sure the quality of the images was satisfactory. As things progressed, however, plans changed, and I ended up taking a very active role in each stage of development. This post will outline some of the tips I learned for capturing, along with some of the post-processing and the eventual creation of the 3D models.

The process and application of photogrammetry caught my attention from my first involvement with it. It allows extremely accurate, semi-automated production of high-end 3D models. The low budget and high accuracy of these models can open up projects which would previously have been unimaginable for small-scale heritage sites, as well as personal projects. The dissemination possibilities also made me extremely interested in the process; I have discussed who owns the digital in previous blog posts. I feel this is only the start of my engagement with photogrammetry, and future projects will be posted to this blog.

 

The museum holds an impressive collection of scientific instruments, most of which are associated with Nicholas Callan. We worked with the curator to choose the pieces he felt best represented the collection. The nature of the objects themselves proved difficult to capture: as anyone familiar with capturing for photogrammetry will recognise, reflective surfaces, glass, mirrors, and fine thin details are the worst offenders, and these objects were an aggregation of all of them. This caused worry at first, but I'm glad for the experience, as I feel it taught me more about the process than simpler objects would have. I made arrangements to tackle these issues on the day, but it was only in hindsight, and through experimentation during this project, that I found better solutions.

Capturing went well despite the complications, but improvements could be made. Most members of the group were not familiar with the DSLRs or equipment, which held up capturing and caused errors in the dataset. A few members were also not familiar with the general workflow for capturing objects and did not take photos in layers or in a 360° pattern around the object. I tried to fix this by demonstrating on the day and setting up cameras for their use, and when an object was captured incorrectly I reassigned members to recapture it after explaining what could be improved. However, this was not 100% effective; errors were still produced. For example, the ISO while capturing the spectrometer was as high as 2000, and the resulting noise caused a lot of errors later in processing. There is only so much you can show someone on the day; if I were working with a team like this again, I would offer a training session beforehand to make sure everybody is familiar with the equipment.

I was also experimenting with the NextEngine 3D laser scanner on the day to capture the Ford T induction coil. I ran into issues here too, with the software not auto-aligning and with lighting and positioning, but again I was happy with the results. In the end I had to manually align the scans, which was painstaking. I might write a blog post about the 3D scanner in the future.
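If I were vetting a dataset like this again, a quick script to flag high-ISO shots before processing would catch that kind of error early. Here is a minimal sketch using Pillow; the folder name and ISO ceiling are illustrative, not part of our actual workflow:

```python
from pathlib import Path
from PIL import Image

ISO_LIMIT = 800  # illustrative ceiling; depends on your lighting and sensor

for photo in sorted(Path("capture_session").glob("*.JPG")):
    exif = Image.open(photo).getexif()
    # 0x8769 is the Exif sub-IFD pointer; 0x8827 is the ISOSpeedRatings tag
    iso = exif.get_ifd(0x8769).get(0x8827)
    if isinstance(iso, int) and iso > ISO_LIMIT:
        print(f"{photo.name}: ISO {iso} - consider recapturing")
```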

 

Polarizing filter and reflection reduction

Despite the group issues, there is a lot I would personally do differently. As became apparent later in the project, the images we captured of semi-reflective surfaces caused major issues. For the capturing, I used a light box with two side studio lights and an overhead light to give even coverage. The power of these lights caused massive reflections and central hotspots which persisted throughout capturing and interfered with the image-processing algorithms. I found later, when experimenting with beer cans, that even, muted light works far better for reflective surfaces. This effect is hard to create with artificial lights; I found the cloud cover of an overcast day provides the right amount of light. Using this method along with a polarizing filter goes a long way towards reducing the effect of reflective surfaces. I also found the sharpness of the image to be extremely important, even more so than the overall resolution in pixels. With my own camera I have produced better images for model building than with the higher-end cameras provided by the college, simply by focusing on sharpness and making sure edges and outlines are clearly defined. Powders can also be used to reduce reflection, but I was conscious of the age and significance of the objects I was capturing, knowing that the curator would not be happy if I gave them a new powdered paint job.
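Since sharpness mattered so much, a simple focus check can cull soft frames before they ever reach the software. The sketch below uses the common variance-of-Laplacian focus measure in OpenCV; the threshold is illustrative and would need tuning per camera and subject:

```python
import cv2
from pathlib import Path

SHARPNESS_FLOOR = 100.0  # illustrative; calibrate on a few known-sharp images

for photo in sorted(Path("capture_session").glob("*.JPG")):
    gray = cv2.imread(str(photo), cv2.IMREAD_GRAYSCALE)
    # Variance of the Laplacian: low values indicate few strong edges,
    # i.e. a soft or blurry frame.
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    if score < SHARPNESS_FLOOR:
        print(f"{photo.name}: focus score {score:.1f} - likely too soft")
```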

Delivering a presentation on Agisoft experimentation

I used the "from background" masking method to mask my objects, which improved quality. This method compares each photo against an image of the empty background and automatically identifies the object in the foreground. Though it was very helpful and saved time, I still encountered problems: other elements in frame (the base, for instance) differed from the background image and were included in the mask. In the future I would fix this by constructing a better light box with full white coverage. Even so, this benefited our group greatly, as it cut out the arduous task of manually masking every photo.
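To illustrate the idea behind this masking method: PhotoScan implements it internally, so the following is only a rough sketch of the difference-based principle, assuming an empty "background.jpg" shot from the same camera position:

```python
import cv2

background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("object_shot.jpg", cv2.IMREAD_GRAYSCALE)

# Pixels that differ strongly from the empty background are foreground.
diff = cv2.absdiff(frame, background)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Speckles from shadows and sensor noise produce faulty masks like the
# one shown below; a morphological open pass removes most of them.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("mask.png", mask)
```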

Faulty auto-generated mask

Successful auto-generated mask

 

The software used to process the images was Agisoft PhotoScan, Photoshop, and Meshmixer. Images imported into PhotoScan went through the usual workflow, from alignment through to texture building. I first tried to address the reflection issues within PhotoScan itself, experimenting with different key point and tie point limits. I used high accuracy, as lower accuracy settings downscale the photos being analysed. I did not expect this to change much, since the camera positions were not the problem; the issues were sharpness, image quality, and reflection. There is not much I could change at this stage in future, as a good dataset is central to a good sparse cloud. We revised our datasets and removed faulty images to try to improve the results on some items.
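PhotoScan also exposes this stage through its Python scripting API, which makes it easier to keep settings consistent across experiments. A sketch of the alignment settings described above, assuming the PhotoScan 1.x API (the module and several parameter names changed in later Metashape releases):

```python
import PhotoScan  # renamed "Metashape" in releases after 1.4

chunk = PhotoScan.app.document.chunk

# High accuracy avoids downscaling photos during matching; the key point
# and tie point limits are the values I varied while experimenting.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  keypoint_limit=40000,
                  tiepoint_limit=4000)
chunk.alignCameras()
```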

I chose mild depth filtering for most objects, as this setting attempts to preserve more important features while dealing with outliers, which can be effective for defective datasets. For the Maynooth battery I tried aggressive filtering, as it was a more solid object without fine detail. Moderate was used when the others failed.
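Scripted, that choice looks something like this (again assuming the PhotoScan 1.x API; older versions take the same arguments directly on buildDenseCloud):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Mild filtering keeps fine features at the cost of more outliers;
# AggressiveFiltering suits solid objects like the Maynooth battery,
# and ModerateFiltering sits between the two.
chunk.buildDepthMaps(quality=PhotoScan.MediumQuality,
                     filter=PhotoScan.MildFiltering)
chunk.buildDenseCloud()
```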

Initial point cloud complications

Point cloud after experimentation

 

I often had issues where detail present in the dense cloud was lost and did not appear in the mesh. I used medium quality when experimenting and very high when satisfied with the results. The interpolation setting was the big differentiator at this stage. I found extrapolation to work well in many cases where detail was missing: with this setting the software attempts to generate hole-free models and makes more assumptions when joining the mesh. The default mode was used for models without this issue. I experimented with disabled interpolation but did not use it, as it leaves the model very unfinished.
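In API terms, the meshing settings I settled on would look roughly like this (PhotoScan 1.x names again):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Extrapolated interpolation closes holes by making more assumptions
# about the missing surface; EnabledInterpolation is the default, and
# DisabledInterpolation leaves gaps untouched.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.Extrapolated,
                 face_count=PhotoScan.HighFaceCount)
```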

Extrapolated model (better results but still not satisfactory)

 

I decided that I could not get any further in PhotoScan and turned to Photoshop to try to fix the dataset. In Photoshop we used colour replace, highlights and shadows, and brightness and contrast adjustments to try to improve the images, along with the patch tool to a small degree. We feared this would interfere with the nearest-neighbour algorithm in PhotoScan, but it seemed not to matter. We managed to improve the images and the end results, but for many objects not to a point where they could be published without errors.
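We made these adjustments interactively in Photoshop, but the brightness and contrast pass could equally be batched for a large dataset. A rough Pillow equivalent, with illustrative folder names and enhancement factors (not the exact values we used):

```python
from pathlib import Path
from PIL import Image, ImageEnhance

src, dst = Path("dataset_raw"), Path("dataset_adjusted")
dst.mkdir(exist_ok=True)

for photo in src.glob("*.JPG"):
    img = Image.open(photo)
    img = ImageEnhance.Brightness(img).enhance(1.10)  # +10% brightness
    img = ImageEnhance.Contrast(img).enhance(1.15)    # +15% contrast
    img.save(dst / photo.name, quality=95)
```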

Finished and textured model


I also used Meshmixer to smooth and repair some models. For this process, I exported the meshes from PhotoScan as OBJs, imported them into Meshmixer, fixed the geometry as I saw fit, and imported the OBJs back into PhotoScan to retexture them. This worked really well for creating higher-quality models and fixing geometry. I used the enhanced smooth, shrink smooth, bubble smooth, inflate, pull, and other sculpting tools to achieve this. The analysis inspector tool was used to find and fill holes, and the bottoms of the models were closed with the plane cut tool. The models destined for 3D printing were also prepared in Meshmixer.
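The PhotoScan side of this round trip can also be scripted. A sketch, assuming the 1.x API (the exportModel and importModel arguments vary between versions, and the file names here are illustrative):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Export the mesh for cleanup in Meshmixer...
chunk.exportModel("model_raw.obj", format="obj")

# ...then, after sculpting and hole filling, bring the repaired
# geometry back in and rebuild the texture onto it.
chunk.importModel("model_fixed.obj", format="obj")
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
```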

Uncleaned model

Cleaned model

Hole filling for 3D printing

What I learned most from this project was the importance of creating a good dataset to work with. Post-processing can turn a good dataset into a great model, but it can't turn a bad dataset into a good model; there is no compensating for a faulty dataset. In my own experimenting I did not encounter these issues, but that was due to the nature of my captured objects, rocks and wood, which are solid, non-reflective, easily detailed objects.

 

 
