Explorations in Photogrammetry – Part 5

In the 5th and final part of my series on photogrammetry, I will discuss the process of 3D printing. For those unfamiliar with it, 3D printing involves taking a digital object stored in a specific file format and creating a three-dimensional, solid object. Typically, the object is “printed” using some kind of plastic, although more expensive printers can utilise metal alloys.[1] The process involves creating a StereoLithography file (or STL for short) that contains the 3D model. This file can then be sent to a 3D printer and, after several hours or even days depending on the size and complexity of the object, a real-world representation of the digital object emerges.
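Under the hood, the binary STL format is remarkably simple: an 80-byte header, a 32-bit triangle count, then 50 bytes per triangle (a normal vector, three vertices, and a 2-byte attribute). As a rough sketch (the file names and helper functions here are my own inventions, not part of any printing toolchain), a few lines of Python can write and inspect one:

```python
import struct

def write_binary_stl(path, triangles):
    """Write a minimal binary STL: 80-byte header, uint32 triangle
    count, then 50 bytes per triangle (normal, 3 vertices, attribute)."""
    with open(path, "wb") as f:
        f.write(b"\x00" * 80)                       # header (unused)
        f.write(struct.pack("<I", len(triangles)))  # triangle count
        for normal, v1, v2, v3 in triangles:
            for vec in (normal, v1, v2, v3):
                f.write(struct.pack("<3f", *vec))   # 3 little-endian floats
            f.write(struct.pack("<H", 0))           # attribute byte count

def read_triangle_count(path):
    """Read the triangle count back out of a binary STL file."""
    with open(path, "rb") as f:
        f.seek(80)                                  # skip the header
        return struct.unpack("<I", f.read(4))[0]

# One right triangle in the XY plane, facing +Z.
tri = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_binary_stl("model.stl", [tri])
print(read_triangle_count("model.stl"))  # → 1
```

There is also an older ASCII variant of STL, but the binary form is what most modelling packages export by default.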

Types of 3D Printing

There are a few different types of 3D printing:[2]

  • Selective Laser Sintering (SLS) – utilises a high-powered laser to superheat and fuse together tiny particles of glass, plastic, or ceramic in order to create a 3D object.[3] Objects created with an SLS printer typically require little post-processing, such as sanding or other alterations. Also, because SLS doesn’t require support structures for the object while it is being printed, it is typically faster than FDM and SLA.
  • Fused Deposition Modeling (FDM) – creates objects by heating thermoplastics and constructing the object layer by layer.[4] When the object is complex, the 3D printer will build scaffolding to support the structure while it is being printed (this scaffolding can then be broken off or dissolved in detergent later). It typically supports more complex geometries and cavities than SLA or SLS.
  • Stereolithography (SLA) – like FDM, SLA builds a 3D object layer by layer. The difference is that each layer is formed in a vat of liquid polymer that is then hardened by a laser (as opposed to the heated thermoplastics used in FDM).[5] Like FDM, SLA utilises support structures while printing the object, which are then cut away once the process is complete.

3D Printing Services

A number of different 3D printing services are available. One of the most common providers is Shapeways. While working on my models and searching for answers to questions I had about tools within the software packages, I found a number of references to Shapeways for 3D printing. Shapeways is a retail marketplace that allows users to sign up as “sellers”. They can then submit their 3D models, choose their material and have the models printed. The cost varies based on the material selected. Further, the size of the object that can be printed is also limited by material (some materials allow you to print larger items at an increased cost). You can then feature your items in the Shapeways marketplace for purchase.

Sculpteo is another 3D printing service and is quite similar to Shapeways. With Sculpteo, however, you can opt not to sell your items in their marketplace and simply use them as a 3D printing service. Sculpteo offers a number of different materials (which influence the cost of the printing) and provides specifications for each material, including the minimum sizes a model must meet should that material be selected.

iMaterialise is another common 3D printing service. Unlike Sculpteo and Shapeways, iMaterialise offers student discounts, which is certainly an incentive for cash-strapped students such as myself. They offer 17 different materials along with a number of finishes. Additionally, they offer a comparison tool where a user can compare the various materials on offer and see the differences between them. Their process is very straightforward, and they provide a considerable amount of information in an easy-to-consume format.

My Selection

As for my selection, I chose to use the 3D printer on campus, which is housed at the library. This choice was made strictly for the sake of convenience. I was able to provide my lecturer with the 3D object; he then uploaded the file to the 3D printer and set the appropriate sizes. The cost was covered by my department, so I had no out-of-pocket expense.

However, had printing at the library been unavailable, I most likely would have chosen iMaterialise as my 3D printing service. While the student discount was certainly a mark in their favour, the overriding reason was the presentation of information. As mentioned above, all of the information regarding the different materials was clearly presented, and I really liked the comparison tool, which allowed me to fully understand the differences between the various materials. The easy-to-find specification information for each material gave me everything I needed to ensure the process would go smoothly.


3D printing is certainly an emerging technology and is being leveraged today more than ever. It is useful in the rapid production of prototypes and can also provide unique marketing and economic opportunities for small businesses (as evidenced by the marketplaces popularised by Shapeways and Sculpteo). For the purposes of my explorations in photogrammetry, 3D printing offers a way for me to reflect on my 3D model in a real, tangible way (as opposed to only viewing it on screen). I can compare the 3D printed version to the real object and see where I might make improvements in the future.

3D printed objects can also provide value within the sphere of Cultural Heritage. By creating a 3D printed copy of an ancient piece of pottery, we can allow the general public to closely examine the object without risking damage to something that is irreplaceable. In this same regard, 3D objects can also enhance museum or cultural heritage exhibitions by creating more immersive experiences. These types of experiences should continue to be explored and leveraged whenever possible, as they raise awareness of Cultural Heritage and the Humanities as a whole.


[1] “Casting aluminum parts directly from 3D printed PLA parts”. 3ders.org. 25 Sept 2012. Web. 3 April 2015.

[2] “What Is 3D Printing”. 3dprinting.com. Web. 3 April 2015.

[3] Palermo, Elizabeth. “What is Selective Laser Sintering”. Livescience.com. 13 August 2013. Web. 3 April 2015.

[4] “FDM Technology: 3D print durable parts with real thermoplastic”. Stratasys.com. Web. 3 April 2015.

[5] “Stereolithography”. Materialise.com. Web. 3 April 2015.

Explorations in Photogrammetry – Part 4

For part 4 of my series on photogrammetry, I will discuss the creation of a 3D model of the sepulchre, the object I’ve chosen for the second part of my 3D recording assignment (for more information regarding the sepulchre and the process of photographing it, see Explorations in Photogrammetry – Part 3). As with the bowl from the National Museum of Ireland (see Part 1 and Part 2 of Explorations in Photogrammetry), the construction of the model utilised Agisoft’s Photoscan Pro and Adobe Photoshop.

Creating the 3D Model

The process of editing the pictures in Photoshop was no different than it was for the bowl from the National Museum. As I covered this in Part 2 of my blog series, I will simply refer you to that post for more information regarding the Photoshop process.

The creation of the 3D model in Photoscan was somewhat different this time.  The initial process was still the same as it was for the bowl.  I imported the 178 photos into Photoscan and used the programme’s image quality assessment tool to determine the quality of the images.  To my surprise, I found the quality of the images to be considerably higher:

Number of Images | Quality | % of Total
4 | .50 – .59 | 2%
174 | .60 and above | 98%

However, I knew from my experience while taking the photos that some photos would need to be manually filtered out. These were primarily photos where the sun was at the wrong angle and caused bright spots to appear, or where the photos were so bright as to look overexposed. This led to a further 37 images being filtered out of the result set.
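The two-stage cull described above (an automatic quality threshold plus a manual reject list) boils down to a simple filter. A minimal sketch, with invented filenames and scores standing in for Photoscan's real output:

```python
# Hypothetical quality scores from an image-assessment pass,
# keyed by filename (values here are invented for illustration).
scores = {
    "IMG_001.jpg": 0.82,
    "IMG_002.jpg": 0.55,   # below the 0.6 rule of thumb
    "IMG_003.jpg": 0.74,
    "IMG_004.jpg": 0.91,
}

# Photos flagged by eye for sun glare or overexposure.
manually_rejected = {"IMG_003.jpg"}

QUALITY_THRESHOLD = 0.6

# Keep only images that pass both the automatic and the manual cull.
kept = [name for name, q in sorted(scores.items())
        if q >= QUALITY_THRESHOLD and name not in manually_rejected]
print(kept)  # → ['IMG_001.jpg', 'IMG_004.jpg']
```

Keeping the manual rejects as a separate set means the automatic threshold can be re-tuned later without losing the by-eye judgements.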

I then began to apply a mask to each photo.  However, this process proved to be much more involved than with the bowl. When applying the mask to the photos of the bowl,  I was able to use the magic wand tool to select the background (which was a uniform colour), and with a few simple clicks—and a reasonably high success rate—filter out everything but the bowl object itself. The photos of the sepulchre proved much more difficult.

Due to the lack of uniformity in the background (which contained grass, trees, and other objects in the cemetery), I was unable to utilise the magic wand tool. Thus, I found I had to draw the mask around each view of the sepulchre in every picture individually, using the lasso tool. This was not only time consuming but oftentimes difficult to do precisely, as objects from the background would occasionally blend into the sepulchre, making it difficult to determine where the sepulchre began and other objects ended.

Once I finished the masking, I then began to run the images through the standard processing workflow in Photoscan Pro. For some reason, however, these images were taking much longer to process. While aligning the photos took about an hour and a half (which was to be expected), building the dense point cloud proved to be a challenge. I initially kicked off this process on my Sony Vaio. This is a relatively powerful laptop with 8GB of RAM, a dedicated video card, and an Intel i7 1.80GHz chip. After running for almost 40 hours and being only 40% complete with the creation of the point cloud, I decided to cancel the operation and switch to my MacBook Pro, which has 16GB of RAM and a 2.3GHz Intel i7 chip. As of the writing of this post, the process to build the dense point cloud has been running for 17 hours and is approximately 50% complete.

When I saw the amount of time it was taking to process this model, I decided to try another method. Rather than applying a mask to every photo and then building the dense point cloud based on this mask, I simply cropped out areas of the photos that didn’t belong (such as a few photos where I managed to capture my book bag or another person visiting the cemetery). Then I aligned the photos utilising Photoscan Pro’s built-in alignment tool. From there I manually cut out the extraneous information the software pulled in and created a dense point cloud. I then continued to manually remove the points I did not wish the application to include in the final model (such as some of the surrounding objects). From there I was able to build a mesh and apply textures to create a model. This method was much less time consuming, as I didn’t need to apply an individual mask to each photo. The time to process all of the photos and build the dense point cloud was about 20 hours, which is still a very intensive process but much better than the time it took to process the photos utilising the masks.


Overall, I am very happy with the model itself.  It turned out rather well given the lighting conditions and difficulties I had with background objects and processing.  As learning outcomes, I would make note of the following:

  1. Try to take the pictures on an overcast day. While this is not always within your control, pictures taken when the sun is hidden behind clouds tend to be easier to process, especially since they lack harsh shadows.
  2. Consider your environment.  One thing I did not take into consideration was the surrounding environment.  Had I thought of it, I might have taken some old sheets to drape over some of the surrounding objects.  This would have made applying the masks easier.
  3. Don’t always rely on the creation of masks.  For this object, I found the model that did not rely on masks but rather required me to manually edit the point cloud to be much easier to create. With this type of object, given the background items, I highly recommend this approach.

Final Model

As I mentioned earlier, the model utilising the mask approach is still processing. Once it finishes, I may post it here as an update. However, the model that I created without the use of masks turned out rather well. You can view it below.

Model Created Without the Use of Masks

Coming Up Next…

In my final post, I will discuss the process of 3D printing. I will evaluate available 3D printing services and explain my selection.

Explorations in Photogrammetry – Part 3

In part 3 of my photogrammetry series, I will discuss the second aspect of the photogrammetry assignment: using an outdoor object. While the mechanics of this aspect were similar to those of the first part (see Explorations in Photogrammetry – Part 1 and Part 2 for more information), it presented unique challenges. As I mentioned in earlier posts, at the National Museum of Ireland I was working in a controlled environment. The object was small and placed on a rotating table. The camera itself was on a stationary tripod. And most importantly, we utilised artificial light and a lightbox to ensure proper and consistent lighting. I had none of these luxuries for the second aspect of this assignment.

For the second part of the assignment, I was tasked with creating a 3D model of something outdoors, the challenge being the lack of a controlled environment, especially in regards to lighting conditions. I chose a sepulchre as my subject, one of the many objects in the cemetery behind St. Patrick’s College on South Campus here at Maynooth University. The cemetery itself is rather small and holds mainly the graves of priests and nuns who served St. Patrick’s College (although supposedly there are 2 unmarked graves of students who took their own lives and whose deaths have entered into folklore regarding the “Ghost Room” on campus[1]). The cemetery has a number of interesting markers, sepulchres and a crypt. The sepulchre I chose was that of Rev. Jacobi Hughes who, according to the inscription, served as Dean of St. Patrick’s College for 15 years. I found the sepulchre architecturally interesting, with a number of striking angles and faces, which is why I chose it as my subject piece.

Taking the Photos

The process of taking the photos of the object was rather different than it was at the National Museum. First, I had to be very aware of any shadows being cast: not just shadows cast by the object and any surrounding objects, but also shadows cast by myself. Too many shadows would make it difficult for the software to accurately compile a point cloud.

Ideally, photos should be taken on a cloudy day. Given that I am in Ireland, one would think this wouldn’t be a difficult task; however, the weather was not my friend, and the sun decided to shine high and bright the entire time I was attempting to take pictures. This meant I had to be very careful with how I positioned myself while taking the pictures. Due to the size of the object, I had to move around it in order to capture it from all of the requisite angles (as opposed to the bowl at the National Museum, which sat on a turntable that I could rotate). As such, I often found myself having to reposition the viewfinder and hold the camera at odd angles to ensure my shadow wouldn’t fall on the object as I attempted to capture it.

Another downside was the lack of a true preview. While working with the camera at the National Museum, I was able to keep it connected to my laptop, where I could preview every picture and, if necessary, make constant adjustments to the settings. This was not feasible with the sepulchre, as I was moving around the object and could not keep the camera connected to my laptop. I had to rely on the viewfinder on the camera itself for a preview, an option which isn’t ideal for truly examining an image upon capturing it.

I was able to apply some lessons learned from the National Museum, however. In this instance, I used a much narrower aperture (I kept the f-stop at 22) and allowed the camera to adjust the ISO to compensate. Overall, I feel these pictures were much sharper and of a higher quality than the pictures taken at the National Museum.

Coming Up Next…

In part 4, I will explore the process of creating the 3D model of the sepulchre. Specifically, I will be discussing the differences between the construction of this model and that of the bowl model from part 2. I will also assess the quality of the model and what areas of improvement could have been made to create a better object.


[1] Sam. “The Ghost Room in Maynooth”. Come here to Me. 20 July 2012. Web. 3 April 2015.

Explorations in Photogrammetry – Part 2

In part 2 of my series on photogrammetry, I will discuss the process of creating the three-dimensional model of the bowl from the National Museum (for more information on the process of taking the images of the bowl—which I mentioned in my last post—see part 1 of Explorations in Photogrammetry).  The process itself involves the use of two pieces of software:  Adobe Photoshop and Agisoft Photoscan Pro.

Photoshop to the Rescue

The first step was to unify the images. This was done using Photoshop and a few of its “auto” features. I began by setting up a batch job (Adobe’s documentation explains how to create batch jobs in Photoshop). The job applied the following auto corrections in order:

  1. Auto contrast – to adjust the brightness settings of each photograph and create a uniform brightness/contrast
  2. Auto colour – to correct any oddities in colour
  3. Auto tone – to smooth out any residual white/black in the image and apply a uniform look

Each file was then re-saved in a separate location. It’s best practice to always keep a copy of the original image, unedited.
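For anyone without Photoshop, the core idea behind an auto-contrast pass can be approximated in a few lines. This is a simplified sketch of a linear histogram stretch on grayscale values (Photoshop's actual Auto Contrast also clips a small percentage of extreme pixels before stretching):

```python
def auto_contrast(pixels):
    """Linearly stretch 8-bit grayscale values so the darkest pixel
    maps to 0 and the brightest to 255. A simplified stand-in for
    Photoshop's Auto Contrast, which additionally clips outliers."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:                      # flat image: nothing to stretch
        return list(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

# A low-contrast strip of gray values confined to the 80–180 range.
flat = [80, 100, 130, 180]
print(auto_contrast(flat))  # → [0, 51, 128, 255]
```

The same principle extends to colour images by stretching each channel (roughly what Auto Color does) or the combined luminance (closer to Auto Contrast).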

Building the Model in Photoscan Pro

The next step was to import the photos into Photoscan Pro.  Photoscan Pro is software that allows a number of images to be “stitched” together using a point cloud.  This point cloud is then used to construct a 3D model of the object.

The first step involved assessing the quality of the images. Photoscan Pro has a built-in quality assessment tool. After running this tool against all of the images imported into the programme, each photo is given a score (from 0 to 1) indicating how suitable the image is for alignment and for producing a 3D model. While learning how to use the software in class, we were taught that a general rule of thumb is to only use images whose quality is .6 or higher.

After importing the images and assessing their quality, I received the following results:

Number of Images | Quality | % of Total
7 | .40 – .49 | 5%
45 | .50 – .59 | 29%
101 | .60 and above | 66%

I was a little dismayed that such a large number of my images (over 30%) were under the .60 threshold.  I decided to run two different models in order to compare the results.

I began by excluding all images under .60 quality. I applied a mask to each image so as to instruct Photoscan to ignore everything other than the object (this involved basically “cropping” out the background and having Photoscan ignore everything but the bowl itself, a very time intensive process but well worth the results). I then used Photoscan to align the photos and build a point cloud, a dense point cloud, and then a mesh. This created a somewhat reasonable 3D model of the bowl; however, there were a number of errors in its rendering and the model itself looked incomplete.

In order to correct this issue, I reran the model including the 45 images that were marked with an image quality of .50 – .59. When these images were included, along with the original 101 images of .60 quality or better, I received a much stronger model. This model lacked the rendering errors present in the first model and looked more complete. Upon close inspection, however, one can see that the very bottom of the bowl looks as though it was “cut out”. This is not a feature of the bowl itself, but rather the result of my failure to capture images at a deep enough angle to cover the entire inside of the bowl.

As I began to closely inspect the second version of the bowl, I also noticed there was quite a bit of “noise” in the model (areas of the bowl that looked pixelated or out of place). I attempted to reapply the masks by cropping the images further (thus excluding the very edges of the object in each photo). However, this did not turn out well: my bowl ended up rather flat-looking, as though someone had collapsed it upon itself. I then took the original point cloud from the second model and began manually cleaning it up by looking for areas of white (which were the background and thus not part of the bowl). After manually removing these points, I rebuilt the mesh and texture and developed my final model.


Overall, the bowl turned out better than I initially expected, especially given this was my first attempt at such an endeavour.  Given the opportunity to repeat the process under the same settings, I would make the following corrections:

  1. Adjust the aperture and ISO. For most of the images, I was using an f-stop of 8 and an ISO of 640. I would lower the ISO (thus reducing visual noise in the image) and raise the f-stop on the camera. The exact values are difficult to say, but I would most likely lower the ISO to somewhere between 100 and 200 and raise the f-stop to at least 22. This would give me a crisper image, especially with regard to depth of field, with the added benefit of removing a small amount of the noise that exists in the images.
  2. Adjust the shutter speed. I left the shutter speed on the automatic setting for this exercise, mainly because I didn’t think I would need to adjust it. Given another opportunity, I might manually adjust this setting to see how it affects the quality of light in the image (many of the images seemed a little dark, despite all of the artificial lighting present).
  3. Adjust the angle.  I would also take another round of photographs at a much steeper angle so as to completely capture the interior of the bowl.
  4. Take more time. As I was the first student to begin taking pictures, I felt self-imposed pressure to finish quickly, so as to ensure my fellow classmates had ample time with the camera as well. If I repeated this process, I would be less conscious of time constraints and take more time between photos to evaluate the quality of each image and adjust my settings as needed.
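The exposure arithmetic behind the first two corrections can be sketched in a few lines: light admitted scales with the inverse square of the f-number, and sensitivity scales linearly with ISO. A quick back-of-the-envelope check (the helper functions are my own):

```python
import math

def stops_from_fstop(f_old, f_new):
    """Stops of light lost when stopping down from f_old to f_new;
    light admitted scales with the inverse square of the f-number."""
    return 2 * math.log2(f_new / f_old)

def stops_from_iso(iso_old, iso_new):
    """Stops of exposure change from an ISO change (linear in ISO)."""
    return math.log2(iso_new / iso_old)

# The changes proposed above: f/8 -> f/22 and ISO 640 -> 100.
aperture_loss = stops_from_fstop(8, 22)   # stops of light lost
iso_loss = stops_from_iso(640, 100)       # negative: sensitivity lost
print(round(aperture_loss, 2), round(iso_loss, 2))  # → 2.92 -2.68
```

Together those two changes cost roughly 5.6 stops of exposure, which is exactly why correction 2 (a slower shutter speed) or more light would be needed to compensate.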

Final Models

To show the difference between the two models, I’ve included both below, as well as the version of the bowl that was sent to the 3D printer (more on 3D printing in a future blog post). The first is the “bad” model, which only included the images with a quality of .60 or higher. The second model includes images with a quality of .50 or higher. Also note the absence of part of the bottom of the bowl, where I failed to take images at a deep enough angle to cover the entire bottom. The model used by the 3D printer does not have the hole in the bottom, as I used 3ds Max to fill in the mesh.

“Bad” Bowl
“Good” Bowl
“Printed” Bowl


Special thanks to the National Museum of Ireland and their Archaeology department for allowing us to photograph some of their objects. The pictures of the bowl featured in this post and my previous post appear courtesy of the National Museum of Ireland – Archaeology. All rights reserved.

Coming Up Next…

In part 3 of my series on photogrammetry, I will discuss my choice for the second part of my assignment. I will detail why I chose the particular object in question and what challenges were presented to me as part of this aspect of the assignment. I will also detail the steps I took to overcome those challenges.