Saint Manchán’s Church Project: Part 2, Capturing & Processing

When I first arrived at the site in Lemanaghan, I noticed that the ground inside the church walls was quite uneven and unkempt, with long grass growing in patches and many rocks and stones scattered about. The interior is clearly open to the elements, judging by the mould and moss growing on the gravestone slabs and the vertical grave markers. In total, there are three gravestone slabs and three gravestones, two of which bear cross emblems on top. It was an overcast day, which would minimize the possibility of heavy or stark shading across the stone surface of the church walls and the ground. The camera I used was a Sony Handycam DCR-SR55E, with the f-stop at f/4 and the shutter at 1/100. I began at the west side of the church, holding the camera at as close to a 90-degree angle from the ground as possible at a height of about 2 and a half ft, and went around the four walls in that position. I repeated this process at about 10 inches from the ground, then once again at about 5 and a half ft, and finally at around 7 ft, holding the camera above my head so as to get the top of each wall.

After processing these images at home, I realised that I had made one fatal error in my capturing process. Although the model came out relatively well, many of the 164 images I had taken could not be aligned, as I had been remiss in keeping the 60% overlap rule in mind. Whenever I turned a corner (say, from the north wall to the east wall), I failed to turn gradually by degrees. Instead I turned at a sharp 90-degree angle, which meant the software could not stitch the last photo taken along one wall to the first photo taken along the next, since there was no gradual overlap or incremental change between the two views. The software, confused, could not align a whole series of shots taken along the south wall because of the lack of overlap between the last image taken at the east wall and the first along the south wall.

 

Last photo taken at east wall
First photo taken at south wall (no overlap)

As many photos of the north wall failed to align because of this very issue, much of that wall was left out of the final 3D model. Areas behind the easternmost gravestone slab were also left unprocessed, as was some of the ground behind it. Many of the ground areas failed to render as well, leaving large parts of the church floor and sections of the north wall as gaping holes in my model.

Given that the capturing process had affected the quality of my model (despite all of the images themselves being above the recommended PhotoScan quality), I decided to head out to the site once more and capture the images again, this time turning by small degrees around the corners. I considered just retaking the ‘corner images’ on their own and adding them to my last sequence of photos as a way to fill in the gaps that had caused the software’s confusion during the alignment phase. However, the difference in lighting between two different days of shooting might have been too great, so I decided to take another whole sequence of images.

The north and east walls were brightly lit by sunshine, while the south wall was in the shade. I knew that this lighting was not optimal for photogrammetry, but I would have to make do, as I might not get another chance to visit the church before the project submission deadline. Ideally, on a sunny day with clear skies, it would be better to shoot in the evening; unfortunately, I was only able to make it to the church in the early afternoon.

I took my images of the church from five different height positions this time, not four, as I felt there was much too wide a gap between the ‘medium’ height position and the highest, so I went around the church at an in-between position of around 5 ft.

Lowest height position
Medium height position
Highest position

When I uploaded my photos to PhotoScan this time, I was surprised to see that I had 464 images in total, compared to the 164 making up my last batch. This was probably due to the extra photos taken when turning corners and the extra height position, plus the fact that I covered more of the church area (last time I had decided not to cover some of the west side of the church, as a grave in the corner made manoeuvring around that area difficult). Since I wanted all of the information from each photograph processed, I did not need to make masks for the images. I noticed that 8 images fell below the image quality standard recommended by PhotoScan, so I deactivated these before the alignment phase. Their low quality was mainly down to blurriness; two images were also taken from very low down and were obscured by small rocks on the ground, and were therefore too dark to register any information.

Ground level view blocked by vegetation

For the alignment phase of my processing I chose the option ‘high’ with ‘generic’ matching of images. The process took just over two and a half hours, and when I saw the result I was extremely impressed. It seems that for larger areas, the more photos the better: the 460-plus photos appear to make it easier for the software to match all points, and 164 images, in hindsight, simply wasn’t enough. Because the ground area is often obscured from particular angles by the tomb slabs, long grass and scattered debris and rocks, taking more images from more angles meant that an area obscured in one photo could still be registered in another. I then set the dense point cloud to build in ‘ultra high’ quality mode for best results, but on realising this setting would take up to 25 hours to complete, I aborted and chose the ‘high’ quality setting instead. This took three and a half hours to complete.

Model After Alignment

Issues with Model

The church area was quite difficult to capture. The uneven ground, tombstones and fallen masonry cluttering the area make it difficult to capture every nook and cranny, and the long grass does not appear to process correctly. In the dense point cloud model, much of what should have been the rendered grass was positioned “underneath” the actual ground level, where the bases of the tombstones begin. If I had more time to undertake this project, I would test out different methods (perhaps cutting out the images taken from the lowest height position, as many of these simply captured the sides of tombstone bases or long leaves of grass). There are many ‘holes’ in the ground of the model as a result. I would also use MeshLab or other similar tools to edit my model. I am pleased that the positions of the tombstone slabs and headstones came out correctly, and that there is a good sense of the ‘space’ of the area.

Dense point cloud
Window recesses

Vulnerability of the Church and Conservation:

According to the Conservation Plan for the church site and wider monastic complex, the sites and their fabric will continue to deteriorate unless corrected (47).

Phases of development of the church [Margaret Quinlan Architects]

Excessive ivy growth (since cleared by conservation specialists) was at one stage the only thing keeping some of the masonry in place before part of the south wall was restored. Other preservation and conservation issues include:

• Concern for the future security of some of the higher-quality carved stone associated with the site.

• The two early Christian slabs attached to the wall of the church are at risk of theft and should be made more secure. As these are not in their original location, it would be desirable to remove them and store them with the rest of the collection of slabs from the site.

• The potential of increased visitors to the site poses the risk of additional wear and tear to the structures, and perhaps to the togher (historical tree-lined pathway to the monastic complex).

For reasons of preservation, the Plan also points out that:

local pride in the history and archaeology of Lemanaghan is strong. However, an increased awareness of the national significance of the site would be of benefit in terms of providing further protection. This is particularly true of educational projects at school levels which will ensure understanding and protection by future generations.

After rendering my three-dimensional model, I can see many benefits of photogrammetry for archaeological purposes and for the dissemination of information about such sites. If a visitor centre were ever built for the site, a three-dimensional model could be used as an edutainment tool, a way for visitors to virtually explore and experience the site from all dimensions and vantage points. Photographs may not give us a real sense of a significant historical site: in the case of Manchán’s church especially, the lack of perspective means that the width and length of the church cannot be captured satisfactorily, nor the distance between the gravestone slabs and the vertical gravestones. Photographs cannot give us a true sense of the amount of space the stone slabs take up either; only two-dimensional overhead survey sketches or overhead images could potentially afford us these views. Three-dimensional models can also record accurate masonry detail, including the depth and relative size of features such as the piscina in the wall and the window recesses that splay inwards. The church site is itself quite ‘squashed’, for want of a better word, and it is difficult to take a picture from one position without fallen masonry or tomb structures blocking your view; again, a three-dimensional model overcomes this issue. The model could also be thought of as a digital preservation technique. The Saint Manchán’s Church site, as the Conservation Plan report states, is in dire need of repair, conservation and preservation. Although conservation work has been carried out, if this does not continue and the masonry is not continually strengthened, much of the facade of the church, along with the ancient stone slabs within it, will continue to deteriorate.
A three-dimensional model may preserve the state of the site at the time the photographs were taken – potentially serving in some form as a record – with the 3D model bringing that space to life.

 

Church door final model
Church door – mesh
Church west gable final model
Church west gable – mesh
Sketchfab screenshot
Sketchfab Screenshot – South wall
Piscina – final model

Manchán’s Church
by alexghughes
on Sketchfab

O’Brien, Caimin. Stories from a Sacred Landscape. Offaly County Council, 2006.

Quinlan, Margaret, and Rachel Moss. Lemanaghan County Offaly Conservation Plan. The Heritage Council, 2007.

From Pseudocode to Prototype

In my last practicum-related blog post, I adumbrated some of the preparatory procedures involved when setting out to develop a software application. You can read this post here. In my case, the application is a text differentiation tool (which highlights the differences between two text strings) for the new, revised version of the Versioning Machine, due to be rolled out sometime later this summer.

Now that I have drafted and revised my pseudocode to a point that I am relatively happy with, it is time to (hopefully) realise the pseudocode through a working JavaScript prototype. Attempting to build the application directly within the actual Versioning Machine source code would be extremely difficult. Therefore, careful not to put the cart before the horse, I decided the best starting point would be to write rudimentary HTML through which my JS can be more easily tested.

The first step will be to create two HTML strings whose content I can change with my JavaScript code:

<html>
<body>
<p id="string1">The quick brown fox jumps over the lazy dog</p>

<p id="string2">The quick brown fox jumped over the lazy dog</p>
</body>
</html>

The two strings above, enclosed in <p> tags and each with its own unique id attribute, serve as proxies for the VM’s HTML sample files, with which I will hopefully be working at a later date, provided my prototype is in working order.

Once the HTML has been set up, it is time to ‘get’ the HTML elements we need to change with our JavaScript; this is the first step in my pseudocode.
In this instance, because the two strings each have an id attribute (namely, "string1" and "string2"), the best way to access these elements is to use JavaScript’s getElementById() method. Rather than simply accessing the two strings, however, I also need to create a variable in which to store each element:

var String1 = document.getElementById("string1");

var String2 = document.getElementById("string2");

One thing I need to keep in mind, however, is that the VM code may not lend itself to being accessed through the getElementById() method, and so I may need to figure out another way of grabbing the appropriate elements if I were to begin implementing my JS code within the VM. All I am doing here is creating a working HTML and JS environment in order to experiment with different JS methods and to see if they may be of use for the application I intend to build.

Once I have got the correct HTML elements and stored them in variables, I will need to figure out how to “split” each string by word, because my text comparison tool needs to differentiate between two strings word by word. As of yet, the text held by String1 and String2 is stored as a plain series of characters, not word by word: the computer sees each string as one whole, made up of characters (like letters) and whitespace (which is also a character). In order to make the computer read the strings word for word, closer to how a human parses a sentence, we need to break, or split, each string up.

As I mentioned, whitespace is in and of itself a character, just like a letter. Therefore, if we split each string at every whitespace character, we capture each word that sits between two areas of whitespace. Don’t forget, though, that these words are still in a sense strings (series of characters); we have just found a way for the computer to recognise each of these groups of characters as a distinct entity. Splitting up a string like this is very easy: all you have to do is use the string split() method with a space between the parentheses, split(" "), which asks the computer to break the string at each space. If, instead, we used split("b"), the string would be broken up each time the computer comes across the letter ‘b’.

If I then split the text of String1 by whitespace (note that getElementById() returns an element, not a string, so the split() method is called on its textContent):

var String1 = document.getElementById("string1");
var myArray1 = String1.textContent.split(" ");

The result would be that the string is broken up like this:

The, quick, brown, fox, jumps, over, the, lazy, dog

If I were to do String1.textContent.split("b"), it would then come out like this:

The quick ,rown fox jumps over the lazy dog

Employing the split() method, then, on the text of both String1 and String2 means that we have two arrays of substrings, so to speak, made up of blocks of characters that are essentially words. As with the example of split(" ") above, the substring array for String1 is now passed into the variable myArray1. We would follow the exact same procedure for String2, and we could call that array myArray2.

Arrays in JS are essentially a special type of object that can store multiple values in a single variable. Each of these values has an index number, beginning at 0 and running to the end of the array. The indexes for myArray1, then, are:

Index [0] = The
Index [1] = quick
Index [2] = brown
Index [3] = fox
Index [4] = jumps
Index [5] = over
Index [6] = the
Index [7] = lazy
Index [8] = dog

For our other array, myArray2, which now stores the split text of String2, the value at each index is identical except for Index[4], which is ‘jumped’ instead of ‘jumps’.
The next phase, then, will require comparing these two arrays, myArray1 and myArray2, getting the computer to determine which index is not the same (Index[4]) and then HIGHLIGHTING this difference. The highlighting can be done very easily with the simple CSS background-color property: whenever the computer comes across a difference, it will need to somehow apply this background-color highlight to the index in question.
At the moment, I need to do more research into how to compare two arrays and, once a difference is picked up on, how to take that ‘difference’ out of the array so that it can be highlighted. I intend to keep you all posted on my progress.
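As a first experiment outside the VM code, the comparison step might be sketched as below. This is only a rough, word-by-word sketch: the function name highlightDifferences, the inline background-color style and the assumption that both strings split into the same number of words are all my own placeholders, not anything taken from the Versioning Machine.

```javascript
// Minimal sketch: compare two strings word by word and wrap each
// differing word in a <span> carrying a background-color highlight.
// Assumes both strings contain the same number of words.
function highlightDifferences(text1, text2) {
  var words1 = text1.split(" ");
  var words2 = text2.split(" ");
  var result = [];
  for (var i = 0; i < words2.length; i++) {
    if (words1[i] === words2[i]) {
      result.push(words2[i]); // identical word: leave as-is
    } else {
      // differing word: wrap it in a highlighted span
      result.push('<span style="background-color: yellow">' + words2[i] + "</span>");
    }
  }
  return result.join(" ");
}

var s1 = "The quick brown fox jumps over the lazy dog";
var s2 = "The quick brown fox jumped over the lazy dog";
console.log(highlightDifferences(s1, s2));
// → The quick brown fox <span style="background-color: yellow">jumped</span> over the lazy dog
```

Running this against the two sample strings wraps only ‘jumped’ in a highlighted span; a real implementation would also need to handle insertions and deletions, where the two arrays differ in length.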

Photogrammetry Project: Saint Manchán’s Church, Lemanaghan: Part 1, History & Context

The second part of the photogrammetry project involved the capturing and processing of something larger, conceivably an outdoor object, like a statue, or an interior area, like a room. Since I had already captured an object in my first task at the museum with the Bronze Age ‘food bowls’, I thought it might be interesting to attempt capturing an area this time round. From capturing the Bronze Age bowl at the National Museum in Dublin, I knew it was imperative to incorporate all sides and all aspects of an object so that the processing and rendering software, namely Agisoft’s PhotoScan, could triangulate the images, or stitch them together for alignment, in order to create a three-dimensional figure. I was also aware that it was important to have at least a 60% overlap between each image. I extrapolated from this that the same must be the case for capturing an area: photographs of all sides and 60% overlap for alignment.

St Manchán’s Church, South Wall and East Gable

As my object of study, I chose Saint Manchán’s Church, Lemanaghan. This site can be found on the R436 Ballycumber to Ferbane road, County Offaly. <http://www.offaly.ie/eng/Services/Heritage/Archaeology/Monastic_Sites/>. According to the Offaly County Council information page about local monastic sites, this monastery had recently undergone a programme of conservation from 2000 to 2010. The Lemanaghan conservation plan can be found online at the following link: <http://www.offaly.ie/eng/Services/Heritage/Documents/Lemanaghan_Conservation_Plan.pdf>.

Site Map of Church and Surroundings

Information about the Church:

As it now stands, the church is rectangular, measuring 19.4m x 7.5m (63’6” x 24’6”) (Lemanaghan Conservation Plan, 24). It is roofless, and by 2001 had vigorous ivy growth on all walls. The monastery at Lemanaghan is said to have been founded in AD 645 (41), with the earliest section of the church constructed circa AD 900-1100. An extension to the west-facing wall of the church was added c. 1200, and another addition to the eastern wall was constructed sometime in the 17th century. In summation, the Conservation Plan has established the site at Lemanaghan as:

• A sacred place of great antiquity
• A place containing buildings of architectural significance
• A place rich in documentary history and archaeological potential
• A place where there is a long tradition of devotional practice
• A place ‘apart’, possessing a strong sense of being untouched by the modern world
(9)

St Manchán’s Church, Facing West

History of Manchan and Lemanaghan:

Manchán, an early Christian monk, founded the monastery at Tuaim-nEirc (now Lemanaghan), an island of dry land surrounded on all sides by the red bogs of the region (the Bog of Allen) (O’Brien, 180). Because of his deep knowledge of the scriptures, Manchán was often referred to as the ‘Jerome of Ireland’. According to local tradition, he was a tall, lame old man. The natural spring well situated beside the monastery would have provided the monks, and the community living in the area before their arrival, with a source of clean water, and it is possible that this well was once a focus of pagan rituals (181). To succeed in establishing his monastery amongst the people of this region, Manchán had to ‘convert’ their most important spiritual places; in christianising the well at Lemanaghan, he would have enabled local people to accept the new religion without leaving behind their long-established symbols of worship. Manchán perished in the epidemic known as the Yellow Plague of 664, and those who succeeded him took the title of ‘Abbot of Liath Manchain’. During the twelfth century, the monastery is said to have experienced a Golden Age, with many abbots governing the site (many of whom were probably selected from the sister-monastery at Clonmacnoise) (183). This period, it is said, was when the construction of the church proper first occurred, complete with a ‘beautiful Romanesque doorway’ (183).

Saint Manchán with His Cow

Works Cited:

O’Brien, Caimin. Stories from a Sacred Landscape. Offaly County Council. 2006.

Photogrammetry Project – Part 2

The first step in creating my 3D model was to upload my photos to PhotoScan; I had 97 images in total. After uploading, the next task was to check whether any images were of too poor a quality for use in the alignment phase. All of my images were above the recommended quality level, so it was not necessary to discard any just yet.

The next step was to draw masks around my images. Because the background behind the vessel was uniformly white, the software did not have much trouble identifying the vessel’s edges. This meant that the “magic wand” tool worked quite well, with only a few slightly dark areas outside the vessel causing any confusion for the software.

Using the ‘magic wand’ masking tool to mark off the vessel’s boundary lines

As a workaround to this issue, I could use another masking tool, the “intelligent scissors”, to cut away any superfluous areas within the mask that I didn’t need. Using the “magic wand” first and then the “intelligent scissors” to tidy up the mask borders worked really well and gave me clean, neat results.

Intelligent scissors tool was used to cut just above the base of the bowl, which was too dark and unnecessary to use in alignment, since the upside-down images of the bowl would capture the base anyway

The first time I attempted the masking process I mistakenly used the marker tool, which meant that I had unknowingly incorporated the background information into my images for the later phases of processing the model.

Image of vessel before any masking had been carried out

The next step was to align the photos. I had to redo this process several times, in a kind of trial-and-error way. The first time, I noticed there were many white, exterior points around my aligned model. To fix this problem, I chose the “constrain object by mask” option in the pre-alignment calibration phase. I also noticed that some of the bowl’s shape, particularly at the rim, was becoming stretched or distorted, with part of the rim trailing off like the arm of a spiral galaxy. This, of course, was in need of emendation, so I decided, just out of curiosity, to disable all images of the vessel taken from the highest point: those used to capture the inside of the vessel and those that captured most of the base from high up.

Image of bowl from highest point

I felt that the difference between the rest of the images and these ones taken from high up might have in some way confused the software, though I’m still not too sure why. I also realised that the base was more or less captured in the images taken from the “second-highest” position, so the ones I had disabled probably weren’t giving the software any new information about the vessel anyway.

Image of vessel’s base

 

In disabling these images, however, I knew that I would be sacrificing the inside of the vessel for the purposes of getting a proper, undistorted model of its base, sides and rim. At least this way, I would have a model that looked like the vessel I captured in the museum, albeit one without an interior.

 

The next stage was to build the dense point cloud, which I executed on “high” quality setting, and then the mesh.

 

Creating dense point cloud

In hindsight, I feel that 97 images were not quite enough for the processing stage. If I were to capture the images again at the museum I would probably take between 125 and 150, as it is good to have more images than you need: you can discard images that are not aligning properly and add ones that may help in processing. Although I captured the vessel from four different height positions, I would use five next time, as I feel the ‘jump’ between the second-highest point and the highest confused the system somewhat, or at least made it difficult to align all the points.

3D image of bowl with surface pattern

In terms of processing, it seems I have yet to find the magic formula that makes the model come out clear and well aligned. I am still getting ‘debris’, bits of unaligned or misaligned bowl scattered about the rendered model. I then realised I could get rid of these bits of debris with a cutting tool, so I went back to the dense cloud image and redid the mesh procedure.

Pot ‘debris’

 

Issue when aligning with images captured from high position

 

Food Bowl
by alexghughes
on Sketchfab

Photogrammetry Project – Part 1

As part of the first stage of the practical assignment in 3D recording, we were given the task of capturing ancient Irish pottery bowls, dated as far back as 2500 BC <http://www.museum.ie/en/collection/bronze-age.aspx>. These Bronze Age vessels were found in burial sites, or cists, and it is believed that they contained food as an offering to the dead for their journey to the afterlife.

In photogrammetry, it is necessary to capture all sides of an object, through 360 degrees. Particular challenges in capturing these ceramic vessels were getting images of the interior of the object, as well as the base, and making sure the lighting was right to capture as accurately as possible the decorations and patterned incisions around the exterior of the objects.

As aforementioned, it is important to capture the object on all sides, through 360 degrees, so that the software can extrapolate from the two-dimensional images a model in three dimensions. Since it is best to keep camera movement to a minimum, it is a good idea when working with smaller objects to rotate the object rather than move around it with the camera; this is relatively easy if the object is placed on a turntable. Keeping the camera in the same position for a series of captures also helps to avoid blurred images, which may not be of good enough quality for the processing stage in PhotoScan.

Lighting is yet another issue to think about in photogrammetry. Harsh lighting and the casting of shadows are best avoided, so it is worth your while to light your object equally from all sides <https://dinosaurpalaeo.wordpress.com/2013/12/20/photogrammetry-tutorial-3-turntables/>. A well-lit object means surface detail is not distorted by shadow, so the intricately incised patterns and shapes on the surface of the ceramic vessels, in this instance, would be clear and undistorted once the three-dimensional models were processed. There is also a very handy method of diffusing the light hitting your object: the lightbox. The lightbox is essentially a thin white box made of canvas material, with an opening on one side through which you can place your object inside to record. The object can be placed in the centre of the lightbox on the turntable. It is best to place two lights at an equal distance from the box on either side, with a third light source directly above, pointing down on the box. This arrangement of lights ensures that no area of the object is more lit up than any other, and so there are no stark shadows distorting the surface detail.

Museum

One of the ‘food bowls’ with girding patterned design

We arrived at the museum early in the morning, as five different objects were to be recorded, which we knew could take at least four hours between all of us. We chose a table on which to place the lightbox and set up our equipment. The camera was hooked up to the laptop so we could take the pictures remotely, as pressing down on the shutter-release button for every shot could potentially cause the camera to shake. We used a Canon EOS 60D digital camera to take our pictures.

Lightbox with light sources on either side

After choosing the vessel I would like to capture, and once it was placed in position on the turntable, I inspected the image through the “live feed” on the laptop screen. I noticed that the image was a little dark, so I decided to turn the ISO up a little, to 120. I needed to be careful not to turn the ISO up too much, since this could cause the image to go quite grainy and therefore decrease its quality or sharpness. I also noticed that the rim of the vessel furthest from the camera was a little out of focus, so I increased the depth of field by setting the aperture to f/22.

Camera with laptop hooked up for taking photos remotely

During the photographing stage, I was careful to rotate the turntable only a little between each take, maybe by only about 10 degrees each time, since there needs to be at least a 60 percent overlap between each image in order for PhotoScan to triangulate the images correctly. I made sure to capture my vessel from three different height positions, and had to make sure that, at my highest angle, the camera was able to capture the interior of the vessel. I then needed the bowl to be turned upside-down so as to get the base, and repeated the process, capturing the vessel from three different height positions, 360 degrees around each time. The photographing process took about 40 minutes in total.

Sketch image of bipartite bowl

 

All bowls featured in this blog courtesy of the National Museum of Ireland.