Working with 3dsMax was a lot more fun than I had initially anticipated. Whenever I got stuck, I turned to YouTube or the Autodesk Knowledge Network <http://knowledge.autodesk.com/> website for tips and information.
The way I mostly went about constructing my model was with a trial-and-error approach. If I needed a rounded, rectangular object, for instance, I thought of a chamfer box and started from there. If I needed a semi-circle for an archway or the top of a window, I chose a tube or a cylinder, knowing I could slice them at 180 degrees. I would often run through ten or twenty different modifiers on an object to see what they did, and if I didn’t like one or it didn’t give me the desired effect, I would simply delete it and the object would return to its previous state.
One of the more difficult things about 3dsMax for me was organising my objects into layers, or attaching objects so that two or three objects became one and could be manipulated, moved or rotated easily as a single unit. I never got around to stratifying my objects into taxonomic units because I was so tight for time, and kept thinking I’d be better off just getting on with shaping the windows or applying test materials and displacement maps. Because I didn’t organise my objects, I often had to clumsily select each of the criss-crossed wooden beams making up the roof individually, when I could have just placed them all inside the same layer, selected that layer, and manipulated them all as one.
I also never got around to practicing snapping or aligning objects, which meant getting the different parts of the walls to fuse seamlessly was very time consuming and quite difficult.
Overall, my organisation policy was awful. If I were to undertake a project in 3dsMax again, layering, snapping and aligning would be the first things I would brush up on – and I would use these features throughout the constructing phase. I’m never again going to spend fifteen minutes trying to match the width of a box to the width of a semicircle so that they fit on top of one another without any bits jutting out.
Patience is something you really need for 3dsMax – but I find once you get into that graphic design trance, where you forget who or where you are and are thinking in shapes rather than words, it’s not too stressful a job. On one occasion I went into the computer room at 9 in the morning, and by one o’clock I realised I had only one window-shaped hole cut into my wall to show for my efforts.
Yet, for all the time it cannibalises, 3dsMax renders some amazing images for you – to the point that when you first see them you think “I didn’t really make that, surely!?”. And, in many respects, no, you didn’t. Using mental ray for textures simply involves assigning a bitmap image as a displacement map or bump map, choosing an image for the diffuse colour if you like, adding more or less specular lighting or glossiness, and setting up lighting such as photometric target lights or mr Sky Portal lights; the rendering capabilities of mental ray will do the rest.
In order to begin rendering my images, I needed lighting for my interior.
For exterior shots, I decided to use the mental ray Daylight system. I chose my coordinates and positioned the sunlight.
Since virtually no light from the daylight system makes it inside, I decided to use the mr Sky Portal for daylight shots in the interior. I also made two candles out of cylinders (the wicks are tubes and the flame is a self-illuminated squeezed sphere). The candlelight is created by placing a photometric free light, with a warm glow, over the flame shapes. Below are my rendered results, playing with light and camera positions. (I forgot to mention I added an altar, which was more than likely made of wood at this time; no altars survive from this period, so they must have been made from a perishable material. The altar would also have been placed up against the wall, as the priest had his back turned to the congregation.)
I would like to have added the statuary, but this would have been impossible given the time limit. I would also like to add a cloth or drape of some sort over the altar, with a cross motif or something like that, and to play around with materials even more in order to get closer to a realistic effect.
For the textures of the walls I used images of the walls I had taken at Lemanaghan Church. I applied these bitmaps as displacement maps, having chosen mental ray’s Arch and Design material for adding textures. I changed the diffuse colour to white, to create that lime or whitewashed effect.
I had to add a UVW Map modifier to each wall and choose BOX shape so that the materials would not stretch. I also had to uncheck REAL WORLD MAP SETTINGS on every primitive object BEFORE I turned them into editable objects so that complications with textures would not arise later on. I then had to uncheck REAL WORLD MAP SETTINGS on every UVW Map modifier also for the material to wrap itself properly around the object.
For my doorway I added an Arch and Design Stone texture.
For the thatched roof I was torn between using a bitmap of a thatched roof and using the HAIR AND FUR modifier. In the end I opted for the latter. I had to turn the glossiness and specular settings all the way down so that there wouldn’t be any sheen off the thatch.
For the wooden door I used a bitmap from CGTextures and applied it as a displacement map. I chose the door image as the diffuse colour.
The earthen floor is made using a bitmap image of soil. I applied it as a displacement map and diffuse map on top of a plane.
The first step in building the model was to make a plane on which I could paste the floor plan of the church.
I scaled the plane to a length of 19.4 metres by 7.5 metres, the size of Lemanaghan Church as it stands today (Quinlan, Moss 24). I then traced around the walls with a line (notice the white lines around the top wall). I had to make an ‘imaginary’ gable wall at the right of the top and bottom walls, since the extension to the right of these walls was added in the fifteenth century, and my model is a representation of the structure as it stood in the twelfth century.
The next step was to extrude my lines of the floor plan with the extrude modifier. I gave my newly formed walls a height of 4.5 metres, which is the height they stand today.
For the window recess I took the following steps:
I turned my primitive object into an editable poly.
I cut a rectangle around where the window recess would be positioned.
I grabbed this polygon and beveled it inwards.
(Not as easy as it sounds, lots of trial-and-error before I figured out how to achieve this).
Next was the Romanesque doorway.
Since there is not much left of the Romanesque doorway extant today at Lemanaghan, it was up to me, myself and I to figure out what would be the easiest, but still aesthetically pleasing, design to choose. The columns were no big deal – just a cylinder, with a narrow box wedged between each one to separate them. The capitals over the columns were a bit of a nightmare. I spent a long time trying to get the conched, shell-like design – and I think I finally did achieve it.
I started with a chamfer box.
I added a taper modifier so that it tapered in at the bottom.
I added a wave modifier to give it that nice wavy effect you see on the top of the left capital in the above image.
I tessellated each face and grabbed each polygon (I’ve turned the object into an editable poly at this juncture) and extruded each one a little.
I then added the TURBOSMOOTH modifier, which smoothed the extruded polygons and gave a nice natural conch shape, which is what I was going for.
I added a squeeze or a push modifier to just squash the capital a little and give it that bulging shape. With each modifier I just played around with the options and values until I got the desired effect.
For the arch, or archivolts as the experts call them, I had to add “twelfth century Romanesque Irish doorways” to my history of Google searches. I kept getting back images along the lines of this:
so I modelled the Lemanaghan Church archway on the pattern motifs that could be seen on Romanesque doorways of the twelfth century.
For the doorway archivolts I made a tube and then:
Sliced it in half.
Turned it into an editable poly.
Tessellated it (I think).
Grabbed each polygon one by one and beveled each one at a time.
For the window:
I made a box and a tube.
Turned them both into editable polygons.
Tessellated the box.
Went to Polygon Modelling > Generate Topology
Changed inner lines of box from a square to a lattice shape.
I chose all EDGES and then chose the CREATE SHAPE FROM SELECTION option.
I then had a LATTICE mesh object which I placed over the glass window.
TO MAKE A HOLE IN THE WINDOW RECESS
Got my glass window.
Positioned it in its rightful place in the wall. Widened it for a moment.
Chose compound object.
Then I chose BOOLEAN > SELECT OPERAND B.
Had my glass window selected.
I chose SUBTRACTION [B-A]
I clicked on the wall.
The window shape disappeared, leaving a hole through the wall.
I then moved one of my copies of the glass window with the lattice design into the hole slot.
As part of the 3D modelling project for the module in Remaking the Physical, I decided to choose as my case study Saint Manchán’s Church in Lemanaghan, County Offaly. The church dates back to the eleventh century, with extensions added in both the twelfth century and the fifteenth century.
I decided that it would be interesting to try and recreate how the church may have looked in the twelfth century. The first thing to do was to find out historically factual information about the church itself and archaeological information about twelfth century Irish churches and what they looked like — including what type of furnishings or features might have been commonly found in these buildings during the early medieval period.
Dr Rachel Moss, Assistant Professor in the Department of the History of Art and Architecture at Trinity College, Dublin, and co-author of the Lemanaghan Heritage Conservation Plan, drawing on her expertise in medieval Irish ecclesiastical sites, kindly gave me an archaeologically based interpretation of how the Lemanaghan site may have appeared circa the twelfth century, which I have listed below:
The walls would have been rendered and lime washed, possibly with wall paintings, but many were also apparently left plain.
Parish churches (as this would have been by the twelfth century) often had two devotional statues on separate shelves either side of the altar – in this case St Manchán on the left (north) of the altar and the BVM [Blessed Virgin Mary] on the right (south).
There would have been no seating and probably a beaten earth floor with rushes and some flat grave slabs.
The roof was probably thatched.
It’s possible that the east window might have had coloured glass.
Given the time limitations of this project, I may have to omit more difficult features such as the statuary and the wall paintings for the moment, but if I get more time in the future to add these details I will. For now, I think it’s important to get the main structure of the building right, along with its textures and colouration. The main things to worry about for the time being, then, are:
The lime washed stone walls.
The earth floor.
The thatched roof.
First arriving at the site in Lemanaghan, I noticed that the ground inside the church walls – now quite unkempt in appearance, with long grass growing in places and many rocks and stones cluttered about – was quite uneven, and clearly open to the elements, judging by the mould and moss growing on the gravestone slabs and vertical gravestone markers. In total, there are three gravestone slabs and three gravestones, two of which bear cross emblems on top. It was an overcast day, which would minimise the possibility of heavy or stark shading across the stone surface of the church walls and the ground. The camera I used was a Sony Handycam DCR-SR55E, with the f-stop at f/4 and the shutter at 1/100. I began at the west side of the church, keeping the camera held at a 90 degree angle from the ground as much as possible, and went around the four walls in that position (held at a height of about 2 and a half ft from the ground). I repeated this process at about 10 inches from the ground, then again at about 5 and 1/2 ft, and finally at around 7 ft, holding the camera above my head so as to get the top of each wall.
After processing these images at home, I realised that I had made one fatal error in my capturing process. Although the model came out relatively well, many of the 164 images I had taken could not align, as I had been remiss in keeping in mind the 60% overlap rule. Whenever I turned a corner – say, from the north wall to the east wall – I failed to turn the corner gradually, by degrees. Instead I turned at a sharp 90 degree angle, which meant the software could not stitch the last photo taken from against one wall with the next photo taken from around the corner, since there was no gradual overlap or incremental change between the two viewpoints. The software, confused by this lack of overlap between the last image at the east wall and the first along the south wall, could not align a whole series of shots taken along the south wall.
As many photos of the north wall failed to align because of this very issue, much of that wall was left out of the final 3D model. There were also areas behind the easternmost gravestone slab that were not processed, as well as some of the ground behind it. Many of the ground areas failed to render too, which left large parts of the church ground and the north wall as gaping holes in my model.
I decided, given the capturing process had affected the quality of my model (despite the fact that all the images themselves were above the recommended PhotoScan quality), to head out to the site once more and to capture the images again (except this time turning by small degrees around the corners). I considered just taking the ‘corner images’ again on their own, and adding them to my last sequence of photos as a way to fill in those gaps that were the cause of the software’s confusion during the alignment phase. I then considered that the difference in lighting might be too great between the two different days of shooting, and so I decided just to take another whole sequence of images.
The north and east walls were brightly lit by sunshine, while the south wall was in the shade. I knew that this lighting was not optimal for photogrammetry, but I would just have to make do, as I might not get another chance to visit the church before the project submission deadline. Ideally, I would have gone out in the evening on a sunny day with clear skies; unfortunately, I was only able to make it to the church in the early afternoon.
I took my images of the church this time from five different height positions, not four, as I felt there was much too wide a gap between the ‘medium’ height position and the highest, so I decided to go around the church with an in-between position at around 5 ft.
When I uploaded my photos to PhotoScan this time, I was surprised to see that I had 464 images in total, compared to the 164 making up my last batch. This was probably due to the extra photos taken when turning corners and the extra height position, plus the fact that I covered more of the church area (I had decided not to cover some of the west side of the church last time, as a grave in the corner made it hard to manoeuvre around that area). Since I wanted all of the information from each photograph processed, I did not have to make masks for the images. I noticed that 8 images were not up to the image quality standard recommended by PhotoScan, so I decided to deactivate these for the alignment phase. Their low quality was mainly down to blurriness, though two images taken from very low down were obscured by small rocks on the ground, and so were too dark and didn’t register any information.
For the alignment phase of my processing I chose the ‘high’ option with ‘generic’ matching of images. The process took just over 2 and a half hours, and when I saw the result I was extremely impressed. It seems that for larger areas, the more photos the better, as the 460-plus photos appear to make it easier for the software to match all the points; 164 images, in hindsight, just wasn’t enough. Because the ground area is often obscured from certain angles by the tomb slabs, long grass and scattered debris and rocks, taking more images from more angles meant that an area obscured in one photo could still be registered in another. I then decided to build the dense point cloud in ‘ultra high’ quality mode for best results. On realising this setting would take up to 25 hours to complete, I aborted and chose the ‘high’ quality setting instead, which took 3 and a half hours to complete.
Issues with Model
The church area was quite difficult to capture. The uneven ground, tombstones and fallen masonry cluttering the area make it difficult to capture each and every nook and cranny. The long grass did not appear to process correctly: in the dense point cloud model, much of what should have been the rendered grass was positioned “underneath” the actual ground level, where the bases of the tombstones begin. There are many ‘holes’ in the ground of the model as a result. If I had more time to undertake this project, I would test out different methods (maybe cutting out the images taken from the lowest height position, as many of these simply captured the base sides of tombstones or long leaves of grass). I would also use MeshLab or other similar image processing tools to edit my model. I am pleased that the positions of the tombstone slabs and headstones came out correctly, and that there is a good sense of the ‘space’ of the area.
Vulnerability of the Church and Conservation:
According to the Conservation Plan for the church site and wider monastic complex, the sites and their fabric will continue to deteriorate unless corrected (47).
Excessive ivy growth (which has now been cleared by conservation specialists) was at one stage the only thing keeping masonry in place before part of the south wall was restored and the ivy cleared. Other preservation and conservation issues include:
• Concern for the future security of some of the higher quality carved stone associated with the site.
• The two early Christian slabs attached to the wall of the church are at risk of theft and should be made more secure. As these are not in their original location, it would be desirable to remove them and store them with the rest of the collection of slabs from the site.
• The potential of increased visitors to the site poses the risk of additional wear and tear to the structures, and perhaps to the togher (historical tree-lined pathway to the monastic complex).
For reasons of preservation, the Plan also points out that:
local pride in the history and archaeology of Lemanaghan is strong. However, an increased awareness of the national significance of the site would be of benefit in terms of providing further protection. This is particularly true of educational projects at school levels which will ensure understanding and protection by future generations.
After rendering my three-dimensional model, I can see many benefits of photogrammetry for archaeological purposes and for the dissemination of information about these sites. If a visitor centre were ever built for the site, a three-dimensional model could be used as an edutainment tool, or a way for visitors to virtually explore and experience the site from all dimensions and vantage points. Photographs may not give us a real sense of a significant historical site: especially in the case of Manchán’s church, the lack of perspective means that the width and length of the church cannot be captured satisfactorily, nor the distance between the gravestone slabs and the vertical gravestones. Photographs cannot give us a true sense of the amount of space the stone slabs take up either; only two-dimensional overhead survey sketches or overhead images could potentially afford us these views. Three-dimensional models can also give us accurate masonry detail, including the depth and relative size of features such as the piscina in the wall and the window recesses that splay inwards.

The church site is itself quite ‘squashed’, for want of a better word, and it’s difficult to take a picture from one position without any of the fallen masonry or tomb structures blocking your view: again, a three-dimensional model gets around this issue. The model could also be thought of as a digital preservation technique. The Saint Manchán Church site, as stated by the Conservation Plan report, is in dire need of repair, conservation and preservation. Although conservation work has been carried out, if this does not continue and if the masonry is not continually strengthened, much of the facade of the church, along with the ancient stone slabs within it, will continue to deteriorate.
A three-dimensional model may preserve the state of the site at the time the photographs were taken – potentially serving in some form as a record – with the 3D model bringing that space to life.
In my last practicum-related blog post, I adumbrated some of the preparatory procedures involved when setting out to develop a software application. You can read this post here. In my case, the application is a text differentiation tool (which highlights the differences between two text strings) for the new, revised version of the Versioning Machine, due to be rolled out sometime later this summer.
<p id="string1">The quick brown fox jumps over the lazy dog</p>
<p id="string2">The quick brown fox jumped over the lazy dog</p>
The two strings above, enclosed in <p> tags and each with their own unique id attribute, serve as proxies for the VM’s HTML sample files, with which I will hopefully be working at a later date, once my prototype is in working order.
var String1 = document.getElementById("string1");
var String2 = document.getElementById("string2");
One thing I need to keep in mind, however, is that the VM code may not lend itself to being accessed through the getElementById() method, and so I may need to figure out another way of grabbing the appropriate elements if I were to begin implementing my JS code within the VM. All I am doing here is creating a working HTML and JS environment in order to experiment with different JS methods and to see if they may be of use for the application I intend to build.
Once I get the correct HTML elements and have stored them in variables, I will need to figure out how to “split” the strings by each word, because my text comparison tool needs to differentiate between the two strings word by word. As it stands, the two HTML strings stored inside var String1 and var String2 are held simply as a series of characters, not word by word. The computer sees each string as one unit, made up of characters (like letters) and whitespace (which is also a character). In order to change the way the computer reads the strings – that is, word for word, closer to how a human parses a sentence – we need to break, or split, each string up. As I mentioned, whitespace is in and of itself a character, just like a letter. Therefore, if we split each string at every whitespace character, we capture each word sitting between two areas of whitespace. These words are still strings (series of characters) in a sense, but we’ve found a way for the computer to recognise each of these groups of characters as a distinct entity. Splitting a string at the whitespace is very easy: all you have to do is use the string split() method with a space between the quotation marks, i.e. split(" "). The " " between the parentheses is the separator at which we ask the computer to break the string. If, instead, we used split("b"), the string would be broken up at every letter “b” the computer comes across.
If then I were to split the variable String1 up by whitespace:
var String1 = document.getElementById("string1");
// getElementById() returns the element itself, so we take its text
// content before splitting it at each space character.
var myArray1 = String1.textContent.split(" ");
The result would be that it is broken up, like this:
["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"]
If I were to split the same text with split("b") instead, it would come out like this:
["The quick ", "rown fox jumps over the lazy dog"]
Employing the split() method on both var String1 and var String2 means that we have two arrays of substrings, so to speak, made up of blocks of characters that are essentially words. As with the split(" ") example above, the substrings of String1 are now stored in the variable myArray1. We would do the exact same for String2, and we could call that array myArray2.
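To double-check my understanding, here is a small standalone sketch of this split step, using plain string literals in place of the DOM lookups (the variable names mirror my example above, but nothing here depends on the page):

```javascript
// Standalone sketch of the split step, with string literals standing
// in for the text grabbed via document.getElementById().
var text1 = "The quick brown fox jumps over the lazy dog";
var text2 = "The quick brown fox jumped over the lazy dog";

// split(" ") breaks each string at every space character,
// giving back an array of words.
var myArray1 = text1.split(" ");
var myArray2 = text2.split(" ");

// Both arrays hold nine words; they differ only at index 4
// ("jumps" versus "jumped").
```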
Arrays in JS are essentially a special type of object that can store multiple values in a single variable. Each of these values has an index number, beginning at 0 and running up to the array’s length. The index numbers for our two arrays are the same. The indexes of myArray1, then, are:
Index 0 = The
Index 1 = quick
Index 2 = brown
Index 3 = fox
Index 4 = jumps
Index 5 = over
Index 6 = the
Index 7 = lazy
Index 8 = dog
For our other array, myArray2, now storing the String2 information following the use of the split() method, the word at each index number is identical, except for Index 4, which instead of ‘jumps’ is ‘jumped’.
The next phase, then, will require comparing these two arrays, myArray1 and myArray2, getting the computer to determine which index is not the same (Index 4) and then HIGHLIGHTING this difference. The highlighting can be done very easily with the simple CSS background-color property. So, whenever the computer comes across a difference, it will need to somehow apply this background-color highlight to the word at the index in question.
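Although I haven’t properly researched the comparison step yet, a naive first sketch of it might look something like the following. The function name here is just my own placeholder, and it assumes both arrays hold the same number of words, which certainly won’t hold for the VM’s real files:

```javascript
// Naive sketch: walk the two word arrays in step and wrap any word
// that differs in a <span> carrying an inline background-color,
// which produces the highlight once written back into the page.
// Assumes both arrays are the same length.
function highlightDifferences(words1, words2) {
  var result = [];
  for (var i = 0; i < words2.length; i++) {
    if (words1[i] !== words2[i]) {
      // Difference found at this index: wrap the word for highlighting.
      result.push('<span style="background-color: yellow">' + words2[i] + "</span>");
    } else {
      result.push(words2[i]);
    }
  }
  // Re-join the words into a single HTML string.
  return result.join(" ");
}

var myArray1 = "The quick brown fox jumps over the lazy dog".split(" ");
var myArray2 = "The quick brown fox jumped over the lazy dog".split(" ");
// Only index 4 differs, so only "jumped" ends up wrapped in a span.
var highlighted = highlightDifferences(myArray1, myArray2);
```

The resulting HTML string could then be written back into the page, but that is a detail I’ll need to think about once I move from these toy strings to the VM’s actual files.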
At the moment, I need to do more research into how to compare two arrays, and then, once a difference is picked-up on, I need to find out how to somehow take this ‘difference’ out of the array so that it can be highlighted. I intend to keep you all posted on my progress.
The second part of the photogrammetry project involved the capturing and processing of a larger object, conceivably something that was outside, like a statue, or an interior area, like a room. Since I had already captured an object in my first task at the museum with the Bronze Age ‘food bowls’, I thought that it might be interesting to attempt capturing an area this time round. From capturing the Bronze Age bowl at the National Museum in Dublin, I knew that it was imperative to incorporate all sides and all aspects of an object so that the processing and rendering software, namely Agisoft’s PhotoScan, could triangulate the images, or stitch them together for alignment in order to create a three-dimensional figure. I was also aware that it was important to have at least a 60% overlap between each image. I extrapolated from this information that the same must be the case for the capturing of an area – photographs of all sides and 60% overlap for alignment.
As my object of study, I chose Saint Manchán’s Church, Lemanaghan. This site can be found on the R436 Ballycumber to Ferbane road, County Offaly. <http://www.offaly.ie/eng/Services/Heritage/Archaeology/Monastic_Sites/>. According to the Offaly County Council information page about local monastic sites, this monastery had recently undergone a programme of conservation from 2000 to 2010. The Lemanaghan conservation plan can be found online at the following link: <http://www.offaly.ie/eng/Services/Heritage/Documents/Lemanaghan_Conservation_Plan.pdf>.
Information about the Church:
As it now stands, the church is rectangular, measuring 19.4m x 7.5m (63’6” x 24’6”) (Lemanaghan Conservation Plan, 24). It is roofless, and by 2001 had a vigorous ivy growth on all walls. The monastery at Lemanaghan is said to have been founded in AD 645 (41), with the construction of the earliest section of the church circa AD 900-1100. An extension of the west-facing wall of the church was then added c. 1200. Another addition to the eastern wall was constructed sometime in the 17th century. In summation, the Conservation Plan has established the site at Lemanaghan as:
• A sacred place of great antiquity
• A place containing buildings of architectural significance
• A place rich in documentary history and archaeological potential
• A place where there is a long tradition of devotional practice
• A place ‘apart’, possessing a strong sense of being untouched by the modern world
History of Manchan and Lemanaghan:
Manchan, an early Christian monk, founded the monastery at Tuaim-nEirc (now Lemanaghan), an island of dry land surrounded on all sides by the red bogs of the region (the Bog of Allen) (O’Brien, 180). Because of his deep knowledge of the scriptures, Manchan was often referred to as the ‘Jerome of Ireland’. According to local tradition, Manchan was a tall, lame old man. The natural spring well situated beside the monastery would have provided the monks – and the community living in the area before their arrival – with a source of clean water. It is possible that this well was once a focus of pagan rituals (181). To succeed in establishing his monastery amongst the people of this region, Manchan had to ‘convert’ their most important spiritual places, and so in Christianising the well at Lemanaghan, Manchan would have enabled local people to accept the new religion without leaving behind their long-established symbols of worship. Manchan perished in the epidemic known as the Yellow Plague of 664. Those who succeeded him took the title of ‘Abbot of Liath Manchain’. During the twelfth century, the monastery is said to have experienced a Golden Age, with many abbots governing the site (many of whom were probably selected from the sister monastery at Clonmacnoise) (183). This period, it was said, was when the construction of the church proper first occurred, with its ‘beautiful Romanesque doorway’ (183).
O’Brien, Caimin. Stories from a Sacred Landscape. Offaly County Council. 2006.
The first step in creating my 3D model was to upload my photos to PhotoScan. I had 97 images in total. The next step, after uploading the images, was to check whether any were of too poor a quality for use in the alignment phase. All of my images were above the recommended quality level, so it was not necessary to discard any just yet.
The next step was to draw masks on my images. Because the background behind the vessel was uniformly white, the software did not have much trouble identifying its edges. This meant that the “magic wand” tool worked quite well, with only some slightly dark areas outside the vessel causing any confusion for the software.
As a workaround to this issue, I could use another masking tool called the “intelligent scissors” which meant I could cut away any superfluous areas within the mask that I didn’t need. Using the “magic wand” and then the “intelligent scissors” to tidy up the mask borders worked really well and gave me clean, neat results.
The first time I attempted the masking process I mistakenly used the marker tool, which meant that I had unknowingly incorporated the background information into my images for the later phases of processing the model.
The next step was to align the photos. I had to redo this process several times, in a trial-and-error fashion. The first time, I noticed there were many stray white points around the exterior of my aligned model. To fix this problem, I chose the “constrain object by mask” option in the pre-alignment calibration phase. I also noticed that some of the bowl shape, particularly at the rim, was becoming stretched or distorted, with part of the rim trailing off like the arm of a spiral galaxy. This, of course, needed fixing, so out of curiosity I decided to disable all images of the vessel taken from the highest point, those which were used to get the inside of the vessel and those that captured most of the base from high up.
I felt that the difference between these images taken from high up and the rest might have in some way confused the software – but I’m still not too sure why. I also realised that the base was more or less captured in the images taken from the “second-highest” position, so the ones I had disabled probably weren’t giving the software any new information it didn’t already have about the vessel anyway.
In disabling these images, however, I knew that I would be sacrificing the inside of the vessel for the purposes of getting a proper, undistorted model of its base, sides and rim. This way, I would at least have a model that looked like the vessel I captured in the museum, albeit one without an interior.
The next stage was to build the dense point cloud, which I ran on the “high” quality setting, and then the mesh.
In hindsight, I feel that 97 images were not quite enough for the processing stage. If I were to capture the images again at the museum I would probably take between 125 and 150 images, as it is good to have more images than you need: you can discard images that are not aligning properly and add ones that may help in the processing of the model. Although I captured the vessel from four different height points, I would do it from five if there were a next time, as I feel the ‘jump’ between the second-highest point and the highest confused the system somewhat, or at least made aligning all the points a difficult job.
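A quick bit of arithmetic helps when planning how many images a capture session will yield. The numbers below are illustrative assumptions rather than a record of the actual session: 10-degree turntable steps (so 36 shots per full rotation), and one extra ring shot with the vessel upside-down to cover the base.

```python
# Rough capture-planning arithmetic for a turntable photogrammetry session.
# Assumed: 10-degree steps per shot, plus one inverted ring for the base.

def shots_per_ring(step_degrees=10):
    """Number of photos in one full 360-degree rotation of the turntable."""
    return 360 // step_degrees

def total_shots(height_rings, step_degrees=10, inverted_rings=1):
    """Total photos for the given number of height positions, plus
    extra ring(s) taken with the vessel turned upside-down."""
    return (height_rings + inverted_rings) * shots_per_ring(step_degrees)

print(total_shots(4))  # four height rings + one inverted ring -> 180
print(total_shots(5))  # five height rings + one inverted ring -> 216
```

Under these assumptions, adding a fifth height ring costs only one more rotation of the turntable but comfortably clears the 125–150 image target.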
In terms of processing, it seems I’m yet to find the magic formula that makes the model come out clear and well aligned. I am still getting ‘debris’, or bits of unaligned or misaligned bowl scattered about the rendered model. I then realised I could remove these bits of debris with a cutting tool, so I went back to the dense cloud and redid the mesh procedure.
As part of the first stage of the practical assignment in 3D recording, we were given the task of capturing ancient Irish pottery bowls, dated as far back as 2500 BC <http://www.museum.ie/en/collection/bronze-age.aspx>. These Bronze Age vessels were found in burial sites, or cists, and it is believed that they contained food as an offering to the dead for their journey to the afterlife.
In photogrammetry, it is necessary to capture all sides of an object, in 360 degrees. Particular challenges in capturing these ceramic vessels included getting images of the interior of each object, as well as the base, and making sure that the lighting was correct so as to capture as accurately as possible the decorations and patterned incisions made around the exterior of the objects.
As mentioned above, it is important to capture the object in 360 degrees, on all sides, so that the software can extrapolate a three-dimensional model from the two-dimensional images. Since it is best to keep camera movement to a minimum, it is a good idea when working with smaller objects to rotate the object rather than to move around it with the camera. Rotating the object is relatively easy if it is placed on a turntable. Keeping the camera in the same position for a series of captures also reduces the chance of blurred images, which may not be of good enough quality for the image-processing stage in PhotoScan.
Lighting is yet another issue to think about in photogrammetry. Harsh lighting and cast shadows are best avoided, so it is worth your while to light your object equally from all sides <https://dinosaurpalaeo.wordpress.com/2013/12/20/photogrammetry-tutorial-3-turntables/>. A well-lit object means surface detail is not distorted by shadow, and so the intricately incised patterns and shapes on the surface of the ceramic vessels, in this instance, would be clear and undistorted once the three-dimensional models are processed. There is also a very handy way to diffuse the light hitting your object: the lightbox. The lightbox is essentially a thin white box made of canvas-like material, with an opening on one side through which you can place your object to record it. The object can be placed on the turntable in the centre of the lightbox. It is best to place two lights at an equal distance from the box on either side, with a third light source directly above, pointing down on the box. This arrangement of lights ensures that no area of the object is more lit up than any other, and so there are no stark shadows distorting the surface detail.
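The reason equal distances matter is the inverse-square law: a light's contribution falls off with the square of its distance. The sketch below is a simplified model, treating each lamp as an idealised point source of equal intensity (the intensity and distance values are made up for illustration).

```python
# Inverse-square falloff: why matching the side lights' distances
# evens out the illumination on the object. Idealised point sources.

def illuminance(intensity, distance_m):
    """Illuminance at a given distance from a point source (arbitrary units)."""
    return intensity / distance_m ** 2

left = illuminance(1000, 1.0)    # side light 1 m from the box
right = illuminance(1000, 1.0)   # matched distance -> matched brightness
closer = illuminance(1000, 0.5)  # the same light moved to 0.5 m

print(left, right)  # 1000.0 1000.0 -> even lighting, no stark shadows
print(closer)       # 4000.0 -> halving the distance quadruples the brightness
```

So a light nudged even slightly closer on one side brightens that side disproportionately, which is exactly the uneven, shadow-casting setup the lightbox arrangement is meant to avoid.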
We arrived at the museum early in the morning, as five different objects were to be recorded, which we knew could take at least four hours between all of us. We chose a table on which to place the lightbox and set up our equipment. The camera was hooked up to the laptop in order to take the pictures remotely, as it was thought that pressing down on the shutter-release button every time could cause the camera to shake. We used a Canon EOS 60D digital camera to take our pictures.
After choosing the vessel I would like to capture, and once it was placed in position on the turntable, I inspected the image through the “live feed” on the laptop screen. I noticed that the image was a little dark, so I decided to turn the ISO up a little, to 120. I needed to be careful not to turn the ISO up too much, since this could cause the image to become grainy and so decrease its quality or sharpness. I also noticed that the rim of the vessel furthest from the camera was a little out of focus, so I stopped the aperture down to f/22 to increase the depth of field.
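These two adjustments pull the exposure in opposite directions, and it can be useful to reason about them in "stops". The starting values below (ISO 100, f/8) are assumptions for illustration; the post only records the final settings.

```python
import math

# Exposure changes measured in stops (EV). Illustrative starting values:
# the session notes only give the final ISO (~120) and aperture (f/22).

def iso_stops(iso_from, iso_to):
    """Doubling the ISO gains one stop of brightness (at the cost of grain)."""
    return math.log2(iso_to / iso_from)

def aperture_stops(f_from, f_to):
    """Each full stop narrower multiplies the f-number by sqrt(2);
    a positive result means less light but more depth of field."""
    return 2 * math.log2(f_to / f_from)

print(round(iso_stops(100, 120), 2))    # 0.26 -> a small brightness gain
print(round(aperture_stops(8, 22), 2))  # 2.92 -> much less light, deeper focus
```

The rough takeaway: stopping down from f/8 to f/22 loses nearly three stops of light, far more than the small ISO bump recovers, which is why a dim live-feed image is a common side effect of chasing depth of field.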
During the photographing stage, I was careful to rotate the turntable only a little between each shot, by about 10 degrees each time, since there needs to be at least a 60 per cent overlap between consecutive images in order for PhotoScan to triangulate them correctly. I made sure to capture my vessel from three different height positions, and I had to make sure that, at my highest angle, the camera was able to capture the interior of the vessel. I then needed the bowl to be turned upside-down so as to get the base, and then to repeat the process, shooting the vessel from three different height positions, 360 degrees around each time. The photographing process took about 40 minutes in total.
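A rough way to sanity-check the 10-degree step against the 60 per cent overlap requirement: if the camera's horizontal field of view is `fov` degrees, each rotation of `step` degrees shifts the scene by roughly `step / fov` of the frame. This is a simplified model, and the 55-degree field of view below is an assumption, not a measured value from the session.

```python
# Approximate overlap between consecutive turntable shots.
# Assumed horizontal field of view of ~55 degrees; the real value
# depends on the lens and the distance to the object.

def overlap_fraction(step_deg, fov_deg):
    """Rough fractional overlap between two consecutive shots."""
    return max(0.0, 1.0 - step_deg / fov_deg)

print(round(overlap_fraction(10, 55), 2))  # 0.82 -> well above the 60% minimum
print(round(overlap_fraction(30, 55), 2))  # 0.45 -> too coarse to align reliably
```

Under this assumption, the 10-degree step leaves a comfortable margin over the 60 per cent minimum, which is worth having, since foreshortening at the edges of a curved vessel eats into the usable overlap.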
All bowls featured in this blog courtesy of the National Museum of Ireland.