Presentation at ESTS 2015

I recently attended the European Society for Textual Scholarship’s 2015 conference, held at De Montfort University in Leicester, UK, where I gave a presentation entitled Beyond Google Search: Editions as Dynamic Sites of Interaction. The presentation focused on some of the common UI tropes and metaphors we rely upon in digital scholarly editions and examined how these elements are applied. It consisted of a discussion of interaction design, a breakdown of the common tropes and metaphors alongside a comparison of 14 different scholarly editions and the metaphors each utilises, and a brief case study involving the Letters of 1916, a project at Maynooth University with which I have had the pleasure to be involved.

While my plan is to turn this presentation into a paper for the ESTS 2015 Variant, I have had some requests for the presentation slides from people interested in the content. As such, I’ve included a link to a Google Slides version of the presentation, which can be found here.

This presentation just begins to scratch the surface of my research, and I am more than happy to discuss any questions or comments you may have.  Please feel free to utilise the contact form on this blog to get in touch with me.

Happy reading!

Modeling Maynooth Castle Part 3 – The Keep

I started modeling the keep a few days ago and felt I was making good progress.  Unfortunately, I was wrong.  However, I’ve learned a very important lesson:  when doing extrusions on an object, always zoom out to make sure you didn’t accidentally destroy the geometry.  Sadly, this lesson cost me about 3 hours’ worth of work.  Thankfully, I started making regular backups of my saves; otherwise this could have been much worse.

The Process

I started the keep with just a standard cube. I then constructed small towers along the top, and for the crenellations, I did what I have always done: I extruded the polygons of the cube (after converting the cube to an editable polygon). I’ve never had a problem with this. Here is what the top part of the keep (the area I was focused on) looked like when I noticed my problem.
Top of Keep
I then zoomed out in order to inspect some aspects of the roof of the keep (for which I was about to create the pitched roof). That’s when I noticed my problem.
Messed Up Keep
As you can see from the image, quite a bit of the keep’s geometry is distorted. Random sections are extruded or missing, and the bottom of the keep was completely mangled. I probably could have fixed it, but the work required would most likely have equaled the time I had spent since my last backup. So I decided to cut my losses and simply revert to that backup.

What Went Wrong?

I’m not entirely sure where everything went wrong, to be honest. The only thing I can suspect is that I accidentally selected other polygons while selecting the polygons for the roof (either that, or I failed to set the height segments when converting to an editable polygon and was thus extruding the entire height of the keep rather than a section closer to one cubic metre).

Lessons Learned

One lesson I’m taking away from this is to always check my segment counts before converting to an editable polygon; another is to frequently inspect the entire object when making modifications that affect the geometry. But the most important lesson here is to create FREQUENT backups of my scene. That habit really saved me this time (I only lost a few hours’ worth of work), so it’s definitely a lesson I have taken (and am continuing to take) to heart.

Modeling Maynooth Castle Part 2 – Crafting the Walls

In my first blog on modeling Maynooth Castle, I discussed the goal of my final project for AFF621 and a little bit of the background regarding the castle itself.  As I began the process of modeling the castle, I decided the first place to start would be to model the walls, which is what I intend to discuss today.  However, before I get started, I’d like to take a few moments to discuss the planning of the model itself.

A Little Project Planning

Like some graphic design software, such as Adobe Photoshop, 3DS Max requires you to think through how the model is going to be constructed so that you can organise your scene. To start this process, I thought about what kinds of objects would be in my scene and how I wanted to arrange them. I came up with the following major items that would need to be part of the scene:

  • the castle walls
  • the keep
  • the church
  • the gates
  • the living quarters / kitchens / etc.
  • miscellaneous buildings (such as the stables, barn, etc.)
  • the river(s)
  • miscellaneous nature (trees, grass, any other surrounding items, etc.)

Depending on how quickly I am able to execute on the model itself, I may or may not be able to complete everything, so I’ve decided to prioritise the first 5 items with the hope of adding the other objects should I have the bandwidth.

In order to facilitate this process, I decided I would keep each of the above-mentioned sections in its own layer. This way I could easily turn off entire sections if needed (which could be helpful while fine-tuning other aspects of the scene). Additionally, by grouping them in layers, I can focus on one section at a time, very similar to how I would construct a physical model from Lego or other building materials. With these decisions made, it was time to start the actual modeling process.

Constructing the Walls

I decided to start with the walls for two reasons:

  1. They would form the boundaries of the rest of the inner buildings of the castle itself
  2. They would most likely be the easiest thing to model

I decided to do a quick Google search to begin with just to see if anyone had modeled anything similar in 3DS Max and might therefore have some insight into the process of building walls.  I was lucky enough to happen upon a video that discussed how to construct a castle using 3DS Max.  I watched some of it and decided that it may be useful later, so I’ve filed it away for future use. I also found a script that was created to make the construction of walls easier.  I thought this might be the ideal script to facilitate the creation of walls, so I downloaded it and started using it in my scene.

Unfortunately, the script wasn’t quite what I was looking for. It’s the kind of script that is great for creating brick walls, but one of the biggest problems was that, while it allowed me to set the number of segments for length and height, I couldn’t set the number of segments for the width. Since I needed to create crenellations along the top of the walls, using this script would have meant modeling individual crenellations manually rather than extruding them from the wall itself. I also discovered the script was very computationally intensive; a single wall noticeably taxed my laptop. Ultimately, I decided to discard the script and create my own walls from scratch.

I decided to start by creating a box that matched the dimensions of the wall I was trying to construct. I then gave the wall an appropriate number of segments for length, width, and height to simulate brickwork. For example, the south wall is 86.8 metres long, 3.51 metres wide, and 4.96 metres tall, so I gave it 86 length segments, 3 width segments, and 5 height segments, which makes each stone roughly 1 metre × 1 metre × 1 metre. This was an arbitrary decision, but one I felt was reasonable.
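The segment arithmetic above can be sanity-checked with a small sketch (a hypothetical Python helper of my own, not part of the 3DS Max workflow; the wall dimensions and segment counts are the ones given in the post):

```python
# Hypothetical helper (not part of 3DS Max): given a wall's dimensions and
# the segment counts chosen for it, compute the size of each "stone".

def stone_size(dimensions_m, segments):
    """Divide each wall dimension (in metres) by its segment count."""
    return tuple(d / s for d, s in zip(dimensions_m, segments))

# South wall from the post: 86.8 m long, 3.51 m wide, 4.96 m tall,
# with 86 length, 3 width, and 5 height segments.
length, width, height = stone_size((86.8, 3.51, 4.96), (86, 3, 5))
print(round(length, 2), round(width, 2), round(height, 2))  # 1.01 1.17 0.99
```

Each stone works out to roughly a metre on a side, with the width segments running slightly large.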

After drawing out the wall, I converted it to an editable polygon and then used the extrude tool to create crenellations along the top. I decided each crenellation would be roughly 2 polygons wide and 1 polygon deep, and would be extruded by 1.016 metres. I then built the remaining 3 walls following the same pattern. When I was finished, the scene looked as below. Note: each wall is colour-coded so I can easily keep track of which wall is which. The south wall is red; the east wall is yellow; the north wall is green; and the west wall is blue.


Next, I needed to fit the walls together. The castle walls do not form a perfect square, but rather bend at several points to create a rounded, roughly square enclosure. There are also breaks where the walls protrude or are interrupted by gates or other buildings. I accomplished this in a number of ways: by creating generalised boxes of the appropriate size as placeholders, by cutting up the walls to create the protrusions, and by using soft selection to bend sections of the walls. The outcome can be seen below.


As you can see, there was a drastic change in the state of the walls from their first construction.  It’s slowly starting to come together.

Further Struggles

One of the things I’ve struggled with a bit is the application of materials. I have some images of the castle walls themselves. I was hoping to apply them as both a bump map and a diffuse map to create the illusion of stone on the walls.  Unfortunately, it hasn’t worked so well.  Either the images tile too much and look fake, or they blur and, well, look fake.  This will be something I’ll have to work on more later.  I’m sure I’ll blog about it in future posts.

Next Steps

Next, I’m going to focus on the castle keep and its enclosing walls. Hopefully this will be a little easier than the outer walls, especially now that I’ve picked up a few tips and tricks. More to come later . . .

Designing the Diary

We are now almost two months into the second half of Digital Scholarly Editing. The bulk of the work this semester is focused on the creation of a digital scholarly edition. We have chosen the war diary of Albert Woodman, a signaller with the Royal Engineers during the Great War. The diary itself is an interesting object: it spans two physical books and, unlike traditional diaries in which the author tends to write only a single entry per page, Mr. Woodman often records multiple entries on a single page (presumably to conserve paper). Additionally, he often inserts newspaper clippings and other miscellany into the diary, which is not easy to represent digitally.

Representing Related Media

One of the biggest questions we had to address was, “How do we represent these miscellaneous objects digitally in a way that holds true to the spirit of their analogue representation?” Most of these objects are directly tied to a diary entry (often, Mr. Woodman makes mention of the object in question in his entry, or the object itself refers to a battle or news item he discussed in a particular entry). Showing them separately or as secondary entries in the diary breaks the metaphor of the diary itself. After all, you can’t really have two entries for the same day—that isn’t in keeping with the way a user envisions a diary to work.

Ultimately, we decided to tie these items to an entry as “related media”. From an implementation standpoint, this is relatively simple. Within the TEI of the diary, we insert an additional <div> tag at the bottom of the day-level <div> that wraps that day’s entry. This <div> is given a type attribute with a value of “insert”. When we run the TEI through an XSLT transformation, these “related media” divs are extracted and added to the entry in a correspondingly titled section of the page.
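As a rough illustration of that extraction step (a sketch only: the project itself performs this with XSLT, and the TEI fragment below, including the image file name, is hypothetical), the same pull-out of <div type="insert"> elements can be mimicked with Python’s standard-library ElementTree:

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"

# A minimal, hypothetical TEI fragment mirroring the structure described
# above: a day-level <div> whose final child is a <div type="insert">
# wrapping that day's related media.
tei = f"""
<div xmlns="{TEI_NS}" type="day">
  <p>Diary entry text for the day...</p>
  <div type="insert">
    <figure><graphic url="clipping-01.jpg"/></figure>
  </div>
</div>
"""

day = ET.fromstring(tei)

# Find every insert div among the day's children and detach it from the
# entry body, so it can be rendered separately as "related media".
inserts = day.findall(f"{{{TEI_NS}}}div[@type='insert']")
for insert in inserts:
    day.remove(insert)

print(len(inserts))  # 1
```

In the real pipeline the XSLT stylesheet does the equivalent matching on `div[@type='insert']` and emits the contents into the “related media” section of the rendered entry.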

As to the represented model (the model that bridges the gap between the user’s mental, or expected, model[1] and the implementation), we decided the best approach would be to group these additional inserts (which may also include other interesting bits of media we find relevant to an entry) and provide a lightbox in which to view them. The lightbox, a modal popup that presents an image in an overlay[2], has numerous advantages:

  1. It provides additional screen real estate. By loading larger versions of the image into an overlay, a thumbnail of the image can be displayed on the main screen, which has a much smaller visual footprint.
  2. It can provide increased performance. Many lightbox implementations utilise JavaScript and AJAX (Asynchronous JavaScript and XML) to load images only when the user requests them. This means the image is not loaded into the DOM (Document Object Model) until it is actually requested, cutting down on the amount of data transmitted to the user’s browser. The less data transmitted, the faster the page loads.
  3. It maintains the visual narrative. Every interface tells a story and the goal of every interaction should be to supplement that story. If a user clicks on an image and the page is then reloaded to display a larger view of the image on another page, the narrative is broken because the user is moved away from the page. In order to re-enter the narrative, the user must use the back button in the browser. Anything that breaks the visual narrative runs the risk of breaking the entire experience for the user and thereby decreasing the overall “stickiness” of the website.

When used properly, the lightbox can provide a strong user experience and present the designer with additional screen real estate that would otherwise be unavailable. By using a lightbox approach here, we have managed to solve the issue of the ephemeral material as well as the problems presented with its presentation.

Multiple Entries on a Single Page

The second major interaction issue we had to address was the appearance of multiple entries on a single page. In many digital scholarly editions, the transcription is presented side by side with the original image[3] [4] [5]. This allows for a direct comparison between the transcription and the original text. However, such an implementation in the Woodman Diary is confusing. Often, an entry may begin in the middle of the page, but because we are trained to read from top to bottom, the user would immediately begin scanning the image from the top and might be confused as to why the transcription doesn’t match the image, not immediately realising the transcription begins with text halfway down the “page” in the image. Multiple methods were considered for handling this unique situation:

  1. The image would not be viewable side by side with the transcription. The user would read the transcription and could click on a thumbnail of the image that would then display only the image with no transcription. This method was discarded due mainly to the expected user interaction of comparing transcription and image side by side.
  2. We would attempt to position the transcription text on the page to match its position in the image. This, however, would require quite a bit of extra encoding as we would need to encode locations of text at given pixel points within the original image. While a novel approach, it was ultimately decided this would require too much effort, given we are working with limited time and resources. Additionally, we felt it was a less aesthetically pleasing interaction.
  3. We would present multiple entries on a single day to match whatever was displayed in the related images. This idea was also discarded due to potential confusion by the user. Because the user has certain expectations in their mental model as a result of the diary metaphor, a user clicking on 25 January would expect to see one entry for 25 January. Under this model, however, they would instead receive an entry for 24 January and 25 January, which might be confusing. The break in the mental model is potentially jarring enough as to disrupt the visual narrative.

After considerable deliberation, it was decided to modify the third approach and create a hybrid. On the initial view of the diary entry, the user sees only the transcription of the entry he or she selected. Thumbnails of the original diary page(s) are presented, which can then be clicked and viewed in a lightbox. This lightbox presents a larger view of the image along with the transcription of that entire page. If text exists in the transcription that is not part of the entry being viewed, it is rendered in a grey font indicating it is unrelated to the entry as a whole. As an example, if the user views an entry for 25 January, that entry begins in the middle of the page in the actual diary. When viewing that page in the lightbox, the latter half of the entry from 24 January is transcribed along with the entry for 25 January that appears on that image of the diary page. However, the text for 24 January is rendered in a light grey so as to visually indicate to the user that it is unrelated to the entry being viewed, while still providing the visual cue that the text he or she may be looking for is not directly at the top of the image.


The Albert Woodman Diary has been an interesting project to handle. It has challenged the class as a whole to consider different ways of presenting an analogue object in a digital environment. By drawing on our own digital experiences as well as research conducted within the field of User Experience design, we have been able to overcome these challenges and present a true digital edition that adheres to the underlying metaphoric premise of the diary but without limiting the interactions by adhering to the metaphor too strictly.


[1] Cooper, Alan, Robert Reimann, and Dave Cronin. About Face 3: The Essentials of Interaction Design. Indianapolis: Wiley Publishing, 2007. Print.
[2] Adam. “Are Lightboxes Good Web Design?”. 29 January 2011. Web. 20 March 2015.
[3] Sutherland, Kathryn, et al. Jane Austen’s Fiction Manuscripts. 2015. Web. 20 March 2015.
[4] Baillot, Anne. Letters and Texts: Intellectual Berlin around 1800. 2013. Web. 20 March 2015.
[5] Schreibman, Susan, et al. The Thomas MacGreevy Archive. 2001-2004. Web. 20 March 2015.