Explorations in Photogrammetry – Part 4

For part 4 of my series on photogrammetry, I will discuss the creation of a 3D model of the sepulchre, the object I’ve chosen for the second part of my 3D recording assignment (for more information regarding the sepulchre and the process of photographing it, see Explorations in Photogrammetry – Part 3). As with the bowl from the National Museum of Ireland (see Part 1 and Part 2 of Explorations in Photogrammetry), the construction of the model relied on Agisoft’s Photoscan Pro and Adobe Photoshop.

Creating the 3D Model

The process of editing the pictures in Photoshop was no different than it was for the bowl from the National Museum. As I covered this in Part 2 of my blog series, I will simply refer you to that post for more information regarding the Photoshop process.

The creation of the 3D model in Photoscan was somewhat different this time.  The initial process was still the same as it was for the bowl.  I imported the 178 photos into Photoscan and used the programme’s image quality assessment tool to determine the quality of the images.  To my surprise, I found the quality of the images to be considerably higher than that of the bowl photos:

Number of Images | Quality       | % of Total
4                | .50 – .59     | 2%
174              | .60 and above | 98%

However, I knew from my experiences while taking the photos that some of them would need to be manually filtered out.  These were primarily photos where the sun was at the wrong angle and caused bright spots to appear, or where the photos were so bright as to look overexposed.  This led to a further 37 images being filtered out of the result set.
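For anyone curious what this filtering step looks like outside the Photoscan interface, below is a minimal Python sketch of the same logic. It assumes the quality scores have been exported to a CSV of filename/score pairs; the file names and the manually rejected list are purely illustrative, not the actual files from this project.

```python
import csv

QUALITY_THRESHOLD = 0.6  # rule of thumb: discard images scoring below .6

# images flagged by eye for sun glare or overexposure (hypothetical file names)
MANUALLY_REJECTED = {"IMG_0107.JPG", "IMG_0112.JPG"}

def select_images(csv_path):
    """Return the file names worth keeping for alignment."""
    keep = []
    with open(csv_path, newline="") as fh:
        for name, score in csv.reader(fh):
            if float(score) >= QUALITY_THRESHOLD and name not in MANUALLY_REJECTED:
                keep.append(name)
    return keep

if __name__ == "__main__":
    kept = select_images("image_quality.csv")  # hypothetical export of the quality scores
    print(f"{len(kept)} images selected for alignment")
```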

I then began to apply a mask to each photo.  However, this process proved to be much more involved than with the bowl. When applying the mask to the photos of the bowl,  I was able to use the magic wand tool to select the background (which was a uniform colour), and with a few simple clicks—and a reasonably high success rate—filter out everything but the bowl object itself. The photos of the sepulchre proved much more difficult.

Due to the lack of uniformity in the background (which contained grass, trees, and other objects in the cemetery), I was unable to utilise the magic wand tool. Thus, I found I had to draw the mask around each view of the sepulchre in every picture individually, using the lasso tool. This was not only time consuming but oftentimes difficult to do precisely, as objects from the background would occasionally blend into the sepulchre, making it difficult to determine where the sepulchre began and other objects ended.
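To illustrate why the uniform background mattered so much, here is a rough Python/OpenCV sketch of the “magic wand” idea: with a plain backdrop, a simple colour threshold is enough to separate object from background. The file names and threshold values are assumptions, and any mask generated this way would still need checking before use.

```python
import cv2

# Rough stand-in for the magic-wand approach that worked for the bowl photos.
img = cv2.imread("bowl_photo.jpg")                       # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Treat low-saturation, high-value pixels as the plain backdrop (tune per setup).
background = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
mask = cv2.bitwise_not(background)                       # 255 = keep, 0 = ignore

cv2.imwrite("bowl_photo_mask.png", mask)                 # candidate mask image
```

With a cluttered cemetery background there is no single threshold that cleanly isolates the sepulchre, which is exactly why the lasso tool and the hours of manual masking were needed.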

Once I finished the masking, I then began to run the images through the standard processing workflow in Photoscan Pro. For some reason, however, these images were taking much longer to process. While aligning the photos took about an hour and a half (which was to be expected), building the dense point cloud proved to be a challenge. I initially kicked off this process on my Sony Vaio. This is a relatively powerful laptop with 8GB of RAM, a dedicated video card, and an Intel i7 1.80GHz chip. After running for almost 40 hours and being only 40% complete with the creation of the point cloud, I decided to cancel the operation and switch to my MacBook Pro, which has 16GB of RAM and a 2.3GHz Intel i7 chip. As of the writing of this post, the process to build the dense point cloud has been running for 17 hours and is approximately 50% complete.

When I saw the amount of time it was taking to process this model, I decided to try another method. Rather than applying a mask to every photo and then building the dense point cloud based on those masks, I cropped out areas of the photos that simply didn’t belong (such as a few photos where I managed to capture my book bag or another person visiting the cemetery). I then aligned the photos utilising Photoscan Pro’s built-in alignment tool. From there I manually cut out the extraneous information the software pulled in and created a dense point cloud. I then continued to manually remove the points I did not wish the application to include in the final model (such as some of the surrounding objects). From there I was able to build a mesh and apply textures to create a model. This method was much less time consuming, as I didn’t need to apply an individual mask to each photo. The time to process all of the photos and build the dense point cloud was about 20 hours—still a very intensive process but much better than the time it took to process the photos utilising the masks.
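For anyone who would rather script this second workflow than click through it, the outline below shows roughly how the sequence maps onto the Python console bundled with Photoscan Pro. I am sketching from the 1.x API, so method names, arguments, and enum values may differ in other versions; treat it as an illustration of the order of operations rather than a drop-in script.

```python
import glob
import PhotoScan  # only available inside Photoscan Pro's bundled Python console

photo_list = sorted(glob.glob("cropped_photos/*.JPG"))  # the pre-cropped images

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(photo_list)

# 1. Align the photos (builds the sparse point cloud).
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# 2. Build the dense point cloud (the roughly 20-hour step on my hardware).
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)

# Manual step: select and delete stray points (grass, neighbouring markers)
# in the GUI before meshing.

# 3. Mesh and texture.
chunk.buildModel(surface=PhotoScan.Arbitrary)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending)

doc.save("sepulchre.psz")
```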

Assessment

Overall, I am very happy with the model itself.  It turned out rather well given the lighting conditions and difficulties I had with background objects and processing.  As learning outcomes, I would make note of the following:

  1. Try to take the pictures on an overcast day.  While this is not always within your realm of control, pictures taken when the sun is hidden behind a cloud tend to be easier to process, largely because they are free of harsh shadows.
  2. Consider your environment.  One thing I did not take into consideration was the surrounding environment.  Had I thought of it, I might have taken some old sheets to drape over some of the surrounding objects.  This would have made applying the masks easier.
  3. Don’t always rely on the creation of masks.  For this object, I found the model that did not rely on masks but rather required me to manually edit the point cloud to be much easier to create. With this type of object, given the background items, I highly recommend this approach.

Final Model

As I mentioned earlier, the model utilising the mask approach is still processing. Once it finishes, I may post it here as an update. However, the model that I created without the use of masks turned out rather well. You can view it below.

Model Created Without the Use of Masks

Coming Up Next…

In my final post, I will discuss the process of 3D printing. I will evaluate available 3D printing services and explain which one I selected and why.

Explorations in Photogrammetry – Part 3

In part 3 of my photogrammetry series, I will discuss the second aspect of the photogrammetry assignment: using an outdoor object.  While the mechanics of this aspect of the assignment were similar to those of the first part (see Explorations in Photogrammetry – Part 1 and Part 2 for more information), this part of the assignment presented unique challenges.  As I mentioned in earlier posts, while working at the National Museum of Ireland, I was working in a controlled environment.  The object was small and placed on a rotating table.  The camera itself was on a stationary tripod. And most importantly, we utilised artificial light and a lightbox to ensure proper and consistent lighting.  I had none of these luxuries for the second aspect of this assignment.

For the second part of the assignment, I was tasked with creating a 3D model, with the subject of the model being something outdoors—the challenge being the lack of a controlled environment, especially in regards to lighting conditions.  I chose a sepulchre as my subject, one of the many objects in the cemetery behind St. Patrick’s College on South Campus here at Maynooth University.  The cemetery itself is rather small and houses mainly priests and nuns who have served St. Patrick’s College (although supposedly there are 2 unmarked graves of students who took their own lives and whose deaths have entered into folklore regarding the “Ghost Room” on campus[1]). The cemetery has a number of interesting markers, sepulchres and a crypt. The sepulchre I chose was that of Rev. Jacobi Hughes who, according to the inscription, served as Dean of St. Patrick’s College for 15 years. I found the sepulchre architecturally interesting, with its many angles and faces, which is why I chose it as my subject piece.

Taking the Photos

The process of taking the photos of the object was rather different than it was for the photos taken at the National Museum.  First, I had to be very aware of any shadows being cast—not just of shadows cast by the object and any surrounding objects, but also of shadows cast by myself.  Too many shadows would make it difficult for the software to accurately compile a point cloud.

Ideally, it is considered best to take photos on a cloudy day.  Given I am in Ireland, one would think this wouldn’t be a difficult task; however, it would seem the weather was not my friend, and the sun decided to shine high and bright the entire time I was attempting to take pictures.  This meant I had to be very careful with how I positioned myself while taking the pictures.  Due to the size of the object, I had to move around it in order to capture it from all of the requisite angles (as opposed to the bowl at the National Museum, which sat on a turntable that I could then rotate).  As such, I often found myself having to reposition the viewfinder and hold the camera at odd angles in order to ensure my shadow wouldn’t fall on the object as I attempted to capture it.

Another downside was the lack of a true preview.  While working with the camera at the National Museum, I was able to keep it connected to my laptop, where I could preview every picture and, if necessary, make constant adjustments to the settings.  This was not feasible with the sepulchre, as I was moving around the object and could not keep the camera connected to my laptop.  I had to rely on the viewfinder on the camera itself for a preview—an option which isn’t ideal for truly examining an image upon capturing it.

I was able to apply some lessons learned from the National Museum, however.  In this instance, I used a much narrower aperture (I kept the f-stop set at 22) and allowed the camera to adjust the ISO so as to optimise that setting.  Overall, I feel these pictures were much sharper and of a higher quality than the pictures taken while at the National Museum.

Coming Up Next…

In part 4, I will explore the process of creating the 3D model of the sepulchre. Specifically, I will be discussing the differences between the construction of this model and that of the bowl model from part 2. I will also assess the quality of the model and what areas of improvement could have been made to create a better object.

References

[1] Sam. “The Ghost Room in Maynooth”. Come Here To Me. 20 July 2012. Web. 3 April 2015.

Explorations in Photogrammetry – Part 2

In part 2 of my series on photogrammetry, I will discuss the process of creating the three-dimensional model of the bowl from the National Museum (for more information on the process of taking the images of the bowl—which I mentioned in my last post—see part 1 of Explorations in Photogrammetry).  The process itself involves the use of two pieces of software:  Adobe Photoshop and Agisoft Photoscan Pro.

Photoshop to the Rescue

The first step was to unify the images. This process was done using Photoshop and a few of its “auto” features. I began by setting up a batch job (learn how to create batch jobs in Photoshop here). This job applied the following auto corrections in order:

  1. Auto contrast – to adjust the brightness settings of each photograph and create a uniform brightness/contrast
  2. Auto colour – to correct any oddities in colour
  3. Auto tone – to smooth out any residual white/black in the image and apply a universal look

Each file was then re-saved in a separate location. It’s best practice to always keep a copy of the original image, unedited.
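For those without Photoshop, a similar batch pass can be approximated in a few lines of Python. This is only a rough stand-in using the Pillow library: autocontrast roughly covers the auto contrast step, while Photoshop’s auto colour and auto tone have no exact open-source equivalents. The folder names are my own assumptions.

```python
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("originals")   # untouched copies stay here
DST = Path("edited")      # corrected copies are written here
DST.mkdir(exist_ok=True)

for photo in sorted(SRC.glob("*.jpg")):
    img = Image.open(photo)
    img = ImageOps.autocontrast(img)        # rough stand-in for Photoshop's Auto Contrast
    img.save(DST / photo.name, quality=95)  # re-save separately; originals untouched
```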

Building the Model in Photoscan Pro

The next step was to import the photos into Photoscan Pro.  Photoscan Pro is software that allows a number of images to be “stitched” together using a point cloud.  This point cloud is then used to construct a 3D model of the object.

The first step involved assessing the quality of the images.  Photoscan Pro has a built-in quality assessment tool.  After running this tool against all the images imported into the programme, each photo is given a score (from 0 to 1) indicating how suitable it is for alignment and for producing a 3D model.  While learning how to use the software in class, we were taught that a general rule of thumb is to only use images whose quality is .6 or higher.

After importing the images and assessing their quality, I received the following results:

Number of Images | Quality       | % of Total
7                | .40 – .49     | 5%
45               | .50 – .59     | 29%
101              | .60 and above | 66%

I was a little dismayed that such a large number of my images (over 30%) were under the .60 threshold.  I decided to run two different models in order to compare the results.

I began by excluding all images under .60 quality. I applied a mask to each image so as to instruct Photoscan to ignore everything other than the object (this involved essentially “cropping” out the background and having Photoscan ignore anything but the bowl itself—a very time intensive process, but well worth the results).  I then used Photoscan to align the photos, build a point cloud, a dense point cloud, and then a mesh.  This created a somewhat reasonable 3D model of the bowl; however, there were a number of errors in its rendering and the model itself looked incomplete.

In order to correct this issue, I attempted to rerun the model by including the 45 images that were marked with an image quality of .50 – .59.  When including these images, along with the original 101 images of .60 quality or better, I received a much stronger model. This model lacked the rendering errors that were present in the first model and looked more complete.  Upon close inspection, however, one can see the very bottom of the bowl looks as though it was “cut out”.  This is not a feature of the bowl itself, but rather the result of my failure to take images at a deep enough angle to capture the entire inside of the bowl.

As I began to closely inspect the second version of the bowl, I also noticed there was quite a bit of “noise” in the model (areas of the bowl that looked pixelated or out of place).  I attempted to reapply the masks by cropping the images further (thus excluding the very edges of the object in each photo).  However, this did not turn out well. My bowl ended up rather flat-looking, as though someone had collapsed it upon itself.  I then took the original point cloud from the second model and began manually cleaning it up by looking for areas of white (which were the background and thus not part of the bowl).  After manually removing these points, I rebuilt the mesh and texture and developed my final model.
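For anyone tackling a similar clean-up, the removal of white background points can also be bulk-started outside Photoscan if the dense cloud is exported first. Below is a small sketch assuming the cloud has been exported as a plain-text file with one x y z r g b point per line; the file names and the whiteness threshold are my own assumptions.

```python
import numpy as np

# Each row: x y z r g b (a dense cloud exported to plain text; hypothetical name).
points = np.loadtxt("bowl_dense_cloud.txt")
rgb = points[:, 3:6]

# Drop near-white points; in this setup white meant background rather than bowl.
is_background = rgb.min(axis=1) > 230
cleaned = points[~is_background]

np.savetxt("bowl_dense_cloud_cleaned.txt", cleaned, fmt="%.6f")
print(f"removed {is_background.sum()} of {len(points)} points")
```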

Assessment

Overall, the bowl turned out better than I initially expected, especially given this was my first attempt at such an endeavour.  Given the opportunity to repeat the process under the same settings, I would make the following corrections:

  1. Adjust the aperture and ISO. For most of the images, I was using an F-stop of 8 and an ISO of 640.  I think I would adjust this to lower the ISO (thus reducing visual noise in the image) and raise the F-stop on the camera.  The exact values I would use are difficult to say, but I would most likely try to lower the ISO to somewhere between 100 and 200 and raise the F-stop up to at least 22.  This would give me a crisper image, especially in regards to depth of field, as well as the added benefit of removing a small amount of noise that exists in the image.
  2. Adjust the shutter speed.  I left the shutter speed on the automatic setting for this exercise, mainly because I didn’t think I would need to adjust it.  Given another opportunity, I might attempt to manually adjust this setting to see how it affects the quality of light in the image (many of the images seemed a little dark, despite all of the artificial lighting present).
  3. Adjust the angle.  I would also take another round of photographs at a much steeper angle so as to completely capture the interior of the bowl.
  4. Take more time.  As I was the first student to begin taking pictures, I self-imposed pressure to complete my pictures quickly, so as to ensure my fellow classmates had ample time with the camera as well.  If I repeated this process again, I would be less conscious of time constraints and take more time in between photos to evaluate the quality of the image and adjust my settings as needed.

Final Models

In an effort to show the difference between the two models, I’ve included both below, as well as a version of the bowl that was sent to the 3D printer (more on 3D printing in a future blog post). The first is the “bad” model that only included the images with a quality of .60 or higher. The second model includes images with a quality of .50 or higher. Note also the absence of part of the bottom of the bowl, where I failed to take images at a deep enough angle to cover the entire bottom.  The model used by the 3D printer does not have the hole in the bottom of the bowl, as I used 3ds Max to fill in the mesh.

“Bad” Bowl
“Good” Bowl
“Printed” Bowl

Acknowledgements

Special thanks to the National Museum of Ireland and their Archaeology department for allowing us to photograph some of their objects. The pictures taken of the bowl featured in this post and my previous post were courtesy of National Museum of Ireland – Archaeology. All rights reserved.

Coming Up Next…

In part 3 of my series on photogrammetry, I will discuss my choice for the second part of my assignment. I will detail why I chose the particular object in question and what challenges were presented to me as part of this aspect of the assignment. I will also detail the steps I took to overcome those challenges.

Designing the Diary

We are now almost two months into the second half of Digital Scholarly Editing. The bulk of the work this semester is focused on the creation of a digital scholarly edition. We have chosen the war diary of Albert Woodman, a signaller with the Royal Engineers during the Great War. The diary itself is an interesting object; it spans two physical books and, unlike traditional diaries in which the author tends to only write a single entry per page, Mr. Woodman will often have multiple entries on a single page (presumably to conserve paper). Additionally, he often inserts newspaper clippings and other miscellany into the diary, which is not easy to represent digitally.

Representing Related Media

One of the biggest questions we had to address was, “How do we represent these miscellaneous objects digitally in a way that holds true to the spirit of their analogue representation?” Most of these objects are directly tied to a diary entry (often, Mr. Woodman makes mention of the object in question in his entry, or the object itself refers to a battle or news item he discussed in a particular entry). Showing them separately or as secondary entries in the diary breaks the metaphor of the diary itself. After all, you can’t really have two entries for the same day—that isn’t in keeping with the way a user expects a diary to work.

Ultimately we decided to tie these items to an entry as “related media”. From an implementation standpoint, this is relatively simple. Within the TEI of the diary, we insert another <div> at the bottom of the day-level <div> which wraps that day’s entry. This <div> is given a type attribute with a value of “insert”. When we run the TEI through an XSLT transformation, these “related media” divs are extracted and added to the entry in a section of the page titled accordingly.
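As a concrete illustration of that structure, here is a minimal Python sketch of how the “insert” divs could be pulled out of the TEI. The day-level div’s type value and the date attribute are simplified assumptions (and our actual pipeline uses XSLT rather than Python), but the nesting follows the description above.

```python
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

tree = ET.parse("woodman_diary.xml")  # hypothetical file name

# Each day is wrapped in a <div>; related media sit in a nested <div type="insert">.
for day in tree.iterfind(".//tei:div[@type='entry']", TEI_NS):  # assumed type value
    inserts = day.findall("./tei:div[@type='insert']", TEI_NS)
    if inserts:
        date = day.get("n", "unknown date")  # assumed date attribute
        print(f"{date}: {len(inserts)} related media item(s)")
```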

As for the represented model—the model that bridges the gap between the user’s mental, or expected, model[1] and the implementation—it was decided the best approach would be to group these additional inserts (which may also include other interesting bits of media we find relevant to an entry) and provide a lightbox in which to view them. The lightbox, a modal popup which presents an image in an overlay[2], has numerous advantages:

  1. It provides additional screen real estate. By loading larger versions of the image into an overlay, a thumbnail of the image can be displayed on the main screen, which has a much smaller visual footprint.
  2. It can provide increased performance. Many lightbox implementations utilise JavaScript and AJAX (Asynchronous JavaScript And XML) to load images only when they are requested by the user. This means the image is not loaded into the DOM (Document Object Model) until it is actually requested, thus cutting down on the amount of data transmitted to the user’s browser. The less data that is transmitted, the faster the page will load.
  3. It maintains the visual narrative. Every interface tells a story and the goal of every interaction should be to supplement that story. If a user clicks on an image and the page is then reloaded to display a larger view of the image on another page, the narrative is broken because the user is moved away from the page. In order to re-enter the narrative, the user must use the back button in the browser. Anything that breaks the visual narrative runs the risk of breaking the entire experience for the user and thereby decreasing the overall “stickiness” of the website.

When used properly, the lightbox can provide a strong user experience and present the designer with additional screen real estate that would otherwise be unavailable. By using a lightbox approach here, we have managed to solve the issue of the ephemeral material as well as the problems its presentation posed.

Multiple Entries on a Single Page

The second major interaction issue we had to address was the appearance of multiple entries on a single page. In many digital scholarly editions, the transcription is presented side by side with the original image[3] [4] [5]. This allows for a direct comparison between the transcription and the original text. However, such an implementation would be confusing in the Woodman Diary. Oftentimes an entry may begin in the middle of the page, but as we are trained to read from top to bottom, the user would immediately begin scanning the image from the top and might be confused as to why the transcription doesn’t match the image—not immediately realising the transcription begins with text halfway down the “page” in the image. Multiple methods were considered for handling this unique situation:

  1. The image would not be viewable side by side with the transcription. The user would read the transcription and could click on a thumbnail of the image that would then display only the image with no transcription. This method was discarded due mainly to the expected user interaction of comparing transcription and image side by side.
  2. We would attempt to position the transcription text on the page to match its position in the image. This, however, would require quite a bit of extra encoding as we would need to encode locations of text at given pixel points within the original image. While a novel approach, it was ultimately decided this would require too much effort, given we are working with limited time and resources. Additionally, we felt it was a less aesthetically pleasing interaction.
  3. We would present multiple entries on a single day to match whatever was displayed in the related images. This idea was also discarded due to potential confusion by the user. Because the user has certain expectations in their mental model as a result of the diary metaphor, a user clicking on 25 January would expect to see one entry for 25 January. Under this model, however, they would instead receive an entry for 24 January and 25 January, which might be confusing. The break in the mental model is potentially jarring enough as to disrupt the visual narrative.

After considerable deliberation, it was decided to modify the third approach and create a hybrid. On the initial view of the diary entry, the user sees only the transcription of the entry he or she has selected. Thumbnails of the original diary page(s) are presented, which can then be clicked and viewed in a lightbox. This lightbox presents a larger view of the image along with the transcription of that entire page. If text exists in the transcription that is not part of the entry being viewed, it is rendered in a grey font, indicating it is unrelated to the entry as a whole. For example, if the user views the entry for 25 January, that entry begins in the middle of a page in the actual diary. When viewing that page in the lightbox, the latter half of the entry from 24 January is transcribed along with the entry for 25 January that appears in that image of the diary page. However, the text for 24 January is rendered in a light grey so as to visually indicate to the user that it is unrelated to the entry being viewed, while still providing the visual cue that the text he or she may be looking for is not directly at the top of the image.

Conclusion

The Albert Woodman Diary has been an interesting project to handle. It has challenged the class as a whole to consider different ways of presenting an analogue object in a digital environment. By drawing on our own digital experiences as well as research conducted within the field of User Experience design, we have been able to overcome these challenges and present a true digital edition that honours the underlying metaphoric premise of the diary without limiting the interactions by adhering to the metaphor too strictly.

References

[1] Cooper, Alan, Robert Reimann, and Dave Cronin. About Face 3: The Essentials of Interaction Design. Indianapolis: Wiley Publishing, 2007. Print.
[2] Adam. “Are Lightboxes Good Web Design?”. PugetWorks.com. 29 January 2011. Web. 20 March 2015.
[3] Sutherland, Kathryn, et al. Jane Austen’s Fiction Manuscripts. 2015. Web. 20 March 2015.
[4] Baillot, Anne. Letters and Texts: Intellectual Berlin around 1800. 2013. Web. 20 March 2015.
[5] Schreibman, Susan, et al. The Thomas MacGreevy Archive. 2001-2004. Web. 20 March 2015.

Explorations in Photogrammetry – Part 1

Our first assignment in AFF 621: Remaking the Physical involves photogrammetry.  Photogrammetry is a technique used to create three-dimensional representations of an object.  This is accomplished by taking a series of photographs of the object, where each photograph overlaps the next by at least 60%.  Special software is then used to create point clouds that “stitch” the pictures together into a three-dimensional representation of the object.[1]

Over a series of several blog posts, I intend to document the photogrammetry process used for my assignment.  The assignment itself calls for the creation of two different 3D models using photogrammetry. One of the models is to be of a bowl housed at the National Museum of Ireland, where the pictures can be taken in a controlled environment. The second model is to be of an object of my own choosing; however, the pictures must be taken in an outdoor setting where few environmental controls exist, thus giving us the opportunity to experiment with lighting, aperture, shutter speed, etc. The blog posts will cover the following:

  • my trip to the National Museum of Ireland in Dublin
  • my choice of topic for the second model and the process involved in the actual photography
  • my experiences with the editing and creation of the actual models (2 separate posts)
  • an analysis of 3D printing and the service I would use to print my object

My Day at the Museum

For the first part of our assignment, we visited the National Museum of Ireland.  This was a fantastic opportunity to not only visit the museum but to see some of the artefacts up close.  We spent the entire day in the museum photographing a number of bowls dating from around 2000 BCE. These Bronze Age bowls were largely used as drinking vessels and likely held some kind of beer.  Many of the bowls have a sun pattern on the bottom that some archaeologists speculate was tied to the worship of the sun by the indigenous people[2].

The process of photographing these objects was quite different than it was for the second part of my assignment (which will be detailed in a future blog post).  As we were in a controlled environment, we were able to avail ourselves of a number of lighting techniques that wouldn’t be feasible outside such a setting.  One of these (which is also a personal favourite) was the lightbox.  The lightbox is a white canvas-type cube that, as one of my classmates pointed out, looks much like a collapsible laundry basket. The object is placed inside the lightbox and lights are placed around the outside.  The material of the lightbox acts as a sort of diffuser, softening the light and eliminating shadows. The image to the right shows the final setup of our lightbox.  The lights are positioned at all angles in order to provide the most light. This includes not only the sides and front but also above.  Once the object is placed on the turntable (which is positioned in the centre of the lightbox), it can be slightly rotated in between each picture, allowing for the creation of the overlap between images necessary to produce an accurate model using photogrammetry.

Once everything is set up, the actual photography can begin.  As I mentioned above, the object is placed on the turntable within the lightbox, which allows it to be easily rotated. Using the EOS Utility (software provided by Canon, the manufacturer of the digital camera we used), we were able to preview each image on the laptop and adjust various settings as necessary.  Primarily I worked with the aperture (to control not only the light available but also the clarity of the background and the depth of field) as well as the ISO (which can allow for more brightness but must be used carefully so as not to introduce any visual “noise” into the image).  I decided to leave the shutter speed set to automatic, as adjusting it is more of an advanced technique and, given the controlled environment, I didn’t feel it was necessary.  Once all of the adjustments were made, the process of photographing the images was quite simple.  Very few adjustments needed to be made in between images, and I was largely able to move from one image to the next with little intervention with the camera.

I decided to take images from four different angles, beginning straight on and then adjusting the angle of the camera upwards in order to capture further detail. The bowl was then flipped, and the process repeated in order to capture its underside.  I was able to capture approximately 150 images in about 50 minutes (excluding the initial setup of the lightbox).

Acknowledgements

Special thanks to the National Museum of Ireland and their Archaeology department for allowing us to photograph some of their objects. The pictures taken of the bowl featured in this post and my previous post were courtesy of National Museum of Ireland – Archaeology. All rights reserved.

Coming Up Next…

In my next blog, I will discuss the outcome of these images, the steps taken to clean up the images for contrast, brightness, clarity, etc., and the process of utilising Photoscan Pro to create the 3D model.  I will also assess the quality of the images taken and what steps, if any, I would take in the future to ensure a higher quality of image as well as what lessons I can take away from this aspect of the assignment.

References

[1] US Bureau of Land Management Publication: Tech Note 248 on Photogrammetry.

[2] Flanagan, Lauren. Ancient Ireland: Life Before the Celts. Gill & Macmillan, Ltd. 1998.

Temporal Visualisations in Digital Humanities

Visualisations have played a very important role in our understanding of large sets of data.  Contrary to what you might think, they aren’t a recent phenomenon; they’ve existed for hundreds of years.  After all, the very first maps were a type of data visualisation – a way to visualise area (otherwise known as spatial visualisation).  Today, however, I want to talk a bit about temporal visualisations – visual representations of time as it relates to data. Using an annotated bibliography, I will present a number of sources that offer further reading on the subject for those wanting a broader understanding.

“11 Ways to Visualize Changes Over Time – A Guide”. Flowingdata.com. 07 January 2011. Web. 24 November 2014. http://flowingdata.com/2010/01/07/11-ways-to-visualize-changes-over-time-a-guide/

This article from Flowingdata.com discusses some of the standard types of visualisations used when dealing with time-related data. The article describes each visualisation, its standard usage, and provides an example (via a link) to an implementation of said visualisation.

While relatively short compared with some of the other articles I’ve posted here, I think this article does a great job of summing up the main types of time-based visualisations. I love the use of examples to illustrate an implementation as well as an explanation regarding when it is appropriate to use a certain type of visualisation.

Aris, Aleks et al. “Representing Unevenly-Spaced Time Series Data for Visualization and Interactive Exploration”. Human-Computer Interaction – INTERACT 2005. Springer Berlin Heidelberg, September 2005: 835-846. Web. 25 November 2014. http://hcil2.cs.umd.edu/trs/2005-01/2005-01.pdf

Aris et al discuss time series data and its use in visualisation. Specifically, they focus on unevenly spaced time data and propose 4 different visualisation techniques: sampled events, aggregated sampled events, event index and interleaved event index. Each is discussed in depth, and an example is provided showing its implementation.

The methods here are presented in a more cogent manner than in some of the other entries I’ve listed. The visualisations given as examples are easy to follow and interpret, if somewhat lacking in imagination. Shiroi, Misue, and Tanaka (see entry) based much of their work on the work presented here, and I can see the relationship between the two (which is why I called this out as an additional resource). The corpus here provides a really great understanding of time series data but leaves room for growth in regards to creativity in the actual implementation of a visualisation method.

Buono, Paul et al. “Interactive Pattern Search in Time Series”. Proc. SPIE 5669, Visualization and Data Analysis 2005. 17 March 2005. Web. 26 November 2014. http://hcil2.cs.umd.edu/trs/2004-25/2004-25.pdf

This white paper opens by discussing some of the methods used for visualising time-related data, specifically data in a time series. In addition, the paper discusses query techniques that can be used for searching time series data. The paper then examines TimeSearcher2, the next iteration of the TimeSearcher software, which allows a user to load a large set of data and then visualise it using a number of built-in analysis tools. The paper mainly focuses on a few of the features new to TimeSearcher2, such as a view that allows the user to look at multiple variables when visualising the data, improvements to the interactivity of the search, and improvements to the search algorithms. The paper closes with a discussion of shortfalls within the software and improvements that could be made in future versions.

The visualisations used in the software are somewhat primitive, but given the age of the paper (nearly a decade ago), this is not wholly surprising. Buono et al are quite candid in their evaluation, specifically in the conclusion where they discuss the shortfalls of the tool. In addition, they are also quite open with the methods used, particularly in their discussion regarding improvements to the search algorithm. The paper serves as an interesting insight into the history of time based visualisations in the last 10 years.

Capozzi, Mae. A Post Colonial ‘Distant Reading’ of Eighteenth and Nineteenth Century Anglophone Literature. PostColonial Digital Humanities. 08 April 2014. Web. 27 November 2014. http://dhpoco.org/blog/2014/04/08/a-postcolonial-distant-reading-of-eighteenth-and-nineteenth-century-anglophone-literature

Capozzi looks at 19th century British literature that specifically deals with India as its primary subject. In her presentation, she attempts to provide data to support her hypothesis that not only did Britain have an impact on Indian culture but, more importantly, that India had an impact on British culture (via literature) as a result of British colonialism. She looks at a random sampling of literature and uses topic modelling, via a programme known as “Mallet”, to plot various topics over time. Via the use of line graphs (simple time visualisations), Capozzi uses this data to support her hypothesis.

While Capozzi’s presentation is not a temporal visualisation in the sense that I am using the term throughout this post, I include it here as a cautionary tale of what not to do with a visualisation. Capozzi presents some very simple line graphs which seem to support her hypothesis. However, upon closer inspection, it is clear she relies on correlations between upticks in topic clusters at certain times and events in Indian history (such as the rise of the Raj or political unrest during World War I). She provides no empirical data to support the connection, instead merely assuming a cause and effect relationship. Furthermore, Capozzi offers no methodology behind her topic model (and by extension her visualisations). Without a thorough understanding of how her data was derived, we cannot form informed opinions regarding the data she is attempting to visualise. When working with visualisations, it is imperative not only to use a visualisation that will be intuitive to your audience (as I point out in later entries) but also to remain transparent in the methodologies used to derive the data.

Day, Shawn. “Visualising Space and Time: Seeing the World in New and Dynamic Dimensions”. University College Cork. 11 February 2013. Web. 25 November 2014. http://www.slideshare.net/shawnday/dah-s-institute-uccrs

Day presents a number of interesting ideas around spatial and temporal visualisations. In his presentation, he discusses how we typically use time and space data, as well as common methods of plotting this data in a graphical form. He then continues by discussing both time and space data, separately and together, in a more in-depth format. He also discusses some great tools for visualising this type of data.

What I love most about this presentation are slides 14, 15, and 16. Here, Day discusses how we are used to seeing time plotted in a linear format. But he delves deeper by plotting out actual time data (using the movie Back to the Future as an example) to illustrate other, non-linear visualisations of time.

De Keyser, V. “Temporal Decision Making in Complex Environments.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 327.1241 (1990) : 569-576. Web. 20 November 2014. http://www.jstor.org/stable/55328.

De Keyser’s essay delves into the importance of time and the role it plays when making decisions. De Keyser begins by discussing how technology has changed our perception of time in regards to planning, due in large part to the increased availability of data and the ability to control the minutiae of outputs. He then discusses strategies behind temporal decision making, such as anticipation, assessment, and adjustment. He concludes with a discussion of errors that can (and most likely will) evolve from temporal decision making and their effects on a given process.

The article deals largely with the effects of time on technology projects and how the private sector constantly evaluates the success of such projects based on metrics involving time. These metrics often stem from datasets and statistics compiled into visualisations in order to express success as a function of time and resources. While more esoteric than many of the other entries listed here, this article does provide a theoretical understanding of time and its role in decision making — a factor that plays heavily into the importance of temporal visualisations.

Friendly, Michael. “A Brief History of Data Visualisation”. Handbook of Computational Statistics: Data Visualization. 2006. Web. 17 November 2014. http://www.datavis.ca/papers/hbook.pdf.

Friendly discusses the history of data visualisation, noting that the first temporal visualisation appeared in a 10th-century graph of stars and planetary movements over a period of time (p. 3). The article goes on to trace the history of visualisations and their development throughout the 17th, 18th, and 19th centuries, noting the dramatic shift in approach during the latter part of the 19th century as a result of the increased use of statistics among government agencies as well as innovations in the field of graphics and design. Following this, Friendly discusses visualisations in the 20th century, noting the dramatic changes between the earlier and latter parts of the century thanks to innovations such as interactive visualisations, dynamic data, and high-dimensional visualisations. Friendly concludes with a look at “history as data” (p. 26) and his evaluation of the “Milestones Project” — a project on which he based much of his review (p. 25).

Overall, Friendly provides an interesting and thorough analysis of the history of data visualisations. His essay provides the reader with the background necessary to understand the context behind visualisations and how the methods have evolved over the course of the last few centuries. This is an excellent starting point for anyone wanting to dive deeper into the theoretical realm of the subject matter.

Mayr, Albert. “Temporal Notations: Four Lessons in the Visualization of Time”. Leonardo, 23.2/3, 1990: 281-286. Web. 24 November 2014. http://www.jstor.org/stable/1578624

While visual representations of space tend to enjoy broadly accepted standard forms even across disciplines, Mayr argues that time-based visualisations are much less standardised and tend to be rather specific to the individual discipline in which the visualisation is used. In order to address this phenomenon, Mayr discusses several exercises performed with students in an effort to visualise time-based data around guidelines Mayr has laid out in the article.

While the article itself is quite interesting, I don’t think Mayr actually manages to create any kind of coherence around the visualisation and notation of time, nor do I agree that consistent visualisation practices do not exist. He opens by discussing how notations vary from discipline to discipline but then proceeds to focus on techniques that rely rather heavily on the field of music to inform his guidelines (Mayr mentions in the article that he has a background in music). However, the exercises he gives for use in the classroom, and the corresponding results, lead to some takes on the visualisation of time that I think most readers will find very interesting.

Moore, David M. and Francis M. Dwyer. Visual Literacy: A Spectrum of Visual Learning. Educational Technology Publications. 1994. Web. 20 November 2014. http://books.google.ie/books?id=icMsdAGHQpEC.

While Friendly’s article is an excellent take on the history of the field, Moore and Dwyer discuss the importance of visualisations and their relation to learning and cognitive development. While their entire book contains a plethora of interesting and important information, of particular note are sections 5 and 6, which discuss the role of visualisations in schools and business, as well as the cultural and socio-political impact of the field of semiotics and its intersection with technology. Semiotics, the study of signs and symbols, plays a major role in the understanding of visualisations and the data they entail.

Moore and Dwyer’s work is an excellent companion to Friendly’s article for providing a strong basis of understanding of the overall realm of data visualisations. Both are a necessary first step to a deeper understanding of the theory and reasons behind why visualisations are both important and utilised.

Shiroi, Satoko, Kazuo Misue, and Jiro Tanaka. “ChronoView: Visualization Technique for Many Temporal Data”. 2012 16th International Conference on Information Visualisation. IEEE, July 2012: 112-117. Web. 25 November 2014. http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=6295801&tag=1

Shiroi, Misue, and Tanaka discuss a visualisation technique they have developed, known as “ChronoView”. They begin by discussing one of the problems of temporal visualisations: the treatment of each time interval as discrete and the inability to cluster a single event around multiple time entries. In order to combat this problem, they developed a circular view of the data.

While I’m not entirely sold on the visualisation used here, it is an interesting approach to visualising time-related data. The paper itself is well thought out, and the methods used for plotting the data are clearly and concisely disclosed — something I feel is incredibly important in the field of visualisation work. This, however, is the type of visualisation that I feel doesn’t lend itself well to the average reader. I would posit that to understand the data presented in this type of format, one would need not only a solid understanding of the particular field or data being discussed but also a strong background in statistics or visualisation theory. However, as a whole, I think it’s a solid take on time visualisations.

Turker, Uraz C. and Selim Balcisoy. “A visualisation technique for large temporal social network datasets in Hyperbolic space”. Journal of Visual Languages & Computing 25.3 (2014): p. 227-242. Web. 27 November 2014. http://dx.doi.org.jproxy.nuim.ie/10.1016/j.jvlc.2013.10.008

Turker and Balcisoy discuss the visualisation of temporal data drawn from large social network datasets. As a result of their research, they have created a new visualisation technique they have dubbed the “Hyperbolic Temporal Layout Method” (HTLM). HTLM uses geometry and spatial placement to visualise actors and relationships in a spiral layout. The paper describes how HTLM was developed, the algorithms used, and examples of the actual visualisation.

Turker and Balcisoy have done an excellent job of researching and proposing a new visualisation technique. They have taken great care to remain transparent in their approach, fully disclosing the algorithms used and discussing the background information that led them to the creation of HTLM. That said, I feel the visualisation itself falls somewhat flat. While an interesting take on a temporal visualisation, I suspect that without a significant understanding of the data and the field, most users would be unable to parse the data being presented — the visualisation remains almost unreadable to the casual observer. Perhaps Turker and Balcisoy are positioning HTLM towards a specific audience, but there is no indication within the paper itself that this is the case. Thus, while the visualisation offers a new and creative technique for visualising data, its difficult readability makes it a less than ideal visualisation.

Promoting Letters of 1916

The Letters of 1916 is a project involving the transcription and compilation of letters written by or to Irish residents between November 1915 and October 1916. Originally begun at Trinity College Dublin, the Letters of 1916 project has recently been transferred to Maynooth University. My Digital Scholarly Editing class has had the privilege to assist with the upload and transcription of some of the letters on the website. But one of the things that has fascinated me most about this project is the use of crowdsourcing to transcribe the letters and the methods of promotion that have been utilised to garner public attention.

Crowdsourcing the Transcriptions

At first glance, one wouldn’t think there would be many letters to transcribe for such a short period of time. But once you step out of a modern mindset and realise that, during this time period, letter writing was really the only way to communicate, you begin to understand the sheer breadth of what this project is attempting to undertake. Factor in that Ireland was firmly enmeshed in World War I, and that the Easter Rising, a prominent event in Ireland’s fight for independence, occurred in April of 1916, and there was quite a bit of activity for the citizenry to discuss. So how does a small research team transcribe all of these letters?

Enter the concept of crowdsourcing. Wikipedia defines crowdsourcing as “the process of obtaining needed services, ideas, or content by soliciting contributions from a large group of people, and especially from an online community, rather than from traditional employees or suppliers.” The idea is to leverage your audience, the group of people most invested in your product, to assist with the collection of your content. While some have maligned the concept of crowdsourcing as nothing but free labour (see Crowdsourcing: Sabotaging our Value), crowdsourcing has become an important tool in the content collection space, especially among non-profit endeavours.

So how does crowdsourcing work with the Letters of 1916? It’s fairly simple. Anyone can upload a letter (although many of the letters are contributed by agencies such as the National Archives of Ireland, the Military Archives of Ireland, the University College Dublin Archives, etc.), and once the letter is uploaded, any user can then transcribe it using a standard set of transcription tools provided by the website (Letters of 1916 utilises Omeka and Scripto to assist with the transcription efforts).

By the Numbers

The crowdsourced transcription effort has been a great success. As of 31 October 2014, more than 1,000 letters had been transcribed or were in the process of being transcribed (approximately 71% of all currently uploaded letters), and October saw the addition of more than 30 new members to the transcription effort. For more information on the numbers, please refer to the October 2014 Progress Report.

These numbers show a positive trend in the use of crowdsourcing to leverage the audience of the Letters of 1916 in the creation of content for the website. Unfortunately, Omeka tracks only character counts and doesn’t really provide a solid look into the demographics of the transcribers or the extent of their contributions beyond the number of characters transcribed. Therefore, it is difficult to see the contributions of transcribers who are proofing other people’s work but aren’t adding to the overall character count, as they may only be changing a word here or there. This is one area where Omeka really falls short in terms of understanding the scope of contribution of your user base. But the overall sense is that the crowdsourcing effort is highly successful.

Modes of Engagement

So what has made the crowdsourcing effort as successful as it is? It is difficult to tease out any one particular item, but I would posit that the team’s use of social media has led to strong engagement. The team utilises Twitter heavily to promote not only the site but also items related to it, such as news articles and other events occurring within the Digital Humanities space. In addition, the team holds a monthly Twitter chat utilising the hashtag #AskLetters1916 (check out Storify for the latest #AskLetters1916 chat). The team also leverages Facebook, in addition to a blog, to advertise the site to those interested in the history of the time period or with a general interest in Irish history or Digital Humanities.

Room for Improvement

While the site has been relatively successful in its efforts to leverage crowdsourcing, that doesn’t imply there isn’t room for improvement. While analysing the site for this article, I came across a couple of items that I thought could be improved from an interaction and usability standpoint.

First, there is A LOT of information on the site. So much, in fact, that it is very easy to get lost in the weeds and forget why you came – and that’s before you even get to the transcription area of the website. There are 7 top-level menu options, many of which have multiple sub-menus. There are a number of really interesting and helpful resources related to education and current news, as well as the obligatory “About Us” and sponsorship pages. These are all well and good, but if the main purpose of the site is to contribute or transcribe a letter, I wonder why there isn’t a persistent top-level menu item just for that. Yes, there is a “Contribute” top-level menu, but to do the actual transcription or contribution, one has to navigate to a sub-menu and follow links to log in or sign up. In addition, the menu item is easily lost among the other items.

As an alternative, I would suggest adding a persistent option for contribution in the form of a button coloured so as to stand out. I would also place it in the upper right-hand corner, where the login / signup metaphor typically lives. This would draw the eye to the primary purpose of the site as well as facilitate a faster workflow for those who are returning to the site and simply wish to log in to submit a new transcription or upload a new letter.

My second suggestion concerns the transcription workflow itself. When attempting to transcribe a letter, the only view available is by category. As a user, I have no way of knowing which items are already transcribed and awaiting review, which are completed, and which have not been started. I have no options to sort or even filter. I’m simply presented with a long, scrollable list, by category, of the items loaded into the system. Once an item is selected, I can then see its status (Not Started, Needs Review, Completed, etc.), but it requires an additional click to do the actual transcription (a click I deem largely unnecessary). Finally, the transcription tool itself is a little clunky. There is a toolbar provided to assist the user with standard TEI encodings, but as the average user may have no knowledge of TEI and the transcription page provides no explanation of how encoding should be handled, a number of transcriptions require a lot of clean-up in order to conform to standards. Many of these complaints are, however, limitations of the Omeka and Scripto software, so they are criticisms aimed more at those particular implementations than at the Letters of 1916 project itself.

Conclusion

Criticisms aside, the Letters of 1916 project has done a great job of garnering attention and drawing in its audience in order to facilitate the creation of content. The next step in the process is to migrate from the transcription desk to a site that is searchable and discoverable. With the implementation of a strong search mechanism and a few visualisations of the data to add a little spice, I think the Letters of 1916 will set itself up to be a rousing success.

Aura in the Digital Realm

In a recent class entitled Transformations In Digital Humanities, we discussed the notion of aura as it relates to an object, and how the individual’s perception of that aura can affect the object’s value.  We then discussed the notion of digital auras and, more specifically, whether digital objects have an aura. We also discussed whether that aura is lost once the object moves from the analogue to the digital (or, if the object is born digital, whether it has an aura to begin with). This got me thinking about the notion of auras in general and what impact, if any, the digital realm has on an object’s aura.

Defining Aura

First, let’s start by defining what exactly an aura is as it relates to an object (in this case, we are dealing specifically with objects in the Arts & Humanities realm). The Free Dictionary defines aura as “a distinctive but intangible quality that seems to surround a person or thing; atmosphere”. In his book, Presence of Play, Power takes this definition a bit further by describing aura (or “auratic presence”) as the presence an object has beyond what its physical appearance might suggest (p. 47). The easiest way to describe aura, in my mind, is that feeling one gets from looking at a favourite painting.  What feelings are evoked by Da Vinci’s Mona Lisa or Van Gogh’s Starry Night? If you’ve seen the originals, how were those feelings changed? Was there an air of magnitude about the object? Did it change your sense of the object? If you answered yes, then you understand aura.  It’s that presence certain objects have that draws you to them. But where does it come from?

Why Aura?

As Walter Benjamin states in his article, aura stems directly from the originality of the piece: "its presence in time and space, its unique existence at the place where it happens to be" (Benjamin). I feel, however, that Benjamin doesn't quite get to the core of why we derive aura from presence, which is, quite simply, the collective human assumption that "original = better". Let us consider a work of art. It is commonplace to accept that the original Mona Lisa is near-priceless, while a reproduction of the Mona Lisa has vastly diminished value. But why? Is it because we associate the original with the great Leonardo da Vinci, and since he no longer lives, his original creations are worth more than reproductions? This only leads to further questions in my mind. If one could take a piece of art and reproduce it exactly, brush stroke for brush stroke, so that it resembles the original down to the finest detail, why would the original be any more valuable than the reproduction? What is it about "originality" that causes us as a society to assign so much value?

Money & Fame

I will probably ruffle quite a few feathers with this statement, but I believe the answer to the above question, like so many things, boils down to a single commonality: money. By assigning mystique (or in this case "auratic presence") to an item, you place value on the item. It is different. It is unique. Therefore, it must be valuable. And the higher the value, the greater the monetary gain should the item be sold. Even if money isn't the currency in question, the value will be dispensed in that other great currency, fame. The artist or object in question gains notoriety and thus, in the case of the artist, the value of future works increases. In the case of the object itself, as notoriety grows, value becomes a function of time: the more time that passes, the more the value increases, until it eventually plateaus.

But does the digitisation or mass reproduction of an object actually detract from its value? Furzsi certainly doesn't seem to think so, stating: "Instead of destroying the cult status of artworks then, such printed fabrics reinforced the aura of the artist genius and played an important role in familiarising a wide audience with the modernist canon". One could extrapolate from Furzsi's argument, then, that the mass reproduction of these prints raised the auratic presence, and thus the value, of the object by making it more readily available. And therein lies the crux of my argument.

Availing the ‘Common Folk’

It is my belief that the digitisation of an object does not diminish the analogue object's auratic presence, nor does the digital object lack aura. In fact, quite the opposite. As the internet continues to grow and connect the world, those who might not otherwise have access to Arts & Humanities objects, for lack of the means or opportunity to travel, can experience them in digital form. The digital object can still evoke the same sense of wonder or mystery. After all, I've never seen the original of Dalí's The Persistence of Memory, yet it is still one of my favourite paintings and evokes a strong feeling of connection to the subject matter. And by exposing a wider audience to the object or artist via the digital medium, the auratic presence of the analogue increases as well, owing to the increase in notoriety. It all ties back together.

Dr. Power also contends that aura emanates not just from the object itself but also from the unexpected encounter with the object and the emotions that encounter evokes (p. 48). Here again, digitisation supports my supposition: by disseminating the object to a wider audience, its aura is increased as more individuals experience it through unexpected digital encounters (via internet searches, online galleries, etc.).

Unintended Consequence

The digitisation of objects has also had some unintended, yet beneficial, consequences. In his journal article, Rodríguez-Ferrándiz discusses the case of Edvard Munch's The Scream. The painting was damaged during its theft and recovery, and the curators were able to draw on the vast collection of reproductions of the work to assist in its restoration. Thus the auratic presence of the object, which could have suffered irreparable harm as a result of the damage, was preserved directly because of the mass reproduction and digitisation of the work (p. 399). Furthermore, had the object not been digitised and mass produced, the curators might never have been able to restore the painting, and both the aura and the value of the object could have been greatly reduced.

Conclusion

While I won't go so far as to say that a digital representation can fully match the aura of the analogue object (after all, seeing the Book of Kells in physical form does evoke something the digital version doesn't quite capture), I will emphatically state that digitising an object in no way shreds its auratic presence; rather, it adds to the aura of the analogue object. In addition, the digital object itself retains a sense of aura through the "unexpected encounter" with the object (thus allowing objects that are born digital to possess aura as well). Aura is an entirely intrinsic and subjective characteristic, unique to the individual experience yet also built upon by the collective unconscious. As we move ever more into the digital age, aura will continue to be built upon through both the analogue and the digital experience.

Bibliography

Looking at the Book

When I first applied to the Digital Humanities programme at Maynooth University, I was faced with a bit of a conundrum. Up to the point of my application, I had focused my research interests on Anthropology, and I was most interested in how various cultures interact with computer systems. Specifically, I wanted to explore what UI paradigms could be leveraged or created to overcome some of the cultural boundaries that can make the absorption of data difficult. All of this was the result of countless nights pondering UI interactions while completing my Masters thesis, and of the two years of subsequent work and personal study between the completion of my Masters and the inception of my PhD.

Flash forward to February 2014, when I stumbled upon the Digital Arts & Humanities programme at Maynooth University. As I read about the programme, I became incredibly excited. This was exactly what I had been looking for! The notion of taking something analogue, like history, and presenting it in an online format was absolutely fascinating to me, but I was unsure how to tweak my research proposal, which had been written from an anthropological point of view, to work within the realm of Digital Humanities. Furthermore, while cross-cultural barriers certainly exist in the consumption of humanities information, was that a serious underlying problem for the field? Or was there something else for me to explore? And that's when inspiration hit me. And it came in the form … of a book.

All in a Book

Go onto any online catalogue and find information that was originally stored in a book (or something electronic that mimics a book, such as an online journal). Over and over, you will see the same UI implementation for consumption – something called "the book metaphor". In interaction design, a "metaphor" is a UI mechanism or paradigm that creates a familiar interaction by drawing on something the user already knows from outside the interface (see Wikipedia's article entitled 'Interface Metaphor' for more information). In this case, users are familiar with how to read a book (the turning of pages, the flow of text from top to bottom and left to right, etc.), so the easiest implementation of this type of UI is "the book metaphor". It is widely accepted and used. But just because something is widely accepted doesn't mean it's the best implementation.
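To show just how little the metaphor asks of the computer, here is a bare-bones TypeScript sketch of a book-style viewer: an ordered list of pages and controls for turning them. The class and method names are mine and are not taken from any particular catalogue system.

```typescript
// A bare-bones illustration of the book metaphor: the interface is reduced to
// an ordered sequence of pages and controls for turning them.
// Class and method names here are illustrative, not from any real catalogue software.

class BookViewer {
  private currentPage = 0;

  constructor(private readonly pages: string[]) {}

  // "Turning a page" is the only way to move forward through the content,
  // exactly as it would be with the physical object.
  nextPage(): string {
    if (this.currentPage < this.pages.length - 1) {
      this.currentPage += 1;
    }
    return this.pages[this.currentPage];
  }

  previousPage(): string {
    if (this.currentPage > 0) {
      this.currentPage -= 1;
    }
    return this.pages[this.currentPage];
  }
}

// Usage: the reader moves linearly, one page at a time.
const viewer = new BookViewer(["Title page", "Preface", "Chapter One…"]);
console.log(viewer.nextPage()); // "Preface"
```

Everything about the interaction is linear and sequential, which is exactly the constraint I want to question.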

Out of the Book

Let's face it, we live in the digital age. More and more often, we turn to the internet for our information needs. Libraries are being used differently now: instead of pulling books from shelves, users are leveraging the study space and combing through online copies of books. Hobbyist researchers trawl Google looking for information on whatever subject happens to strike their fancy at the time. And we've become so accustomed to looking for information by reading text upon text that we've simply translated this metaphor from the analogue to the digital realm. But given the rate at which technology is growing, is this still the right path to take? That's what I'm looking to explore in my dissertation. How do we get out of the book? What new ways can we visualise data in order to consume it? And for those cases that do involve combing through text, how can we make the UI interaction different or more efficient, so it moves beyond reading a book on a computer screen? That is the foundation for a large portion of my dissertation.

As I progress through my research, I’ll be exploring other topics as well (especially in the realm of data visualisations and common UI problems inherent in academic systems). And as I go, I’ll be blogging about my journey, so feel free to tune in and comment on what you see. I’ll do my best to keep you up to date on my progress and share with you this journey as I move out of the book and into the digital.