A Digital Education

Meredith Dabek, Maynooth University

Category: Digital Humanities (page 1 of 2)

Creating a Digital Scholarly Edition: Lessons from The Woodman Diary Project


In a previous blog post, I wrote about the Woodman Diary project, in which a group of students (myself included) enrolled in AFF606a (Digital Scholarly Editing) are creating a digital edition of a First World War diary under the guidance of Professor Susan Schreibman. The project, which began in earnest in January 2015, is now entering its final weeks. Though the creation of each digital scholarly edition may differ depending on the project team, the timeframe, resources, or other aspects, there are some general lessons we can draw from the Woodman Diary project that may prove helpful for future work.

Teamwork, Communication and Project Management

A digital scholarly edition such as the Woodman Diary has many parts. There is the text itself, its transcription and digital images. There are the technical aspects, such as the XML/TEI encoded files and the XSLT used to transform the XML. Digital editions often include extensive contextual and historical information, and there might also be design considerations for the final website. With each part progressing at its own rate during the project timeframe, teamwork and communication among team members have been vital to the Woodman Diary’s progress.

At the start of the project, we established clear goals and a clear division of labor by having each team member assume responsibility for one part of the digital scholarly edition. Doing so allowed us to establish clear communication avenues; questions about the annotations, for example, are directed to Noel, whereas Josh handles any issues with the design composites. By assigning one person to take charge of a specific piece of the project, we are striving to eliminate any confusion or cross-purpose tasks.

The division of labor also contributes to effective teamwork within the project. Given the scope and timeframe of this project, it simply is not possible to complete the necessary work without each team member contributing to the whole. Moreover, allowing each team member to oversee his or her own area of responsibility helps ensure the continued progress of the project, by separating a seemingly daunting task into manageable pieces.

At the same time, however, the appointment of a project manager is absolutely essential to ensuring the project advances as intended. Project managers give a project structure and a foundational grounding, enabling team members to work together to accomplish defined goals. As the Project Management Institute (PMI) states on its website, “hope is not a strategy.” We could not simply have crossed our fingers and anticipated a positive outcome. When individuals come together as a team to create something, whether it is a digital scholarly edition, a new software program or the construction of a building, they need a strong, solid plan and a leader who can guide the process from start to finish.

While the process of creating a digital scholarly edition such as the Woodman Diary is the result of the collective efforts of the entire team, ceding overall management and oversight of the project to one person is important for success. Woodman Diary team member Shane McGarry serves as the project manager, and his expertise and previous experience in such a role have proven invaluable. Throughout the last several months, Shane has kept us focused on our long-term goals and deadlines, acted as the primary contact between the project team and Professor Schreibman, and shepherded the project from its early beginnings to this final month. He also ensures we adhere to good project management principles by establishing clear communication processes.

Our team meets in person for weekly progress meetings, avails itself of project management tools such as Google Drive, Google Groups, and Jira, and uses a shared Google Calendar to highlight any personal commitments that might interfere with deadlines. These practices enable us to communicate effectively amongst ourselves, whether it is simply to check in or to crowdsource ideas for a particular aspect of the digital scholarly edition.

Effective team communication, though, is more than simply staying in touch. Clear, consistent communication can help identify potential risks before they become problems, determine which areas of the project might need more attention, or reallocate resources based on progress reports. Indeed, project teams that communicate well are more likely to be successful. According to a 2013 report from the Project Management Institute, projects with highly effective communication plans were more likely to meet their original goals (80%, versus 52% of projects with minimal communication) and more likely to be completed on time (71%, versus 37%). With so many different parts to the Woodman Diary project, its ultimate success will be due, in large part, to our team’s ability to communicate well.

Know what the project is – and what it isn’t

Good communication can also mean listening, especially to those who have relevant knowledge. Last month, our team had the opportunity to speak with Gordon O’Sullivan, a former student at Trinity College Dublin who served as the project manager for another digital scholarly edition, the Mary Martin Diary project. Gordon offered a wealth of advice and feedback, but his most valuable piece of guidance was this: know what your project is – and know what your project isn’t.

Scope creep – the unplanned or continuous expansion or extension of a project’s scope – is the bane of many project managers (“Scope Creep”). Particularly in a group environment, when ideas are flowing and creativity peaks, it is easy to get carried away with grandiose visions and “wish list” items. But such ideas often don’t come with the necessary corresponding adjustments in time, resources and/or money. Moreover, many scope creep ideas are often “nice to have” elements in the project, but are not essential components for its completion.

Albert Woodman’s diary contains multiple inserted maps and newspaper clippings, referencing various campaigns and attacks during the war. Additionally, he mentions several towns and cities throughout his entries, which are encoded with a <placeName> TEI tag. In trying to determine how best to include the maps and the references to specific places in the project, we have considered using geo-referencing software to create dynamic images comparing Woodman’s geographic references with present-day Google Earth (see the sample image below).

Ultimately, though, the geo-referenced maps are an example of scope creep. Their inclusion in the project would be interesting and informative, but the time involved in their creation (as well as the time needed to learn the specific geo-referencing software) shifts attention away from the project’s core components, especially at this critical time in our schedule. Gordon’s advice reminds us to focus on our original project plan. For now, geo-referenced maps do not fit within the scope of what our project is. Rather than attempting too much, we can instead concentrate on completing and refining our initial objectives and goals.


Though the Woodman Diary project may be unique with regard to its purpose, goals and final result, the lessons that I and the other team members learned throughout the process can be useful and applicable to other digital scholarly edition (DSE) projects. From the appointment of a project manager to minimizing scope creep, the example set by our project team will hopefully prove beneficial for future DSE projects.

Works Cited:

Project Management Institute. The Essential Role of Communications. New York: PMI, 2013. Web. 18 April 2015.

“Scope Creep.” Techopedia. Janalta Interactive Inc., 2015. Web. 18 April 2015.

“Why is Project Management Important?” Project Management Institute. PMI New York, 2015. Web. 18 April 2015.

Encoding Choices in the Woodman Diary Project

TEI and Diplomatic Editions

Developed and first released in 1990, the Text Encoding Initiative (TEI) Guidelines are a method of text encoding that allows both computers and humans to read and understand texts, independent of any specific operating system. The Guidelines, which are expressed in the Extensible Markup Language (XML), provide scholars with pre-defined markup tags and elements to establish the structure of a particular text. The full set of the Guidelines comprises nearly 500 elements, which digital humanists use to indicate what a text is, rather than how it should look or act.

TEI files have two parts: (1) a header, which includes information about the text, such as its title, author, publisher, and other bibliographic items; and (2) the body or text section, which contains the encoding of the actual text. All of the TEI tags and elements are organized into one of these two parts (“Introducing”). In addition to common structural elements such as paragraphs (<p>) and lines (<l>), the TEI Guidelines also include tags that allow encoders to communicate editorial choices (<choice>), record deletions and additions in the original text (<del> and <add>), and flag passages that cannot be read with certainty (<unclear>). These tags are often used when scholars seek to create a diplomatic edition, a version of an original text which attempts to accurately reproduce its significant features, including spelling, abbreviations, deletions and other alterations (Pierazzo).
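As a minimal sketch of this two-part structure, the short Python snippet below builds a tiny TEI-style file (the title and content are invented for illustration, not taken from an actual edition) and pulls metadata from the header and text from the body using the standard library’s ElementTree:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical TEI document: a header carrying bibliographic
# metadata, and a text/body section carrying the encoded text itself.
TEI = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Diary of Albert Woodman</title></titleStmt>
      <publicationStmt><p>Unpublished draft.</p></publicationStmt>
      <sourceDesc><p>Two notebooks, 1918.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <p>Arrived safely in Dunkirk.</p>
    </body>
  </text>
</TEI>"""

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
root = ET.fromstring(TEI)

# Bibliographic information lives in the header...
title = root.find(".//tei:titleStmt/tei:title", NS).text
# ...while the encoded text lives in the body.
first_para = root.find(".//tei:body/tei:p", NS).text

print(title)       # Diary of Albert Woodman
print(first_para)  # Arrived safely in Dunkirk.
```

Everything a reader sees in the finished edition is generated from the body, while the header feeds the edition’s bibliographic apparatus.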

Diplomatic editions can range in their adherence to accuracy, from those considered ultra-diplomatic or strictly diplomatic “in which every feature which may reasonably be reproduced…is retained” to editions that feature normalized texts, created with readability in mind (Driscoll). Many scholarly editions fall somewhere in the middle, with an emphasis on a “semi-diplomatic” edition that retains some of the original text’s features, but not all. Such is the case here at Maynooth University, where a group of students enrolled in the Digital Scholarly Editing module are using TEI to encode and create a digital edition of the Woodman Diary.

The Woodman Diary Project

In 1918, Albert “Bert” Woodman was a soldier in the “L” Signal Company of the Royal Engineers, stationed in Dunkirk, France during World War I. After marrying his sweetheart, Nellie, Bert started to keep a diary of his experiences, intending to share it with Nellie when he returned home. Bert’s handwritten entries, starting in January 1918 and continuing until just after Armistice Day in November, fill the front and backs of nearly every page in the diary and span two physical journals, known by their respective brand names, Wilson and Butterfly.

Many historical documents, like Woodman’s diary, present unique challenges and opportunities for text encoders. Aside from understanding and transcribing an individual’s specific handwriting style, encoders may also encounter faint or faded writing, ink spills that obscure words, scribbles, and cross-outs. Consequently, text encoders (in this case, the students in the module) must make careful editorial choices regarding the level of accuracy encoded in TEI.

Though the Woodman Diary Project is not a strict or ultra-diplomatic edition, the project team did decide to encode a handful of features often present in diplomatic editions, such as unclear words, additions and deletions, and abbreviated words. These tags and elements not only help preserve Bert’s idiosyncrasies, but they also allow readers in the general public or academic researchers to understand more about the diary and the circumstances under which it was written.

As often happens with handwritten documents, the Woodman diary contains a number of struck out words, phrases and letters, perhaps because Bert misspelled something or incorrectly recorded a number or name. These deletions are frequently accompanied by additions, either above or next to the original text. To accurately represent these features of the text, the Woodman Diary Project team used TEI’s <del> and <add> tags. Additionally, the attribute @rend gave team members the ability to indicate further characteristics, such as the position of the addition (e.g., above, over-written, next to) and even the very nature of the deletion (e.g., scribble, strikethrough, etc.):

… when <add rend="overwritten">the<del>J</del></add> the Union Jack comes along …

5 March

(Woodman, 5 March 1918)
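The project’s actual transformation is handled with XSLT, but the rendering logic for <del> and <add> can be sketched in a few lines of Python. The fragment below is hypothetical (not a diary transcription), and the bracketing convention is simply one plausible way to display deletions:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment; the diary's real markup also carries @rend values.
FRAGMENT = ('<p>Don\'t get any <del rend="strikethrough">notes</del>'
            '<add rend="above">letters</add> today</p>')

def render(elem, diplomatic=False):
    """Flatten markup to text: suppress deletions in the reading view,
    bracket them in the diplomatic view."""
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == "del":
            if diplomatic:
                parts.append("[" + render(child, diplomatic) + "]")
        else:
            parts.append(render(child, diplomatic))
        parts.append(child.tail or "")
    return "".join(parts)

root = ET.fromstring(FRAGMENT)
print(render(root))                   # Don't get any letters today
print(render(root, diplomatic=True))  # Don't get any [notes]letters today
```

The same encoded source thus yields either a clean reading text or one that preserves Bert’s second thoughts.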

Occasionally, there were words the project team was unable to decipher with absolute certainty. Despite having access to high-quality, high-resolution digital images of the physical diary, some words remain difficult to read, even when a proposed reading makes sense within the context of the entry. In such cases, TEI’s <unclear> tag is used to contain “a word, phrase or passage which cannot be transcribed with certainty because it is illegible…in the source [document]” (“Elements Available”). The tag signals to readers and researchers that there is still some doubt regarding the transcription of a word or phrase:

I’ll start another as soon as I can get the price of one <unclear>more</unclear>!!!

8 July

(Woodman, 8 July 1918)
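One way an edition might keep that doubt visible in a rendered view – a sketch, not the project’s actual XSLT, with a “(?)” marker chosen purely for illustration – is to flag any <unclear> reading as it is flattened to plain text:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment modeled on the entry above (TEI namespace omitted).
FRAGMENT = ("<p>I'll start another as soon as I can get the price "
            "of one <unclear>more</unclear>!!!</p>")

def flatten(elem):
    """Flatten markup to plain text, flagging uncertain readings with '(?)'."""
    parts = [elem.text or ""]
    for child in elem:
        inner = flatten(child)
        # keep the editor's doubt visible to the reader
        parts.append(inner + "(?)" if child.tag == "unclear" else inner)
        parts.append(child.tail or "")
    return "".join(parts)

print(flatten(ET.fromstring(FRAGMENT)))
```

A reader then sees at a glance which words are the editor’s best conjecture rather than a certain transcription.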

When scholarly editions want to retain some diplomatic edition features, text encoders may offer the option of switching between the original text and an edited version. TEI’s <choice> tag allows for this, giving encoders the ability to “switch automatically between one ‘view’ of a text and another,” and therefore providing readers and researchers with insight into the encoder’s editorial choices (“Elements Available”). For the Woodman Diary Project, team members used the <choice> tag to contain abbreviations (<abbr>) and expansions (<expan>). Bert seems to have favored economy, given his use of every page available to him in his notebooks, and he frequently abbreviated Standard English words and phrases in a likely attempt to save precious writing space:

Don’t get any <choice><abbr>ltrs</abbr><expan>letters</expan></choice> at all today

4 Feb

(Woodman, 4 Feb 1918)
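The view-switching that <choice> enables can also be sketched in Python (the fragment is hypothetical, and the edition’s real toggle is built with XSLT on the website): the renderer simply keeps one child of each <choice> and drops the other.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment modeled on the entry above (TEI namespace omitted).
FRAGMENT = ("<p>Don't get any <choice><abbr>ltrs</abbr>"
            "<expan>letters</expan></choice> at all today</p>")

def render_view(elem, view="diplomatic"):
    """Keep <abbr> in the diplomatic view, <expan> in the reading view."""
    drop = "expan" if view == "diplomatic" else "abbr"
    parts = [elem.text or ""]
    for child in elem:
        if child.tag != drop:
            parts.append(render_view(child, view))
        parts.append(child.tail or "")
    return "".join(parts)

root = ET.fromstring(FRAGMENT)
print(render_view(root, "diplomatic"))  # Don't get any ltrs at all today
print(render_view(root, "reading"))     # Don't get any letters at all today
```

Because both readings live side by side in the markup, neither view requires re-editing the source.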

As demonstrated by the examples above from the Woodman diary, the TEI tags for encoding editorial changes and choices prove particularly useful for scholarly editions. While some encoders may choose to make silent corrections or emendations to enable easier reading of a text, partial or strict adherence to a diplomatic encoding offers accuracy and authenticity when dealing with historical texts. The encoding choices made by the Woodman Diary Project team give readers and researchers further insight into Bert Woodman and provide a more complete representation of his diary.



Driscoll, M.J. “Electronic Textual Editing: Levels of Transcription.” TEI: Text Encoding Initiative. TEI Consortium, n.d. Web. 18 March 2015.

“Elements Available in All TEI Documents.” TEI: Text Encoding Initiative. TEI Consortium, n.d. Web. 18 March 2015.

“Introducing the Guidelines.” TEI: Text Encoding Initiative. TEI Consortium, 2013. Web. 1 January 2014.

Pierazzo, Elena. “A Rationale of Digital Documentary Editions.” Literary and Linguistic Computing. 26.4 (2011): 463-477. Web. 18 March 2015.

Woodman, Albert. “Diary.” 1918. The Woodman Diary Project. An Foras Feasa, Maynooth University.


Take Two: Literature and DH

Recently, two intriguing articles from well-respected Digital Humanities scholars came through in my feed reader, and as they align quite nicely with my own interests in the intersection of technology and literature, I thought I’d share them here.

What is an @uthor? by Matthew Kirschenbaum

Writing for the LA Review of Books, Kirschenbaum (perhaps best known for his article “What is Digital Humanities and What’s It Doing in English Departments?”) explores how the evolving landscape of social media and author engagement with audiences online is changing the nature of literary criticism and the very idea of authorship itself:

Today you cannot write seriously about contemporary literature without taking into account myriad channels and venues for online exchange. That in and of itself may seem uncontroversial, but I submit we have not yet fully grasped all of the ramifications. We might start by examining the extent to which social media and writers’ online presences or platforms are reinscribing the authority of authorship. The mere profusion of images of the celebrity author visually cohabitating the same embodied space as us, the abundance of first-person audio/visual documentation, the pressure on authors to self-mediate and self-promote their work through their individual online identities, and the impact of the kind of online interactions described above (those Woody Allenesque “wobbles”) have all changed the nature of authorial presence. Authorship, in short, has become a kind of media, algorithmically tractable and traceable and disseminated and distributed across the same networks and infrastructure carrying other kinds of previously differentiated cultural production.

There are Only Six Basic Book Plots 

In an article for Motherboard, contributing editor Ben Richmond interviewed Matthew Jockers (textual analysis proponent and author of Macroanalysis) about his algorithmic model that identifies archetypal plot shapes. According to his research, about 90% of the time, results showed six basic plots (with the remaining 10% indicating seven basic plots). While some of his data remains unknown, Jockers did release his tools on GitHub to encourage others to try the same experiment for themselves:

Most books that measure the number of plots seem aimed at writers and would-be writers, but Jockers’s work has implications for readers, librarians, and even literature snobs, or anyone who wants to put snobs in their places.

As he was charting plots, Jockers noticed that some genres that are derided for being “formulaic,” like romance, aren’t just relying on boy-meets-girl.

“Romance showed some proclivity for two of the six plot shapes, but it wasn’t an overwhelming case of all the plots falling into one,” Jockers said. “It was a much more evenly distributed from these six shapes.”

End of Term Reflections

Well, it’s been four months, and my first semester as a Digital Humanities student is (for all intents and purposes) finished. From my perspective, the last sixteen weeks have been incredibly productive, informative and thought-provoking. I’ve not only learned a great deal, but I’ve also had the opportunity to think critically about what I’ve learned, and how I believe those lessons fit within the overall Digital Humanities field. Below are some of my reflections and thoughts about this past term, and some ideas for the future.

Though my technical and coding skills have vastly improved (especially when compared to the days and months when I was teaching myself), I still believe this is one area where I can do better. I’ve grappled with data modeling, encoding, and metadata schemas, but practice makes perfect, and there is always more to learn. I do wish there had been some follow-up to the intensive, pre-term Java course we took; I did well with the module at the time, but feel I’ve lost some of the knowledge since due to non-use.

The intersections between Digital Humanities, media and digital (electronic) literature remains a strong area of interest for me, as one might have guessed based on some of my previous posts. I’ve been attempting to expand my knowledge of this area by reading on my own, and I’m fascinated by the creativity and ingenuity found in some of these new digital literature projects. In looking forward to the future, I’ve started working on a PhD proposal for doctoral-level research specifically addressing digital (electronic) literature. It’s still very much a work in progress, but I’m passionate about this particular area of study and look forward to what comes next.

My MA program is, as the name implies, Digital Humanities, so many of the readings and lectures have had a literature and/or history focus to them. As a result, I am very curious about what doesn’t come up as often, namely the state of the digital arts, and how that intersects with Digital Humanities. Some colleagues and lecturers are working in the art history and cultural heritage sectors, but I still sense that there is a huge gap in awareness between Digital Humanities and digital arts (or music or performance). There could be many reasons for this (I have a few theories of my own), but I also believe there’s a world of untapped potential with the digital arts (the What’s the Score? project at the Bodleian Library is one project that immediately comes to mind) and I’d love to know more. I’m very interested in learning more about applying digital ideas and techniques to the art world, which is why I’m especially excited for my upcoming practicum next semester with the Irish Museum of Modern Art. More on that next term!

Similarly, I’m also curious about issues of diversity, race, gender and sex in the Digital Humanities. From my (admittedly somewhat limited) perspective, I see the field as one in which the majority of thought leaders and researchers are still male and overwhelmingly white. I’m interested in that dynamic and what it means both for the DH field and for DH projects and research. To my mind, there is a clear and identifiable need for more diversity within the field. I don’t know that I’m the best person to propose any solutions, but I would love to see a more concerted effort to think critically about expanding DH to include those voices that aren’t necessarily being heard. (Of course, if anyone has suggestions for readings that address this very topic and would like to point me in the right direction, I’d be most appreciative.)

These are just a few thoughts; like so many things in life, learning about Digital Humanities is an ongoing process (especially since it is an evolving field itself) and I know I’ll have much more to say in 2015.

Until then, Happy Holidays, and a Happy New Year!

Text Mining: An Annotated Bibliography

In 2003, in an issue of the Literary and Linguistic Computing journal, humanities computing scholar Geoffrey Rockwell asked the question, “What is text analysis, really?” More than ten years later, some Digital Humanities scholars are still asking the same question, especially as technological advances lead to the creation of new text analysis tools and methods. In its most basic form, text analysis – which is also known as text data mining or, simply, text mining – is the search for and discovery of patterns and trends in a corpus of texts. The analysis of those patterns and trends can help researchers uncover previously unseen characteristics of a specific corpus, deconstruct a text, and reveal new ideas and theories about a particular genre or author. The following annotated bibliography offers an overview of text mining tools in Digital Humanities, with the intention that it may serve as a starting point for further exploration into text analysis.
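At its simplest, that search for patterns can be nothing more than counting. The toy example below (with an invented two-entry corpus; the tools surveyed in this bibliography are far more sophisticated) tallies term frequencies across documents, which is the starting point for most text mining work:

```python
import re
from collections import Counter

# A toy corpus: the point here is the method, not the texts themselves.
corpus = {
    "entry_1": "The signal company moved at dawn. The dawn was cold.",
    "entry_2": "Cold rain fell on the camp. The company rested.",
}

def term_frequencies(text):
    """Lowercase, tokenize on letters/apostrophes, and count occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Aggregate counts across the whole corpus to surface recurring terms.
totals = Counter()
for doc in corpus.values():
    totals += term_frequencies(doc)

print(totals.most_common(3))
```

From such raw counts, researchers layer on normalization, stop-word filtering, and statistical models to move from word frequencies to genuine patterns and trends.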

Argamon, Shlomo and Mark Olsen. “Words, Patterns and Documents: Experiments in Machine Learning and Text Analysis.” Digital Humanities Quarterly. 3.2 (2009). Web. 15 November 2014.

In their article, Argamon and Olsen suggest that the rapid digitization of texts requires new kinds of text analysis tools, because current tools may not scale effectively to large corpora and do not adequately leverage the capability of machines to recognize patterns. To test this idea, Argamon and Olsen, through the ARTFL Project, developed PhiloMine, a set of text analysis tools that extend PhiloLogic, the authors’ full-text search and analysis system. They provide an overview of PhiloMine’s tasks (predictive text mining, comparative text mining and clustering analysis), and then summarize three research papers that highlight the tasks’ strengths and weaknesses.

Borovsky, Zoe. “Text and Network Analysis Tools and Visualization.” NEH Summer Institute for Advanced Topics in Digital Humanities. Los Angeles, 22 June 2012. Presentation. Web. 15 November 2014.

This presentation by Borovsky, the Librarian for Digital Research and Scholarship at UCLA, provides an overview of text mining tools, with an in-depth look at a few specific tools: Gephi, Many Eyes, Voyant and Word Smith. Borovsky highlights some of the benefits and challenges of each tool, and offers examples of sample outcomes. Though the slides are presented without a transcript of Borovsky’s talk, they offer a high-level overview of these four text mining tools, and Borovsky’s template easily allows readers to discover relevant information about each tool.

Green, Harriett. “Under the Workbench: An analysis of the use and preservation of MONK text mining research software.” Literary and Linguistic Computing. 29.1 (2014): 23-40. Web. 15 November 2014.

To help further humanities scholars’ understanding of how to use text mining tools, Green conducted an analysis of the web-based text mining software MONK (Metadata Opens New Knowledge). Green studied a random sample of 18 months of analytics data from the MONK website and conducted interviews with MONK users to understand the purpose of the tool, its usability and the challenges encountered. Along with other findings, Green discovered that MONK is often used as a teaching tutorial and that it often provides an entry point for students and researchers learning about text analysis.

Muralidharan, Aditi and Marti A. Hearst. “Supporting exploratory text analysis in literature study.” Literary and Linguistic Computing. 28.2 (2013): 283-295. Web. 15 November 2014.

According to Muralidharan and Hearst, the majority of text analysis tools have focused on aiding interpretation, but there haven’t been many (if any) tools devoted to finding and revealing insights not previously known to the researcher. So Muralidharan and Hearst created WordSeer, a text analysis tool designed for literary texts and literary research questions. To illustrate the functionality of WordSeer, Muralidharan and Hearst used this text analysis tool to examine the differences in language between male and female characters in Shakespeare’s plays.

Ramsay, Stephen. “In Praise of Pattern.” Faculty Publications – Department of English. Digital Commons @ University of Nebraska-Lincoln: 2005. Web. 15 November 2014.

Ramsay sets out to explore the idea of pattern as a point of intersection between computational text analysis and the “interpretive landscape of literary studies.” Ramsay wanted to prove that there could be a computational tool that offered interpretive insight and not specific facts or results. So he set out to create StageGraph, a tool designed ostensibly to study structural properties in Shakespeare’s plays, but one also stemming from a branch of mathematics known as graph theory.

Rockwell, Geoffrey. “TAPoR: Building a Portal for Text Analysis.” Mind Technologies: Humanities Computing and the Canadian Academic Community. Ed. Ray Siemens and David Moorman. University of Calgary Press: 2005. 285-299. Print.

In this chapter, Rockwell introduces readers to the TAPoR – the Text Analysis Portal for Research. The TAPoR project began as a collaboration of researchers and projects and eventually proposed a network of labs and servers that would connect and aggregate the best text analysis tools, making them available to the larger academic community. Rockwell then explores TAPoR in more detail, offers an overview of the portal’s specific functions, and discusses the types of users the project envisions will use the tools available through the portal.

—. “What is Text Analysis, Really?” Literary and Linguistic Computing. 18.2 (2003): 209-219. Web. 15 November 2014.

In this article, Rockwell argues that text analysis becomes, in effect, an interpretive aid because it creates new hybrid versions of a text by deconstructing and reconstructing some original text. As a result, Rockwell stresses the need for new kinds of text analysis tools that emphasize experimentation over hypothesis testing. He concludes the paper with a proposal for a portal model for text analysis tools, using his own TAPoR as an example.

Simpson, John, Geoffrey Rockwell, Ryan Chartier, Stéfan Sinclair, Susan Brown, Amy Dyrbye, and Kirsten Uszkalo. “Text Mining Tools in the Humanities.” Journal of Digital Humanities. 2.3 (2013). Web. 15 November 2014.

Derived from an oral presentation at a research conference, Simpson et al.’s brief article and accompanying poster presents the testing framework developed for the TAPoR text mining tool. The TAPoR testing framework was then used as a proposal for the creation of a systematic approach to testing and reviewing humanities research tools, especially text mining tools.

“Text Mining.” DiRT Digital Research Tools. n.p., n.d. Web. 15 November 2014.

The DiRT directory compiles information about digital research tools for scholarly and academic use. The directory is divided into several categories, with one category devoted to text mining tools. Users can narrow the category by platform (operating system), cost, whether the tool is open source, and more. Each individual entry includes a description of the tool as well as a link to the tool itself or its developer’s website. While the DiRT directory is an invaluable resource of text mining tools, one drawback is that the tools themselves are not rated in any way, either by the directory’s editorial board or by other users.

van Gemert, Jan. “Text Mining Tools on the Internet.” ISIS Technical Report Series. The University of Amsterdam: 2000. Web. 15 November 2014.

van Gemert’s report is a thorough and comprehensive overview of text mining tools available on the Internet, though as it was published in 2000, it is now out-of-date. Still, this report offers a great deal of information both about specific text mining tools and the companies behind their creation. Van Gemert includes website links, summaries and information about available trial versions for each tool.

[Image note: text cloud created from the content of this post using Tagul, an online word cloud creator.]

Love Letters of 1916


In April 1916, during Easter Week, Irish republicans launched an armed rebellion aimed at ending British rule in Ireland. Though British forces quickly suppressed the insurrection, the event, now known as the Easter Rising, helped propel Ireland to independence.

To help preserve and document life in Ireland in the months before and after the Easter Rising, researchers at Trinity College Dublin and Maynooth University, led by Dr. Susan Schreibman, created the Letters of 1916 project. Launched in September 2013 as Ireland’s first crowdsourced (digital) humanities project, Letters of 1916 “aims to create a large scale digital collection of letters” written around the time of the Easter Rising, as well as create “an online archive of letters created by the public for the public” (Trinity College Dublin).

While many of the letters address the Easter Rising in some way, this diverse collection of correspondence includes a wide range of topics. From art, business and politics to family life and faith, Dr. Schreibman wanted to ensure that the Letters of 1916 would “bring to life…the unspoken words and the forgotten words of ordinary people during this formative period in Irish history” (Trinity College Dublin).

James and May

James Finn and May (Fay) Finn

Among the thousands of unspoken and forgotten words of ordinary people catalogued by the Letters of 1916 Project are those of James Finn and May Fay. James and May were engaged sometime in late 1915 or early 1916, and between January and June of 1916, exchanged love letters as they continued their courtship and planned their wedding. The letters, donated to the project by granddaughter Tessa Finn, are filled with stories and anecdotes of everyday life in Ireland, friends and family of the couple and, in the weeks prior to and following Easter, the Rising.

James worked as a senior civil servant in Dublin, and lived in the city, while May remained at her family’s home in Mullingar, County Westmeath. They were prolific writers, exchanging the nearly 100 letters in just about six months’ time, and, in some cases, wrote and received replies on the same day – a testament both to their devotion to one another and a fairly efficient Irish postal service.

While the majority of James and May’s letters focused on their wedding plans and their future life together, several of the letters – James’ in particular – offer glimpses into the political climate of Dublin leading up to and following the Easter Rising. There are no letters between James and May during the days of the Rising itself; instead, James wrote about his plan to visit May in Mullingar for Easter, after which there is a gap of more than 10 days before he wrote again to reassure May of his safe return to Dublin.

Part of the reason for the gap between letters is that James was likely with May, visiting as planned. However, it also underscores the confusion and uncertainty that reigned in the days and weeks after the Rising, when accurate information was difficult to obtain, particularly for those outside of Dublin:

… News was so very scarce and uncertain that I very soon began to look out for another letter, it’s sickening not to know how long that suspense would last… (Fay, 7 May 1916)

In James’ case, he may have been wary of appearing to openly support the Irish Volunteers, especially as a civil servant. Many of his letters to May were sent from his office, on National Health Insurance Commission letterhead, and on 8 May 1916, he specifically mentioned his concern that his letters may not have gotten through due to the censors (Finn).

In later letters from the spring of 1916, James and May demonstrate a deliberate carefulness with the content of their correspondence. After sharing some of Patrick Pearse’s writings with May on 26 May 1916, James assured her that he “received the copy of [the] letter quite safely” (Finn), implying that possession of Pearse’s correspondence might be dangerous.

Their caution was not unfounded. In her contributor profile on the Letters of 1916 website, James and May’s granddaughter Tessa Finn wrote, “Many people they knew were either actively involved or suspected of…involvement” in the Easter Rising. On 18 May 1916, James informed May that one of his colleagues had been arrested because he “spoke Irish continually in his home and played Irish and German music on his piano” (Finn).

Due to his position as a civil servant (as well as the arrest of his colleague), James was probably questioned about his knowledge of the Rising events, a possibility May contemplated with a bit of humor:

We are always looking out for the paper & news we manage to get an odd paper now & then but I saw where all Civil Servants were to render an account of their Easter holidays… You need not be afraid to mention our names anyway; we are not very rebellious characters. (Fay, 10 May 1916)

Despite the heightened political atmosphere of Dublin (or, perhaps, because of it), both James and May’s letters suggest an increased appreciation for each other. In times of turmoil and upheaval, these two lovers naturally turned to one another for comfort, and to give thanks for what they had:

You remember how often I told you that both by letter and by mouth: that I might not have the good fortune or the grace from God to be married to you. Now somehow I feel that I may be thought worthy although why it should be so I cannot understand when I think of all the fine spirits that this calamity has called to their eternal account. Things are gradually getting more like their usual way and people generally are beginning to rebuild and restore all that has been shattered but it will be many a long day before Dublin is anything like its old self. (Finn, 8 May 1916)

In the aftermath of the Easter Rising, James and May’s letters illustrate a timeless fact: political uprisings can profoundly and irrevocably change a country, and yet life – and love – continue on. Thanks to the Letters of 1916 Project, the words of these everyday, ordinary lovers have been preserved and brought to new audiences, nearly 100 years later.

[Photo Credits: Letters of 1916 website; Tessa Finn’s contributor profile]

Works Cited

Fay, May. Letter from May Fay to James Finn. 7 May 1916. Web. 8 November 2014.

Fay, May. Letter from May Fay to James Finn. 10 May 1916. Web. 8 November 2014.

Finn, James. Letter from James Finn to May Fay. 8 May 1916. Web. 8 November 2014.

Finn, James. Letter from James Finn to May Fay. 18 May 1916. Web. 8 November 2014.

Finn, James. Letter from James Finn to May Fay. 26 May 1916. Web. 8 November 2014.

Finn, Tessa. Contributor Profile. Letters of 1916. National University of Ireland Maynooth. 2014. Web. 8 November 2014.

Trinity College Dublin. Letters of 1916 Research Project Calling on Public to Contribute Family Letters. 24 September 2013. Trinity College Dublin Communications Office. Web. 8 November 2014.

Scholarship and the Future

In 2007, Digital Humanities scholar Peter Robinson wrote a paper titled “Electronic Editions for Everyone,” in which he explored the then-current state of digital or electronic scholarly editions. Though the paper focuses primarily on scholarly texts, Robinson spends the first section outlining why he believes books have defied the digital revolution, in contrast to film and music. According to Robinson, electronic books (e-books) offer neither a better distribution medium than printed books nor a better performance medium. As a result, print books continue to flourish because e-books do not offer anything worthwhile in exchange.

While reading Robinson’s article through the lens of my 2014 perspective, I couldn’t help but disagree with nearly all of his introductory arguments about e-books and printed books. These arguments might have been valid at the time Robinson wrote the paper, but seven years later, I believe they don’t hold up well at all. While e-books and e-readers haven’t replaced print books (or print book sales), the rapid rise of products like Amazon’s Kindle and Barnes & Noble’s Nook has made e-books far more commonplace in 2014 than they were in 2007. U.S. e-book sales in 2013 alone accounted for $3 billion. Furthermore, e-readers offer benefits a print book can’t, including the ability to carry an entire library around in one (relatively) small device. *

Of course, it is not Robinson’s fault that his arguments from 2007 look very different in 2014; after all, how could anyone have possibly predicted the incredible rate of technological advancement within the last few years? It does, however, raise some interesting questions about the future of Digital Humanities: for a field so intertwined with technology, how might the continuing advances of technological tools and methods affect the sustainability of DH scholarship?

That technology will continue to change, develop and move forward seems inevitable. Tech companies thrive on pushing limits and finding the next big thing. At some point, Web 2.0 will likely give way to Web 3.0 (or 2.5 or some other term indicating advancement). Whatever that may look like, it also seems inevitable that the next generation of tech and web tools will make our current digital environment seem obsolete. So what, then, happens to Digital Humanities projects and scholarship developed during Web 2.0? Will we be able to access the information? Will the data even be useful anymore, if the technologies used in its creation are no longer valid?

I don’t necessarily have any answers right now. I don’t believe we can stop technology from changing (nor, I think, would we want to). Still, Robinson’s paper – and my reaction to it seven years later – seem to illustrate this uncomfortable uncertainty within Digital Humanities. If we can’t know or predict the future, how are we to ensure our scholarship in the present day (particularly, or at least, scholarship involving and aided by technology) isn’t rendered archaic in the future? Or do we simply accept the possibility of obsolescence?

In my first post on this blog, I talked about the difficulty of defining Digital Humanities, in part because it’s a field in constant motion, always evolving – something largely due to DH’s relationship with technology. In my opinion, this fluid notion of Digital Humanities is both a blessing and a curse. On the one hand, it keeps pushing Digital Humanities projects and scholarship forward, testing the limits of the field and its tools. On the other hand, it may very well mean that what we do today will be irrelevant in two, five or 10 years’ time. Perhaps that’s the risk we take as digital humanists. The one thing I do know is that it’s important to keep asking these questions, to keep refining and re-shaping our ideas of Digital Humanities. Since we can’t move backwards, we might as well keep moving forward – whatever the future brings.

(Post Script: my classmate, Josh Savage, wrote a blog post about the durability of data, in which he grapples with similar themes and questions.)

Note: In 2010, Robinson’s paper was published as a chapter in Text and Genre in Reconstruction, edited by Willard McCarty. In an updated appendix, Robinson does mention the introduction of the Kindle, but (in my opinion) rather casually dismisses its potential to upset his arguments.

Access & Accessibility in Digital Humanities

This year, from October 20th to October 26th, humanities researchers will observe International Open Access Week, a global event designed to celebrate and promote the benefits of open access and to encourage open access as the standard for academic scholarship. The organizers behind International Open Access Week define open access as the “free, immediate, online access to the results of scholarly research, and the right to use and re-use those results as you need.” Many projects, journals and scholarly resources within Digital Humanities promote themselves as open access, and many digital humanists support an increased commitment to open access research.

There is, however, a key difference between providing access to Digital Humanities research, and making that research accessible to all. While access can refer to “the right or opportunity to use or benefit from something,” accessibility specifically refers to something “easily obtained or used,” particularly by individuals with a disability (emphasis mine). If Digital Humanities, as a field of study, intends to maintain and perhaps even advance its commitment to access, then digital humanists must also consider accessibility when creating their projects. Far too often, the needs of individuals with disabilities remain neglected in digital spaces. According to George H. Williams, Associate Professor of English at the University of South Carolina Upstate:

Many of the otherwise most valuable digital resources are useless for people who are – for example – deaf or hard of hearing, as well as for people who are blind, have low vision or have difficulty distinguishing particular colors.

Indeed, despite its widespread use across many demographic groups, the Internet is “inherently unfriendly to many different kinds of disabilities” (Lazar and Jaeger, 70).

Accessibility on the Web

The Web Accessibility Initiative (WAI), created by the World Wide Web Consortium (W3C), tracks how individuals with disabilities use the Internet and develops guidelines and resources to help ensure websites are accessible to everyone. In theory, the Internet is designed to improve communication by removing barriers and obstacles; in practice, however, when websites – or Digital Humanities projects – are badly designed, they can prevent a large subset of the population from accessing information. Furthermore, each individual has his or her own strengths, weaknesses, skills and abilities, all of which can affect how he or she uses the Internet. Digital projects that take a “one size fits all” approach limit their reach and impact when certain groups of people can’t use or access that project.

The WAI offers an overview of the diversity of abilities and disabilities, which can range from auditory, visual, cognitive or physical disabilities to age-related impairments, temporary or situational impairments and health conditions. Each disability may present its own barriers to accessibility, requiring different solutions or alternatives. An individual who is hard of hearing, for example, might find audio content inaccessible when it is presented without captions, while someone with a cognitive disability might react poorly to extensive animation or moving images. Even the computer itself, with its traditional mouse-and-keyboard setup, can become an obstacle to a person who has lost a limb or has an injury that prevents use of his or her hands.

Why Does Accessibility Matter?

Accessibility should be an integral part of Digital Humanities projects, for a variety of reasons. Perhaps most obviously, there could be legal implications, since many countries have passed laws requiring web accessibility. Digital Humanities projects are also sometimes funded through federal grants and, as Williams points out, digital humanists may lose such funding if they cannot demonstrate accessibility and adherence to federal accessibility laws.

Additionally, despite the existence of accessibility laws, no central administrative organization or group for web and digital accessibility exists. In the United States, for example, there is no single government agency in charge of ensuring compliance with accessibility laws. According to Lazar and Jaeger, this haphazard approach places “the burden on people with disabilities to enforce their own rights” (76).

Of course, accessibility also helps expand the reach of a Digital Humanities project. By taking the needs of the greatest number of people into account when designing a project, digital humanists can ensure the largest audience for their work, which in turn could help further the research or provide new contexts and connections.

Ideas and Recommendations

Improving accessibility in Digital Humanities will require more than one solution, and should include collaboration between those with expertise and those ready to learn. It will also necessitate improved accessibility policies and laws, as well as the enforcement of those laws. Williams proposes a universal design approach, explaining that universal design “is design that involves conscious decisions about accessibility for all.” It’s also efficient, providing websites and digital projects with compatibility for multiple devices and platforms. This would allow a digital humanist to design and create a project just once, then easily adapt it for different audiences or devices.

The WAI also offers suggestions by highlighting some of the tools a disabled person might use to improve his or her Internet experience (for example, hardware or software meant to help bridge the gap between the individual and the website) and the strategies and techniques a person might develop to interact with non-accessible websites. These include voice recognition software to give commands, screen readers for those with poor vision, and alternatives to the keyboard and mouse (touch-screens, joysticks, etc).
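Some of these checks can be automated during a project’s build or review process. As a small illustration (my own sketch, not part of the WAI’s tooling), the following Python script uses the standard library’s HTML parser to flag `<img>` tags that lack alt text – the kind of omission that leaves screen-reader users with no description of an image:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # A missing or empty alt attribute leaves screen-reader users
            # with nothing to announce. (A deliberate alt="" on a purely
            # decorative image is a legitimate exception this simple
            # sketch does not distinguish.)
            if not attr_map.get("alt"):
                self.missing_alt.append(attr_map.get("src", "(no src)"))

def find_images_missing_alt(html):
    """Return the src of every image in `html` without usable alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

page = '<img src="diary.jpg"><img src="map.png" alt="Map of Dublin, 1916">'
print(find_images_missing_alt(page))  # → ['diary.jpg']
```

A check like this catches only one class of barrier, of course; captions, color contrast and keyboard navigation need their own review.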

Certainly, one important step towards improved Digital Humanities accessibility is awareness within the field. A coalition of American universities and research centers is leading the charge for increased awareness with the Building an Accessible Future for the Humanities project. The Accessible Future partnership, supported in part by the US National Endowment for the Humanities, hosts a series of workshops exploring technologies, design standards and issues with digital projects, all tailored towards securing accessibility’s place in Digital Humanities.

Access has long been an integral part of Digital Humanities, grounded in the idea that digital projects should be available to as many people as possible. If Digital Humanities intends to continue its commitment to open access data and research, then accessibility – and specifically digital accessibility – must also become an integral part of the field. Designing accessible projects may require some rethinking and adjustments, but it won’t be as difficult as one might expect. Lazar and Jaeger remind us that “the technical solutions for web accessibility already exist” (80). It’s simply a matter of being mindful of different abilities, considering accessibility issues and concerns from the start of each project, and ensuring that the information, in its many forms, is accessible to the widest possible audience.

Works Cited

“About.” International Open Access Week. Andrea Higginbotham, n.d. Web. 21 October 2014.

“Access.” The New Oxford American Dictionary. Version 2.2.1. 2011. Apple, Inc.

“Accessible.” The New Oxford American Dictionary. Version 2.2.1. 2011. Apple, Inc.

Accessible Future. Indiana University-Purdue University Indianapolis (IUPUI), 2014. Web. 20 October 2014.

“How People with Disabilities Use the Web.” Web Accessibility Initiative. W3C, 2013. Web. 20 October 2014.

Lazar, Jonathan and Paul Jaeger. “Reducing Barriers to Online Access for People with Disabilities.” Issues in Science and Technology. Winter 2011: 69-82. Web. 20 October 2014.

Williams, George H. “Disability, Universal Design, and the Digital Humanities.” Debates in the Digital Humanities. Ed. Matthew K. Gold. University of Minnesota Press, 2012. 202-212. Web. 20 October 2014.

Of Cats, GIFs, and Contests

At the start of the term, the director of the Digital Humanities program, Dr. Schreibman, provided our class with general guidelines and instructions for these course blogs, along with the admonition that these blogs were intended solely for the Digital Humanities program and were therefore not appropriate places to post pictures of our cats.

[Image: a cat in a boot]

(Not my cat. I promise.)

Fear not, friends. I’m not deliberately flouting those instructions. The above photo is from a late 19th century advertisement for F. W. Lucas & Co., in a collection at the Boston Public Library, and found via the Digital Public Library of America (DPLA). It’s just one example of a public domain photo that can be used in a new contest hosted by the DPLA and DigitalNZ.

GIF IT UP is an international competition, running from 13th October to 1st December, asking interested participants to create the best GIFs reusing public domain and openly licensed digital video, images, text and other material available through the search engines on DPLA and DigitalNZ’s websites. You can view examples of submissions on the GIF IT UP Tumblr and the full guidelines for the contest on the DPLA website.

Based on some of the GIFs submitted so far, response to the contest is positive. I think it’s an especially creative way to help the public engage with public domain collections and practice (or perhaps show off) technical computer skills. It’s an entertaining and educational representation of Digital Humanities, using digital skills to highlight humanities collections.

And yes, in this case, GIFs of cats would be acceptable. One of the six categories for the contest is “Animals.”

[The above image has no known copyright restrictions and no known restrictions on use.]

Crowdsourcing in DH, Part 2

When Jeff Howe and Mark Robinson coined the term “crowdsourcing” in a 2006 article for Wired magazine, the term referred primarily to practices operated by for-profit businesses, particularly within the tech world, whereby a large group of contributors undertook a number of small, often routine and mundane tasks. Nearly a decade later, crowdsourcing has changed and evolved to a point where, like Digital Humanities, a standard, agreed-upon definition is difficult to find.

Stuart Dunn, a Digital Humanities lecturer at King’s College London, describes crowdsourcing as a “loaded term,” since the historical definition of the word connotes “the antithesis of what academia understands as public engagement and impact.” Yet, even with a variety of potential definitions and blurred boundaries for what might be considered a crowdsourced project, many Digital Humanities projects still rely on the term, if only because the larger population has developed a collective – if vague and overgeneralized – understanding of what “crowdsourcing” means.

As I mentioned earlier this week, my classmates and I recently presented on a number of crowdsourced projects. Listening to the other presentations and conducting my own research clearly revealed the depth and breadth of just what “the crowd” can accomplish. Below, I’ve shared a selection of some crowdsourced projects I found particularly interesting.

(There are, of course, many more examples than I’ve listed here. On my Links of Interest page, you can find a link to more DH crowdsourcing examples.)

  • What’s the Score at the Bodleian? – The Bodleian Library at Oxford University launched this project in collaboration with Zooniverse (a larger crowdsourcing platform) to increase access to the library’s collection of printed musical scores. Volunteers transcribe the scores and add metadata tags to help categorize each one. The project initially attracted my attention because I’m a music fan and one-time musician myself, but further thought has me wondering: most online crowdsourcing projects are geared towards sighted volunteers – that is, volunteers need to be able to see something on a website. With What’s the Score?, there’s the potential for the Bodleian to add an audio component, allowing sight-impaired volunteers to offer tags or transcribe based on what they hear. Currently, the Bodleian does have some audio files uploaded, though these appear to be examples from the collection rather than opportunities for contribution. I’d love to see the Bodleian – and other DH crowdsourcing projects – expand their accessibility so that more volunteers could contribute.
  • Reverse the Odds! – Another Zooniverse-affiliated program, Reverse the Odds! is a mobile game developed by Cancer Research UK. While the game is designed with bright colors and an easy-to-use interface, it also incorporates real cancer research data. By playing the game, participants help researchers recognize the patterns of various cancer cells, which, in turn, is used to find real solutions to cancer and cancer symptoms. There are other citizen science projects that have created games to further research; Reverse the Odds! is just one such example.
  • Tag! You’re It! and Freeze Tag! at the Brooklyn Museum – Though now retired, these two projects intertwined games with crowdsourcing in a new way. The Tag! game had volunteers provide collection tags for items in the Brooklyn Museum’s collection, with an interface that set volunteers “playing” against each other for points. The Freeze Tag! component then gave volunteers the ability to revise and correct others’ tags, ensuring a built-in verification and moderation process. The project was a success for the museum, and the use of game names that referenced shared childhood memories (at least for those of us who played the schoolyard game Tag) no doubt helped draw more volunteers to the project.
  • What Was There – Finally, a project not associated with an academic or nonprofit institution. What Was There was created by Enlighten Ventures, LLC, a digital marketing agency. The platform invites participants to upload old photos of their local community, then tag those photos with location and year. Once uploaded, the photos can be overlaid on Google Maps Street View, providing a then-and-now visual comparison of how cityscapes and landscapes have changed over time. According to the website, the project hopes to “weave together a photographic history of the world (or at least any place covered by Google Maps).” That’s a fine goal, but there’s also the potential for historians, architects, urban planners and conservationists to use the data gathered by the project for further research. Enlighten doesn’t (yet) say what is done with the tags gathered, nor does it make the data available to the public, but should the company decide to open up the data, there are possibilities here.
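The verification idea behind Freeze Tag! – accepting a crowd-sourced tag only once several independent volunteers agree on it – can be sketched as a simple threshold vote. The function, names and threshold below are my own illustration, not the Brooklyn Museum’s actual implementation:

```python
from collections import Counter

def accepted_tags(submissions, threshold=3):
    """Return tags proposed independently by at least `threshold` volunteers.

    submissions: a list of (volunteer_id, tag) pairs. Repeat votes by the
    same volunteer for the same tag are counted once, and tags are
    compared case-insensitively.
    """
    # Deduplicate so one enthusiastic volunteer can't push a tag through.
    unique_votes = {(volunteer, tag.lower()) for volunteer, tag in submissions}
    counts = Counter(tag for _, tag in unique_votes)
    return {tag for tag, n in counts.items() if n >= threshold}

votes = [
    ("alice", "portrait"), ("bob", "portrait"), ("carol", "Portrait"),
    ("alice", "oil"), ("bob", "oil"),
    ("alice", "portrait"),  # duplicate vote, counted once
]
print(accepted_tags(votes))  # → {'portrait'}
```

Real projects layer more on top of this (trusted-volunteer weighting, moderator review), but the core pattern is the same: agreement among independent contributors substitutes for a single expert gatekeeper.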

© 2019 A Digital Education
