Lyndsey Smith

Digital Humanities Blog

Digital Preservation Report

Objectives:

The goal of this preservation policy is to outline how the project's born-digital content and the accompanying online virtual tour and exhibit will be preserved.  The digital resources of projects such as this tend to be dynamic, as they can be enriched and extended over time.  A lifecycle management approach must therefore be adopted.  This means that preservation must begin at the creation and design stage of the project in order for it to be successful; this is widely acknowledged to be best practice and is the most cost-effective approach in the long term.  The ultimate goal is to preserve the digital objects so that they remain accessible and viable in the long term and, additionally, so that the component parts may be re-used in future projects.

Considerations:

Consideration has been given to the challenges related to the preservation of digital resources, and to sustained long-term access to the online tour and exhibition and their component parts.  In particular, consideration has been given to the technological obsolescence of both hardware and software, and to the staffing and funding resources available to the institution.  Viable appraisal and selection criteria for digital materials need to take account of factors which assume greater importance than for non-digital materials, such as intellectual property rights and the need to preserve more contextual information about the materials.  Metadata is therefore a vital consideration: there must be sufficient contextual information available to allow the digital materials to be searched and manipulated, so that, in the future, users are able to interrogate the old data to produce new results.

Planning:

The first step is to perform a collections inventory to locate the materials that will be digitised and included in the online exhibit. It is then necessary to count and describe all of the identified material.  After this, all relevant information from acquisition records, the collection itself, and any other useful resource should be gathered.  The location, inventory number, type of physical medium and any identifying information should be recorded.  In addition, it will be necessary to record anything that is known about the hardware, operating systems and software used to create the files.  It is important that consistent terms are used in the descriptions contained in these records.  Relevant intellectual property rights information about each object should be established.
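
By way of illustration, an inventory record of this kind might be modelled as follows.  This is a minimal Python sketch; the field names and example values are hypothetical, not prescribed by the policy, and should be adapted to local cataloguing practice.

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One row of the collections inventory. Field names are illustrative,
    not prescribed by the policy; adapt them to local cataloguing practice."""
    location: str
    inventory_number: str
    physical_medium: str            # use consistent terms, e.g. "CD-R", "3.5-inch floppy"
    identifying_information: str
    creating_hardware: str = "unknown"
    creating_os: str = "unknown"
    creating_software: str = "unknown"
    rights_status: str = "to be established"
    notes: list[str] = field(default_factory=list)

# Example entry (all values hypothetical):
entry = InventoryEntry(
    location="Store B, shelf 4",
    inventory_number="2014-0031",
    physical_medium="CD-R",
    identifying_information="Handwritten label: 'exhibit drafts'",
    creating_software="Microsoft Word 97",
)
```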

Strategy:

Bitstream preservation:

This is the initial stage of the preservation strategy.  The bitstream of every archival digital object should be preserved in its original form indefinitely.  This means that more complex preservation may be carried out, either when there are resources available or when there have been advances in preservation techniques, because the original bitstream can always be returned to.  To ensure that unaltered bitstreams are preserved intact over time, so that the authenticity of the digital object is not compromised, the following steps should be taken:

  1. Multiple copies of the bitstreams should be maintained. Best practice is that at least three copies should be kept.  The copies should be isolated from each other, both geographically, and technologically, i.e., stored on different media types. (Where the same type of media is used, it is a good idea to use different brands or batches of the media to minimise the risk of faults which may occur in a particular batch or brand.)
  2. There should be ongoing fixity and integrity checks using checksums (a minimal checksum sketch follows this list).
  3. Security controls should be put in place, i.e., password controls, firewalls, anti-virus.
  4. Permissions should be controlled and system security should be rigorously tested.
  5. The digital object should not be altered in any way.
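
As an illustration of step 2, the following is a minimal fixity-checking sketch in Python: it records a SHA-256 checksum for every file on ingest and re-verifies the stored values on demand.  The directory and manifest names are assumptions made for the example.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 checksum, reading the file in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(archive_dir: Path, manifest: Path) -> None:
    """Record a checksum for every file in the archive; run once on ingest."""
    checksums = {
        str(p): sha256_of(p)
        for p in sorted(archive_dir.rglob("*"))
        if p.is_file()
    }
    manifest.write_text(json.dumps(checksums, indent=2))

def verify_manifest(manifest: Path) -> list[str]:
    """Re-compute checksums and report files whose bitstream has changed."""
    checksums = json.loads(manifest.read_text())
    return [
        name for name, expected in checksums.items()
        if not Path(name).exists() or sha256_of(Path(name)) != expected
    ]

if __name__ == "__main__":
    record_manifest(Path("archive"), Path("manifest.json"))
    damaged = verify_manifest(Path("manifest.json"))
    print("All bitstreams intact" if not damaged else f"Integrity failures: {damaged}")
```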

Migration:

This is the second stage of the digital preservation strategy.  Migration is defined as the copying or conversion of digital objects from one technology to another whilst preserving their significant properties.  Migration focuses on the digital object itself: it aims to change the object in such a way that hardware and software developments will not affect its accessibility.  It applies to hardware, i.e., the copying of digital objects from one generation or configuration of hardware to another, and to software, i.e., the transferring of digital objects from one software application or file format to another.  It is important that migration is fully documented by metadata, and ideally it should also be reversible.  In this project, the digital objects should be migrated to a standard format on ingest; this is known as normalisation.  In order to control complexity and cost, only a limited number of standardised file formats will be supported, and all digital objects should be migrated to an appropriate supported format on ingest.  All digital objects of a particular type will be converted into a single chosen file format that is thought to embody the best overall compromise among characteristics such as functionality, longevity and preservability.  Normalisation is a more cost-effective option than migrating to a wider range of formats.  Alternatively, the project may limit this approach to just one preservation-friendly format, converting all digital objects to this format on ingest.
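
A hedged sketch of what normalisation on ingest could look like, using the Pillow imaging library: incoming raster images are converted to a single target format and each migration is logged.  The choice of TIFF, the directory layout, and the log format are assumptions for illustration, not requirements of this policy.

```python
import csv
from pathlib import Path

from PIL import Image, UnidentifiedImageError  # pip install Pillow

TARGET_FORMAT = "TIFF"  # a single preservation-friendly format, assumed for this sketch

def normalise_images(ingest_dir: Path, archive_dir: Path, log_path: Path) -> None:
    """Convert incoming raster images to TIFF on ingest, logging each migration."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    with log_path.open("a", newline="") as log_file:
        log = csv.writer(log_file)
        for source in sorted(ingest_dir.iterdir()):
            if not source.is_file():
                continue
            try:
                with Image.open(source) as img:
                    original_format = img.format
                    target = archive_dir / (source.stem + ".tiff")
                    img.save(target, format=TARGET_FORMAT)
            except UnidentifiedImageError:
                continue  # non-image objects would follow a different normalisation path
            # Document the migration so that it is traceable; reversibility is
            # provided by the separately preserved original bitstream.
            log.writerow([str(source), original_format, str(target), TARGET_FORMAT])

normalise_images(Path("ingest"), Path("archive/masters"), Path("migration_log.csv"))
```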

General precautions:

There are also some general precautions in relation to storage and handling which can be observed to mitigate the risk of physical degradation, whatever the media employed:

  1. Digital materials must be stored in a stable, controlled environment. Fluctuations in temperature or humidity should be avoided. For mixed collections the suggested temperature is around 20°C with relative humidity at 40%.
  2. Keep storage areas free of contaminants.
  3. Media should be stored in closed metal cabinets which are electrically grounded.
  4. Media should be shelved vertically rather than stacked.
  5. Minimise light levels.
  6. Store any non-digital accompanying materials in appropriate conditions.
  7. Media should always be stored in their correct cases, preferably suitable archival-quality cases.
  8. Media should be visually checked for signs of damage on a regular basis.
  9. Media should be allowed to acclimatise to any new temperature/humidity before use and be returned to controlled storage immediately after use.
  10. Minimise the handling of archival media; restrict to trained staff.
  11. Establish guidelines and procedures for acclimatising media if moving from significantly different storage conditions.
  12. Keep access devices well maintained and clean.

Conclusions:

No technological approach yet exists that can guarantee the maintenance of digital objects over long periods of time, and for this reason no single preservation strategy is sufficient to ensure their long-term preservation.  As such, the project should expect to implement different preservation strategies over time, and the digital material should be continually reappraised to establish what further preservation actions need to be taken.

Bibliography

  1. Brown, A. Practical Digital Preservation: A How-to Guide for Organizations of Any Size. Facet Publishing, London, 2013. Print.
  2. Erway, R. “You’ve Got to Walk Before You Can Run: First Steps for Managing Born-Digital Content Received on Physical Media.” 2012. Web. 20 November 2014.
  3. Lavoie, B. F. Technology Watch Report 04-01: The Open Archival Information System Reference Model: Introductory Guide. Digital Preservation Coalition. Print.
  4. Harvey, R. Preserving Digital Materials. K.G. Saur, Berlin, 2008. E-book. 20 November 2014.
  5. “Selecting the Right Preservation Strategy.” Paradigm. January 2008. Web. 25 November 2014.
  6. Digital Preservation Handbook. Digital Preservation Coalition. 2012. Web. 23 November 2014.

Super-Aggregators as Digital History Tool

Much has been written about the usefulness of digitisation for history research, and for humanities research more generally.  It has been noted that the nature of history research means that historians use digital tools differently from other humanities scholars.  In their survey of 213 North American and Western European historians, Gibbs and Owens (2012) report results suggesting that the primary use of digital tools by historians is to speed up traditional research methodologies.  They write that, where digitised primary and secondary sources are concerned, historians tend to value quantity over quality: “In contrast to other disciplines like philology or textual criticism, where exact transcription is crucial, historians frequently preferred resources that offer large quantities of materials with even a crude full-text component. This sentiment likely reflects their primary use of technology, namely that finding references and information is a much higher priority than using tools to analyze primary sources” (ibid).  In 2007, the Abraham Lincoln Historical Digitization Project at Northern Illinois University Libraries was lauded for being unlike any existing historically oriented digitisation project, in that its website also included a number of multimedia and interpretive materials (VandeCreek 2007).  Most discussion of such projects concentrates almost exclusively on the question of access and how such access has led to the democratisation of history research. Super-aggregators such as Europeana and the Digital Public Library of America (DPLA) would seem, then, to be a natural progression from digitisation projects that were organised around one research question or subject.  Such aggregators are by their nature hugely accessible and represent a further step in the democratisation of access to cultural heritage objects, but here I want to discuss their usefulness as a tool for historians.

The so-called super-aggregators represent a new development in online digital resources, and in particular in open shared resources.  Europeana was the first such project, describing itself as a cross-border, cross-domain, single access point for digitised cultural heritage materials provided by various European libraries, museums, archives, galleries, audiovisual collections and other memory institutions.  Similarly, the DPLA serves as “the central link in an expanding network of cultural institutions that want to make their holdings more visible to the public” (Howard 2013).  According to its founder, Dan Cohen, the DPLA is not concerned with the preservation of cultural heritage objects but rather with being a connector or aggregator of digital and digitised cultural heritage content.  Both Europeana and the DPLA provide access to millions of objects from thousands of content providers.  Both have standardised the metadata provided by contributing institutions and provide basic search and browse functions.  Searchers are given access to a preview of the object with accompanying metadata provided by the content provider.  This immediately raises the question of how such aggregators are more useful than search engines such as Google.  Maxwell (2010) is sceptical about the usefulness of digital archives when compared with a search engine such as Google Books.  He suggests as assessment criteria the number of hits for a given search and the ease of access, which he measures in page loads and mouse clicks.  He uses a search for “Fichte” to compare Google Books to Europeana, and finds Europeana wanting.  His conclusions are based on what he believes to be Europeana’s inefficient and inaccurate interface and, more significantly, on the unavailability of full-text search.  The reference to Europeana as an archive misunderstands its primary function, and I would also suggest that the comparison is not useful, as it attempts to equate what are essentially two different tools.  However, his criticism of Europeana’s interface has some merit: it is somewhat ungainly, and certainly not as intuitive as the DPLA’s interface.

Both Europeana and the DPLA have built an open API which they hope will encourage the independent development of applications, tools, and resources that make use of the data contained in both platforms.  The DPLA website lists completed and proposed projects based on its API, which is designed to be extensible in order to cater for the varying degrees of technical sophistication of the DPLA’s audience.  Stephanie Lampkin, a community rep for the DPLA, also explains that there are four interfaces – exhibitions, bookshelf, map, and timeline – which could be useful for research.  She suggests that the map can be used as an excellent visualisation tool to pinpoint exactly where resources are available.  Gibbs and Owens (2012) found that the respondents in their survey were mostly interested in the availability of as many resources as possible.  They were concerned about gated access but had little interest in other tools that might help them make use of the objects they were accessing in novel ways.  In an interview with John Palfrey (2013), Dan Cohen suggested that one of the benefits of the DPLA for academic libraries is that it can be used to suggest research materials and collections beyond a home institution, and to create virtual exhibits from federated sites which would serve to enhance the scholarship of students and faculty.  Aggregators use their metadata to point searchers to records relevant to their searches.  This has the effect of increasing the visibility of small and potentially unknown archives and collections.  According to Gibbs and Owens, this access to a large quantity and variety of resources is typically what historians require from a digital tool.  Perhaps this is a symptom of a general reluctance to embrace digital tools among historians; however, as things stand, such super-aggregators perform an important and desired function, one which could not easily be substituted with a search engine, no matter how sophisticated.  Access to such a large amount of content from different cultural domains not only provides historians with a large quantity of both searched-for and unknown digital collections, it also, by providing such access, has the potential to open up new research questions.
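
To make the idea of building on such an API concrete, here is a minimal sketch against the DPLA's item-search endpoint.  The endpoint and field names follow the DPLA's v2 API as I understand it, and the API key is a placeholder; treat this as an illustration rather than authoritative client code.

```python
import requests  # pip install requests

API_KEY = "YOUR_DPLA_API_KEY"  # placeholder; keys are issued on request by the DPLA

def search_dpla(query: str, page_size: int = 5) -> list[dict]:
    """Search the DPLA items endpoint and return title/provider pairs."""
    response = requests.get(
        "https://api.dp.la/v2/items",
        params={"q": query, "page_size": page_size, "api_key": API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return [
        {
            "title": doc.get("sourceResource", {}).get("title"),
            "provider": doc.get("provider", {}).get("name"),
        }
        for doc in response.json().get("docs", [])
    ]

for record in search_dpla("Abraham Lincoln"):
    print(record["title"], "|", record["provider"])
```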

Bibliography

  1. DPLA. Digital Public Library of America. 2014. Web. 26 November 2014.
  2. DPLA. “Meet Our Community Reps: Using DPLA as a Research and Teaching Tool.” June 17 2014. Web. 28 November 2014.
  3. Europeana.eu. Europeana. 2014. Web. 20 November 2014.
  4. Gibbs, F. and Owens, T. “Building Better Digital Humanities Tools: Toward Broader Audiences and User-centered Designs.” Digital Humanities Quarterly. 2 (2012). Web. 29 November 2014.
  5. Howard, J. “Digital Public Library of America: Young but Well Connected.” Chronicle of Higher Education. 60.1 (2013). Web. 28 November 2014.
  6. Palfrey, J. “What is the DPLA?” Library Journal. 7 (2013). Web. 28 November 2014.
  7. Maxwell, A. “Digital Archives and History Research: Feedback from an End-user.” Library Review. 1 (2010). Web. 20 November 2014.
  8. VandeCreek, D. “‘Webs of Significance’: The Abraham Lincoln Historical Digitization Project, New Technology, and the Democratization of History.” Digital Humanities Quarterly. 1.1 (2007). Web. 28 November 2014.

Review of the Dublin Core Metadata Standard

The Dublin Core metadata standard was created following a 1995 workshop sponsored by the OCLC and the NCSA.  The original objective of Dublin Core was to define a set of elements that could be used by authors to describe networked electronic information.  The workshop was attended by people from a range of disciplines, including librarians, archivists, and computing and humanities scholars, all of whom recognised that widespread indexing and bibliographic control of internet resources depended on the existence of a simple record to describe networked resources.  Early Dublin Core workshops popularised the idea of core metadata for simple and generic resource descriptions; the goal was to define a set of elements, and some rules for their use, that could be followed by non-cataloguers, so that the creators and publishers of internet documents could create their own metadata records.  Because of its simplicity, the Dublin Core element set is used by many outside of the library community.  It was originally developed to describe document-like objects, but it can also be used to describe other types of resources, for example internet resources such as videos, images, and web pages, or physical objects like books, CDs, or artworks.  Its suitability for use with non-document resources will depend to some extent on how closely their metadata resembles typical document metadata and on what purpose that metadata is intended to serve.

The Dublin Core Metadata Initiative (DCMI) manages the continuing development of Dublin Core and its related specifications.  The DCMI has expanded beyond simply maintaining the Dublin Core Metadata Element Set into an organisation that promotes the widespread adoption of interoperable metadata standards, shared innovation in metadata design, and best practices in metadata implementation.  It does this in a number of ways: by managing the long-term curation and development of DCMI specifications and metadata term namespaces; by managing the discussion of DCMI-wide work themes; by setting up and managing international and regional events; and by creating and delivering training resources in metadata best practice.

The DCMI has a formal approval process through which the semantic and technical specifications of Dublin Core are approved. There are five categories of proposals that can be made to the DCMI:

  1. proposed changes to metadata terms;
  2. proposals for DCMI Recommendations;
  3. proposals for DCMI Recommended Resources;
  4. proposals for Application Profiles as DCMI Recommended Resources; and
  5. proposals for DCMI Process Documents.

Proposals can be submitted to the DCMI managing director by internal and external organisations, or by any individual.  During the formal approval process, proposals can be assigned one of the following statuses:

  1. Community Specification, meaning the specification is put forward for DCMI endorsement for use and publication by task groups within the DCMI;
  2. Proposed Recommendation, a technical specification considered close to stable and with growing support for adoption by the Dublin Core community;
  3. Working Draft, a document under development;
  4. Process Document, which describes the processes and procedures relevant to the operation of DCMI and its work structure;
  5. Recommended Resource, a resource that the DCMI Executive recommends for use by the DCMI community in support of their use of Dublin Core metadata; and
  6. Superseded Recommendation, a specification that has been replaced by a newer version.

When proposals are first submitted, the directorate acknowledges receipt and decides whether the document falls into one of the five categories. A first decision on whether DCMI will accept a proposal for consideration is communicated to the submitter no later than two months after submission, with a specification of the process and timeline foreseen.

The Dublin Core Metadata Element Set comprises 15 core elements, which together are referred to as Simple Dublin Core.  Qualified Dublin Core contains an additional three elements as well as a group of element refinements known as terms. The 15 core elements are Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights.  The three additional elements of Qualified Dublin Core are Audience, Provenance, and Rights Holder.  Dublin Core is non-hierarchical, and each element is optional and repeatable.  One of the most significant goals of Dublin Core is simplicity of creation and maintenance, so that it is easy for non-specialists to use; the effect of this is to encourage the proliferation of metadata records for resources and, by extension, to provide for effective retrieval of those resources in the networked environment. One problem that arises between different metadata standards is that people from different fields of knowledge use different terminology to describe the same thing.  For example, the <creator> element can be used to describe an artist, an author, or the creator of an electronic resource.  It is for this reason that the Dublin Core elements are described using a universally understood semantics; this serves to increase the accessibility of resources.  Further, the extensibility of Dublin Core increases its potential for interoperability, by acknowledging that other communities of metadata experts are likely to create and administer additional metadata standards to fulfil the needs of their particular communities.
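
As a small illustration of Simple Dublin Core in practice, the following Python sketch serialises a hypothetical record using the standard dc element namespace; the object described, and all of its values, are invented for the example.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

# Every element is optional and repeatable, so a record is simply
# a bag of (element, value) statements. All values here are invented.
record = ET.Element("record")
for element, value in [
    ("title", "Photograph of Sackville Street, Dublin"),
    ("creator", "Unknown photographer"),
    ("subject", "Dublin (Ireland) -- History"),
    ("date", "1916"),
    ("type", "Image"),
    ("format", "image/tiff"),
    ("language", "en"),
    ("rights", "Public domain"),
]:
    ET.SubElement(record, f"{{{DC_NS}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
```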

The Dublin Core Metadata Element Set also implements some principles that are critical to understanding the relationship of metadata to the resources it describes.  The one-to-one principle means that a Dublin Core record describes one, and only one, version of a resource: metadata must be provided for both an artefact and its digital reproduction, and one is not taken to represent the other.  The dumb-down principle means that the purpose of a qualifier or term is to refine the information provided by the element, not to extend it in any way; the information must remain understandable even if the qualifier is taken away.  The third principle is that of appropriate values, which means that the person implementing the metadata must always bear in mind the requirement of usefulness for discovery.  It is also worth noting that while Dublin Core was originally developed in English, the DCMI has acknowledged the multilingual and multicultural nature of electronic resources, and versions of Dublin Core are being developed in other languages.  In addition to its use for resource description and its interoperability with other metadata standards, Dublin Core can also be used to provide interoperability for metadata vocabularies in the Linked Data cloud and in Semantic Web implementations.
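
The dumb-down principle can be illustrated with a toy example.  Assuming a small, hand-picked mapping of refinements to their parent elements (the real DCMI term set is much larger), a consumer that does not understand qualified terms can fall back to the unqualified elements without losing intelligibility:

```python
# Hypothetical mapping from a few qualified terms to their parent elements.
DUMB_DOWN = {
    "created": "date",
    "dateCopyrighted": "date",
    "abstract": "description",
    "isPartOf": "relation",
}

def dumb_down(statements: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Replace each refinement with its parent element. Values are kept as-is,
    which is only safe because a refinement narrows, and never extends, meaning."""
    return [(DUMB_DOWN.get(term, term), value) for term, value in statements]

qualified = [
    ("created", "1916-04-24"),
    ("abstract", "Street scene photographed during the Easter Rising"),
]
print(dumb_down(qualified))
# [('date', '1916-04-24'), ('description', 'Street scene ...')]
```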

In Ireland, the Dublin Core Metadata Element Set has been used to record metadata on a number of projects.  A Digital Edition of Táin Bó Fliodhaise; A Digital Edition of the Alcalá Account Book; Art College Student Registers; Conflict Archive on the Internet; and the Earley and Company Archives have all used Simple Dublin Core to record the metadata of their resources.  Qualified Dublin Core has been used to record the metadata of objects contained in the Irish Virtual Research Library and Archive (IVRLA), now the UCD Digital Library.  The primary objective of the IVRLA project was to digitise a core number of archival collections held in several University College Dublin repositories.  It took key humanities resources from five repositories containing physical materials in manuscript, printed, audio, video and graphic formats.  In many cases, due to the rarity or fragility of these resources, they were not accessible to scholars outside of UCD.  The UCD Digital Library uses Qualified Dublin Core to record the metadata of all of the objects in its collections.  As the formats of the objects vary, Dublin Core is a suitable metadata standard to use: its semantics can apply to objects in different formats across different communities of knowledge, thus making them searchable and accessible.  MODS and METS are two other metadata standards which could have been applied to the UCD Digital Library; however, both of these standards are more complicated than Dublin Core and require specialist knowledge to implement.  For example, there are seven major parts to a METS document, and MODS requires knowledge of MARC21.  Another central reason why Dublin Core was the most useful metadata standard for the Digital Library is the variation in the types of objects it digitises: for example, maps of Dublin, data sets from the urban modelling group, and photographs of the Irish Civil War and the 1916 Rising.  Also, and perhaps more significantly, many of the collections in the Digital Library are also published to Europeana, which asks that metadata conform to the Europeana Data Model, which incorporates the earlier Dublin Core-based Europeana Semantic Elements.  The use of Dublin Core therefore allows for ease of ingestion into Europeana.

Bibliography

  1. Chan, L. M. & Zeng, M. L. “Metadata Interoperability and Standardization – A Study of Methodology Part 1: Achieving Interoperability at the Schema Level”. D-Lib Magazine. 12(6) (2006). Available at: http://www.dlib.org/dlib/june06/chan/06chan.html Accessed 30 October 2014.
  2. “Metadata Basics”. Available at: http://dublincore.org/metadata-basics/ 24 October 2014. Web. Accessed 6 November 2014.
  3. Digital Humanities Observatory. “Digital Research and Projects in Ireland.” Available at: https://web.archive.org/web/20100303205203/http://dho.ie/drapier/ Web. Accessed 6 November 2014.
  4. Heery, R. “Review of Metadata Formats”. Program. 30(4) October 1996, pp. 345-373. Web. Available at: http://www.ukoln.ac.uk/metadata/review.html Accessed 30 October 2014.
  5. Hillman, D. “Using Dublin Core”. Available at: http://dublincore.org/documents/2001/04/12/usageguide/ Web. Accessed 5 November 2014.
  6. “A Framework of Guidance for Building Good Digital Collections”. 2007. Web. Available at: http://www.niso.org/publications/rp/framework3.pdf Accessed 6 November 2014.
  7. “Understanding Metadata”. 2004. Web. Available at: http://www.niso.org/publications/press/UnderstandingMetadata.pdf Accessed 30 October 2014.
  8. UCD Digital Library. http://digital.ucd.ie/ Web. Accessed 6 November 2014.

Los Angeles and the Problem of Urban Historical Knowledge

Los Angeles and the Problem of Urban Historical Knowledge (LAPUHK), described by its author as a “multimedia essay”, was published in web form in 2000 to accompany the December 2000 issue of The American Historical Review.  The website, the outcome of a six-year project by Philip J. Ethington, Professor of History and Political Science at the University of Southern California, combines visual media with a rather dense essay on the problem of urban historical knowledge.  The most basic purpose of LAPUHK is to give readers an opportunity to explore LA both in overview and in close detail, through a variety of images, maps, and quantitative data visualised on animated maps and graphs.  Ethington describes his attempt to map, in both the literal and the figurative sense, the vast metropolis of Los Angeles, and by extension to anticipate the move to a more productive stage of comparative urban scholarship.  The overriding objective of the project is to answer the question of how the historian can make sense of something as historically mutable as a vast metropolis.

The site is notable for the relative quaintness of its design and presentation. Visually, it is both unsophisticated and unappealing.  The home page is a black background, with the text and main image covering less than half of the screen.  It is laid out like the title page of a book, with the author’s name, the title of the publication, and publishing details all displayed at the top of the page, above a series of sequential photographs that one has to scroll down to see in its entirety.

[Screenshot: the LAPUHK home page]

There are other design issues which point to the site’s experimental genesis.  Clicking on the sequence of images on the home page doesn’t lead to a zoomed-in version of the image, as one would now expect, but rather to the preface of the essay.  The white font against a black background is difficult to read, and the underlining of hyperlinks is both inconsistent and more obtrusive than we are used to, 14 years later. But these criticisms serve little purpose given the age of the website, and so it is unnecessary to discuss them further.  What is remarkable about the site is that, despite its author’s avowal otherwise, the website appears to function primarily as a repository for his essay.  The essay, and indeed the website as a whole, is suffused with the language of the book.  The preface to the essay is linked to from the homepage, and it is possible to download and print the entire textual contents of the site. This printed version is 29 pages long and begins with a table of contents that lists the various elements (or chapters) of the site.  In explaining how to read the website, Ethington suggests the analogy of the newspaper as a way of navigating the hypertextual structure of the site.  This direction to the reader is indicative, firstly, of the novelty of the medium, and, secondly, of what Hitchcock (2011) has called the “fascist authority” of the book format; he writes that, “In the last couple of decades, historians who are unduly fascinated by books have restricted themselves to asking only the kind of questions books can answer.”  LAPUHK represented an important step towards a new methodology, one which prepared the way for new questions and new ways of answering them.  Seefeldt and Thomas (2009, 3) credit the project with establishing a different model of historical scholarship, “one that had an ambitious goal to both democratize the past and attempt alternative historical, theoretical and methodological approaches.”  The project exemplified some of the difficulties of performing urban history, in particular those presented by the obstacles of scale, complexity, historical erasure, and postmodern scepticism.  The success of the project must therefore be judged not on the achievements of the site but, rather, on the project’s situation within a new methodological impulse.

According to Cohen and Rosenzweig (2014), “doing digital history well entails being aware of the technology’s advantages and disadvantages and how to maximize the former while minimizing the latter.”  It is apparent from the layout of the site that Ethington was unaware of how to accomplish this.  The ergodic nature of the website form is not counterbalanced by an intuitive layout, which would have reduced the effort required on the part of the reader.  The site is not easy to read, but the application of image-manipulation techniques to photographs of Los Angeles did achieve some interesting results.  Photos that have been manipulated to create the impression of a panoramic viewpoint provide a new perspective on the Los Angeles of past and present.  These photos represent one of the ways in which digital history projects are capable of changing historical knowledge production.

[Screenshots: panorama-style photo manipulations from the site]

However, there are other issues with the site that highlight Ethington’s failure to anticipate how changes in technology would impact upon the project.  Indeed, it is evident that little, if any, provision was made to guarantee its longer-term sustainability, though given the inchoate nature of the project’s form, Ethington can hardly be held responsible for this lack of foresight.  The essay, photographs, maps, and visualised data are still available, but the file format of the videos is not supported by current browsers, and so they can no longer be viewed online.  Cohen and Rosenzweig (ibid) have warned that, “When you move your history online, you are entering a less structured and controlled environment than the history monograph.”  This warning brings me to the only criticism for which Ethington can reasonably be held responsible, which concerns the accuracy of the data he provides. The Tour of Global Cities page lists some of the world’s most populous cities, with a population figure and a “Y” or “N” to indicate whether or not each is a capital city.  In a hyperlinked footnote, Ethington explains the difficulty of obtaining accurate population statistics.  In his bibliographic essay, Ethington explains that it is “written in the spirit of an introduction for the neophytes” (Ethington 2000), and I would question whether this inconsistency, and the lack of visibility of the footnote explaining the population figures, would have arisen had Ethington published these statistics in traditional print form. Despite this, LAPUHK was remarkable because it represented a significant development in digital history and was an important stepping stone towards the necessary hybridity that Zaagsma (2013, 47) refers to as “the new normal”.

Bibliography

  1. Cohen, Daniel J., and Rosenzweig, Roy. Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web. Web. http://chnm.gmu.edu/digitalhistory/ Accessed 10 Oct. 2014.
  2. Ethington, Philip J. “Los Angeles and the Problem of Urban Historical Knowledge”. 2000. http://www.usc.edu/dept/LAS/history/historylab/LAPUHK/index.html Accessed 6 Oct. 2014.
  3. Hitchcock, Tim. “Academic History Writing and Its Disconnects.” Journal of Digital Humanities 1 (2011). Web. http://journalofdigitalhumanities.org/1-1/academic-history-writing-and-its-disconnects-by-tim-hitchcock/ Accessed 10 Oct. 2014.
  4. Seefeldt, Douglas, and Thomas, William G. “What Is Digital History? A Look at Some Exemplar Projects”. University of Nebraska-Lincoln: 2009. Web. http://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1097&context=historyfacpub Accessed 9 Oct. 2014.
  5. Zaagsma, Gerben. “On Digital History.” BMGN – Low Countries Historical Review 4 (2013): 3–29. Web. http://www.bmgn-lchr.nl/index.php/bmgn/article/view/9344 Accessed 12 Oct. 2014.
