Interior Difficulties

The Bronze Age pots held in the Irish archaeology museum carry a very interesting history. The photogrammetry process used to capture my own Bronze Age pot was a method-specific practice.
The bronze pot was placed within a lightbox, with LED lights placed at three points surrounding the item. Photogrammetry needs equal light on all parts of the object, to ensure no shadows or shininess. The box was placed on a table, which made height a problem for the practice. The larger LED light, mounted on a larger tripod, looked down on the object from above. Chairs had to be used on either side of the lightbox to bring the remaining lights up to the surrounding height. Balance becomes quite important here: the lights cannot move during the photogrammetry process, otherwise the data will be corrupted because common points cannot be found between photos.
The bronze pot was placed on a rotating table, allowing for easy use of the Canon 60D on its tripod. Instead of moving around the object, as one would with a larger subject, the item could be spun on the table and the photos captured. The pot had to be turned ever so slightly each time before the next photo was taken.
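How far to turn the pot each time follows from the overlap the software needs between neighbouring shots. A back-of-the-envelope sketch in Python, with illustrative figures rather than the ones used on the day:

```python
import math

# Back-of-the-envelope: photos needed for one full turn of the table,
# given the camera's horizontal field of view and a target overlap.
# Both figures below are assumptions for illustration.
horizontal_fov = 40.0   # degrees covered by each photo at this distance
overlap = 0.7           # fraction of each photo shared with the previous one

step = horizontal_fov * (1 - overlap)   # degrees of new coverage per photo
shots = math.ceil(360 / step)           # photos per full revolution

print(f"Turn the pot about {step:.0f} degrees each time: {shots} photos per revolution")
```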
Importantly, the curator at the museum had to flip the pot onto its other side, so that the inside and bottom of the pot were captured evenly for a full model. The inside of the pot would prove to be the most difficult part to capture. As the pot rotated and the capture angles changed, it became clearer that the inside of the pot was being captured correctly.

Woodman Diary – A technical aspect

The project to create a digital scholarly edition of Albert Woodman's World War One diaries has been equally worthwhile and challenging. The collection has two diaries, named after their brands: Wilson and Butterfly. The project team consists of Masters and PhD students of Digital Humanities at Maynooth University. The abilities of all involved have proved advantageous to the project. However, the most important part of the module and project is the gaining and perfecting of new skills. These new skills included encoding the diary using XML (Extensible Markup Language), creating and editing annotations and named entities, and helping to design the diary webpage itself.
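To give a flavour of the encoding work, here is a minimal sketch of a TEI-style diary entry built with Python's standard library. The element names, attributes and sample date are illustrative; the project's actual schema may well differ.

```python
import xml.etree.ElementTree as ET

# A TEI-flavoured sketch of a single diary entry. Element and attribute
# names are assumptions for illustration, not the project's actual schema.
entry = ET.Element("div", {"type": "entry", "when": "1918-01-01"})
ET.SubElement(entry, "dateline").text = "Tuesday, 1 January 1918"

p = ET.SubElement(entry, "p")
p.text = "Crossed to "
place = ET.SubElement(p, "placeName", {"ref": "#dunkirk"})  # a named entity
place.text = "Dunkirk"
place.tail = " with the Signal Company."

print(ET.tostring(entry, encoding="unicode"))
```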
A technical aspect I was personally involved in was the creation of site wireframes. A wireframe acts as a website's blueprint or schematic view, essentially showing its skeletal framework. Site wireframes allowed the team to visually work through ideas about user interactions and general design. Although I had never designed mockups or wireframes before, I understood the need to develop my own skills while completing the project. The project manager, Shane McGarry, allowed an open platform for submissions: any team member with an idea could present it using whatever software or programme they found suitable.
I chose Balsamiq Mockups as the programme to create the wireframes. It is advertised online as "zenware", software which totally immerses the user. The software has a shallow learning curve, a very sleek design and simple-to-use graphical boxes. As a complete beginner in this area of design, I appreciated its easy installation, pleasant design and in-depth instructions.
The project team worked constantly on the Albert Woodman diary during the second semester. The ideas regarding interaction and design had been decided during previous meetings, including functional and non-functional requirements. All of these requirements would need to be presented in the wireframes.
Balsamiq's drag-and-drop feature supports both mobile and browser views. I began the mockups by bringing a browser frame into the window. Further elements could then be placed over the browser window to represent a real web page. Adding a text box to one side and an image viewer on the other brought the browser up to team requirements. However, additional buttons and elements were still needed.
The wireframes I produced were based around three main ideas:
Transcription only page – as shown in the image below. The page has a main bar at the top with links to other sections of the website. The main transcription, befitting its importance, is placed in the center of the page, with next and previous buttons beside it to navigate through the collection. Navigation was always important to the project; the calendar to the right also helps here, allowing for greater leaps through the diary. One interesting aspect we mused about was adding the option to turn annotations on and off. This would be a useful addition for a more standard reading of the text, in case a user prefers a read-only version. These buttons are placed to the side of the transcription along with "place in diary" and an option to cite.
Side by Side view – a rather simple idea for presenting a diary, but an important one. Instead of merely reading a large block of text, a user can become more immersed in the artefact itself. Trying to make out Woodman's own handwriting and linking it to the transcription adds to the overall user experience. The image below also shows the idea of annotations as pop-up menus holding additional information. The colouring was used only to distinguish named entities from annotations, and remained open to further discussion when the wireframes were presented.
Monthly View – although rarely discussed in our meetings, the monthly view became an idea linked from the transcription only page. Clicking on "place in diary" would bring you to a monthly view. From there a user could find Woodman's previous experiences or specific letters. Yet again, it adds to the general user experience of the diary, the most important consideration when dealing with a public project.


As seen above, I added a checkbox to the side of the text view. Annotations/no annotations acts as a switch: with annotations on, words such as Dunkirk are highlighted or underlined. This ensures that users feel they have a choice within the website, the freedom to take their own path. Importantly, we as a team would prefer that path to remain on our website. One of the ways we agreed to encourage this insular user experience was adding additional media material. Presented in many different forms, such as video, pictures and articles, media material heightens the experience and means more wireframe work.
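A minimal sketch of how such a switch might behave, assuming annotations arrive as inline span elements in the transcription (the real site's markup and front-end code will differ):

```python
import re

def render(transcription: str, annotations: bool) -> str:
    """Return the annotated view, or strip the markup for read-only reading.

    Assumes annotations are wrapped as <span ...>...</span>; this is an
    illustrative convention, not the Woodman site's actual markup.
    """
    if annotations:
        return transcription
    return re.sub(r"</?span[^>]*>", "", transcription)

page = 'Crossed to <span class="entity">Dunkirk</span> with the Signal Company.'
print(render(page, annotations=True))   # highlighted entities intact
print(render(page, annotations=False))  # plain, read-only text
```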

In conclusion, my experience with site wireframes in this project has allowed me to develop further skills. Although I have worked on user interaction ideas before when making applications, this more in-depth look at design has proved very useful. The meeting at which we compared all of our wireframes was a great point of discussion; the debates which ensued proved how particular the design of a website can become. Ideas from each wireframe were combined into a universal design for the website. As my thesis is based around the creation of an application, which will greatly involve design and user interaction, the skills I have learned in the study of Digital Scholarly Editions will prove considerably useful.

Practicum Blog Post 1

The practicum, based in Louth County Museum, Dundalk, centres on three-dimensional recording and presentation. The museum is one which has excelled in recent years despite insubstantial funding, thanks to its proximity, size and number of visitors. The museum has successfully leaped into the digital age.
The megalithic stones found in Newtownbalregan and Tateetra have a rich history. These ancient stones had been reused as building stones for souterrains. Bearing crosses, spirals and one almost teardrop shape, each stone has its own historical worth. These incredibly large and heavy stones can gain further worth when captured with modern equipment. The question arises: how were these ancient people so well rehearsed in the forming of grave sites? Can modern techniques now offer new means of viewing and perceiving them? The formation of these graves also raises the question of how to produce the three-dimensional representations for this practicum.
Photogrammetry became the focus of the project instead of a laser scanner. The models made by a laser scanner, although dimensionally precise, lack realistic textures. Further problems included the cost and transportation of such a large tool.
Photogrammetry is a three-dimensional recording technique, a mix of measurement and photography. Although a complex algorithm works out the stitching together of the photos, the process is rather simple for an experienced user. The object to be recorded needs equal surrounding light.

The capturer needs to select a path around the object along which to take their photographs. This series of photographs is collected from different shooting angles. The size and shape of the megalithic stones in Dundalk demand varied approaches. The stones vary from full grave covers to smaller decorated stones. The smaller stones can be placed on a table and rotated, allowing for a solid camera base on a tripod. The larger stones, which have proven impossible to move, are more difficult: constantly moving with camera in hand is the only way to capture them correctly. The movement means a marginally longer period of time is required to fully capture the larger stones.
The critical need in photogrammetry is overlap between photos; how much depends on the software being used. Agisoft PhotoScan allows for a greater dataset to be used. Masking removes the unneeded surroundings of each image, and an algorithm then aligns the photos. From this point the programme produces a point cloud, then a mesh, and finally the finished textured product.
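PhotoScan Pro exposes this same pipeline through a Python scripting API. The sketch below follows the 1.x API, whose method names and parameters vary between versions, so treat it as illustrative rather than a drop-in script:

```python
# Illustrative batch pipeline for Agisoft PhotoScan Pro's Python API.
# Method names follow the 1.x releases and vary between versions;
# file names are placeholders.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["stone_001.jpg", "stone_002.jpg", "stone_003.jpg"])

chunk.matchPhotos()       # find common points across overlapping photos
chunk.alignCameras()      # recover camera positions, sparse point cloud
chunk.buildDenseCloud()   # dense point cloud
chunk.buildModel()        # mesh
chunk.buildUV()
chunk.buildTexture()      # finished, textured model

doc.save("stone.psz")
```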
A further, particularly interesting aspect of the project is RTI (Reflectance Transformation Imaging). This technique will allow certain material on the stones to be presented more easily: crosses and etchings which may be invisible to the naked eye can be shown clearly to an audience.
The new three-dimensional models will soon be completed. However, the questions of where and how to present these models still remain. This is an important problem which needs to be solved before the completion of the practicum.
As a final note, the successful completion of this practicum will further show the adaptation and advancement of the Louth County Museum and its curator, Brian Walsh, into the digital age, allowing the museum to present an outlook of access and interest towards the general public: two incredibly important aspects of working successfully in cultural heritage and museums.

Beginning 3Ds: A very simple straw!

I have begun to use Autodesk's 3ds Max as part of my Remaking the Physical module. Available free to students, it proves to be a very useful application, albeit with a relatively steep learning curve.

I watched a few different tutorials and decided to try and make a straw! A simple design in the scope of the application, but a rewarding endeavor anyway.

To begin, I created two separate tubes, defining the inner and outer radii to a thinner shape within the Modify panel. One is placed below the grid line and the other above, to lessen the amount of movement needed later in the project.

Screenshot (113)

 

A hose was selected from the extended primitives, leaving its specifics undefined until the shape is attached to the tubes.

Screenshot (114)

 

However strange the outcome looks at first, the hose is then bound to the two tubes, by selecting the top object and the bottom object in this case.

Screenshot (115)

 

I kept the colours separate for now, to mark the objects as distinct. The tension of the hose was brought down to 30 for each end to ensure a better shape around the curve.

Screenshot (116)

 

The top view was the easiest way to match the diameter of the hose to that of the tubes. Within this viewport the diameter can be altered in the Modify section of the toolbar on the right, adjusting by number or by simple mouse dragging.

Screenshot (117)

 

A full view of the completed shape, looking more like a straw now; only a slight bit of editing follows. Using the Select and Rotate function allowed for slight movement of the smaller tube to give a more realistic look.

 

Screenshot (118)

 

The finished object along with the correct colouring. However, I had forgotten to add the percentage parameters for the hose, which ensure that the full object gets altered and not just a specific section. As shown below, the meeting of the tubes and the hose is slightly ugly.

 

Screenshot (119)

 

The final, final object, presented with the classic colouring.

Screenshot (120)

 

Open Source

Online collections have always been heavy users of software. Perhaps the most useful applications are the ones which are free to use: open source applications. Open source applications offer some of the richest resources available in the digital age. Their source code is freely available, along with updates and patches, and they generally have huge communities of users. The programs can be redistributed, viewed and edited. Open source is not limited to applications; it also includes operating systems, databases, games and even programming languages (ESRI, 2011). There are some big examples of open source applications, including Voyant Tools, Gephi and the Google Ngram Viewer. The move towards open source products is linked directly to the notion of open data, after the late 1990s saw paid digitised products fail. The problem with paid services runs counter to Tim Berners-Lee's original vision of the Internet as a free platform for the sharing of information (Berners-Lee, 2014).

The main debates surrounding open source software will now be examined further. The notion of collaborative production extending to other forms of human endeavour is a great point brought up by Lemley and Shafir, and their article raises a key issue regarding the scope of open source as a theory. On one side, economic rationality would push open source towards remaining low-cost and small-scale, carried out by universities and the like. On the other side, if collaborative efforts with non-financial incentives were funded at scale, open source could conceivably replace paid products. It is this second position which holds the greater promise for the world wide web. The incentive for organisations to produce open source products has grown since the article was written in 2011 (Lemley, 2011). Companies are beginning to produce open source because they realise that doing otherwise would severely damage trust in their company. Although this is open source in a general sense, there are many benefits for the scholarly community: open source software has been described as a public good and an incredible tool for researchers, students and practitioners in many fields (Krogh, 2006).

Although this blog post has focused almost entirely on the advantages of open source, there are a few disadvantages which have been stated across sources online. Mark Tarver argues in his blog post "The Problems of Open Source" that most free software is poor and unstable. Although the post was written in 2009, some of his points remain solid: there are a large number of poor-quality free software applications, though the same could be said of paid software. A more recent article, from February 2014, offers more detailed criticisms, including the difficulty of using some open source technologies; the example given is Linux. Linux may have better options and more power than Mac OS or Windows, but the learning curve would hinder work. Two further pertinent issues raised in the article are that a customer needs a vendor who will remain in business and offer a specific point of contact, and, relatedly, that support for paid products is generally of a higher standard (Rubens, 2014).

A specific project will now be discussed in terms of open source software. The last blog post on this page, regarding the Letters of 1916 project, commented briefly on the separation of letters into sections. The Letters of 1916 is a direct example of a Digital Scholarly Edition which uses an open source application, in this case Omeka. Omeka is a system based around providing a framework for narratives and collections, a perfect combination for the letters project. It is an open source content management system developed by the Roy Rosenzweig Center for History and New Media (CHNM) at George Mason University (GMU). The initial release was in 2008, with a stable release following in July 2014. Technically speaking, Omeka uses the unqualified Dublin Core metadata standard as its basis. Apart from the Letters of 1916, various online resources make use of the content management system; the majority are cultural heritage sites with a large number of objects which have to be classified and stored efficiently. The New York Public Library and the Newberry Library are two of the best-known users of the product. It is a perfect example of the advantages of open source applications: there are constant updates on the site, which also includes documentation, add-ons, a forum and a get-involved section.
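To make the metadata side concrete, here is a minimal sketch of an unqualified Dublin Core record of the kind Omeka stores, built with Python's standard library; the field values are invented for illustration:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"  # unqualified Dublin Core namespace
ET.register_namespace("dc", DC)

# Illustrative values only; a real Letters of 1916 record would differ.
record = ET.Element("record")
for element, value in [
    ("title", "Letter from Joseph Michael Stanley to George Bernard Shaw"),
    ("creator", "Stanley, Joseph Michael"),
    ("date", "1916-03-28"),
    ("subject", "World War 1"),
    ("type", "Text"),
]:
    ET.SubElement(record, f"{{{DC}}}{element}").text = value

print(ET.tostring(record, encoding="unicode"))
```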

In conclusion, the use of open source applications for digital scholarly editions proves to be an obvious decision. Although there are many arguments surrounding the economics and production of open source products, a trend has slowly begun to show. The past few years, as mentioned previously, have seen a direct increase in open source products, which in turn has increased their quality. The updating, social forums and transparency of the applications really show their worth. This is the most important part of such products: a user can feel fully in control and part of a larger community. Open source software is a notion which is only growing in the online world, and it seems that it will continue to do so for the next few years.

Bibliography

Esri. "Open Source Technology and Esri." Web log post. ArcNews. ESRI, Mar.-Apr. 2011. Web. 11 Dec. 2014.

Krogh, Georg von. "The Promise of Research on Open Source Software." Management Science 52.7, Open Source Software (2006): 975-83. JSTOR. Web. 12 Dec. 2014.

Berners-Lee, Tim. "Web at 25: The Past, Present and Future." Web log post. Wired, 6 Feb. 2014. Web. 10 Dec. 2014.

Lemley, Mark A. "Who Chooses Open-Source Software?" The University of Chicago Law Review 78.1 (2011): 139-64. JSTOR. Web. 12 Dec. 2014.

Rubens, Paul. "7 Reasons Not to Use Open Source Software." Web log post. CIO. N.p., 11 Feb. 2014. Web. 10 Dec. 2014.

Review of a Digital History Tool: Gephi – Networking through History

The history of network visualization has its roots in the 20th century. Whyte and Coleman both frequently used sociograms to visualize their data, although the result was determined by their own artistic and analytic eye (Moody). Gephi is one of the most modern graph visualization applications available and certainly adds to the world of statistical analysis. The importance of Gephi and visualization tools derives directly from human interaction: it is this interaction which allows the tool to "leverage the perceptual abilities" of a user to find new features within data (Bastian). This is a very important point about the success of Gephi as a digital tool. The information creates a new capacity for human interaction which a brief summary of statistics leaves vacant (Moody). The design of many digital tools for textual representation only stands in contrast to Gephi: a wide variety of designers were forced into providing tools and transforming prototypes to fit a wider audience (Mirel). Gephi, as this review will show, presents a friendly user interface with a sleek design and simple user controls.

The main objective of Gephi is to provide an interactive visualization and exploration platform for all kinds of networks and complex graphs. Gephi definitely succeeds in providing features which help meet this objective. The importance of understanding network theory will be discussed later in the review; for now, a simple description is that a graph is made of nodes (a person, place or thing) and edges (relationships between nodes). There are many aspects of Gephi which open up and extend this further. Developed modules are used to import, manipulate, spatialise, filter and export networks. Visualization uses 3D rendering through a computer's graphics card so that the processor can focus on other tasks. Nodes can be personalized to include images and to avoid overlapping. Algorithms allow the data to be moulded by the user, including real-time movement involving speed, gravity, repulsion, auto-stabilization, inertia and size adjustment. These algorithms are easily selected and work in real time, which ensures that any user can benefit from the feature. The open source aspect of Gephi extends to free plugins within the application. These help the digestion of large amounts of data, for example the Semantic Web Plugin, which allows specific SPARQL (a query language) searches through large resources including DBpedia, a database of Wikipedia entries.

Technical problems with computer applications generally begin with the installation of a product. Gephi proves reasonably difficult to install as a result of its Java dependency. Windows 8 will have issues if the newest Java is installed, and certain files will need to be altered to ensure Gephi runs correctly. Away from the installation issues, however, Gephi is a very friendly application with a plethora of tutorials and a quite simple user interface. The main benefit of using Gephi is that the program can be as complicated or as simple as the user wishes. A user who is more comfortable with spreadsheets such as Microsoft Excel can place personal information into the application and turn it into a graph. A user can also delve further and use query languages and huge datasets to represent a larger corpus of texts. The most popular query language for these datasets with Gephi is SPARQL, mentioned previously. No knowledge of coding is needed beforehand; there are various sample queries and tutorials online which help towards understanding the data further. This is an important point when discussing the impact that Gephi, as an application and Digital History tool, could have on history.
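A sketch of that spreadsheet-free route: query DBpedia with SPARQL, build a small network, and export it as GEXF for Gephi. It assumes the third-party SPARQLWrapper and networkx packages, and the query itself is a simplified example:

```python
# Query DBpedia, build a directed graph, export GEXF for Gephi.
# Assumes `pip install SPARQLWrapper networkx`; the query is illustrative.
import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?writer ?influenced WHERE {
        ?writer a dbo:Writer .
        ?writer dbo:influenced ?influenced .
    } LIMIT 100
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

graph = nx.DiGraph()
for row in results["results"]["bindings"]:
    graph.add_edge(row["writer"]["value"], row["influenced"]["value"])

nx.write_gexf(graph, "influence.gexf")  # open this file directly in Gephi
```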

The impact on the historical community has not been very widespread. Gephi is an untapped resource which would prove very useful for historians if used to a greater extent. Harvard has run a series of projects which made use of the program, with some very interesting results. The research team noted that looking at a matrix of relationships isn't exactly new, but the advent of digital technologies is spreading the idea across disciplines (Harvard). One specific example proved very compelling: The Inner Life of Empires includes five connection graphs which make use of various Gephi plugins. One of the graphs can be seen below; it represents the relationships between certain people along with a full list of information, giving attributes and undirected links. This information is accessed when a node is clicked, and every node within the graph holds the same amount of data.


Kumekawa, Ian, and Emma Rothschild. Interactive Network Map. Digital image. Harvard University, 2013. Web. 1 Dec. 2014.

Two further popular projects which have used Gephi as a basis are a conjunction with the Google PageRank algorithm to find the most influential novels of the 19th century, and the Republic of Letters. A closer look at the Republic of Letters shows the incredible scope available using Gephi. The website offers a list of letters and the relationships between some of the great minds in European history. Below is a graph showing the letters of Voltaire; there are various other visualizations of the data, allowing for a user's own method of inquiry. This is an important aspect for history as it grows towards a more universal public history mindset. The true beauty of Gephi is presented here, illustrating just how essential it could be for furthering history projects.


Stanford University. Comparison between the Correspondence Networks of Locke and Voltaire. Digital image. Voltaire and the Enlightenment. N.p., 2013. Web. 1 Dec. 2014.

In conclusion, Gephi is a tool which is being used only sparingly in the historical world. If history is to keep pace with the digital age, it needs to find new and useful methods of dealing with information. An application like Gephi allows a person to heighten their perceptual abilities using graphs and data visualizations (Bastian). Important conclusions could be reached if the use of this application came to the forefront. However, Gephi will never reach its full potential in the subject of history unless effort is placed into learning about networks, graphs and a variety of digital tools: an education which, if present within a history degree, may help carry the discipline into the modern world.

 

Bibliography

Moody, James, Daniel Mcfarland, and Skye Bender‐Demoll. “Dynamic Network Visualization.” American Journal of Sociology 110.4 (2005): 1206-241. Web.

Mirel, Barbara. “Building Network Visualization Tools to Facilitate Metacognition in Complex Analysis.” Leonardo 44.3 (2011): 248-49. Web.

Bastian, Mathieu, Sebastien Heymann, and Mathieu Jacomy. "Gephi: An Open Source Software for Exploring and Manipulating Networks." International AAAI Conference on Weblogs and Social Media (2009): n. pag. Web. 1 Dec. 2014.

Center for History and Economics. "Visualizing Historical Networks." Visualizing Historical Networks. Harvard University, 2013. Web. 1 Dec. 2014.

Annotated Bibliography

Bree, Linda, and James McLaverty. "The Cambridge Edition of the Works of Jonathan Swift and the Future of the Scholarly Edition." Text Editing, Print and the Digital World. Farnham, England: Ashgate, 2009. 127-37. Print.

The main question raised is how a scholarly edition should be presented: in electronic or print form only, or a combination of both? Electronic editions face few limits beyond the expertise and knowledge of their creators, whereas print editions face publishing limits. The project outline has to be determined from the start: with print copies a publisher would have set the design, whereas in electronic form most of these decisions have to be made by the creators. The information required goes beyond the knowledge originally needed for an edition; technical expertise comes into play. An interesting point here echoes one made in a previous lecture about the need for historians and other humanities scholars to learn to code. The chapter repeats itself quite often when discussing the Jonathan Swift electronic edition, but it makes a few key points about the necessity of finding appropriate funding to ensure the perpetual running of a project. The source would prove valuable for a study of the representation of scholarly editions.

Gabler, Hans Walter. “Theorizing the Digital Scholarly Edition.” Literature Compass 7.2 (2010): 43-56. Web. 24 Nov. 2014.

Gabler outlines a Digital Scholarly Edition as a presentation of text – literary, historical, philosophical, juridical – or of a work (mainly, a work of literature) in its often enough several texts, through the agency of an editor in lieu of the author of the text or work. He notes the importance of the auxiliary procedures surrounding the production of a Digital Scholarly Edition. The three main ideas he comments on are apparatus, annotations and commentary; he defines each simply and moves on to revised models and the form of an edition. "The base line of my understanding of the scholarly edition is that it is a web of discourses. These discourses are interrelated and of equal standing." The importance of this definition is that the pivotal influence on a Digital Scholarly Edition, at least in its production, rests mainly with the editor rather than the author or the text itself. Later in the article Gabler describes how much original material is now born digital rather than digitised: "We read texts in their native print medium, that is, in books, but we study texts and works in editions – in editions that live in the digital medium." The material in this article is of great importance when studying Digital Scholarly Editions, especially its definitions of the editions' key aspects.

Price, Kenneth M. “Electronic Scholarly Editions.” A Companion to Digital Literary Studies. Malden, MA: Blackwell Pub., 2007. N. pag. Print.

Comments on how the advent of new technology is causing a shift in textual theory, away from old theories of a single definitive text and towards a broader theory which allows for various editions. This is a very important aspect of digital scholarly editions, because the extra space and the ease of updating allow multiple versions of one text to be placed in the same place. The usefulness of this cannot be overstated: it allows a scholar to study a much wider range of texts in one sitting. However, the paper notes that it may stretch the general academic attempting to place something online, extending the number of people in a project. This is linked entirely to the interdisciplinary nature of digital humanities as a whole. The chapter then, somewhat strangely, comments on the notion of entirety in a Digital Scholarly Edition: if an edition is supposed to hold everything about an author, should that stretch to shopping lists, autographs and so on? Although mentioned in a flippant way, this raises the important question of what counts as important. The problem is that the author has just described how many people are involved in the formation of an edition, yet forgets that there are separate people for the selection of resources, and a plan defined from the beginning of the project.

Deegan, Marilyn. “From Print to Digital: The Hybrid Edition.” Web log post.Oxford Scholarly Editions Online. N.p., 9 July 2012. Web. 23 Nov. 2014.

Every generation of researchers and scholars brings new tools, techniques, perspectives and interpretations to textual artifacts; this helps the search for meaning through analysis. The Roberto Busa quote "What's difficult we can do straight away, what's impossible takes a little longer" is an important notion on the subject of scholarly editions. The article presents a brief run-through of the history of scholarly editing using technical resources, commenting on the advent of computers, hypertext, CD-ROMs and online collections. The important note made at the end of the piece is that we now have the best the digital world can offer while it is still developing at a quick rate; on the other hand, we still have the best of what a book can give to an audience.

Clarke, Desmond. “Being Philosophical about Scholarly Editions.” Web log post. Oxford Scholarly Editions Online. N.p., 30 Mar. 2012. Web. 23 Nov. 2014.

The article opens with a historical overview of the editors of older, non-digital scholarly editions. The idea of editing became much more defined during the 20th century, and standards definitely improved. The crowd was of little use for the subject because access was refined down to a certain area: an edition might exist only in a handful of museums or through a university library. Digital Scholarly Editions now allow this material to be accessed across the world. The editions allow a large number of out-of-print pieces to be accessed, and different versions of translation and authorship to be shown to an audience. This article is important for gaining a full sense of the history of Digital Scholarly Editions.

Hajo, Cathy Moran. “The sustainability of the scholarly edition in a digital world.” Presented at International Symposium on XML for the Long Haul: Issues in the Long-term Preservation of XML, Montréal, Canada, August 2, 2010.

This article raises some interesting points about the sustainability of a digital scholarly edition. The main concerns which arise from it are mentioned in a variety of papers: if a Digital Scholarly Edition is to be maintained, then long-term sustainability and future funding have to be discussed at the beginning, during the management process. The important point to be derived from this article is that future technologies must remain able to adapt to older technology. Without this, born-digital projects may be found essentially broken in decades to come. This has always been a huge concern for Digital Scholarly Editors and is one which raises many questions.

Daengeli, Peter. “Digital Scholarly Edition: Alfred Escher Correspondence.” Web log post. Geschichte & Informatik. N.p., 26 Mar. 2012. Web. 22 Nov. 2014.

The usability of the Alfred Escher project is examined in one specific part of this article. Usability is a very important aspect of a digital scholarly edition and has not been mentioned in previous articles in this annotated bibliography. The usability of a project must allow for historical or humanities research, while also catering for critical interpretation through the website. The edition critiqued in this article uses a chronological frame instead of a list of dates, showing the data in a much easier way. The digestion of data is important: a researcher is able to look at a block of text in any single book; the important aspect of scholarly editions is to present this information in a much more concise and interesting way. The review also shows how the chronological side of the information can be placed in a cartographic spectrum, further extending the usability of the website for a digital scholarly edition.

Schmidt, Desmond. "Towards an Interoperable Digital Scholarly Edition." Journal of the Text Encoding Initiative 7 (2014): n. pag. Web. 24 Nov. 2014.

This piece is interesting as a technological description. The article makes substantial mention of technologies such as XML, TEI and web-based programs, commenting that the new wave of internet users take interoperability for granted because browsers can be viewed on multiple devices, image files can be edited across platforms, and so on. The need for all born-digital and digitised projects to have the same kind of interoperability as books is essential. Certain coding languages may be interoperable, such as XML and, to an extent, TEI, but the issue of an audience's reading of an artifact arises: different people will look at a book and envision different tags and different keywords, which may cause a problem. Interesting arguments are made for leaving out markup languages, but the problem of plain text and a person's own interpretation remains valid. A very interesting quote within the conclusion is "Human interpretations will never be inter-operable on their own, but it is possible to incorporate them into a technological structure that takes into account their variability."

Robinson, Peter. “Desiderata for Digital Editions: Why Digital Humanists Should Get out of Textual Scholarship.” Web log post. Academia. N.p., 19 July 2013. Web. 22 Nov. 2014.

This is a very useful source with regard to its structure and its comments on digital editions. The main argument is based around the material and data used within the edition. The five desiderata he mentions explicitly are: encoding of the document and text; editorial acts being attributed; all materials available through Creative Commons by default; all materials available independent of any one interface; and all materials held in long-term sustainable storage. These general points prove very useful when talking about both the humanities side and the technical side of a digital scholarly edition. The general discussion then goes into the semantics of these issues, illustrating perhaps the main problems which occur with a digital scholarly edition, and could as a basis promote further detailed study of digital scholarly editions.

IFPH. "Session 2: Scholarly Editing in a Digital World: Pushing the Boundaries." Web log post. IFPH Amsterdam. N.p., 10 Nov. 2014. Web. 22 Nov. 2014.

This blog post centres on the first conference of the International Federation for Public History in Amsterdam, October 2014. It focuses on three American scholarly editors answering questions regarding the future of scholarly editing. A brief evaluation of this source brings up two important benefits. Firstly, the source was written only last month, which makes it very worthwhile for modern interpretations. Secondly, the use of three working editors ensures that the information is relevant to the practice. Many important answers appear in this blog post; some of the most interesting concern the business model of a digital scholarly edition: the main obstacle to success is not technical issues or funding but the business model at the beginning of the project. This is an interesting approach and could be discussed in great detail.

Letters of 1916: Questionable comments from an Irish Playwright?

 

The Letters of 1916 project has already proved to be a great resource for a number of academic subjects. The letter to George Bernard Shaw (1856-1950) from Joseph Michael Stanley (1890-1950) on the 28th of March 1916 is a compelling one. Stanley, who worked for the Gaelic Press, asks Shaw whether the material in his book "Three Plays for Puritans" would cause any offence or prejudice towards recruiting; the military authorities had just taken a copy off one of his journalists. Shaw's reply is rather witty, commenting on a line within one play calling King George specifically "a pig headed lunatic" (Letters of 1916). Shaw continues that perhaps the military authorities mixed up the kings.

Now, it seems only a brief conversation between two men, but the implications are astounding for the academic world. This is the true beauty of the Letters of 1916: it allows for a wide range of study. Within this one-page letter, so much information is given about the historical setting, the cultural problems and the social ideas. The history of recruiting in Ireland has always been a tenuous subject due to the threat of conscription. The recruiting process has been documented by various historians, both around 1916 and in more recent studies. Arthur Griffith summed it up perfectly for the Irish Nationalists by stating, "Ireland is not at war with Germany. She is not at war with any continental power. England is at war with Germany. We are Irish Nationalists and the only duty we can have is to stand for Ireland's interests" (McKenna). As a result it is very interesting that military authorities in the middle of World War 1 still had the time to raid the Gaelic Press. An important point here is that there was still a focus on the Irish state even amid such catastrophe on the mainland of Europe; the importance of Ireland to Great Britain and the necessity of ensuring security on the isles remained a key issue for the kingdom. This historical angle of the letter can be researched further on the site by looking for similar letters from George Bernard Shaw or by searching through the section it is placed in, World War 1.

The division of letters into sections greatly improves the usability of the site. Instead of an archive-like system in which reams of letters are placed in one collection, they are spread across collections on religion, family, love letters and so on. This usability is further enhanced by the friendly presentation of the project and the easy-to-understand instructions. The instructions for marking up the letters help anyone without transcription experience to contribute. The toolbox system is very user-friendly and allows work to be completed before being confirmed for publishing on the website. The desire to contribute derives from what has been mentioned above: interest. This interest has grown and grown, helping to ensure that the first ever public history project in Ireland will be a hugely successful one.

McKenna, Joseph. Guerrilla Warfare in the Irish War of Independence, 1919-1921. Jefferson, NC: McFarland, 2011. Print.

“Letter from Joseph Michael Stanley to George Bernard Shaw, 28 March 1916.” Letters of 1916. N.p., n.d. Web.

CrowdSourcing: Exploration of Ethics

CrowdSourcing is essentially the practice of obtaining one's needs by allowing a large group of people, or crowd, to contribute. The theory derives from the idea of outsourcing, which became popular in the early 21st century. Although linked, the two are important to distinguish: business ventures pushed for outsourcing because it offered the cheapest labour they could find, whereas CrowdSourcing has become the cheapest form of online collaboration and contribution between an organisation and the general public. The specific concept of ethics in CrowdSourcing has risen in popularity over the past five years, and various academic scholars and professionals have started to debate it. A notion that ethics goes undiscussed is commonplace throughout these articles, suggesting it is not at the forefront of CrowdSourcing study; however, there is a plethora of information in journals and on websites regarding the subject. Ideas range from the extreme, illegal and unethical practices in C. Harris' look at the dark side of CrowdSourcing (Schmidt) to the less extreme labour-rights concerns mentioned in the works of Ross Dawson and Sean Moffitt (Phneah). The basic idea of CrowdSourcing would lead anyone to assume a positive influence; however, certain issues must be taken into account when dealing with the general public. This blog post will talk specifically about strong standards of control and ethics, ensuring that future organisations can benefit from CrowdSourcing in a completely ethical and fair way.

Shelly Kuipers, from Chaordix, notes how new companies inexperienced in CrowdSourcing run the risk of ethical issues if they do not have some sort of consultancy plan (Phneah). At the same conference, the idea that a governance body should maintain the ethics of CrowdSourcing was rebuffed, due to the transparent nature of the field and the self-governance of the public involved. In any case, a separate governing body may not be needed: crowdsourcing.org has already begun to produce standards. The crowdsourcing industry website is attempting to create a standard designed to protect both crowdfunders (people pledging or investing capital) and fundraisers (people raising capital). Known as the CAPS programme (Crowdfunding Accreditation for Platform Standards), it attempts to foster the sustainable growth of the crowdfunding industry, providing much-needed capital for projects and initiatives, start-ups and small businesses, while certifying platforms to ensure legality (CrowdSourcing).

In the new age of the digital world, standards and rules should be shared openly and freely through websites such as crowdsourcing.org. The need for protection and transparency has become a strong ideal in the modern world as education continues to grow. People increasingly question ethical issues, gender relations and the implications of every problem they can find. Can a company like Doritos use a crowdsourcing campaign asking for new flavours and videos purely to advertise its own brand? This social media aspect adds a whole different spectrum to the problems which have arisen. Crowdsourcing is still a young notion, but the connection with social media could skyrocket both the idea and the controversy.

Generally, the use of CrowdSourcing by social media has, thus far, been quite transparent. Twitter has asked the crowd to help translate tweets into several languages, and historical crowdsourcing projects consistently push their ideas through social media. However, the ethical issues around advertising and gaining help for free bring up an aspect missing from many articles. The pure notion of a CrowdSourcing project is that it uses the crowd: even if a project's intent is not transparent, a large number of people would have to realise this for the results to become useless. Some contributors would understand the intent of a company and others would not, but the real question is: why would they care? If a video is only about a brand's new flavour, the personal choice of uploading and sharing surely negates the question of rights. A further look into the darker side of crowd work is the paid model of Amazon Mechanical Turk, perhaps the most contested form of crowd work. The system stems from the amount of work a person can achieve against others while earning an incredibly low amount of money. Jeff Howe, who originally coined the CrowdSourcing name, calls the project "both rather depressing and rather brilliant". But if a project does not pay for the work, what incentive is there for a crowd to participate?

The gamification of crowdsourcing projects is one of the strongest tools available to control and motivate a crowd. Ian Bogost has likened gamification to "exploitationware". The usual form is a points system in which people gain experience and thus move up the ranks of the website, similar to the model of Viki, an Asian company which allows the crowd to add subtitles to some of their favourite shows (Phneah). Bogost believes this kind of concept undermines the importance of the crowd in a business platform (Bogost). The essential argument of fair labour and pay is brought up constantly in these works. However, if a person gains their own sense of importance and enjoys such work, why should that become a problem of ethics?

An interesting comparison then arises: if a business can easily gather a crowd to do work, does that begin to demean a profession which a person has studied to perfect? Many design websites run competitions for logos and offer prizes for winners, similar to Doritos' attempt to find a new flavour: a pure marketing strategy. Does this mean these professions will begin to die out? Will crowdsourcing end by taking over certain professions in the working world? That scope is far too broad. Current trends, around 80% growth in crowdsourcing a year, suggest continued growth, but surely there is a point at which a paid professional becomes more desirable? Without standards of ethics and control, can CrowdSourcing continue? CrowdSourcing.org presents the perfect opportunity to shape that future, a type of fair-trade mark (Schmidt). This idea is central to the future of the concept and one which will be contested for years to come.

Schmidt, Florian Alexander. "The Good, the Bad and the Ugly." N.p., n.d. Web. 21 Oct. 2014.

Phneah, Ellyne. "Crowdsourcing Faces Ethical, Legal Risks." ZDNet. N.p., n.d. Web. 22 Oct. 2014.

Phneah, Ellyne. “S’pore Startup Finds Niche in Crowdsourced Video Subtitling | ZDNet.” ZDNet. N.p., n.d. Web. 24 Oct. 2014.

“The Crowdfunding Accreditation for Platform Standards | Crowdsourcing.org.” The Crowdfunding Accreditation for Platform Standards | Crowdsourcing.org. N.p., n.d. Web. 20 Oct. 2014.

Bogost, Ian. “Persuasive Games: Exploitationware.” Gamasutra Article. N.p., n.d. Web. 19 Oct. 2014.

HyperCities: An Unrealized Dream

Forever a dream?

HyperCities, created by Todd Presner, is built upon a platform of digital research and education. The project has been sponsored through UCLA, USC and CUNY; the response from public organisations has also been substantial. The project is centrally based around history: the history of a given place, whether academic or social, personal or cultural. Depth of knowledge is integral to the project, which provides a "digital narrative crisscrossing place and time" (Presner, 12). The project began centred on the city of Berlin, where Presner had been teaching at a university. Presner's original idea was to use Berlin as a prism opening onto the broader scope of Europe. The involvement of his students at UCLA allowed the project to grow further, and with very accessible and academically intuitive content it seemed HyperCities had no ceiling.

The project originally spread across the world to various colleges in Europe and Asia. European institutions have used the project to plot the course of Holocaust survivor stories, specifically in German universities. Asian universities have used the project in a more personal sense, ensuring that a person's own history can remain tied to a place forever. A South American community has used the project to show the change of land over time in support of appeals for grants: the changes in landscape and buildings over the past ten years have been documented and overlaid on the Google Earth platform. These are all available through the collections on the project itself. This proves that there has been an impact; however, the result has been far less fruitful than the premise. The important aspect of the impact to keep in mind is that it is far-reaching: while making use of a Wikipedia-type model of authorship and written content, it maintains a tiered authorship model for evaluating and publishing scholarly research (Teaching Literature at a Distance, 174).

The unrealized nature of the project means that its contribution to Digital Humanities cannot be fully tested. Thus far, however, the project has proved its contribution in both a public and a technological way. The use of the Google Earth platform is far ahead of the many Digital History projects built on Google Maps: Earth ensures that a real-life image of the world can be shown alongside the older maps overlaid in certain projects on the site. This forward-thinking approach is much more modern than previous projects and catches the public eye. Presner notes that the public eye is caught and sustained by a number of important aspects of the project: its nature as a learning environment, its evaluation of data and ideas, trans-media literacy, collaboration and collective knowledge (quite similar to Wikipedia), and the deep-lying trans-disciplinary nature of the project (Teaching Literature at a Distance, 178). Many of these terms are quite simple, but they all carry a weighted meaning. Trans-media literacy was described perfectly by Henry Jenkins as "consumers becoming hunter gatherers, pulling together information from multiple sources to form a new synthesis" (85).

The idea of thick mapping is the main focus of this ten-year programme. HyperCities delves further than other history projects, such as Digital Harlem: it is an amalgamation of mapping technologies, instead of focusing on the quantitative gathering of large amounts of data or primarily on Geographic Information Systems (HyperCities: Thick Mapping in the Digital Humanities, 51). "Hyper" is described by Todd Presner, in a non-geographical way, as the linking of history to a certain area so that a place can have a sustained, layered history (6). The term, sometimes coined "deep mapping", supports the argument that maps are not static objects: they vary with time and can hold strong historical influence. The problem with this approach is that the project has no concrete end point. If data can constantly be added to a map, forever furthering the knowledge of a certain area, then when is the project finished?
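Since HyperCities layers its history on Google Earth, the basic unit of thick mapping can be sketched as a KML placemark carrying both a place and a time span. The name, coordinates and dates below are invented for illustration:

```python
# A minimal KML overlay: one place plus one time span, the building block
# of a layered, "thick" map. Name, dates and coordinates are invented.
kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Weimar-era cabaret</name>
    <TimeSpan><begin>1923</begin><end>1933</end></TimeSpan>
    <Point><coordinates>13.4050,52.5200,0</coordinates></Point>
  </Placemark>
</kml>
"""

with open("berlin_layer.kml", "w", encoding="utf-8") as f:
    f.write(kml)  # drag the file into Google Earth to see the layer
```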

The problem of an unfinished project was raised in the previous section. HyperCities has no real end point at the moment, which is a problem because it has become slightly stagnant and static itself. A book has just been released by Harvard Press which has fixed the public gaze on Presner's dream. However, the project needs severe updating and funding to ensure any kind of future: many of the images and videos are no longer supported by web browsers. The original focus of the project, based strongly on history rather than social aspects, has also changed; there is now a push towards the social, such as reviewing your favourite restaurant or marking where you got engaged. Yet how can HyperCities compete with social heavyweights like Twitter or Facebook? An amalgamation would prove costly to the project's original focus because the academic side would be lost. Presner released a system which allowed these social media sites, along with Picasa and Flickr, to be linked into HyperCities in an attempt to heighten popularity. If the project wants to survive, it needs to push for updates and to spread the idea across further schools and universities. The saving grace for HyperCities thus far has been the projects which use its template as a basis, its real original purpose in theory: universities and colleges can produce their own projects and have them seen by a wider audience.

The HyperCities project has the potential to be the biggest social, academic and research history project on the internet. The ability to upload scholarly and socially meaningful content in the same place raises its scope, and the modernized way of using maps is one of the most impressive aspects of the project. However, the sheer scope of the project hampers its ability to be complete. The correct amount of funding and initiative directed towards academic institutions may propel the project further, but at the moment it remains stagnant and incomplete. Presner summarizes the idea well, linking the idea of a Utopian world with digital humanities, whereby the core notion of participation without condition is integral (140). HyperCities definitely has the potential to be such a Utopia; the future may yet present Presner's realized dream.

Bibliography

Jenkins, Henry. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. Cambridge, MA: MIT, 2009. Print.

Kayalis, Takis, and Anastasia Natsina. Teaching Literature at a Distance: Open, Online and Blended Learning. London: Continuum International Pub. Group, 2010. Print

Presner, Todd Samuel, David Shepard, and Yoh Kawano. HyperCities: Thick Mapping in the Digital Humanities. Cambridge, MA: Harvard UP, 2014. Print.