Guest Speaker Angeliki Chrysanthi – Geotagging

Georeferencing is used to specify the geographic location of a file or object, either as coordinates or by place name. This can be done through various software tools, by automatically, semi-automatically, or manually adding metadata to the files in question.
Embedded metadata records who took an image, where it was taken, and any available technical information such as camera settings (usually stored by the device automatically); copyright information may also be included. Geotagging can be done automatically by the device if it is GPS-enabled. EXIF data is captured automatically by devices – shutter speed, GPS coordinates and so on – and largely functions to connect a photo to a place, time and subject. GPS devices record timestamped data in the GPX format, and can store either track logs (drawing a path or route) or specific points captured at set intervals (every 5 seconds, for example). A camera's clock must be synchronised with the GPS device for this to work, since the timestamp is what links an image to a position. Metadata, including GPS coordinates, can also be added manually, and this can be done when images are digitised. This is particularly important for older material: if you do not have the camera settings from a manual camera, they can never be recovered later. Digital creators describe the content of the image, either embedding the description as part of the digital file or storing it in an external "sidecar" file as part of a referencing system.
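To illustrate the GPX format mentioned above, here is a minimal sketch (using Python's standard library, with made-up coordinates near Newgrange) of how a track log pairs coordinates with timestamps – the link that later lets photos be matched to places:

```python
import xml.etree.ElementTree as ET

# A tiny GPX track log: each <trkpt> pairs coordinates with a timestamp.
gpx = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="53.6947" lon="-6.4755"><time>2016-12-02T10:00:00Z</time></trkpt>
    <trkpt lat="53.6950" lon="-6.4760"><time>2016-12-02T10:00:05Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

NS = "{http://www.topografix.com/GPX/1/1}"
root = ET.fromstring(gpx)
points = [
    (float(pt.get("lat")), float(pt.get("lon")), pt.find(NS + "time").text)
    for pt in root.iter(NS + "trkpt")
]
for lat, lon, ts in points:
    print(ts, lat, lon)
```

The 5-second spacing of the two points mirrors the interval-based capture described above.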
Examples were given from Hochman, Manovich and Yazdani (2014), "On Hyper-locality: Performances of Place in Social Media", which explored the relationship between physical space and the digital – an interesting topic, particularly now that geotagging is readily accessible to consumers through apps and social media. Geotagging can be very useful for fields like Archaeology – which the examples in the practical section were related to – and for community-based projects like the Historic Graves project discussed. Many have adopted such methods because photos can be attached to the locations where finds were made. It has also been useful for archival images, including lithographs and paintings – sometimes tracked across hundreds of years through digitised images, with estimated locations in some cases.
In the practical section we learned how to manually add GPS data to image files, and how to batch-add data from external files using GeoSetter – shown in the screenshot below. There was a discussion about which formats and programs are best suited to different purposes, with several examples. GPSBabel assists when working with GPX files, converting them so that GeoSetter can read files that are not in the right format. The data files in this case were used to construct a map including tracks, which was a relatively straightforward process once we learned how to use the software. The potential to extend these principles can be seen in the development of software like ArchDis, which can convert the data to shapefiles, reducing it all down to points.
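The batch-matching that GeoSetter performs rests on the clock synchronisation discussed above: each photo's timestamp is compared against the track log. A hypothetical sketch of that principle (not GeoSetter's actual code, and with made-up coordinates) might look like this:

```python
from datetime import datetime, timedelta

# Track log: (timestamp, lat, lon) captured every 5 seconds.
track = [
    (datetime(2016, 12, 2, 10, 0, 0), 53.6947, -6.4755),
    (datetime(2016, 12, 2, 10, 0, 5), 53.6950, -6.4760),
    (datetime(2016, 12, 2, 10, 0, 10), 53.6953, -6.4766),
]

def locate_photo(photo_time, track, max_gap=timedelta(seconds=30)):
    """Return the track coordinates closest in time to the photo,
    or None if the camera clock and log are too far apart."""
    nearest = min(track, key=lambda p: abs(p[0] - photo_time))
    if abs(nearest[0] - photo_time) > max_gap:
        return None
    return nearest[1], nearest[2]

print(locate_photo(datetime(2016, 12, 2, 10, 0, 4), track))
```

If the camera clock was not synchronised, every match would be offset – which is why the synchronisation step matters so much in practice.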


Data gathering is massively important, which was explained through examples. Approaches need to be modified and improved depending on who the target audience is. What really stood out to me was the search for feedback in the examples. Case studies, questionnaires and interviews were used, and while they were useful there were gaps in the information – especially as the participants were simply behaving as normal with a GPS device in their pockets: they were already used to going about taking photos and were not so involved in the GPS-logging side of Archaeology. There can be different concentrations of images (hotspots), but certain images may represent more significant points. Results, as always, require interpretation.
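The idea of hotspots – concentrations of images – can be found computationally by binning photo coordinates into a coarse grid and counting photos per cell. A hypothetical sketch with made-up coordinates (not the method used in the case studies):

```python
from collections import Counter

# Photo locations (lat, lon); three cluster near one spot, one is far away.
photos = [
    (53.6947, -6.4754), (53.6948, -6.4753), (53.6946, -6.4754),
    (53.7100, -6.4200),
]

def hotspots(points, cell=0.001):
    """Bin coordinates into a coarse grid and count photos per cell,
    most photographed cells first."""
    grid = Counter((round(lat / cell), round(lon / cell)) for lat, lon in points)
    return grid.most_common()

print(hotspots(photos))
```

Note that the densest cell is not necessarily the most significant one – which is exactly why the results still require interpretation.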

The additional reading provided was very interesting, highlighting the importance of metadata standards for geotagging and some of the guidelines around this topic. Schemas are used to make the data readable and usable externally. This reading was produced by EMDaWG (Embedded Metadata Working Group, Smithsonian Institution) and largely contains technical information. Data needs to be processed, ensuring that the same GPS formatting is used throughout and that the attached data is machine-readable and logical.
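One concrete example of keeping GPS formatting consistent is converting degrees/minutes/seconds (the form used in EXIF GPS tags) into signed decimal degrees, so that every record in a collection uses the same representation. A small sketch, with illustrative coordinates:

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds (as stored in EXIF GPS tags)
    to a signed decimal-degree value; S and W are negative."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# 53° 41' 41" N, 6° 28' 32" W (approximate values, for illustration only)
lat = dms_to_decimal(53, 41, 41, "N")
lon = dms_to_decimal(6, 28, 32, "W")
print(round(lat, 4), round(lon, 4))
```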

Overall, this was a very comprehensive introduction to quite a vast topic. This type of research needs planning and foresight, with considerations around the dataset, the audience and more.

Bibliography

EMDaWG (Embedded Metadata Working Group, Smithsonian Institution). "Basic Guidelines for Minimal Descriptive Embedded Metadata in Digital Images." April 2010.

Hochman, Nadav, Lev Manovich and Mehrdad Yazdani. "On Hyper-locality: Performances of Place in Social Media." 2014.

Computational Analysis vs. Curatorial Expertise

Computational analysis draws on the power of machines, and can involve sensing and analysing files such as images. Computers can be taught to recognise images: colours, shapes and faces all have to be defined for the computer to recognise them when asked.
In class we were given an introduction to Tracking.js, with practical exercises showing us how to use the library and set parameters for what we were trying to detect. Tracking.js is a library of computer-vision routines, useful for detecting attributes of images: it can be used to sense colours, faces or shapes. A large advantage is that it is browser-based, which makes it very easy to use, and it is open source.
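Colour detection of the kind Tracking.js performs comes down to testing each pixel's (r, g, b) values against a defined target with some tolerance. The same principle can be sketched in plain Python (a hypothetical illustration of the idea, not Tracking.js itself):

```python
def is_magenta(r, g, b, tolerance=50):
    """True if a pixel is within `tolerance` of pure magenta (255, 0, 255)."""
    return (abs(r - 255) <= tolerance and
            g <= tolerance and
            abs(b - 255) <= tolerance)

# A handful of pixels: two near-magenta, one green, one white.
pixels = [(250, 10, 250), (0, 255, 0), (230, 40, 240), (255, 255, 255)]
matches = [p for p in pixels if is_magenta(*p)]
print(matches)
```

The tolerance value is exactly the kind of parameter the user must set: too tight and real matches are missed, too loose and unrelated pixels are flagged.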
Digital objects require metadata to be usable, and while there are very basic labelling systems that a computer can generate, an image still requires time and resources to be devoted to it alongside digitisation. Complicated algorithms are used by computer platforms to "read" an image, or any digital file. This works by detecting elements defined as part of the computer program. Computer scientists create the algorithms, but the consumer can use them – the SAS suite of textual analysis programs, for example. An important point is that the software may not detect everything, or might get things mixed up and identify extra elements, such as confusing the eyes of a face with crevices. It only looks at what is available to it, what it can identify, what it has been programmed to do – human expertise is needed to interpret the results.
One should consider that computers are machines and that parameters need to be defined for them to operate under. Limits need to be set, because they work by searching within specified parameters. Software also requires human design: there is a decision-making process in which the default behaviours of programs have to be decided. Any program is designed with certain assumptions in mind, and it is a good idea to bear this in mind when choosing software for computational analysis. The SAS suite of textual analysis programs referred to above, for example, has several tools used for different aspects of data mining, which are listed by Chakraborty, Pagolu and Garla (4). Use of such software for analysis requires an understanding of the dataset at hand, and curatorial expertise.
The computer does not care about what it is analysing; it is a machine, and it simply calculates based on the parameters defined by the user. Curatorial expertise implies a lifetime of learning behind it, analytical skills and critical thinking. The advantage of the human over the computer is interpretation: a computer can be fooled deliberately, by controlling conditions if you know what it is looking for, or accidentally, by similar-looking areas (in the case of visual computational analysis). While human error is indeed possible, curatorial expertise should reduce those chances. Human expertise may also reveal significance faster than a computer would – at least from the data a computer outputs. Furthermore, computational analysis needs human expertise to be developed further, not just in terms of technological capabilities but also in terms of identifying data relevant to a field, narrowing it down sufficiently and making it machine-readable.
Conclusion:
Computational analysis is not perfect yet, but it is constantly being improved. New digital tools are continually being developed and refined; however, it should always be remembered that this requires human effort. It also requires human understanding and curatorial expertise, as one needs to know what to look for in the first place in order to design a program for the task. Though the machine will provide results based on what it is looking for, interpretation is required to understand their significance.

Works Cited

Chakraborty, Goutam, Murali Pagolu and Satish Garla. Text Mining and Analysis: Practical Methods, Examples, and Case Studies Using SAS. North Carolina, USA: SAS, 2013. Web. https://www.sas.com/storefront/aux/en/spmanaganalyzunstructured/65646_excerpt.pdf

The Digital, Archaeology and Digital Archaeology

"Mesh of Stones": a digital reconstruction of standing stones at Newgrange, Ireland.

Costopoulos discusses the normalisation of digital archaeology over the last number of years and its significant application, and sums up the aims of publishing in Frontiers in Digital Archaeology: "I want to stop talking about digital archaeology. I want to continue doing archaeology digitally." What he says here seems deliberately provocative – though his argument that digital tools enhance research in the field of Archaeology still stands, even if such contributions are left out of conversations about digital archaeology. The tools are not unique to Archaeology and the definition of "digital archaeology" may still be developing, but realistically Archaeologists have been using technology to assist their work for a long time. He is an outspoken writer on the subject, but this article attracted criticism from Jeremy Huggett in the aptly titled post "Let's Talk about Digital Archaeology", stating that: "A superficial reading of the article suggests a degree of weariness and cynicism here. But it seems to me that the article potentially questions the very legitimacy of what I understand by digital archaeology." His point is that while Archaeologists invariably use computers at some point in this age, Digital Archaeologists have specific skills. Huggett questions the logic behind Costopoulos not wanting to categorise these skills and talk about their application – pointing out that to do so would be to leave an entire field of archaeology out of the conversation: "Questions surrounding the introduction, development, and implications of new technologies within the subject go far beyond questions of standardisation, ethics etc. in addressing the very fundamental stuff of archaeology and its interpretation – or, at least, they should do."
In "A Manifesto for an Introspective Digital Archaeology" Huggett brings up the 'New Aesthetic' in relation to digital archaeology, discussing trends in theory since the 1950s which have transformed the field. Specifically, the challenge addressed in this piece is how technology has affected how archaeological knowledge is created, and how the subject is viewed. The people working in the field have been changed by this: previously there were computer scientists and Archaeologists, but from the mid-1980s onwards people began to specialise in digital archaeology itself. While traditional archaeologists and historians still exist, the digital archaeologist's status in the field is contested – largely out of conservatism. However, the influence of digital archaeology on how the subject is approached and researched should not be underestimated, having transformed the scholar in the digital age. Huggett argues that "Digital archaeologists are arguably the best positioned amongst digital humanists to investigate and understand the implications, transformations, and repercussions of digital technologies" ("A Manifesto for an Introspective Digital Archaeology" 87). New technology has of course provided incredibly useful tools, and this has changed how Archaeologists approach their work – but an understanding of theory is still necessary: "Yet with few exceptions, that preoccupation has not been turned towards the consideration of the digital technologies used within archaeology other than in a superficial way. The belief that computers increasingly facilitate all these theoretical concepts is commonplace – much less so is the recognition that, all too often, they in fact restrict and subvert these very ideals and frequently disguise that they do so through a combination of technological sleight of hand and the law of unintended consequences" ("A Manifesto for an Introspective Digital Archaeology" 89).

Works Cited

Costopoulos, Andre. "Digital Archeology Is Here (and Has Been for a While)." Frontiers in Digital Humanities, 16 Mar. 2016. Web. Accessed 2 Dec. 2016. http://dx.doi.org/10.3389/fdigh.2016.00004

Huggett, Jeremy. "A Manifesto for an Introspective Digital Archaeology." Open Archaeology 1 (2015): 86–95. Accessed 2 Dec. 2016.

Huggett, Jeremy. "Let's Talk about Digital Archaeology." WordPress, 10 May 2016. Web. Accessed 2 Dec. 2016.

"Mesh of Stones." ArcHeritage, 2012. Web. http://www.archeritage.co.uk/wp-content/uploads/2012/10/Mesh-of-Stones-470×280.jpg