As discussed in previous posts, data refers to a collection of information. Whatever the purpose of this collection, an adequate means of displaying and comparing data is necessary to get full use out of it. Some thought needs to be put into how a system for modelling data is designed: this is the first step in database design and object-oriented programming. Data modelling is generally understood as having three stages of design: Conceptual, Logical and Physical (“Data Modeling – Conceptual, Logical, And Physical Data Models”), with complexity increasing at each stage. It should be highlighted that structures for holding data are often purpose-built. “The biggest challenge is correctly capturing the requirements on the data model. Often when the project starts, there are only vague requirements (if requirements at all), and the data model must represent these requirements completely and precisely. Therefore it is a very challenging task to go from ambiguity or vagueness to precision.” (Hoberman)
Data modelling assumes the following in its design:
i) There can be numerous links between different pieces of data.
ii) Categorization, separation and encapsulation of data are necessary for searchability – and a well-built ontology allows you to get the most out of your data.
iii) Unique keys are used to identify pieces of information, acting as access points linking data.
The Conceptual model highlights how the different pieces of data relate to one another, specifying Entity Names and Relationships. The Logical model is more specific and detailed, adding Attributes, Foreign Keys and Primary Keys. The Physical model must be implementable in the database of choice, specifying column names and data types, table names, and Foreign and Primary Keys. (“Data Modeling – Conceptual, Logical, And Physical Data Models”)
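A physical model can be sketched directly in SQL. The following is a minimal illustration – using Python's built-in sqlite3 rather than MySQL, with hypothetical table and column names chosen only for demonstration – of how primary and foreign keys turn a conceptual relationship (“an author writes books”) into queryable structure:

```python
import sqlite3

# A tiny physical model: hypothetical tables illustrating primary keys,
# foreign keys and column data types.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("""
    CREATE TABLE author (
        author_id INTEGER PRIMARY KEY,   -- primary key
        name      TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE book (
        book_id   INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER NOT NULL,
        FOREIGN KEY (author_id) REFERENCES author(author_id)  -- links the entities
    )""")

conn.execute("INSERT INTO author VALUES (1, 'Steve Hoberman')")
conn.execute("INSERT INTO book VALUES (1, 'Data Modeling Made Simple', 1)")

# The relationship defined at the conceptual stage is now queryable
# through the shared key.
row = conn.execute("""
    SELECT author.name, book.title
    FROM book JOIN author ON book.author_id = author.author_id
""").fetchone()
print(row)  # ('Steve Hoberman', 'Data Modeling Made Simple')
```

The same schema could be created on MySQL with only minor syntax changes; the point is that every decision here (types, key names, constraints) belongs to the physical stage of the design.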
Within the Conceptual, Logical and Physical schemas there are numerous ways of modelling data, and the design can vary depending on whether the data is being used for comparison or for tracking correlations. Hoberman reminds us that methods of building and modelling data can vary: “In some efforts, the database design is completed, and then the logical and conceptual are built for documentation and support purposes.” While familiarity with one's data set is needed for the purposes of interpretation, particular techniques of displaying data can be useful for particular purposes. Personally, I find visual data modelling techniques much easier to work with – particularly when comparing data. “The underlying benefit of creating a data model is that the data actually becomes understandable, as others can read it and learn about it.” (Hoberman)
There are several types of data modelling technique we should be familiar with. Spreadsheets, for example, can be used to model data; depending on the purpose this can be adequate, as information can be grouped in rows and columns. Steve Hoberman gives the example of spreadsheets as a data notation shared with financial business experts. However, he also highlights the importance of definitions when modelling data, as every data set needs to be treated differently. The key understanding here is the relation between different types of data.
Visual representation of data can be very useful, especially when looking for comparisons and correlations. Diagrams are very useful when trying to design the structure for holding your data, setting out its links and structure. Query languages for databases are important too, and databases can have ontologies assigned using W3C standards such as RDF. There are numerous software programs that can be useful for data modelling, from spreadsheets to diagram-drawing software for explaining the concept, but the models themselves must be held on database platforms designed to support them, like the MySQL platform we have used in class.
Hoberman, Steve. “Data Modeling Techniques Explained: How to Get the Most from Your Data.” Date of Access: 11 May 2017.
Knowledge is quite a difficult term to define; it relates to an accumulation of information and understanding of a topic gained over time. Data is a collection of information assumed to be true for the purposes of analysis and deduction. If knowledge is cumulative, and data is a collection of information used for reasoning, then using data is part of the process of obtaining informed knowledge. Though many associate the word “data” with computers, it can be in either analogue or digital form. Digital data has numerous advantages, including easy conversion and manipulation of elements for comparison and display – as with statistics, a popularly cited and displayed form of data.
Census data available at cso.ie can be used to explore the dynamics and diversity of the Irish population. However, it is important to understand the limitations of the conclusions that you draw. To build knowledge we need to understand the process of data gathering, social institutions, history and other factors to obtain true knowledge of the subject at hand through analysis and interpretation.
Data can be treated as a tool to develop and/or form an understanding, to support a theory or to explain relationships between variables. Critical analysis is needed to interpret data and understand its scope and limits in order to obtain knowledge with a reasonable degree of accuracy. Measuring society is difficult, as not everyone fits into the checkboxes on a census form – and assumptions are made in the collection of data to produce statistics for calculation and comparison. Categories like “religion”, “ethnicity” and “nationality” are self-defined and converted to numerical values, and these are often used to support claims and hypotheses. Looking at this CSO information, can we tell much about the makeup of Irish society? What about secularism?
TheJournal.ie, referring to the 2011 CSO census: “THE PERCENTAGE OF Catholics in Ireland is at its lowest ever, while the actual number of Catholics is at its highest level since records began.” (http://www.thejournal.ie/regious-statistics-census-2011-640180-Oct2012/) Though the population is growing, the percentage of Roman Catholics in the population is not. Referring to the data on the right, a simple calculation from the CSO information reveals that “3.86 million people classed themselves as being Catholic – 84.2 per cent of the population. Of this amount, 92 per cent were Irish nationals.” (http://www.thejournal.ie/regious-statistics-census-2011-640180-Oct2012/) According to the data set available from the CSO, 77.464% of Irish nationals are Catholic – of which the majority are considered ethnically “white Irish” as well as being Irish nationals. Nationality is usually (but not always) determined by citizenship.
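As a sketch of where the 77.464% figure appears to come from, the two quoted percentages can simply be multiplied together – this assumes the figure is the product of the Catholic share of the population (84.2%) and the Irish-national share of Catholics (92%):

```python
# Combining the two quoted CSO figures: 84.2% of the population
# identified as Catholic, and 92% of those were Irish nationals.
catholic_share = 0.842
irish_national_share_of_catholics = 0.92

# Irish-national Catholics as a share of the total population:
combined = catholic_share * irish_national_share_of_catholics
print(round(combined * 100, 3))  # 77.464
```

Strictly speaking, this product gives Irish-national Catholics as a share of the whole population, which is one reason it is worth being explicit about what each percentage is a percentage of.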
Data modelling allows us to selectively limit fields and compare data. The broad distinction between “Irish” and “Non-Irish” refers to nationality, not ethnicity, and it is much easier to become an Irish citizen now that Ireland is in the EU. Cultural backgrounds in Ireland may be diverse, including a large student population and increasing numbers of visitors, tourist workers and immigrants – but not all are Irish nationals by definition.
Limiting the information displayed to “Usually resident and present in the state” and “Ethnic background” in the 2011 census indicates a predominantly “white Irish” majority, followed closely by other white backgrounds; by further limiting the data, it appears to be this way across all age groups.
Information needs to be looked at in proportion to understand the forces shaping the ethnic and cultural makeup of society. Furthermore, we need to know what exactly has been measured – i.e. what is represented in the data. Catholicism has had a massive influence on Ireland in the past, and there may be other factors impacting this figure beyond what I would take religion to be: the practice of spiritual teachings, which is not measured by the CSO.
We can understand relational proportion: Catholicism dispersed by age indicates that most Catholics are over 16. However, the connection between the Catholic Church and the national school system would indicate a possible influence on the under-14 age category. By changing the display of the data, the category of children about to attend or currently attending national school makes up a large chunk of the ethnically “white Irish” population; the table here shows the proportion.
The population usually resident and present in the state does not have the same ratio of age dispersal. This may indicate a social factor causing the population to indicate Catholicism on the census. Comparing and displaying statistics can be used to come up with hypotheses, but it does not constitute proof. While you can make a strong case by displaying statistics, this cannot be taken as evidence. Correlating statistics do not necessarily indicate that the variables in question are related, as there can be unknown (or unmentioned) variables – and furthermore we need to understand how exactly attributes are measured.
Modelling population statistics can highlight correlations and comparisons within data: illustrating points, demonstrating knowledge, and looking for relationships between variables. But we have to understand how the data relates to what we are trying to find out if we are using it for reasoning and deduction! Data is not always adequate for what you are trying to find out – in the process of analysing and interpreting, there may be a need to obtain external information to build knowledge on the subject: an understanding of the relationships between variables, which may exist outside of the data.
Our understanding of Catholicism as a religion seems to be changing when compared to church attendance statistics – though it is associated with the ethnic group “white Irish”, in 2009 weekly church attendance was at less than 50% (https://faithsurvey.co.uk/irish-census.html). Over time, the general trend seems to be towards a decline in practicing Catholics – or towards understanding the category more as a cultural group, if more people identify as Catholic but fewer practice the religion. This is perhaps indicative of a more secular attitude.
Credibility of data is another issue: where is it coming from – can we trust newspaper statistics? The CSO would generally be a reliable source of information, but official records can have biases, especially in countries that refuse to recognize ethnic groups. However, in terms of practicality you are limited to what is available.
If we were trying to use the official CSO statistical data to work out the strength of religious followings in Ireland, would it be accurate? Or how about looking at minorities? The adequacy of data should be assessed as part of the process of obtaining knowledge. Data is just information; we decide how to treat it: know its limitations, the assumptions being made and the degree of accuracy that is needed. Data modelling can be used and abused. Information is frequently misrepresented, both accidentally and on purpose – even if data is factual, the conclusions are not always correct, and statistics do not always measure what they appear to.
Statistical data is useful for understanding trends in behaviour and the dynamics of social structures over time. To further interrogate the CSO data about the makeup and diversity of Irish society over time, some background information is needed. According to pp. 55–56 of That Was Then, This Is Now: Change in Ireland, 1949–1999, a CSO publication from 2000 marking the 50th anniversary of the Central Statistics Office, there is a trend of increasing religious diversity in Ireland today, and potentially a need to identify new categories and gather more data in order to understand the makeup of the Irish population. When we compare the data from 1991 to 2011 to see trends in population dynamics, focusing on religion and nationality, the identified trend seems quite accurate.
Church of Ireland: 93,056 + 30,464, totalling 123,520. Presbyterian: 14,348 + 8,311, totalling 22,659.
Other stated religions have nearly doubled: 34,867 + 40,227, totalling 75,094. As of 2011, CSO records indicate the Hindu population ordinarily resident in the state as 10,688, meaning that population is more than 11 times larger than at the 1991 survey 20 years earlier – making up over 12% of the “Other stated religion” category.
Muslim: 18,223 + 29,143, totalling 47,366 – a dozen times higher than the 1991 survey.
The number of people who leave their religion unstated is 29,888 + 12,925, totalling 42,813 – this has nearly halved since 1991. However, those who indicate “No religion” number 172,180 + 82,194, totalling 254,374, which is more than five times the figure indicated in 1991 – indicating an overall increase in secularity in Irish society.
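The additions above can be checked with a short script; the paired values are the component figures quoted from the CSO tables:

```python
# Verify the pairwise sums quoted in the text above.
figures = {
    "Church of Ireland": (93_056, 30_464),
    "Presbyterian":      (14_348, 8_311),
    "Other stated":      (34_867, 40_227),
    "Muslim":            (18_223, 29_143),
    "Not stated":        (29_888, 12_925),
    "No religion":       (172_180, 82_194),
}
totals = {name: a + b for name, (a, b) in figures.items()}
for name, total in totals.items():
    print(f"{name}: {total:,}")
```

Running this confirms each of the totals given in the text (123,520; 22,659; 75,094; 47,366; 42,813; 254,374).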
From the evidence presented it seems that there is increasing cultural and religious diversity in modern Ireland; this is due to a variety of factors and may necessitate a widening of the parameters of gathered data. The importance of understanding the form of data cannot be overstated – its categorizations and what exactly the information is indicating, especially if it has been converted to numerical data. In this statistical data set the potential answers are predetermined, which can obscure the true nature of individual cases and means that unidentified categories will be left out or overlooked.
While data can be used to come up with or support theories, there is the capacity for data to be misleading too. In isolation, data like statistics can appear to show something, but this may be negated by other information, such as the revelation of a previously unknown variable. You are essentially building knowledge based on assumptions, using data as part of the process – but it allows you to build an informed opinion. With greater familiarity with the topic at hand, i.e. with more study, comparison and analysis, data can also be very revealing.
On its own, data is just a collection of information and does not directly convert to knowledge, but you can use data to further your understanding and to make estimates based on assumptions. To build knowledge we need data, which is useful for making comparisons and for inferring, illustrating, building and imparting knowledge too! While there are certain considerations worth bearing in mind, simply put: this is the best that we have for the purposes of generalizing population information in Ireland today. Data modelling can be used for displaying, illustrating and imparting knowledge. There is a process of evaluating and deciphering the collection of information that you are working with. Even if the information is true, it is based on certain assumptions that can impact how we interpret and analyze information and draw conclusions.
Georeferencing is used to specify the geographic location of a file or a place. This can be done through various software packages, by automatically, semi-automatically or manually adding metadata to the files in question. Embedded metadata tells us who took the image, where it was taken, and any technical information available, such as camera settings (usually stored by the device automatically); copyright information may be included in some cases. Geotagging may be done automatically by the device if it is GPS-enabled. EXIF data is captured automatically by devices – shutter speed, GPS coordinates and so on – and largely functions to connect a photo to a place, time and subject. GPS devices capture timestamps in the GPX format, and can store either track logs (drawing a path or directions) or specific points captured on a schedule (every 5 seconds, for example). A camera's clock must be synchronised with the GPS device for this to work, as the timestamp is what links a GPS point to an image. Metadata, including GPS coordinates, can also be added manually, and this can be done when images are digitised – which matters particularly for GPS coordinates, because if you do not record the settings from a manual camera at the time, you will never have them. Digital creators describe the content of the image, either embedding the description as part of the digital file or keeping it as an external “sidecar” file as part of a referencing system.
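Since GPX is simply an XML format, its trackpoints (each a latitude, longitude and timestamp) can be read with standard tools. A minimal sketch in Python follows, using an invented two-point track for illustration – matching these timestamps against a photo's clock is exactly how geotagging software assigns coordinates:

```python
import xml.etree.ElementTree as ET

# A tiny, invented GPX track: two points five seconds apart.
gpx = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="53.3498" lon="-6.2603"><time>2016-10-01T12:00:00Z</time></trkpt>
    <trkpt lat="53.3500" lon="-6.2610"><time>2016-10-01T12:00:05Z</time></trkpt>
  </trkseg></trk>
</gpx>"""

ns = {"gpx": "http://www.topografix.com/GPX/1/1"}
root = ET.fromstring(gpx)

# Extract (lat, lon, timestamp) for every trackpoint.
points = [
    (float(pt.get("lat")), float(pt.get("lon")), pt.find("gpx:time", ns).text)
    for pt in root.findall(".//gpx:trkpt", ns)
]
print(points[0])  # (53.3498, -6.2603, '2016-10-01T12:00:00Z')
```

A geotagging tool then looks up the trackpoint whose timestamp is closest to the photo's EXIF timestamp and writes those coordinates into the image's metadata.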
Examples were given from Hochman, Manovich and Yazdani (2014), “On Hyper-locality: Performances of Place in Social Media”, which explored the relationship between physical space and the digital – an interesting topic, particularly as geotagging is relatively accessible to consumers through apps and social media today. Geotagging can be very useful for fields like Archaeology – to which the examples in the practical section were related – or for community-based projects like the Historic Graves project discussed. Many have adopted such methods because photos can be attached to the places where finds have been made. It has also been useful for archival images, including lithographs and paintings – sometimes tracked across hundreds of years through digitised images, with estimated locations in some cases.
In the practical section we learned how to manually add GPS data to image files, and how to batch-add data from external files using GeoSetter – shown in the screenshot below. There was a discussion about the different types of encoding and which programs are best for different purposes, including several examples. GPSBabel assists when GPX files are in the wrong format, converting them so that GeoSetter can read them. The data files in this case were used to construct a map including tracks, which was a relatively straightforward process once we learned how to use the software. The potential to extend such principles can be seen in the development of software like ArchDis, which can convert the data to shapefiles, reducing it all down to points.
Data gathering is massively important, which was explained through examples. Approaches need to be modified and improved depending on who the target audience is. What really stood out to me was the search for feedback in the examples. Case studies, questionnaires and interviews were used, and while they were useful there were gaps in the information – especially as the participants were just behaving as normal with a GPS device in their pockets: they were already used to going about taking photos and were not so involved in the GPS-logging side of Archaeology. There can be different concentrations of images (hotspots), but certain images may be of more significant points. Results, as always, require interpretation.
The additional reading provided was very interesting, highlighting the importance of metadata standards for geotagging and some of the guidelines around this topic. Schemas are used to make the data readable and usable externally. This reading was produced by EMDaWG (the Embedded Metadata Working Group at the Smithsonian Institution) and largely contains technical information. Data needs to be processed to ensure that the same GPS formatting is used, and that the attached data is machine-readable and logical.
Overall, this was a very comprehensive introduction to quite a vast topic. This type of research needs some planning and foresight, with considerations around the data set, the audience and more.
EMDaWG (Embedded Metadata Working Group, Smithsonian Institution). “Basic Guidelines for Minimal Descriptive Embedded Metadata in Digital Images.” April 2010.
Hochman, Manovich and Yazdani. “On Hyper-locality: Performances of Place in Social Media.” 2014.
Computational analysis harnesses the power of machines, and can constitute the sensing and analysing of files such as images. Computers can be taught how to recognise images: colours, shapes and faces all have to be defined for the computer to recognise them when asked.
In class we were given an introduction to Tracking.js, with practical exercises showing us how to use this library in practice and set parameters for what we were trying to detect. Tracking.js is a library of computer-vision routines, useful for detecting attributes of images; it can be used to sense colours, faces or shapes. A large advantage is that it is browser-based, which makes it very easy to use, and it is open source.
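Tracking.js itself is JavaScript, but the principle behind its colour detection can be sketched in a few lines of Python: scan each pixel and keep those within a tolerance of a target RGB value. The image, target colour and tolerance below are all invented for illustration:

```python
def is_close(pixel, target, tolerance=30):
    """True if every RGB channel is within `tolerance` of the target."""
    return all(abs(p - t) <= tolerance for p, t in zip(pixel, target))

def find_colour(pixels, target, tolerance=30):
    """Return the (x, y) positions whose pixel matches the target colour."""
    return [
        (x, y)
        for y, row in enumerate(pixels)
        for x, pixel in enumerate(row)
        if is_close(pixel, target, tolerance)
    ]

# A tiny 3x2 "image": one magenta-ish pixel among dark ones.
image = [
    [(0, 0, 0), (250, 10, 245), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 0),      (12, 8, 5)],
]
print(find_colour(image, target=(255, 0, 255)))  # [(1, 0)]
```

This also makes the section's later point concrete: the detector only finds what its parameters describe, so a too-loose tolerance produces false matches and a too-tight one misses real ones.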
Digital objects require metadata to be usable, and while there are very basic labelling systems that a computer can generate, an image requires time and resources to be devoted to it alongside digitisation. Complicated algorithms are used by computer platforms to “read” an image, or any digital file. This works by detecting elements defined as part of the computer program. Computer scientists create the algorithms, but the consumer can use them – e.g. the SAS suite of textual analysis programs. An important point is that such a program may not detect everything, or might get things mixed up and identify extra elements – mistaking crevices for the eyes of a face, for instance. It only looks at what is available to it, what it can identify, what it has been programmed to do – human expertise is needed to interpret the results.
One should consider that computers are machines, and parameters need to be defined for them to operate under. Limits need to be set, because they work by searching within specified parameters. Software also requires human design; there is a decision-making process in which the default behaviours of programs have to be decided. Any program is designed with certain assumptions in mind, and it is a good idea to bear this in mind when choosing software for computational analysis. The SAS suite of textual analysis programs referred to above, for example, has several tools used for different aspects of data mining, which are listed by Chakraborty, Pagolu and Garla (4). Use of such software for analysis requires an understanding of the data set at hand, and curatorial expertise.
The computer does not care about what it is analysing; it is a machine that calculates based on the parameters defined by the user. Curatorial expertise implies a lifetime of learning behind it, analytical skills and critical thinking. The advantage of the human over the computer is interpretation: a computer can be fooled deliberately, by controlling conditions if you know what it is looking for, or accidentally, by similar-looking areas (in the case of visual computational analysis). While human error is indeed possible, curatorial expertise would hopefully reduce these chances. Furthermore, human expertise may reveal significance more quickly than a computer would – at least from the data output by a computer. Computational analysis also needs human expertise to be developed further, not just in terms of technological capability but also in terms of identifying data relevant to a field, narrowing it down enough, and making it machine-readable.
Computational analysis is not perfect yet, but it is being constantly improved. New digital tools are constantly being developed and improved upon, however it should always be remembered that this requires human effort. Furthermore, it requires human understanding and curatorial expertise as one needs to know what to look for in the first place to design a program to do the task. Though the machine will provide results based on what it is looking for, interpretation is required to understand the significance of results.
Chakraborty, Goutam, Murali Pagolu and Satish Garla. Text Mining and Analysis: Practical Methods, Examples, and Case Studies Using SAS. North Carolina, USA: SAS, 2013. Web. https://www.sas.com/storefront/aux/en/spmanaganalyzunstructured/65646_excerpt.pdf
Costopoulos discusses the normalization of digital archaeology over the last number of years and its significant application, and sums up the aims of publishing in Frontiers in Digital Archaeology: “I want to stop talking about digital archaeology. I want to continue doing archaeology digitally.” What he says here seems deliberately provocative – though his argument that digital tools enhance research in the field of Archaeology still stands, even if such contributions are left out of conversations about digital archaeology. The tools are not unique to Archaeology and the definition of “digital archaeology” may still be developing, but realistically Archaeologists have been using technology to assist their work for a long time. He is an outspoken writer on the subject, but this article attracted criticism from Jeremy Huggett in his aptly titled post “Let’s Talk About Digital Archaeology”, stating that: “A superficial reading of the article suggests a degree of weariness and cynicism here. But it seems to me that the article potentially questions the very legitimacy of what I understand by digital archaeology.” His point is that while Archaeologists invariably use computers at some point in this age, Digital Archaeologists have specific skills. Huggett questions the logic behind Costopoulos not wanting to categorise these skills and talk about their application – pointing out that to do so would be to leave an entire field of archaeology out of the conversation: “Questions surrounding the introduction, development, and implications of new technologies within the subject go far beyond questions of standardisation, ethics etc. in addressing the very fundamental stuff of archaeology and its interpretation – or, at least, they should do.”
In “A Manifesto for an Introspective Digital Archaeology” Huggett brings up the ‘New Aesthetic’ in relation to digital archaeology, discussing trends in theory since the 1950s which have transformed the field. Specifically, the challenge addressed in this piece is how technology has affected how archaeological knowledge is created, and how the subject is viewed. The people working in the field and its scholars have been changed by this: previously there were computer scientists and Archaeologists, but from the mid-80s onwards people began to specialize in digital archaeology itself. While traditional archaeologists and historians still exist, the digital archaeologist and their status in the field is contested – largely out of conservatism. However, the influence of digital archaeology on how the subject is approached and researched should not be underestimated, having transformed the scholar in the Digital Age. Huggett argues that “Digital archaeologists are arguably the best positioned amongst digital humanists to investigate and understand the implications, transformations, and repercussions of digital technologies.” (“A Manifesto for an Introspective Digital Archaeology” 87) New technology has of course provided incredibly useful tools, and this has changed how Archaeologists approach their work – but an understanding of theory is still necessary. “Yet with few exceptions, that preoccupation has not been turned towards the consideration of the digital technologies used within archaeology other than in a superficial way. The belief that computers increasingly facilitate all these theoretical concepts is commonplace – much less so is the recognition that, all too often, they in fact restrict and subvert these very ideals and frequently disguise that they do so through a combination of technological sleight of hand and the law of unintended consequences.” (“A Manifesto for an Introspective Digital Archaeology” 89)
Costopoulos, Andre. “Digital Archeology Is Here (and Has Been for a While).” Specialty Grand Challenge article. Front. Digit. Humanit., 16 March 2016. Web. Date of Access: 02 Dec. 2016. http://dx.doi.org/10.3389/fdigh.2016.00004
Huggett, Jeremy. “A Manifesto for an Introspective Digital Archaeology.” Open Archaeology 1 (2015): 86–95. Date of Access: 02 Dec. 2016.
Huggett, Jeremy. “Let’s Talk About Digital Archaeology.” WordPress. Web. Date of Publication: May 10, 2016. Date of Access: 02 Dec. 2016.
“Mesh of Stones” Archeological Heritage. Published: 2012. http://www.archeritage.co.uk/wp-content/uploads/2012/10/Mesh-of-Stones-470×280.jpg
The videos that I focused on are: 1) Digital Humanities in Practice – Spatial Humanities & Social Justice and 2) Digital Humanities in Practice – Visualising Text. In both cases the names and qualifications of the main speakers are in the video descriptions. YouTube is the source of both videos, which is generally reliable unless a video is taken down – however, they will not be up forever and this should not be considered a permanent source. The dates the videos were created do not seem to be included anywhere, but presumably they were created after January 2015, when the Dariah Teach initiative began. The descriptions contain the publication dates: the first video was published on Nov 23, 2016 and the second on Oct 19, 2016.
The videos were created for several audiences: students, those interested in the Digital Humanities and those interested in some of the projects. Both were created to inform and share information, as part of Dariah Teach's goal of providing open-source teaching materials. The organisation is a reasonable entity to create these videos, as it deals with the specifics of Digital Humanities practices and tools used in academia. The audience would be at the level of students and other academics at least – that is, those with academic interest who may not be experts in the area. Because the videos are on YouTube, the scope for other audiences to access them is high. The vocabulary of the narration seems generally adequate for the intended audience; however, the second of my chosen videos uses a lot of dense jargon and technical terms, as seen in other videos on the channel too. It is difficult to tell how other groups may react upon seeing these videos, particularly the first, as apartheid is still remembered by those who lived through it.
The goal of the first video is to draw attention to the mapping of some institutionalised human rights violations – but also to promote the researcher's project. Its central theme is apartheid, and a platform which combines narratives on significant topics, e.g. the defensive design of Winnie Mandela's house. It also draws attention to inequalities in everyday life, and in academia generally. A lot of information is given about specific cases as part of a 3D recreation of historically significant locations, with the aim of reconstructing twentieth-century history as a social justice/history platform combining video testimony with a 3D environment – illustrating connections between testimonials and reconstructions, e.g. a protective wall built in living spaces to protect from police fire. There is an emphasis on information not previously given to the Truth and Reconciliation Commission, which highlights the difficulties around telling these stories in terms of legality. The second video covers many topics, but they are linked in a manner that makes linear sense; it was probably focused specifically on scholarly interest in interactive textuality, introducing the topic and elaborating on some of its general uses.
Authority of the Speaker
In both cases the speakers have academic expertise, and quick searches support their authority to speak on their chosen topics. We know who each speaker is from the description below the video. They have expertise on the topic, at least academically, and appear quite knowledgeable on the subject from the information provided. Similarly, the speakers in the second video are both specialists in the specific areas they speak about, knowledgeable about theoretical trends in the field and its capabilities.
The first video features Angel D. Nieves, Professor of Africana Studies and Digital Humanities at Hamilton College, US. The video is about black spatial humanities, a subfield of spatial humanities looking at the history of the African diaspora. Here we have two different approaches to making videos, which may affect the reception of the message. The first video emphasises understanding what it was like to be part of the African diaspora, specifically looking at apartheid regimes and the imposition of restrictions and control on people's lives – but also the resistance of those of African descent in their daily lives. The video specifically deals with the concept of restorative social justice through the telling of narratives highlighting injustices during apartheid rule. The organisation lends some credibility to the speaker in the first video; while he may not visibly be of African descent, he does seem to know what he is talking about. He thoroughly explains the field and the issues at hand, specifically social control in this case.
The second video features two speakers: Geoffrey Rockwell, Professor of Philosophy and Humanities Computing at the University of Alberta, Canada, and Stéfan Sinclair, Associate Professor of Digital Humanities at McGill University. The second of my chosen videos is concerned with textual visualisation as a process. Here the speakers are concerned with visual textuality, its uses and applications. They come from specific academic fields, and while they speak about the capabilities of technology generally, much of it relates to their specialisms. However, they cover multiple different topics, such as video games like Pokémon Go, which we are told is categorically different, though it is related as a form of visual image/information literacy.
The point of view of the speaker is clear in each video. In the first video, the relationship between the speaker and the organisation creating the video is transparent. He clearly has an agenda: to expose injustices as part of his research, but also to defend the validity of his research. The second video is more aspirational, concerned with the potential applications of technology. Both sets of speakers are biased in a sense, working in the fields they are defending and promoting, and speculating about the future.
One could evaluate the accuracy of the first video's content by searching through the narratives around apartheid South Africa and looking for evidence of specific events, or by looking for publications or reviews of the study when they are released. The research is for the most part original to the speaker, but also draws on previous research and witness testimonials. The second video is largely opinion, but qualified opinion on the speakers' respective fields; though no sources are provided, there are many general references.
Both videos are of high production quality: well lit and framed like the other videos on DariahTeach, and the YouTube platform lets you change the video quality and generate subtitles. The ideas are presented in clear audio, with clear linear narratives. Titles are used effectively, describing each topic as the video transitions from one to another. The first focuses primarily on the speaker, though it is furnished with examples as he speaks, showing video narratives and computer platforms, with a variety of transitions used to illustrate what is being discussed. There are other videos with a similar purpose to the second video, promoting the Digital Humanities and showing their application, and it is similar in tone to the others on this YouTube channel. The second of my chosen videos has less additional material on top of the information given by the speakers, who remain in a central position onscreen while speaking.
“Digital Humanities in Practice – Spatial Humanities & Social Justice.” DariahTeach. YouTube. Web. Published: 23 Nov. 2016. Date accessed: 30 Nov. 2016. https://www.youtube.com/watch?v=9mAiyn6gMJw&index=1&list=PL77mHK9JuenOnEUrFvNzZB9qKuB3gE892
“Digital Humanities in Practice – Visualising Text.” DariahTeach. YouTube. Web. Published: 19 Oct. 2016. Date accessed: 30 Nov. 2016. https://www.youtube.com/watch?v=uamyLcWtECg
Social media plays an increasingly prominent role in everyday life, especially in the Western world. Large multinational companies are beginning to monopolise particular services that are “free” to the user, like Google's search engine, or Facebook, which owns Instagram and has tried to buy Snapchat. They may be free to use, but that is because their product is the user reach they offer advertisers and businesses. These profiles are becoming increasingly important as online identities, and the past number of years have seen rising demand for services to be provided on the internet. Images hold a prominent position, being eye-catching, identifiable, and generated from a variety of sources.
Interaction begins to take place in a “third space”, with ideas shared between cultures. Though extensively criticised for his use of dense language, Homi K. Bhabha provides a framework for understanding the clashing and merging of cultures and the appropriation of different elements, resulting in “hybridity” and “mimicry”: e.g. American hip-hop culture, the “gangster” image, and the appropriation of box hats and baggy trousers; or comic book culture, popularised alongside the release of multiple high-budget commercial cinema films merging the Marvel and DC “universes”. The role that social media plays in trend-setting, commercial sales and creating a personal image hasn't yet been concretely defined and researched, but there are undoubtedly links. “Selfies” as a form of image have become incredibly powerful online, common amongst Facebook, Snapchat and other social media users, and the profile photo functions as a form of representation of identity. The potential for social media to be capitalised on is recognised by advertising, branding and marketing companies, who increasingly make efforts to engage audiences online, and they certainly take advantage of hashtags.
This third-wave post-structuralist theory is adaptable and can be applied to social media, which sees user-generated content in the form of profile photos, posts, videos and so on. Different platforms see different use: Facebook, Twitter and Instagram all serve different purposes, but the image has significant power in all of them, particularly in the construction of identity.
Photos have a significant amount of information attached to them, like user-generated tags, location, captions etc., separate from the codified construction of the image itself. Who is the audience for posted photos? How are they taken symbolically, and what systems are there for “reading” them?
Of course this space is shared by events, businesses, promoters and advertisers, who all generate images. Information security and privacy issues aside, how information is disseminated has changed greatly, with a blurring of the lines between social or personal communication and media communication.
Postmodernist concerns seem useful for criticism, as reality has become increasingly mediated by social media. Memes, “facts”, even “fake news”, which has come to the attention of the public amid speculation about its role in the American presidential election, all grab our attention with images. Hyperreality, with its connotations of illegitimacy in a Baudrillardian sense, seems apt, and the simulacrum seems applicable to the social world online. It is a question of information as well as images. Images can be stolen, manipulated, framed in different manners. “Catfish” is a documentary about uncovering the false information given by an online “friend” who had falsified an entire family history supported with images and addresses. Connotations of illegitimacy and uncertainty mark the image online, especially considering Photoshop and other photo-modifying software, but the information attached to an image is equally if not more important.
Baudrillard, Jean. Simulacra and Simulation. Ann Arbor: University of Michigan Press, 1994. Print.
Bhabha, Homi. The Location of Culture. London: Routledge, 1994. Print.
Curating an image collection presents its own concerns and challenges. Where photos for a collection are sourced can bring up problems, especially around ownership of images and rights to their distribution. Copyright law aside, there are ethical considerations when sourcing images, especially when images are drawn from private collections. If the photographer is no longer alive to give their permission, as in Finding Vivian Maier, is it acceptable to bring their work into the public domain? This may be an extreme example, but it nonetheless highlights numerous issues around ethics in relation to the presentation of an image collection, and the attachment of information to the images, particularly in the digital age when information can be distributed quickly and cheaply to a large audience. The internet provides numerous platforms that can host image collections, each with its own set of issues. The photographs taken by Vivian Maier were purchased at auction by John Maloof, who scanned them and published them online alongside his blog via the popular website Flickr in August 2009, allowing them to be viewed by a large audience. Though the image collection in this example was largely ‘discovered’ in the form of negatives, physical exhibition required prints, which were sold for monetary gain.
Collections of images have to source their material somewhere. The purpose of the collection largely dictates where photos are sourced from and how they are presented. An amateur historian, Maloof bought 30,000 negatives at an auction, later purchasing more from others who had bought them originally. The images in question were taken mainly around New York and Chicago over a woman's lifetime, as she worked as a nanny for numerous Chicago families and photographed daily. “Taking snapshots into the late 1990′s, Maier would leave behind a body of work comprising over 100,000 negatives. Additionally Vivian’s passion for documenting extended to a series of homemade documentary films and audio recordings.” (“About Vivian Maier”, Maloof Collections) This provided archival material: original negatives showcasing her natural talent, previously unpublished and undocumented. The controversy over this collection of images encompasses privacy issues, and it is worth bearing in mind that Vivian Maier had no input into how the work was framed. How images are framed and presented in an image collection affects how they are perceived and consumed by an audience.
Curation of images requires further categorisation and the attachment of information to photos to contextualise them. Largely unknown in her own lifetime, Vivian Maier's “discovery” began with this collection of images, curated by John Maloof, being published online, and later exhibited first in New York and then internationally. She was an enigmatic and elusive character who made deliberate efforts to hide her work from others, and framing her work in this context gave it a mystery and intrigue that was picked up on by various magazine and newspaper articles that brought her to public attention. The ‘public eye‘ has always been a powerful force, but with the internet consumer interaction has changed, with increased input into the very process the consumer is interacting with. Maloof attracted a lot of attention and began digging into her history, and this became a big part of the exhibition of her work. Crowdsourcing was used to raise funding via the popular website Kickstarter, and a documentary was made, interviewing the children she used to nanny. His efforts received a lot of media attention, with articles being written about Vivian Maier, who became a phenomenon. “Maloof has edited a book of her work, Vivian Maier: Street Photographer, which was published in November, and has raised money for a documentary film about her that is in the works.” (Zax, December 2011) The issue at hand is the invasive searching for information about Maier, resulting in scrutiny of the artist's life as well as her work. Some controversy was generated by the recorded interviews with many surviving people who knew Vivian Maier during her life, building an interesting picture but also implying mental illness and a darker side to her character. Ethics and the attachment of context and information to images are concerns when curating an image collection, largely in the hands of the collectors and the gallery, but now becoming more complex in the age of information.
Zax, David. “Vivian Maier: The Unheralded Street Photographer.” Smithsonian Magazine. December 2011. Web. Date of Access: 18 Nov. 2016. http://www.smithsonianmag.com/arts-culture/vivian-maier-the-unheralded-street-photographer-43399/
Museum and archaeological practice have been transformed by digital technology. “The likes of crowdsourcing, hi-res scanning, 3D rendering and photogrammetry are increasingly becoming part of the methodology of preserving culture in the 21st century.” (Sinclair) This has conceptual ramifications for the cultural artefact itself as well as for the fields of archaeological and museum practice. The production of copies of cultural artefacts has long been part of archaeological and museum methodology, but it is now easier than ever to do so. “Archaeologists and heritage managers have drawn on a range of recording technologies to generate highly accurate datasets of historic objects, monuments and landscapes. They have also increasingly drawn on the rich functionality of 3D modelling packages to create visualisations and reconstructions of the past.” (Jeffrey, 144) Jeffrey seems optimistic about what he terms a “potential golden age.” (144)
Sinclair questions the practice of reconstructing destroyed cultural heritage, pointing out some problematic aspects: “While an artistic reaction resulting in a new work of art is one thing, replicating an object or structure that has been destroyed – or copying it before it is lost – opens up many more questions.” (Sinclair) He gives several examples of companies specialising in the digitisation and reproduction of heritage, mentioning the kinds of projects worked on by companies like Factum Arte, which creates high-resolution copies using methods including aerial photogrammetry. Technological innovation spans beyond the museum, the issue he discusses being the reproduction of objects and even architecture in the Middle East. “But the ability to remake significant structures on the sites where they once existed is clearly a process that requires an awareness of cultural sensitivities, not to mention a desire to collaborate with local communities and organisations.” (Sinclair)
Jeffrey discusses what digital reproduction does to the aura of an object, echoing concerns similar to Benjamin's from 1936, but highlighting the usefulness of this technology: “Bearing in mind that one of the ultimate objectives of digital visualisations is to help us understand the past, not only is it a peculiarly modern medium, but conceptually it represents a huge break from all previous ways of interacting with the world.“ (145) It is not without problematic aspects, as the object no longer needs a location in the real world, and it becomes immaterial and immortalised in a sense, as it cannot decay and is infinitely reproducible (or at least perceived this way by those who are not computer scientists). “No substance – barring the nascent field of haptics which offers a peculiar analogue for the sense of touch, the object has no physical substance that we can sense, no weight, no texture, no smell and no temperature.” (145) Online technology has facilitated increased connectivity and accessibility virtually all over the world, but has generated all sorts of issues around licencing and copyright. Jeffrey's argument is that the aura of the object is not reduced, aura being a difficult concept to pin down in the first place. “This sensation, the thrill of proximity, is not essentially about the physical object itself, it is about the people who have been close to it in the past and our connection to them.” (Jeffrey, 147) The democratisation of recording technologies has its own issues around the means and purpose of reproduction.
Sinclair, Mark. “Should museums be recreating the past?” Creative Review. 20th July 2016. Web. Date of Access: 26 Oct. 2016 https://www.creativereview.co.uk/should-museums-be-recreating-the-past/
Giza Plateau Alignments. Cheops Pyramids. Web. http://www.cheops-pyramide.ch/khufu-pyramid/great-pyramid/giza-plateau-alignments.GIF
Jeffrey, Stuart. “Challenging Heritage Visualisation: Beauty, Aura and Democratisation.” Open Archaeology 1 (2015): 144–
Metadata is the description of all the data surrounding a digital artefact, image or file, with meta as a prefix referring to the underlying nature of this attached data. It is an umbrella term, simply referring to data linked to an object or file. Understanding Metadata defines it thus: “Metadata is structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource.” (1) This is further divided into three categories. “Descriptive metadata describes a resource for purposes such as discovery and identification. It can include elements such as title, abstract, author, and keywords.” (1) This type of metadata is probably the most important for categorising works for searchability; however, there is a clear distinction between it and the second type: “Structural metadata indicates how compound objects are put together, for example, how pages are ordered to form chapters.” (1) While this type links the composite parts of a resource or collection, a third distinction is necessary: administrative metadata, referring to the metadata needed for managing and cataloguing resources.
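The three categories can be sketched as a simple record. This is a minimal illustration in Python; the field names and values are hypothetical, not drawn from any particular standard:

```python
# Illustrative grouping of the three metadata categories for a
# hypothetical scanned photo album; all names and values are invented.
record = {
    "descriptive": {          # supports discovery and identification
        "title": "Chicago Street Scenes",
        "author": "Maier, Vivian",
        "keywords": ["street photography", "Chicago"],
    },
    "structural": {           # how compound parts form the whole
        "page_order": ["cover", "p001", "p002"],
    },
    "administrative": {       # management and cataloguing
        "date_scanned": "2016-10-25",
        "rights": "all rights reserved",
    },
}

# Descriptive fields are what search and discovery typically index:
print(sorted(record["descriptive"].keys()))  # ['author', 'keywords', 'title']
```

The point of the separation is that a catalogue search need only consult the descriptive fields, while structural and administrative data serve display and management.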
XML has become the standard markup language for encoding metadata; though other languages can be used, it has become the standard in most professional and official attribution of metadata. It is a hierarchical language that uses controlled vocabularies; though versatile, it needs standards of use so that the encoded data remains usable by others, and its elements function in the markup as identifiers for strings of data. “The definition or meaning of the elements themselves is known as the semantics of the scheme.” (Understanding Metadata 2)
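As a sketch of how such elements identify strings of data, the following encodes and then decodes a minimal Dublin Core-style record using Python's standard library. The record's content is invented for illustration; the dc element names and namespace are from the Dublin Core element set:

```python
import xml.etree.ElementTree as ET

# A minimal Dublin Core-style record: each element name acts as an
# identifier for the string of data it contains.
record_xml = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Untitled (Street Scene)</dc:title>
  <dc:creator>Maier, Vivian</dc:creator>
  <dc:date>1954</dc:date>
</record>"""

root = ET.fromstring(record_xml)
DC = "{http://purl.org/dc/elements/1.1/}"

# Decoding: retrieve a value by its agreed element name --
# this shared naming is the "semantics of the scheme".
creator = root.find(DC + "creator").text
print(creator)  # Maier, Vivian
```

Any software that knows the same scheme can pull the creator out of millions of such records, which is exactly what free-form text cannot offer.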
There is one important consideration: attaching metadata is a process that requires encoding to attach the data and decoding to use it. Metadata standards are therefore quite important, as they facilitate the use of data attached to files. Standardised schemas like Dublin Core or VRA constitute an imposed structure used to encode the relevant metadata in an agreed format so that the data can be used. Though the XML language technically permits the input of any string of characters as a title, without standards and schemas much of this data would be unusable, as there would be no adequate reference points for searching it; without identified authors in the metadata, for instance, the task of searching for works by a given person becomes akin to searching for a needle in a haystack. Controlled vocabularies are used as a form of indexing to get around this. “The purpose of controlled vocabularies is to organize information and to provide terminology to catalog and retrieve information.” (“What are controlled vocabularies?” 12) This becomes more complex when more terms are added: “A taxonomy is an orderly classification for a defined domain. It may also be known as a faceted vocabulary.” (“What are controlled vocabularies?” 22) Taxonomies stratify hierarchical data for greater ease of use in machine searching, broadening the scope of search terms and narrowing results more efficiently, using less computing power by making the attached metadata more accessible.
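How a taxonomy's broader/narrower relations can widen a search can be sketched as follows; the terms and their hierarchy here are hypothetical, chosen purely for illustration:

```python
# A toy faceted taxonomy: each broader term maps to its narrower terms.
# All terms and relations here are invented for illustration.
taxonomy = {
    "photography": ["street photography", "portrait photography"],
    "street photography": ["candid photography"],
}

def expand(term):
    """Return the term plus all of its narrower terms, so that a broad
    query also matches records indexed under more specific vocabulary."""
    results = [term]
    for narrower in taxonomy.get(term, []):
        results.extend(expand(narrower))
    return results

print(expand("photography"))
# ['photography', 'street photography', 'candid photography', 'portrait photography']
```

A search engine that expands queries this way can match a record tagged only “candid photography” against a query for “photography”, without scanning every record's free text.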
Harpring, Patricia. “What are controlled vocabularies?” Introduction to Controlled Vocabularies: Terminology for Art, Architecture and Other Cultural Works. Los Angeles: Getty Research Institute, 2010. 12–26. Web. Moodle. Date of Access: 25 Oct. 2016.
“Metadata Mapping.” Understanding Metadata. Bethesda: NISO Press, 2004. Web. Moodle. Date of Access: 25 Oct. 2016.