Before explaining why standardised metadata is important, it is necessary to understand what metadata is. Essentially, metadata is ‘data about data’. We use metadata to describe other objects: when you open a book and see the author, publisher, contents and so on, you are looking at the book’s metadata. For digital preservation purposes, standardised systems are created to record these features so that an easily searchable and detailed record is created. As Adam Rabinowitz demonstrates, when we accurately create and store this data we can prevent the ‘flattening process’ that compresses our history into thematic periodisation (Rabinowitz). So why not do it?
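The idea of a standardised record can be illustrated with a short sketch. The Python example below uses field names borrowed from the Dublin Core element set (title, creator, publisher, date, format, language); the book details themselves and the `find_by_creator` helper are hypothetical, chosen purely to show how shared field names make records searchable.

```python
# A minimal sketch of a standardised metadata record, using field
# names borrowed from the Dublin Core element set. The values are
# hypothetical illustrations, not a real catalogue entry.
book_metadata = {
    "title": "A History of Reading",
    "creator": "Manguel, Alberto",
    "publisher": "HarperCollins",
    "date": "1996",
    "format": "print book",
    "language": "en",
}

def find_by_creator(records, surname):
    """Return records whose 'creator' field contains the surname.

    Because every record uses the same standardised field name,
    this kind of search works across an entire collection.
    """
    return [r for r in records if surname in r.get("creator", "")]
```

The point is not the code itself but the convention: once every record agrees on what the fields are called, building a searchable, detailed catalogue becomes straightforward.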
The concept of metadata seems relatively straightforward, yet there are dozens of articles, across numerous disciplines, outlining best practice and explaining why that practice matters (Linkert; Park; Riley). This abundance of guidance suggests a resistance within the academic community to implementing a strong metadata policy in research. Perhaps the foremost factor motivating this resistance is simply time. Creating and fulfilling metadata standards is a time-consuming process that requires added thought and labour; as Paul Edwards et al. show, adding metadata to a project can increase the workload significantly (Edwards et al. 673). The necessary question, then, is why this extra workload is worth your time and effort.
There are many advantages to creating good, in-depth, standardised metadata records; each on its own could fill a journal. Simply put, standardised metadata does require a longer research period at the outset, but in the end it promotes better research. That research is not only more beneficial and expedient for the researchers who follow, but also provides high-quality data, resulting in efficient work and findings rooted in solid evidence (Corti et al.). As in the cultural sector, metadata has vast significance for scientific research. Edwards et al. describe extensive, highly structured data as ‘a holy grail, a magic chalice’ that can make data integration and sharing seamless (Edwards et al. 672). Not only does this make datasets easily transferable between researchers and communities, it also enables transferability between systems, which is essential to the preservation of digital material. Without proper standardised, monitored and extensive metadata, we run the risk of losing the digital data we have created. Despite the perception of technological infallibility, decay can erode our digital present just as it has our analogue past. Good structured metadata therefore not only allows for an efficient and expedient workflow but also helps to preserve online resources that can be utilised in years to come.
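The claim about transferability between systems can be sketched briefly: once a record follows a shared standard, it can be serialised into a neutral interchange format such as JSON and reconstructed, unchanged, by an entirely different system. The record below is hypothetical, again using Dublin Core-style field names.

```python
import json

# Hypothetical record using Dublin Core-style field names.
record = {
    "title": "A History of Reading",
    "creator": "Manguel, Alberto",
    "date": "1996",
}

# Serialise to JSON, a neutral text format any system can parse.
serialised = json.dumps(record, sort_keys=True)

# A second system can rebuild the identical record from the text
# alone: nothing about the data is tied to the original software.
restored = json.loads(serialised)
assert restored == record
```

The standard, not the software, carries the meaning; that is what allows a dataset to outlive the system that created it.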
Corti, Louise, et al. “Managing and Sharing Data.” UK Data Archive. Web. http://www.data-archive.ac.uk/media/2894/managingsharing.pdf. 25 October 2016.
Edwards, Paul N., et al. “Science Friction: Data, Metadata, and Collaboration.” Social Studies of Science, 41.5 (2011): 667–690. Web. http://www.jstor.org/stable/41301955. 25 October 2016.
Linkert, Melissa, et al. “Metadata Matters: Access to Image Data in the Real World.” The Journal of Cell Biology, 189.5 (2010): 777–782. Web. http://www.jstor.org/stable/27811023. 25 October 2016.
Park, Jung-ran, et al. “From Metadata Creation to Metadata Quality Control: Continuing Education Needs Among Cataloging and Metadata Professionals.” Journal of Education for Library and Information Science, 51.3 (2010): 158–176. Web. http://www.jstor.org/stable/40732596. 25 October 2016.
Rabinowitz, Adam. “It’s About Time: Historical Periodization and Linked Ancient World Data.” Institute for the Study of the Ancient World. Web. http://dlib.nyu.edu/awdl/isaw/isaw-papers/7/rabinowitz/. 25 October 2016.
Riley, Jenn, and Kelcy Shepherd. “A Brave New World: Archivists and Shareable Descriptive Metadata.” The American Archivist, 72.1 (2009): 91–112. Web. http://www.jstor.org/stable/40294597. 25 October 2016.