Computational analysis draws on the power of machines, and can involve the sensing and analysing of files such as images. Computers can be taught how to recognise images: colours, shapes and faces all have to be defined for the computer to recognise them when asked.
In class we were given an introduction to Tracking.js, with practical exercises showing us how to use the library and set parameters for what we were trying to detect. Tracking.js is a library of computer-vision routines, useful for detecting attributes of images; it can be used to sense colours, faces or shapes. A large advantage is that it is browser based, which makes it very easy to use, and it is open source.
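To illustrate the idea of setting parameters for what you want to detect, the sketch below defines a colour as a predicate over RGB pixel values, the way Tracking.js's `ColorTracker.registerColor` expects. The "orange" thresholds are an assumption chosen for illustration, not values from the library; the browser-side registration calls are shown in comments, since they need Tracking.js loaded in a page.

```javascript
// A colour is "detected" only where pixels satisfy a predicate over their
// RGB values. The thresholds below are a hypothetical rough "orange":
// strong red, moderate green, little blue.
function isOrange(r, g, b) {
  return r > 200 && g > 80 && g < 180 && b < 100;
}

// In the browser, the predicate would be registered with Tracking.js and
// used by a ColorTracker, roughly like this (commented out because it
// requires tracking.js loaded in a web page):
//
// tracking.ColorTracker.registerColor('orange', isOrange);
// var tracker = new tracking.ColorTracker(['orange']);
// tracking.track('#video', tracker, { camera: true });
// tracker.on('track', function (event) {
//   event.data.forEach(function (rect) {
//     console.log(rect.x, rect.y, rect.width, rect.height);
//   });
// });

// Standalone check of the predicate on two sample pixels:
console.log(isOrange(255, 140, 0)); // a typical orange pixel -> true
console.log(isOrange(0, 0, 255));   // pure blue -> false
```

The point of the exercise is that the computer has no notion of "orange" until a human decides exactly which numeric ranges count as orange.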
Digital objects require metadata to be usable, and while there are very basic labelling systems that a computer can generate, an image still requires time and resources to be devoted to it alongside digitisation. Complicated algorithms are used by computer platforms to “read” an image, or any digital file. This works by detecting elements defined as part of the computer program. Computer scientists create the algorithms, but the consumer can use them, for example the SAS suite of textual analysis programs. An important point is that such a program may not detect everything, might get things mixed up, or might identify extra elements, such as mistaking crevices for the eyes of a face. It only looks at what is available to it, what it can identify and what it has been programmed to do; human expertise is needed to interpret the results.
One should consider that computers are machines, and parameters need to be defined for them to operate under. Limits need to be set, because they work by searching within specified parameters. Software also requires human design: there is a decision-making process in which the default behaviours of programs have to be decided. Any program is designed with certain assumptions in mind, and it is worth bearing this in mind when choosing software for computational analysis. The SAS suite of textual analysis programs referred to above, for example, has several tools used for different aspects of data mining, which are listed by Chakraborty, Pagolu and Garla (4). Use of such software for analysis requires an understanding of the data set at hand, and curatorial expertise.
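The idea that limits must be set for a search can be made concrete. Tracking.js trackers expose tuning methods such as `setMinDimension` for discarding implausibly small detections; the standalone function below mimics that behaviour on a list of result rectangles. The sample detections are hypothetical, invented purely for illustration.

```javascript
// Detections come back as rectangles. A minimum-dimension parameter sets a
// limit on which results are kept, filtering out tiny false positives.
// Tracking.js exposes this kind of limit as setMinDimension on its trackers;
// this plain function reproduces the idea outside the browser.
function filterByMinDimension(rects, minDim) {
  return rects.filter(function (r) {
    return r.width >= minDim && r.height >= minDim;
  });
}

// Hypothetical results: one plausible face and one tiny spurious match.
var detections = [
  { x: 40, y: 30, width: 120, height: 120 },
  { x: 5,  y: 5,  width: 4,   height: 4 }
];
console.log(filterByMinDimension(detections, 20).length); // 1
```

Choosing the threshold is itself one of those human design decisions: the program has a default, but a curator must judge what counts as too small to be meaningful in a given collection.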
The computer does not care what it is analysing: it is a machine, and it simply calculates based on the parameters defined by the user. Curatorial expertise, by contrast, implies a lifetime of learning, analytical skills and critical thinking. The advantage of the human over the computer is interpretation. A computer can be fooled deliberately, by controlling conditions if you know what the computer is looking for, or accidentally, by similar-looking areas (in the case of visual computational analysis). While human error is certainly possible, curatorial expertise should reduce its likelihood. Human expertise may also reveal significance more quickly than a computer would, at least from data output by a computer. Finally, computational analysis needs human expertise in order to be developed further, not just in terms of technological capability but also in identifying the data relevant to a field, narrowing it down sufficiently and making it machine readable.
Computational analysis is not yet perfect, but it is constantly being improved. New digital tools are continually being developed and refined; it should always be remembered, however, that this requires human effort. It also requires human understanding and curatorial expertise, as one needs to know what to look for in the first place in order to design a program for the task. Though the machine will provide results based on what it is looking for, interpretation is required to understand their significance.
Chakraborty, Goutam, Murali Pagolu, and Satish Garla. Text Mining and Analysis: Practical Methods, Examples, and Case Studies Using SAS. North Carolina, USA: SAS, 2013. Web. https://www.sas.com/storefront/aux/en/spmanaganalyzunstructured/65646_excerpt.pdf