http://mstars.co.id/wp-content/uploads/2012/12/Big-Data-by-DARPA.jpg

Source: mstars.co.id

In her TED Talk, Fei-Fei Li proposes to teach computers to decipher images the way we teach children. Of course, the machines are unable to gather the needed data themselves, so the starting point of this experiment was to provide them with over a billion categorized, described images that were meant to trigger the learning process in the computer. Using Big Data to train computer algorithms doesn’t seem as revolutionary to us now, but it certainly took some time to adjust to the idea that a computer’s learning could be similar to a child’s: if we tell it what the object is enough times, eventually it will learn to recognize it by itself. But is it the right approach? I can see the appeal of such a trained machine, how useful it could be in research that looks at large data sets, and how dramatically it could speed up the analysis process. Although I enthusiastically cheer for technological progress, I cannot help but worry that we might lose some of the most valuable information, the one I keep going back to in nearly every post: socio-cultural context.
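That "show it enough labeled examples" idea can be illustrated with a toy sketch. This is a deliberately simplified stand-in for the deep neural networks Li describes, not her actual method: the two-number feature vectors and the "cat"/"dog" labels below are invented for illustration only.

```python
# Toy "learn from labeled examples" sketch: a nearest-centroid classifier.
# Real image classifiers train on millions of labeled photos; here each
# "image" is just a made-up pair of feature numbers.
from collections import defaultdict
import math

def train(labeled_examples):
    """Average the feature vectors seen for each label (the 'learning' step)."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x, y), label in labeled_examples:
        s = sums[label]
        s[0] += x
        s[1] += y
        s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def predict(centroids, features):
    """Label a new, unseen example by its closest learned centroid."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], features))

# The more labeled examples we show it, the more reliable the averages become.
examples = [
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"), ((0.85, 0.25), "cat"),
    ((0.2, 0.9), "dog"), ((0.3, 0.8), "dog"), ((0.25, 0.85), "dog"),
]
model = train(examples)
print(predict(model, (0.88, 0.22)))  # an unseen example near the "cat" cluster
```

The point of the sketch is only that the machine never "understands" a cat; it accumulates statistics from human-labeled data, which is exactly why the labeling humans matter.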

http://img.deusm.com/informationweek/2016/06/1326109/AI_Henrik5000_iStock_000037174148_Large.png

Source: http://www.informationweek.com

No matter how smart the machine is, or how finely tuned its algorithms are, I still think a human touch is needed in curating artifacts of cultural heritage. Of course, computational analysis can help speed up the creation of metadata, but it is not flawless, and it lacks a certain subtlety of the human touch. Imperfect, because the machine can still mistake the gender of a person in a photo or categorize a skateboard as a curb. Insufficient, because it can leave out minor details that can prove crucial for accurate analysis. Of course, people make mistakes as well, which paradoxically makes the machine resemble us more. There are also attempts to teach computers to recreate or even create objects of cultural heritage (with a little help from the user, who provides the source material, i.e. an image), such as deepart.io.

Can computational analysis exceed curatorial expertise? No. We can give the machine all the information about the materials, techniques, and subjects of an image, and (at least at this moment) it will still lack the sensitivity of a curator. When a curator decides to preserve and conserve an object or image, it is a conscious decision motivated by the object’s importance for cultural heritage. What is more, he or she can make it accessible to others by providing the relevant cultural context vital for understanding the curated piece. Without it, there is a chance that the audience could misunderstand what is being presented to them and why. Finally, even if we agree that computational analysis is equal to curatorial expertise, all the data provided to the machine to make its algorithm functional was appropriated by a curator to start with, which could turn this discussion into a question of how finely we can tune our curatorial techniques and where we should stop.


http://www.smedia.info/wp-content/uploads/2016/05/deepart-example-2.jpeg

Source: http://www.smedia.info

Reference:

Li, Fei-Fei. “How We Teach Computers to Understand Pictures.” TED Talks. YouTube, 23 March 2015. Web. 14 December.