About
Images and analysis related to warfare on Ancient Greek black-figure and red-figure vases.
We propose to develop application software for visual recognition of images related to warfare on Ancient Greek black-figure and red-figure vases. The goal of the project is to enable intelligent sorting of large quantities of vase images into meaningful categories, according to the objects depicted and the composition of the scenes. Such an application will facilitate research on vase-painting imagery, allowing a whole new range of questions to be posed and answered.
Massive quantities of images of Greek vases are already available in open-access databases such as the Beazley Archive, the Corpus Vasorum Antiquorum, the Lexicon Iconographicum Mythologiae Classicae, and the museum databases of the Metropolitan Museum of Art, the British Museum, the Boston Museum of Fine Arts, and the Harvard Art Museums. These image collections are searchable; however, the search operates on verbal tags: if the verbal description of a vase in the collection of the Boston Museum of Fine Arts contains the word “shield,” for example, that vase will appear in a search for “shield”; if the tag is lacking, the vase is lost to the search. Yet given the current state of image-recognition technology, creating a machine learning model trained specifically to recognize images on Greek vases is a feasible and even straightforward task.
Greek vase painting is a highly uniform field: representations are extremely consistent, and numerous visual formulae operate both at the level of the individual objects represented and at the level of the composition of the scenes. These formulae are key components of the visual language of vase-painting, which has its own grammar and syntax. To understand this language better, we need to identify its paradigms and syntagms. Currently, such research often requires laborious searches by hand; a machine learning model will greatly speed up that process.
We propose to focus on vases that feature arms and armor and to develop the application in conjunction with the course “Ancient Greek Warfare” (CLS-STDY 118) in Spring 2020. The students in that course, as well as other qualified student volunteers, will be able to take part in aggregating the data and in training the image classifier to recognize the images. We envisage multiple cycles of machine learning over the course of the spring semester, bringing progressively more sophisticated results and driven, in part, by the participants’ individual research projects. Initially, the model can be trained to identify basic objects such as spears, shields, bows, and chariots. This will yield a large pool of war-related images; students and faculty will then be able to identify recurrent scenes and compositions and train the classifier to recognize them; the classifier will in turn add new images to these categories, generating new data for further human analysis. Examples of questions we could answer over the spring semester with the help of machine learning include: Do dead bodies appear in scenes of combat between nude armed warriors? What objects appear in domestic scenes featuring women in armor? What type of shield does Heracles usually hold in battles with Amazons? Are daggers ever shown in use in combat? The possibility of posing and answering these and innumerable other data-specific questions will eventually advance our understanding of the most essential issues in Greek vase painting, such as differentiating between myth and daily life on vases; it will also shed new light on the historical value of vase painting for reconstructing contemporary warfare.
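As a rough technical illustration of what a first training cycle could look like, the sketch below fine-tunes a pretrained convolutional network for multi-label recognition of basic objects (spear, shield, bow, chariot). It is a minimal sketch under stated assumptions, not a decision of the project: the backbone (a torchvision ResNet-50), the label set, and the function names `build_model` and `train_one_cycle` are illustrative choices only.

```python
# Minimal sketch: fine-tune a pretrained CNN for multi-label recognition of
# basic war-related objects on vase images. Assumes a recent PyTorch/torchvision.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torch.utils.data import DataLoader

OBJECT_LABELS = ["spear", "shield", "bow", "chariot"]  # illustrative first-cycle categories

# Standard preprocessing for ImageNet-pretrained backbones.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_model(num_labels: int = len(OBJECT_LABELS)) -> nn.Module:
    """Replace the final layer of a pretrained ResNet with a multi-label head."""
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_labels)
    return model

def train_one_cycle(model: nn.Module, loader: DataLoader,
                    epochs: int = 5, lr: float = 1e-4) -> nn.Module:
    """One 'cycle' of training on student-labelled images.

    `loader` is assumed to yield (image_tensor, multi_hot_label) batches,
    where multi_hot_label[i] == 1.0 if the i-th object appears in the scene.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    criterion = nn.BCEWithLogitsLoss()  # multi-label: each object is predicted independently
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

Under this sketch, each subsequent cycle would simply re-run `train_one_cycle` on the enlarged, student-corrected label set, which is how the model could grow from basic object recognition toward scene and composition categories.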
Building the image classifier model involves several stages. First, a large quantity of data needs to be imported into the system; we plan to use the open-access database of the Beazley Archive as our training data. The initial data aggregation will be carried out jointly by Archimedes Digital, a digital humanities startup (whose founder, Luke Hollis, is committed to participating in this project), and by interested Harvard students in Computer Science and Applied Computation at SEAS. The technical aspects of the model training will likewise be handled by Archimedes Digital and interested Harvard students, in conjunction with the team led by Reinhard Förtsch, the information technology director of the Deutsches Archäologisches Institut, who is currently working on advancing machine-learning image recognition in ancient art and has expressed interest in supporting our project.
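The aggregation step could be prototyped along the lines of the sketch below, assuming the images are stored locally and the student annotations are exported to a simple CSV manifest. The file name `vase_annotations.csv`, its columns, and the class `VaseImageDataset` are hypothetical illustrations, not a real Beazley Archive export format or API.

```python
# Minimal sketch of the labelling/aggregation step: pair locally stored vase
# images with student-supplied object tags read from a hypothetical CSV manifest
# ("vase_annotations.csv" with columns: filename, tags; tags separated by ";").
import csv
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset

class VaseImageDataset(Dataset):
    """Pairs vase images on disk with multi-hot object labels."""

    def __init__(self, manifest_csv: str, image_dir: str, labels, transform=None):
        self.labels = list(labels)          # e.g. ["spear", "shield", "bow", "chariot"]
        self.image_dir = Path(image_dir)
        self.transform = transform
        self.rows = []
        with open(manifest_csv, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):   # expected columns: filename, tags
                self.rows.append((row["filename"], row["tags"].split(";")))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        filename, tags = self.rows[idx]
        image = Image.open(self.image_dir / filename).convert("RGB")
        if self.transform:
            image = self.transform(image)
        # Multi-hot vector: 1.0 where the tagged object appears in the scene.
        target = torch.tensor([1.0 if lbl in tags else 0.0 for lbl in self.labels])
        return image, target
```

A `torch.utils.data.DataLoader` wrapped around such a dataset could then feed batches into a training loop along the lines of the `train_one_cycle` sketch above.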
The image classifier model developed by the end of the grant period will remain expandable: it can be trained further to recognize new categories of objects, scenes, and compositions.