Uncovering evidence for historical theories and identifying patterns in past events has long been hindered by the labour-intensive process of inputting data from artefacts and handwritten records.
The adoption of artificial intelligence and machine learning techniques is speeding up such research and drawing attention to overlooked information. But this approach, known as “digital humanities”, is in a battle for funding against more future-focused applications of AI.
“There is a lot of interest in digital humanities, but there is not a lot of money,” says Ilan Shimshoni, professor of computer vision and machine learning at the University of Haifa in Israel, where he works on archaeological projects that include reassembling artefacts from photos of fragments. “If you want to do an analysis of Facebook you’ll get much more money than if you want to look at ancient Greek artefacts.”
Archaeological puzzles may not seem as urgent as computer science projects in healthcare, finance and other industries, but applying algorithmic techniques to historical research can improve AI’s capabilities, says Ayellet Tal, a computer science researcher working on archaeology at the Technion in Israel.
Restoring or recreating archaeological artefacts is a complicated problem for computer vision models. Previous work — algorithms learning to reassemble photos or documents, for example — has not accounted for the degradation of fragments, unclear images or imprecise piece-fitting.
Before attempting to reconstruct artefacts, her team’s AI model first learnt to reverse the erosion process and predict what the original fragments looked like. The researchers then defined how the model should test whether fragments fitted together.
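The piece-fitting step described above can be sketched, in highly simplified form, as scoring how well two fragment edges align. The `edge_fit_score` and `best_match` functions below are hypothetical illustrations of the general idea, not the researchers' actual method, which must also handle erosion, unclear imagery and imprecise fits:

```python
def edge_fit_score(edge_a, edge_b):
    # Negative mean squared difference between two edges' colour
    # profiles: a higher (less negative) score means a better fit.
    # A real system would also have to model erosion along the break.
    return -sum((a - b) ** 2 for a, b in zip(edge_a, edge_b)) / len(edge_a)

def best_match(target_edge, candidate_edges):
    # Index of the candidate fragment whose edge fits the target best.
    return max(range(len(candidate_edges)),
               key=lambda i: edge_fit_score(target_edge, candidate_edges[i]))

# Toy example: the second candidate is nearly identical to the target.
target = [0.1, 0.5, 0.9, 0.4]
candidates = [[0.9, 0.1, 0.2, 0.8], [0.1, 0.5, 0.8, 0.4]]
print(best_match(target, candidates))  # 1
```

In practice the matching is far harder than this pairwise comparison suggests, because eroded fragments no longer fit exactly and many false matches score well.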
“The tasks in archaeology are classical computer vision problems,” says Ms Tal. “But they are much more difficult in archaeology because the objects are not nicely behaved. We want to transform archaeology and we want to advance computer vision because these tasks are where current algorithms fail.”
IBM Japan and Yamagata University last year presented the first geoglyph — a large human-made ancient formation or design on the ground — identified by AI. The task required huge amounts of high-resolution image data, which IBM saw as both a challenge and an opportunity to test its AI and computing capabilities.
“These geoglyphs are not only spread across miles of land, but the nature of the landscape means it can be difficult to discern clues that point to new formations,” says Akihisa Sakurai, an IBM engineer.
DeepMind, Google’s AI lab, has also targeted ancient mysteries. Yannis Assael, a DeepMind research scientist, last year published a paper with Oxford university historian Thea Sommerschield on a deep learning model called Pythia, designed to fill the gaps of missing text in ancient Greek inscriptions. In addition to developing a tool for historians, the research tackled a big challenge in AI: understanding algorithmic decision-making.
Because Pythia assigns weightings to the parts of the input text it drew on to predict each missing character, historians can better evaluate its predictions. The same technique can help make neural networks’ decision-making more transparent in other contexts.
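The weighting idea can be illustrated with a toy sketch. The `restore_gap` function and its scores below are invented for illustration — this shows attention-style weighting in general, not Pythia's actual architecture — but it captures why returning the weights alongside the prediction makes the model inspectable:

```python
import math

def softmax(xs):
    # Normalise raw relevance scores into weights that sum to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def restore_gap(position_scores, votes, candidates):
    # position_scores: one raw relevance number per surviving context
    # character; votes: per-position score for each candidate letter.
    # The softmax weights can be read as "how much each surviving
    # character influenced the prediction" -- the inspectable part.
    weights = softmax(position_scores)
    totals = {c: sum(w * v[c] for w, v in zip(weights, votes))
              for c in candidates}
    best = max(totals, key=totals.get)
    return best, weights

# Two context characters: the first is far more relevant and votes
# for alpha, so the restored letter follows it.
best, weights = restore_gap([2.0, 0.1],
                            [{"α": 1.0, "ε": 0.0}, {"α": 0.0, "ε": 1.0}],
                            ["α", "ε"])
```

A historian reviewing such output could see not just the suggested letter but which surviving characters drove the suggestion, and weigh that against their own reading of the inscription.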
For Matthew Connelly, professor of digital history and principal investigator for The History Lab, a US data-driven research group, it is crucial that historians recognise the role that algorithms play in historical understanding.
Researchers increasingly deal with digitised information as opposed to sources that have been manually curated into archives. If historians do not adapt their methods of searching and analysing sources, their theories and conclusions will be distorted, he says.
For example, physical documents can be digitised using an AI technique called optical character recognition. Even when overall accuracy is high, individual letters and words are frequently misidentified or missing entirely. At a more fundamental level, historical data may mislead historians when algorithms are used to select what gets archived and what gets deleted.
Prof Connelly recently highlighted his concerns in a New York Times op-ed about the US State Department’s practice of using machine learning to separate its documents into “historic” records to be archived, and “temporary” records to be deleted. Machine learning algorithms can overpredict historical significance for some documents and overlook others that will prove to be important, he warns, which he demonstrated in a project with Microsoft called “Predicting History”.
“The US government is using machine learning algorithms to destroy part of the historic record,” he says. “It’s the end of history as we know it.”
Prof Connelly says his research on official secrecy can only be done with machine learning and data science techniques, and — like restoring missing text in ancient inscriptions — uses algorithms that learn to fill in the gaps.
What gets left out of history has real-world consequences for policy areas such as healthcare, economics and international relations. Matthew Lincoln, art historian and data research scientist, notes that this particularly affects people and groups with less power and privilege.
Mr Lincoln believes that machine learning is best placed to expose the gaps in historical records, but he cautions that historians must take a critical view when applying data science techniques.
“If historians use machine learning naively on data generated from archival sources, it will reinforce those existing gaps,” he says. “Poorly made data analyses can unwittingly lend an air of objectivity to historical arguments that really can’t be supported by these incomplete archives.”