Digital technologies have been touted as a solution to the COVID-19 pandemic since its early stages. AlgorithmWatch, a non-profit research and advocacy organisation that evaluates and sheds light on algorithmic decision-making processes, has just published a report on Automated Decision-Making Systems in the COVID-19 Pandemic, examining the use of technology to respond to COVID-19.
The report has a European lens, as AlgorithmWatch focuses on the use of digital technology in the EU. Its findings, however, are interesting and applicable regardless of geography, as they rest on the same underlying principles and technologies. The report also references and compares uses of technology worldwide.
Is it AI or ADM?
The report sets the stage by introducing the distinction between Artificial Intelligence (AI) and Automated Decision-Making (ADM). AlgorithmWatch notes that AI is a vague and much-hyped term, to which it has long preferred the more rigorous locution ADM. AlgorithmWatch defines an ADM system as:
“A socio-technological framework that encompasses a decision-making model, an algorithm that translates this model into computable code, the data this code uses as an input — either to ‘learn’ from it or to analyse it by applying the model — and the entire political and economic environment surrounding its use.”
The point is that ADM systems are about more than technology. Rather, AlgorithmWatch notes, they are ways in which a certain technology is inserted within a decision-making process. And that technology may be far less sophisticated or “intelligent” than deep learning algorithms. The same technology can be used for very different purposes, depending on the rationale.
Data collected through a Bluetooth Low Energy-based smartphone app, for example, can be voluntarily and anonymously shared either with a central server or with the smartphones of potentially infected individuals, with no consequences or sanctions whatsoever should a citizen decide not to download it.
Or, the same technology can be adopted within a much more rights-invasive solution, working in tandem with GPS to continuously provide a citizen’s location to the authorities, at times within mandatory schemes, and with harsh sanctions in case they are not respected.
On that premise, the report goes on to examine different ways of using technology and collecting data employed by different initiatives around the world.
Mandatory ADM and bracelets
Some regimes have resorted to invasive ADM solutions that strongly prioritize public health and safety concerns over individual rights, notes AlgorithmWatch. China seems to be leading the way. According to a New York Times report, a color-based rating system called Alipay Health Code is used.
The system uses big data “to draw automated conclusions about whether someone is a contagion risk”. Under this model of ADM, citizens have to fill out a form with their personal details, to be then presented with a QR code in three colors:
“A green code enables its holder to move about unrestricted. Someone with a yellow code may be asked to stay home for seven days. Red means a two-week quarantine.” A scan is necessary to visit “office buildings, shopping malls, residential compounds and metro systems,” according to a Reuters report.
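The decision logic described in these reports can be sketched as a simple mapping from a risk assessment to a movement restriction. The sketch below is purely illustrative: the actual criteria behind Alipay Health Code are not public, so the inputs and rules here are assumptions.

```python
from enum import Enum


class HealthCode(Enum):
    """The three QR-code colors and the restriction each implies."""
    GREEN = "unrestricted movement"
    YELLOW = "stay home for seven days"
    RED = "two-week quarantine"


def assign_code(is_confirmed_case: bool, had_risky_contact: bool) -> HealthCode:
    """Hypothetical rating rule; the real system's criteria are opaque."""
    if is_confirmed_case:
        return HealthCode.RED
    if had_risky_contact:
        return HealthCode.YELLOW
    return HealthCode.GREEN
```

The opacity of the real rules, i.e. which data feed the rating and how, is precisely what makes this model of ADM so contested.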
AlgorithmWatch goes on to add Bahrain, India, Israel, Kuwait, Russia and South Korea to the list of countries where ADM applications are used in a way that poses threats to the rights of their citizens. Although the report notes that the EU fares better in that respect, the use of apps in Hungary, Lithuania, Norway and Poland is rife with issues too.
AlgorithmWatch provides some graphic details on some of those cases before moving on to wearables, aka bracelets. Here it is Liechtenstein leading the way, having launched a study in which 2,200 citizens are given a biometric bracelet to collect “vital bodily metrics including skin temperature, breathing rate and heart rate.”
That data is then sent to a Swiss laboratory for analysis. The experiment, which will ultimately involve all citizens in the country, is based on the premise that by analyzing physiological vital signs “a new algorithm for the sensory armband may be developed that can recognize COVID-19 at an early stage, even if no typical symptoms of the disease are present.”
Wearables are also utilized in countries such as Hong Kong, Singapore, Saudi Arabia, the UAE, and Jordan, but also at Michigan’s Albion College. The report notes that although the stated goal is to enforce quarantine orders and other COVID-19 restrictions, organizations such as the Electronic Frontier Foundation (EFF) are deeply concerned.
The EFF states that wearables, in the context of the pandemic, “remain an unproven technology that might do little to contain the virus, and should at most be a supplement to primary public health measures like widespread testing and manual contact tracing.” Also, and importantly, “everyone should have the right not to wear a tracking token, and to take it off whenever they wish.”
How do contact tracing apps work, and actually, do they work?
The fundamental clash between different models of ADM is exemplified in the global debate around digital apps to complement contact tracing efforts, AlgorithmWatch notes. While some tech enthusiasts argued that privacy and other fundamental rights could be sacrificed to enable public health, not everyone is in favor of that view.
Furthermore, a heated debate on the adoption of relevant technologies ensued, resulting in two main camps: GPS tracking to collect location data, and Bluetooth Low Energy to collect proximity data. The latter camp also split in two opposing lines of thought: centralized vs decentralized. Countries like France, the UK and initially Germany tried to develop centralized Bluetooth-based solutions, while Italy, Switzerland, Denmark, Estonia (and, ultimately, Germany) opted for a decentralized solution.
GPS-based apps work by collecting location data. The rationale is that the data can help health authorities reconstruct the web of contacts that an individual who tested positive for COVID-19 had. This aids contact tracing efforts, the thinking goes, by speeding them up and making them more effective and complete, while also enabling precise geographic identification of outbreaks. GPS-based apps can also enable identification of trends and enforcement of quarantine rules.
Decentralized contact tracing apps work by merely signaling that two phones have been close enough to each other for long enough to consider the encounter at risk. They issue a notification of potential exposure to a positive subject, were one of the owners to be diagnosed with COVID-19 within 14 days, assuming they are willing to upload encounter data through the app.
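The decentralized model described above can be sketched as follows. This is a simplified illustration under stated assumptions: the identifier scheme, the 15-minute duration threshold, and the function names are hypothetical, while real implementations (such as the Google/Apple exposure notification framework) use rotating cryptographic identifiers and more elaborate risk scoring.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Thresholds loosely modeled on common deployments (assumptions, not a spec):
MIN_DURATION_MINUTES = 15   # "close enough for long enough"
LOOKBACK_DAYS = 14          # exposure window after a diagnosis


@dataclass
class Encounter:
    peer_id: str            # anonymous identifier broadcast over Bluetooth LE
    day: date               # when the encounter happened
    duration_minutes: int   # how long the phones stayed in proximity


def exposure_notification_due(local_log: list[Encounter],
                              uploaded_ids: set[str],
                              today: date) -> bool:
    """Return True if any recent, sufficiently long encounter matches an
    identifier voluntarily uploaded by a user diagnosed with COVID-19.
    The matching happens on the device: the log never leaves the phone."""
    cutoff = today - timedelta(days=LOOKBACK_DAYS)
    return any(
        e.peer_id in uploaded_ids
        and e.day >= cutoff
        and e.duration_minutes >= MIN_DURATION_MINUTES
        for e in local_log
    )
```

The key design choice is that the encounter log stays on the device and only the identifiers of diagnosed users are published, which is what distinguishes the decentralized camp from the centralized one.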
Exposure notification APIs developed by Google and Apple for the Android and iOS operating systems, which together account for the vast majority of smartphones in use, have been utilized with varying degrees of success, while also causing some friction. The claim was that no location data would be collected. However, it has been argued that Google still required location settings to be turned on (even though location data was not collected) in order to actually notify users via Bluetooth.
AlgorithmWatch notes that months after the first deployments, we still lack hard evidence on the effectiveness of all such ADM systems. As a systematic review of the literature concluded after analyzing 110 full-text studies, “no empirical evidence of the effectiveness of automated contact tracing (regarding contacts identified or transmission reduction) was identified.” Why?
As the American Civil Liberties Union notes, GPS technology has “a best-case theoretical accuracy of 1 meter, but more typically 5 to 20 meters under an open sky.” Also, “GPS radio signals are relatively weak; the technology does not work indoors and works poorly near large buildings, in large cities, and during thunderstorms, snowstorms, and other bad weather.”
As for Bluetooth, even its own creators have argued for caution: problems in terms of accuracy and “uncertainty in the detection range” are very real, “so, yes, there may be false negatives and false positives and those have to be accounted for.” AlgorithmWatch elaborates further, and notes that based on the above, the efficacy of such apps is questionable.
Thermal scanners, face recognition, immunity passports: should this be our new normal?
The report also notes that for some industries, the pandemic is not exactly catastrophic. Forecasts for the thermal scanning, facial recognition, face and voice biometrics technology markets look outstanding, largely thanks to the pandemic. AlgorithmWatch dubs this both unsurprising and surprising:
“Unsurprising, given that face recognition is being widely adopted and deployed, both inside and outside the EU, with little to no meaningful democratic debate and safeguards in place. But surprising also, given what we know about their scant usefulness in the battle against COVID-19.”
A National Institute of Standards and Technology study argues that “wearing face masks that adequately cover the mouth and nose causes the error rate of some of the most widely used facial recognition algorithms to spike to between 5 percent and 50 percent.” EFF on its part notes that thermal cameras not only present privacy problems, but can lead to false positives carrying the very real risk of involuntary quarantines and/or harassment.
Some countries are experimenting with immunity passports too, from Estonia to the UK, as AlgorithmWatch documents. The rationale for their adoption, and the case for urgently doing so, is the same: when adopted as a digital “credential,” as per Privacy International, individuals become able to prove their health status (positive, recovered, vaccinated, etc.) whenever needed in public contexts, thus enabling governments to avoid further total lockdowns.
Privacy International goes on to add, however, that similarly to all the tools previously described, “there is currently no scientific basis for these measures, as highlighted by the WHO. The nature of what information would be held on an immunity passport is currently unknown.”
AlgorithmWatch concludes by highlighting the common theme emerging from what has been studied: a “move fast and break things” mentality, trading liberty for safety. What’s more, there does not seem to be much evidence of safety gained, nor much in the way of democratic debate, accountability, and safeguards for the liberty given up. Or even agreement on how to measure “success.” The focus should not be to make these technologies better, AlgorithmWatch notes, but rather to safeguard their use:
“Rushing to novel technological solutions to as complex a social problem as a pandemic can result both in not solving the social problem at hand, and in needlessly normalizing surveillance technologies.”