Artificial intelligence is here to save us from coronavirus. It spots new outbreaks, identifies people with fevers, diagnoses cases, prioritizes the patients most in need, reads the scientific literature, and is on its way to creating a cure.
As the world confronts the outbreak of coronavirus, many have lauded AI as our omniscient secret weapon. Although corporate press releases and some media coverage sing its praises, AI will play only a marginal role in our fight against Covid-19. While there are undoubtedly ways in which it will be helpful—and even more so in future pandemics—at the current moment, technologies like data reporting, telemedicine, and conventional diagnostic tools are far more impactful. So how can you avoid falling for the AI hype? In a recent Brookings Institution report, I identified the necessary heuristics for a healthy skepticism of AI claims around Covid-19.
Let’s start with the most important rule: always look to the subject matter experts. If they are applying AI, fantastic! If not, be wary of AI applications from software companies that don’t employ those experts. Data is always dependent on its context, which takes expertise to understand. Does data from China apply to the United States? How long might exponential growth continue? By how much will our interventions reduce transmission? All models, even AI models, make assumptions about questions like these. If the modelers don’t understand those assumptions, their models are more likely to be harmful than helpful.
Thankfully, in the case of Covid-19, epidemiologists know quite a bit about the context of the data. Even though the virus is new and there is much to be learned, there is tremendous depth of expertise around what questions to ask and how they can be answered. Modern statistical epidemiology dates to the early 1900s, which means the field is incorporating a century of scientific research into its analyses. In contrast, machine learning methods tend to assume that everything can be learned directly from a dataset, without incorporating the broader scientific context.
Consider, for example, the claim that AI was the first to detect the coronavirus. Machine learning is heavily dependent on historical data to create meaningful insights. Since there is no database of prior Covid-19 outbreaks, AI alone cannot predict the spread of this new pandemic. What's more, the claim implicitly overstates AI's ability to inform us about rare, momentous events, which is not its strength at all. As it turns out, while software may have sounded the alarm, grasping the significance of the outbreak required human analysis.
AI’s real value lies in its ability to create many minute predictions. For instance, the AI epidemiology company BlueDot has successfully helped the state of California monitor the spread of the coronavirus. The company augmented traditional epidemiological models with machine learning, using flight patterns to predict the spread at the zip code level. That’s the value of AI. Those granular estimates can enable precise allocation of funding, supplies, and medical staff.
That said, you should not trust all individualized estimates from AI. Frequently, a company will report accuracy—the percent of predictions that are correct during development—to tout the effectiveness of an AI model. Unfortunately, this number is easy to juke and often offers an incomplete picture. For instance, Alibaba has claimed it can diagnose Covid-19 from CT scans with 96% accuracy. But if you check in with the subject matter experts, you'll see that the American College of Radiology has said that CT scans should not be used as "first-line tests to diagnose Covid-19." Other experts echo that this method is not yet proven, and further caution that while the algorithm may be fast, CT scan rooms must be cleaned and their air recirculated between each patient.
As for that impressively high rate of accuracy, it’s time to share a dirty secret of the machine learning world: any data scientist in the field would scoff at that level of accuracy. It’s unbelievably high. Without any caveat, self-criticism, or external validation, it’s suspicious on its face. Even if it is true, we often need metrics aside from accuracy to know if a model is effective, such as the percent of sick individuals who are correctly diagnosed. While fatigued medical systems have turned to AI analysis of x-rays for triaging patients based on the severity of their lung conditions, AI can’t currently diagnose Covid-19 on its own.
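The gap between accuracy and more informative metrics can be made concrete with a toy calculation. The numbers below are invented purely for illustration, not drawn from any real Covid-19 evaluation: when most people in a test set are healthy, a model that misses the majority of sick patients can still report impressively high accuracy.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(tp, fn):
    """Fraction of sick individuals correctly diagnosed (recall)."""
    return tp / (tp + fn)

# Hypothetical test set: 1,000 people, only 50 actually infected.
tp, fn = 10, 40   # the model catches just 10 of the 50 infections
tn, fp = 945, 5   # but labels almost every healthy person correctly

print(f"accuracy:    {accuracy(tp, tn, fp, fn):.1%}")   # 95.5%
print(f"sensitivity: {sensitivity(tp, fn):.1%}")        # 20.0%
```

A model like this would look excellent if accuracy were the only number reported, yet it misses 80 percent of infections, which is why external validation and metrics such as sensitivity matter.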