Credit: Google News
The vulnerabilities of machine learning models open the door for deceit, giving malicious operators the opportunity to interfere with the calculations or decision making of machine learning systems. Scientists at the Army Research Laboratory, specializing in adversarial machine learning, are working to strengthen defenses and advance this aspect of artificial intelligence.
Corrupted inputs and adversarial attacks often enter a machine learning model’s data set undetected. Adversaries can compromise a model even without knowing which machine learning algorithm is in use, by training a substitute machine learning model and transferring attacks crafted against it to a “victim” model. Corruption can occur even in sophisticated machine learning models trained on abundant data to perform critical tasks. And while some defense techniques can stop certain attacks, given enough runtime, techniques such as the so-called brute-force white-box attack almost always succeed in breaking a machine learning model—regardless of defense strategy, researchers say.
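The substitute-model tactic works roughly as follows. This is a minimal sketch with an invented two-feature linear “victim”; the `victim_predict` function and the perceptron training loop are illustrative assumptions, not ARL’s method:

```python
import random

# Hypothetical "victim" classifier the attacker can only query:
# it labels a 2-D point by a hidden linear rule (weights unknown to the attacker).
def victim_predict(x):
    return 1 if 0.7 * x[0] - 0.4 * x[1] > 0 else 0

# Step 1: query the victim on random inputs to harvest labels.
random.seed(0)
queries = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
labels = [victim_predict(x) for x in queries]

# Step 2: train a substitute linear model on the harvested labels
# (simple perceptron updates stand in for any training procedure).
w = [0.0, 0.0]
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] > 0 else 0
        if pred != y:  # misclassified: nudge weights toward the harvested label
            sign = 1 if y == 1 else -1
            w[0] += sign * x[0]
            w[1] += sign * x[1]

# Step 3: because the substitute closely mimics the victim, attacks crafted
# against the substitute tend to transfer to the victim.
agreement = sum(
    victim_predict(x) == (1 if w[0] * x[0] + w[1] * x[1] > 0 else 0)
    for x in queries
) / len(queries)
print(f"substitute agrees with victim on {agreement:.0%} of queries")
```

The key point is that the attacker never needs the victim’s weights or algorithm, only the ability to observe its outputs.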
Given the rise in the use of machine learning as an important tool for autonomous systems to learn and act independently of human interaction—as in self-driving cars, data analytics, financial trading—the risks of failure due to adversarial machine learning could be disastrous.
The risks are even more potentially catastrophic for the U.S. Army, which seeks to employ machine learning for tactical applications on the battlefield. Here, having confidence in machine learning algorithms is crucial, say Ananthram Swami, Army Research Laboratory (ARL) fellow and senior scientist for network science, and Brian Jalaian, ARL scientist, both of the lab’s Computational and Information Science Directorate, Combat Capabilities Development Command.
Swami was awarded a Meritorious Presidential Rank Award last June from Army Secretary Mark Esper on behalf of the President for Swami’s contribution to network and information sciences, including signal processing, wireless communications and networking for soldiers, according to the Army.
Because the Army is always operating in a contested environment, constantly under attack and deception, its machine learning models need appropriate defenses, Swami explains. “Anything that’s targeted to deceive our systems, we call it adversarial,” he says. “We want our machine learning systems to be robust enough to deter adversarial manipulations.”
The problem is that adversarial interference can come into any phase of the machine learning process, not just in the training phase where the model learns, Swami notes. “The machine learning model could be poisoned, the training data could be poisoned and the operational data could be poisoned, and each has its own risks,” he states.
For the ARL—and the industry—adversarial machine learning is not necessarily a new research area, Swami continues. But what is new is the push to get to the heart of machine learning’s processing, understanding the adversarial vulnerabilities when using deep neural networks.
“Robust machine learning has a very long history,” says Swami. “In classical approaches we knew the data distributions, and we would then postulate what the adversarial distribution would be to develop a robust classifier. But now we are very data driven, and then the notion is, rather than explicitly learning the distribution, you build a data-driven classifier. And the issue is that we don’t quite understand how these classifiers work.” Classifiers, the scientists explain, are how the computer categorizes an object, or assigns an identity.
And although adversarial inputs can affect any type of machine learning model—such as regression—ARL scientists are focusing on deep neural networks, the sophisticated, many-layered computational models. For the past few years, the thrust of the ARL’s work has focused on identifying the weaknesses in the learning mechanisms underlying deep neural networks that could lead to vulnerabilities, says Jalaian, who provides an example.
“Consider a deep neural network trained to examine photographs and differentiate between dogs and cats, typically called a classifier,” he explains. “When presented with pictures of an animal, the neural network is supposed to classify the picture as containing a dog or a cat.”
However, if someone manipulates the underlying structure of the deep neural network, the computer can be made to falsely classify a dog as a cat when shown an image of a dog, he notes.
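A toy version of this flip can be sketched with a linear stand-in for the classifier. The weights, features and epsilon below are invented for illustration; real attacks perturb high-dimensional images, but the mechanism is the same:

```python
# Toy linear "dog vs. cat" classifier on 3 hypothetical features.
w = [1.0, -2.0, 0.5]          # learned weights (score > 0 means "dog")
x = [0.6, 0.1, 0.4]           # a clean input's features: clearly "dog"

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

label = "dog" if score(x) > 0 else "cat"

# FGSM-style step: move each feature by epsilon in the direction that
# lowers the "dog" score. For a linear model the gradient of the score
# with respect to x is just w, so we step against the sign of w.
epsilon = 0.3
x_adv = [xi - epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

adv_label = "dog" if score(x_adv) > 0 else "cat"
print(label, "->", adv_label)  # prints: dog -> cat
```

Each feature moved by at most 0.3, yet the decision flipped; this is the structured, nearly invisible change that makes such attacks hard to detect.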
To combat this, the ARL is focusing some of its research on the fundamental mathematical structures that define the decision boundaries within these deep neural networks, says Jalaian. Researchers also are looking at how to characterize the uncertainty of a particular data source, such as tabulated data, a picture or a sound. “We are focusing in on the so-called adversarial noise, which is a more structured noise that is created to cause machine learning algorithms to make wrong predictions,” he explains.
By understanding the mathematical foundations, the ARL can then work on how to develop general defenses, which could protect multiple machine learning algorithms, Swami emphasizes.
While research has progressed, “what has happened over the last 15 years is that our training algorithms under benign conditions have gotten much better, and of course at the same time the attacks have gotten much better, meaning they can be much more subtle,” Swami shares.
Defenses up to now have been reactions to individual attacks—a point–counterpoint defense against a single type of attack. Today, the ARL and others are working to create more generalized defenses that could apply broadly to any machine learning algorithm, the scientists say.
“Now there is a push to understand and come up with more generic approaches that provide some level of mathematical guarantees against a broader class of attacks in the context of deep neural networks,” Jalaian says.
One challenge is that current defenses require knowledge about the nature and severity of an attack, he notes. So the ARL’s researchers are examining what minimum amount of information is needed when devising defenses.
“One of the areas of particular interest is in understanding how much information is necessary about a particular attack to be able to come up with a robust defense mechanism,” shares Jalaian. “So an ideal approach would not need any information about the nature of that attack.”
Another area of adversarial machine learning research by the ARL focuses on identifying the structural parts of a machine learning model, and how they can be made more robust.
“I think there are still components of the model that need to be studied to find sources of uncertainty,” Jalaian notes. “Part of the uncertainty actually comes from statistical learning approaches, which involves minimizing the loss function of a deep neural network. And once you follow that approach, you inherit mathematical drawbacks from the minimization. So the brittleness of deep neural networks can be attributed to the definition of the notion of learning, which translates to an optimization problem. We have to see if fundamentally this is the right way of defining this. And maybe we can come up with more generic notions of loss functions that have mathematical properties that enable the deep neural network to be more robust against a certain class of attacks.”
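The optimization Jalaian refers to can be illustrated with a toy one-dimensional hinge-loss model (all numbers below are invented): replacing the ordinary empirical loss with a worst-case “min-max” loss is one way the notion of learning itself can be redefined for robustness.

```python
# Illustrative min-max (adversarially robust) loss for a 1-D linear model.

def hinge(margin):
    return max(0.0, 1.0 - margin)

def standard_loss(w, data):
    # Ordinary empirical risk: average hinge loss on the clean points.
    return sum(hinge(y * w * x) for x, y in data) / len(data)

def robust_loss(w, data, eps):
    # Inner maximization: an adversary may shift each x by up to eps.
    # For a linear model the worst shift is -eps * y * sign(w).
    worst = lambda x, y: x - eps * y * (1 if w > 0 else -1)
    return sum(hinge(y * w * worst(x, y)) for x, y in data) / len(data)

data = [(1.0, 1), (-0.8, -1), (0.3, 1)]  # hypothetical (feature, label) pairs
w = 2.0
clean = standard_loss(w, data)
adv = robust_loss(w, data, eps=0.2)
print(f"clean loss {clean:.3f} vs worst-case loss {adv:.3f}")
```

Minimizing `robust_loss` instead of `standard_loss` penalizes points that sit close to the decision boundary, trading some clean accuracy for resistance to the bounded perturbations.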
Ideally, the scientists want to come up with generic models that are robust against “at least all the known attacks, and then for a class of future attacks not even thought of yet,” he adds.
Jalaian admits that developing such a comprehensive defense “is a very computationally and mathematically challenging task,” which requires sophisticated computational efforts. “We also hope to design algorithms that are fast enough to be of use.”
The ARL also is examining the role of training data, on which a machine learning algorithm runs, and how to harness the various data sources that the Army brings in.
“If we look at where our tactical networks are, we get data from a lot of different sources, and we don’t necessarily control all the sources,” Swami says. “So then there is this issue of how much trust do we have in the training data itself, and in the labels that come with the training data. Some of our training data is distributed, which means we have different models, and we may not necessarily have the same amount of trust in the classifiers. One could call it fusion or ensemble learning, where you are bringing things that are disparate amongst their class.”
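The trust-weighted fusion Swami describes can be sketched as weighted voting across classifiers; the labels and trust weights below are hypothetical stand-ins for distributed battlefield sources.

```python
# Sketch of trust-weighted fusion across distributed classifiers.

def fuse(votes, trust):
    # votes: each classifier's label for one input
    # trust: how much we trust each source's training data and labels
    score = {}
    for label, t in zip(votes, trust):
        score[label] = score.get(label, 0.0) + t
    return max(score, key=score.get)

trust = [0.9, 0.6, 0.3]                    # per-source trust weights
votes = ["vehicle", "vehicle", "person"]   # each model's prediction
print(fuse(votes, trust))                  # prints: vehicle
```

A highly trusted source can outvote several weakly trusted ones, which is the point of weighting by confidence in the data rather than counting classifiers equally.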
In many scenarios, the Army has extremely limited training data, Swami continues. “Then the question is, has this machine learning algorithm been able to generalize enough to learn on some other sets of training data and extrapolate from that. And we are not there yet.”
For training data, Jalaian suggests that some knowledge can be gained from past experiences of cybersecurity research. “As far as input data contamination, this is identical to conventional cyber security problems,” he offers. “The only difference is right now the input data goes to a classifier. Those efforts are still valid here.”