The UK’s top intelligence and security body, GCHQ, is betting big on artificial intelligence: the organization has revealed how it wants to use AI to boost national security.
In a new paper titled “Pioneering a New National Security,” GCHQ’s analysts went to lengths to explain why AI holds the key to better protection of the nation. The volumes of data that the organization deals with, argued GCHQ, places security agencies and law enforcement bodies under huge pressure; AI could ease that burden, improving not only the speed, but also the quality of experts’ decision-making.
“AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound,” said Jeremy Fleming, the director of GCHQ. “AI is already invaluable in many of our missions as we protect the country, its people and way of life. It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.”
GCHQ is already heavily involved in AI-related projects. Although the organization will not disclose the exact details of its use of the technology, Fleming pointed to various partnerships with AI-related start-ups located around the country, as well as a strategic collaboration with the Alan Turing Institute, which was founded to advance research in AI and data science.
It is no news, therefore, that the intelligence body has a strong interest in using AI; but the newly published paper suggests that GCHQ is prepared to further ramp up its algorithmic arsenal in the years to come. The threats to the nation are increasing, argued Fleming, and they are coming from hostile states that are themselves armed with AI tools – and the UK should be prepared to face modern-day risks.
“The nation’s security, prosperity and way of life faces new threats from hostile states, terrorists and serious criminals, often enabled by the global internet. An ever-growing number of those threats are to the UK’s digital homeland – the vital infrastructure and online services that underpin every part of modern life,” said Fleming.
Almost half of UK businesses have reported a cyberattack in the past 12 months, with a fifth of those leading to a significant loss of money or data, says GCHQ’s paper. AI could help the agency better identify malicious software, and continually update its dictionary of known patterns to anticipate future attacks. The technology could also be used to fight online disinformation and deepfakes, by automatically fact-checking content, but also weeding out botnets and troll farms on social media.
AI will also help identify grooming behavior in the text of chat-room messages to prevent child sexual abuse; it will scan content and metadata to find illegal images being exchanged, while at the same time sparing human experts from viewing traumatically disturbing material. Using similar methods, the technology will assist the fight against drug, weapon and human trafficking – analyzing large-scale chains of financial transactions to help dismantle some of the 4,772 groups in the UK estimated to be involved in serious organized crime.
But as with any other application of AI, using algorithms for national security purposes raises ethical questions – in fact, when the stakes are so high, so are the concerns about transparency, fairness and trust. At the same time, the nature of intelligence and security services means that it is difficult to reveal all the details of GCHQ’s operations. In other words, compromise will be necessary.
“In the case of national security, intelligence agencies traditionally operate behind a veil of secrecy and are not inclined to share information about their activities. It’s basically true by definition that their activities need not be explicable,” Robert Farrow, senior research fellow at the Open University, tells ZDNet.
“However, we know that machine learning can result in biased decision making if it is trained on biased data. If a biased algorithm is used for, say, profiling of potential terrorists by mining data from social networks, decisions might be made about people’s lives with no way for the public to check or evaluate whether the actions taken were ethical.”
When it comes to transparency, GCHQ’s track record is questionable at best. The organization has come under public scrutiny numerous times since Edward Snowden, a former contractor at the US National Security Agency, shed light on the agency’s mass surveillance practices. GCHQ’s secretive bulk data collection program was ruled unlawful by the independent judicial body the Investigatory Powers Tribunal (IPT).
Since then, surveillance laws have changed, but the UK’s Investigatory Powers Act (IPA), also known as the Snoopers’ Charter, still makes it legal for government agencies like GCHQ to collect and retain some citizen data in bulk.
GCHQ’s latest paper, perhaps in an attempt to reassure the public on the use of their data, has a strong ethical focus. The agency committed to a fair and transparent use of AI, recognizing that the nature of GCHQ’s operations might impact privacy rights “to some degree”, and pledging adherence to an AI ethical code of practice, which is yet to be established.
“We need honest, mature conversations about the impact that new technologies could have on society. This needs to happen while systems are being developed, not afterwards. And in doing so we must ensure that we protect our [citizens’] right to privacy and maximize the tremendous upsides inherent in the digital revolution,” said Fleming.
Many experts welcomed the agency’s renewed focus on ethical considerations, which will ultimately boost public trust and contribute to the uptake of a technology that could prove a game-changer in protecting the UK’s national security interests. Andrew Dwyer, researcher in computational security at Durham University, explains that AI could even help ease concerns about mass surveillance, by helping GCHQ identify and target the right individuals in the fight against terrorism or trafficking.
“Of course it is a good thing that GCHQ uses these systems,” Dwyer tells ZDNet. “In this example, it could actually focus surveillance away from mass surveillance as such. This paper is a first step into thinking about the role of AI being applied in national security.”
But while many will agree that GCHQ’s use of AI is justified and necessary, the deployment of the technology is likely to trigger much debate. Farrow, for instance, believes that an ethical framework is not sufficient: even intelligence agencies should be required to provide an account of how algorithms influence decision-making. “What is really needed is for the law to catch up with technological developments and effectively regulate the use of AI,” he argues.
One thing is certain: privacy groups and digital rights activists will be watching GCHQ’s upcoming ethical code of practice closely.