By AI Trends Staff
This week, the Office of the Director of National Intelligence (ODNI) released the first of an evolving set of principles for the ethical use of AI. The six principles, ranging from privacy to transparency to cybersecurity, were described as Version 1.0 and were approved by John Ratcliffe, Director of National Intelligence.
The six principles are positioned as a guide for the nation’s 17 intelligence agencies, especially to help them work with private companies contracted to help the government build systems incorporating AI, according to an account in Breaking Defense. The intelligence agency principles complement the AI principles adopted by the Pentagon earlier this year.
“These AI ethics principles don’t diminish our ability to achieve our national security mission,” stated Ben Huebner, head of the Office of Civil Liberties, Privacy, and Transparency at ODNI. “To the contrary, they help us ensure that our AI or use of AI provides the unbiased, objective, and actionable intelligence policymakers require; that is fundamentally our mission.”
Feedback on the intelligence community’s AI principles is welcome, its managers say. “We are absolutely welcoming public comment and feedback on this,” stated Huebner, noting that there will be a way for public feedback at Intel.gov. “No question at all that there are aspects of what we do that are and remain classified. I think, though, what we can do is talk in general terms about some things that we are doing.”
Dean Souleles, Chief Technology Advisor for ODNI, said the “science of AI is not 100% complete yet,” but the ethics documents give intelligence officials a current roadmap of how best to use this emerging technology.
“It is too early to define a long list of do’s and don’ts. We need to understand how this technology works, we need to spend our time under the framework and guidelines that we’re putting out to make sure that we’re staying within the guidelines. But this is a very, very fast-moving train with this technology,” stated Souleles, in an account in Federal News Network.
Feedback is Welcome
The intelligence community expects to release updates to its AI documents as the technology evolves and as it responds to questions. One issue being considered by the intelligence agencies is the role of the “human-in-the-loop,” in the parlance of the DoD. For example, if a voice-to-text application has been trained on a dialect from one region of the world, how accurate is it on other dialects of the same language? “That’s something I need to know about,” stated Huebner.
Feedback to the intelligence agencies on their AI principles is likely to derive from examples in the private sector. “We think there’s a big overlap between what the intelligence community needs and frankly, what the private sector needs that we can and should be working on, collectively together,” stated Souleles.
For example, in trying to identify the source of a threat, examples from business could be helpful to the intelligence community, which is both reassuring and ominous.
“There are many areas we’re going to be able to talk about going forward, where there’s overlap that does not expose our classified sources and methods,” stated Souleles, “because many, many, many of these things are really common problems.”
A major concern with AI, no matter who is developing it, is bias in algorithms, according to an account in C4ISRNET. The framework suggests steps practitioners can take to discover undesired biases that may enter algorithms throughout the life cycle of an AI program.
“The important thing for intelligence analysts is to understand the sources of the data that we have, the inherent biases in those data, and then to be able to make their conclusions based on the understanding of that,” Souleles stated. “And that is not substantially different from the core mission of intelligence. We always deal with uncertainty.”
Here are the six principles, in the document’s own words:
Respect the Law and Act with Integrity. We will employ AI in a manner that respects human dignity, rights, and freedoms. Our use of AI will fully comply with applicable legal authorities and with policies and procedures that protect privacy, civil rights, and civil liberties.
Transparent and Accountable. We will provide appropriate transparency to the public and our customers regarding our AI methods, applications, and uses within the bounds of security, technology, and releasability by law and policy, and consistent with the Principles of Intelligence Transparency for the IC. We will develop and employ mechanisms to identify responsibilities and provide accountability for the use of AI and its outcomes.
Objective and Equitable. Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias.
Human-Centered Development and Use. We will develop and use AI to augment our national security and enhance our trusted partnerships by tempering technological guidance with the application of human judgment, especially when an action has the potential to deprive individuals of constitutional rights or interfere with their free exercise of civil liberties.
Secure and Resilient. We will develop and employ best practices for maximizing reliability, security, and accuracy of AI design, development, and use. We will employ security best practices to build resilience and minimize potential for adversarial influence.
Informed by Science and Technology. We will apply rigor in our development and use of AI by actively engaging both across the IC and with the broader scientific and technology communities to utilize advances in research and best practices from the public and private sector.
Read the source articles in Breaking Defense, Federal News Network and C4ISRNET. Read the intelligence community’s AI principles at Intel.gov.