Questions that loom large for the wider application of artificial intelligence (AI) in Defense Department operations often center on trust. How does an operator know when the AI is wrong, when it has made a mistake, or when it has not behaved as intended?
Answers to questions like that come from a technical discipline known as Responsible AI (RAI). It’s the subject of a report issued by the Defense Innovation Unit (DIU) in mid-November called Responsible AI Guidelines in Practice, which addresses a requirement in the FY21 National Defense Authorization Act (NDAA) to ensure that the DoD has “the ability, requisite resourcing, and sufficient expertise to ensure that any artificial intelligence technology…is ethically and responsibly developed.”
DIU’s RAI guidelines provide a framework for AI companies, DoD stakeholders, and program managers that can help ensure AI systems are built on the principles of fairness, accountability, and transparency at each step of the development cycle, according to Jared Dunnmon, technical director of the artificial intelligence/machine learning portfolio at DIU.
This framework is designed to achieve four goals, said Dunnmon:
- Clarify end goals, align expectations, and acknowledge risks and tradeoffs in order to speed development;
- Employ fairness, accountability, and transparency in the development, testing, and vetting of AI systems in order to avoid, for example, bias in facial recognition systems;
- Improve evaluation, selection, prototyping, and adoption in order to avoid potentially harmful outcomes; and
- Prompt constructive questions and conversations that improve an AI project’s chances of success.
Trust in the AI is foremost
Just as Isaac Asimov’s Three Laws of Robotics describe ethical behavior for robots, the DIU’s guidelines offer five ethical principles for the development and use of artificial intelligence.
- Responsible: DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities. That means that human beings maintain responsibility for the development and use of AI.
- Equitable: The DoD will take deliberate steps to minimize unintended bias in AI capabilities. That requires reducing bias through testing, selection of adequate training sets, and diverse engineering teams; a minimal sketch of such testing follows this list.
- Traceable: The Defense Department’s AI capabilities will be developed and deployed in such a manner that relevant personnel possess an understanding of the technology, development processes, and operational methods, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The DoD’s AI capabilities will have explicit, well-defined mission use cases and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
- Governable: DoD will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and have the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
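The “Equitable” principle’s call for bias testing can be made concrete with a short sketch. The example below checks a model’s accuracy separately for each demographic subgroup in a test set and flags the model when the gap exceeds a threshold; the data, cohort labels, and threshold are all hypothetical illustrations, not any DoD or DIU tooling.

```python
# Minimal sketch of subgroup bias testing. All values are hypothetical.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute accuracy separately for each subgroup of the test set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(predictions, labels, groups, max_gap=0.05):
    """Flag the model when per-group accuracy differs by more than max_gap."""
    scores = accuracy_by_group(predictions, labels, groups)
    gap = max(scores.values()) - min(scores.values())
    return scores, gap, gap > max_gap

# Hypothetical face-matching results evaluated across two cohorts.
preds  = [1, 0, 1, 1, 0, 1, 0, 1]
truth  = [1, 0, 1, 0, 0, 1, 1, 0]
cohort = ["A", "A", "A", "A", "B", "B", "B", "B"]
scores, gap, biased = flag_disparity(preds, truth, cohort)
print(scores, f"gap={gap:.2f}", "FLAG FOR REVIEW" if biased else "ok")
# -> {'A': 0.75, 'B': 0.5} gap=0.25 FLAG FOR REVIEW
```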
It’s that fifth principle, governable, that addresses the questions asked at the top about letting the operator know when the AI is wrong. Operators need to establish trust in AI systems or they simply won’t use them, and non-use is not an option for something as complex as the Joint All Domain Command and Control (JADC2) concept of operations.
“Governable AI systems allow for graceful termination and human intervention when algorithms do not behave as intended,” said Dr. Amanda Muller, a consulting AI systems engineer and technical fellow who serves as Responsible AI Lead for Northrop Grumman, one of the few companies with such a position. “At that point, the human operator can either take over or make adjustments to the inputs, to the algorithm, or whatever needs to be done. But the human always maintains the ability to govern that AI algorithm.”
Northrop Grumman’s adoption of these RAI principles builds justified confidence in the AI systems being created: the human can understand and interpret what the AI is doing, determine through verification and validation whether it is operating correctly, and take action if it is not.
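To make the governable principle concrete, here is a minimal sketch of what such a control loop might look like in code. It is an illustration under assumed interfaces, not an actual Northrop Grumman or DoD implementation; the model interface, confidence threshold, and fallback behavior are hypothetical.

```python
# Minimal sketch of a "governable" control loop: the human operator can
# always intervene, and the system disengages automatically when model
# behavior falls outside expected bounds. All interfaces are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

class GovernableController:
    def __init__(self, model, min_confidence=0.9):
        self.model = model
        self.min_confidence = min_confidence
        self.engaged = True  # operator can flip this at any time

    def disengage(self):
        """Graceful termination: hand control back to the human."""
        self.engaged = False

    def step(self, observation, operator_override=None):
        # 1. A human override always wins.
        if operator_override is not None:
            return operator_override
        # 2. If disengaged, defer every decision to the operator.
        if not self.engaged:
            return "DEFER_TO_HUMAN"
        # 3. Otherwise consult the model, but fall back to the human
        #    whenever its confidence drops below the threshold.
        decision = self.model(observation)
        if decision.confidence < self.min_confidence:
            self.disengage()  # unintended behavior -> stop acting autonomously
            return "DEFER_TO_HUMAN"
        return decision.action

# Usage with a stub model:
model = lambda obs: Decision("TRACK", 0.95 if obs == "clear" else 0.4)
ctrl = GovernableController(model)
print(ctrl.step("clear"))     # "TRACK" - autonomous operation
print(ctrl.step("degraded"))  # "DEFER_TO_HUMAN" - low confidence, disengages
print(ctrl.step("clear"))     # still deferring until the operator re-engages
print(ctrl.step("clear", operator_override="HOLD"))  # override always wins
```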
The importance of doing so is clear for the future of AI in the military. If AI systems do not work as designed or are unpredictable, “leaders will not adopt them, operators will not use them, Congress will not fund them, and the American people will not support them,” states the Final Report from the National Security Commission on Artificial Intelligence (NSCAI). This commission was a temporary, independent, federal entity created by Congress in the National Defense Authorization Act for Fiscal Year 2019. The commission was led by former Google CEO Eric Schmidt and former Deputy Secretary of Defense Robert Work, and delivered its 756-page Final Report in March 2021, disbanding in October.
“The power of AI is its ability to learn and adapt to changing situations,” said Muller. “The battlefield is a dynamic environment and the side that adapts fastest gains the advantage. Like with all systems, though, AI is vulnerable to attack and failure. To truly harness the power of AI technology, developers must align with the ethical principles adopted by the DoD.”
The complexity of all-domain operations will demand AI
The DoD’s pledge to develop and implement only Responsible Artificial Intelligence will underpin development of systems for JADC2. An OODA (Observe, Orient, Decide, Act) loop stretching across space, air, ground, sea, and cyber will only be possible if an AI system can control the JADC2 infrastructure.
“The AI could perceive and reason on the best ways to move information across different platforms, nodes, and decision makers,” explained Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman’s Networked Information Solutions division. “And it could optimize the movement of that information and the configuration of the network because it’ll be very complex.
“We’ll be operating in contested environments where it will be difficult for a human to react and understand how to keep the network and the comm links functioning. The use of AI to control the communication and networking infrastructure is going to be one big application area.”
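As a rough illustration of the kind of reasoning Boyle describes, the sketch below models the network as a weighted graph whose link costs shift with conditions such as jamming or congestion, and recomputes the best path for information when a link degrades. The topology, node names, and cost values are invented for the example; a fielded system would learn and update these costs continuously.

```python
# Minimal sketch of adaptive routing over a contested network.
# Topology, node names, and costs are invented for illustration.
import heapq

def best_path(links, src, dst):
    """Dijkstra's shortest path over a dict of {node: {neighbor: cost}}."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in links.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

links = {
    "sensor":    {"satellite": 2.0, "gateway": 5.0},
    "satellite": {"gateway": 1.0, "shooter": 6.0},
    "gateway":   {"shooter": 1.0},
}
print(best_path(links, "sensor", "shooter"))
# -> (4.0, ['sensor', 'satellite', 'gateway', 'shooter'])

# A contested environment degrades the satellite link; reroute.
links["sensor"]["satellite"] = 50.0
print(best_path(links, "sensor", "shooter"))
# -> (6.0, ['sensor', 'gateway', 'shooter'])
```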
At the same time, RAI will serve as a counterweight to America’s great-power competitors, China and Russia, who certainly won’t engage in ethical AI as they push for power. As part of its strategic plan, China has declared it will be the global leader in AI by 2030, and its investments in dual-use technologies like advanced processing, cybersecurity, and AI threaten U.S. technical and cognitive dominance.
“The key difference is that China is applying AI technologies broadly throughout the country,” said Boyle. “They are using AI for surveillance and tracking their citizens, students, and visitors. They use AI to monitor online behaviors, social interactions and biometrics.
“China has no concern about privacy rights or ethical application of the data that AI is able to gather and share. All data is collected and used by both industry and the Chinese government to advance their goal of global, technical dominance by 2030.”
Fundamental to the U.S. response to China’s actions is assuring that the Defense Department’s use of AI reflects democratic values, according to Boyle.
“It is critical that we move rapidly to set the global standard for responsible and ethical AI use, and to stay ahead of China and Russia’s advances toward the lowest common denominator. The U.S., our allied partners, and all democratic-minded nations must work together to lead the development of global standards around AI and talent development.”
Northrop Grumman systems to close the connectivity/networking gap
Doing so will help close one of the most significant capability gaps facing the armed forces right now: basic connectivity and networking. The platforms and sensors needed to support JADC2—satellites, unmanned air and ground systems, and guided missile destroyers, to name a few—aren’t necessarily able to connect and move information effectively because of legacy communications and networking systems.
Left unaddressed, that reality will dampen the DoD’s ambitions for AI and machine learning in tactical operations.
“It’s both a gap and a challenge,” observed Boyle. “Let’s assume, though, that everyone’s connected. Now there’s an information problem. Not everybody shares their information. It’s not described in a standard way. Having the ability to understand and reason on information presumes that you’re able to understand it. Those capabilities aren’t necessarily mature yet either.”
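Boyle’s information problem can be made concrete with a small sketch: two systems describe the same kind of track in different formats, and a shared schema lets downstream consumers reason over both. The field names and message formats below are hypothetical, not drawn from any real military data standard.

```python
# Minimal sketch of the "information problem": normalize two differently
# formatted sensor reports into one shared schema. Fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Track:
    source: str
    lat: float
    lon: float
    timestamp: float  # seconds since epoch

def from_system_a(msg: dict) -> Track:
    # System A nests position and reports time in milliseconds.
    return Track(source="A",
                 lat=msg["pos"]["latDeg"],
                 lon=msg["pos"]["lonDeg"],
                 timestamp=msg["timeMs"] / 1000.0)

def from_system_b(msg: dict) -> Track:
    # System B uses flat fields and reports time in seconds.
    return Track(source="B", lat=msg["latitude"],
                 lon=msg["longitude"], timestamp=msg["time_s"])

tracks = [
    from_system_a({"pos": {"latDeg": 36.1, "lonDeg": -115.2},
                   "timeMs": 1700000000000}),
    from_system_b({"latitude": 36.2, "longitude": -115.1,
                   "time_s": 1700000042.0}),
]
# Once normalized, any consumer can reason over both reports uniformly.
for t in tracks:
    print(f"{t.source}: ({t.lat}, {t.lon}) @ {t.timestamp}")
```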
There are also challenges with respect to multi-level security and the ability to share and distribute information at different classification levels. That adds a level of complexity that’s not typically present in the commercial sector.
The severity of this issue and the need to solve it in the name of all-domain operations are driving Northrop Grumman to prioritize the successful application of AI to communications and networking.
The company has numerous capabilities deployed on important platforms such as Global Hawk and is working with customers to leverage gateway systems already in service for data relay, while developing new capabilities to address gaps in communications and networking.
Northrop Grumman’s portfolio already contains enabling technologies needed to connect joint forces, including advanced networking, AI/ML, space, command and control systems, autonomous systems powered by collaborative autonomy, and advanced resiliency features needed to protect against emerging threats. And it is developing AI that acts as the connective tissue for military platforms, sensors, and systems to communicate with one another—enabling them to pass information and data using secure, open systems, similar to how we use the Internet and 5G in our day-to-day lives.
“The DoD has stated that it must have an AI-enabled force by 2025 because speed will be the differentiator in future battles,” said Boyle. “That means speed to understand the battle space; speed to determine the best course of action to take in a very complex and dynamic battle space; and speed to be able to take appropriate actions. Together, they will let the DoD more quickly execute the OODA loop.
“AI and advanced, specialized processing at the tactical edge will provide a strategic information advantage. AI and edge computing are the core enabling technologies for JADC2.”