Updated 10/31/2023 at 1:01 pm ET: This story has been updated throughout to reflect specifics in the full Executive Order, which was released after publication.
WASHINGTON — A new artificial intelligence executive order signed by President Joe Biden Monday is being hailed by the administration as one of the “most significant actions ever taken by any government to advance the field of AI safety” in order to “ensure that America leads the way” in managing risks posed by the technology.
Through the executive order, DoD is directed to establish a pilot program to identify how AI can find vulnerabilities in critical software and networks and to develop plans to attract more AI talent, among other tasks. But new regulations on how the commercial world develops AI could have as much impact, if not more, on how the Defense Department and industry collaborate moving forward as the official taskings outlined in the EO.
“I think the biggest implication for DoD is how this will impact acquisition because…anybody who’s developing AI models and wanting to do business with the DoD is going to have to adhere to these new standards,” Klon Kitchen, the head of the global technology policy practice at Beacon Global Strategies, told Breaking Defense Monday.
“The executive order has some pretty extensive requirements for anyone who’s developing or deploying dual-use models,” he added. “So all the major contractors and integrators and that kind of thing are going to have pretty significant reporting requirements associated with their frontier models.”
A fact sheet from the White House lays out key tenets from the executive order. Notably, it directs “that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government” and that federal agencies will also be issued guidance for their use of AI.
Kitchen said that although there seems to be an “intended alignment” between Monday’s EO and DoD’s own AI policies, like the Responsible AI Strategy and Implementation Pathway, there will be “some inevitable disjunctions that will have to get worked out.”
“My read is [that] the administration understands that and is trying … not to put undue burden on the industry, while at the same time trying to meaningfully address the very real concerns,” he said. “Industry and government are definitely going to disagree about where those lines should be drawn, but I do interpret the executive order as a general good faith effort to begin that conversation.”
According to the fact sheet, the National Institute of Standards and Technology will develop standards for making sure AI is secure, and federal agencies like the Departments of Homeland Security and Energy will address the impact of AI threats to critical infrastructure. In a statement, Eric Fanning, the head of the Aerospace Industries Association trade group, said his organization is “closely assessing” the document.
Orders For The Defense Department
The executive order says the National Security Council and White House chief of staff will develop a national security memorandum that “shall address the governance of AI used as a component of a national security system or for military and intelligence purposes.” The memorandum should also assess how adversaries could threaten DoD or the IC with new technology.
Specific to DoD, the secretary of defense and secretary of homeland security are directed to “develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks.”
The pilot program should start within 180 days of the executive order, with a report on its results due within 270 days.
In another section of the executive order, the secretary of defense is instructed to assess ways AI can increase biosecurity risks and make recommendations on how to mitigate them. The secretary of defense is also directed to submit a report within 180 days that provides recommendations on how to address gaps in AI talent.
In a statement, Sen. Mark Warner, D-Va., chairman of the Senate Select Committee on Intelligence and co-chair of the Senate Cybersecurity Caucus, said “many” of the sections in the executive order “just scratch the surface.”
“Other areas overlap pending bipartisan legislation, such as the provision related to national security use of AI, which duplicates some of the work in the past two Intel Authorization Acts related to AI governance,” Warner said. “While this is a good step forward, we need additional legislative measures, and I will continue to work diligently to ensure that we prioritize security, combat bias and harmful misuse, and responsibly roll out technologies.”
In a statement, Paul Scharre, executive vice president and director of studies at the Center for a New American Security, said the requirement for companies to notify the government when training AI models and NIST’s red-teaming standards requirements are two of many “significant” steps being taken to advance AI safety.
“Together, these steps will ensure that the most powerful AI systems are rigorously tested to ensure they are safe before public deployment,” he said. “As AI labs continue to train ever-more-powerful AI systems, these are vital steps to ensure that AI development proceeds safely.”
According to Kitchen, “what’s really going to matter is how these various departments and agencies actually start building the rules and interpreting the guidance that they received in the executive order.”
“So I think the EO will provoke a lot of questions from industry, but it will be the individual agencies and departments who actually start to answer those questions,” he said.