Officials say they want computers to be capable of explaining their decisions to military commanders
The Defense Department’s cutting-edge research arm has
promised to make the military’s largest investment to date in artificial
intelligence (AI) systems for U.S. weaponry, committing to spend up to
$2 billion over the next five years in what it depicted as a new effort
to make such systems more trusted and accepted by military commanders.
The director of the Defense Advanced Research Projects
Agency (DARPA) announced the spending spree on the final day of a
conference in Washington celebrating its sixty-year history, including
its storied role in birthing the internet.
The agency sees its primary role as pushing forward new
technological solutions to military problems, and the Trump
administration’s technical chieftains have strongly backed
injecting artificial intelligence into more of America’s weaponry as a
means of competing better with Russian and Chinese military forces.
The DARPA investment is small by Pentagon spending
standards, where the cost of buying and maintaining new F-35 warplanes
is expected to exceed a trillion dollars. But it is larger than the sums
AI programs have historically received, and roughly what the United
States spent on the Manhattan Project that produced nuclear weapons in
the 1940s, although that figure would be worth about $28 billion today
due to inflation.
In July defense contractor Booz Allen Hamilton received
an $885 million contract to work on undescribed artificial intelligence
programs over the next five years. And Project Maven, the single largest
military AI project, which is meant to improve computers’ ability to
pick out objects in pictures for military use, is due to get $93 million
in 2019.
Turning more military analytical work – and potentially
some key decision-making – over to computers and algorithms installed in
weapons capable of acting violently against humans is controversial.
Google had been leading Project Maven for the
department, but after an organized protest by Google employees who
didn’t want to work on software that could help pick out targets for the
military to kill, the company said in June that it would discontinue its
work after its current contract expires.
While Maven and other AI initiatives have helped Pentagon
weapons systems become better at recognizing targets and doing things
like flying drones more effectively, fielding computer-driven systems
that take lethal action on their own hasn’t been approved to date.
A Pentagon strategy document released in August says
advances in technology will soon make such weapons possible. “DoD does
not currently have an autonomous weapon system that can search for,
identify, track, select, and engage targets independent of a human
operator’s input,” said the report, which was signed by top Pentagon
acquisition and research officials Kevin Fahey and Mary Miller.
But “technologies underpinning unmanned
systems would make it possible to develop and deploy autonomous systems
that could independently select and attack targets with lethal force,”
the report predicted.
The report noted that while AI systems are already
technically capable of choosing targets and firing weapons, commanders
have been hesitant about surrendering control to weapons platforms
partly because of a lack of confidence in machine reasoning, especially
on the battlefield where variables could emerge that a machine and its
designers haven’t previously encountered.
Right now, for example, if a soldier asks an AI system
like a target identification platform to explain its selection, it can
only provide the confidence estimate for its decision, DARPA’s director
Steven Walker told reporters after a speech announcing the new
investment – an estimate often given in percentage terms, as in the
fractional likelihood that an object the system has singled out is
actually what the operator was looking for.
“What we’re trying to do with explainable AI is have the
machine tell the human ‘here’s the answer, and here’s why I think this
is the right answer’ and explain to the human being how it got to that
answer,” Walker said.
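To make the distinction concrete, here is a minimal, hypothetical sketch in Python of the gap Walker describes; the classifier, feature names, and numbers are invented for illustration and do not represent any DARPA or Pentagon system. An “opaque” detector reports only a label and a confidence score, while an “explainable” one also surfaces the evidence that drove the score.

```python
# Hypothetical illustration only: names, features, and numbers are invented,
# not drawn from any actual DARPA or Pentagon system.
from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str          # what the system believes it found
    confidence: float   # fractional likelihood, e.g. 0.87 reported as "87%"

@dataclass
class ExplainedDetection(Detection):
    # feature name -> contribution to the score, ranked for a human-readable rationale
    evidence: dict = field(default_factory=dict)

def opaque_classifier(features: dict) -> Detection:
    """Today's behavior, as Walker describes it: a label and a confidence estimate, nothing more."""
    score = sum(features.values()) / len(features)
    return Detection(label="vehicle", confidence=round(score, 2))

def explainable_classifier(features: dict) -> ExplainedDetection:
    """The stated goal: also report which evidence pushed the system toward its answer."""
    score = sum(features.values()) / len(features)
    ranked = dict(sorted(features.items(), key=lambda kv: kv[1], reverse=True))
    return ExplainedDetection(label="vehicle", confidence=round(score, 2), evidence=ranked)

if __name__ == "__main__":
    observed = {"wheel-like shapes": 0.95, "road context": 0.85, "metallic glare": 0.80}
    print(opaque_classifier(observed))       # only: vehicle, 87% confidence
    print(explainable_classifier(observed))  # vehicle, 87%, plus the ranked evidence behind it
```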
DARPA officials have been opaque about exactly how the
newly financed research will result in computers being able to explain
key decisions to humans on the battlefield, amid all the clamor and
urgency of a conflict, but they said that being able to do so
is critical to AI’s future in the military.
Vaulting over that hurdle, by explaining AI reasoning to
operators in real time, could be a major challenge. Human
decision-making and rationality depend on a lot more than just following
rules, which machines are good at. It takes years for humans to build a
moral compass and commonsense thinking abilities, characteristics that
technologists are still struggling to design into digital machines.
“We probably need some gigantic Manhattan Project to
create an AI system that has the competence of a three year old,” Ron
Brachman, who spent three years managing DARPA’s AI programs ending in
2005, said earlier during the DARPA conference. “We’ve had expert
systems in the past, we’ve had very robust robotic systems to a degree,
we know how to recognize images in giant databases of photographs, but
the aggregate, including what people have called commonsense from time
to time, it’s still quite elusive in the field.”
Michael Horowitz, who worked on artificial intelligence
issues for the Pentagon as a fellow in the Office of the Secretary of
Defense in 2013 and is now a professor at the University of
Pennsylvania, explained in an interview that “there’s a lot of concern
about AI safety – [about] algorithms that are unable to adapt to complex
reality and thus malfunction in unpredictable ways. It’s one thing if
what you’re talking about is a Google search, but it’s another thing if
what you’re talking about is a weapons system.”
Horowitz added that if AI systems could prove they were
using common sense, “it would make it more likely that senior leaders
and end users would want to use them.”
The Defense Science Board endorsed an expansion of AI’s
use by the military in 2016, noting that machines can act
more swiftly than humans in military conflicts. But with those quick
decisions, it added, come doubts from those who have to rely on the
machines on the battlefield.
“While commanders understand they could benefit from
better, organized, more current, and more accurate information enabled
by application of autonomy to warfighting, they also voice significant
concerns,” the report said.
DARPA isn’t the only Pentagon unit sponsoring AI
research. The Trump administration is now in the process of creating a
new Joint Artificial Intelligence Center at the Pentagon to help
coordinate all the AI-related programs across the Defense Department.
But DARPA’s planned investment stands out for its scope.
DARPA currently has about 25 programs focused on AI
research, according to the agency, but plans to funnel some of the new
money through its new Artificial Intelligence Exploration Program. That
program, announced in July, will award grants of up to $1 million each for
research into how AI systems can be taught to understand context,
allowing them to operate more effectively in complex environments.
Walker said that enabling AI systems to make decisions
even when distractions are all around, and then to explain those
decisions to their operators, will be “critically important…in a
warfighting scenario.”