[David Ignatius] Can Pentagon build a bridge to tech community?

Jan. 27, 2019 - 17:05 By David Ignatius
As the age of artificial intelligence transforms warfare, the Pentagon faces a delicate problem: How does it convince employees of high-tech companies based in the US that Americans are still the “good guys,” so that they’ll lend their talents to US national-security projects?

The challenge is huge, given that Google, Microsoft, Amazon, Apple and other tech giants see themselves as global companies with workers drawn from many nations. But tapping this talent base is essential for future US security -- and fortunately the Pentagon, after some false starts, is now launching a creative effort to win the trust of suspicious software engineers who grew up in the shadow of Edward Snowden’s revelations.

The basic idea is to do artificial intelligence “the American way,” as people used to say, by framing a set of clear ethical rules through public debate. This AI Principles Project was launched last October by the Pentagon’s Defense Innovation Board. The first major public meeting took place Tuesday at Harvard, where Pentagon officials met with about a dozen AI experts, some of them strong critics of US military actions.

The Harvard roundtable discussion was lively, and occasionally sharp, says a participant. The group debated privacy concerns, the trade-off between an algorithm’s power and its ability to explain its results, methods for establishing human accountability for AI actions, and other legal and moral issues. Similar expert gatherings are planned at Carnegie Mellon in March and Stanford in April, and then the board will release draft principles for public comment.

The Pentagon outreach was deliberately aimed at engineers who don’t like the idea of working with the US military. As the innovation board’s statement announcing the ethics dialogue put it, “We are taking care to include not only experts who often work with the (Department of Defense), but also AI skeptics, DoD critics and leading AI engineers who have never worked with DoD before.”

“If we’re going to be the arsenal of democracy in the 21st century, we have to show that we have ideals and are ready to stand up for them,” says one senior Pentagon official involved in the program. “It wasn’t going to be enough to say, ‘Hey, we’re the good guys, we’re Americans.’ We needed to be more introspective.”

This bridge-building with the tech community follows a potentially disastrous rupture last year, when Google employees rebelled against a Pentagon AI effort called Project Maven. It was a relatively small, $9 million contract to write algorithms for nonlethal monitoring of surveillance videos to detect threatening movement. Neither the company nor the Pentagon foresaw the controversy that erupted when thousands of Google employees signed a protest petition; the company had to retreat and declined to renew the contract.

Behind the Maven flap lie some fascinating crosscurrents. Senior Google executives had wanted a larger piece of the government’s national security business, which has been dominated by Amazon and Microsoft. But they were secretive with employees about the project. A tight-lipped Pentagon worsened the public-relations disaster. Google employees felt misled, and Pentagon officials were enraged that the tech engineers had scuttled a project aimed at detecting terrorist threats.

Pentagon anger deepened when Google announced last year it would launch a “Dragonfly” search-engine project in China; the company has since retreated, again after employee protest.

Google Chief Executive Sundar Pichai’s withdrawal from Maven was driven, above all, by opposition from some of the top engineers on whom Google’s future rests. A Pentagon official recalls trying to explain to one of these AI gurus that the US Constitution and Bill of Rights would prevent excesses by America. The engineer replied that these safeguards meant little to him because he wasn’t a US citizen.

The Google revolt hasn’t yet spread across Silicon Valley, despite efforts by some engineers to organize such a boycott. Top executives at Microsoft and Amazon have resisted employee protest and reaffirmed their willingness to work on classified contracts for the military and the intelligence community, such as the huge JEDI cloud-computing project.

The engineers aren’t wrong in demanding that the Pentagon set rules for this new domain of warfare. “There has been a lack of clarity from DOD about how it will use AI,” argued Paul Scharre of the Center for a New American Security.

The Pentagon-Silicon Valley dialogue is wary and awkward, and it could collapse -- with dire consequences for America’s future military strength. The senior Pentagon official explains the one reason it might work: “We’ve created ways for people who hate us to express their views. That’s what makes us different from a closed society.”


David Ignatius can be reached via Twitter: @IgnatiusPost. -- Ed.

(Washington Post Writers Group)