Skip to main content Skip to secondary navigation
Page Content

Amy Zegart: Integrating AI in the Realm of National Security

Deployed properly, the new HAI associate director says, AI holds tremendous potential to improve and economize the work of intelligence agencies.

Image
Amy Zegart

Amy Zegart, Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution and a senior fellow at the Freeman Spogli Institute of International Studies, has joined Stanford HAI as an associate director.

The amount of data on Earth is doubling roughly every two years. Much of this comes from sources that are publicly available — social media feeds, commercial satellites, and media outlets around the world.

For Amy Zegart, a leading expert on U.S. intelligence agencies who has served on the National Security Council staff and holds several appointments across Stanford, the implications for the world of spycraft are clear: A profession that once hunted diligently for secrets is now picking through huge haystacks for one or two needles of insight, and that’s precisely the kind of project at which AI excels.

Introducing New Faculty, Staff Leaders at Stanford HAI

 

But the adoption and deployment of these technologies must be done thoughtfully, says Zegart, who recently joined the Stanford Institute for Human-Centered AI (HAI) as an associate director.

“Any new technology comes with good news and bad news, benefits and vulnerabilities,” she explained in a recent interview that touched on her hopes for and concerns over the use of AI in U.S. intelligence gathering. “AI is no different.”

I know you’ve written a book on this, but let’s start with the basics: Where do you see AI being most useful in the intelligence world?

It has become very clear that AI is transforming not only the future but also our ability to understand the future. This is a profound opportunity for U.S. intelligence agencies and people outside intelligence agencies to better anticipate threats and to prevent bad things from happening. Specifically, I think AI can be incredibly useful inside U.S. intelligence agencies for augmenting the abilities of humans — accelerating analysis, aiding pattern recognition, better understanding what we know, and divining insights from large amounts of data that humans can’t connect as readily.

You’ve said that getting intelligence communities to adopt these technologies is “much harder than it might seem.” Why?

There are a couple of reasons. Number one, there are 18 agencies in the U.S. intelligence community and they all operate with bespoke technology. Now imagine stitching together and structuring and labeling all this data in a way that it can be seamlessly integrated. That alone is a hard enough technical challenge.

But then there are cultural challenges of getting agencies to adopt technology that is from outside the U.S. government, for the most part, and that removes human analysis altogether from tasks that humans once did. That is a very unsettling proposition for many people inside the intelligence community. So you have this big technical challenge alongside the big cultural challenge.

Are there risks of removing people too much, of relying too heavily on these tools?

I often joke that there are two kinds of AI challenges in intelligence. The first is not enough AI and the second is too much AI. In the case of not enough: Intelligence analysts are overwhelmed with data and AI holds enormous potential for dealing with this overload. But I think we haven’t given enough attention to the “too much AI” problem.

One of the challenges of relying on AI too much is that it can distort what intelligence analysts conclude. I often say that it can lead analysts to count too much on things that can be counted. AI relies on data. The more quantifiable the data, the better. But many of the key indicators of intelligence are not quantifiable: What’s morale like inside an army? How much corruption is there inside a regime? What’s the mood of a leader today? Those are critical factors in anticipating what could happen, and they’re not easily identified by AI.

Another challenge with too much AI is that these tools are good if the future looks like the past. But if the future doesn’t look like the past, then AI is not going to help you very much. Those are often the challenges that intelligence analysts face: discontinuous change. How do we understand and identify indicators of discontinuous change? AI is not so great at that. We need to be very clear about the promise and the pitfalls of AI.

You’ve noted in past interviews how, historically, spy technologies typically originated in the government. Now they come largely from industry. What are the implications of that?

This has huge implications. First, a different skill set is required to adopt technology from outside than is needed to invent it inside. You’re really asking government bureaucracies to do things in fundamentally different ways. Government bureaucracies do some things really well, by design: They do the same tasks in standard ways over and over again. They are designed to be fair and to replicate results. What are they not designed to do? They’re not designed to adapt very quickly.

But for bureaucracies to adopt this technology means they must change how they operate. It’s not just ”add a little AI and stir.” Agencies have to change how they think about buying, training, and using this technology. They ultimately have to change many aspects of what they do. That makes it hard.

There is a tight link between intelligence agencies and the policymakers who use intelligence. Does AI change how we think about the policy side?

If you think of policymakers as customers, AI could potentially help intelligence agencies better understand what their customers want and how to deliver it. What formats are best suited for informing policymakers about a threat? What are they reading? What do they want more of? AI can help identify and automate answers to these questions.

Perhaps a challenge, though, is that governments are not the only organizations with AI capabilities, which means that intelligence organizations have much more competition than they once did. Anybody who has a laptop can collect, analyze, and diffuse intelligence today.

Is that happening?

Absolutely. The availability of information on the internet and the commercial satellite revolution are enabling low-cost remote sensing, which was previously the province of billion-dollar spy satellites. Pretty much anybody can track troop movements on the ground with unclassified commercial satellite images and use algorithms to help process that imagery faster. Then, they can post what they’re finding to Twitter [now X].

If you talk to government officials, they’ll tell you that there are a handful of responsible open-source intelligence accounts on Twitter that are doing exactly that and doing it faster than government.

What ethical questions are most salient for you when thinking about AI in national security?

A few things stick in my mind. The first is about who’s in control of developing frontier AI. Right now, we’re in a situation where a tiny handful of large corporations are the only organizations in the world capable of making frontier models. This raises governance concerns, not only about security but more generally. Who is responsible for asking tough questions and mitigating risks of AI? Right now, companies do that voluntarily. It’s akin to having students grade their own homework. We need independent capabilities at places like HAI and other universities to examine and stress-test LLMs [large language models] before they are deployed.

The second obvious set of concerns is making sure that humans and ethics are at the center of how government adopts and uses AI for national security purposes, whether for intelligence or military operations. DOD has had ethical principles for years. Government agencies have typically been very transparent about this concern. That’s not to say they can’t do better, and it’s important for independent academics and others to continue asking tough questions about human-centered AI in national security.

The third set of challenges relates to ethics around crisis decision-making in a world of more AI. If you consider nuclear or financial catastrophe, how do we mitigate those risks? AI is very good at following the rules. Humans are really good at violating rules. And in crises, it’s often the case that the violation of the rule or the order or standard operating procedure is what avoids catastrophe. We want to have situations where there is space for humans to violate rules to prevent catastrophe.

You’ve noted elsewhere that intelligence in general is a neglected topic in academia. How can a place like Stanford HAI start to address this?

I’m so excited about HAI. I think understanding the opportunities and risks of AI in national security is a team sport: We have to have leaders in CS and social scientists working together on these difficult problems. HAI has been at the forefront of thinking about many issues in AI, and I’m excited to be at the forefront of this one as well.

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more

More News Topics