When I started this series a couple of years ago, the idea was more constrained than it is today. We wanted to do exit interviews with civil servants, people who were newly free to speak about their experiences and what they'd learned. The project has expanded: we talk to political scientists, economists, DC wonks, elected officials, and people currently in government. But the core value of this project is in that original idea of getting hold of people as they're leaving the government, pinning them to the wall, and making them reveal their secrets.
Today's guest is in that mold. His name is Dean Ball. If you follow AI policy, you already know who he is. Until a couple of weeks ago, Dean was a senior policy advisor for artificial intelligence and emerging technology at the White House Office of Science and Technology Policy (OSTP).
Dean and I go back a little while. Most notably, we’ve served together on one of the most dominant trivia teams DC has seen. But that's not why Dean's important. Dean's had a whirlwind tour over the past few months in the federal government. During that time, he was the organizing author of the Administration's AI Action Plan, a comprehensive roadmap from the White House on federal AI policy.
Today, we caught up to talk about that Action Plan, what it takes to write a strategy document for the federal government, and the challenges of implementing that strategy in the face of political, personal, and bureaucratic opposition.
I've said in the past that Dean thinks more clearly about the near-term future than most people. I still think that's true, though I don't agree with him on everything here, as you’ll see. He's an incredibly sharp thinker and I benefit from talking to him.
We discuss:
How to gain influence in the White House
Navigating the interagency process
How to build influence in an office without formal power
Is the Deep State real?
Why do Democrats and Republicans consistently have different approaches to managing the White House operation?
Should Dean have stayed in government?
Thanks to Harry Fletcher-Wood for his transcript edits, and to Katerina Barton for her audio edits.
Dean, you and I chatted a few weeks after you joined the admin. At that point, my rough impression was you were thinking through, “How can I be effective in this role?” What have you learned about being effective in the federal government?
To set the scene for readers, I was at the Office of Science and Technology Policy, which has no formal power. It doesn't control any budget, or particularly relevant choke points in the policymaking process. Traditionally, the OSTP ends up being like the little brother to the National Security Council (NSC), especially on tech-related issues. The NSC has way more staff usually, significantly more hard power, etc. This is the fundamental predicament everyone who's ever worked in OSTP faces.
I did a couple of things. I spoke to people like you. I also identified former OSTP staffers from every administration going back to Clinton 2, and talked to them about their experiences. A consistent message that is 100% true, maybe even more true in this administration than others, is that people don't care about job titles as much in the White House. It's not to say that it's not a hierarchical place: it's totally hierarchical. One thing that I assumed would be important going in is — there's the special assistant to the president, deputy assistant to the president, assistant to the president. I was like, “That's going to be important. I'm going to have to get promoted to special assistant for AI.” Turns out, nope, not at all. No one cares.
The way that you gain the ability to be effective is not dissimilar from what I experienced in the private sector: be the guy people want to have in the room because they think you’ll say something helpful to the conversation. Especially on a technical issue like AI, a topic on which lots of people are genuinely pre-paradigmatic. They don't have lots of inbuilt abstractions and ways of thinking about it. So, an extreme willingness to be outgoing, meeting with people, especially going to agencies in person.
An effective former guest of yours gave me this advice: go visit in person all the different agencies I'd be working with the most. That ended up being enormously useful.
Want to name the guest?
I don't want to, because I didn't explicitly get his permission, and it was a private conversation. But it was great advice.
Say more about hierarchy. It's interesting that which formal kind of assistant to the president you are is relatively unimportant. So which parts of the White House hierarchy, explicit or implicit, are the important parts?
It goes, in order from lowest to highest: special, deputy, and assistant. Assistant to the president matters because there aren't that many of those. My boss, Michael Kratsios, was Assistant to the President for Science and Technology. The president chairs the National Security Council, but Secretary Rubio, who runs the NSC day to day, is the Assistant to the President for National Security Affairs. Those are important, high-ranking roles.
Once you get below that and into the policy process — the sociological reality of how policy gets developed inside the White House and in the interagency process — what that comes down to is more, “Who is in the room? Who knows about this? Who is looped in on this?” You know how government is. Plausibly, everyone can be invited to the conversation, especially a topic as wide-ranging as AI.
I don't know if this word gets used outside of DC, but in DC, you get invited if you have “equities” in an issue. Elsewhere, people just say you have a stake in something. But in DC it's, “DOJ has equities here.”
I found it so amusing too, because it sounds like a lefty word, and yet it was used throughout the Trump administration. I heard through-and-through MAGA people saying, “Our department has equities in this.”
Very often, power is more organic and more connected to, “Who do we think is going to create value? Who knows what they're talking about here?” Because that's the most important thing. Everything in the White House operates on such incredibly short timescales. The White House Chief of Staff's event horizon is like 10 days. Anything beyond that is impossibly far away. When those people need information and counsel, they need it now. So who can deliver that?
Careful readers of Statecraft will know that we've had Office of Science and Technology Policy guests on before. Most prominently, before he was confirmed, we had your boss, Michael Kratsios, last fall. We've also had Tom Kalil, who was the #2 in the Obama OSTP. Tom had the initial idea for Statecraft, so I have to shout him out there.
The Obama OSTP was organized around certain principles, and Tom has talked about writing those principles on a whiteboard in his office. Principles like “Own the paper” — whoever owns the documentation for the project is the person in charge. “Always be trying to solve other people's problems for them.” Be the person who first comes to mind when someone else in the White House has a problem.
In the OSTP you served in, what were the organizing principles?
It's funny, because I saw this transition over time. Those organizing principles would not be bad ones. Michael Kratsios is a very intelligent, savvy person. One of the things that happened early in the administration — day two or three — is that OSTP got tasked with the lead role in authoring the AI Action Plan.
We, in governmental terms, held the pen. For the vast majority of that process, I personally held the pen. The reason that's powerful is that you have an interagency process, and you get a bunch of feedback. Every agency says blah, blah, blah, blah, blah. “No, not this, change this wording,” etc. Sometimes, those bits of feedback are purely good, useful things that make the document better. Other times, they conflict with what other agencies want, and the person who adjudicates is the person who holds the pen. Fundamentally, that's a mechanical reality.
The idea of making other people's lives easier is a good strategy in life. In most administrations, OSTP is the little brother to the National Security Council. One common strategy that I heard from people was, “Be friends with your counterparts at the National Security Council. Don't view them as rivals. Take stuff off their plate.” Because in most administrations — as a practical matter, in the Biden administration — the National Security Council runs the country. They're leading the vast majority of the policy processes that matter. All that was very true and it served me well in the first month or so.
In late May, Secretary Rubio completely reorganized the National Security Council. This was a serious opportunity for OSTP. We had a very good relationship with NSC: there was never any fighting or turf wars — the stuff I heard about from previous administrations did not happen. But because a lot of that reorganization from Secretary Rubio did focus on tech-related things, a lot of policy processes that the National Security Council had been leading were transferred by default to OSTP. Anything vaguely AI-related was transferred by default to me.
When you say the policy processes were transferred to you, what did you suddenly have to do?
For example, there's an executive order that's in draft, that, at some point, someone thought it was a good idea to do. Now, we are taking the draft, sharing it with all the agencies that would have to execute it, and figuring out what it should be. It made OSTP more powerful and higher status within the four walls of the White House, and maybe a bit weaker in the interagency process.
How so?
If you have everyone in the White House aligned — all the assistants to the president are aligned around a specific thing — that's quite a powerful coalition that it's hard for even a cabinet secretary to go up against. It definitely forces a compromise. It's much harder when the NSC isn't participating in that anymore. Then it's, “You're the bigger fish in the small pond of the White House, but you are a smaller fish in the grand scheme of the government.”
I've been reading a history of the Nixon presidency by a Nixon appointee, called The Plot that Failed. Both Nixon and Trump thought consciously about, “How do I, as president, exert more control over this massive organization?”
But they went about it differently. Nixon's instinct was to staff up the White House itself. Many of these White House policy councils are Nixonian creations. Whereas the Trump admin, especially in this second term, has tried to put more responsibility on Cabinet secretaries, as opposed to on the staff of the White House. Is that a fair characterization, that there's more authority given to and expected from people like Chris Wright, Rubio, and other principals?
It's not universally true, but it's more true than not, probably, that that's the way it's going. The Trump 47 White House staff is significantly smaller than, for example, Biden’s. Biden’s OSTP — I don't have exact numbers, but what I've heard from former OSTP staff was 150 people. OSTP will grow from where it was when I left. But when I left, we probably had 25 or 30.
Yet I would argue that the OSTP led by Michael Kratsios did considerably more than the Biden OSTP. So the relationship between staff and effectiveness is not a one-to-one correlation. Adding more staff can create problems, especially in a charged place like the White House. Turf wars are more likely when there are more people. For example, if OSTP had a 10-person biosecurity team and a 10-person biotech team, what would those people do? They would probably spend a huge amount of their time fighting with one another over which issue was biotech and which was biosecurity. If we had a robotics team and an AI team, who knows?
But, when it comes to agency rulemaking, this administration is more hands-off as a general matter than Biden was. The story that is told publicly and that I have also heard is that things like the export controls on AI chips in the Biden administration were very heavily influenced by staffers in the Eisenhower building, which is to say the White House. That's not to say that there's no communication anymore. There's certainly collaboration between the Commerce Department and the White House in this administration. But, generally speaking, it's, “That's Commerce's rule and we're going to let Commerce do it.”
The president would rather exercise control over the government through Cabinet officials whom he knows well. The staff of the National Security Council alone is above the Dunbar number. That's above the number of relationships that a human can reasonably manage: even the number of directors and senior directors is quite large. The president is managing a board of directors, almost.
When we had Russ Vought on last year — the head of the Office of Management and Budget, in this admin and in Trump I — he said he's against the model of governance that the Biden and many Democratic administrations have tended towards, in which you have lots of free-floating “czars” in the White House, who are generally responsible for an issue. They overlap with the people in a formal chain of command from a Cabinet secretary on that issue.
You've seen this dynamic at the State Department as well: many of the Rubio-led reorganizations are trying to strip out some of these crosscutting roles on human rights, the environment, labor, etc, and trying to run things through an individual direct chain of command. I'm oversimplifying, but it's interesting to hear you talk about that, because it seems like a managerial philosophy that Russ Vought shares.
This is one of the stories that is most profoundly underappreciated, and it's the one that makes me the angriest: that we don't have a whole media apparatus filled with Santis doing the hard reporting about this. There is a new philosophy of statecraft that is not just being piloted, but done at scale in the federal government. I would say that the results… we don't know, we're very early into the administration still…
I'm always happy to hear, “It's too early to tell,” on Statecraft.
My experience of being inside it was extremely pleasant. It was not a soul-crushing bureaucracy at all. It was a dynamic, collaborative, congenial place to work. I've worked at think tanks and universities with way more toxic internal climates, political turf wars, and internal bureaucracy than the White House.
Will you say a little bit more about what you mean by that “new model of statecraft”? You said the word “statecraft,” so I have to push.
I didn't mean to say the name of the newsletter, but…
It’s like when they say the title of the movie in the movie.
Part of it does relate to this managerial philosophy: “We're going to exercise more direct control and make sure that the lines of information are very clear.” The way that the Presidential Personnel Office (PPO) is going about its task — I don't want to talk about the questions they ask you in any detail, but everyone has to do this — this is not the Trump administration, this is every administration, it’s famous. They ask you questions, trying to make sure that you're loyal to the president and that you're going to be a faithful agent of the agenda that the president wants to execute.
But the questions were so clearly influenced by — I don't know if the people asking them would think of it as the concept of cybernetics — but they were so influenced by cybernetics. It was, “Are you good enough of an agent that you can anticipate where the president's going to come down on an issue and make a convincing case for why you believe that?” Even if we know d*** well you haven't talked to the president about common law liability in AI, where do you think the president would come down on that? That's interesting.
In general, there's this willingness in the administration to reconsider… Let me step back for a second. It's such a fascinating confluence of events. There's the fact that the president is serving a non-consecutive second term, which is the first time that's happened in a long time.
It's the first time that's happened in the era when something like the deep state exists, which is an interesting twist. The deep state is totally real. It's not conspiratorial to say that; it's a real thing. Trust me, I got the memos with the policy recommendation where it's, "You have two hours to clear this. Please let us know what you think. There's a mountain of context on this, we're not going to give it to you." That's how the deep state works. That's how it is, and they're doing whatever they want. That thwarts a lot of presidents, and particularly thwarted Trump 45.
Now, we have this fascinating dynamic: four years in the wilderness to contemplate what went wrong, you come back with a very detailed plan, also with the knowledge that you don't have time. You have — not a complete willingness to throw out everything about how government has worked — but, much more than is appreciated, a willingness to not do things the way they've been done, if there are more efficient ways to do it.
I've heard about how an idea goes from a staffer's brain to the president's desk in the form of an executive order or another sort of policy document. I've heard about how that worked in the Biden administration, in Trump 45, and in Obama. In this admin, I can tell you that the process of getting things to the president's desk — it shouldn't be easy, it wasn't easy — but it made sense. It wasn't filled with things that struck me as pointless procedural blockers. It was things like, "Now these lawyers have to look at it" — steps that made sense. There wasn't turf war and pointlessness. Maybe it's also a new administration — that's also possible.
There are learning cycles in politics, just like how people talk about learning the lessons of the last war. When people say that about militaries, they often mean it derogatorily. But clearly there was learning done among Trump folks from the first term about how to operate.
I also see folks from the Biden admin paying very close attention to the way this admin has managed to move much faster. I had conversations this morning with former Biden officials, who very bluntly wished they’d developed a stronger sense of the “shot clock.” They say things like, “We should have been much more aware that every day that passes is one day fewer to set the president's agenda.” Everybody's watching how the other guy does it and trying to adjust: “We should have done it that way. We'll do that next time.”
That's right. To some extent it's the lessons learned. Another part of it is just culture. There's people in this admin who have a very clear sense of, “We are here to do a job, we have to get this done, and we're trying to effect genuinely transformative change.” Also, that realization that the election alone is not nearly enough. That doesn't itself get you transformative change.
All those are lessons that Republicans feel viscerally in a way that's very hard for Democrats, because they've been the recipients of the cultural grace for the last 20 or 30 years. So they get the cultural and political benefit of the doubt. When Republicans push against the Constitution, it is called “tyranny.” When Democrats do it, it's called, “ambition and creativity.” It's called “statecraft” when Democrats do it. When Republicans do it, it's “evil.” That's something that all Republicans are quite aware of.
It affects the culture. It was cool. It was fun. Being in the Eisenhower building in the spring and early summer of 2025 was like being at CBGB in lower Manhattan in 1974. It was, “Man, this guy's doing crazy stuff over here.” It was a scene. There's smart people with ambitious ideas. There's an entire group of people rewriting the Federal Acquisition Regulation and reducing its length by a gajillion pages, and no one knows about it. There's an executive order, it's all public that this is happening, but nobody talks about it, in the way that there would be doting pieces about this in The Atlantic if it were a Democrat doing it.
Let me propose one concern one might have about the admin’s model of statecraft. This admin likes a managerial system that runs along direct lines of command and control, with principals who are empowered to make calls and oversee people down the chain. One issue is that lots of things get bottlenecked at the level of the principals.
In May, there was reporting that Howard Lutnick, Secretary of Commerce, was personally approving all contracts valued over $100,000. The report focused on the National Oceanic and Atmospheric Administration (NOAA): it had over 200 contracts waiting for Secretary Lutnick's approval, with another 5,700 expiring this year. Officials at NOAA were quoted saying, "Everything has ground to a halt." Some of these contracts were bulk procurements of office supplies, things like that.
The Nixon administration ran into this same issue. By enforcing a hierarchy for every decision, you create these roadblocks at the top. Of course the Secretary doesn't have time to sign off on everything, but the result is institutional lethargy.
What do you make of that concern?
That's a failure mode of this style of management that can be just as true in a corporation. The interesting way that this gets balanced is: the president is not a micromanager. He wants to trust that if he wants something to happen, it will happen. Everyone in the administration completely gets that. But once he feels comfortable that the people are in place and that he's figured out the goals that he wants to articulate, he's not necessarily going to micromanage things.
I take your point: President Trump is not a micromanager. But you can go down several levels and find massive organizations that are bottlenecked on individual political appointees.
Different cabinet secretaries have very different approaches to this. There are cabinet secretaries who are total micromanagers, and there are others who are quite willing to delegate important things to staff. Pairing this style of management with a micromanager is a very likely way to create this failure case. It's not just that this style of management bottlenecks things at the very top. It also creates an incentive for the direct deputies to be that way too. It's a series of bottlenecks: if the secretary of a department says, "I want everything like this to come across my desk," the secretary's deputy, who would've typically been responsible for that, probably still has an incentive to be a chokepoint — to make sure that there are not random staffers of theirs going directly to the secretary. Micromanagement makes this a problem, at least until we're able to make copies of secretaries with AI agents. Maybe one day we'll be able to micromanage as much as we want.
I want to ask you a bunch of nerdy implementation questions about the AI Action Plan, which you largely held the pen for. Give us a one-minute explanation of what the Action Plan was for people who are not necessarily in the weeds on AI policy.
The AI Action Plan was created by an executive order the president signed on the first day or two of his administration. That order rescinded the Biden executive order on AI, which was unpopular in Silicon Valley, and replaced it with a provision that said, "Go make an AI Action Plan." So the Action Plan is a replacement for the Biden executive order, though it was not itself an executive order; it was more of a report.
The Action Plan tries to think of all the different themes where America needs to develop a strategy and have a position on AI. America exercises a lot of leverage over the scientific enterprise. So, what do we think about AI and science, AI and defense, infrastructure for AI, and environmental permitting?
Most government strategy documents say, “AI is going to be important and there's also going to be risks. We have got to get the good with little of the bad. The bad is going to be defined however we define it as the government: misinformation and bias, or any number of things, like catastrophic risk.” That's where most of those documents stop.
But the Action Plan is both a strategy document that tries to be quite specific about our government's strategic objectives, and also tries to identify between two and six specific policy actions that federal agencies can take now — with existing statutory authorities and budget — to meaningfully advance the ball on that objective. We do this across several dozen objectives with more than 90 total policy recommendations. What makes a difference is this emphasis on concrete, actionable recommendations.
You guys did not run the formal interagency process in writing the AI Action Plan. Say a little bit more about “the interagency,” another DC-ism people use.
I assume your readers have context for this, but when some whippersnapper in the White House gets an idea, that idea is usually executed by someone else. Maybe there's many other people who would like to be executing it and aren't, because the whippersnapper didn't think of them. “You didn't know that the State Department also does this.” So you have to go through a process of communicating what you're doing to those people, and figuring out what it should say from their perspective. Fundamentally, it is a good and necessary process. It is much maligned in DC, because it's where infighting happens.
Many guests of Statecraft have maligned it.
We ran interagency processes for all the executive orders that were associated with the Action Plan. Those processes were more efficient: our executive orders didn't take months and months and months to go from a draft to being on the president's desk. They didn't end up being 50 pages long, like all the Biden executive orders. They're short, sweet, and to the point — they were written by a writer. But the Action Plan itself, I had a lot of latitude to design the process — “clearance” is the word, that's what you seek from the agency — consent, approval, clearance.
That's different from security clearance.
Yes. You want to get approval, clearance from all these agencies. The Action Plan is weird because it's a very strong recommendation, but there are no “shalls.” The Action Plan is a “should” type document. So I had latitude to design the process by which we would take this thing from a Word document to the president.
What I ultimately decided was, this thing is going to be hugely wide-ranging, it's going to touch on a million different issues. We can't have the entire federal government commenting on every aspect of the strategy. Some random career staffer, who's grumpy and has been there for 30 years, might throw in a bunch of comments. As the White House staffer, you don’t have context for who they are. Also, it increases the odds of it leaking. It increases the odds of a drawn-out process with more interference.
I wrote the plan, and you can see exactly how this worked. The Action Plan is organized into three pillars. Under each pillar, there are roughly a dozen strategic objectives in bold text.
Then there's a paragraph and the bullet points with recommended policy actions.
I would take those bullet points and the recommended policy actions and — either going by the agencies named in them, or thinking through who has equities and is not necessarily named — send those bullets alone, with the paragraph of text, to the agency. If you're the Department of Commerce, that means you got pretty much the whole plan, because the department is mentioned often. One other rule I had was that, if there's a series of five bullet points, and one agency is mentioned in one of them, I did give that agency all five bullet points, so they could see the other things that were part of this strategic objective. Then we went back and forth on that.
So the Action Plan was more like 20 or maybe 40 interagency processes that were run in parallel. At the same time, we did a paper interagency process within the White House itself. All the components of the White House got the whole plan a month or so in advance, so they were able to offer complete feedback.
The other thing I did, which made me feel more comfortable at least, is use AI to simulate interagency processes for different policy objectives. You try to get, “What is the first pass going to be of pushback or advice?” Maybe try to incorporate some of that. At least be aware of it, so that when the wizened career person at the National Science Foundation comes back to you and says, blah, blah, blah, blah, blah, “This isn't feasible,” you at least have anticipated their criticism and are able to have a constructive conversation with them.
It all came together and then, at the very last minute, agencies got the whole thing. But they were only ever asked to clear the bullets relevant to them.
As we've discussed, OSTP has very little formal power. It does not get to go to agencies and say, “You shall do this.”
But the AI Action Plan has quite dense recommendations for a strategy document from the White House. These depend on the agency being a fully motivated and willing partner, because typically, agencies don't follow guidance just because somebody in the executive office of the president wanted them to. How should we think about the implementation challenges of the AI Action Plan?
Writing it was hard enough, but the hard part is the implementation. First of all, for the bulk of the time the Action Plan spent in draft, it was mostly done: by late April it was in a form pretty close to what came out publicly — not exactly, but 75% similar. Then, it was two-and-a-half months of working with my counterparts at agencies. Sometimes it was, "What exactly should this say?" But other times it was, "Let's get the leadership of this agency bought in. Let me explain why. Let me go in person and visit." That helped. These ideas were very fully baked. Nobody in the federal government was surprised — no one in the leadership of any federal agency was calling the West Wing, surprised that their agency was being asked to do something because of the Action Plan.
Having a good rollout strategy helped too. A lot of people worked super hard on the Action Plan event. There were five cabinet secretaries, the vice president, and the president announcing it. It would be hard for the administration to signal any more emphatically than with that star power that it cares about this issue.
If there's one lesson I've taken away from Statecraft, it’s that having the principals stand up and say something matters.
That is very true. The last thing is that we were lucky and the Action Plan was well received, which was not part of my model of the world. I thought it would be way more criticized.
What criticism did you expect to get?
I expected it to be criticized in every way. For being too concerned with risks, not nearly enough concerned with risks, concerned about the wrong risks. I figured people would throw whatever they could at us — not because of the content of it, but because it's the president — and that it would be broadly characterized as a reckless plan. If you go read the New York Times or the Washington Post, they'll tell you it's a reckless plan that doesn't care about safety.
That wasn't my impression.
The Washington Post had a good editorial and it was reasonably well-received. You’re right. But, you can go look at reporter shorthand and follow-up stories. I've seen things where, “The administration's Action Plan, which is reckless and a sign of turning away from the world order and focusing inward…”
The New York Times did do a lot of that, I will say.
Maybe what I meant was NYT, not WaPo.
One other thing for any younger people who are interested in the media landscape: everyone in the Trump White House still reads the New York Times. It's not like we were obsessed, but we cared, and we noticed.
Anyway, I expected it to be poorly received and maligned, but it wasn't. That ended up helping too, because it made it prestigious within the administration. So, over the last two or three weeks that I was there, there was a marked change in the number of people who were like, “Can we get OSTP's take on this? We're excited about implementing the Action Plan.”
A lot of agencies will do things that are not necessarily directly called for in the Action Plan, but are related to it. They'll say, “As the Action Plan says…” blah, blah, blah, blah, blah. We've already seen that happen. I suspect we'll see more. That is exactly what I was hoping for: that this would be a rallying cry, not just for agencies, but it's already happened in the private sector too. Probably it's flattery, but the private sector has done things and said, “The Action Plan says this, and we think we're aligned with the Action Plan here.” That's the idea. That's what government should do. That's what the leadership part of this is.
So my questions about implementation fall into two categories. One is, how do you get an agency that isn't formally required to do a thing to try and do it? And your answer is a combination of careful work from folks at OSTP preceding the rollout, the rollout itself, the salutary reception the Plan got, POTUS making a big deal about it, and so on. All these pieces make it easier to get agencies to act.
But there’s another part of implementation: helping an agency actually execute on something it wants to do.
Let's take an example: the Action Plan calls on the intelligence community to monitor the capabilities of Chinese frontier AI labs and pass that information back, without loss, to policymakers, who have to make difficult decisions about AI in light of those frontier-lab developments. The intelligence community (IC) is legendarily hard to coordinate, as we’ve talked about on Statecraft. OSTP is much smaller than the Biden admin’s version. Even if the number of people doesn’t directly correlate to capacity, it has some relationship to capacity. How is that example going to get implemented?
Great question. The parts of my answer that I emphasized, probably solipsistically, are the parts that I feel I'm good at. I'm fundamentally a communicator. But there is this ongoing role, not just within the agencies, but also at OSTP, of asking, “How do we see to it that these things happen?”
To use your example of the IC, that's one of the very hardest ones. We're now getting into part of the reason I made the decision to leave. You need someone who understands AI well to implement that. But my knowing what a KV cache is is not especially useful to my knowing how to get an intelligence agency to do something. I'll tell you the truth: that's not my comparative advantage. What I've counseled is, “Go in the direction of implementers — people who know their way around the agencies they're going to be working with.” The good news is OSTP already has people like that. You need people with relationships — who have established trust. With the intelligence community, I only barely got to work with them. So that wasn't ever going to be me. You have to know when to step aside.
But how you actually do it — the things that are in the Action Plan have got to remain important to enough of the relevant people in the White House. If the White House cares, it'll get done. If there's some blocker and it's not happening, you have to identify what the blocker is. You have to try to resolve it through the front door — you try to resolve it yourself with the agency. If that doesn't work, top cover is key. It's a call to the chief of staff. That's how it gets done.
But this is one place where having more bodies on this stuff does make implementation easier. It's hard for a principal — someone like your boss, Secretary Lutnick, or Rubio — to ride herd on the intelligence community, to get biweekly reports to make sure that we're implementing this new surveillance the way we want. In any institution, all that stuff mechanically requires a whole stack of people who are good at carrying out the principal's wishes. It's not just about political cover. There are all kinds of resistances and cross-pressures within a specific agency.
OSTP is going to have to staff up. There are going to be turf wars that you'll have to adjudicate. It'll require skillfulness and good relationships. I'd love to query the LLM of your guests’ knowledge on this question, because I don't know the answer: what is it like for an administration to age over time? I worked in the administration during a time when it was quite young.
But people end up occupying whatever their choke point's going to be. Their soft power hardens a little bit. Then they are where they are and they're not going anywhere. That could have been me. I could have stayed — I wouldn't have characterized myself as a choke point, but I would've been something. Maybe there'd be some other new person with a great idea that I don't want to listen to because I view him as competition in two years. I think it's an interesting question of, “How do these things harden over time?” I don't quite know the answer.
I don't think there's an abstract, correct managerial answer. Should you have continuity or should you have lots of turnover in an administration?
I have spoken to many people in this admin who have had the model that I think you have: “I'm going to go in, execute the task for which I am best suited, and I will try and know when my comparative advantage has dried up and it's time for me to go out.” You could contrast that to the Biden admin, which famously didn't fire any principals. They touted that as a good thing: “We have continuity. We build relationships over time.”
There's not some magic managerial answer about which model is the right one, but each has its own failure modes.
The one other thing that's useful in government is a forcing function. The provisions of the executive orders (EOs) that get implemented are the ones where you attach deadlines. Even better is an event where the president is speaking. It has to be ready.
Because the president is going to talk about it, and woe to you if the president talks about your thing and it's not ready.
Or if your thing sucks. Those aren't the only types of forcing functions; there are others that you can be creative about. And this wasn't really my strength — I was good at some aspects of it, and maybe I would've gotten good at the rest, I don't know. But one aspect of effectively implementing this is going to be creating those correct forcing functions.
We call, for example, for the creation of a new facility for the Department of Defense to test autonomous vehicles and drones — that’s a hard one. An autonomy proving ground is essentially a physical facility. A high-security data center is another very tough one, with political and technical execution risk. Maybe your forcing function is that the president or Secretary Hegseth is going to visit on a given day. If you get the principal personally invested and create some sort of forcing function, you can create a sense of urgency that might not otherwise be there, which is the only way to make anything happen.
There's interest in data center security — the Action Plan talks about it and it was mentioned in the Biden admin roadmap — so that rogue nations can't hack into our precious data centers. But there's lots of different views on how robust that security should be, on who should be implementing and checking. Is that not the sort of thing where a Dean Ball on the inside would have the ability to drive the train: to make sure that we execute that in a way that makes sense?
Certainly you can exercise a high degree of influence over it from within the White House. I want to contextualize for readers, what we're talking about here is not security regulation about private data centers, but the security of data centers that are specifically contracted by the intelligence community or the Department of Defense. There are obviously already security standards for such things. One of the challenging political risk areas is in the nature of those standards: are those standards good enough? Do we need to throw them out completely or can we iterate on them?
I've heard some people say that private sector data centers might be more secure than classified data centers, because the standards that the government uses don't get updated. And so it becomes a box-checking exercise. I won't get too much into the technocratic details, but I do have opinions about this.
You are totally right that this is the sort of thing you could drive from within the White House. But the Action Plan is a snapshot in time. I don't have all the answers to exactly how to implement it. There are some things that are vague because it's, “We need to talk about this,” and I couldn't quite do that in three months. Because we're talking about emerging technology, it would be weird if all the policy ideas were fully developed. If there's stuff that you need to bake more, what's the best place to do that? Is the White House a good place to develop policy ideas? And my conclusion…
As you’ve said in other conversations, “No.”
My conclusion was no. But you have asked me these questions in such a way that you're now getting the narrative, the exact train of thought that I went through. It's like, “No, I don’t think so.” This environment — which is an amazing environment; it's not a criticism of it at all — is not conducive to developing new ideas. You have to come in with ideas fully baked.
On another podcast, you mentioned that it's very hard to work in public in the federal government. You don't get to publish a Substack, say, “Hey, Twitter, what do you think of this?” get dogpiled, and update your thoughts. The mechanism that we do have in the federal government formally for “working in public” is requests for input (RFI) — these very formalized mechanisms by which you say, “Hey, public, we're working on an AI Action Plan. Here's the sort of thing we're thinking about. What do you think?”
The RFI for the Action Plan got a little over 10,000 public comments. I know you took a look at a lot of them. How many of those comments were actively helpful for developing your thinking?
A lot of them were very helpful. It does end up being a heavy-tailed outcome, where a certain number of them were extremely useful. There are people that wrote very discrete, specific, and actionable things about data center power, for example, that I found enormously useful in getting me up to speed on that issue. There were perspectives on AI that I never in a million years would've considered myself, like groups that represent dairy farmers…
What do dairy farmers think about AI?
Unfortunately, it's one of the areas the Action Plan never quite went into. I wanted to do an agriculture section, but there wasn't enough stuff where I felt high conviction that, “This is the thing to do.” It does take time to do that. The amount of time you have to develop policy in the White House is non-zero, but most of the research that went into the Action Plan was done before I joined the White House. When it comes to agriculture, though, we're remarkably close to being able to automate big chunks of farms. We've already done that in many ways.
And my understanding is that several folks in this admin have been interested in pushing that further, as a way to substitute for illegal labor.
Another little lesson for the intellectual entrepreneur reading this: the Action Plan is ultimately a document of many different intellectual entrepreneurial ventures undertaken at the same time. If you can find things that will get different people rowing in the same direction, you should totally do that. You should look for weird, Kanye West-style mixes of different coalitions that you can bring together into one. I mean, Kanye West stylistically.
That makes me think about the places where there might be the least overlap between different intellectual entrepreneurs in the AI Action Plan. There's a lot in the Plan about data center build-out.
When you talk about data centers on federal land, or special compute zones, you see huge pushback from other parts of the Trump coalition, from folks who are otherwise bedfellows. You see concerns about data centers, about energy usage, about using federal land. How are you thinking about that political tension?
It's a wonderful question. One of my highest-conviction bets on how anti-AI sentiment will manifest itself in the political world is that it won't be new regulation, it will be data center NIMBYism of various kinds. It will be exercised in local decentralized fashion for the most part. At the same time, the Trump administration probably places somewhat less emphasis in the Action Plan on federal lands than did the Biden administration. The Biden administration had a 30-page executive order that was all about federal land.
Some good geothermal stuff, we were very happy with some of that at IFP.
Well, sorry we rescinded it. But as you know, the Trump administration remains quite enthusiastic about geothermal.
Anyway, the approach that we took with the AI infrastructure EO was to focus much more on private and public lands. Some people frame the use of federal lands as the public making some sort of sacrifice to construct the data centers — so we're paying for it, and that's somehow wrong. That probably has to do with animosity toward the people they view as the beneficiaries of the data centers — whom they don't view as themselves and their children, but as distant billionaires in California. That's a broader political issue about which I, in the short term, can do very little.
But I’m a practical person. My fear of data centers on public lands — IFP had a version of this that I'm more okay with. The reason that [IFP’s director of emerging tech policy] Tim Fist was so into this was that he wanted to attach riders to the contracts — to make developers do things in exchange for the land, like cybersecurity guarantees, which is a real thing. But the political economy of that is that it's just text on a lease, which could end up being whatever you want. And the second your landlord is the federal government — real estate developers say the federal government is a terrible tenant. I can only imagine how it is as a landlord.
The federal land authorities are there and they're activated. Agencies know they can use them because of that EO, and if they want to — if there's some tactical reason where it makes sense — they should. But I want to turn down the volume on the importance of federal lands for AI, because I don't think land is the rate-limiter, and I don't like the political economy of it in the long term. If it also happens that some people in the Republican coalition don't want it for sentimental reasons, then fine. I don't think anyone wants to build a data center in the Grand Canyon.
The Yellowstone xAI training run?
I'm very happy that we turned down the volume on that. The volume now seems to be at an appropriate place, which is 10%.
[NB: My colleagues at IFP have argued that the value of using federal land is not in the land availability itself (there is private land available all over the country). The value comes from having a single authority responsible for approval, permitting, siting, and other incidental authorities to improve AI policy. The federal government has a range of authorities related to national security that can be used to speed up construction on public land. For more, see our Compute in America series. Note that the federal government owns massive tracts of unused land that often have little ecological value.]
[NB, again: On Monday September 8th, the Department of Energy put out a request for proposals (RFP) for AI data centers on the land of the Idaho National Laboratory. The RFP includes a requirement that proposals include a security plan.]
There's been a fight over export controls of the H20 chip produced by Nvidia. [Exports of the H20 to China were banned in April 2025. The ban was lifted in July 2025. At IFP, we think exporting the H20 to China carries significant risk.]
Meanwhile, Nvidia announces they’re rolling out the B30A, a new chip, which gets around the existing export control rules on the H20 and has half the performance of Nvidia’s highest-spec chips at half the price. How should America think about the export of a B30A?
There are a number of reasonable criticisms one could make of the current approach to export controls and tactical stuff in this administration. There's much bigger critiques you can make of the Biden administration and the first Trump administration. Let's set all that aside and talk about tactics for a second. I believe there's a legitimate case to be made for, “We want to sell more chips to China.” The case that Jensen Huang, CEO of Nvidia, will make is that China should be dependent on US chips, rather than developing a domestic semiconductor ecosystem. If they do that, they're going to be able to flood the market and out-compete all of our companies in ways that we've seen before. The correct strategy, in that model of the world, is we need to double down on controls for semiconductor manufacturing equipment, but we should also sell China some of our top-of-the-line Nvidia, AMD, and other AI acceleration hardware.
That is a very reasonable strategy, but you have to implement it. Implementing that strategy requires more than saying, “That's what we're doing.” Taking the tweet-length version of the argument I gave you, you have to think a little bit more carefully. One thing you have to think about is: what does “some chips to China” mean? We have ways of making this intelligible. We should do that. We should have a framework where we say, “We're going to allow this much: we want a chip that is X% better than the best chip in China. If an American company makes that chip, we're going to sell China Y number of that chip.”
Give the relevant businesses — give everybody predictability. The rate-limiter here is TSMC (Taiwan Semiconductor Manufacturing Company). It's how many chips can be fabbed. Nvidia will sell as many as they make, at least over the next couple of years. That way, you can have some reasonable sense of, “We're talking about giving roughly this percentage of TSMC's output of AI chips to China.” That seems the way to do it.
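To make the shape of this concrete, here is a minimal sketch of the kind of quota rule Ball describes: a chip qualifies for export only if it sits within a fixed performance margin of China's best domestic chip, and qualifying exports are capped at a share of TSMC's AI-chip output. Every name and number here is an illustrative assumption, not actual policy or actual market data.

```python
def export_quota(chip_perf: float,
                 best_chinese_perf: float,
                 tsmc_annual_output: int,
                 max_margin: float = 0.25,
                 output_share: float = 0.10) -> int:
    """Illustrative quota rule: how many units of a chip may be exported.

    A chip qualifies only if it is at most `max_margin` (here 25%) better
    than the best Chinese domestic chip. Qualifying chips are capped at
    `output_share` (here 10%) of TSMC's annual AI-chip output, reflecting
    the point that fab capacity, not demand, is the rate-limiter.
    """
    if chip_perf > best_chinese_perf * (1 + max_margin):
        return 0  # too far ahead of China's domestic frontier: not exportable
    return int(tsmc_annual_output * output_share)

# A chip 20% ahead of China's best qualifies; one 50% ahead does not.
print(export_quota(1.2, 1.0, 1_000_000))  # → 100000
print(export_quota(1.5, 1.0, 1_000_000))  # → 0
```

The point of writing it down this way is the predictability Ball asks for: once the margin and the output share are public parameters, chipmakers and buyers can compute their own answer in advance rather than waiting on case-by-case licensing decisions.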
Should we give American companies the right of first refusal on those chips? As you say, there is more demand than there is supply. American companies would love to buy them.
That's an interesting idea. It seems the sort of thing you could lawyer your way around pretty easily — I don’t know how exactly you write that into the Bureau of Industry and Security (BIS) rulemaking, but that'd be a great way to do it. What could be more America First?
How can we implement this idea of selling China a certain amount of chips and no more, in a world where buyers in Malaysia buy a bunch of chips and then pass them along to their partners in mainland China?
The Action Plan does not say anything specific about what the substantive content of AI compute export control should be. What it says is, whatever the compute export controls are, we are saying, “This is American law, so let's enforce it evenly and robustly throughout the world.” So that people take us seriously: big navy and stuff. “Don't screw with us.” So, make sure that we know when you're violating the law, because it doesn't matter how big your guns are, if you don't know that the law's being violated, or where, or by whom.
The Action Plan goes into this idea that the intelligence community needs to partner more thoroughly with the Bureau of Industry and Security in the Commerce Department, which writes and enforces the export controls. There's so much information sharing that could be happening there that isn't — it boggles the mind when you think about how much low-hanging fruit there is.
One of the more controversial parts of the Action Plan is location verification [of where specific chips are]. The Action Plan is very sincere in what it says about that, which is, “It is worth exploring.” I'm not technical enough to know what is feasible and what is not. Is that creating a security vulnerability? There's probably versions of it that do, I'm not sure. But the White House should get its people together and explore that.
One other piece of this that I think is important to emphasize: it's not in the Action Plan because BIS writes its own rules. But now I can comment on such things, because I don't work for the White House anymore. This is where the benefit of something like a global licensing rule makes quite a bit of sense and is good for American industry. The Biden diffusion rule was an attempt at this, but it added complexity and grouped the countries of the world into strange classifications that offended people.
You can have a global licensing rule that doesn't do any of that stuff, but does make sure that the bulk of the AI compute is preferenced to the United States. You can also make sure that, of the compute that's not built in the United States, we're preferencing American companies to build it, which is obviously consistent with the export promotion agenda. All this stuff can be put together into a harmonized way that gives chipmakers, data center operators, financiers, other foreign governments — everyone — confidence and predictability, which is lacking in this AI infrastructure build-out.
In November, you laid out a roadmap for how the Trump admin could take AI safety seriously, and suggested maybe this admin will be a better home for AI safety concerns than a putative Kamala administration. About a year on, how do you reflect on that argument?
I feel great about that thesis. Part of this is a political realization about the ways that the Democratic and Republican parties are hooked up in reality. I've talked to people who tried to make the case for AI catastrophic risk being a problem. There are good-hearted Democrats who will be like, “Yeah, that sounds like a problem for humanity.” But the actual way the Democratic Party works is that, if you want to get it to be supportive of catastrophic-risk measures, you’ve got to have a story about why biorisk is a problem for the teachers unions, the ACLU, the school bus drivers union, and all the other various constituencies that — Holy Roman Empire-like — make up the Democratic Party.
We have interest groups in the Republican party, but we're motivated by — what was the president's objective for the Action Plan? “Achieve American AI dominance.” That's it. Period. Big objectives. In the campaign, it was, “Human flourishing” and “free speech.” Those are big ideas, and it's not a bunch of jumping through hoops and all this tortured stuff. This is very simple and what we want to do. That lends itself well to caring about these big picture issues, particularly because it turns out that many matters of AI safety and what the doomers — or people that are worried about such things — care about comes down to values, alignment, control, and maintaining human control over these things.
One failure mode for Republicans will be repeating the social media policy battles with AI. They're not the same at all, but there are some structural similarities. We have broadly aligned concerns about both of these technologies. So there's a real opportunity there and Republicans are quite primed to think about these issues in ways that are isomorphic to how AI safety people think about them. I have observed prominent Republicans do this.
Anything I should ask you before we close out?
I don't have a good answer, but why can't the White House use Google Docs? More broadly, why can't the whole government use Google Docs? I was thinking about that a lot in the interagency process: why don't we all have one Google? It definitely doesn't work that way.
Another thing, maybe it's more of a future thing for us to talk about, but how rules governing the flow of information profoundly affect the way the government works and the incentives of public employees. The Freedom of Information Act (FOIA) is a good example of this. FOIA hugely impacts the way that the federal government is organized and what federal government employees do and do not say, and do and do not do, for that matter.
When you say, “hugely impacts,” you mean for ill?
Sometimes it's for ill, sometimes not, sometimes it's neutral. The bigger point is that it's an artificial imposition on the activity. It's not a law of nature that every email is a record. It's not that we discovered that in quantum mechanics or something. That’s a thing we chose to do to ourselves.
Most laws are.
But after having done it for 50 years, we should decide whether we still want to or whether we want to make any changes to that. The other one is classified information. I don't know if you've ever done any episodes specifically on that, but the political economy of classified information is something that I could write a d*** book about.
The institutional Statecraft position is that we classify way too much stuff. Jon Askonas has been very influential for me in this regard.
We absolutely classify too much stuff, but the various reasons it’s done are so interesting. One of which is: we classify information to keep people out of the room. It's about the interagency process, because some interagency processes are classified. You do that if you want to limit the number of people who can be in the room: you put it at the top secret level, you put it in a SCIF (Sensitive Compartmented Information Facility), and then — I don't know, does NSF even have a SCIF? It makes it harder, because you massively limit the number of people who can be involved.
That's one example of many ways in which the classification system is used tactically in government as a move, and not necessarily used to protect that information. I promise you that half of what's on the front page of the New York Times today is classified information.