Statecraft
Does Anyone in Government Care About Productivity Growth?

"This is a big, structural problem with our government"

Today’s episode is a special one: an interview with a colleague of mine at the Institute for Progress. Ben Jones is an economist who focuses on the sources of economic growth in advanced economies, and he’s a Non-Resident Senior Fellow at IFP.

We recorded this conversation at the second #EconTwitterIRL Conference last month in Lancaster, PA, which IFP hosted alongside the Economic Innovation Group. The other interview at that conference was excellent too: Cardiff Garcia interviewing Paul Krugman.

Jones has served in more than one executive branch role, including as the Senior Economist for Macroeconomics at the White House Council of Economic Advisers (CEA) during the first Obama administration. But what we spent most of our time talking about here was a broader question: What role does federal spending on science play in productivity growth?

We discussed:

  • Do national leaders actually affect economic growth?

  • Whose job is it in the federal government to think about productivity?

  • Why is working on supply chains less prestigious than working on economic theory?

  • What market failure is solved by public R&D funding?

  • What does the rise of team science mean for young scientists?

  • Should we be bearish about the entire scientific enterprise?

  • What levers can we pull to increase productivity growth?



Ben, why aren't you an aerospace engineer? 

I was on the way to Oxford to do a doctorate in aerospace engineering and turbomachinery. And I had the dawning realization that it was a very narrow thing to be doing for the rest of my life. I decided I wanted to broaden out. Although I had not really studied any economics to that point, I had become aware of some of the policy challenges around clean energy technologies and was interested in understanding more about what was going on in the market or the government that might interface with technology. I got pulled into economics when I went to Oxford. Apparently I never looked back and I became an economist. 

Early in your economics career, you were special assistant to Larry Summers at Treasury during the Asian financial crisis. What did you learn from watching him? 

I think one of the first things I learned was the importance of expertise in policymaking. That's something Larry said to me very directly when he hired me: “There are lots of very smart people in Washington, D.C. But what is often scarce is true expertise.” 

What did he mean by that?

In order to really figure out our best solutions to whatever’s going on in the world, just being smart is not enough. For good decision-making, you need to bring truly deep perspectives into the room, probably from a variety of angles. Back then at the Treasury, I was not an expert – that was very clear to me. I would shadow Larry in all these meetings, and I was not able to contribute as someone who really understood economics deeply, or who really understood legislative affairs, or as a lawyer who thought a lot about regulatory law.

One throughline in your work that I find fascinating is the role of leaders in national growth and monetary policy. Did that interest come out of your time watching Summers? 

It was actually related to my time watching leaders of other countries in the Asian financial crisis. Since WWI, the 20th-century paradigm of social science, including economics, rejected the “great man” view of history in favor of social forces ruling the day, right?

When I left the Treasury to do a PhD in economics, it became quickly apparent to me that the agency of national leaders was just not a thing. Economists were not talking about what was happening in the world as if it mattered what a political leader thought. Everything was just the expression of broader market forces, or, in political science, median voter theory: what do the masses want, where are the votes going to go?

But at Treasury, I saw very different policy outcomes in different countries, because the leaders of those countries seemed to have very different views and tastes. I left with the impression that leaders mattered quite a bit.

One of my earliest papers had a very morbid research agenda. I wanted to look at what role leaders might have in the world. In order to get causation, we need some random, exogenous event to happen.

So with my co-author, we collected data on all the cases where leaders died suddenly, either because they had a heart attack or a stroke. Then we studied what happened to their economies when there's a sudden change in leadership that’s unrelated to social forces.

And what we discovered was that leaders seem to matter enormously for the progress of nations economically in authoritarian regimes. But when leaders die in democracies, we see no macroeconomic effect.

Would that imply that a successful assassination attempt on a U.S. president would likely not have macroeconomic effects?

Well, we did go on from that first study. I said it was a morbid research agenda. The second thing we did was study assassinations of national leaders. The paper is called “Hit or Miss”, which tells you the point: conditional on the weapon going off, is the leader killed or merely wounded, or does the weapon miss?

And of course, Trump just had one of these incredibly close calls. We look back to 1875, and there are many cases where leaders barely survive. Idi Amin was once giving a speech when a live grenade bounced off his chest, fell into the crowd, and blew up. But he was fine. We've had leaders shot through the mouth. Any number of things have happened.

We find that when you assassinate a leader in authoritarian regimes, you get big changes, but when you assassinate a leader in a democracy, you don't. Although, of course, Trump is not your average leader, and to the extent someone is not really representing the average, maybe different things could occur. 

You ended up in the federal government again at the Council of Economic Advisers (CEA) about 14 years ago. Were you a better economist in government that second time around? 

I was certainly more informed. I meant what I said about the role that expertise can play in policy. After I left Treasury, I got my PhD and spent the next decade developing certain areas of economic expertise. 

When I was back in the White House in the first term of the Obama administration, I did have this gratifying sense that once I’d become more of an expert, I had more automatic insight, and a lot more to say. I felt this very magical feeling. When you walk into a meeting and you listen to the conversation, you suddenly think, “Well, this is uninformed by the following six things. I’d better jump in here and tell people what's really going on, at least in my lane on economics.” 

I found that was actually quite easy to do. Expertise really is powerful, and it's also gratifying to be able to provide it in an active policy process.

In D.C., whose job is it to think about productivity growth? 

No one, in a sense. I think this is a big structural problem with our government. 

If you think about the progress of the U.S. economy over the long run, we've had massive transformations in health and in the standard of living. You can look at any sector of the economy and see that. Where does that really come from? It comes from science and technology and from new ideas. Productivity growth is not about doing more of the same thing you were doing. It's about radically different ways of doing things.

In the middle of the 19th century, if you wanted to cross the country, you’d do it in a wagon, and it took five or six months. Now you can do it in five or six hours. We didn't get there by making more wagons. We did an incredible amount of science and innovation between then and now that developed massive new forms of transportation.

So when you think of what really drives our strength — and I mean our income, our health, the competitiveness of our workforce, the ability to succeed in the global economy, our national security — at its heart, all of that is really science and technology. And the government plays an incredibly important role in that.

But when you get to the center of the thing in the White House, and the economic policy that most closely surrounds the president, you're thinking more about the Treasury. It’s very important, very influential, but it’s dealing more with crises, finance, and not really thinking about long-run investment.

In the White House, you have the National Economic Council, the CEA, or the Office of Management and Budget (OMB). But no agency is really committed to productivity. The Office of Science and Technology Policy, OSTP, can have influence. But in my experience, it’s rarely especially close to or influential with the policies being formed. Most OSTP staffers sit in the New Executive Office Building (NEOB), not really in the main campus of the White House. [NB: It was once the case that OSTP staff were housed in the NEOB. However, today OSTP is housed in the Eisenhower Executive Office Building, on the main campus of the White House, along with key staff from the National Economic Council, the Council of Economic Advisers, and some OMB staffers.]

What structural change would you want to see to the White House decision-making apparatus to prioritize productivity?

It could be bringing the CEA or the NEC or the OSTP into the heart of the matter. But I don't know that the problem is necessarily the executive branch. I think it's a broader problem in Congress, in the American public, in the electorate. I don't think the fact that science and technology drive our prosperity is really that appreciated, broadly. That leads to underinvestment and deprioritization of these topics that are so central to what Americans want to achieve. 

You once called investing in science and innovation “perhaps the world's greatest market failure.” In layman's terms, why is that a market failure?

Science and innovation are extremely important, but that doesn’t necessarily mean the government needs to get involved, right? Maybe it's really important, but the market does it and everything's fine. 

The fundamental reason there's a market failure here, and the government does need to get involved, is that the value of an idea goes way beyond the value captured by the person who created it. Here’s one way to think about new ideas. Whoever you are — a private sector firm, a scientist in a university, or a national lab — you're trying to come up with something new that either explains nature in some new way or brings some new product to the market.

And it's hard. It's uncertain. If it was obvious it would work, we'd do it already, right? You're stumbling around in the dark, and then someone finds something, they flip on a light switch. “Aha, look at this: CRISPR! Look at this: machine learning! It recognizes cats on the internet or whatever. Isn't that amazing?” 

But when you turn that light on, it's not just that you can see. Everybody else gets to see with the same light, and they can take that idea and build on it. Or maybe they're inspired by it and come up with an even better algorithm, or an even better gene editing technology.

When I turn that light on, I create all that value from everyone else being able to see, but I don’t capture it myself. This is why the market will tend to under-invest in R&D: the private return to the light is much lower than the broader social gain from all the people who get the benefit of that light.

As a political matter, how do you encourage federal policymakers to allocate more money to R&D?

I'm trying. It's hard. People even agree with me when I tell them, but somehow it's hard to make it go.

Now, to be clear, the government does an enormous amount already in this space. We have a lot of taxpayer dollars going to funding science. We have intellectual property, and we also have R&D tax credits. There are a number of ways that the government comes forward and subsidizes or motivates further investment in R&D. 

But if you take a macro view, R&D investment in the U.S. as a whole is less than 3% of GDP. That's a very small amount, given how important it is. Macro estimates suggest we could easily double that: we'd grow faster, we'd live longer lives, we'd have a more competitive workforce, we'd have greater national security. We could have all these great benefits that would more than pay for themselves because the social returns are so high.

I know some policy advocates who would respond to you like this: 

“We have very little agency to increase the total amount of federal funding for R&D. Therefore, the best use of our time and intellectual resources is to push to better allocate the funding that we do commit to R&D.”

What do you make of that tension? 

I think it doesn't have to be a tension. They're both very valuable. We greatly under-invest in R&D, and I would love to see us just do more of the same. Just do everything we're already doing and put more into it: more to the Department of Energy, more to the NIH, and just let them distribute it the way they do.

But it is also true that we don't do that in a way that's super informed by rigorous evidence. Why does NIH do grant review one way and NSF does it another? Do we get more return from the R&D tax credit vs. from DARPA at the Department of Defense? We don't know the answers to those questions.

So whatever share of GDP is going into R&D, it would be great if we could do that more efficiently. There's a lot of research trying to figure out ways to do that, and that's very valuable. 

But let me be clear: what we're doing, even if it's messy, really works. The U.S. science and innovation system works. It's very, very, very effective. Maybe we're wasting money over here and not so much over there. But I wouldn't want to let the perfect be the enemy of the good, and I wouldn't want people to say “Oh, I'm not sure what the best next investment is. So until we study that, let's do nothing.”

One of your most well-known papers is called “The Burden of Knowledge and the ‘Death of the Renaissance Man’: Is Innovation Getting Harder?”

It suggests that as science compounds on itself, it takes longer to get to the frontier: First inventions come later in life, and people spend more time in educational institutions before becoming good researchers. Newton said he could see so far because he was standing on the shoulders of giants; now “climbing up the backs of giants” takes longer. 

That paper came out in 2005. What have we learned about the burden of knowledge since then?

The burden of knowledge argument says it’s inevitable that people become narrower and narrower in their expertise. There's so much more to know, and you can't know everything, so we all end up as narrower experts. But if you're really narrow, it's hard to do anything big and important. It's very frustrating.

When I ended up deciding not to do my doctorate in turbomachinery, it was partly because it’s such a deep field. It's very, very, very, very deep. If you do turbomachinery, you probably are an expert on how to drill holes in turbine blades to get the maximum cooling from airflow. It's that specific.

From Jones’ “The Burden of Knowledge” paper

If you want to do something big and everyone's so narrow, what do you do? Well, you have to collaborate. One of the themes of “The Burden of Knowledge” is that we're going to end up having to collaborate more across expertise if we want to meaningfully push the frontiers of knowledge.

And that is what we see. Not only is everyone working more in teams, but the highest-impact work is coming from teams. If you did something alone in math in the ‘50s and ‘60s, your chance of writing a home run paper as a solo mathematician was higher than if you worked in a team. But now it's reversed. Basically in every field, if you're on a team, you'll have a higher probability of a home run than if you're alone. 

That has led to science becoming a big network of international collaboration as people try to find co-authors or co-inventors. We've seen a real shift in the organization of science.

From “The Burden of Knowledge”

Have you noticed a shift towards more teamwork in economics, over the course of your own career?

Yeah. Team-authored work in economics is becoming the source of the highest-impact work. I had a Journal of Economic Perspectives piece a couple of years ago talking about how this all applies in economics.

Econ PhDs are taking longer and longer and longer to complete their PhD and enter the job market. The other feature of the burden of knowledge is not just that we're narrower experts, but that it takes longer to become an expert because there's so much more existing knowledge.

In a related paper, you talk about ways to try and shorten the educational timeline. Are there ways that we could do that for economics PhDs, for instance?

It's tricky. The more we succeed, the more we get deeper and deeper knowledge across all these areas. Now, of course, sometimes we have revolutionary ideas that throw out the old knowledge, and they might make it easier to get to the frontier in that area.

That's what happened in early 20th-century physics with quantum mechanics, for example. I think that was partly what was going on in the early years of computing and the dot-com era. HTML code was pretty simple.

In one of those papers, you talk about how, during the quantum revolution, the average age of scientists making discoveries or publishing papers plummeted.

Yeah. It’s a good example. Einstein, Heisenberg, and Dirac were all below age 25 when they did the work for which they would win the Nobel Prize. Heisenberg famously nearly failed his PhD oral exams in Germany because his advisors were like, “This guy doesn't know anything about classical electromagnetism. He couldn't tell you what Maxwell's equations are.” Like a year later, he's got matrix mechanics and he's going on to the Heisenberg Uncertainty Principle. 

After Planck at the very beginning of the 20th century, basically there were these new empirical facts that said classical physics was wrong, at least at small distances. And it was open season. You saw very young mathematicians coming in with very creative ideas. But if you look at quantum after that, the age at which people made quantum discoveries goes up and up and up, because actually quantum becomes an incredibly deep, complex model.

Can we hold out hope for a similar revolution in economics? That suddenly the median age of brilliant publication will be like 24?

Maybe if AI can do it for us.

Say more about that.

So this relates directly to the burden of knowledge. Why might AI be good at creativity? Well, it's because AI actually does read everything. Our problem is that we only can be experts in very narrow things. In some sense, our creative capacities are limited by what we're experts in, and we have blinders about all these other deep areas of knowledge. 

Now, it remains to be seen, but to the extent that AI is trained on everything, and it can actually be reasoning in some magical way through all the literature, it can in fact be your new combiner of ideas to get past the burden of knowledge and do some pretty interesting things that would be very hard for an individual expert to be able to do.

How bullish are you on that particular outcome? 

I'm often totally wrong, and it's hard to predict the future in technology, but I'm actually pretty bullish about AI. The tool sets are developing rapidly, and it's uncanny what it can already do, in my view, across a number of tasks. 

Another reason I'm bullish: we spend a lot of time talking about AI and how it affects the workforce, how it would affect jobs. R&D is a small share of the economy, but it creates so much progress. Forget the general GDP production function: if AI has an effect on the research production function, then it may give us great leverage, turning limited resources into much faster progress.

Let's imagine this hypothesis is wrong, and AI doesn’t cause big changes to the research landscape. If you want to improve science and counteract the narrowing and aging of fields caused by the burden of knowledge, what else is on the table? What are plans B and C? 

Many people born in the U.S. don't particularly want to become STEM workers. Many do, but an increasing portion of the U.S. STEM workforce is made up of immigrants who come here for university. From the point of view of getting someone into a STEM career, one of the challenges of the burden of knowledge is that the longer it takes to get to the frontier of a field, the more time you spend as a student. It takes longer to become a postdoc and to get to a place where you have creative autonomy: your own grants, your own lab, the ability to pursue your own independent research.

All else being equal, that dissuades entry into research careers, compared to finance or consulting. Obviously, some science-oriented, curious people will just do it anyway. But it’s challenging.

Remember that because it's a public good and there's a market failure, the wages in science aren't determined particularly by standard market demand, which you might see in consulting or finance. They're heavily subsidized by research grants that allow you to pay for a postdoc, etc. And those salaries are low. 

So the more we starve the system of public money — which we've been doing, by the way: public funding of R&D in the U.S. as a share of GDP is near a 70-year low — the more difficult it will be to create the incentives for people to enter those careers. That starving of the pipeline, to me, is a first-order problem.

Would you want to see more grant money allocated specifically to younger scientists? Would that help pull the marginal young talent into science, as opposed to into consulting or into finance?

It’s an interesting idea, and I’m very open to it.

In the biomedical system, you now get your first NIH grant where you're the principal investigator (PI) on average around age 44 or 45. Think about that. You put so much time into becoming a biomedical researcher, you’ve probably gone through a series of postdocs, and you can finally be captain of your own ship in your 40s, when people in other careers might be retiring if they did very well. These are very smart people going into these biomedical careers. They have lots of outside options. 

Santi, building on your question, why not give a lot more money? Let people start earlier on their own research, and not necessarily wait so long before they get to creative autonomy. 

Related to that, we spend a lot of time in grant reviews, pretending we're not wrong, as if I can actually read this grant and tell you what's going to change the world or not. I think experts have value, but we don't want to overplay how much people can actually predict how to allocate those investments. I'm often wrong. I think everyone is often wrong about what's going to happen in the technology space. 

We were wrong about mRNA. We were wrong about AI, right? It's often surprising what turns out to work. I think making broad investments across a variety of perspectives with people with different ideas and views, young, old, is probably a very fruitful way to go.

I hadn’t thought about this problem before reading one of your papers. But the burden of knowledge doesn't just apply to individuals and teams doing research. It also applies to grant evaluation teams. As knowledge becomes even more narrow and specialized, it's much harder for a grant reviewer to decide, “Is this thing worth funding or not?”

Right. In the patent office, a single examiner is typically assigned to a patent. Now, that one examiner has lots of computer tools to help them, but how can that one examiner really know how important or valuable or new these things are? Now, grants tend to have grant panels, but even then people know what they know, and if someone comes up with something fresh that is beyond the expertise of the panelists, it's just very hard to say how good it is. There's a longstanding concern that this leads to certain kinds of conservatism or mistakes in how we allocate resources. 

Given those fundamental trends and this bureaucratic inertia, should we be relatively more bearish on institutions like the NIH and NSF? Should we put more of our eggs in stranger, newer, weirder ways of funding science? 

We continue to develop evidence on different models. You know, the DARPA model's gotten very popular. We had that from the Defense Department. Now we have ARPA-E in energy and ARPA-H for health.

Okay. I tend to be of the view that trying different things is good. But I think we're a little ahead of the rigorous empirical evidence on knowing why you'd invest it that way vs. a different way. I'll retreat to my prior point: some of it's probably better and some of it's worse, but boy does it do a lot altogether.

Are there empirical questions that folks in this audience should be pursuing that would help us better understand how to allocate science funding?

Sure. One of the challenges is, what's the gold standard of evidence? It's to do some field experiment, where you randomly do one thing for one set of people and you have a control group that does something else. It’s what you'd have to do to get an FDA-approved drug. We force it in that context. But in general, we don't force a whole lot of experiments upon the world. 

Running experiments in the government on different ways of funding is a really good idea, and it will be revealing. I've worked on this and pushed on it a bit at times, though I don't do a lot of field experiments in my own work. But whenever you want to run a field experiment on an organization, you’ll encounter a challenge: they're afraid you're going to find that what they do doesn't work. Right? And then they're like, “Well, I don't want to know. I'd rather not know that what I'm doing is not working, so I don't want to find out.” 

That's a problem, but it's not a surprising problem. It's very natural. I would call that an existential field experiment, where you're going to test whether or not you should have this program. The people who work in the program lose their jobs if you decide it was bad. 

A different field experiment is what I would call an operational field experiment: “You're already going to give out this money, and I just want to know should you give it out this way, or should you give it out that way?” We're not really evaluating whether or not you should give out the money. We're just saying, conditional on giving out the money, is there a better way to do it? An operational RCT in the stream of the existing operation. It’s gentler. If you can keep running A/B tests on how you give out money or do other innovation activities in the government, you can keep trying to inch your way forward towards a more efficient system.

I have a question about status in economics. There’s a clip from Ha-Joon Chang joking about the relative status of different subdisciplines in economics, basically suggesting that the brainier you are, the more heady and untethered to reality your subdiscipline is. He says that the lowest point on the totem pole is trying to figure out supply chains, or inputs and outputs to individual factories.

Is that your impression of the status game in the field? Should it be that way?

I think in the sociology of science and of economics, that view exists. There’s this idea in science: if you're not a great mathematician, you become a physicist, if you're not a great physicist you become an economist, down the chain or something. It's a way for people to insult other fields or celebrate their own. I think that view exists. 

But I think it's also wrong, deeply wrong. Both are important. Theory is very important, and applied work is very important.

This goes back to Pasteur's Quadrant, a very influential book by Donald Stokes. He points out that this distinction between basic and applied research is way too simplistic. His canonical example is Louis Pasteur. What was Louis Pasteur doing in the 19th century?

On the one hand, he was consulting with winemakers and cheesemakers, and thinking about how we preserve food. Of course, from that we get the pasteurization of milk. At the same time, he comes up with the germ theory of disease, which is one of the most fundamental insights in biology. 

If you look at his career, it's not like he thought of the germ theory of disease first and then said, “I wonder if that’s why we get spoiled milk. Let me go work on an application.” Nor did he say, “Huh, we get spoiled milk. I wonder if there's a germ theory of disease.” The two were just closely tied up: his engagement with real problems worked hand in hand with his broader conceptualizing.

I have a paper in Science that tries to pull all patents and papers together and look at how patents build on papers. What you see is that the highest-value patents are the ones that directly build on science, and the highest-impact science is the work that patents most directly build on.

Science often has direct applications to patents in the market, and I think that's what is going on there. If you're doing research untethered from empirical fact, you're just theorizing in the shower. You might come up with some really good ideas. But it's possible you're not talking about anything. You're just coming up with good ideas, spinning within the literature. You're doing a logical extension, but it's not really applying to anything.

If you're engaged with what's actually happening and you're getting an explanation for that, you may be wrong, but at least you're explaining something. If you look at electricity or computing, at general purpose technologies, they're often at first designed around solving a very particular problem, and then they just turn out — like ARPANET to the internet — to have much bigger applications, way beyond what was first thought. This goes back to the light turning on. High-value science can come from lots of places, but it has a higher probability of success if you're engaging with a real phenomenon. 

So that's the sense in which that common sociological view, that some fields sit higher on a hierarchy of importance, is actually deeply, deeply wrong, maybe even backward.

Are there levers you'd like to pull for the field of science in general? Can we change our funding incentives to shift that status game so that more people are working in Pasteur's Quadrant and not on pure heady stuff, or pure engineering?

I'm actually not sure we get it that wrong. Math gets a lot less funding than computer science, for example. I have another paper where we took all the times a paper is referenced in a patent, in a federal policy document, or in the news media.

We could see which scientific fields get used the most outside of science in various realms of the public sphere. We also looked at the funding for all the fields of the papers. So we could see in science, do we fund the fields that rarely have impacts in one of those spheres? Or do we tend to fund fields that tend to have a lot of impacts in those spheres? There is an incredibly strong correlation between what the U.S. government funds and its rate of impact or use outside of science. So actually, funding is pretty heavily allocated towards things that tend to have a more obvious application, broadly construed.

Audience Questions

Willy Chertman, IFP: You talked about the rise in the average age of scientists when they receive their first NIH or NSF grant. What percent of this trend is due to the increase in the burden of knowledge? Could it be partially due to something more like rent-seeking, or institutional inertia? Maybe older institutions have pre-existing scientists, and those scientists have longer careers due to longer life expectancy. How would you think about that, roughly? 

Ben Jones: So I think there are several things going on, including the ones you mentioned. With the baby boomers getting older, there are more PIs who are older, who are very established. When they apply for NIH grants, they tend to get them, so they're coming up higher on the ratings. So I think there's some demographics going on there. 

But another challenge is that as we move more and more towards teamwork, there are like 12 people on your paper. It's very hard for a young person to signal that they were in fact an important part of that team. You can put them as first author and various things like that. But in the days when solo authors wrote papers, you could say, “That’s my paper,” and I could say, “That paper is great. Let’s go with that person.” Now the signals are much more mixed. That makes it a lot harder for young people to signal themselves as an independent entity that’s ready to run something.

I don't have a good empirical answer for you, but I think there's a lot going on there. And if we don't really know the best ideas to fund anyway, we should be modest about our ability to screen that, and we should make sure we have an allocation that encourages entry. That's probably the more important bottleneck.

David Jimenez, Niskanen Center: In D.C., we sometimes find that people hate grants, but they love tax credits. They're fascinated with this indirect system of tax incentives and deductions for innovation. I’m curious, heading into next year with TCJA, how much private-sector R&D tax credits can fill the gaps you're describing. If that is a less optimal approach, how do we mitigate that? 

Ben Jones: So there's separate evidence that the R&D tax credits seem to be pretty effective. That's a context where there's a high social return compared to the cost. For those who think that the market is really important, maybe that's a reasonable political sell.

There's separate evidence that public funding of R&D is very high-return as well, maybe quite a bit higher, actually. That’s the area I think we're tending to starve with time. In the market, someone will innovate if the private return is high enough, even when there are lots of spillovers. I mean, there are lots of spillovers from the iPhone, to all the app developers, that Apple doesn't directly capture. But Apple did pretty well, right? So they're going to innovate anyway, because the actual private return is high enough. 

But remember that in science, the direct return from a new scientific idea put in a paper and published for everyone to read is zero. There's no market, typically, for a new understanding of nature. And yet those new understandings are the foundation for so much marketplace innovation. 

The market's still going to do a bunch of innovation, because the firms are going to figure out ways to appropriate some sufficient value. But the thing you're really not going to get is a lot of science, new understandings of nature. And those are going to be the fund from which the market's going to draw. That's the part that I think we really need to encourage.

Jed Kolko, former undersecretary for economic affairs at the Department of Commerce: If you were to think about international economic policy — by which I mean immigration, trade, foreign direct investment, and so on — and you wanted to design policy to maximize innovation, would that imply complete openness on all dimensions? Or are there arguments for anything less than complete openness on the international side, if you wanted to maximize innovation? 

Ben Jones: From an economics point of view, you want to maximize openness, because you want people to build on each other's ideas. You want them to be inspired by them. You don't want ideas in silos.

An interesting new paper from Fieldhouse and Mertens looks at the causal effects of U.S. R&D funding on U.S. productivity over the long run. It’s a hard thing to do, but they have a pretty clever idea of how to do it. 

They find really high returns to public funding across the agencies, except for the Department of Defense. Their argument is that most of that DoD funding is weapons development, and it's closed, so people in the broader sphere can't really benefit from the ideas. If the funding is open, you can then incubate new industries.

So I would think you'd want a lot of openness on science innovation to make that work. When you get into trade and exchange rate policy, we have to think a little bit more. Industrial policy is back to the fore, and people are debating whether you can be good at it or not. When you put up trade barriers, you cut down market sizes for companies. So that's going to reduce the reward from innovation. You could think that could dissuade a company that thinks it could win globally, but can no longer sell except on a local market. It's going to have much less return on its R&D investments.

On the other hand, you could think that maybe trade barriers motivate more new innovative ideas, because everyone's trying to do something new and different in all these different countries. You're gonna get more bites at the apple, and that might lead to more collective progress.

It’s complex. But with regard to the science piece, if we can sustain openness, that’s key to the whole system.
