Dwarkesh Podcast

Tyler Cowen - Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth

It was a great pleasure speaking with Tyler Cowen for the 3rd time.

We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, risk, human nature, anarchy, central planning, and much more.

The topics covered in this episode are too many to summarize. Hope you enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - John Maynard Keynes

(00:17:16) - Controversy

(00:25:02) - Friedrich von Hayek

(00:47:41) - John Stuart Mill

(00:52:41) - Adam Smith

(00:58:31) - Coase, Schelling, & George

(01:08:07) - Anarchy

(01:13:16) - Cheap WMDs

(01:23:18) - Technocracy & political philosophy

(01:34:16) - AI & Scaling

Transcript

Dwarkesh Patel 00:00:00

This is a fun book to read because you mentioned in there what the original sources to read are. It’s like the Harold Bloom of economics, right?

Tyler Cowen 00:00:10

It’s a book written for smart people.

Dwarkesh Patel 00:00:13

Okay, so let’s just jump into it. The book we’re talking about is GOAT: Who is the Greatest Economist of All Time and Why Does It Matter? Alright, let’s start with Keynes. So in the section on Keynes, you quote him, I think, talking about Alfred Marshall. He says, “The master economist must possess a rare combination of gifts. He must be a mathematician, historian, statesman, philosopher. No part of man’s nature or his institutions must lie entirely outside his regard.” And you say, well, Keynes is obviously talking about himself because he was all those things, and he was arguably the only person who was all those things at the time. He must have known that.

Dwarkesh Patel 00:00:57

Okay, well, you know what I’m going to ask now. So what should we make of Tyler Cowen citing Keynes using this quote? A quote that also applies to Tyler Cowen?

Tyler Cowen 00:01:09

I don’t think it applies to me. What’s the exact list again? Am I a statesman? Did I play a role at the Treaty of Versailles or something comparable?

Dwarkesh Patel 00:01:18

I don’t know. We’re in Washington. I’m sure you talk to all the people who matter quite a bit.

Tyler Cowen 00:01:21

Well, I guess I’m more of a statesman than most economists, but I don’t come close to Keynes in the breadth of his high-level achievement in each of those areas.

Dwarkesh Patel 00:01:32

Okay, let’s talk about those achievements. So, chapter twelve of The General Theory of Employment, Interest, and Money. Here’s a quote. “It is probable that the actual average results of investments, even during periods of progress and prosperity, have disappointed the hopes which prompted them. If human nature felt no temptation to take a chance, no satisfaction, profit apart, in constructing a factory, a railway, a mine, or a farm, there might not be much investment merely as a result of cold calculation.” Now, it’s a fascinating idea that investment is irrational, or that most investment throughout history has been irrational. But when we think today about the fact that active investing exists for winner’s-curse-like reasons, that VCs probably make, on average, lower returns than the market, there’s a whole bunch of different examples you can go through, right? M&A usually doesn’t achieve the synergies it expects. Throughout history, has most investment been selfishly irrational?

Tyler Cowen 00:02:26

Well, Adam Smith was the first one I know to have made this point, that projectors, I think he called them, are overly optimistic. So people who do startups are overly optimistic. People who have well-entrenched VC franchises make a lot of money, and there’s some kind of bifurcation in the distribution, right? Then there’s a lot of others who are just playing at it and maybe hoping to break even. So the rate of return on private investment, if you include small businesses, is highly skewed, and just a few percent of the people doing this make anything at all. So there’s a lot to what Keynes said. I don’t think he described it adequately in terms of a probability distribution, but then again, he probably didn’t have the data. But I wouldn’t reject it out of hand.

Dwarkesh Patel 00:03:13

Another example here, and this is something your colleague Alex Tabarrok talks about a lot, is that innovators don’t internalize most of the gains they give to society. So here’s another example: the entrepreneur, compared to one of his first employees, is he that much better off for taking the extra risk and working that much harder? What does this tell us? It’s a marvelous insight that we’re actually more risk-seeking than is selfishly good for us.

Tyler Cowen 00:03:41

That was Reuven Brenner’s claim in some of his books on risk. Again, I think you have to distinguish between different parts of the distribution. So it seems there’s a very large number of people who foolishly start small businesses. Maybe they overly value autonomy when they ought to just get a job with a relatively stable company. So there, part of the thesis is correct, and I doubt if there’s really big social returns to whatever those people do, even if they could make a go of it. But there’s another part of the distribution, people who are actually innovating or have realistic prospects of doing so, where I do think those social returns are very high. Now, that 2% figure that’s cited a lot, I don’t think it’s really based on much that’s real. It’s maybe not a crazy seat-of-the-pants estimate, but people think, oh, we know it’s 2%, and we really don’t. So look at Picasso, right? He helped generate cubism with Braque and some other artists. How good is our estimate of Picasso’s income compared to the spin-offs from Picasso? We just don’t really know. We don’t know it’s 2%. It could be 1%, it could be 6%.

Dwarkesh Patel 00:04:51

How different do you think it is in art versus, I don’t know, entrepreneurship versus different kinds of entrepreneurship? There are different industries there as well, right?

Tyler Cowen 00:04:59

I’m not sure it’s that different. So say some people start blogging, a lot of people copy them, right? Well, some people start painting in a particular style, a lot of people copy them. I’m not saying the numbers are the same, but they don’t sound like issues that in principle are so different with respect to that 2%.

Dwarkesh Patel 00:05:16

An overestimate or an underestimate? It might be wrong, but in which way is it wrong?

Tyler Cowen 00:05:19

My seat-of-the-pants estimate would be 2 to 5%, so I think it’s pretty close. But again, that’s not based on anything firm.

Dwarkesh Patel 00:05:26

Here’s another quote from Keynes. “Investment based on genuine long-term expectation is so difficult as to be scarcely practicable. He who attempts it must surely lead much more laborious days and run greater risks than he who tries to guess better than the crowd how the crowd will behave.” So one way to look at this is like, oh, he just doesn’t understand the efficient market hypothesis. It’s like before random walks or something. But there are things you can see in the market today. Were the prospects for future dividends so much higher after Covid than they were immediately after the crash? How much of market behavior can be explained by these sorts of claims from Keynes?

Tyler Cowen 00:06:08

I think Keynes had the view that for his time, you could be a short-run speculator and in fact beat the markets. And he believed that he did so, and at least he did for some periods of his life. That may have been luck, or maybe he did have special insight. It probably wasn’t true in general, though we don’t really know. Did efficient markets hold in Britain at that time? Maybe there just were profit opportunities for smarter-than-average people. So that’s a view. I’m inclined not to believe it. But again, I don’t think it’s absurd. Keynes is saying, for people who want money, this is biased toward the short term. You can get your profits and get out. And that’s damaging long-term investment, which in fact he wanted to socialize. So he’s being led to a very bad place by the argument. But again, we shouldn’t dismiss it out of hand.

Dwarkesh Patel 00:07:01

Why is it not easy to retrospectively study how efficient markets were back then, in the same way we can study it now? You look at the price-to-earnings ratios, and then what were the dividends afterwards over the coming decades for those companies based on their stock price or something?

Tyler Cowen 00:07:17

I don’t know how many publicly traded firms there were in Britain at that time. I don’t know how good the data are. Things like the bid-ask spread, at what price you actually executed trades, can really matter for testing the efficient markets hypothesis, so probably we can’t tell, even though there must be share price data of some sort. At what frequency? Is it once a day? Is it once a week? We don’t have the sort of data we have now, where you can just test anything you want.

Dwarkesh Patel 00:07:47

He also made an interesting point. Not only is it not profitable, but even if you succeed, society will look at the contrarian in a very negative light. You will be doubly punished for being a contrarian. But that doesn’t seem to be the case. Right? You have somebody like Warren Buffett or Charlie Munger. People who do beat the market are actually pretty revered. They’re not punished in public opinion.

Tyler Cowen 00:08:08

They pursued mostly long-term strategies.

Dwarkesh Patel 00:08:10

Right.

Tyler Cowen 00:08:10

But again, trying to make sense of Keynes, if you think about long-term investing, and I don’t think he meant Buffett-style investing. I think he meant building factories, trying to figure out what people would want to buy 25 years from that point in time. That probably was much harder than today. You had way less access to data. Your ability to build an international supply chain was much weaker. Geopolitical turmoil at various points in time was much higher. So again, it’s not a crazy view. I think there’s a lot in Keynes that’s very much of his time, that he presents out of a kind of overconfidence as being general. And it’s not general. It may not even be true, but there were some reasons why you could believe it.

Dwarkesh Patel 00:08:55

Another quote from Keynes, I guess I won’t read the whole quote in full, but basically says, over time, as investments, markets get more mature, more and more of equities are held basically by passive investors, people who don’t have a direct hand in the involvement of the enterprise, and the share of the market that’s passive investment now is much bigger. Should we be worried about this?

Tyler Cowen 00:09:16

As long as at the margin people can do things, I’m not very worried about it. So there are two different kinds of worries. One is that no one monitors the value of companies. It seems to me those incentives aren’t weaker. There’s more research than ever before. There’s maybe a problem. Not enough companies are publicly held. But you can always, if you know something the rest of the market doesn’t, buy or sell-short and do better. The other worry is those passive investors have economies of scale, and they’ll end up colluding with each other. You’ll have, say, like three to five mutual funds, private equity firms owning a big chunk of the market portfolio. And in essence, directly or indirectly, they’ll tell those firms not to compete. It’s a weird form of collusion. They don’t issue explicit instructions like, say the same few mutual funds own Coke and Pepsi. Should Coke and Pepsi compete, or should they collude? Well, they might just pick lazier managers who in some way give you implicit collusion.

Dwarkesh Patel 00:10:14

Maybe this is another example of the innovators being unable to internalize their gains. As active investors who are providing this information to the market, they don’t make out that much better than the passive investors, but they’re actually providing a valuable service. But the benefits are diffused throughout society.

Tyler Cowen 00:10:29

I think overconfidence helps us on that front. So there’s, quote-unquote, too much trading from a private point of view. But from a social point of view, maybe you can only have too much trading or too little trading, and you might rather have too much trading.

Dwarkesh Patel 00:10:42

Explain that. Why can it only be too much or too little?

Tyler Cowen 00:10:46

Well, let’s say the relevant choice variable is investor temperament. So, yes, you’d prefer it if everyone had the temperament just to do what was socially optimal. But if temperament is some inclination in you, and you can just be overconfident or not confident enough, and overconfidence gives you too much trading, that might be the best we can do. Again, fine-tuning would be best of all, but I’ve never seen humans where you could just fine-tune all their emotions to the point where they ought to be.

Dwarkesh Patel 00:11:15

Yeah. Okay, so we can ask the question, how far above optimal are we? Or if we are above optimal? In the chapter, Keynes says that over time, as markets get more mature, they become more speculative. And the example he gives is like, the New York market seems more speculative to him than the London market at that time. But today, finance is 8% of GDP. Is that what we should expect it to be to efficiently allocate capital? Is there some reason we can just look at that number and say that that’s too big?

Tyler Cowen 00:11:43

I think the relevant number for the financial sector is what percentage it is of wealth, not GDP. So you’re managing wealth, and the financial sector has been a pretty constant 2% of wealth for a few decades in the United States, with bumps. Obviously, 2008 matters, but it’s more or less 2%, and that makes it sound a lot less sinister. It’s not actually growing at the expense of something and eating up the economy. Would you prefer it to be less than 2%? Sure. But 2% does not sound outrageously high to me. And if the ratio of wealth to GDP grows over time, which it tends to do when you have durable capital and no major wars, the financial sector will grow relative to GDP. But again, that’s not sinister. Think of it in terms of wealth.
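
A minimal arithmetic sketch of this wealth-versus-GDP point, with the wealth-to-GDP multiples as illustrative assumptions rather than data: hold the financial sector at a constant 2% of wealth and let the wealth-to-GDP ratio rise, and finance measured against GDP grows on its own.

```python
# Illustrative only: the wealth-to-GDP multiples below are assumptions, not data.
finance_share_of_wealth = 0.02

for wealth_to_gdp in (3.0, 4.0, 5.0):  # wealth expressed as a multiple of GDP
    finance_share_of_gdp = finance_share_of_wealth * wealth_to_gdp
    print(f"wealth/GDP = {wealth_to_gdp:.1f} -> finance/GDP = {finance_share_of_gdp:.1%}")

# At wealth/GDP = 4, a constant 2%-of-wealth sector already shows up as 8% of GDP.
```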

Dwarkesh Patel 00:12:29

I see. So one way to think about it is like the management cost as a fraction of the assets under management or something. And that’s right. In that case, 2% is not that bad. Yeah. Okay, interesting. I want to go back to the risk aversion thing again, because I don’t know how to think about this. So his whole thing is these animal spirits, they guide us to make all these bets and engage in all this activity. In some sense, he’s saying, like, not only are we not risk-neutral, but we’re more risk-seeking than is rational. Whereas the way you’d conventionally think about it is that humans are risk-averse, right. They prefer to take less risk than is rational in some sense. How do we square this?

Tyler Cowen 00:13:09

Well, here, Milton Friedman, another GOAT contender, comes into the picture. So his famous piece with Savage makes the point that risk aversion is essentially context-dependent. So he was a behavioral economist before we knew of such things. So the same people typically will buy insurance and gamble. Gambling you can interpret quite broadly, and that’s the right way to think about it. So just flat-out risk aversion or risk-loving behavior, it doesn’t really exist. Almost everyone is context-dependent. Now, why you choose the contexts you do, maybe it’s some kind of exercise in mood management. So you insure your house so you can sleep well at night, you buy fire insurance, but then you get a little bored. And to stimulate yourself, you’re betting on these NBA games. And yes, that’s foolish, but it keeps you busy and it helps you follow analytics, and you read about the games online, and maybe that’s efficient mood management, and that’s the way to think about risk behavior. I don’t bet, by the way. I mean, you could say I bet with my career, but I don’t bet on things.

Dwarkesh Patel 00:14:13

What’s your version of the lottery ticket? What is the thing where you, just for the entertainment value or the distraction value, take more risk than would seem rational?

Tyler Cowen 00:14:23

Well, writing this book and publishing it through GPT-4, not with any known publisher. It’s just online; it’s free, published within GPT-4. It took me quite a while to write the book. I’m not sure there’s a huge downside, but it’s risky in the sense that it’s not what anyone else was doing. So that was a kind of risk. I invested a lot of my writing time in something weird, and I’ve done things like that pretty frequently. So that keeps me, you could say, excited. Or starting MRU, the online economics education videos, with no pecuniary return to me at all. Indirectly, it costs me a lot of money. That’s a sort of risk. I feel it’s paid off for me in a big way. But on one hand, you can say, “Well, Tyler, what do you actually have from that?” And the answer is nothing.

Dwarkesh Patel 00:15:11

Yeah, well, this actually raises the question I was going to ask about these GOAT contenders in general, and how you’re judging them, where you’re looking at their work as a whole. Given that some of the risks these intellectuals take pay off and some don’t, should we just be looking at their top contributions and disregard everything else? For Hayek, I think one of the points you have against him is that his top three articles are amazing, but after that, there’s a drop-off. Are the top risks you take the only ones that matter? Why are we looking at the other stuff?

Tyler Cowen 00:15:40

I don’t think they’re the only ones that matter, but I’ll weight them pretty heavily. But your failures usually do reflect how you think or what you know about the world. So Hayek’s failures, for instance, his inability to come up with a normative standard in “The Constitution of Liberty,” show in some ways he just wasn’t rigorous enough. He was content with the kind of Germanic approach: put a lot of complex ideas out there and hope they’re profound. And you see that even in his best work. Now, that is profound. But it’s not as if the failures and the best work for all these people are unrelated. And same with Keynes. Keynes more or less changed his mind every year. That’s a strength, but it’s also a weakness. And by considering Keynes’s really good works and bad works, like his defenses of tariffs, you see that. And the best work, he also moved on from in some way. If you read “How to Pay for the War” from 1940, if you didn’t know better, you would think it’s someone criticizing the General Theory.

Dwarkesh Patel 00:16:43

Does quantity have a quality all of its own? When you think of great intellectuals, many of these people have volumes and volumes of work. Was that necessary for them to produce their greatest work? Or is the rest of it just a distraction from the things that really stand the test of time?

Tyler Cowen 00:16:56

For the best people, it’s necessary. So John Stuart Mill wrote an enormous amount. Most of it’s quite interesting, but his ability to see things from multiple perspectives, I think, was in part stemming from the fact that he wrote a lot about many different topics, like French history, ancient Greece. He had real depth and breadth.

Dwarkesh Patel 00:17:16

If Keynes is alive today, what are the odds that he’s in a polycule in Berkeley, writing the best-written Less Wrong post you’ve ever seen?

Tyler Cowen 00:17:24

I’m not sure what the counterfactual means. So Keynes is so British. Maybe he’s an effective altruist at Cambridge. And given how he seems to have run his sex life, I don’t think he needed a polycule. Like a polycule is almost a Williamsonian device to economize on transactions costs. But Keynes, according to his own notes, seems to have done things on a very casual basis.

Dwarkesh Patel 00:17:50

He had a spreadsheet, right, of his special partners?

Tyler Cowen 00:17:52

And from context, it appears he met these people very casually and didn’t need to be embedded in, oh, we’re the five people who get together regularly, so that’s not a hypothetical. We think we saw what he did, and I think he’d be at Cambridge, right? That’s where he was. Why should he not today, be at Cambridge?

Dwarkesh Patel 00:18:14

How did a gay intellectual get that amount of influence in Britain of that time? When you think of somebody like Alan Turing, helps Britain win World War II and is castrated because of one illicit encounter that is caught, was it just not public? How did he get away with it?

Tyler Cowen 00:18:29

Basically, I don’t think it was a secret about Keynes. He had interacted with enough people that I think it was broadly known. He was politically very powerful. He was astute as someone managing his career. He was one of the most effective people, you could say, of all time, not just amongst economists. And I’ve never seen evidence that Keynes was in any kind of danger. Turing also may have intersected with national security concerns in a different way. I’m not sure we know the Alan Turing story and why it went as badly as it did, but there was in the past, very selectively, and I do mean very selectively, more tolerance of deviance than people today sometimes realize.

Dwarkesh Patel 00:19:13

Oh, interesting.

Tyler Cowen 00:19:14

And Keynes benefited from that. But again, I would stress the word selectively.

Dwarkesh Patel 00:19:17

Can you say more? What determines who is selected for this tolerance?

Tyler Cowen 00:19:21

I don’t feel I understand that very well. But there’s plenty, say, in Europe and Britain of the early 20th century, where quote-unquote outrageous things were done, and it’s hard to find evidence that people were punished for it. Now, what accounts for the difference between them and the people who were punished? I would like to see a very good book on it.

Dwarkesh Patel 00:19:42

Yeah, I guess it’s similar to our time, right? We have certain taboos and you can get away with...

Tyler Cowen 00:19:47

Yeah, they say whatever on Twitter and...

Dwarkesh Patel 00:19:49

Other people get cancelled, actually. How have you gotten away with it? I feel like you’ve never been, at least as far as I know, part of any single controversy. But you have some opinions out there.

Tyler Cowen 00:19:59

I feel people have been very nice to me.

Dwarkesh Patel 00:20:01

Yeah. What’d you do? How did you become the Keynes of our time if we’re comparing after all? Right.

Tyler Cowen 00:20:10

I think just being good-natured helps, and helping a lot of people helps. And Turing, I’m a huge fan of him, I wrote a paper on him with Michelle Dawson, but it’s not obvious that he was a very good diplomat. It seems he very likely was a pretty terrible diplomat, and that might be feeding into this difference.

Dwarkesh Patel 00:20:29

How do you think about the long-term value and the long-term impact of intellectuals you disagree with? Do you think that, over the course of history, the improvements they make to the discourse and the additional things they give us a chance to think about wash out the things they were object-level wrong about?

Tyler Cowen 00:20:48

Well, it’s worked that way so far, right? So we’ve had economic growth, obviously with interruptions, but so much has fed into the stream. And you have to be pretty happy with today’s world compared with, say, 1880. The future may or may not bring us the same, but if the future brings us continuing economic growth, then I’m going to say exactly that: oh, be happy, they fed into the stream. They may have been wrong, but things really worked out. But if the future brings us a shrinking population asymptotically approaching a very low level, and greater poverty and more war, then you’ve got to wonder, well, who is responsible for that, right?

Dwarkesh Patel 00:21:26

Who would be responsible for that?

Tyler Cowen 00:21:29

We don’t know, but I think secular thinkers will fall in relative status if that’s the outcome. And that’s most prominent intellectuals today, myself included.

Dwarkesh Patel 00:21:39

Yeah. Who would rise in status as a result?

Tyler Cowen 00:21:42

Well, there are a number of people complaining strenuously about fertility declines. If there’s more war, probably the hawks will rise in status, whether or not they should. An alternative scenario is that the pacifists rise in status. But I basically never see the pacifists rising in status for more than brief moments, like after the Vietnam War. Maybe they did after World War I, yes, but again, that didn’t last because World War II swept all that away.

Dwarkesh Patel 00:22:10

Right.

Tyler Cowen 00:22:11

So the pacifists seem to lose long-term status no matter what. And that means the hawks would gain in status, and those worried about fertility, and whatever technology drives the new wars, if that is what happens. Let’s say it’s drones. It’s possible, right? People who warned against drones, which is not currently that big a thing. There are quite a few such people, but there’s no one out there known for worrying about drones the way, say, Eliezer is known for worrying about AI. Now drones, in a way, are AI, but it’s different.

Dwarkesh Patel 00:22:44

Yeah. Although Nat Friedman, Stuart Armstrong, and other people have talked about how we’re not that far away from drones. I guess it has millions of views. Whoever made that would rise, I think. Stuart Armstrong. No, sorry, not Stuart. Anyways.

Tyler Cowen 00:22:57

Yeah, but those people could end up as much more important than they are now.

Dwarkesh Patel 00:23:00

Yeah. Okay. Let’s talk about Hayek. Sure. So before we get into his actual views, I think his career is a tremendous white pill, in the sense that he writes The Road to Serfdom in 1944, when Nazi Germany and the Soviet Union are both prominent players. And honestly, the way things shook out, he would be pretty pleased that a lot of the biggest collectivisms of the day have been wiped out. So it is a tremendous white pill that you can have a career like that.

Tyler Cowen 00:23:30

He was not as right as he thought at the time, but he ended up being too grumpy in his later years.

Dwarkesh Patel 00:23:37

Oh really?

Tyler Cowen 00:23:38

He thought, well, collectivism is still going to engulf the world. And I think he became a grumpy old man. And maybe it’s one thing to be a grumpy old man in 2024, but to be a grumpy old man in the 80s didn’t seem justified.

Dwarkesh Patel 00:23:52

What was the cause? What specifically did he see?

Tyler Cowen 00:23:55

He thought there were atavistic instincts in the human spirit, biologically built in, that led us to be collectivists and too envious and not appreciative of how impersonal orders worked, and that this would cause the West to turn into something quite crummy. I wouldn’t say he’s been proven wrong, but a lot of the West has had a pretty good run since then, and there’s not major evidence that he’s correct. The bad events we’ve seen, like some war coming back, something weird happening in our politics, I’m not sure how to describe it. I’m not sure they fit the Hayek model of simply the accretion of more socialism.

Dwarkesh Patel 00:24:37

But in terms of the basic psychological urges towards envy and resentment, doesn’t the rise of wokeness provide evidence for his view?

Tyler Cowen 00:24:44

But wokeness, I would say, has now peaked and is falling. That’s a big debate. I don’t see wokeness as our biggest problem. I see excessive bureaucracy, sclerotic institutions, kludgeocracy as bigger problems. They’re not unrelated to wokeness, to be clear, but I think they’re more fundamental and harder to fix.

Dwarkesh Patel 00:25:02

Let’s talk about Hayek’s arguments. So obviously he has a famous argument about decentralization. But when we look at companies like Amazon, Uber, these other big tech companies, they actually do a pretty good job of central planning, right? There’s like a sea of logistics and drivers and trade-offs that they have to square. Do they provide evidence that central planning can work?

Tyler Cowen 00:25:25

Well, I’m not a Coasian. So Coase, in his famous 1937 article, said the firm is planning, and he contrasted that with the market, right? I think the firm is the market. The firm is always making contracts in the market and is subject to market checks and balances. To me, it’s not an island of central planning in the broader froth of the market. So I’m just not Coasian. For people who are Coasian, this is an embarrassing question, but I’ll just say Amazon being great is the market working, and they’re not centrally planning. Even the Soviet Union, it was very bad, but it didn’t end up being central planning. It started off that way for a few years. So I think people misinterpret large business firms in many ways, on both the left and the right.

Dwarkesh Patel 00:26:07

Wait. But under this argument, it still adds to the credence of the people who argue that basically we need the government to control production. Because if it is the case that the Soviet Union still wasn’t central planning, people would say, well, yeah, but that’s kind of what I want. There were still checks, in terms of imports, exports, the market test. It still applied to the government in that sense. What’s wrong with the argument that you can basically treat the government as that kind of firm?

Tyler Cowen 00:26:32

I’m not sure I followed your question. I would say this: I view the later Soviet Union as being highly decentralized, managers optimizing their own rents and setting prices too low so they could take bribes.

Dwarkesh Patel 00:26:45

À la...

Tyler Cowen 00:26:46

Paul Craig Roberts, what he wrote. That’s a very bad decentralized system. And it was sort of backed up by something highly central, the Communist Party in the USSR. But it’s not like the early attempts at true central planning in the Soviet Union after the revolution, which did totally fail and were abandoned pretty quickly, even by Lenin.

Dwarkesh Patel 00:27:09

Would you count the ’50s period in the Soviet Union as more centrally planned or more decentralized by that point?

Tyler Cowen 00:27:14

Decentralized. You have central plans for a number of things, obviously, weaponry, steel production. You have targets, but even that tends to collapse into decentralized action just with bad incentives.

Dwarkesh Patel 00:27:26

So your explanation for why the Soviet Union had high growth in that period, is it more catch-up? Is it more that they weren’t communists at the time? How would you explain it?

Tyler Cowen 00:27:35

A lot of the Soviet high growth was rebuilding after the war, which central planning can do relatively well, right? You see government rebuilding cities, say, in Germany, and that works pretty well. But most of it, and this is even before World War II, was just urbanization. It shouldn’t be underrated today, given we’ve observed China. So much of Chinese growth was driven by urbanization, and so much of Soviet growth. You take someone working on a farm producing almost nothing, put them in a city, and even under a bad system, they’re going to be a lot more productive. And that drove so much of Soviet growth before and after the war, but at some point that more or less ends, as it has. Well, it hasn’t quite ended with China, but it’s certainly slowed down, and people don’t pay enough attention to that. I don’t know why. It now seems pretty obvious.

Dwarkesh Patel 00:28:23

Going back to the point about firms. So I guess the point I was trying to make is, I don’t understand why the argument you make, that these firms are still within the market in the sense that they have to pass these market tests, couldn’t also apply to government-directed production. Because then people argue sometimes it does, right?

Tyler Cowen 00:28:41

Government runs a bunch of enterprises. They may have monopoly positions, but many are open to the market. In Singapore, government hospitals compete with private hospitals. Government hospitals seem to be fine. I know they get some means of support, but they’re not all terrible.

Dwarkesh Patel 00:28:57

But I guess as a general principle, you’d be against more government-directed production, right?

Tyler Cowen 00:29:02

Well, it depends on the context. So if it’s, say, the military, probably we ought to be building a lot more of some particular things, and it will be done through Boeing, Lockheed, and so on. But the government’s directing it, paying for it in some way, planning it, and we need to do that. We’ve at times done that well in the past. So people overrate the distinction between government and market, I think, especially libertarians. But that said, there’s an awful lot of government bureaucracy that’s terrible, doesn’t have a big market check. But very often, governments act through markets and have to contract or hire consultants or hire outside parties. And it’s more like a market than you think.

Dwarkesh Patel 00:29:43

I want to ask you about another part of Hayek. So, he has an argument about how it’s really hard to aggregate information toward a central planner. But then, more recently, there have been results in computer science showing that just finding the general equilibrium is computationally intractable. Which raises the question, well, the market is somehow solving this problem, right? Separate from the problem of getting the information, there’s making use of the information to allocate scarce resources. How is that computationally a process that’s possible? I’m sure you’re aware, it’s like linear optimization with non-convex constraints. How does the market solve this problem?

Tyler Cowen 00:30:20

Well, the market’s not solving for a general equilibrium. It’s just solving for something that gets us into the next day. And that’s a big part of the triumph, just living to fight another day, wealth not going down, not everyone quitting. And if you can do that, things will get better. And that’s what we’re pretty good at doing, is just building a sustainable structure. And a lot of it isn’t sustainable, like the fools who start these new small businesses. But they do pretty quickly disappear, and that’s part of the market as well. So, if you view the whole thing in terms of computing a general equilibrium, I think one of Hayek’s great insights is that’s just the wrong way to think about the whole problem. So, lack of computational ability to do that doesn’t worry me for either the market or planning, because to the extent planning does work, it doesn’t work by succeeding at that. Like Singaporean public hospitals don’t work because they solve some computational problem. They seem to work because the people running them care about doing a good job. And enough of the workers go along with that.
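
A toy sketch of the distinction being drawn here, with invented demand and supply curves rather than anything from the conversation: instead of computing a full general equilibrium, each "day" the market just nudges prices a little in the direction of excess demand and carries on.

```python
# Toy illustration (assumed linear demand and supply, not from the episode):
# each "day" the price adjusts slightly toward excess demand; nobody ever
# solves the fixed-point problem, yet the price drifts toward market-clearing.

def excess_demand(price):
    demand = max(0.0, 10.0 - 2.0 * price)  # hypothetical demand curve
    supply = 3.0 * price                   # hypothetical supply curve
    return demand - supply

price, step = 1.0, 0.05
for day in range(30):
    price += step * excess_demand(price)   # one myopic adjustment per day

# The market-clearing price in this toy example is 2.0 (where 10 - 2p = 3p).
print(round(price, 3))
```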

Dwarkesh Patel 00:31:21

Yeah. So, related to that, I think in “The Meaning of Competition” he makes the point that the most interesting part of markets is when they go from one equilibrium to another, because that’s where they’re trying to figure out what to produce and how to produce it better and so on, and not the equilibria themselves. And it seemed related to the Peter Thiel point in Zero to One, that monopoly is when you have interesting things happen, because when there’s just competitive equilibrium, there are no profits to invest in R&D or to do cool new things. Do those seem like related points? Am I reading that right?

Tyler Cowen 00:31:50

Absolutely. And Hayek’s essay “Competition as a Discovery Procedure” makes that point very explicitly. And that’s one of his handful of greatest essays, one of the greatest essays in all of economics.

Dwarkesh Patel 00:32:01

Is there a contradiction in Hayek, in the sense that the decentralization he’s calling for results in specialists having to use the very scientism and statistical aggregates he criticizes?

Tyler Cowen 00:32:13

Of course. Hayek underrates scientism. Scientism is great. It can be abused, but we all rely on scientism. If you have an mRNA vaccine in your arm, well, how do you feel about scientism, and so on?

Dwarkesh Patel 00:32:26

How much should we worry about this opening up the whole system to fragilities, if there’s like no one mind that understands large parts of how everything fits together? People talk about this in the context of if there’s a war in China, and the producers didn’t think about that possibility when they put valuable manufacturing in Taiwan and stuff like that.

Tyler Cowen 00:32:43

No one mind understanding things is inevitable under all systems. This gets into some of the alignment debates. If you had one mind that understood everything or could control everything, you have to worry a great deal about the corruptibility of that mind. So legibility and transparency are not per se good. You want enough of them in the right places, but you need some kind of balance. So I think supply chains are no longer an underanalyzed problem, but until Covid they were, and they’re a big deal. And the Hayekian argument doesn’t always work, because the signal you have is the current price. And that’s not telling you how high the inframarginal values are if you get, say, cut off from being able to buy vaccines from India because you’re at the bottom of the queue. So that was a problem. It was the market failing, because the price doesn’t tell you inframarginal values. And when you move from some ability to buy the output to zero, those inframarginal values really matter.

Dwarkesh Patel 00:33:43

What would Hayek make of AI agents as they get more powerful? You have some market between the AI agents. There’s some sort of decentralized order as a result. What insights would you have about that?

Tyler Cowen 00:33:55

Well, a lot of Hayekians wrote about these issues, including at George Mason in the 1980s. And I think some of those people even talked to Hayek about this. And my recollection, which is imperfect, is that he found all this very interesting and in the spirit of his work. And Don Lavoie was leading this research program; he died prematurely of cancer. Bill Tulloh was also involved. And some of this has been written up, and it is very Hayekian and George Mason actually was a pioneer in this area.

Dwarkesh Patel 00:34:25

What do you make of AI agents? The market between them and the sort of infrastructure and order that you need to facilitate that.

Tyler Cowen 00:34:31

They’re going to replicate markets on their own, has been my prediction, and I think they’re going to evolve their own currencies. Maybe at first, they’ll use Bitcoin, but there’ll be an entire property rights system based, at least at first, on what we now call NFTs. I’m not sure that will end up being the right name for them, but if you want property rights in a so-called imaginary world, that’s where you would start with Bitcoin and NFTs. So I don’t know what percent of GDP this will be at first. It will be quite small, but it will grow over time. And it’s going to show Hayek to have been right about how these decentralized systems evolve.

Dwarkesh Patel 00:35:06

Do you anticipate that it’ll be sort of a completely different sphere and that there’s like the AI agents’ economy and there’s the human economy, and obviously they have links between them, but it’s not intermixed. Like they’re not on the same social media or the same task rabbit or whatever. It’s a very separate infrastructure that’s needed for the AI agents to talk to themselves versus talk to humans.

Tyler Cowen 00:35:26

I don’t see why we would enforce segregation. Now, you might have some segregated outlets. Maybe X/Twitter will keep off the bots, let’s say it can even manage to do that. But if I want to hire a proofreader, I’m going to deal with the AI sector and pay them in Bitcoin. And I’ll just say to my personal AI assistant, “Hey, go out and hire an AI and pay them with whatever,” and then just not think about it anymore.

Dwarkesh Patel 00:35:52

And it will happen, maybe because there’s much higher transaction costs with dealing with humans and interacting with the human world, whereas they can just send a bunch of vectors to each other. It’s much faster for them to just have a separate dedicated infrastructure for that.

Tyler Cowen 00:36:05

But transaction costs for dealing with humans will fall because you’ll deal with their quote-unquote assistants, right? So you’ll only deal with the difficult human when you need to. And people who are very effective will segregate their tasks in a way that reflects their comparative advantage. And people who are not effective will be very poor at that, and that will lead to some kind of bifurcation of personal productivity. How well will you know what to delegate to your AI? I’ll predict you’ll be very good at it. You may not have figured it out yet, but say you’re like an A+ on it and other people are D. That’s a big comparative advantage for you.

Dwarkesh Patel 00:36:44

We’re talking, I guess, about GPT-5-level models. When you think in your mind about, okay, this is GPT-5, what happens with GPT-6, GPT-7? Do you see it? Do you still think in the frame of having a bunch of RAs, or does it seem like a different sort of thing at some point?

Tyler Cowen 00:36:59

I’m not sure what those numbers going up mean, what a GPT-7 would look like, or how much smarter it could get. I think people make too many assumptions there. It could be the real advantages are integrating it into workflows, by things that are not better GPTs at all. And once you get to a GPT, say 5.5, I’m not sure you can just turn up the dial on smarts and have it integrate general relativity and quantum mechanics.

Dwarkesh Patel 00:37:26

Why not?

Tyler Cowen 00:37:27

I don’t think that’s how intelligence works. And this is a Hayekian point. And some of these problems, there just may be no answer. Like, maybe the universe isn’t that legible, and if it’s not that legible, GPT-11 doesn’t really make sense as a creature or whatever.

Dwarkesh Patel 00:37:44

Isn’t there a Hayekian argument to be made that, listen, you can have billions of copies of these things. Imagine the sort of decentralized order that could result from the amount of decentralized tacit knowledge that billions of copies talking to each other could have. That in and of itself is an argument to be made about the whole thing as an emergent order will be much more powerful than we were anticipating.

Tyler Cowen 00:38:04

Well, I think it will be highly productive. What tacit knowledge means with AIs, I don’t think we understand yet. Is it by definition all non-tacit? Or does the fact that how GPT-4 works is not legible to us, or even to its creators so much, mean it’s possessing tacit knowledge, or is it not knowledge? None of those categories are well thought out, in my opinion. So we need to restructure our whole discourse about tacit knowledge in some new, different way. But I agree, these networks of AIs, even before, like, GPT-11, they’re going to be super productive, but they’re still going to face bottlenecks, right? And I don’t know how good they’ll be at overcoming the behavioral bottlenecks of actual human beings, the bottlenecks of the law and regulation. And we’re going to have more regulation as we have more AIs.

Dwarkesh Patel 00:38:53

Right. Yeah. When you say there’ll be uncertainties, I think you made this argument when you were responding to Alex Epstein on Fossil Future, where you said uncertainties also extend out into the domain where there’s a bad outcome, or a much bigger outcome than you’re anticipating.

Tyler Cowen 00:39:04

That’s right.

Dwarkesh Patel 00:39:05

So can we apply the same argument to AI? The fact that there is uncertainty is also a reason for worry.

Tyler Cowen 00:39:11

Well, it’s always a reason for worry, but there’s uncertainty about a lot of things, and AI will help us with those other uncertainties. So on net, do you think more intelligence is likely to be good or bad, including against x risk? And I think it’s more likely to be good. So if it were the only risk, I’d be more worried about it than if there’s a whole multitude of risks. But clearly, there’s a whole multitude of risks. But since people grew up in pretty stable times, they tend not to see that in emotionally vivid terms. And then this one monster comes along, and they’re all terrified.

Dwarkesh Patel 00:39:42

What would Hayek think of prediction markets?

Tyler Cowen 00:39:45

Well, there were prediction markets in Hayek’s time. I don’t know that he wrote about them, but I strongly suspect he would see them as markets that, through prices, communicate information. But even around the time of the Civil War, there were so-called bucket shops in the US and New York where you would bet on things. They were betting markets with cash settlement, probably never called prediction markets, but they were exactly that. Later on, they were banned. But it’s a long-standing thing. There were betting markets on lives in 17th-century Britain, and different attempts to outlaw them, which I think basically ended up succeeding. But under the table, I’m sure it still went on to some extent.

Dwarkesh Patel 00:40:22

Yeah. The reason it’s interesting to think about this is because his whole argument about the price system is that you can have a single dial that aggregates so much information, and for that reason it’s so useful to somebody who’s trying to act based on that information. But it’s precisely because it’s so aggregated that it’s hard to learn about any one particular input to that dial.

Tyler Cowen 00:40:42

But I would stress it’s not a single dial. And whether Hayek thought it was a single dial, I think you can argue that either way. So people in markets, they also observe quantities, they observe reaction speeds. There are a lot of dimensions to prices other than just, oh, this newspaper costs $4: the terms on which it’s advertised, and so on. So markets work so well because people are solving this complex multidimensional problem, and the price really is not a sufficient statistic the way it is in an Arrow-Debreu model. And I think at times Hayek understood that, and at other times he writes as if he doesn’t understand it. But it’s an important point.

Dwarkesh Patel 00:41:18

Somewhat related question: what does it tell us about the difficulty of preserving good institutions and good people that the median age of a corporation is 18 years and they don’t get better over time, right? Decade after decade, what corporation keeps improving? Are there really corporations that continue improving in that way?

Tyler Cowen 00:41:34

Well, I think some firms keep improving for a long time. So there are Japanese firms that date back to the 17th century. They must be better today, or even in 1970, than they were way back when. Or take the leading four or five Danish firms; none of them are younger than the 1920s. So Maersk, or the pharmaceutical firm that came up with Ozempic, they must be much better than they were back then, right? They have to be. So how that is possible is a puzzle to me. But I think in plenty of cases it’s true.

Dwarkesh Patel 00:42:09

But can we really say that the best firms in the world are ones that have been improving over time? If you look at the biggest companies by market cap, it’s not like what it takes to get there is hundreds of years of continual refinement. What does that tell us about the world?

Tyler Cowen 00:42:26

Or just hundreds of years. But again, don’t be overly biased by the US experience and the tech sector. Around the world there are plenty of firms that at least seem to get better as they get older. Certainly, their market cap goes up. Some of that might just be a population effect. Maybe their productivity per some unit is in some ways going down. But that’s a very common case. And why the US is such an outlier is an interesting question, right? Israel clearly is an outlier in a sense. They only have pretty young firms, right? And they’ve done very well in terms of growth.

Dwarkesh Patel 00:43:01

Can it be explained by the fact that in these other countries it’s actually just harder to start a new company? Not necessarily that the older companies are actually getting better.

Tyler Cowen 00:43:08

Possibly, but it does seem the older companies are often getting better, right? Like, you know, China is pretty much entirely new firms because of communism. Japan, in particular, seems to have a lot of very old firms. I don’t know if they’re getting better, but I don’t think you can write that off as a possibility.

Dwarkesh Patel 00:43:28

This is Hayek in “Competition as a Discovery Procedure,” and it seems like he predicted NIMBYism. He says that in a democratic society it would be completely impossible, using commands that could not be regarded as just, to bring about those changes that are undoubtedly necessary, but the necessity of which could not be strictly demonstrated in a particular case. So it seems like he’s kind of talking about what we today call NIMBYism.

Tyler Cowen 00:43:52

Sure. And there was plenty of NIMBYism in earlier times. You look at the 19th-century debates over Haussmann restructuring Paris and putting in the broader boulevards and the like; that met with very strong opposition. It’s a kind of miracle that it happened.

Dwarkesh Patel 00:44:05

Yeah. Is this a thing that’s inherent to the democratic system? Recently, I interviewed Dominic Cummings, and obviously planning is a big issue in the UK. It seems like every democratic country has...

Tyler Cowen 00:44:16

This kind of problem, and most autocratic countries have it too. Now, China is an exception. They will probably slide into some kind of NIMBYism even if they stay autocratic. People just resist change. Interest groups always matter. Public opinion, à la David Hume, always matters. And it’s easy to not do anything on a given day, right? And that just keeps on sliding into the...

Dwarkesh Patel 00:44:41

I guess...

Tyler Cowen 00:44:42

India has had a lot of NIMBYism. It’s fallen away greatly under Modi and especially what the state governments have done. But it can be very hard to build things in India.

Dwarkesh Patel 00:44:52

Still, although it is a democracy. I guess there’s the China example; we’ll see what happens there.

Tyler Cowen 00:44:57

That’s right. But it would be very surprising because the Chinese government is highly responsive to public opinion on most, but not all issues. So why wouldn’t they become more NIMBY? Especially with a shrinking population, they’re way overbuilt.

Dwarkesh Patel 00:45:12

Right.

Tyler Cowen 00:45:12

So the pressure to build will be weak and in cases where they ought to build, I would think quite soon they won’t.

Dwarkesh Patel 00:45:19

How much of economics is the study of the systems that human beings use to allocate scarce resources, and how much is just something you’d expect to be true of aliens or AIs? It’s interesting, when you read the history of economic thought, how often they make mention of human nature specifically, like Keynes talking about people having high discount rates, right? But what are your thoughts here?

Tyler Cowen 00:45:44

My former colleague Gordon Tullock wrote a very interesting book on the economics of ant societies and animal societies, and very often they obey human-like principles, or more accurately, humans obey non-human, animal-like principles. So I suspect it’s fairly universal and depends less on quote-unquote human nature than we sometimes like to suggest. Maybe that is a bit of a knock on some behavioral economics. The logic of the system: Armin Alchian wrote on this, Gary Becker wrote on this, there were some debates on this in the early 1960s. The automatic principles of profit and loss and selection at a firm-wide level really matter, and that’s responsible for a lot of economics being true. I think that’s correct.

Dwarkesh Patel 00:46:30

Actually, that raises an interesting question. Within firms, the sort of input they’re getting from the outside world, the ground-truth data, is profit, loss, bankruptcy. It’s very condensed information. And from this they have to make the determination of who to fire, who to hire, who to promote, what project to pursue. How do we make sense of how firms disaggregate this very condensed information?

Tyler Cowen 00:46:54

I would like to see a very good estimate of how much of productivity gains is just from selection and how much is from, well, smart humans figuring out better ways of doing things. And there are some related pieces on this in the international trade literature. So when you have freer trade, a shockingly high percentage of the productivity gains come from your worst firms being bankrupted by the free trade. And Alex Tabarrok has some posts on this. I don’t recall the exact numbers, but it was higher than almost anyone thought. And that, to me, suggests the Alchian-Becker mechanisms of evolution at the level of the firm, enterprise, or even sector are just a lot more important than human ingenuity. And that’s a pretty Hayekian point. Hayek presumably read those pieces. I don’t think he ever commented on them.
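
A small illustrative simulation of the selection mechanism described here, with made-up productivity draws and an assumed 20% exit rate rather than the numbers Tyler is recalling: average productivity rises even though no surviving firm improves at all.

```python
# Illustrative only: the productivity draws and the 20% exit share are assumptions.
import random

random.seed(0)
firms = [random.lognormvariate(0.0, 0.5) for _ in range(1000)]  # hypothetical firm productivities

def average(xs):
    return sum(xs) / len(xs)

before = average(firms)
cutoff = sorted(firms)[len(firms) // 5]          # suppose freer trade bankrupts the bottom 20%
survivors = [f for f in firms if f >= cutoff]    # no surviving firm gets any better
after = average(survivors)

print(f"mean productivity before exit: {before:.3f}")
print(f"mean productivity after exit:  {after:.3f} (+{after / before - 1:.1%} from selection alone)")
```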

Dwarkesh Patel 00:47:41

Interesting. Let’s talk about Mill.

Tyler Cowen 00:47:47

Right, not James Mill, but he was interesting, too.

Dwarkesh Patel 00:47:49

So his argument about the law’s force against women, and how basically, throughout history, the state of women in his society was not natural or the wisdom of the ages, but just the result of the fact that men are stronger and have codified that. Can we apply that argument in today’s society to children and the way we treat them?

Tyler Cowen 00:48:09

Yes, I think we should treat children much better. We’ve made quite a few steps in that direction. It’s interesting to think of Mill’s argument as it relates to Hayek. So Mill is arguing you can see more than just the local information. So keep in mind, when Mill wrote, every society that he knew of at least treated women very poorly, oppressed women because they were physically weaker or at a big disadvantage. If you think there’s some matrilineal exceptions, Mill didn’t know about them, so it appeared universal. And Mill’s chief argument is to say you’re making a big mistake if you overly aggregate information from this one observation, that behind it is a lot of structure, and a lot of the structure is contingent, and that if I, Mill, unpack the contingency for you, you will see behind the signals. So Mill is much more rationalist than Hayek. It’s one reason why Hayek hated Mill. But clearly, on the issue of women, Mill was completely correct that women can do much better, will do much better. It’s not clear what the end of this process will be. It will just continue for a long time. Women achieving in excellent ways. And it’s Mill’s greatest work. I think it’s one of the greatest pieces of social science, and it is anti-Hayekian. It’s anti-small c conservatism.

Dwarkesh Patel 00:49:26

His other book, On Liberty, is very Hayekian, though, right? In the sense that free speech is needed because information is contained in many different people’s minds.

Tyler Cowen 00:49:34

That’s right. And I think Mill integrated, sort of, what you could call Hayek and anti-Hayek better than Hayek ever did. That’s why I think Mill is the greater thinker of the two.

Dwarkesh Patel 00:49:45

But on the topic of children, what would Mill say, specifically? I guess he could have talked about it if he wanted to, but I don’t know if he did. In today’s world, we send them to school. They’re there for 8 hours a day. Most of the time, it’s probably wasted, and we just use a lot of coercion on them. We don’t need to. How would he think about this issue?

Tyler Cowen 00:50:03

There’s Mill’s own upbringing, which was quite strict and by current standards, oppressive, but apparently extremely effective in making Mill smart. So I think Mill very much thought that kids should be induced to learn the classics, but he also stressed they needed free play of the imagination in a way that he drew from German and also British Romanticism, and he wanted some kind of synthesis of the two. But by current standards, Mill, I think, still would be seen as a meanie toward kids. But he was progressive by the standards of his own day.

Dwarkesh Patel 00:50:37

Do you buy the arguments about aristocratic tutoring for people like Mill? There are many other cases like this, where from the time they were kids they were taught by one-on-one tutors, and that explains part of their greatness.

Tyler Cowen 00:50:49

I believe in one-on-one tutors. But I don’t know how much of those examples is selection, right? So I’m not sure how important it is. But just as a matter of fact, if I were a wealthy person and just had a new kid, I would absolutely invest in one-on-one tutors.

Dwarkesh Patel 00:51:04

You talk in the book about how Mill was very concerned with the quality and the character development of the population. But when we think about the fact that somebody like him was elected to Parliament at the time, the greatest living thinker elected to government, it’s hard to imagine that being true in today’s world. Does he have a point with regard to the quality of the population?

Tyler Cowen 00:51:29

Well, Mill, as with women, thought a lot of improvement was possible, and that we shouldn’t overly generalize from seeing all the dunces around us, so to speak. Maybe the jury is still out on that one, but it’s an encouraging belief, and I think it’s more right than wrong. There’s been a lot of moral progress since Mill’s time. Not in everything, but certainly in how people treat children or how men treat their wives. And even allowing for negative reversals, Steven Pinker so far seems to be right on that one. But you do see places like Iran, where how women are treated seems to have been much better in the 1970s than it is today. So there are definitely reversals.

Dwarkesh Patel 00:52:13

But on one specific reversal: somebody of Mill’s quality probably wouldn’t get elected to Congress in the US or Parliament in the UK today. How big a deal is that?

Tyler Cowen 00:52:22

Good advice may still get through, due to all the local statesmen who wisely advise their representatives in the House, right? So I don’t know whether that process is better or worse compared to, say, the 1960s. I know plenty of smart people who think it’s worse. I’m not convinced that’s true.

Dwarkesh Patel 00:52:41

Let’s talk about Smith. Adam Smith. Okay. One of the things I find really remarkable about him is that he publishes The Wealth of Nations in 1776, and basically around that time, Gibbon publishes The Decline and Fall of the Roman Empire. And one of Gibbon’s lines in there is that if you were asked to name the period in history when man’s condition was at its best, it was the stretch from the death of Domitian to the accession of Commodus. That’s nearly 1,700 years before Smith. So it was at least plausible to somebody really smart that there had basically been no growth in all that time. And in that context, to be making the case for markets and mechanization and the division of labor, I think it’s even more impressive when you put it in the context that he had basically been seeing 0.5% growth or less.

Tyler Cowen 00:53:29

Strongly agree. And this is, in a way, Smith being like Mill. Smith is seeing the local information of very small growth, and the world barely being better off than the Roman Empire, and inferring from that, with increasing returns to the division of labor, how much is possible. So Smith is a bit more of a rationalist than Hayek makes him out to be.

Dwarkesh Patel 00:53:47

Right. Now, I wonder if we should use the same sort of extrapolative thinking that Smith used. We haven’t seen that much growth yet, but if you apply these sorts of principles, this is what you would expect to see. What would he make of the potential AI economy, where we see 2% growth a year now, but you could have billions more potential agents or something? Would he say, well, actually, you might have 10% growth because of this? Or would you need new economic principles to explain it, or does just adding that to our list of existing principles imply big gains?

Tyler Cowen 00:54:19

It’s hard to say what Smith would predict for AI. My suspicion is that the notion of 10% growth was simply not conceivable to him. So he wouldn’t have predicted it because he never saw anything like it. That to him, 3% growth would be a bit like 10% growth. It would just shock him and bowl him over. But Smith does also emphasize different human bottlenecks and constraints of the law. So it’s quite possible Smith would see those bottlenecks as mattering and checking AI growth and its speed.

Dwarkesh Patel 00:54:52

But as a matter of principle, given the change we saw between growth before the Industrial Revolution and growth after 1870, does it seem plausible to you that you could go from the current regime to a regime where you have 10% growth for decades on end?

Tyler Cowen 00:55:08

That does not seem plausible to me. But I would stress the point that with high rates of growth decades on end, the numbers cease to have meaning, because the numbers make the most sense when the economy is broadly similar. Like, oh, everyone eats apples, and each year there’s 10% more apples at a roughly constant price. As the basket changes, the numbers become meaningless. That’s not to deny there’s a lot of growth, but you can think about it better by discarding the number. And presumably, AI will change the composition of various bundles quite a bit over time.
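
To put the basket point in concrete terms, here is a minimal sketch in Python with purely illustrative numbers, none of them from the conversation. In the all-apples economy every quantity index agrees on 10% growth, but once relative prices and quantities shift dramatically, a Laspeyres (base-year-price) index and a Paasche (current-year-price) index can disagree by more than an order of magnitude about how much "real growth" occurred.

```python
# Illustrative numbers only: a two-good economy of "apples" and "compute".
# When the basket barely changes, every real-growth measure agrees; when
# relative prices and quantities shift a lot, the measured growth rate
# depends heavily on which period's prices you hold fixed.

def laspeyres_quantity_index(p0, q0, p1, q1):
    # Values current-period quantities at base-period prices.
    return sum(p * q for p, q in zip(p0, q1)) / sum(p * q for p, q in zip(p0, q0))

def paasche_quantity_index(p0, q0, p1, q1):
    # Values quantities at current-period prices instead.
    return sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p1, q0))

# Stable basket: apples only, 10% more each year at a constant price.
print(laspeyres_quantity_index([1.0], [100], [1.0], [110]))  # 1.10
print(paasche_quantity_index([1.0], [100], [1.0], [110]))    # 1.10

# Changing basket: compute gets ~100x cheaper and is consumed ~500x more.
p0, q0 = [1.0, 100.0], [100, 1]    # base year prices, quantities
p1, q1 = [1.0, 1.0], [110, 500]    # later year prices, quantities
print(laspeyres_quantity_index(p0, q0, p1, q1))  # ~250x "real output"
print(paasche_quantity_index(p0, q0, p1, q1))    # ~6x "real output"
```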

Dwarkesh Patel 00:55:39

So when you hear these estimates about what the GDP per capita was in the Roman Empire, do you just disregard that and think in terms of qualitative changes from that time?

Tyler Cowen 00:55:45

It depends on what they’re being compared to. So there are pieces in economic history looking at, say, 17th- and 18th-century Europe and comparing it to the Roman Empire. Most of GDP is agriculture, which is pretty comparable, right? Especially in Europe. It’s not wheat versus corn, it’s wheat and wheat. And I’ve seen estimates that, oh, say, by 1730, some parts of Western Europe are clearly better off than the Roman Empire at its peak, but only within range of it. Those are the best estimates I know, and I trust them. They’re not perfect, but I don’t think there’s an index number problem so much.

Dwarkesh Patel 00:56:23

And so when people say, we’re 50% richer than an average Roman at the peak of the empire, this kind of thinking doesn’t make sense to you?

Tyler Cowen 00:56:33

It doesn’t make sense to me. And here’s a simple way to show that. Let’s say you could buy from a Sears, Roebuck catalog of today or from 1905, and you have $50,000 to spend. Which catalog would you rather buy from? You have to think about it, right? Now, if you just look at changes in the CPI, it should be obvious you would prefer the catalog from 1905. Everything’s so much cheaper. That white shirt costs almost nothing.

Dwarkesh Patel 00:56:59

Right?

Tyler Cowen 00:56:59

At the same time, you don’t want that stuff. Most of it isn’t part of the modern bundle. So even if you ended up preferring the earlier catalog, the fact that you have to think about it reflects the changes.

Dwarkesh Patel

When you read the contemporaries of Smith, the other economists who were writing at the time, were his arguments just clearly much better than everybody else’s, given the evidence of the time? Or is it only ex post that he is clearly right, and given the arguments of the time, it could have gone any of several different ways?

Tyler Cowen 00:57:29

Well, there aren’t that many economists at the time of Smith, so it depends on what you’re counting. The two fellow Scots you could compare Smith to are Sir James Steuart, who published a major work, I think, in 1767. On some matters, Steuart was ahead of Smith. Not most, clearly Smith was far greater. But Steuart was no slouch. And the other point of comparison is David Hume, Smith’s best friend. Per page, you could argue Hume was better than Smith. Certainly on monetary theory, Hume was better than Smith. Now, he’s not a GOAT contender. He just didn’t do enough. But I wouldn’t say Smith simply dominated Hume. Smith had more insights, and more important ones, but Hume was pretty impressive. Now, if you’re talking about, oh, the 18th-century German cameralists, well, they were bad mercantilists. But there are people, say, writing in Sweden in the 1760s, analyzing exchange rates, who had better understandings of exchange rates than Smith ever did. So it’s not that he just dominated everyone.

Dwarkesh Patel 00:58:31

Let me offer some other potential nominees that were not in the book for GOAT, and I want your opinions of them. Henry George, in terms of explaining how land is fundamentally different from labor and capital when we’re thinking about the economy.

Tyler Cowen 00:58:45

Well, first, I’m not sure land is that fundamentally different from labor and capital. A lot of the value of land comes from improvements, and what counts as an improvement can be quite subtle. It doesn’t just have to be putting a plow to the land. So I would put George in the top 25. Very important thinker. But he’s a bit of a one-note Johnny. Well, not entirely, his book on protectionism is still one of the best books on free trade. But he’s circumscribed in a way that, say, Smith and Mill were not.

Dwarkesh Patel 00:59:16

Does his status rise today, as we see high rents in big cities?

Tyler Cowen 00:59:20

His status is way up for exactly that reason, because of the YIMBY-NIMBY debates. And I think that’s correct. He was undervalued. He’s worth reading very carefully. A few years ago, here at Mercatus, where we’re recording, we had a twelve-person, two-day session with Peter Thiel just on reading Henry George. It’s all we did. And people came away very impressed, I think.

Dwarkesh Patel 00:59:42

And for people who are interested, they might enjoy the episode I did with Lars Doucet, who…

Tyler Cowen

Oh, I don’t know about this.

Tyler Cowen 00:59:47

He’s a Georgist.

Dwarkesh Patel 00:59:48

Oh yeah. He’s a really smart guy. Basically, he wrote a book review of Henry George that won Scott Alexander’s book review contest.

Tyler Cowen 00:59:57

Oh, I know this.

Dwarkesh Patel 00:59:58

And then he’s just turned it into a whole book of his own, which is actually really good.

Tyler Cowen 01:00:02

And I think there’s something truly humane in George when you read him. That can be a bit infectious. That’s positive.

Dwarkesh Patel 01:00:10

And there was some insane turnout for his funeral. Right. He was very popular at the time.

Tyler Cowen 01:00:16

And that was deserved.

Dwarkesh Patel 01:00:17

Yeah. I guess you already answered this question, but Ronald Coase, in terms of helping us think about firms and property rights and transaction costs?

Tyler Cowen 01:00:26

Well, even though I think the 1937 piece is wrong, it did create one of the most important genres, and he gets a lot of credit for that. He gets a lot of credit for the Coase theorem. The FCC property rights piece is superb. The lighthouse piece is very good. Again, he’s in the top 25, but in terms of the quantity of work at that level of quality, it’s just not quite enough. There’s no macro. But of course, you rate him very, very highly.

Dwarkesh Patel 01:00:51

How about your former advisor, Thomas Schelling?

Tyler Cowen 01:00:54

He is a top-tier Nobel laureate, but I don’t think he’s a serious contender for the greatest economist of all time. He gets the most credit for making game theory intuitive, empirical, and workable, and that’s worth a lot. On the economics of self-command, he was a pioneer, but in a way that’s just going back to the Greeks and to Smith. He’s not a serious contender for GOAT, but a top-tier Nobel laureate for sure.

Dwarkesh Patel 01:01:21

You have a fun quote in the book on Arrow where you say his work was Nobel Prize winning important, but not important important.

Tyler Cowen 01:01:29

Well, some parts of it were important important like how to price securities. So I think I underrated Arrow a bit in the book. If you ask like, what regrets do I have about the book? I say very, very nice things about Arrow, but I think I should have pushed him even more.

Dwarkesh Patel 01:01:42

What would Arrow say about prediction markets?

Tyler Cowen 01:01:45

Well, he was really the pioneer of theoretically understanding how they work. So he was around until quite recently. I’m sure he had things to say about prediction markets, probably positive.

Dwarkesh Patel 01:02:00

So one of the points you make in the book is economics at the time was really a way of carrying forward big ideas about the world. What discipline today is where that happens?

Tyler Cowen 01:02:09

Well, Internet writing, it’s not a discipline, but it’s a sphere, and plenty of it happens more than ever before. But it’s segregated from what counts as original theorizing in the academic sense of that word. Is that a good or bad segregation? I’m not sure, but it’s really a very sharp, radical break from how things had been. And it’s why I don’t think there’ll be a new GOAT contender. Probably not ever. Or if there is, it will be something AI-related.

Dwarkesh Patel 01:02:36

Yeah, that sounds about right to me. But within the world of Internet writing, obviously there are many disciplines, economics being a prominent one. When you split it up, is there a particular discipline, I don’t know, people writing in terms of computer science concepts, or people writing in terms of economic concepts, that is where this happens today?

Tyler Cowen 01:02:55

The disciplines have ceased to matter. Really good Internet writing is multidisciplinary. When I meet someone like a Scott Aaronson, who’s doing computer science, AI-type Internet writing on his blog, I have way more in common with him than with a typical research economist, say, at Boston University. And it’s not because I know enough about computer science, I may or may not know a certain amount, it’s because our two enterprises are so similar. Or Scott Alexander, even though he also writes about mental illness, that just feels so similar. We really have to rethink what the disciplines are. It may be that the method of writing is the key differentiator for this particular sphere, not for everything.

Dwarkesh Patel 01:03:36

Scott Aaronson was my professor in college for a couple of classes. Yeah, yeah. That’s where I decided I’m not going to go to grad school, because you can easily see somebody two standard deviations above you. You might as well just choose a different game.

Tyler Cowen 01:03:50

But his method of thinking and writing is infectious, like that of Scott Alexander and many of the rest of us.

Dwarkesh Patel 01:03:56

Yeah. So I think in the book you say you were raised as much by economic thought, or the history of economic thought, as you were by your graduate training.

Tyler Cowen 01:04:07

More, much more. It’s not even close.

Dwarkesh Patel 01:04:10

Today people would say, I was talking to Basil Halperin, who’s a young economist, and he said he was raised on Marginal Revolution in the same way that you were raised on the history of economic thought. Does this seem like a good trade? Are you happy that people today are raised on Scott Alexander and Marginal Revolution?

Tyler Cowen 01:04:26

At the margin, I would like to see more people raised on Marginal Revolution, and I don’t just mean that in a selfish way. Regarding the Internet-writing mode of thinking, I would like to see more economists and research scientists raised on it, but the number may be higher than we think. If I hadn’t run Emergent Ventures, I wouldn’t know about Basil per se; maybe I would not have met him. And it’s infectious. So it might always be a minority, but they will be the people most likely to have new ideas. It’s a very powerful new mode of thought, which I’ll call the Internet way of writing and thinking. And it’s not sufficiently recognized as something like a new field or discipline, but that’s what it is.

Dwarkesh Patel 01:05:03

I wonder if you’re doing enough of that when it comes to AI. I think you have really interesting thoughts about GPT-5-level stuff, but somebody with your sort of polymathic understanding of different fields, if you just extrapolate out these trends, it seems like you might have a lot of interesting thoughts about what might be possible with something much further down the line.

Tyler Cowen 01:05:21

Well, I have a whole book with AI predictions, Average Is Over, and I have about 30 Bloomberg columns and probably 30 or 40 Marginal Revolution posts. I can just say I’ll do more, but the rate at which ideas arrive to me is the binding constraint. I’m not holding them back.

Dwarkesh Patel 01:05:40

Speaking of Basil, he had an interesting question. Should society or the government subsidize savings so that, in effect, we end up with basically a zero social discount rate? People, on average, probably prioritize their own lives; they have discount rates based on their own lifespans. If we’re long-termist, should the government be subsidizing savings?
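
To give a sense of what the social discount rate in that question does to the arithmetic, here is a minimal sketch with illustrative numbers of my own, not figures from the episode.

```python
# Illustrative arithmetic only: the present value of a fixed $1,000,000
# benefit arriving t years from now, under different annual social
# discount rates. A zero rate weights far-future people the same as
# people alive today; even modest positive rates shrink the far future
# to almost nothing.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

for rate in (0.00, 0.02, 0.05):
    for years in (50, 200):
        pv = present_value(1_000_000, rate, years)
        print(f"rate={rate:.0%}, years={years}: PV = ${pv:,.0f}")

# At 0% the benefit is worth the full $1,000,000 today regardless of
# timing; at 2% a benefit 200 years out is worth roughly $19,000; at
# 5% it is worth under $60.
```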

Tyler Cowen 01:06:07

I’ll come close to saying yes. First, we tax savings right now. So we should stop taxing savings. Absolutely. I think it’s hard to come up with workable ways of subsidizing savings that don’t give rich people a lot of free stuff in a way that’s politically unacceptable and also unfair. So I’m not sure we have a good way of subsidizing savings, but in principle, I would be for it if we could do it in a proper, targeted manner.

Dwarkesh Patel 01:06:33

Although you had a good argument against this in “Stubborn Attachments,” right? That over the long term, if economic growth is high enough, then the savings of the rich will just be dissipated to everybody below.

Tyler Cowen 01:06:45

Well, I’m not sure to whom it’s dissipated. It does get dissipated. The great fortunes of the past are mostly gone, but they may not go to the people below. And writing subsidies on that scale into a tax system, in essence subsidies to wealth, not to GDP, when wealth is, say, six to eight times GDP, I just think the practical problems are quite significant. It’s not an idea I’m pushing. But there are, at the margins, ways you can do it that only benefit people who are poor, and ways you can improve, through either better regulation or deregulation, the workings of local credit unions, which are a kind of de facto subsidy without having to subsidize all of the saved wealth. There are a lot of ways you can do that, and we should look for them more.

Dwarkesh Patel 01:07:30

Relatedly, I think a couple of years ago, Paul Schmelzing had an interesting paper showing that if you look from 1311 to now, interest rates have been declining. There have been hundreds of years of interest rate declines. What is the big-picture explanation of this trend?

Tyler Cowen 01:07:46

I’m not sure we have one. You may know Cowen’s third law: All propositions about real interest rates are wrong. But simply, lower risk, better information, higher buffers of wealth would be what you’d call the intuitive economistic explanations. There’s probably something to them, but how much of that trend do they actually explain as a percent of the variance? I don’t know.

Dwarkesh Patel 01:08:07

Let’s talk about anarchy. You wrote about this a while back; I hadn’t read it the last time we talked, and it’s really interesting. So maybe you can restate your arguments as you answer this question: how much can your arguments about how network industries lead to these cartel-like dynamics help explain what happened to social media, Web 2.0?

Tyler Cowen 01:08:28

I don’t view that as such a cartel. I think there’s a cartel at one level, which is small but significant. This is maybe more true three, four years ago than today, with Elon owning Twitter and other changes, but if someone got kicked off social media platforms three, four years ago, they would tend to get kicked off all or most of them. It wasn’t a consciously collusive decision, but it’s a bit like, oh, well, I know the guy who runs that platform, and he’s pretty smart, and if he’s worried, I should be worried. And that was very bad. I don’t think it was otherwise such a collusive equilibrium. Maybe on some dimensions of hiring people, software engineers, there was some collusion, not enough bidding, but it was mostly competing for attention. So I think the real risk, on the protection-agencies side of network-based collusion, is through banking systems, where you have clearinghouses and payments networks, and to be part of them you need the clearinghouse, and the clearinghouse, in the absence of legal constraint, can indeed help everyone collude. And if you don’t go along with the collusion, you’re kicked out of the payment system. That strikes me as a real issue.

Dwarkesh Patel 01:09:40

Do your arguments against anarchy, do they apply at all to web 3.0 crypto-like stuff?

Tyler Cowen 01:09:48

Do I think it will evolve into collusion? I don’t see why it would. I’m open to hearing the argument that it could, though. What would that argument look like?

Dwarkesh Patel 01:09:56

Well, I guess we did see with crypto that in order to just have workable settlement, you need these centralized institutions, and from there you can get kicked off those, and the government is involved with those. And you can maybe abstract the government away and say that they will need to collude in some sense in order to facilitate transactions.

Tyler Cowen 01:10:15

And the exchanges have ended up quite centralized, right?

Dwarkesh Patel 01:10:18

Yeah.

Tyler Cowen 01:10:19

And that’s an example of clearinghouses and exchanges being the vulnerable node. But I don’t know how much web 3.0 is ever going to rely on that. It seems you can create new crypto assets more or less at will. There’s the focality of getting them started. But if there’s a real problem with the preexisting crypto assets, I would think you could overcome that. So I would expect something more like a to and fro, waves of centralization, decentralization, and natural checks embedded in the system. That’s my intuition, at least.

Dwarkesh Patel 01:10:50

Does your argument against anarchy prove too much in the sense that globally different nations have anarchic relations with each other, and they can’t enforce a monopoly on each other, but they can coordinate to punish bad actors in the way you want protection agencies to do? Right? Like we can sanction North Korea together or something.

Tyler Cowen 01:11:07

I think that’s a very good point and a very good question, but I would rephrase my argument. You could say it’s my argument against anarchy, and it is an argument against anarchy, but it’s also an argument that says anarchy is everywhere. So within government, the feds, the state governments, all the different layers of federalism, there’s a kind of anarchy. There’s not quite a final layer of adjudication, the way you might think we pretend there is. I’m not sure how strong it is internationally. Of course, how much gets enforced by a hegemon, how much is spontaneous order? Even the different parts of the federal government are in a kind of anarchy with respect to each other. So you need a fair degree of collusion for things to work, and you ought to accept that. But maybe in a Straussian way, where you don’t trumpet it too loudly. But the point that anarchy itself will evolve enough collusion to enable it to persist, if it persists at all, is my central point. My point is, well, anarchy isn’t that different now, given we’ve put a lot of social political capital into our current institutions, I don’t see why you would press the anarchy button. But if I’m North Korea and I can press the anarchy button for North Korea, I get that it might just evolve into Haiti, but I probably would press the anarchy button for North Korea if at least someone would come in and control the loose nukes.

Dwarkesh Patel 01:12:30

Yeah. This is related to one of those classic arguments against anarchy, that under anarchy, anything is allowed, so the government is allowed. Therefore, we’re in a state of anarchy in some sense.

Tyler Cowen 01:12:39

In a funny way, that argument’s correct. We would re-evolve something like government. And Haiti has done this, but in very bad ways, where it’s gangs and killings. It doesn’t have to be that bad. There’s medieval Iceland, medieval Ireland. They had various forms of anarchy, clearly limited in their destructiveness by low population and ineffective weapons, but they had a kind of stability. You can’t just dismiss them, and you can debate how governmental they were. But the ambiguity of those debates is part of the point: every system has a lot of anarchy, and anarchies have a fair degree of collusion if they survive, actually.

Dwarkesh Patel 01:13:16

So I want to go back to much earlier in the conversation where you’re saying, listen, it seems like intelligence is a net good. So just that being your heuristic, you should call forth the AI.

Tyler Cowen 01:13:28

Well, not uncritically. You need more argument. But just as a starting point, if more intelligence isn’t going to help you, you have some really big problems anyway.

Dwarkesh Patel 01:13:37

But I don’t know if you still have the view that we have something like an 800-year timeline for human civilization. That sort of timeline implies that intelligence actually is going to be the problem, because the reason we’d have an 800-year timeline is presumably some product of intelligence, right?

Tyler Cowen 01:13:53

My worry is that energy becomes too cheap and people at very low cost can destroy things rather easily. So, say, if destroying a city with a nuclear weapon cost $50,000, what would the world look like? I’m just not sure. It might be more stable than we think, but I’m greatly worried, and I could readily imagine it falling apart.

Dwarkesh Patel 01:14:15

Yeah. But I guess the bigger point I’m making is that in this case, the reason the nuke got so cheap was because of intelligence. Now, that doesn’t mean we should stop intelligence, but just that, if that’s like the end result of intelligence over hundreds of years, that doesn’t seem like intelligence is always that good.

Tyler Cowen 01:14:34

Well, we’re doing better than the other great apes, I would say, even though we face these really big risks. And in the meantime, we did incredible things. So that’s a gamble I would take, but I believe we should view it more self-consciously as a sort of gamble, and it’s too late to turn back. The fundamental choice was one of decentralization, and that may have happened hundreds of millions or billions of years ago. And once you opt for decentralization, intelligence is going to have advantages, and you’re not going to be able to turn the clock back on it. So you’re walking this tightrope, and by goodness, you’d better do a good job. We should frame our broader history more like that, and it has implications for how you think about x-risk. Again, I think of the x-risk people, a bit of them, as reasoning like, well, I’ve been living in Berkeley a long time, and it’s really not that different, my life’s a bit better, and we can’t risk all of this. But that’s not how you should view broader history.

Dwarkesh Patel 01:15:30

I feel like even the x-risk people don’t think we’re, like, 100% guaranteed to go out within 800 years or something. It’s not guaranteed at all.

Tyler Cowen 01:15:38

It’s up to us. I just think the risk, not that everyone dies, I think that’s quite low, but that we retreat to some kind of pretty chaotic form of like, medieval Balkans existence with a much lower population. That seems to me quite a high risk. With or without AI, it’s probably the default setting.

Dwarkesh Patel 01:15:59

Given that you think that’s the default setting, why is that not a bigger part of your thinking when new technologies are coming about? Why not consciously ask: is this getting us to the outcome where we avoid the sort of preindustrial state that would result from the $50,000 nukes?

Tyler Cowen 01:16:17

Well, if you think the risk is cheap energy, more than AI per se, admittedly, AI could speed the path to cheap energy. It seems very hard to control. The strategy that’s worked best so far is to have relatively benevolent nations become hegemons and establish dominance. So it does influence me. I want the US, UK, some other subset of nations to establish dominance in AI. It may not work forever, but in a decentralized world, it sure beats the alternative. So a lot of the AI types, they’re too rationalist, and they don’t start with the premise that we chose a decentralized world a very, very long time ago, even way before humans.

Dwarkesh Patel 01:16:57

And I think you made an interesting point when you were talking about Keynes in the book, where you said one of his faults was that he assumed that people like him would always be in charge.

Tyler Cowen 01:17:06

That’s right.

Dwarkesh Patel 01:17:06

And I do see that also in the alignment discourse. Alignment, as if it’s just a matter of handing things over to the government and assuming the government does what you’d expect it to do.

Tyler Cowen 01:17:13

And I worry about this from my own point of view. So even if you think the US is pretty benevolent today, which is a highly contested and mixed proposition, and I’m an American citizen, pretty patriotic, I’m fully aware of the long history of my government killing, enslaving, and doing other terrible things to people. And then you have to think about that over a long period of time; maybe the worst time period is what determines the final outcome, even if the average is pretty good. And if power corrupts, and if the government even indirectly controls AI systems, the US government could become worse because it’s a leader in AI, right? But again, I’d still take that over China or Russia or wherever else it might be.

I just don’t really understand when people talk about national security. I’ve never seen the AI doomers say anything about it that made sense. And I recall those early days. Remember when China issued that edict where they said, we’re only going to put out AIs that are safe and that can’t criticize the CCP? How many super smart people, and I mean super smart, like X, just jumped on that and said, see, China’s not going to compete with us, we can shut AI down. They just seem to have zero understanding of some properties of decentralized worlds.

Or Eliezer’s tweet, was it from yesterday? I didn’t think it was a joke, but: oh, there’s a problem, the AI can read all the legal code and threaten us with all these penalties. It’s like he has no idea how screwed up the legal system is. It would just be courtroom waits of, like, 70 or 700 years. It wouldn’t become a thing people are afraid of. It would be a social problem in some way.

Dwarkesh Patel 01:18:52

What’s your sense of how the government will react, regardless of how they should react, when the labs are doing, like, I don’t know, $10 billion training runs? Under the premise that these are powerful models, not human-level per se, but able to do all kinds of crazy stuff, what do you think the government is going to do? Are they going to nationalize the labs? Sitting here in Washington, what’s your sense?

Tyler Cowen 01:19:15

I think our national security people are amongst the smartest people in our government. They’re mostly well-intentioned in a good way. They’re paying careful attention to many things. But whether there will be the political will to act is not something they control. And my guess is, until there’s sort of an SBF-like incident, which might not even be substantively significant, but a headlines incident, which SBF was, even if it doesn’t affect the future evolution of crypto, which I guess is my view, that it won’t, until there’s that, we won’t do much of anything. And then we’ll have that SBF-like incident, and we’ll overreact. That seems a very common pattern in American history. And the fact that it’s AI, where the stakes might be higher or whatever, I doubt will change the recurrence of that pattern.

Dwarkesh Patel 01:20:01

How would Robert Nozick think about different AI utopias?

Tyler Cowen 01:20:05

Well, I think he did think about different AI utopias, right? Whether he wrote about it or just talked about it, I’m not sure, but the notion of humans much smarter than they are, or the notion of aliens coming down who are in some way morally and intellectually way beyond us, he did write about that, and he was worried about how they would treat us. So he was sensitive to what you would call AI risk, viewed a bit more broadly, very early on.

Dwarkesh Patel 01:20:34

What was his take?

Tyler Cowen 01:20:35

Well, Nozick is not a thinker of takes. He was a thinker of speculations and multiple possibilities, which I liked about him. He was worried about it, this I know, and I talked to him about it, but I couldn’t boil it down to a simple take. It made him a vegetarian, I should add.

Dwarkesh Patel 01:20:54

Wait, that made him a vegetarian? Because we want to be treating the entities below us the way that…

Tyler Cowen 01:20:59

…the way that aliens from outer space might treat us, yes. We are like that relative to animals. It may not be a perfect analogy, but it’s still an interesting point. And therefore we should be vegetarians. That was his argument. At least he felt he should be one.

Dwarkesh Patel 01:21:11

I wonder if we should honor past generations more, or at least respect their wishes more. If we think of the alignment problem, it’s similar to how we relate to previous generations. Do we want the AIs to treat us the way we treat people from thousands of years ago?

Tyler Cowen 01:21:26

Yeah, it’s a good question. And I’ve never met anyone who’s consistent in how they view the wishes of the dead. I don’t think there is a consistent, philosophically grounded point of view on that one.

Dwarkesh Patel 01:21:39

I guess there’s the sort of Thomas Paine view that you don’t regard them at all. Is that not self-consistent?

Tyler Cowen 01:21:43

It’s consistent, but I’ve never met anyone who actually lives according to it.

Dwarkesh Patel 01:21:47

Oh, in what sense?

Tyler Cowen 01:21:48

Well, say, you know, their spouse were to die and the spouse had given them instructions. Sure, they would put weight on those instructions. Somewhere out there, there’s probably someone who wouldn’t. But I’ve never met such a person.

Dwarkesh Patel 01:22:01

And how about the Burke view, that you take them very seriously? Why is that not self-consistent?

Tyler Cowen 01:22:06

The Burke view? What do you mean?

Dwarkesh Patel 01:22:07

Burke view.

Tyler Cowen 01:22:08

Oh, well, it’s time-inconsistent to take those preferences seriously. And Burke himself understood that; he was a very deep thinker. So, well, you take them seriously now, but as time passes, other ancestors come along, and they have somewhat different views. You have to keep changing course. What you should do now, should it be what the ancestors behind us want, or your best estimate of what the next 30 or 40 years of ancestors-to-come will want once they have become ancestors? So it’s time-inconsistent again. There’s not going to be a strictly philosophical resolution. There will be practical attempts to find something sustainable, and that which survives will be that which we do, and then we’ll somewhat rationalize it ex post.

Dwarkesh Patel 01:22:51

Yeah. There’s an interesting book about the ancient, ancient Greeks. What is it called? I forgot the name. But it talks about the hearths they kept for their families, where the dead become gods. But then over time, if you keep this hearth going for hundreds of years, there are thousands of ancestors whose names you don’t even remember. Who are you praying to?

Tyler Cowen 01:23:11

And then it’s like the Arrow Impossibility Theorem for all the gods.

Dwarkesh Patel 01:23:15

What do they all want me to do?

Tyler Cowen 01:23:17

And you can’t even ask them.

Dwarkesh Patel 01:23:18

Yeah, okay. We were talking before we started recording about Argentina and the reforms they’re trying there. They’re trying to dollarize because the dollar is more stable than their currency. But this raises the question of why the dollar is so stable. We’re also a democracy, right? But the dollar seems pretty well managed. What is the larger explanation of why monetary policy seems well managed in the US?

Tyler Cowen 01:23:42

Well, US voters hate inflation, mostly for good reasons, and we have enough wealth that we can pay our bills without having to inflate very much. And 2% inflation has been stable now for quite a while. It’s an interesting question, which I cannot answer, and I have looked into this and asked smart people from Argentina: why does Argentina in particular have recurring waves of hyperinflation? Is there something about the structure of their interest groups that inevitably, recurringly leads them to demand too much? I suppose, but there are plenty of poor, badly run countries that don’t have hyperinflation. African countries historically have not had hyperinflation, haven’t even had high rates of inflation. Why is that? Well, maybe they don’t capture enough through seigniorage. For some reason, currency holdings aren’t large enough, or there’s some kind of financial repression, I don’t know. But it’s very hard to explain why some of these countries, but not others, go crazy with the printing press.

Dwarkesh Patel 01:24:41

And this is maybe a broader question about different institutions in the government where I don’t understand enough to evaluate their object-level decisions. But if you look at the Supreme Court or the Federal Reserve or something, just from a distance, it seems like they’re really well-run, competent organizations with highly technocratic, nonpartisan people running them.

Tyler Cowen 01:25:01

They’re not nonpartisan, but they’re still well run.

Dwarkesh Patel 01:25:03

Yeah. And what’s the theory of why these institutions, in particular, are so much better run? Is it just that they’re one step back from direct elections? Is it that they have traditions of knowledge within them? How do we think about this?

Tyler Cowen 01:25:18

I think both of those. I don’t think the elections point is sufficient, because there are plenty of unelected bodies that are totally corrupt around the world. Most of them are. Perhaps there’s some sense of American civic virtue that gets communicated, and then the incentives are such that, say, if you’re on the Fed for a while, what you can do afterward can be rewarding, but you want a reputation for having done a good job. So your sense of morality and your private self-interest coincide, and that’s pretty strong. And we’re still in that loop. I don’t really see signs of that loop breaking.

Dwarkesh Patel 01:25:52

It’s also striking to me how many times I’ll read an interesting article or paper and the person who wrote it is, like, the former head of the New York Fed or something. It just seems like a strong vindication of these institutions, that the standards are very high.

Tyler Cowen 01:26:03

And if you speak with any of those people, like who’ve been on Fed boards, ask them questions, they’re super smart, super involved, curious, really, for the most part, do want the best thing for their country.

Dwarkesh Patel 01:26:15

Going back to these economists at the end, you talk about how you’re kind of disappointed in this turn that economics has taken.

Tyler Cowen 01:26:22

Maybe I’m just not surprised.

Dwarkesh Patel 01:26:23

Right.

Tyler Cowen 01:26:23

It’s the division of labor. Adam Smith, who said it would make people a bit feeble-minded and incurious, was completely correct.

Dwarkesh Patel 01:26:31

Wait, Adam Smith said what would make people feeble-minded? The division of labor? I see, right. Yeah.

Tyler Cowen 01:26:35

Not stupid. Current economic researchers probably have never been smarter, but they’re way less broad and less curious.

Dwarkesh Patel 01:26:44

Patrick Collison put it in an interesting way. He said that, in the past, maybe thinkers were more interested in delving into the biggest questions, and if they couldn’t do it rigorously in a tractable way, they would make the trade-off in favor of the big question. Today we make the opposite trade-off. Does that seem like a fair comparison?

Tyler Cowen 01:27:02

I think that’s correct. And I would add that, say, in the time of Smith, there was nothing you could do rigorously. So there was no other option.

There was no option of, well, oh, I’m going to specialize in memorizing all the grain prices and run some great econometrics on that, and that’ll be rigorous. It’s really William Stanley Jevons who introduced to the Anglo world the notion that there’s something else you can do that’s rigorous. It was not yet rigorous, but he opened the door and showed people the…

Dwarkesh Patel 01:27:27

…alternative of the Jevons paradox?

Tyler Cowen 01:27:32

Well, I would say his work in statistics originally on the value of money.

Dwarkesh Patel 01:27:35

Right.

Tyler Cowen 01:27:36

But his statistical work on coal also had some rigor, so you’re not wrong to cite that. And Jevons just showed that rigorous statistical work and economics could be the same thing, and that was his greater innovation than just marginalism. So he’s an underrated figure. Maybe he should be in the book in a way, but it had some unfortunate secondary consequences. Too many people crowd into specialization. “Crowd” is a funny word to use because they’re each sitting in their separate nodes, but it’s a kind of crowding.

Dwarkesh Patel 01:28:07

Is there some sort of Hayekian solution here, where in markets, the effect of having the sort of decentralized process is that the sum is greater than the parts, whereas in academic disciplines, the sum is just a bunch of different statistical aggregates. There’s no grand theory that comes together as a result of all this micro work. Is there some Hayekian solution here?

Tyler Cowen 01:28:30

Well, yes. You and I are the Hayekian solution: as specialists proliferate, we can be, quote-unquote, parasitic on them, take what they do, and turn it into interesting larger bundles that they haven’t dreamt of, and make some kind of living doing that. We’re much smaller in number, though I’m not sure how numerous we should be. And there’s a bunch of us, right?

Dwarkesh Patel 01:28:51

You’re in a separate category, Tyler. I’m running a podcast here.

Tyler Cowen 01:28:55

I run a podcast. We’re exactly in the same category, is my point.

Dwarkesh Patel 01:29:00

And what do you see as the future of the kind of thinking you do? Do you see yourself as the last of the literary economists, or is there a future for this kind of work? Is it just going to be the Slate Star Codexes of the world? Are they going to take care of it, this sort of lineage of thinking?

Tyler Cowen 01:29:16

Well, the next me won’t be like me in that sense. I’m the last, but I don’t think it will disappear. It will take new forms. It may have a lot more to do with AI, and I don’t think it’s going to go away. There’s just a demand for it. There’s a real demand for our products. We have a lot of readers, listeners, people interested, whatever, and there’ll be ways to monetize that. The challenge might be competing against AI, and it doesn’t have to be that AI does it better than you or I do, though it might, but simply that people prefer to read what the AIs generate for ten or 20 years. And it’s harder to get an audience because playing with the AIs is a lot of fun. So that will be a real challenge. I think some of us will be up to it. You’ll be faced with it more than I will be, but it’s going to change a lot.

Dwarkesh Patel 01:30:03

Yeah. Okay. One of the final things I want to do is go into political philosophy a little bit, not that we haven’t been doing that already. I want to ask you about certain potential weaknesses of the democratic capitalist model that we live in, both in terms of whether you think they’re object-level right, and second, regardless of how right they are, how persuasive and how powerful a force they will be against our system of government and functioning. Okay, so there’s a libertarian critique that democracy is basically a random walk with a drift toward socialism. And there’s also a ratchet effect where government programs don’t go away. So it just ends up at socialism in the end.

Tyler Cowen 01:30:52

It ends up with having a government that is too large. But I don’t see the evidence that it’s a road to serfdom. France and Sweden have had pretty big governments, way too large, in my opinion. But they haven’t threatened to turn autocratic or totalitarian. Certainly not. And you’ve seen reforms in many of those countries. Sweden moved away from a government approaching 70% of GDP, and now it’s quite manageable. Government there should be smaller.

Dwarkesh Patel 01:31:19

Yet.

Tyler Cowen 01:31:20

I don’t think the trend is that negative. It’s more of a problem with regulation and the administrative state. But we’ve shown an ability to create new sectors, like big parts of tech. They’re not unregulated. Laws apply to them, but they’re way less regulated. And it’s a kind of race. That race doesn’t look too bad to me at the moment. We could lose it, but so far so good. So the critique should be taken seriously, but it’s yet to be validated.

Dwarkesh Patel 01:31:47

How about the egalitarian critique from the left that you can’t have the inequality the market creates with the political and moral equality that humans deserve and demand?

Tyler Cowen 01:31:58

They just say that.

Dwarkesh Patel 01:31:59

What’s the evidence?

Tyler Cowen 01:32:01

The US has a high degree of income inequality. So does Brazil, a much less well-functioning society. Brazil carries on; on average, it will probably grow 1 or 2%. That’s not a great record. But Brazil has yet to go up in a puff of smoke. I don’t see it.

Dwarkesh Patel 01:32:18

And how about the Nietzschean critique? In The End of History, Fukuyama says this is the more powerful one, the one he’s more worried about, more so than the leftist critique: that over time, basically, what you end up with is the last man, and you can’t defend the civilization. You know the story.

Tyler Cowen 01:32:33

It’s a lot of words. I mean, is he short the market? I asked Fukuyama, this was a long time ago, and he wasn’t. Then again, it’s a real issue. It seems to me the problems of today, for the most part, are more manageable than the problems of any previous era. We still might all go poof and return to a medieval-Balkans-style existence in a millennium or whatever, but it’s a fight, and we’re totally in the fight, and we have a lot of resources and talent. So, let’s do it.

Dwarkesh Patel 01:33:06

Okay.

Tyler Cowen 01:33:07

I don’t see why that particular worry is so dominant. It’s a lot of words and I like to get very concrete. Like even if you’re not short the market, if that were the main relevant worry, where would that show up in asset prices as it got worse? It’s a very concrete question. I think it’s very useful to ask. And when people don’t have a clear answer, I get worried.

Dwarkesh Patel 01:33:26

What about your prediction that hundreds of years down the line we’ll have the $50,000 nukes? Where does that show up in asset prices?

Tyler Cowen 01:33:33

I think at some point VIX, an index of volatility, will go up, probably not soon. Nuclear proliferation has not gone crazy, which is wonderful, but I think at some point it’s hard to imagine it not getting out of control.

Dwarkesh Patel 01:33:49

Last I read, VIX is surprisingly low and stable.

Tyler Cowen 01:33:52

That’s right. I think 2024 is on the path to be a pretty good year.

Dwarkesh Patel 01:33:56

Yeah. Or do you think the market is just wrong in terms of thinking about both geopolitical risk from Israel or…

Tyler Cowen 01:34:01

No, I don’t think the market’s wrong at all. I think that war will converge. I’m not saying the humanitarian outcome is a good one, but in terms of the global economy, I think markets are thinking rationally about it, though the rational forecast, of course, is often wrong.

Dwarkesh Patel 01:34:16

What’s your sense on the scaling stuff? When you look at the arguments in terms of what’s coming, how do you react to that?

Tyler Cowen 01:34:22

Well, your piece on that was great. I don’t feel I have the expertise to judge it as a technical matter. It does seem to me intuitively that it would be weird, on the technical side, if scaling just stopped working. But on the knowledge side, I think people underestimate possible barriers. What I have in mind is that quite a bit of reality, the universe, might in some very fundamental way simply not be legible, and that there’s no easy and fruitful way to just, quote-unquote, apply more intelligence to the problem. Like, oh, you want to integrate general relativity and quantum mechanics. It may just be that we’ve hit the frontier and there’s no final layer of, oh, here’s how it fits together. So there’s no way to train an AI or anything else to be smart enough to solve that. And maybe a lot of the world is like that. That, to me, is something people are not taking seriously enough. So I’m not sure what the net returns will be to bigger and better and smarter AI.

Dwarkesh Patel 01:35:17

That seems possible for P-versus-NP-type reasons: it just gets harder to make further discoveries. But I feel like we have pretty good estimates of declining researcher productivity, because the low-hanging fruit is gone, in this sense of our reaching the frontier. Whatever percent that decline is per year, if you can just keep the AI population growing faster than that, to be crude about it, that seems enough to get, if not to the ultimate physical synthesis, at least much farther than where human civilization would get in the same span of time. That seems very plausible.
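
Being crude about it, here is a minimal toy sketch of that arithmetic; the parameter values are made up for illustration, not estimates from the growth literature or from anyone in this conversation.

```python
# A deliberately crude toy model: per-researcher productivity decays at
# rate d as low-hanging fruit is exhausted, while the effective
# (human + AI) researcher population grows at rate g. Aggregate idea
# output then scales roughly like ((1 + g) * (1 - d)) ** t, so it keeps
# rising as long as g outpaces d.

def aggregate_idea_output(years, pop_growth, productivity_decay):
    population = (1 + pop_growth) ** years
    per_head = (1 - productivity_decay) ** years
    return population * per_head

for years in (10, 30, 50):
    out = aggregate_idea_output(years, pop_growth=0.10, productivity_decay=0.05)
    print(f"year {years}: aggregate output {out:.1f}x the starting level")

# With 10% population growth and 5% per-head decay, output after 50
# years is roughly (1.10 * 0.95) ** 50, i.e. about 9x the starting level.
```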

Tyler Cowen 01:35:51

I think we’ll get further. I expect big productivity gains. As a side note, I’m less convinced by the declining researcher productivity argument than I used to be. So the best way to measure productivity for an economist is wages. And wages of researchers haven’t gone down, period. In fact, they’ve gone up. Now, they may not be producing new ideas. You might be paying them to be functionaries or to manage PR or to just manage other researchers. But I think that’s a worry that we have a lot more researchers with generally rising researcher wages, and that hasn’t boosted productivity growth. China, India, South Korea brought into the world economy scientific talent. It’s better than if we hadn’t done it, but it hasn’t, in absolute terms, boosted productivity growth. And maybe that’s a worrisome sign.

Dwarkesh Patel 01:36:42

On the metric of researcher wages, it seems like it could just be that even the less marginally useful improvements are worth the extra cost. Think of a company like Google: it is probably paying its engineers a lot more than it paid in the early days, even though each of them is doing less now, because changing a pixel on the Google homepage is going to affect billions of users. The same thing could be happening in the economy, right?

Tyler Cowen 01:37:06

That might hold for Google researchers, but take people in pharma, biomedicine. There’s a lot of private sector financed research or indirectly financed by buying up smaller companies. And it only makes sense if you get something out of it that really works, like a good vaccine or good medication. Ozempic, super profitable. So wages for biomedical researchers in general haven’t gone down. Now, finally, it’s paying off. But I’m not sure AI will be as revolutionary as the other AI optimists believe. I do think it will raise productivity growth in ways which are visible.

Dwarkesh Patel 01:37:43

To what extent does the conventional growth story apply here? There, you think in terms of population size, right? You just increase the population size, and you get much more research out the other end. To what extent does it make sense to think of billions of AI copies as a proxy for how much progress they could produce? Is that not a sensible way to think about it?

Tyler Cowen 01:38:04

At some point, having billions of copies probably won’t matter. What will matter much more is how good the best thing we have is, and how well integrated it is into our other systems, which have bottlenecks of their own. The principles governing the growth of that are much harder to discern. It’s probably a much slower growth than just juicing up. “Oh, we’ve got a lot of these things, and they’re trained on more and more GPUs.”

Dwarkesh Patel 01:38:28

But precisely because the top seems to matter so much, we might expect bigger gains, right? If you think about Jews in the 20th century, 2% of the population or less, and 20% of the Nobel Prizes, it does seem like you can have a much bigger impact if you’re out on the very tail, even if there are just a few of you.

Tyler Cowen 01:38:46

A hundred John von Neumann copies. Maybe that’s a good analogy: the impact of AI will be like the impact of Jews in the 20th century, right? Which would be excellent, right?

Dwarkesh Patel 01:38:56

Yeah.

Tyler Cowen 01:38:56

But it’s not extraordinary. It’s not a science fiction novel.

Dwarkesh Patel 01:39:00

It is, though. I mean, you read the early 20th-century stuff, as you have. It’s like a slow takeoff right there: you go from V2 rockets to the moon in a couple of decades. It’s kind of a crazy pace of change.

Tyler Cowen 01:39:12

Yeah, that’s what I think it will be like again. The great stagnation is over. We’ll go back to those earlier rates of change, transform a lot of the world, mostly a big positive, with a lot of chaos and disrupted institutions along the way. That’s my prediction. But no one writes a science fiction novel about the 20th century. It feels a bit ordinary still.

Dwarkesh Patel 01:39:32

Yeah.

Tyler Cowen 01:39:33

Even though it wasn’t.

Dwarkesh Patel 01:39:34

I forget the name of the philosopher you asked this to… the feminist philosopher, Amia Srinivasan. You asked her the question: what would have to be different for you to be a social conservative?

Tyler Cowen 01:39:43

Right.

Dwarkesh Patel 01:39:44

What would have to be different for you to be, not a doomer per se, but just one of these people who thinks this is the main thing to be thinking about during this period of history, or something like that?

Tyler Cowen 01:39:52

Well, I think it is one of the main things we should be thinking about. But I would say if I thought international cooperation were very possible, I would at least possibly have very different views than I do now, or if I thought no other country could make progress on AI. Those seem unlikely to me, but they’re not logically impossible. So the fundamental premise where I differ from a lot of the doomers is my understanding of a decentralized world and its principles being primary. Their understanding is some kind of comparison: here are the little people, and here’s the big monster, and the big monster gets bigger, and even if the big monster does a lot of good things, it’s just getting bigger, and here are the little people. That’s a possible framework. But if you start with decentralization and competition, and ask, well, how are we going to manage this? In some ways, my perspective might be more pessimistic. You can’t just think you can wake up in the morning and legislate safety. You look at the history of relative safety having come from hegemons, and you hope your hegemon stays good enough, which is a deeply fraught proposition. I recognize that.

Dwarkesh Patel 01:41:04

What’s the next book?

Tyler Cowen 01:41:06

I’m already writing it. Part of it is on Jevons, but the title is The Marginal Revolution. Not about the blog, about the actual marginal revolution in economics. It’s maybe a monograph, like 40,000 words. But I don’t think book length should matter anymore. I want to be more radical on that.

Dwarkesh Patel 01:41:25

I think 40,000 words is perfect because it’ll actually fit in the context window, so when you feed it to GPT-4…

Tyler Cowen 01:41:31

Now, the context may be bigger by then. Yeah, but I want to have it in GPT in some way, or whatever has replaced it.

Dwarkesh Patel 01:41:40

Okay. Those are all the questions I had. Tyler, this was a lot of fun.

Tyler Cowen 01:41:43

And keep up the great work, and delighted you’re at it. Thank you.

Dwarkesh Patel 01:41:47

Thank you. Yeah, thanks for coming on the podcast. It’s the third time now, so a lot of fun.

Tyler Cowen 01:41:51

Okay, bye. Everyone.
