Response to Tyler Cowen on eternal growth
aka. Why we need a director's cut of Stubborn Attachments
Tyler Cowen just posted a really interesting set of responses to the argument that eternal economic growth is not possible. Here is how that argument goes, as summarized by Tyler:
Dwarkesh Patel surveys one angle of that debate in this short post, and also here. More commonly, from EA types I increasingly hear the argument that if an economy grows at [fill in the blank] percent for so many thousands of years, at some point it becomes so massively large relative to the galaxy that it has to stop growing. It is then concluded that economic growth is not so important, because all it does is help us arrive on the final frontier sooner, and be stuck there, rather than increasing net human well-being over time. (Then often one hears the follow-up claim that existential risk should be prioritized over growth.)
I wish I had access to Tyler’s bullet-point-thoughts on every important topic! Within the 20 minutes it must have taken Tyler to write this post, he added a lot of important considerations that I hadn’t encountered or thought of at all.
Let’s take them point by point.
I
Tyler:
1. Growth may involve dematerialization and greater energy efficiency, rather than eating up the galaxy’s resources. Much of modern growth already has taken this form, with likely more to come.
Energy efficiency
There are clearly physical limits to how energy efficient our machines, computers, and vehicles can get. Sure, there’s a lot more room for improvement - but not infinite room. And compound growth can conquer any distance between the status quo and large finite numbers relatively quickly. I don’t see how growth from increasing efficiency allows us to bypass these physical limits.
Even in the near term, there are limited gains left from simply improving efficiency. As Tom Murphy explains:
Many things are already as efficient as we can expect them to be. Electric motors are a good example, at 90% efficiency. It will always take 4184 Joules to heat a liter of water one degree Celsius. In the middle range, we have giant consumers of energy—like power plants—improving much more slowly, at 1% per year or less. And these middling things tend to be something like 30% efficient. How many more “doublings” are possible?
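To put rough numbers on Murphy's closing question (a back-of-the-envelope sketch; the 2%/yr rate and the 1000x ceiling are my own illustrative assumptions, not figures from his post):

```python
import math

# How many doublings are left for a process that is already ~30% efficient?
# Efficiency cannot exceed 100%, so the ceiling is hard.
current_efficiency = 0.30
doublings_left = math.log2(1.0 / current_efficiency)
print(f"Doublings left: {doublings_left:.2f}")  # ~1.74, i.e. fewer than two

# More generally: how long until compound growth at rate r exhausts a
# finite improvement ceiling F?  Solve (1 + r)**t = F for t.
def years_to_exhaust(ceiling, rate):
    return math.log(ceiling) / math.log(1.0 + rate)

# Even a generous 1000x ceiling is used up in ~3.5 centuries at 2%/yr.
print(f"1000x ceiling at 2%/yr: {years_to_exhaust(1000, 0.02):.0f} years")
```

The exact numbers don't matter; the point is that any hard ceiling, however generous, falls to compound growth on historically short timescales.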
And from Rob Wiblin: [embedded excerpt not preserved]
Dematerialization
Maybe artificial minds connected to a euphoric virtual world could experience a high that is worth many times more than everything we produce today. We could discover an experience that is to consciousness what uranium is to energy - something orders of magnitude more dense in available value (if you have the technology to access it). Perhaps even comparing this experience to real goods and services will become anachronistic.
Even if such an experience were possible, and even if its value were exponentially greater than everything we produce today, it would still likely be dependent on the computer on which it’s instantiated. Thus, this experience would be constrained - in time or magnitude - by the physical bounds of the most powerful possible computer. I would be surprised if such experiences could get exponentially better while running on the same hardware.
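Standard physics gives one way to make "bounded by the hardware" concrete. Landauer's principle puts a floor on the energy needed to erase a bit, which caps how much irreversible computation any energy budget can buy. A minimal sketch, assuming textbook constants and an energy budget I picked purely for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer limit: minimum energy to erase one bit of information.
landauer_joules_per_bit = k_B * T * math.log(2)  # ~2.9e-21 J

# Illustrative budget: capture the Sun's entire output for one year.
sun_watts = 3.8e26
seconds_per_year = 3.15e7
budget_joules = sun_watts * seconds_per_year  # ~1.2e34 J

max_bit_erasures = budget_joules / landauer_joules_per_bit
print(f"{max_bit_erasures:.1e} bit erasures")  # ~4e54 -- vast, but finite
```

Reversible computing could in principle dodge part of this cost, but the broader point stands: a given machine and energy budget buys a finite amount of computation, and presumably a finite amount of experience.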
That being said, this topic especially is very murky because we don’t have a theory of what it takes to generate conscious experiences.
II
2. Real gdp comparisons give you good information locally, when comparing relatively similar societies or states of affairs. The numbers have much less meaning across very different world states, or very long spans of time with ongoing growth. Comparing say Beverly Hills gdp per capita to Stone Age gdp per capita just isn’t an accurate numerical exercise period. It is fair to say that the Beverly Hills number is “a whole lot more,” and much better, but I wouldn’t go too much further than that. They are very different situations, rather than one being a mere exponential version of the other. The economics literature on real income comparisons supports this take.
It’s not clear to me how eternal growth becomes more plausible if you change “exponentially increasing GDP for thousands of years” to “a whole lot more and much better”. Can you get infinitely many whole-lot-mores out of a finite amount of matter? If not, physical limits will catch up to you!
I agree that GDP per capita of the Stone Age and Beverly Hills can’t be compared at face value. But is it absurd to say that something exponential has happened? We use orders of magnitude more energy and resources than our Stone Age ancestors and our lifestyle and civilization depend on entire pantheons of knowledge that they lacked. Maybe GDP growth isn’t the best way to describe this process, but clearly some exponential acceleration occurred between the two.
You can call that compounding growth in knowledge and resources whatever you want, but you still have to confront the implausibility of its exponential increase for millions of years.
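To see why, run the compound-interest math on energy use, in the spirit of Tom Murphy's argument quoted above (the constants below are rough, commonly cited values; treat the outputs as order-of-magnitude estimates):

```python
import math

world_power_watts = 1.8e13  # current human energy use, ~18 TW
sun_watts = 3.8e26          # total solar output
galaxy_watts = 1e37         # rough stellar output of the Milky Way
growth_rate = 0.023         # 2.3%/yr, i.e. ~10x per century

def years_until(target_watts):
    """Years for energy use growing at growth_rate to reach target_watts."""
    return math.log(target_watts / world_power_watts) / math.log(1.0 + growth_rate)

print(f"Sun's entire output: {years_until(sun_watts):.0f} years")       # ~1,350
print(f"Galaxy's entire output: {years_until(galaxy_watts):.0f} years") # ~2,400
```

On these numbers, even a few millennia of exponential energy growth overshoots the galaxy's entire budget; millions of years is not a close call.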
Note that Tyler’s argument in Stubborn Attachments in favor of long run economic growth seems to rely on the premise that we can compare distant states of the world - the reason we should raise growth rates is that it will make our descendants centuries from now much much better off. Doesn’t this also require us to say that a far off state of affairs can be an exponential version of ourselves?
Tyler again:
This point most decidedly does not prove that “eternal growth” is possible. It does show that the “you can’t just keep on scaling up” argument against ongoing growth does not get off the ground. It is really just asserting — without actual backing — that “society couldn’t be very different from it is right now.” And note that points #1 and #2 are mirror images of each other.
But there’s a difference between saying that “society can be very different from what it is right now” and “society can continue to improve forever”. It is trivially true that things can always continue to change. The galactic civilization could collapse, adopt a new form of government, or experience a million other modifications.
But is eternal improvement possible? Can we continue to make new discoveries as important as relativity century after century, or sustain an exponential increase in energy use for thousands of years? Maybe - we should be humbled by the thought of a hunter-gatherer trying to reason about the lives of people thousands of years in the future. Still, I would be surprised if the reserves of knowledge, resources, and energy available in our galaxy turned out to be inexhaustible.
III
3. In general I am not persuaded by backwards induction modes of moral reasoning. The claim is that “in period x we will hit obstacle y, therefore let us reason backwards in time to the present day and conclude…” Backwards induction does not in general hold, either practically or morally, especially across very long periods of time and when great uncertainty is present. I am not saying backwards induction never holds, but most of these arguments are simply applying some form of moral backwards induction without justifying it. A simpler and more accurate perspective is that the status quo already is highly uncertain, and we don’t have much of a workable sense of how things will run as we approach a “frontier,” or even exactly what that concept should mean. And this point is not unrelated to #2. We are being asked to draw conclusions about a world we cannot readily fathom.
Commonsense moral reasoning has all kinds of backward induction. For example, many people believe that the fact that they’re going to die someday has important implications for how they should live now (at least while they’re listening to a particularly wise TED talk). Is this not also backward induction in moral reasoning?
Perhaps Tyler is pointing out that the physical limits of growth are a much harder topic to reason backwards from than death. We all know that people die and tend to have similar patterns of regrets. But we have no idea what it means for a civilization to reach the carrying capacity of its galaxy (or whether there even is such a thing). Nor do we know what these beings at the true end of history will wish their ancestors had done tens of thousands of years ago.
I think this is actually a great point, and it’s the reason my worldview isn’t heavily impacted by the ultimate cosmological implications for our far distant descendants.
IV
4. The world is likely to end long before the binding growth frontier is reached, even assuming that concept has a clear meaning. In the meantime, it is better to have higher economic growth. This rejoinder really is super-simple.
3b. A super-nerdy response might be “are we sure we can’t just find more and more growth resources forever, especially if the final theory of physics is quite strange?”. It is hard for me to judge that one, but I find #3 much more relevant than any version of this #3b. Still, if we are going to look that far into the future, I don’t see any reason to rule out #3b. Which will keep alive the expected value of future economic growth.
Even if Tyler’s point 4 is correct, and the world will likely end before we reach the limits of growth, my response is contained in the argument Tyler makes in 3b: our expected value calculations should be dominated by the absurdly optimistic scenario.
Extend the logic of 3b), and it tells you that we should still optimize for the branch of the timeline with trillions of thriving future people.
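A toy expected-value calculation makes the logic explicit (every number below is an illustrative assumption, not an estimate from the post):

```python
# Two branches: a small chance that civilization reaches a flourishing,
# galaxy-wide scale, versus everything else.  Values are in arbitrary units.
p_flourish = 1e-3      # assumed probability of the optimistic branch
value_flourish = 1e12  # assumed value of trillions of thriving future people
value_other = 1e9      # assumed combined value of all other branches

expected_value = p_flourish * value_flourish + (1 - p_flourish) * value_other

# The optimistic branch alone contributes p * V = 1e9 -- as much as every
# other branch combined, despite its 0.1% probability.
print(f"Total EV: {expected_value:.3e}")
print(f"Optimistic branch's share: {p_flourish * value_flourish / expected_value:.0%}")
```

Under these made-up numbers, half of all expected value sits in a branch with a 0.1% probability, which is exactly why 3b keeps the expected value of growth alive.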
That being said, I think it’s likely that growth over the coming centuries (or its preconditions - like good institutions and technological advancement) will increase the odds of a flourishing galaxy-wide civilization.
I would love to read the director’s-cut chapter of Stubborn Attachments where Tyler explains how increasing growth rates this century matters even if civilization lasts for thousands of years. I agree that such arguments are necessarily speculative, but their importance requires that we accept this inadequacy.
V
5. Often I am suspicious of the method of “sequential elimination” in moral reasoning. It might run as follows: “I can show you that X doesn’t matter, therefore we are left with Y as the thing that matters.” Somehow the speaker ought to take greater care to consider X and Y together, and to realize that all of the moral reasoning along the way is going to be imperfect. The “ghost traces” of X may still continue to matter a great deal! What if I argued the following?: “Pascal’s Wager arguments can be used to show that existential risk cannot be allowed to dominate our moral theories, therefore ongoing economic growth has to be the thing that matters.” That too would be fallacious, and for similar reasons, even assuming you saw Pascal’s Wager-type arguments as something to be rejected.
A better approach would be “both X and Y are on the table here, and both X and Y seem to be really important. What kinds of consiliences can we find where arguments for both X and Y work together in similar directions?” And that is where we should put our energies. More concretely, that might include finding and mobilizing talent, building better institutions, and making sure we don’t end up controlled by a dominant China.
In sum, the case for sustainable economic growth is alive and well, and not at the expense of existential risk.
I agree! The whole discussion about growth rates thousands of years from now seems so murky that I wouldn’t cite it to make drastic changes to our actions.
At current margins, we could be doing much more both to increase economic growth and to reduce existential risk at the same time. And no issue today requires us to make a significant tradeoff between the two. The iota of biotech advancement virologists might be able to produce from their ghoulish experiments is not worth the added existential risk. And slowing economic growth doesn’t seem like a plausible path to extend AI timelines.
But it’s not obvious to me that this will always be true, especially if society starts to prioritize both of these goals a lot more. At some margin, the tails come apart, and you run out of Pareto improvements that increase growth without increasing risk and vice versa.
We should be skeptical of arguments that claim that economic growth must be slowed in order to alleviate some esoteric risk concocted from first principles on an online forum. I can’t think of a single example in history where things would have been better off had growth been slower.
But if and when someone presents a compelling tradeoff between growth and reducing risk, it becomes relevant that the total possible growth seems finite, and delaying its fulfillment has meager consequences compared to the potential cancellation of all future value.
Many thanks to Tyler Cowen for comments! If you liked this post, you will enjoy my podcast with him, where we discuss these topics (also available on Spotify and Apple Podcasts).
And my interview of David Deutsch, where I ask him how potential physical limits to growth impact his worldview (timestamped here).
I’m skeptical of the idea that beings who value such experiences more than real goods and services will survive long. There will be a selection pressure in favor of societies that value physical expansion more than virtual enlightenment. Many people say that an acid or mushroom trip changed their lives and connected them to the divine and the infinite. But how many of them would pay even $10k for their next encounter with endless bliss?
This does have interesting implications for the lock-in argument, but not for the “Is eternal economic growth possible?” argument.
In this talk, Tyler points out that his argument for growth only works if civilization lasts anywhere from a few hundred to a few thousand years:
Let's say you think the world literally can just keep on running forever … sustainability is going to win out because there's so much at stake if the world ends. You've got to play it very safe ... Let's say alternatively the time horizon gets too short. Let's say we all know the world's going to end in a year - there's a big asteroid on its way and we can't do anything … I don't think it would really make sense as a recipe to maximize the rate of sustainable economic growth with the world ending in a year. The returns out there just are not very large. Maybe we should have a big party …
There's a funny way in which the maximize growth imperative - it's only true for some intermediate time horizon. If you think it's approaching eternal you become super safe and I would say politics becomes boring and terrible and somehow aesthetically maybe we all become less. If you think the relevant time horizon is super short, again, nothing really to maximize - things are going to end. There's some intermediate phase, so some degree of pessimism actually is useful.
Every time I read arguments like yours or Tyler Cowen's, I feel like we're missing the fact that a lot of economic growth seems proportional to knowledge itself--and a lot of the effects we're describing (such as dematerialization, energy efficiency, and even arguments about whether it is appropriate to compare stone-age people with folks in Beverly Hills) are essentially artifacts of this knowledge growth.
Consider the simplest case: a bakery. A baker can hand-knead bread--but the first baker who figured out you could knead bread in a paddle mixer considerably improved their output of bread. While one could point to the paddle mixer--a material thing--as being the source of the improvements, in fact it's the knowledge behind the paddle mixer--how to make one and how to use it--that is behind the improvements.
The material things are simply a reflection of this increased knowledge.
And this increased knowledge has created whole industries--whole ways of doing things, including this very blog--which didn't exist 50 or 100 years ago. Most of the people alive today are doing jobs that simply did not exist 100 years ago--all thanks to our improved knowledge, which led to improvements across all areas of the economy, from business practices to how to make silicon chips.
This leads me to believe that wealth will continue to grow so long as knowledge (and experience, a facet of knowledge) continues to grow. And in areas where knowledge is somehow restricted--where we are not allowed to use our understanding to gain insight and create better ways of doing things, as in large parts of the medical industry thanks to CMS and the FDA--you see slow growth and slow improvements, making those areas more expensive relative to the rest of the economy.
----
Now one could debate if our knowledge and experiences have reached an asymptotic limit. That is, we may be at a point in our knowledge of the material universe that there is nothing left to discover in material sciences, information technology and in business organization beyond insignificant and minor improvements that do little to improve our material lives.
But I doubt it.
Especially as we live through a rather exciting era taking place under our noses, the "softwareization" (for lack of a better term) of nearly every sector of the economy, as corporations increasingly rely on more and more sophisticated software systems to handle every element of their business, from customer support to inventory tracking to control.
Pretty much every business is a software business now--and no one seems to have noticed that, or the implications of that.
Why would civilization have to stop at our galaxy? Seems arbitrary.