Dwarkesh Podcast

David Deutsch - AI, America, Fun, & Bayes

David Deutsch is the founder of the field of quantum computing and the author of The Beginning of Infinity and The Fabric of Reality.

Read my piece contra David on AI.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.

Read the full transcript with helpful links here.

Follow David on Twitter. Follow me on Twitter for updates on future podcasts.

Thanks for reading The Lunar Society! Subscribe to find out about future episodes!

Timestamps

(0:00:00) - Will AIs be smarter than humans? 

(0:06:34) - Are intelligence differences immutable / heritable?

(0:20:13) - IQ correlation of twins separated at birth

(0:27:12) - Do animals have bounded creativity?

(0:33:32) - How powerful can narrow AIs be?

(0:36:59) - Could you implant thoughts in VR?

(0:38:49) - Can you simulate the entire universe?

(0:41:23) - Are some interesting problems insoluble?

(0:44:59) - Does America fail Popper's Criterion?

(0:50:01) - Does finite matter mean there's no beginning of infinity?

(0:53:16) - The Great Stagnation

(0:55:34) - Changes in epistemic status in Popperianism

(0:59:29) - Open-ended science vs. Gain of Function

(1:02:54) - Contra Tyler Cowen on Civilizational Lifespan

(1:07:20) - Fun Criterion

(1:14:16) - Does AGI through evolution require suffering?

(1:18:01) - Would David enter the Experience Machine?

(1:20:09) - (Against) Advice for young people

Please share if you enjoyed this episode! Helps out a ton!

Transcript

Will AIs be smarter than humans? 

Dwarkesh Patel 0:05 

Okay, today I'm speaking with David Deutsch. Now, this is a conversation I've eagerly wanted to have for years. So this is very exciting for me. So first, let's talk about AI. Can you briefly explain why you anticipate that AIs will be no more fundamentally intelligent than humans?

David Deutsch 0:24

I suppose you mean AGIs? And by "fundamentally intelligent", I suppose you mean capable of all the same types of cognition as humans are, in principle?

Dwarkesh Patel 0:37

Yes.

David Deutsch 0:37

So, that would include doing science and art and, in principle, also falling in love and being good and evil and all that. The reason is twofold: one half is about computation - hardware - and the other is about software. If we take the hardware, we know that our brains are Turing-complete bits of hardware, and therefore can run any computable program. Now, when I say any, I don't really mean any, because you and I are sitting here having a conversation. We could say that we could have any conversation, but we can assume that in 100 years' time we will both be dead, and therefore the number of conversations we could have is strictly limited. Also, some conversations depend on the speed of computation. If we're going to be solving traveling salesman problems, then there are many traveling salesman problems that we wouldn't be able to solve in the age of the universe.

When I say “any”, what I mean is that we're not limited in the programs we can run, apart from speed and memory capacity. So all hardware limitations on us boil down to speed and memory capacity. And both of those can be augmented to the level of any other entity that is in the universe. Because if somebody builds a computer that can think faster than the brain, then we can use that very computer or that very technology to make our thinking go just as fast as that. So that's the hardware.
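
(To make that scaling point concrete: below is a minimal brute-force traveling salesman sketch. A universal computer can run it without any trouble; the obstacle is purely speed, since the number of tours grows factorially. The city coordinates are made up for illustration.)

```python
# Brute-force travelling salesman: universality is not the problem,
# speed is. (Illustrative only; the city coordinates are made up.)
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2), (4, 5)]

def tour_length(order):
    # Total length of a closed tour visiting the cities in this order.
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

best = min(permutations(range(len(cities))), key=tour_length)
print("best tour:", best, "length:", round(tour_length(best), 3))

# At ~60 cities the number of tours already dwarfs the number of atoms
# in the observable universe (~10^80):
print(f"{factorial(60):.3e} tours for 60 cities")
```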

As far as explanations go: can we reach the same kind of explanations as any other entity? Usually this is said not in terms of AGI but in terms of extraterrestrial intelligence, but it's also said about AGIs: what if they are to us as we are to ants? Well, again, part of that is just hardware, which is easily fixable by adding more hardware. So let's get around that.

So really, the idea is: are there concepts that we are inherently incapable of comprehending? Martin Rees believes this. He thinks that we can comprehend quantum mechanics and apes can't, and that maybe extraterrestrials can comprehend something beyond quantum mechanics which we can't, and that no amount of brain add-ons with extra hardware can give us that, because they have hardware that is adapted to having those concepts and we haven't.

The same kind of thing is said about certain qualia: that maybe we can experience love, and an AGI couldn't experience love, because it has to do with our hardware - not just memory and speed, but specialized hardware. That falls victim to the same argument. The thing is, this specialized hardware can't be anything except a computer. And if there's hardware that is needed for love, let's say somebody is born without that hardware. That bit of the brain - the bit that does love, or does mathematical insight, or whatever - is just a bit of the brain. And it's connected to the rest of the brain in the same way that other parts of the brain are, namely by neurons passing electrical signals, and by chemicals whose concentrations are altered, and so on. Therefore an artificial device that computed which signals were to be sent and which chemicals were to be adjusted could do the same job, and it would be indistinguishable; a person who was born unable to feel love could feel love after being augmented with one of those devices. And those two things - hardware and software - are the only relevant ones. So that's why AGIs and humans have the same range, in the sense I've defined.

Are intelligence differences immutable/heritable?

Dwarkesh Patel 6:18

Okay, interesting. So, the software question is ultimately more interesting than the hardware one. But I do want to take issue with the idea that the memory and speed of human brains can be arbitrarily and easily expanded - we can get into that later. We can start with this question: can all humans explain everything that even the smartest humans can explain?

If I took the village idiot and asked him to create the theory of quantum computing, should I anticipate that he could do this if he wanted to? For a frame of reference, about 21-24% of Americans on the National Adult Literacy Survey fall in level one, which means that they can't perform basic tasks like identifying the expiry date on a driver's license or totaling a bank deposit slip. Are these humans capable of explaining quantum computing, or creating the Deutsch–Jozsa algorithm? If they're incapable of doing this, doesn't that mean that the theory of universal explainers falls apart?
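
(Since the Deutsch–Jozsa algorithm comes up, a quick aside: its n = 1 case, Deutsch's algorithm, is small enough to simulate in a few lines. The sketch below is illustrative plain-numpy state-vector code, not any particular library's API: one oracle query decides whether f: {0,1} -> {0,1} is constant or balanced.)

```python
# Minimal state-vector simulation of Deutsch's algorithm (n = 1 case
# of Deutsch-Jozsa). Sketch only, using plain numpy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])        # start in |0>|1>
    state = np.kron(H, H) @ state          # Hadamards on both qubits
    state = oracle(f) @ state              # a single oracle query
    state = np.kron(H, np.eye(2)) @ state  # Hadamard on the query qubit
    p1 = state[2] ** 2 + state[3] ** 2     # P(first qubit measures 1)
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```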

David Deutsch 7:22

So, you're talking about tasks that no ape could do. However, some humans are brain-damaged to the extent that they can't even do the tasks that an ape can do. Then, there comes a point when installing the program that would be able to read the driver's license would require augmenting their hardware and software. We don’t know enough about the brain yet, but if it’s 24% of the population, then it's definitely not hardware. For those people, it's definitely software. If it was hardware, then getting them to do this would be a matter of repairing the imperfect hardware. If it's software, it is not just a matter of them wanting to be taught. It is a matter of whether the existing software is conceptually ready to do that. 

For example, Brett Hall has often said he would like to speak Mandarin Chinese. He wants to, but he will never be able to, because he's never going to want it enough to go through the process of acquiring that program. Nothing about his hardware prevents him from learning Mandarin Chinese, and nothing about his software either, except that although he wants to know it, he doesn't want to go through the process of being programmed with that program. But if his circumstances changed, he might want to. For example, many of my relatives a couple of generations ago were forced to migrate to a very alien place. They had to learn languages they never thought they would speak and never wanted to speak. Yet very quickly, they did speak those languages. Was it because they wanted to change? In the big picture, perhaps you could say that they wanted to change. So if the people who can't read driving licenses wanted to be educated to read them, in the sense that my ancestors wanted to learn languages, then yes, they could learn that.

There is a level of dysfunction below which they couldn't - those are hardware limitations - and on the borderline between those two there's not that much difference. That's like the question: could apes be programmed with a fully human intellect? The answer to that is yes. Although programming them would not require hardware surgery in the sense of repairing a defect, it would require intricate changes at the neuron level, to transfer the program from a human mind into the ape's mind. I would guess that is possible, because although the ape has far less memory space than humans do, and no specialized modules like humans have, neither of those things is something we use to the full anyway. When I'm speaking to you now, there's a lot of knowledge in my brain that I'm not referring to at all. For example, the fact that I can play the piano or drive a car is not being used in this conversation. So I don't think not having such a large memory capacity would affect this. This project would be highly immoral, though, because you'd intentionally be creating a person inside deficient brain hardware.

Dwarkesh Patel 12:31

Suppose hardware differences do distinguish different humans in terms of their intelligence. Even if it were just the people who are not functionally literate, these are people…

David Deutsch 12:42

Wait, wait! I said it could only be hardware at the low level: at the level of brain defects, or at the level of using up the whole of our allocation of memory, speed, or whatever. Apart from that, it can't be hardware.

Dwarkesh Patel 13:01

By the way, is hardware synonymous with genetic influences for you, or can software be genetic too?

David Deutsch 13:12

The software can be genetic, too, but that doesn't mean it's immutable. It just means that it’s there at the beginning.

Dwarkesh Patel 13:20

Okay. I suspect it's not software, because - let's suppose it were software, something they chose and could change - it's mysterious to me why these people would also accept the jobs that pay less but are less cognitively demanding, or why they would choose to do worse on an academic test or IQ test. Why would they choose to do precisely the things somebody who was less cognitively powerful would do? It seems the more parsimonious explanation is that they are cognitively less powerful.

David Deutsch 13:55

Why would someone choose not to go to school, for instance, if they were given a choice, and not to have any lessons? Well, there are many reasons why they might choose that - some of them good, some of them bad. Calling some jobs cognitively demanding is already begging the question, because you're referring to a choice that people make - which is a software choice - as being, by definition, forced on them by hardware. It's not that they're cognitively deficient; it's just that they don't want to do it! It's the same as if there were a culture that required Brett Hall to speak fluent Mandarin Chinese to do a wide range of tasks. If he didn't know Mandarin Chinese, he'd be relegated to low-level tasks. Then he would be "choosing" the low-level tasks rather than the "cognitively demanding" tasks, but it's only the culture that assigns a hardware interpretation to the difficulty of doing that task.

Dwarkesh Patel 15:16

Right. It doesn't seem that arbitrary to say that the kind of jobs you can do sitting at a laptop probably require more cognition than the ones you can do on a construction site. If it's not cognition that's being measured, both by these literacy tests and by what you do in your job, what is the explanation for why there's such a high correlation between them - an anti-correlation between people who are not functionally literate and people who are, say, programming? I guarantee that the people working at Apple are above level one on this literacy survey. Why do they happen to make the same choices? Why is there that correlation?

David Deutsch 16:01

There are correlations everywhere, and culture is built to make use of certain abilities that people have. So if you're setting up a company that is going to employ 10,000 employees, then it's best to make the signs above the doors, or the signs on the doors, or the numbers on the dials, all be ones that people in that culture - people who are highly educated - can read. You could, in principle, make each label on each door a different language. There are thousands of human languages - say there are 5,000 languages and 5,000 doors in the company - and you could, keeping the same meaning, make them all different languages. But they're all the same language. And not just any old language: it's a language that educated people in that culture know fluently. You could misinterpret that as saying, "Oh, there must be some hardware reason why everybody speaks the same language." Well, no, there isn't. It's a cultural reason.

Dwarkesh Patel 17:28

If the culture were different, somehow - maybe if there were some other way of communicating ideas - do you think the people currently designated as not functionally literate could be in a position to learn about quantum computing? That is, if they made the choices that could lead to them understanding quantum computing?

David Deutsch 17:53

So, I don't want to evade the question. The answer is yes. But the way you put it begs the question, and it's not only language that's like this - it's all knowledge. Take someone who doesn't speak English: quantum computing is a field in which English is the standard language. It used to be German; now it's English. Someone who doesn't know English is disadvantaged in learning about quantum computers, but not only because of the deficiency in language.

Suppose they come from a culture whose culture of physics, mathematics, and logic is equivalent to ours, and only the language is different. If they learn the language, they will find it as easy as anyone else. But suppose a person doesn't think in terms of logic but in terms of pride, manliness, fear - concepts that fill the lives of, say, prehistoric people or pre-Enlightenment people. In that case, they'd have to learn a lot more than just the language of the civilization to understand quantum computers. They'd have to learn a range of other features of the civilization. On that basis, people who can't read driving licenses are similarly in a different culture, which they would also have to learn if they are to increase their IQ, i.e., their ability to function at a high level in the intellectual culture of our civilization.

IQ correlation of twins separated at birth

Dwarkesh Patel 20:12 

Okay, so if it's those kinds of differences, then how do you explain the fact that identical twins separated at birth and adopted by different families tend to have very similar IQs? Most of the variance that does exist between humans doesn't exist between identical twins: the correlation is 0.8, which is about the correlation you'd get with yourself if you took the test on different days, depending on how good a day you were having. These are people adopted by families with different cultures, often in different countries. A hardware theory explains very well why they would have similar scores, and IQ tests correlate with literacy, job performance, and so on; whereas I don't know how the software theory would explain why twins end up so similar despite being adopted by different families.

David Deutsch 20:58   

The hardware theory "explains" it only in the sense that it might be true. It doesn't have an explanation beyond that - but then, nor does the software theory.

Dwarkesh Patel 21:15   

So there are differences at the brain level that correlate with IQ, right? Your actual skull size has something like a 0.3 correlation with IQ, and there are a few more like this. They don't explain the entire variance in human intelligence, or even the entire genetic variance in human intelligence, but we have identified a few actual hardware differences that correlate.

David Deutsch 21:34

Suppose that, on the contrary, the results of these experiments had been different. Suppose the result was that people brought up in the same family differed in the amount of hair they have, or in their appearance in any other way, and none of those differences made any difference to their IQ - only who their parents were made a difference. Wouldn't it be surprising if nothing else correlated with IQ other than who your parents are?

Now, how much correlation should we expect? There are correlations everywhere. There are these things on the internet - jokes, memes, or whatever you call them - that make a serious point by correlating things like how many adventure movies were made in a given year with GNP per capita. That's a bad example, because there's an obvious relation, but you know what I mean: say, the number of films made by a particular actor against the number of bird flu outbreaks.

Part of being fooled by randomness is that correlations are everywhere. It's not just that correlation isn't causation; it's that correlations are everywhere. It's not a rare event to get a correlation between two things - the more things you ask about, the more correlations you will get. What would need explaining is a correlation between things you expected, before measuring, to be correlated. For example, when they do these twin studies and measure IQ, they control for certain things - like you said, whether the identical twins were reared together or apart.

But there are infinitely many more things that they don't control for. It could be that the real determinant of IQ is, for example, how well a child is treated between the ages of three and a half and four and a half, where "well" is defined by something that we don't know yet. Then you would expect that thing - the thing we don't know about, which nobody has bothered to control for - to be correlated with IQ. But unfortunately, that thing might also be correlated with whether someone's an identical twin or not. So it wouldn't be being identical twins that was causing the similarity; it would be this other thing, an aspect of appearance or something. If you were to surgically change a person, and you knew what this thing was, you could produce the same effect as making someone an identical twin.
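
(A toy illustration of the "correlations are everywhere" point: generate a few hundred unrelated random variables and some pairs will correlate strongly by chance. Everything below is synthetic noise; the printed numbers are illustrative, not data about twins.)

```python
# Spurious correlations from pure noise: 200 unrelated variables,
# 30 observations each. Some pairs will correlate strongly by chance.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 30))   # rows are variables

corr = np.corrcoef(data)            # 200 x 200 correlation matrix
np.fill_diagonal(corr, 0)           # ignore self-correlation

i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest chance correlation: variables {i} and {j}, "
      f"r = {corr[i, j]:.2f}")
print("pairs with |r| > 0.5:", int((np.abs(corr) > 0.5).sum() // 2))
```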

Dwarkesh Patel 25:37 

Right. But, as you say, in science, to explain any phenomenon there are infinitely many possible explanations, right? You have to pick the best one. So it would have to be some unknown trait that is so obvious to different adoptive parents that they can use it as a basis for discrimination or for different treatment.

David Deutsch 25:56 

I would assume they don't know what it is.

Dwarkesh Patel 26:00 

But then aren’t they using it as a basis to treat kids differently at age three?

David Deutsch 26:04

Not by consciously identifying it. It would be something like getting the idea that this child is really smart. I'm just trying to show you that it could be something the parents are unaware of. If you asked parents to list the traits in their children that caused them to behave differently towards them, they might list 10 traits. But there are another 1,000 traits they're unaware of which also affect their behavior.

Dwarkesh Patel 26:33 

So we'd first need an explanation of what this trait is that researchers have not been able to identify, but which is so salient that parents, even unconsciously, can reliably use it as a basis for how they treat their children.

David Deutsch 26:45 

It wouldn’t have to be obvious at all. Parents have a huge amount of information about their children—which they are processing in their minds. They don’t know what most of it is. 

Do animals have bounded creativity?

Dwarkesh Patel 27:05 

All right. Okay, so let's leave this topic aside for now. Let me bring us to animals. You say creativity is something that doesn't exist in increments - the capacity to create explanations either exists or it doesn't. But take the simple example of a cat opening a door, right? You'll see a cat develop a theory that applying torque to a handle will open a door. Then it'll climb onto a countertop and jump on top of the door handle. It hasn't seen any other cat do it. It hasn't seen a human get on a countertop and try to open the door that way. But it conjectures that this is a way, given its morphology, that it can open a door. So that's its theory, and the experiment is: will the door open? This seems like a classic cycle of conjecture and refutation. Is this compatible with the cat not having at least some bounded form of creativity?

David Deutsch 28:01 

I think it's perfectly compatible. Animals are amazing things, and instinctive animal knowledge is designed to make animals capable of thriving in environments that they've never seen before. In fact, if you go down to the level of detail, animals have never seen their environment before - maybe a goldfish in a goldfish bowl has. When a wolf runs through the forest, it sees a pattern of trees that it has never seen before, and it has to create strategies for avoiding each tree, and for catching the rabbit it's running after, in a way that has never been done before. This is because of a vast amount of knowledge in the wolf's genes. What kind of knowledge is this? Well, it's not knowledge of the form "turn left here, turn right there". It's a program that takes input from the outside and then generates behavior that is relevant to that input. It doesn't involve creativity, but it has a degree of sophistication that human robotics has not yet come anywhere near. By the way, when the wolf sees a wolf of the opposite sex, it may decide to leave the rabbit and go and have sex instead.

A program for a robot to locate another robot of the right species and then have sex with it is beyond present-day robotics, but it will be done. It does not require creativity, because that same program will lead the next wolf to do the same thing in the same circumstances. The fact that the circumstances are ones it has never seen before, and it can still function, is a testimony to the incredible sophistication of that program. But it has nothing to do with creativity.

Humans do tasks that require much, much less programming sophistication, such as sitting around a campfire and telling each other a scary story about a wolf that almost ate them. Now, animals can do the wolf-running-away thing; they can enact a story that's even more complicated than the one the human tells. But they can't tell a story. Telling a story is a typically creative activity - it's the same kind of activity as forming an explanation. So I don't think it's surprising that cats can jump on door handles. I can easily imagine that the same amazingly sophisticated program that lets a cat jump on a branch will also function in this new environment it's never seen before. But there are all sorts of other things that it can't do.

Dwarkesh Patel 32:01 

Oh, that's certainly true. My point is that it has a bounded form of creativity - and if bounded creativity can exist, humans could be subject to one such bound too. I'm having a hard time imagining the ancestral circumstance in which a cat could have gained the genetic knowledge that jumping on a metal rod would get a wooden plank to open and give it access to the other side.

David Deutsch 32:26

Well, I thought I just gave an example. We don't know what kind of environment the ancestors of domestic cats lived in, but if it contained undergrowth, then dealing with undergrowth requires some very sophisticated programs; otherwise, you will just get stuck somewhere and starve to death. If a dog gets stuck in a bush, it has no program to get out other than to shake itself until it gets out. It doesn't have the concept of doing something that temporarily makes matters worse but then allows you to get out. Dogs can't do that - not because that's a particularly complicated thing; it's just that their programming doesn't have it. But an animal's programming could easily have it, if the animal lived in an environment in which that happened a lot.

How powerful can narrow AIs be?

Dwarkesh Patel 33:33

Is your theory of AI compatible with AIs that have narrow objective functions, but functions that, if fulfilled, would give their creator a lot of power? For example, suppose I wrote a deep learning program, trained it on financial history, and asked it to make me a trillion dollars on the stock market. Do you think that this would be impossible? And if it is possible, it doesn't seem like an AGI, but it does seem like a very powerful AI, right? So it seems like AI is getting somewhere.

David Deutsch 34:04 

Well, if you want to be powerful, you might do better inventing a weapon or something - or a better mousetrap, since that's nonviolent. You could invent the paperclip, to use an example that's often used in this context. If paperclips hadn't been invented, you could invent the paperclip and make a fortune. That's an idea, but it's not an AI, because it's not the paperclip that's going out there and earning; it's your idea in the first place that caused the whole value of the paperclip.

Similarly, suppose you invent a dumb arbitrage machine that seeks out trades more complicated than anyone else is trying to make, and suppose that makes you a fortune. Well, the thing that made you a fortune was not the arbitrage machine; it was your idea of how to search for arbitrage opportunities that no one else sees. Right? That is what was valuable. That's the usual way of making money in the economy: you have an idea, and then you implement it. The AI is beside the point. It could have been the paperclip.

Dwarkesh Patel 35:29 

But the thing is, the models that are used nowadays are not expert systems like the chess engines of the 90s. They're something like AlphaZero or AlphaGo: an almost blank neural net that they were able to train to win at Go. So if you similarly throw financial history at a blank neural net, wouldn't it be fair to say that the AI figured out the right trades, even though it's not a general intelligence?

David Deutsch 35:57 

I think that's possible in chess, but not in the economy, because value in the economy is created by creativity. Arbitrage is one thing that can sort of skim value off the top, by taking opportunities that were too expensive for other people to take; you can make a lot of money if you have a good idea about how to do that. But most of the value in the economy is created by the creation of knowledge. For example, somebody has the idea that a smartphone would be good to have, even though most people think it's not going to work. That idea cannot be anticipated by anything less than an AGI. An AGI could have that idea, but no AI could.

Could you implant thoughts in VR?

Dwarkesh Patel 36:53 

Okay, there are other topics I want to get into, so let's talk about virtual reality. In The Fabric of Reality, you discussed the possibility that virtual reality generators could plug directly into our nervous system and give us sense data. As you might know, many meditators, like Sam Harris, speak of thoughts and senses as intrusions into consciousness. They can be welcome intrusions, but both are things that come into consciousness. So do you think virtual reality generators could place thoughts, as well as sense data, into the mind?

David Deutsch 37:30

Oh yes, but that's only because that model is wrong. It's the Cartesian theater, as Dan Dennett puts it, with the stage cleared of all the characters: that's pure consciousness without content, as Sam Harris envisages it. But I think all that's happening there is that you are conscious of this theater, and you envisage it as having certain properties which it doesn't have - but that doesn't matter; we can imagine lots of things that don't happen. That, in a way, characterizes what we do all the time. So one can interpret one's thoughts about this empty stage as being thoughts about nothing. One can interpret the imagined stage as being pure consciousness minus content, but it's not: it has the content of a stage, or a space, or however you want to envision it.

Can you simulate the entire universe?

Dwarkesh Patel 38:49

Okay, and then let's talk about the Turing principle. This is a term you coined; it's otherwise been called the Church-Turing-Deutsch principle: a universal computer can simulate any physical process. Would this principle imply that you could simulate the whole of the universe in a compact, efficient computer smaller than the universe itself? Or is it constrained to physical processes of a certain size?

David Deutsch 39:19 

No, it couldn't simulate the whole universe. That would be an example of a task it is computationally capable of, but for which it wouldn't have enough memory or time. The more memory and time you gave it, the more closely it could simulate the whole universe. But it couldn't ever simulate the whole universe, or anything near the whole universe, partly because it's hard for it to simulate itself, and partly because of the sheer size of the universe.

Even if we discovered ways of encoding information more densely - maybe quantum gravity would allow a far greater density of information - it still couldn't simulate the universe, because quantum gravity applies to the rest of the universe as well, so the rest of the universe would be correspondingly more complex. But I think it's significant to separate being limited by the available time and memory from being limited in computational repertoire. It's only when you separate those that you realize what computational universality is. Turing universality, or quantum universality, is the most important thing in the theory of computation, because computation doesn't even make sense unless you have a concept of a universal computer.
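
(One way to make the self-simulation obstacle vivid is a simple counting argument; this is an illustration under stated assumptions, not an argument made in the conversation.)

```latex
% Pigeonhole sketch: an exact simulator must distinguish all states of
% the simulated system, so its memory M (in bits) must satisfy
\[ M \ge N, \]
% where N is the number of bits specifying the simulated system's state.
% A simulator that is itself part of the universe contributes its own
% M bits to the universe's state, so simulating the whole universe
% exactly would require
\[ M \;\ge\; N_{\mathrm{rest}} + M \quad\Longrightarrow\quad N_{\mathrm{rest}} \le 0, \]
% which fails whenever anything at all exists outside the simulator.
```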

Are some interesting problems insoluble?

Dwarkesh Patel 41:24 

What could falsify your theory that all interesting problems are solvable? I ask this because people have tried offering explanations for why certain problems or questions, like "Why is there something rather than nothing?" or "How can mere physical interactions explain consciousness?", are in principle insoluble. Now, I'm not convinced they're right. But do you have a strong reason, in principle, for believing that they're wrong?

David Deutsch 41:51 

No. This is a philosophical theory and could not be proved wrong by experiment. However, I have a good argument for why they're wrong, namely that each individual case of this is a bad explanation. Let's say some people claim, for example, that simulating a human brain is impossible. Now, I can't prove that it's possible - nobody can prove that it's possible until they actually do it, or unless they have a design for it which they can prove will work. So pending that, there is no way of proving that this isn't a fundamental limitation. But the trouble with the idea that it is a fundamental limitation is that it could be applied to anything. For example, it could be applied to the theory that you have, just a minute ago, been replaced by a humanoid robot which is going to say, for the next few minutes, just a pre-arranged set of things, and you're no longer a person––

Dwarkesh Patel 43:14 

I can't believe you figured it out.

David Deutsch 43:17

Well, that's the first thing it would say. There is no way to refute that theory by experiment, short of actually continuing to talk to you, and so on. It's the same with all these other claims. For it to make sense to have a theory that something is impossible, you have to have an explanation for why it is impossible. We know, for example, that almost all mathematical propositions are undecidable. That's not because somebody said, "Maybe we can't decide everything, because thinking we could decide everything is hubris." That's not an argument. You need an actual functional argument to prove that it is so - an argument in which the steps make sense and relate to other things. Then you can ask: what does this actually mean? Does it mean that maybe we can never understand the laws of physics? Well, it doesn't. Because if the laws of physics included an undecidable function, then we would simply write f(x), where f is an undecidable function. We couldn't evaluate f(x), and that would limit our ability to make predictions. But lots of our ability to make predictions is limited anyway, and it would not affect our ability to understand the properties of the function f, and therefore the properties of the physical world.

Does America fail Popper's Criterion?

Dwarkesh Patel 45:01

Okay. Is a system of government like America's, which has distributed powers and checks and balances, incompatible with Popper's criterion? The reason I ask is that the last administration had a theory that if you build a wall, there'll be positive consequences. That idea could have been tested, and then the person could have been evaluated on whether the theory succeeded. But because our system of government has distributed powers, Congress opposed the testing of that theory, and so it was never tested. So if the American government wanted to fulfill Popper's criterion, we would need to give the president more power, for example.

David Deutsch 45:35 

It's not as simple as that. I agree that this is a big defect in the American system of government; no country has a system of government that perfectly fulfills Popper's criterion. We can always improve. The British one is actually the best in the world, and even it is far from optimal. Making a single change like that is not going to be the answer. The constitution of a polity is a very complicated thing, much of which is inexplicit.

The American founding fathers realized they had a tremendous problem: what did they want to do? What they thought of themselves as doing was implementing the British Constitution. In fact, they thought of themselves as defenders of the British Constitution: the British king had violated it and was bringing it down, and they wanted to retain it. The trouble is that in order to do this - to gain the independence to do this - they had to get rid of the king. And then they wondered whether they should get an alternative king… whichever way they did it, there were problems. The way they decided to do it made for a system that was inherently much worse than the one they were replacing, but they had no choice. If they wanted to get rid of a king, they had to have a different system for having a head of state, and they wanted to be democratic. That meant the President had a legitimacy in legislation that the King never had - or rather, no longer had: the king did used to have that in medieval times, but by the time of the Enlightenment the king no longer had full legitimacy to legislate. So they had to implement a system where his seizing power was prevented by something other than tradition, and so they instituted these checks and balances. The whole thing they instituted was immensely sophisticated. It's an amazing intellectual achievement, and that it works as well as it does is something of a miracle. But the inherent flaws are there, and one of them is that having checks and balances means that responsibility is dissipated. Nobody is ever to blame for anything in the American system, which is terrible.

In the British system, blame is absolutely focused. Everything is sacrificed to the end of focusing blame and responsibility: past the courts, past the parliament, right to the government. That's where it's all focused, and there are no systems that do that better. But as you will know, the British system also has flaws. We saw that recently with the sequence of events around the Brexit referendum: Parliament balked at implementing laws it didn't agree with, then that was referred to the courts, and so the courts and the parliament and the government and the Prime Minister were all blaming each other. There was a mini constitutional crisis, which could only be resolved by having an election and then having a majority government - which, by the mathematics of how the electoral system works, is how it usually is in Britain, although we have been unlucky several times recently in not having a majority government.

Does finite matter mean there's no beginning of infinity?

Dwarkesh Patel 50:04 

Okay, so this could be wrong, but it seems to me that in an expanding universe, there will be a finite amount of total matter that will ever exist in our light cone, right? There's a limit. And that means there's a limit on the amount of computation this matter can execute, the amount of energy it can provide, perhaps even the amount of economic value it can sustain - maybe it would be weird if the GDP per atom could be arbitrarily large. So does this impose some limit on your concept of the beginning of infinity?

David Deutsch 50:37 

So what you've just recounted is a cosmological theory. The universe could be like that. But we know very little about cosmology - we know very little about the universe in general; theories of cosmology are changing on a timescale of about a decade. So it doesn't make all that much sense to speculate about what the ultimate asymptotic form of cosmological theories will be. At the same time, we don't have a good idea about the asymptotic form of very small things. We know that our conception of physical processes must break down somehow at the level of quantum gravity, around 10 to the minus 42 seconds, but we have no idea what happens below that. Some people say everything has got to stop below that, but there's no argument for that at all; it's just that we don't know what happens beyond it. Now, what happens beyond that may impose a finite limit, and similarly, what happens on the large scale may impose a finite limit. In that case, computation is bounded by a finite limit imposed by the cosmological initial conditions of this universe, which is still different from its being imposed by inherent hardware limitations. For example, if there's a finite amount of GNP available in the distant future, it's still up to us whether we spend it on mathematics or music or political systems, or any of the thousands of even more worthwhile things that have yet to be invented. So it's up to us which ideas we fill those 10 to the 10 to the 10 bits with. My guess is that there are no such limits. But my worldview is not affected by whether there are, because, as I said, it's still up to us what to fill them with. And if we get chopped off at some point in the future, then everything will have been worthwhile up to then.

The Great Stagnation

Dwarkesh Patel 53:15

Gotcha. Okay. So, the way I understand the beginning of infinity, the more knowledge we gain, the more knowledge we're in a position to gain, so there should be exponential growth of knowledge. But if we look at the last 50 years, there seems to have been a slowdown in research productivity, economic growth, and productivity growth. This seems compatible with the story that there's a limited amount of fruit on the tree: we picked the low-hanging fruit, now there's less and less fruit and it's harder and harder to pick, and eventually the orchard will be empty. Do you have an alternative explanation for what's been going on in the last 50 years?

David Deutsch 53:52

Yes, it's very simple. There are sociological factors in academic life which have stultified the culture - not totally, and not everywhere, but that has been the tendency. It has resulted in a loss of productivity in many sectors, in many ways (though not in every sector, and not in every way). For example, I've often said there was an ossification in theoretical physics starting in the 1920s, and it still hasn't fully dissipated. If it weren't for that, quantum computers would have been invented in the 1930s and built in the 1960s. So this is just an accidental fact. But it goes to show that there are no guarantees. The fact that our horizons are unlimited does not guarantee that we will get anywhere, or that we won't start declining tomorrow. I don't think we are currently declining, though. The declines that we see are parochial effects, caused by specific mistakes that have been made, and they can be undone.

Changes in epistemic status in Popperianism

Dwarkesh Patel 55:35 

Okay, so I want to ask you a question about Bayesianism versus Popperianism. One reason why people prefer Bayes is that it seems to offer a way of describing changes in epistemic status even when the relative status of a theory hasn't changed. I'll give you an example. Currently, the Many Worlds explanation is the best way to explain quantum mechanics, right? But suppose in the future we were able to build an AGI on a quantum computer and design some clever interference experiment, as you've suggested, to have it report back being in a superposition across many worlds. Now, it seems that even though Many Worlds would remain the best - maybe the only - explanation, somehow its epistemic status would have changed as a result of the experiment. In Bayesian terms, you can say the credence in this theory has increased. So how would you describe these sorts of changes in the Popperian view?

David Deutsch 56:33 

What has happened is that at the moment, we have only one explanation that can't be immediately knocked down. If we did that thought experiment, we might well find that it provides the ammunition to knock down alternative explanations that have not been thought of yet. Obviously, it wouldn't be enough to knock down every possible alternative, because for a start we know that quantum theory is false, and we don't know for sure that the next theory will have many worlds in it - I expect it will, but we can't prove anything like that. But I would replace the idea of increased credence with the idea that the experiment provides a quiver full of arrows, a repertoire of arguments, that goes beyond the known bad arguments and reaches into other types of argument. The reason I would say that is that some of the existing misconceptions about quantum theory reside in misconceptions about the methodology of science. Now, I've written a paper about what I think the right methodology of science is, in which those misconceptions don't apply. But many physicists and many philosophers would disagree with that, and they would advocate a methodology of science based more on empiricism. That empiricism is itself mistaken and can be knocked down on its own terms - but not everybody thinks so. Once we had an experiment such as my thought experiment, if it were actually done, people could no longer use their arguments based on a fallacious idea of empiricism, because their theory would have been refuted even by the standards of empiricism - which shouldn't have been needed in the first place. So that's the way I would express it: the repertoire of arguments would become more powerful if that experiment were done successfully.
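
(For contrast, the Bayesian bookkeeping Dwarkesh refers to is just conditionalization; the numbers below are invented purely to illustrate the mechanics, not taken from the conversation.)

```latex
\[
  P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}
                         {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% E.g. with a prior P(H) = 0.5 and an experimental outcome five times
% likelier if Many Worlds is true, P(E|H) = 0.5 vs P(E|not-H) = 0.1:
\[
  P(H \mid E) \;=\; \frac{0.5 \times 0.5}{0.5 \times 0.5 + 0.1 \times 0.5}
             \;=\; \frac{0.25}{0.30} \;\approx\; 0.83
\]
```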

Open-ended science vs gain of function

Dwarkesh Patel 59:29

The next question I have is: how far do you take the principle that open-ended scientific progress is the best way to deal with existential dangers? To give one example, take gain-of-function research. It's conceivable that it could lead to more knowledge about how to stop dangerous pathogens. But, at least in Bayesian terms, you could say it seems even more likely to lead - or to have already led - to the spread of a man-made pathogen that would not otherwise have arisen naturally. So would your belief in open-ended scientific progress allow us to say, "Okay, let's stop doing some research"?

David Deutsch 1:00:09

No, it wouldn't allow us to say "let's stop it", but it might make it reasonable to say, "let us do research into how to make laboratories more secure before we do gain-of-function research." It's really part of the same thing. It's like saying, "Let's do research into how to make the plastic hoses through which the reagents pass more impermeable before we actually do the experiments with the reagents." It's all part of the same project. I wouldn't want to stop something just because new knowledge might be discovered. But which knowledge we need to discover first - that's a problem of scheduling, which is a non-trivial part of any research and of any learning.

Dwarkesh Patel 1:01:06 

Would it be conceivable for you to say that until we figure out how to hold these laboratories to a certain standard, we should stop the research as it exists now? Meanwhile, we'd focus on doing the other kind of research before we restart; until then, it's not allowed.

David Deutsch 1:01:27

Yes, in principle, that would be reasonable. I don't know enough about the actual situation to have a view. I don't know how these labs work; I don't know what the precautions consist of. When I hear people talking about, for example, lab leaks - well, the most likely lab leak is that one of the people who works there walks out of the front door. The leak is not from the lab to the outside; it's from the test tube to the person, and then from the person walking out the door. And I don't know enough about what these precautions are, or what the state of the art is, to know to what extent the risk is actually minimized. It could be that the culture of these labs is not good enough, in which case part of the task would be to improve the culture in the labs. But I am very suspicious of saying that all labs have to stop until they meet a criterion, because I suspect that the stopping wouldn't be necessary, and the criterion wouldn't be appropriate. Then again, which criterion to use depends on the actual research being done.

Contra Tyler Cowen on Civilizational Lifespan

Dwarkesh Patel 1:02:56

When I had Tyler Cowen on my podcast, I asked him why he thinks that humanity is only going to be around for 700 more years. I gave him your rebuttal, or what I understand as your rebuttal: that creative, optimistic societies will innovate safety technologies faster than totalitarian, static societies can innovate disruptive technologies. And he responded: maybe, but the cost of destruction is just so much lower than the cost of building, and that trend has been going on for a while now. What happens when a new bomb costs $60,000? Or what happens if there's a mistake, like the kinds we saw many times over in the Cold War? How would you respond to that?

David Deutsch 1:03:42 

First of all, we've been getting safer and safer throughout the entire history of civilization. There were plagues that wiped out a third of the population of the world, or half, and it could have been 99% or 100%. We went through some kind of bottleneck 70,000 years ago, I understand from genetics, and all our cousin species have been wiped out, so we were much less safe then. Also, if a 10-kilometer asteroid had been on target to hit the Earth at any time in the past 2 million years or whatever it is of the history of the genus Homo, that would have been the end of it. Whereas now, it would just mean higher taxation. That's how amazingly safe we've become.

Now, I would never say that it's impossible that we will destroy ourselves; that would be contrary to the universality of the human mind. We can make wrong choices - we can make so many wrong choices that we destroy ourselves. On the other hand, the atomic bomb accidents you mention would have had zero chance of destroying civilization; all they would have done is cause a vast amount of suffering. I don't think we have the technology to end civilization even if we wanted to. All we would do, if we deliberately unleashed hell all over the world, is cause a vast amount of suffering. There would be survivors, and they would resolve never to do that again. So I don't think we're even able to do it deliberately, let alone that we would do it accidentally.

But as for the bad guys? Well, we are largely doing the wrong thing in regard to both external and internal threats, but I don't think we're doing the wrong thing to an existential-risk level. As for the next 700 years, or whatever it is - well, I don't want to prophesy, because I don't know most of the advances that are going to be made in that time. But I see no reason why, if we keep solving problems, we won't go on solving problems. There's a metaphor by Nick Bostrom about a jar of white balls with a black ball among them: you keep taking out white balls, and then one day you pick the black ball, and that's the end of you. I don't think it's like that, because every white ball you take out reduces the number of black balls in the jar. Again, I'm not saying that's a law of nature. It could be that the very next ball we take out will be the black one, and that will be the end of us. It could be - but all arguments that it will be are fallacious.
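
(A toy Monte Carlo contrasting the two urn models: Bostrom's static urn versus the amendment in which drawing white balls - creating knowledge - sometimes removes black balls. All parameters are invented for illustration; this is a cartoon of the metaphor, not a risk model.)

```python
# Toy urn model: does knowledge-creation (removing black balls as you
# go) change long-run survival odds? Parameters are made up.
import random

def survives(draws, whites=1000, blacks=10, learning=True):
    for _ in range(draws):
        if random.random() < blacks / (whites + blacks):
            return False                  # drew a black ball
        whites -= 1
        if learning and blacks > 0 and random.random() < 0.05:
            blacks -= 1                   # a risk eliminated by knowledge
    return True

trials = 2000
for learning in (False, True):
    alive = sum(survives(500, learning=learning) for _ in range(trials))
    print(f"learning={learning}: survived {alive / trials:.1%} of runs")
```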

Fun criterion

Dwarkesh Patel 1:07:21

I do want to talk about the fun criterion. Is your definition of fun different from how other people define other positive states, like eudaimonia, or well-being, or satisfaction? Is "fun" a different emotion?

David Deutsch 1:07:35

I don't think it's an emotion. And all these things are not very well defined; they can't possibly be until we have a satisfactory theory of qualia, at least, and probably a more satisfactory theory of creativity and how it works. I chose the word "fun" for the thing that I've explained more precisely - but still not very precisely - as the creation of knowledge in which the different kinds of knowledge (inexplicit and explicit, unconscious and conscious) are all in harmony with each other. The only way in which the everyday usage of the word "fun" differs from that is that fun is considered frivolous, or seeking fun is considered seeking frivolity. But that isn't so much a different use of the word as a different, pejorative theory about whether this is a good or bad thing. Nevertheless, I can't define it precisely. The important thing is that there is a thing which has this property of "fun", and that it can't be enacted compulsorily. Take the view "no pain, no gain": you can find out mechanically whether the thing is causing pain, and whether it's doing so according to the theory that says you will have gain if you have that pain, and so on. All that can be done mechanically, and therefore it is subject to criticism. Another way of looking at the fun criterion is as a mode of criticism: a practice is subject to the criticism that it isn't fun, i.e., that it is privileging one kind of knowledge arbitrarily over another, rather than being rational and letting the content decide.

Dwarkesh Patel 1:10:04

Is this placing a limitation on universal explainers? If they can create some theory about why a thing could or should be fun, why couldn't anything be made fun? It seems that sometimes we actually can make things fun that aren't - take exercise and "no pain, no gain": when you first go, it's not fun, but once you start going and understand the mechanics, you develop a theory for why it should be fun.

David Deutsch 1:10:27 

Yes, that's quite a good example, because there you see that fun cannot be defined as the absence of pain. You can have fun while experiencing physical pain, where the physical pain is sparking not suffering but joy. However, there is also such a thing as physical pain not sparking joy, as Marie Kondo would say. And that's important. If you are dogmatically or uncritically implementing in your life a theory of the good that involves pain, and which excludes the criticisms that maybe this can't be fun, or maybe this isn't yet fun, or maybe I should make it fun - and if I can't, that's a reason to stop - if all those criticisms are excluded because, by definition, the thing is good and your pain, your suffering, doesn't matter, then that opens the door not only to suffering but to stasis. You won't be able to get to a better theory.

Dwarkesh Patel 1:11:58

And then, why is fun central to this rather than another concept? For example, Aristotle thought that a widely defined sense of happiness should be the goal of our endeavors. Why fun instead of something like that?

David Deutsch 1:12:15

Well, that's defining it vaguely enough. The point is the underlying thing. I'm only going one level below the everyday idea; to really understand it we'd need to go about seven levels below that, which we can't do yet. But the important thing is that there are several kinds of knowledge in our brains. There's the kind that is written down in the exercise book, which says you should do this number of reps, and you should power through, and it doesn't matter how you feel, and so on. That's an explicit theory. It contains some knowledge, but it also contains errors; all our knowledge is like that. We also have other knowledge, which is contained in our biology - in our genes.

We have knowledge that is inexplicit. Our knowledge of grammar is always my favorite example: we know which sentences are acceptable and which are unacceptable, but we can't state explicitly, in every case, why. So there's explicit and inexplicit knowledge, and there's conscious and unconscious knowledge. All of those are bits of programming of the brain. They're ideas. If you define knowledge as information with causal power, they are all information with causal power. They all contain truth, and they all contain error. And it's always a mistake to shield just one of them from criticism or replacement. Not doing that is what I call the fun criterion. Now, you might say that's a bad name, but it's the best I can find.

Does AGI through evolution require suffering?

Dwarkesh Patel 1:14:18

So why would creating an AGI through evolution necessarily entail suffering? The way I see it, it seems to be your theory that you need to be a general intelligence in order to feel suffering. But by the point in the evolution at which the simulated being is a general intelligence, we could just stop the simulation. So where's the suffering coming from?

David Deutsch 1:14:38 

Okay. So there may be several kinds of simulated evolution, but the kind that I'm thinking of - which I said would be the greatest crime in history - is the kind that just simulates the actual evolution of humans from pre-humans that weren't people. So you have a population of non-people, which in this simulation would be some kind of NPCs, and then they would just evolve. We don't know what the criteria would be; we'd just have an artificial universe that simulated the surface of the earth, and they'd be walking around, and some of them might or might not become people.

Now, the thing is, what is happening when you're part of the way there? The only way I can imagine that the evolution of personhood, or explanatory creativity, happened is that the hardware needed for it was first needed for something else. I have proposed that it was needed to transmit memes. So there'd be people transmitting memes creatively, but running out of resources - though not running out of resources before the creativity managed to increase the stock of memes. So in every generation there was a stock of memes being passed down to the next generation, and once the memes got beyond a certain complexity, they had to be passed down by the use of creativity by the recipient.

So there may well have been a time - and as I say, I can't think of any other way it could have been - when there was genuine creativity being used, but it ran out of resources very quickly; not so quickly, though, that it didn't increase the meme bandwidth. Then in the next generation there was more meme bandwidth. And after a certain number of generations, there would have been some opportunity to use this hardware - or firmware, I expect - for something other than just blindly transmitting memes; or rather, creatively transmitting memes, but memes that were blind. During that time it would have been very unpleasant to be alive. It was still very unpleasant to be alive when we did have enough resources to think as well as transmit the memes. But I don't think there would have been a moment at which you could say, "Yes, now the suffering begins to matter." The people were already suffering at the time when they were blindly transmitting memes, because they were using genuine creativity - they were just not using it to any good effect.

Would David enter the Experience Machine?

Dwarkesh Patel 1:18:01 

Gotcha. Would being in the Experience Machine be compatible with the fun criterion? You're not aware that you're in the Experience Machine - it's all virtual reality - but you're still doing the things that would make you have fun, in fact more so than in the real world. Would you be tempted to get into an Experience Machine, and would it be compatible with the fun criterion? They're different questions.

David Deutsch 1:18:24

But I'm not sure what the Experience Machine is. Is it just a virtual reality world in which things work better than in the real world or something? 

Dwarkesh Patel 1:18:40

So it's a thought experiment by Robert Nozick. The idea is that you would enter this world, but you would forget that you're in virtual reality. The world would be better in every possible way than the real one, but you would think the relationships you have there are real, the knowledge you're discovering there is novel, and so on. Would you be tempted to enter such a world?

David Deutsch 1:19:08 

Well, no. I certainly wouldn't want to enter any world that involves erasing the memory that I have come from this world. Related to that is the fact that the laws of physics in this virtual world couldn't be the true ones, because the true ones aren't yet known. So I'd be in a world in which I was trying to learn laws of physics which aren't the actual laws. They would have been designed by somebody, for some purpose, to manipulate me. Maybe it would be designed as a puzzle that would take 50 years to solve, but it would be, by definition, a finite puzzle, and it wouldn't be the actual world. Meanwhile, in the real world, things would be going wrong, and I wouldn't know about it. Eventually they'd go so wrong that my computer would run out of power. And then where would I be?

(Against) Advice for young people

Dwarkesh Patel 1:20:11

The final question I always like to ask people I interview is: what advice would you give to young people - somebody in their 20s? Is there some advice you would give them?

David Deutsch 1:20:24

Well, I try very hard not to give advice, because giving advice is not a good relationship to have with somebody. I can have opinions about things. For example, I have the opinion that it's dangerous to condition your short-term goals by reference to some long-term goal, and I have a good epistemological reason for that: namely, that if your short-term goals are subordinate to your long-term goal, then if your long-term goal is wrong, or deficient in some way, you won't find out until you're dead. It's a bad idea because it subordinates the things that you could error-correct now, or in six months' time, or in a year's time, to something that you could only error-correct on a 50-year timespan, by which point it will be too late. So I'm suspicious of advice of the form "set your goal", and even more suspicious of advice making your goal be so-and-so.

Dwarkesh Patel 1:21:48 

Interesting. Why do you think the relationship between advisee and advice-giver is dangerous?

David Deutsch 1:21:56 

Oh, well, because it's one of authority. I tried to make the example of, quote, "advice" that I just gave non-authoritative: I just gave an argument for why certain other arguments are bad. If it's advice of the form "a healthy mind in a healthy body", or "don't drink coffee before 12 o'clock", or something like that, it's an argument. If I have an argument, I can give the argument and not tell the person what to do. Who knows what somebody might do with an argument - they might change it into a better argument which actually implies a different behavior. I can contribute to the world by making arguments as best I can. I don't claim that they are privileged over other arguments; I just put them out because I think they work, and I expect some other people not to think they work. We've just done this in this very podcast: I put out an argument about AI and that kind of thing, and you criticized it. If I were in the position of making that argument and saying, "Therefore you should do so-and-so!", that would be a relationship of authority, which is immoral to have.

Dwarkesh Patel 1:23:42

Well, David, thanks so much for coming on the podcast.

David Deutsch 1:23:49

Fascinating, thank you for inviting me.
