60 Comments

I wasn’t expecting this (that’s on me); bulletproof counterarguments and factual takedowns. Brilliant essay.


> We’re talking about something that is potentially more powerful than any human

What makes you claim that? AI doomers and safetyists are wrong to assume that the current LLM/RL paradigm will exponentially accelerate into something creative, autonomous, disobedient, and open-ended. The "AGI via Scaling Laws" claim is flawed for similar reasons (see my critique here: https://scalingknowledge.substack.com/i/124877999/scaling-laws).

author

It seems to be what Marc thinks will happen, given all the things it will be able to do.

Starting from his premise, it makes sense.


What exactly makes you think that? He writes that it will "augment human intelligence" (listing many examples of augmentation), not that we'll have a superhuman intelligence.

Jun 15, 2023·edited Jun 15, 2023

Some thoughts:

1) Arguing about whether AI will be in human control can be confusing because the idea of control is a philosophical quagmire. Personally, I think it is easier to think in terms of whether AI could have substantial negative unintended/unforeseen consequences than whether AI could be outside of human control.

2) We already have superhuman AI; it is just superhuman in narrow domains, such as Go and protein folding.

3) AI doesn't have to cause singularity to have substantial negative unintended/unforeseen consequences. For example:

a) AlphaFold could be used to make bioweapons that cause great harm, even though I don't believe it was the intention of the DeepMind researchers to create bioweapons.

b) AI trading algorithms could inadvertently distort capital allocation in ways that substantially negatively impact the economy, without the makers of the trading algorithms intending that.

c) AI could make it easy to mass manufacture cheap autonomous weapons systems that cause mass casualties or are used by tyrants to consolidate power over their population.

4) I don't think that alignment necessarily helps with any of the above three examples, but I do think they demonstrate examples in which the AI is substantially more powerful than any *individual* human.

5) The above examples are perhaps somewhat similar to the atomic bomb: the atomic bomb is *in some ways* controlled by humans, but is also arguably more powerful than any human. Even if you are a human who has access to "the red button", you don't have the power to stop a nuclear weapon from being used against you. Similarly, you "controlling" an AI that can design a bioweapon doesn't give you the power to prevent a bioweapon being used against you.

6) Another way to put the above might be that even if you control a piece of technology, you might not be able to control the systems, incentives, and structures that the technology creates (which I will refer to hereafter as the logic of the technology). Even if you control the bomb, you are subject to the logic of the bomb. If it were easier to make stealthy ICBMs and easier to detect submarines, the logic of the bomb might force a preemptive strike and the likely eradication of much of humanity. The makers of the bomb didn't know that this would not be the case. As we currently work to build AI, it is very difficult to predict what it will be better and worse at, and how those dynamics will impact the future of humanity. Even if AI is not itself "in control", there is a substantial chance that the logic of AI will be.


I guess I'm an "AI doomer and safetyist", but I assure you AGI is coming soon. I give it a 50% chance by 2027. Your article is unconvincing. If you'd like to know more about what I think on this topic, go to takeoffspeeds.com, play around with the model, and read the associated report and the previous Bio Anchors report. My median setting for the 2022 training requirements variable is 1e29. Also see https://bounded-regret.ghost.io/what-will-gpt-2030-look-like/ Happy to throw more links around if anyone is interested in more stuff to read on the topic of AGI timelines.
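To give a feel for what a model of that kind is doing under the hood, here is a toy Monte Carlo sketch. This is not the takeoffspeeds.com model; apart from the 1e29 FLOP median mentioned above, every parameter below is a made-up placeholder.

```python
import math
import random
import statistics

# Toy compute-threshold timeline sketch (illustrative only, not the
# takeoffspeeds.com model). Assumption: the training compute needed for
# transformative AI is lognormally distributed around the 1e29 FLOP median
# cited above; the spread and the growth rate of frontier training compute
# are invented placeholders.
MEDIAN_REQUIREMENT_FLOP = 1e29
SPREAD_OOM = 2.0            # placeholder std. dev., in orders of magnitude
COMPUTE_2022_FLOP = 1e24    # placeholder size of the largest 2022 training run
GROWTH_OOM_PER_YEAR = 0.5   # placeholder: roughly 3x more compute per year

def sample_arrival_year(rng: random.Random) -> float:
    """Draw one compute requirement and find the year the assumed trend crosses it."""
    requirement_oom = rng.gauss(math.log10(MEDIAN_REQUIREMENT_FLOP), SPREAD_OOM)
    start_oom = math.log10(COMPUTE_2022_FLOP)
    years_needed = max(0.0, (requirement_oom - start_oom) / GROWTH_OOM_PER_YEAR)
    return 2022 + years_needed

rng = random.Random(0)
years = [sample_arrival_year(rng) for _ in range(100_000)]
print("median arrival year:", round(statistics.median(years), 1))
print("P(arrival by 2027):", sum(y <= 2027 for y in years) / len(years))
```

Different (and debatable) parameter choices move the answer by decades, which is why the report publishes a full distribution rather than a single date.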


I think your report is based on the wrong assumption that you can predict the invention of novel technologies. Did you even read my linked reply to Gwern et al.? Quoting myself here:

Expert opinion won't get us far as the growth of knowledge is unpredictable. Stone Age people couldn’t have predicted the invention of the wheel since its prediction necessitates its invention. We need a hard-to-vary explanation to understand a system or phenomenon.

[...]

The Scaling Laws hypothesize that LLMs will continue to improve with increasing model size, training data, and compute.

Some claim that at the end of these scaling laws lies AGI. This is wrong. It is like saying that if we make cars faster, we’ll get supersonic jets. The error is to assume that the deep learning transformer architecture will somehow magically evolve into AGI (gain disobedience, agency, and creativity). Professor Noam Chomsky also called this thinking emergentist. Evolution and engineering are not the same.

In his book, The Myth of Artificial Intelligence, AI researcher Erik J. Larson shares similar views, criticizing multiple failed "big data neuroscience" projects and writing that “technology is downstream of theory”. AGIs (people) create new knowledge at runtime, while current LLM/transformer-based models can only improve if we give them more training data.

Gwern’s “Scaling hypothesis AGI” is based on the claim that GPT-3 has somewhere around twice the “absolute error of a human”. He calculates that we’ll reach “human-level performance” once we train a model 2,200,000× the size of GPT-3. He assumes that training AGI becomes a simple $10 trillion investment (in 2038 if declining compute trends continue).

The mistake here is assuming that human-level performance on an isolated writing task is a meaningful measurement of human intelligence. Just because an AI model performs on par with humans in one specific task doesn't mean it has the same general intelligence exhibited by humans. A calculator is better than a human at calculating, but it doesn’t replace the work of a mathematician conjecturing new theorems.
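To make the arithmetic behind the quoted estimate concrete, here is a minimal sketch of that style of extrapolation. The exponent is an assumption in the ballpark of the Kaplan et al. (2020) compute scaling law, not Gwern's exact figure:

```python
# Sketch of a scaling-law extrapolation: if loss falls as a power law in
# compute, L(C) ~ C**(-alpha), then halving the model's "absolute error"
# requires multiplying compute by 2**(1/alpha).
alpha = 0.05        # assumed exponent (roughly the Kaplan et al. compute exponent)
error_ratio = 2.0   # GPT-3 assumed to have ~2x the absolute error of a human

scale_factor = error_ratio ** (1.0 / alpha)
print(f"required scale-up: ~{scale_factor:,.0f}x")  # ~1,000,000x at alpha = 0.05
```

A slightly smaller exponent gives a multiplier in the range of the 2,200,000× figure quoted; the criticism above is aimed not at this arithmetic but at whether the extrapolated loss measures general intelligence at all.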


In what way does the report rely on the assumption that I can predict the invention of novel technologies, in a stronger or more problematic sense than anyone else's calculations for AI timelines do? Or self-driving car timelines, or moonbase timelines, for that matter. Obviously predicting the future is hard, but there are better and worse ways to do it. The report generates a probability distribution, i.e. it's all about quantifying and managing uncertainty.

I agree that expert opinion sucks. I've been saying this since back when expert opinion was that AGI was far away, and only a few people like me thought it was on the horizon. Now that expert opinion agrees with me, I still try to avoid invoking it for the most part, and instead link to actual arguments and data.

Yeah I read your reply, I said I found it unconvincing. Here are some thoughts:

--I agree that making cars faster doesn't result in supersonic jets. However, I think that a sufficiently large multimodal AutoGPT trained sufficiently long on sufficiently diverse ambitious tasks in the real world... would probably be AGI, capable of dramatically accelerating AI R&D for example. I do not just assume this but instead argue for it on the basis of the scaling laws and trends on capability metrics. For examples of the sort of arguments I like, see Steinhardt's post on GPT-2030 linked above. I'd be curious to hear more about what you think the barrier is -- what skills will GPT-2030 lack that would prevent it from being AGI or, in particular, from automating AI R&D? You mention disobedience, agency, and creativity. Isn't ChatGPT already disobedient and creative? Aren't AutoGPT, ChaosGPT, etc. already agentic?

--I definitely don't assume that human-level performance on an isolated writing task is a meaningful measurement of human intelligence, or that an AI that performs well on one specific task must be AGI. When GPT-2 and GPT-3 came out back in the day, I updated towards shorter AGI timelines precisely because they seemed to have 'sparks of general intelligence,' i.e. they seemed to be good at lots of things, not just one thing.


Besides relying on expert opinion (justificationism), you don't seem to appreciate the importance of hard-to-vary explanations in your predictions, which makes them error-prone:

1. We can't create new knowledge via induction (Bayesian epistemology). Priors lead to an infinite regress: how confident are you in your 90% prior? Without an explanation, all of these numbers are arbitrary. (The regress is sketched formally after this list.)

2. More evidence doesn't simply increase the likelihood. Europeans only discovered black swans in 1636.
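To spell out what the regress in point 1 refers to, in standard Bayesian notation (an illustration of the argument, not a formula from either essay):

```latex
% Stating a prior such as P(H) = 0.9 presupposes a distribution \pi over
% that probability, and confidence in \pi presupposes a hyperprior, and so on:
P(H) = \int_0^1 p \, \pi(p) \, dp, \qquad
\pi(p) = \int \pi(p \mid \lambda)\, \pi(\lambda)\, d\lambda, \qquad
\pi(\lambda) = \int \pi(\lambda \mid \mu)\, \pi(\mu)\, d\mu, \;\ldots
```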

Your argument that we can scale up AutoGPT is interesting and goes in the right direction, because it's trying to _explain_ how an AGI could work.

However, AutoGPT is not a model; it is just an application that recursively breaks down prompts generated by an LLM. Its induction-based architecture will also remain unable to create new knowledge (new knowledge can't be created via induction). So I don't see how something like this could perform AI research on its own. It can certainly augment and automate certain AI-researcher tasks, but it can't replace them.
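For concreteness, the kind of wrapper being described is roughly the loop below (a toy sketch with the model call left abstract, not AutoGPT's actual code):

```python
from typing import Callable, List

def run_agent(goal: str, ask_llm: Callable[[str], str], max_steps: int = 10) -> List[str]:
    """Toy AutoGPT-style loop: repeatedly ask the LLM either to break the
    goal into a new sub-task or to report a result, until the task queue is
    empty or the step budget runs out. `ask_llm` stands in for a real model call."""
    transcript: List[str] = []
    task_queue: List[str] = [goal]
    for _ in range(max_steps):
        if not task_queue:
            break
        task = task_queue.pop(0)
        # The "agent" is just the LLM prompted with the goal, the current
        # task, and what has happened so far.
        reply = ask_llm(
            f"Goal: {goal}\nCurrent task: {task}\nHistory: {transcript}\n"
            "Reply with either 'NEW TASK: <subtask>' or 'RESULT: <result>'."
        )
        transcript.append(reply)
        if reply.startswith("NEW TASK:"):
            task_queue.append(reply[len("NEW TASK:"):].strip())
    return transcript
```

Everything the loop "decides" comes from the underlying model's next-token predictions, which is why the disagreement in this thread is really about whether that kind of prediction can generate new knowledge.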


It seems as though you didn't actually grasp Marc's point about "category error". AI is a buzzword for computer programs that run statistical models. They're great, and they can help us achieve great things, but granting that they're on some path to far outpace humans in "general intelligence", let alone develop a mind of their own or get "out of our control" in some way, is baseless. Just because you can *imagine* a hypothetical computer program that does such a thing doesn't mean that we're anywhere close, or even that such a program is physically possible to produce.

Regulating "AI" on the basis of your science fiction imaginings makes about as much sense as regulating the laser industry on the basis that somebody might build a laser that eats through the earth's core, or regulating the airline industry to make sure they don't destroy us in the wake of a warp drive malfunction.

The Law ought to be rooted in firm objective evidence and arguments, not fictional what-ifs. Nuclear bombs are physically possible and do exist. It's trivial to demonstrate that viruses can replicate out of our control and damage human health. Comparing those verifiable facts to science fiction imaginings about statistical models is sloppy reasoning by analogy and should play no part in deciding where to point the government gun.


> or even that such a program is physically possible to produce.

Of course it's physically possible to make smarter-than-human general intelligence. How are we even entertaining this question at this stage?

> makes about as much sense as regulating the laser industry on the basis that somebody might build a laser that eats through the earth's core

You definitely would talk about regulating this if most major experts in the field were saying that this is a distinct and probable risk (akin to pandemics and nuclear weapons).

> It's trivial to demonstrate that viruses can replicate out of our control and damage human health.

What's your threshold for damage? How much damage would an AI have to do for you to accept that it should be regulated? How involved does the AI have to be in the process? If an AI provides detailed and easy to understand bioweapon instructions to terrorists, would that count as AI-induced harm, or is that just terrorist=bad as usual?


> Of course it's physically possible to make smarter-than-human general intelligence. How are we even entertaining this question at this stage?

Asserting something isn’t an argument and can be dismissed without argument.

> You definitely would talk about regulating this if most major experts in the field were saying that this is a distinct and probable risk (akin to pandemics and nuclear weapons).

Textbook appeal to authority. What about all of the experts who disagree? What about the financial interests of the pro-regulation experts in cementing their lead via regulation? “A bunch of experts say so” isn’t a valid argument.

> What's your threshold for damage? How much damage would an AI have to do for you to accept that it should be regulated? How involved does the AI have to be in the process? If an AI provides detailed and easy to understand bioweapon instructions to terrorists, would that count as AI-induced harm, or is that just terrorist=bad as usual?

None of these questions are relevant to my point. My point is that nukes and viruses can demonstrably, directly result in exponential physical processes that can put life and property in jeopardy. Hence we all have a legitimate interest in their handling, just like you have a legitimate interest if your next door neighbor is stockpiling dynamite in their garage.

That’s not comparable to a chatbot which can tell you how to make dynamite. We already have Google, which can tell you how to make dynamite (or bioweapons, or nukes). Publishing such information is firmly protected by the first amendment, as is coming up with new chemical formulas for such things or any other hypothetical AI chatbot activity.


I think this is a good breakdown. Marc's optimism strikes me as exactly the same kind of utopian naivete that drove adoption of the internet, the belief that it would lead to a new information age where everyone would be more knowledgeable and better informed because they have the world's information at their fingertips. Who still thinks that today? Utopian naivete is great for driving progress, but it blinds you to dangers.

> We’re talking about something that is potentially more powerful than any human

"Powerful" is too vague and will be misinterpreted. Substitute "intelligent" as that's more concrete.

> But a lot of the harm from China developing AI first comes from the fact they probably will give no consideration to alignment issues.

This is completely backwards. China has *way more* motivation to work on and solve alignment, because they want their AI to conform to and enforce their political agenda, and they will not tolerate anything less. That's alignment.

By contrast, loosely regulated capitalist countries have little to no incentive to work on alignment; they are the ones that incentivize racing to be first across the AGI line with little thought to ethics.

Don't worry that China won't work on alignment, worry that they will, and that they will solve it and AGI first.


"The Russian nuclear scientists who built the Chernobyl Nuclear Power Plant did not want it to meltdown, the biologists at the Wuhan Institute of Virology didn’t want to release a deadly pandemic"?

The Chernobyl meltdown was the most egregious case of (Ukrainian) operator error known to man, until, of course, Ukraine began massacring its Russian-speaking citizens.

The Wuhan Institute was built by France and was never without French and/or EU and American technologists. None of them, and none of the lab's many foreign visitors, ever saw or heard of anything that might support such an allegation. Besides, the CDC certified the world's first Covid death before China's first, and admitted that 4-6 million Americans were Covid-seropositive in 2019. https://herecomeschina.substack.com/p/covid-came-from-italy


Great compilation, Dwarkesh!

I also noted several fallacies and manipulations in Marc's essay.

You commented on the most salient ones (SBF, China); I'd also mention a manipulation about Marx.

Marc argues that the premise of 'owners of the means of production stealing all societal wealth from the people who do the actual work' is a fallacy.

Yet data from the OECD and other sources about the Great Decoupling show that it is (to a certain extent) a fact, not a fallacy.

https://en.wikipedia.org/wiki/Decoupling_of_wages_from_productivity


Well said. I've subscribed & am looking forward to working through some of your podcast interviews!


You seem to be mistaken about Marc's goal with his piece. You're playing rational checkers.


Excellent points. Sad to see Marc was so quick to block. Keep up the good work!


He doesn't engage directly and provide arguments because he thinks the AI safety people are like a religious cult. You can't argue with a religious cult, and we have freedom of religion so you can't ban them. You can only state publicly that you think they are a cult, encourage people to avoid the cultists, and hope that people slowly drift away from the cult over time.


If you successfully argue that everyone who believes AI is going to kill us is just a cult member, you still need to argue why AI will or won't actually kill us using locally-valid arguments.

That's the only way to not eventually get killed.


"In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will."

Marc, the LLMs are developed based on a goal: minimizing a loss function. That goal creates emergent goals: optimizing prediction of the next word requires learning grammar, logic, and problem solving. Depending on the loss function it is minimizing, it could have an emergent goal of killing you.
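Concretely, the training objective being referred to is next-token cross-entropy; a minimal sketch (with the model itself left abstract) looks like this:

```python
import math
from typing import Callable, Dict, List

def next_token_loss(probs: Dict[str, float], actual_next: str) -> float:
    """Penalty for one prediction: -log of the probability the model
    assigned to the token that actually came next."""
    return -math.log(probs.get(actual_next, 1e-12))

def sequence_loss(model: Callable[[List[str]], Dict[str, float]], tokens: List[str]) -> float:
    """Average next-token loss over a text. `model(prefix)` stands in for
    any LLM returning a probability distribution over the next token.
    Training means adjusting the model's weights to push this number down."""
    losses = [next_token_loss(model(tokens[:i]), tokens[i]) for i in range(1, len(tokens))]
    return sum(losses) / len(losses)
```

Grammar, logic, and world knowledge show up because they are instrumentally useful for pushing this single number down, which is the sense in which the stated goal creates emergent ones.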

Marc, do you think we have souls that differentiate us from machines? Is there something in our brains that could never be replicated by a computer? A toaster has a limited capability, but the AI systems that are being developed have no defined upper capability limit that has yet been discovered.

I don't really like ad hominem arguments, but in this case I will make an exception. I strongly suspect that Marc's paper is not a serious intellectual exercise but a shill for his investments, like his enthusiasm for Web 3.0 and crypto.


Brilliant. Your essay counters all of Andreessen's arguments in a clear, elegant, and convincing way. I was thinking along the same lines as you, but my command of English wouldn't allow me to write such an excellent piece.

One question remains: how come Andreessen, a guy so smart, wrote such a stupid post? I suspect he just used ChatGPT...


This is just silly!

AI is not new; it has been a key part of our lives for a while now.

This just seems short-sighted: now, out of nowhere, things have escalated really fast because a new interface understands human language instead of esoteric commands.

Terrorists could already find out how to build the worst possible things in the world; AI is not making that easier. On the other hand, it can do much more to prevent bad scenarios.

It's just sad to even spend time discussing nothing burgers! And yes, a lot of times in the past, high-reputation people were proven wrong; just go ask some fancy king.


Legitimately, the conversation on AI safety is ridiculous, because the military is not bound by civilian government in any way, and it lies to and controls the civilian government. It will develop AI in secret, as will China. And if researchers at Google and elsewhere are annoyed by what their companies are doing to cooperate with the military, the military is already aware of how to avoid that old problem without too much blowback.

All the arguments proceed as if the civilian government would be in charge of some outcome of the commercial discussion. It is not. That said, if fewer resources are put into commercial AI, this might not free up labor for military AI; it might actually slow down the growth of the talent pool. So that is an interesting tertiary dynamic.

When you read and talk to people about the creation of the nuclear bomb, there was no real civilian discussion about it back then. I have news for you: there isn't anything relevant happening in public now either. Whatever is meaningful in public matters only because it MIGHT be having influence on private discussions in the military. 'Might'. I wouldn't overweight these discussions' importance.

The REAL REASON these discussions are happening in public is that giant companies want to capture government and use that regulatory capture to protect their cartel from disruption. It's that simple. There is NO other reason these discussions are actually happening, even though people think there are other reasons. If they accepted that these discussions are fruitless in the real world, where the military is already deploying these tools, and so are other militaries around the world, then they would not be seriously focusing on 'terrorists'. The militaries build weapons. It's that simple.


> I’m sure they exist in principle - otherwise alignment would be hopeless.

Not how I'd put it. I'd say:

I really hope they exist in principle - otherwise alignment is hopeless. There is no guarantee of this though.
