60 Comments

I wasn’t expecting this (that's on me): bulletproof counterarguments and factual takedowns. Brilliant essay.


> We’re talking about something that is potentially more powerful than any human

What makes you claim that? AI doomers and safetyists are wrong to assume that the current LLM/RL paradigm will exponentially accelerate into something creative, autonomous, disobedient, and open-ended. The "AGI via Scaling Laws" claim is flawed for similar reasons (see my critique here: https://scalingknowledge.substack.com/i/124877999/scaling-laws).
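For reference, the scaling laws at issue are empirical power-law fits of roughly this form (the Chinchilla-style parameterization; whether driving this loss down ever yields the properties listed above is exactly what's in dispute):

```latex
% Chinchilla-style fit: loss falls as a power law in parameter count N and
% training tokens D, toward an irreducible entropy term E.
% A, B, alpha, beta are empirically fitted constants, not fundamental ones.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```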


It seems as though you didn't actually grasp Marc's point about "category error". AI is a buzzword for computer programs that run statistical models. They're great, and they can help us achieve great things, but granting that they're on some path to far outpace humans in "general intelligence", let alone to develop minds of their own or get "out of our control" in some way, is baseless. Just because you can *imagine* a hypothetical computer program that does such a thing doesn't mean that we're anywhere close, or even that such a program is physically possible to produce.

Regulating "AI" on the basis of your science-fiction imaginings makes about as much sense as regulating the laser industry on the basis that somebody might build a laser that eats through the Earth's core, or regulating the airline industry to make sure it doesn't destroy us in the wake of a warp-drive malfunction.

The law ought to be rooted in firm, objective evidence and arguments, not fictional what-ifs. Nuclear bombs are physically possible and do exist. It's trivial to demonstrate that viruses can replicate out of our control and damage human health. Comparing those verifiable facts to science-fiction imaginings about statistical models is sloppy reasoning by analogy and should play no part in deciding where to point the government gun.


I think this is a good breakdown. Marc's optimism strikes me as exactly the same kind of utopian naivete that drove adoption of the internet: the belief that it would lead to a new information age where everyone would be more knowledgeable and better informed because they have the world's information at their fingertips. Who still thinks that today? Utopian naivete is great for driving progress, but it blinds you to dangers.

> We’re talking about something that is potentially more powerful than any human

"Powerful" is too vague and will be misinterpreted. Substitute "intelligent" as that's more concrete.

> But a lot of the harm from China developing AI first comes from the fact they probably will give no consideration to alignment issues.

This is completely backwards. China has *way more* motivation to work on and solve alignment, because they want their AI to conform to and enforce their political agenda, and they will not tolerate anything less. That's alignment.

By contrast, loosely regulated capitalist countries have little to no incentive to work on alignment; they are the ones incentivizing whoever crosses the AGI line first, with little thought to ethics.

Don't worry that China won't work on alignment; worry that they will, and that they will solve it, and AGI, first.


Great compilation, Dwarkesh!

I also noted several fallacies and manipulations in Marc's essay.

You commented on the most salient ones (SBF, China); I'd also mention his manipulation regarding Marx.

Marc argues that the premise of 'owners of the means of production stealing all societal wealth from the people who do the actual work' is a fallacy.

Yet data from the OECD and other sources on the Great Decoupling show that it is (to a certain extent) a fact, not a fallacy.

https://en.wikipedia.org/wiki/Decoupling_of_wages_from_productivity
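To be concrete, "decoupling" here just means the gap between productivity growth and real median compensation growth. A toy illustration in Python, with invented index values rather than actual OECD figures:

```python
# Illustrative only: invented index values, not OECD data.
productivity = 1.70   # output per hour worked, indexed to 1.00 in a base year
compensation = 1.15   # real median hourly compensation, same base year

# The "decoupling" gap: how far pay growth lags productivity growth.
gap = productivity - compensation
print(f"Productivity up {productivity - 1:.0%}, pay up {compensation - 1:.0%}; "
      f"gap of {gap:.0%} relative to the base year.")
```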


You seem to be mistaken about Marc's goal with his piece. You're playing rational checkers.


Excellent points. Sad to see Marc was so quick to block. Keep up the good work!


"The Russian nuclear scientists who built the Chernobyl Nuclear Power Plant did not want it to meltdown, the biologists at the Wuhan Institute of Virology didn’t want to release a deadly pandemic"?

The Chernobyl meltdown was the most egregious case of (Ukrainian) operator error known to man until, of course, Ukraine began massacring its Russian-speaking citizens.

The Wuhan Institute was built by France and was never without French and/or EU and American technologists. None of them, and none of the lab's many foreign visitors, ever saw or heard of anything that might support such an allegation. Besides, the CDC certified the world's first Covid death before China's first, and admitted that 4-6 million Americans were Covid-seropositive in 2019. https://herecomeschina.substack.com/p/covid-came-from-italy


He doesn't engage directly and provide arguments because he thinks the AI safety people are like a religious cult. You can't argue with a religious cult, and we have freedom of religion so you can't ban them. You can only state publicly that you think they are a cult, encourage people to avoid the cultists, and hope that people slowly drift away from the cult over time.


"In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will."

Marc, LLMs are developed around an explicit goal: minimizing a loss function. That goal creates emergent goals, because optimizing next-word prediction requires learning grammar, logic, and problem-solving. Depending on the loss function it is minimizing, it could have an emergent goal of killing you.
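Concretely, that explicit goal looks something like this (a minimal PyTorch-style sketch with illustrative names, not anything from Marc's essay or a specific lab's code):

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """The only explicit 'goal' an LLM is given during pretraining.

    logits: model outputs, shape (batch, seq_len, vocab_size)
    tokens: training text ids, shape (batch, seq_len)
    """
    # Predict token t+1 from the prefix ending at token t.
    pred = logits[:, :-1, :].reshape(-1, logits.size(-1))
    target = tokens[:, 1:].reshape(-1)
    # Grammar, logic, and problem-solving are never specified here;
    # they emerge only insofar as they reduce this number.
    return F.cross_entropy(pred, target)
```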

Marc, do you think we have souls that differentiate us from machines? Is there something in our brains that could never be replicated by a computer? A toaster has sharply limited capabilities, but the AI systems being developed have no known upper limit on capability.

I don't really like ad hominem arguments, but in this case I will make an exception. I strongly suspect that Marc's paper is not a serious intellectual exercise but a shill for his investments, like his enthusiasm for Web 3.0 and crypto.


Brilliant. Your essay counters all of Andreessen's arguments in a clear, elegant, and convincing way. I was thinking along the same lines as you, but my command of English wouldn't allow me to write such an excellent piece.

One question remains: how come Andreessen, a guy so smart, wrote such a stupid post? I suspect he just used ChatGPT...


This is just silly!

AI is not new; it has been a key part of our lives for a while now.

This panic just seems short-sighted: now, out of nowhere, things have escalated really fast, because a new interface understands human language instead of esoteric commands.

Terrorists could already find out how to build the worst possible things in the world; AI is not making that easier. On the other hand, it can do much to prevent bad scenarios.

It's just sad to even spend time discussing nothingburgers! And yes, many times in the past, people of high reputation were proven wrong; just go ask some fancy king.


Legitimately, the conversation on AI safety is ridiculous, because the military is not bound by civilian government in any way, and it lies to and controls the civilian government. It will develop AI in secret, as will China. And if researchers at Google and elsewhere are annoyed by what their companies are doing to cooperate with the military, the military already knows how to route around that old problem without too much blowback.

All the arguments proceed AS IF the civilian government would be in charge of some outcome of the commercial discussion. It is not. That said, if fewer resources are put into commercial AI, this might not free up labor for military AI; it might actually slow the growth of the talent pool. So that is an interesting tertiary dynamic.

When you read and talk to people about the creation of the nuclear bomb, there was no real civilian discussion about it back then. I have news for you: there isn't anything relevant happening in public now either. Whatever is meaningful in public matters only because it MIGHT be influencing private discussions in the military. 'Might'. I wouldn't overweight the importance of these discussions.

The REAL REASON these discussions are happening in public is that giant companies want to capture government and use that regulatory capture to protect their cartel from disruption. It's that simple. There is NO other reason these discussions are actually happening, even though people think there are. If they accepted that these discussions are fruitless in a real world where the military is already deploying these tools, as are other militaries around the world, then they would not be seriously focusing on 'terrorists'. The militaries build weapons. It's that simple.


> I’m sure they exist in principle - otherwise alignment would be hopeless.

Not how I'd put it. I'd say:

I really hope they exist in principle - otherwise alignment is hopeless. There is no guarantee of this though.


Well said. I've subscribed & am looking forward to working through some of your podcast interviews!
