Listen now | Paul Christiano is the world’s leading AI safety researcher. My full episode with him is out! We discuss: whether he regrets inventing RLHF, and whether alignment is necessarily dual-use; why he has relatively modest timelines (40% by 2040, 15% by 2030); and what we want the post-AGI world to look like (do we want to keep gods enslaved forever?).
Fascinating podcast episode, especially for someone like me who is not in favor of artificial intelligence systems being promoted everywhere. I was proud to be called a Luddite on another website!
Incredible guest. Excited to listen to this one.