I had a lot of fun chatting with Shane Legg - Founder and Chief AGI Scientist, Google DeepMind! We discuss: why he expects AGI around 2028, how to align superhuman models, what new architectures are needed for AGI, and whether DeepMind has sped up capabilities or safety more.
Does "alignment" include the idea of avoiding "political incorrect" statements that nevertheless happen to be scientifically true? Involving the genetics of intelligence for example. If the answer is yes I would like to hear more about it. If the answer is no I would like to hear more about that too.
Shane Legg (DeepMind Founder) - 2028 AGI, New Architectures, Aligning Superhuman Models
It's Geoffrey Irving, not Jeffrey, and the website is wrong (correct: https://naml.us/).
Does "alignment" include the idea of avoiding "political incorrect" statements that nevertheless happen to be scientifically true? Involving the genetics of intelligence for example. If the answer is yes I would like to hear more about it. If the answer is no I would like to hear more about that too.