Jan Leike

8 Podcast Episodes

[Linkpost] “OpenAI’s massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast)” by 80000_Hours

We just published an interview: Jan Leike on OpenAI’s massive push to make superintelligence safe in 4 years or less. Yo...

8 Aug 2023

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent...

7 Aug 2023

2hr 51mins

EA - Jan Leike: On the windfall clause by Cullen OKeefe

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist ...

5 Aug 2022

4mins

AF - [Link] A minimal viable product for alignment by Jan Leike

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist ...

6 Apr 2022

AF - [Link] Why I’m excited about AI-assisted human feedback by Jan Leike

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist ...

6 Apr 2022

96. Jan Leike - AI alignment at OpenAI

The more powerful our AIs become, the more we’ll have to ensure that they’re doing exactly what we want. If we don’t, we...

29 Sep 2021

1hr 5mins

AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike

Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His is one of three teams withi...

16 Dec 2019

58mins

#23 - How to actually become an AI alignment researcher, according to Dr Jan Leike

Want to help steer the 21st century’s most transformative technology? First complete an undergrad degree in computer sci...

16 Mar 2018

45mins
