Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.

Please support this podcast by checking out our sponsors:
- Linode: to get $100 free credit
- House of Macadamias: and use code LEX to get 20% off your first order
- InsideTracker: to get 20% off

EPISODE LINKS:
Eliezer's Twitter:
LessWrong Blog:
Eliezer's Blog page:
Books and resources mentioned:
1. AGI Ruin (blog post):
2. Adaptation and Natural Selection:

PODCAST INFO:
Podcast website:
Apple Podcasts:
Spotify:
RSS:
Full episodes playlist:
Clips playlist:

OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love

SOCIAL:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium: @lexfridman
- Reddit:
- Support on Patreon: