Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431
Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this podcast by checking out our sponsors:
- Yahoo Finance:
- MasterClass: to get 15% off
- NetSuite: to get a free product tour
- LMNT: to get a free sample pack
- Eight Sleep: to get $350 off
TRANSCRIPT:
EPISODE LINKS:
Roman’s X:
Roman’s Website:
Roman’s AI book:
PODCAST INFO:
Podcast website:
Apple Podcasts:
Spotify:
RSS:
Full episodes playlist:
Clips playlist:
OUTLINE:
0:00 - Introduction
2:20 - Existential risk of AGI
8:32 - Ikigai risk
16:44 - Suffering risk
20:19 - Timeline to AGI
24:51 - AGI Turing test
30:14 - Yann LeCun and open source AI
43:06 - AI control
45:33 - Social engineering
48:06 - Fearmongering
57:57 - AI deception
1:04:30 - Verification
1:11:29 - Self-improving AI
1:23:42 - Pausing AI development
1:29:59 - AI Safety
1:39:43 - Current AI
1:45:05 - Simulation
1:52:24 - Aliens
1:53:57 - Human mind
2:00:17 - Neuralink
2:09:23 - Hope for the future
2:13:18 - Meaning of life
SOCIAL:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium: @lexfridman
- Reddit:
- Support on Patreon: