The Hidden Complexity of Wishes

This video is about AI alignment. At the moment, humanity has no idea how to make AIs follow complex goals that track human values. This video introduces a series focused on what is sometimes called “the outer alignment problem”. In future videos, we’ll explore how this problem affects machine learning systems today and how it could lead to catastrophic outcomes for humanity. The text of this video has been slightly adapted from an original article written by Eliezer Yudkowsky. You can read the original article here: