The AI Alignment Problem: Why It's Hard, and Where to Start
Eliezer Yudkowsky
Machine Intelligence Research Institute
May 5, 2016

Source: intelligence.org