Date: 2003-08-02 08:05:13
Topics: Singularitarianism, Prediction, Sociocultural evolution, Philosophy of artificial intelligence, Transhumanists, Superintelligence, Friendly artificial intelligence, Nick Bostrom, Risks to civilization, humans, and planet Earth, Futurology, Time, Future

The Ethics of Superintelligent Machines


Source URL: www.nickbostrom.com


File Size: 159.84 KB


Similar Documents

Pedestrian Friendly Traffic Signal Control: Final Research Report
Stephen F. Smith (PI), Gregory J. Barlow, Hsu-Chieh Hu, Ju-Hsuan Hua
DocID: 1r736

General Purpose Intelligence: Arguing the Orthogonality Thesis (Analysis and Metaphysics)
Stuart Armstrong, Future of Humanity Institute, Oxford Martin School
DocID: 1qbtV

A peer-reviewed electronic journal published by the Institute for Ethics and Emerging Technologies (ISSN) – December 2015
DocID: 1mx9A

Aligning Superintelligence with Human Interests: An Annotated Bibliography
Nate Soares, Machine Intelligence Research Institute
DocID: 1gGhe

AI Risk Bibliography 2012
MIRI (Machine Intelligence Research Institute)
DocID: 1gaHz