Prof. Nick Bostrom on the Existential Risks of Artificial Intelligence

Philosopher Nick Bostrom of Oxford University’s Future of Humanity Institute agrees with Elon Musk that more funding should go toward research aimed at mitigating the potential existential risks of artificial superintelligence.

Nick Bostrom’s 2014 bestseller, Superintelligence: Paths, Dangers, Strategies, has been cited by both Elon Musk and Bill Gates as shaping their thinking on the existential risks of artificial superintelligence.

This video aired on July 20, 2017.
