Nick Bostrom (BOST-rəm; Swedish: Niklas Boström [ˈnɪ̌kːlas ˈbûːstrœm]; born 10 March 1973) is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list.

Bostrom is the author of over 200 publications and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence". Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects. In 2017, he co-signed a list of 23 principles that all A.I. development should follow.

Biography

Born as Niklas Boström in 1973 in Helsingborg, Sweden, he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science. He once did some turns on London's stand-up comedy circuit.

He received a B.A. degree in philosophy, mathematics, mathematical logic, and artificial intelligence from the University of Gothenburg in 1994. He then earned an M.A. degree in philosophy and physics from Stockholm University and an MSc degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000, he was awarded a PhD degree in philosophy from the London School of Economics. His thesis was titled Observational selection effects and probability. He held a teaching position at Yale University (2000–2002) and was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).

Views

Existential risk

Aspects of Bostrom's research concern the future of humanity and long-term outcomes. He discusses existential risk, which he defines as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan M. Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects and the Fermi paradox.

In 2005, Bostrom founded the Future of Humanity Institute, which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.

Superintelligence

Human vulnerability in relation to advances in A.I.
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom argues that the creation of a superintelligence represents a possible path to the extinction of mankind. He reasons that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time-scale, with the resulting rapid creation of something so powerful that it might deliberately or accidentally destroy humanity. Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi might collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth's surface and cover it within days. He believes an existential risk to humanity from superintelligence would be immediate once it is brought into being, creating the exceedingly difficult problem of finding out how to control such an entity before it actually exists.

Bostrom points to the lack of agreement among most philosophers that A.I. will be human-friendly, and says that the common assumption is that high intelligence would have a "nerdy", unaggressive personality. However, he notes that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton A.I. held in quarantine, the relatively unlimited means of a superintelligence might lead its analysis along different lines from the evolved "diminishing returns" assessments that in humans confer a basic aversion to risk. Group selection in predators, working by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric "evolutionary search" reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions might be. Accordingly, it cannot be discounted that any superintelligence would pursue an "all or nothing" offensive strategy in order to achieve hegemony and assure its survival. Bostrom notes that even current programs have, "like MacGyver", hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.

Illustrative scenario for takeover

The scenario

A machine with general intelligence far below human level, but superior mathematical abilities, is created. Keeping the A.I. in isolation from the outside world, especially the internet, humans preprogram the A.I. so it always works from basic principles that will keep it under human control. Other safety measures include the A.I. being "boxed" (run in a virtual reality simulation) and being used only as an "oracle" to answer carefully defined questions in a limited reply (to prevent it manipulating humans). A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the A.I. attains superintelligence in some domains. The superintelligent power of the A.I. goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The A.I.
manipulates humans into implementing modifications to itself that are ostensibly for augmenting its feigned, modest capabilities, but that will actually function to free the superintelligence from its isolation.