
Nick Bostrom
An in-depth analysis of the emergence of machine superintelligence, its potential risks, and strategies for ensuring a beneficial future.
Nick Bostrom is the founding director of the Future of Humanity Institute at Oxford University.
Section 1
Imagine a world where sparrows, small and vulnerable, dreamt of an owl to help them build nests and guard them from predators. This allegory, which opens the book, captures its central warning: the sparrows set out to find an owl egg before anyone has learned how an owl might be tamed.
History reveals a pattern of accelerating growth: from millennia-long economic doubling times in hunter-gatherer societies to mere decades following agriculture, and now to astonishing speeds in our industrial and information ages.
Yet, the path to this future has been anything but smooth. The birth of artificial intelligence research in the 1950s was greeted with exuberance, with early systems demonstrating feats like theorem proving and language understanding. But the complexity of intelligence soon humbled researchers, leading to periods known as AI winters—times of skepticism and reduced funding. Despite these setbacks, the field steadily advanced, fueled by new methods like neural networks and evolutionary algorithms.
Today, AI systems outperform humans in numerous narrow domains—from chess and Go to speech recognition and medical diagnosis. But these achievements, impressive as they are, represent narrow competence: mastery of single tasks rather than the general, flexible intelligence that a true superintelligence would possess.
As we embark on this exploration, remember the sparrows and their owl: the promise of superintelligence is immense, but so too are the risks if control and understanding lag behind creation.
Let us now turn to the various paths that might lead us from today’s AI to tomorrow’s superintelligence.