The Coming Wave - by Mustafa Suleyman, Michael Bhaskar

Published:

Read: 2025-01-10

Recommend: 2/10

I did not learn much from reading this book.

Notes

Here are some passages I highlighted in the book:

  1. This is key to understanding the coming wave. The technology scholar Everett Rogers talks about technology as “clusters of innovations” where one or more features are closely interrelated. The coming wave is a supercluster, an evolutionary burst like the Cambrian explosion, the most intense eruption of new species in the earth’s history, with many thousands of potential new applications. Each technology described here intersects with, buttresses, and boosts the others in ways that make it difficult to predict their impact in advance. They are all deeply entangled and will grow more so. Another trait of the new wave is speed. The engineer and futurist Ray Kurzweil talks about the “law of accelerating returns,” feedback loops where advances in technology further increase the pace of development. By allowing work at greater levels of complexity and precision, more sophisticated chips and lasers help create more powerful chips, for example, which in turn can produce better tools for further chips. We see this now on a large scale, with AI helping design better chips and production techniques that enable more sophisticated forms of AI and so on. Different parts of the wave spark and accelerate one another, sometimes with extreme unpredictability and combustibility.

  2. By creating something smarter than us, we could put ourselves in the position of our primate cousins. With a long-term view in mind, those focusing on AGI scenarios are right to be concerned. Indeed, there is a strong case that by definition a superintelligence would be fully impossible to control or contain. An “intelligence explosion” is the point at which an AI can improve itself again and again, recursively making itself better in ever faster and more effective ways. Here is the definitive uncontained and uncontainable technology. The blunt truth is that nobody knows when, if, or exactly how AIs might slip beyond us and what happens next; nobody knows when or if they will become fully autonomous or how to make them behave with awareness of and alignment with our values, assuming we can settle on those values in the first place.

  3. Ultimately, in its most dramatic forms, the coming wave could mean humanity will no longer be at the top of the food chain. Homo technologicus may end up being threatened by its own creation. The real question is not whether the wave is coming. It clearly is; just look and you can see it forming already. Given risks like these, the real question is why it’s so hard to see it as anything other than inevitable.

  4. For us the event was a scientific experiment. It was a powerful—and, yes, cool—demonstration of cutting-edge techniques we’d spent years trying to perfect. It was exciting from an engineering perspective, exhilarating for its competition, and bewildering to be at the center of a media circus. For many in Asia it was something more painful, an instance of wounded regional and national pride.

  5. In China, Go wasn’t just a game. It represented a wider nexus of history, emotion, and strategic calculation. China was already committed to investing heavily in science and technology, but AlphaGo helped focus government minds even more acutely on AI. China, with its thousands of years of history, had once been the crucible of world technological innovation; it was now painfully aware of how it had fallen behind, losing the technological race to Europeans and Americans on various fronts from medicines to aircraft carriers. It had endured a “century of humiliation,” as the Chinese Communist Party (CCP) calls it. One that, the party believes, must never happen again.

  6. The central problem for humanity in the twenty-first century is how we can nurture sufficient legitimate political power and wisdom, adequate technical mastery, and robust norms to constrain technologies to ensure they continue to do far more good than harm. How, in other words, we can contain the seemingly uncontainable.

  7. I believe that figuring out ways to reconcile profit and social purpose in hybrid organizational structures is the best way to navigate the challenges that lie ahead, but making it work in practice is incredibly hard.