Nexus - by Yuval Noah Harari

Read: 2024-09-26

Recommend: 8/10

This book emphasizes how information connects people (creating order) rather than delivering truth. I enjoyed its perspective on history as the study of change. Through the lens of Russian and Soviet history, I also gained valuable insights into China.

Notes

Here are some passages that I highlighted in the book:

  1. the power of AI could supercharge existing human conflicts, dividing humanity against itself. Just as in the twentieth century the Iron Curtain divided the rival powers in the Cold War, so in the twenty-first century the Silicon Curtain—made of silicon chips and computer codes rather than barbed wire—might come to divide rival powers in a new global conflict. Because the AI arms race will produce ever more destructive weapons, even a small spark might ignite a cataclysmic conflagration.

  2. the Silicon Curtain might come to divide not one group of humans from another but rather all humans from our new AI overlords. No matter where we live, we might find ourselves cocooned by a web of unfathomable algorithms that manage our lives, reshape our politics and culture, and even reengineer our bodies and minds—while we can no longer comprehend the forces that control us, let alone stop them.

  3. Trusting only “my own research” may sound scientific, but in practice it amounts to believing that there is no objective truth. As we shall see in chapter 4, science is a collaborative institutional effort rather than a personal quest.

  4. A variation on this theme calls on people to put their trust in charismatic leaders like Trump and Bolsonaro, who are depicted by their supporters either as the messengers of God or as possessing a mystical bond with “the people.” While ordinary politicians lie to the people in order to gain power for themselves, the charismatic leader is the infallible mouthpiece of the people who exposes all the lies. One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.

  5. Information isn’t the raw material of truth, but it isn’t a mere weapon, either. There is enough space between these extremes for a more nuanced and hopeful view of human information networks and of our ability to handle power wisely. This book is dedicated to exploring that middle ground.

  6. History isn’t the study of the past; it is the study of change. History teaches us what remains the same, what changes, and how things change. This is as relevant to information revolutions as to every other kind of historical transformation.

  7. What the example of astrology illustrates is that errors, lies, fantasies, and fictions are information, too. Contrary to what the naive view of information says, information has no essential link to truth, and its role in history isn’t to represent a preexisting reality. Rather, what information does is to create new realities by tying together disparate things—whether couples or empires. Its defining feature is connection rather than representation, and information is whatever connects different points into a network. Information doesn’t necessarily inform us about things. Rather, it puts things in formation. Horoscopes put lovers in astrological formations, propaganda broadcasts put voters in political formations, and marching songs put soldiers in military formations.

  8. As a paradigmatic case, consider music. Most symphonies, melodies, and tunes don’t represent anything, which is why it makes no sense to ask whether they are true or false. Over the years people have created a lot of bad music, but not fake music. Without representing anything, music nevertheless does a remarkable job in connecting large numbers of people and synchronizing their emotions and movements. Music can make soldiers march in formation, clubbers sway together, church congregations clap in rhythm, and sports fans chant in unison.

  9. The emphasis on connection leaves ample room for other types of information that do not represent reality well. Sometimes erroneous representations of reality might also serve as a social nexus, as when millions of followers of a conspiracy theory watch a YouTube video claiming that the moon landing never happened. These images convey an erroneous representation of reality, but they might nevertheless give rise to feelings of anger against the establishment or pride in one’s own wisdom that help create a cohesive new group.

  10. Viewing information as a social nexus helps us understand many aspects of human history that confound the naive view of information as representation. It explains the historical success not only of astrology but of much more important things, like the Bible. While some may dismiss astrology as a quaint sideshow in human history, nobody can deny the central role the Bible has played. If the main job of information had been to represent reality accurately, it would have been hard to explain why the Bible became one of the most influential texts in history.

  11. information sometimes represents reality, and sometimes doesn’t. But it always connects. This is its fundamental characteristic. Therefore, when examining the role of information in history, although it sometimes makes sense to ask “How well does it represent reality? Is it true or false?” often the more crucial questions are “How well does it connect people? What new network does it create?”

  12. Sapiens no longer had to know each other personally; they just had to know the same story. And the same story can be familiar to billions of individuals. A story can thereby serve like a central connector, with an unlimited number of outlets into which an unlimited number of people can plug. For example, the 1.4 billion members of the Catholic Church are connected by the Bible and other key Christian stories; the 1.4 billion citizens of China are connected by the stories of communist ideology and Chinese nationalism; and the 8 billion members of the global trade network are connected by stories about currencies, corporations, and brands.

  13. what holds human networks together tends to be fictional stories, especially stories about intersubjective things like gods, money, and nations. When it comes to uniting people, fiction enjoys two inherent advantages over the truth. First, fiction can be made as simple as we like, whereas the truth tends to be complicated, because the reality it is supposed to represent is complicated. Take, for example, the truth about nations. It is difficult to grasp that the nation to which one belongs is an intersubjective entity that exists only in our collective imagination. You rarely hear politicians say such things in their political speeches. It is far easier to believe that our nation is God’s chosen people, entrusted by the Creator with some special mission. This simple story has been repeatedly told by countless politicians from Israel to Iran and from the United States to Russia. Second, the truth is often painful and disturbing, and if we try to make it more comforting and flattering, it will no longer be the truth. In contrast, fiction is highly malleable. The history of every nation contains some dark episodes that citizens don’t like to acknowledge and remember. An Israeli politician who in her election speeches details the miseries inflicted on Palestinian civilians by the Israeli occupation is unlikely to get many votes. In contrast, a politician who builds a national myth by ignoring uncomfortable facts, focusing on glorious moments in the Jewish past, and embellishing reality wherever necessary may well sweep to power. That’s the case not just in Israel but in all countries. How many Italians or Indians want to hear the unblemished truth about their nations? An uncompromising adherence to the truth is essential for scientific progress, and it is also an admirable spiritual practice, but it is not a winning political strategy.

  14. We have now seen that information networks don’t maximize truth, but rather seek to find a balance between truth and order. Bureaucracy and mythology are both essential for maintaining order, and both are happy to sacrifice truth for the sake of order. What mechanisms, then, ensure that bureaucracy and mythology don’t lose touch with truth altogether, and what mechanisms enable information networks to identify and correct their own mistakes, even at the price of some disorder?

  15. In our personal lives, religion can fulfill many different functions, like providing solace or explaining the mysteries of life. But historically, the most important function of religion has been to provide superhuman legitimacy for the social order. Religions like Judaism, Christianity, Islam, and Hinduism propose that their ideas and rules were established by an infallible superhuman authority, and are therefore free from all possibility of error, and should never be questioned or changed by fallible humans.

  16. Christians accepted the divinity of texts like Genesis, Samuel, and Isaiah, but they argued that the rabbis misunderstood these texts, and only Jesus and his disciples knew the true meaning of passages like “the Lord himself will give you a sign: the almah will conceive and give birth to a son, and will call him Immanuel” (Isaiah 7:14). The rabbis said almah meant “young woman,” Immanuel meant “God with us” (in Hebrew immanu means “with us” and el means “God”), and the entire passage was interpreted as a divine promise to help the Jewish people in their struggle against oppressive foreign empires. In contrast, the Christians argued that almah meant “virgin,” that Immanuel meant that God will literally be born among humans, and that this was a prophecy about the divine Jesus being born on earth to the Virgin Mary.

  17. The history of print and witch-hunting indicates that an unregulated information market doesn’t necessarily lead people to identify and correct their errors, because it may well prioritize outrage over truth. For truth to win, it is necessary to establish curation institutions that have the power to tilt the balance in favor of the facts. However, as the history of the Catholic Church indicates, such institutions might use their curation power to quash any criticism of themselves, labeling all alternative views erroneous and preventing the institution’s own errors from being exposed and corrected. Is it possible to establish better curation institutions that use their power to further the pursuit of truth rather than to accumulate more power for themselves?

  18. In other words, the scientific revolution was launched by the discovery of ignorance. Religions of the book assumed that they had access to an infallible source of knowledge. The Christians had the Bible, the Muslims had the Quran, the Hindus had the Vedas, and the Buddhists had the Tipitaka. Scientific culture has no comparable holy book, nor does it claim that any of its heroes are infallible prophets, saints, or geniuses. The scientific project starts by rejecting the fantasy of infallibility and proceeding to construct an information network that takes error to be inescapable. Sure, there is much talk about the genius of Copernicus, Darwin, and Einstein, but none of them is considered faultless. They all made mistakes, and even the most celebrated scientific tracts are sure to contain errors and lacunae.

  19. Since even geniuses suffer from confirmation bias, you cannot trust them to correct their own errors. Science is a team effort, relying on institutional collaboration rather than on individual scientists or, say, a single infallible book. Of course, institutions too are prone to error. Scientific institutions are nevertheless different from religious institutions, inasmuch as they reward skepticism and innovation rather than conformity. Scientific institutions are also different from conspiracy theories, inasmuch as they reward self-skepticism. Conspiracy theorists tend to be extremely skeptical regarding the existing consensus, but when it comes to their own beliefs, they lose all their skepticism and fall prey to confirmation bias. The trademark of science is not merely skepticism but self-skepticism, and at the heart of every scientific institution we find a strong self-correcting mechanism. Scientific institutions do reach a broad consensus about the accuracy of certain theories—such as quantum mechanics or the theory of evolution—but only because these theories have managed to survive intense efforts to disprove them, launched not only by outsiders but by members of the institution itself.

  20. Therefore, while democracies give the center the authority to make some vital decisions, they also maintain strong mechanisms that can challenge the central authority. To paraphrase President James Madison, since humans are fallible, a government is necessary, but since government too is fallible, it needs mechanisms to expose and correct its errors, such as holding regular elections, protecting the freedom of the press, and separating the executive, legislative, and judicial branches of government. Consequently, while a dictatorship is about one central information hub dictating everything, a democracy is an ongoing conversation between diverse information nodes.

  21. Supporters of strongmen often don’t see this process as antidemocratic. They are genuinely baffled when told that electoral victory doesn’t grant them unlimited power. Instead, they see any check on the power of an elected government as undemocratic. However, democracy doesn’t mean majority rule; rather, it means freedom and equality for all. Democracy is a system that guarantees everyone certain liberties, which even the majority cannot take away.

  22. It was these self-correcting mechanisms that gradually enabled the United States to expand the franchise, abolish slavery, and turn itself into a more inclusive democracy. As noted in chapter 2, the Founding Fathers committed enormous mistakes—such as endorsing slavery and denying women the vote—but they also provided the tools for their descendants to correct these mistakes. That was their greatest legacy.

  23. In the 1930s and 1940s, Stalin perfected the totalitarian system he inherited. The Stalinist network was composed of three main branches. First, there was the governmental apparatus of state ministries, regional administrations, and regular Red Army units, which in 1939 comprised 1.6 million civilian officials and 1.9 million soldiers. Second, there was the apparatus of the Communist Party of the Soviet Union and its ubiquitous party cells, which in 1939 included 2.4 million party members. Third, there was the secret police: first known as the Cheka, in Stalin’s days it was called the OGPU, NKVD, and MGB, and after Stalin’s death it morphed into the KGB. Its post-Soviet successor organization has been known since 1995 as the FSB. In 1937, the NKVD had 270,000 agents and millions of informers.

  24. In none of these cases could the secret police defeat the regular army in traditional warfare, of course; what made the secret police powerful was its command of information. It had the information necessary to preempt a military coup and to arrest the commanders of tank brigades or fighter squadrons before they knew what hit them. During the Stalinist Great Terror of the late 1930s, out of 144,000 Red Army officers about 10 percent were shot or imprisoned by the NKVD. This included 154 of 186 divisional commanders (83 percent), eight of nine admirals (89 percent), thirteen of fifteen full generals (87 percent), and three of five marshals (60 percent).

  25. Totalitarian regimes are based on controlling the flow of information and are suspicious of any independent channels of information. When military officers, state officials, or ordinary citizens exchange information, they can build trust. If they come to trust one another, they can organize resistance to the regime. Therefore, a key tenet of totalitarian regimes is that wherever people meet and exchange information, the regime should be there too, to keep an eye on them. In the 1930s, this was one principle that Hitler and Stalin shared.

  26. What if the officials decided on a major agrarian reform, but the peasant families rejected it? So when in 1928 the Soviets came up with their first Five-Year Plan for the development of the Soviet Union, the most important item on the agenda was to collectivize farming. The idea was that in every village all the families would join a kolkhoz—a collective farm. They would hand over to the kolkhoz all their property—land, houses, horses, cows, shovels, pitchforks. They would work together for the kolkhoz, and in return the kolkhoz would provide for all their needs, from housing and education to food and health care. The kolkhoz would also decide—based on orders from Moscow—whether they should grow cabbages or turnips; whether to invest in a tractor or a school; and who would work in the dairy farm, the tannery, and the clinic. The result, thought the Moscow masterminds, would be the first perfectly just and equal society in human history.

  27. They were similarly convinced of the economic advantages of their proposed system, thinking that the kolkhoz would enjoy economy of scale. For example, when every peasant family had but a small strip of land, it made little sense to buy a tractor to plow it, and in any case most families couldn’t afford a tractor. Once all land was held communally, it could be cultivated far more efficiently using modern machinery. In addition, the kolkhoz was supposed to benefit from the wisdom of modern science. Instead of every peasant deciding on production methods on the basis of old traditions and groundless superstitions, state experts with university degrees from institutions like the Lenin All-Union Academy of Agricultural Sciences would make the crucial decisions.

  28. When their efforts to collectivize farming encountered resistance and led to economic disaster, Moscow bureaucrats and mythmakers took a page from Kramer’s Hammer of the Witches. I don’t wish to imply that the Soviets actually read the book, but they too invented a global conspiracy and created an entire nonexistent category of enemies. In the 1930s Soviet authorities repeatedly blamed the disasters afflicting the Soviet economy on a counterrevolutionary cabal whose chief agents were the “kulaks,” or capitalist farmers. Just as in Kramer’s imagination witches serving Satan conjured hailstorms that destroyed crops, so in the Stalinist imagination kulaks beholden to global capitalism sabotaged the Soviet economy. In theory, kulaks were an objective socioeconomic category, defined by analyzing empirical data on things like property, income, capital, and wages. Soviet officials could allegedly identify kulaks by counting things. If most people in a village had only one cow, then the few families who had three cows were considered kulaks. If most people in a village didn’t hire any labor, but one family hired two workers during harvest time, this was a kulak family. Being a kulak meant not only that you possessed a certain amount of property but also that you possessed certain personality traits. According to the supposedly infallible Marxist doctrine, people’s material conditions determined their social and spiritual character. Since kulaks allegedly engaged in capitalist exploitation, it was a scientific fact (according to Marxist thinking) that they were greedy, selfish, and unreliable—and so were their children. Discovering that someone was a kulak ostensibly revealed something profound about their fundamental nature.

  29. “Americans grow up with the idea that questions lead to answers,” he said. “But Soviet citizens grew up with the idea that questions lead to trouble.”

  30. what happens if the leader himself embezzles public funds or makes some disastrous policy mistake? Nobody can challenge the leader, and on his own initiative the leader—being a human being—may well refuse to admit any mistakes. Instead, he is likely to blame all problems on “foreign enemies,” “internal traitors,” or “corrupt subordinates” and demand even more power in order to deal with the alleged malefactors.

  31. Stalinism was in fact one of the most successful political systems ever devised—if we define “success” purely in terms of order and power while disregarding all considerations of ethics and human well-being. Despite—or perhaps because of—its utter lack of compassion and its callous attitude to truth, Stalinism was singularly efficient at maintaining order on a gigantic scale. The relentless barrage of fake news and conspiracy theories helped to keep hundreds of millions of people in line.

  32. Facebook itself relied on this rationale to deflect criticism. It publicly acknowledged only that in 2016–17 “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.” While this statement may sound like an admission of guilt, in effect it shifts most of the responsibility for the spread of hate speech to the platform’s users and implies that Facebook’s sin was at most one of omission—failing to effectively moderate the content users produced. This, however, ignores the problematic acts committed by Facebook’s own algorithms. The crucial thing to grasp is that social media algorithms are fundamentally different from printing presses and radio sets. In 2016–17, Facebook’s algorithms were making active and fateful decisions by themselves.

  33. Humans are more likely to be engaged by a hate-filled conspiracy theory than by a sermon on compassion. So in pursuit of user engagement, the algorithms made the fateful decision to spread outrage.

  34. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate.

  35. True, it was the human ARC researchers who set GPT-4 the goal of overcoming the CAPTCHA, just as it was human Facebook executives who told their algorithm to maximize user engagement. But once the algorithms adopted these goals, they displayed considerable autonomy in deciding how to achieve them.

  36. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation. When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views.

  37. History is the interaction between biology and culture; between our biological needs and desires for things like food, sex, and intimacy and our cultural creations like religions and laws.

  38. Within a few years AI could eat the whole of human culture—everything we have created over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.

  39. The Matrix proposed that to gain total control of human society, computers would have to first gain physical control of our brains, hooking them directly to a computer network. But in order to manipulate humans, there is no need to physically hook brains to computers. For thousands of years prophets, poets, and politicians have used language to manipulate and reshape society. Now computers are learning how to do it.

  40. Facebook and TikTok are two familiar examples. These computer-to-human chains are different from traditional human-to-document chains, because computers can use their power to make decisions, create ideas, and deepfake intimacy in order to influence humans in ways that no document ever could. The Bible had a profound effect on billions of people, even though it was a mute document. Now try to imagine the effect of a holy book that not only can talk and listen but can get to know your deepest fears and hopes and constantly mold them.

  41. When accused of creating social and political mayhem, they hide behind arguments like “We are just a platform. We are doing what our customers want and what the voters permit. We don’t force anyone to use our services, and we don’t violate any existing law. If customers didn’t like what we do, they would leave. If voters didn’t like what we do, they would pass laws against us. Since the customers keep asking for more, and since no law forbids what we do, everything must be okay.” These arguments are either naive or disingenuous. Tech giants like Facebook, Amazon, Baidu, and Alibaba aren’t just the obedient servants of customer whims and government regulations. They increasingly shape these whims and regulations. The tech giants have a direct line to the world’s most powerful governments, and they invest huge sums in lobbying efforts to throttle regulations that might undermine their business model.

  42. Life has been divided into separate reputational spheres, with separate status competitions, and there were also many off-grid moments when you didn’t have to engage in any status competitions at all. Precisely because status competition is so crucial, it is also extremely stressful. Therefore, not only humans but even other social animals like apes have always welcomed some respite from it. Unfortunately, social credit algorithms combined with ubiquitous surveillance technology now threaten to merge all status competitions into a single never-ending race. Even in their own homes or while trying to enjoy a relaxed vacation, people would have to be extremely careful about every deed and word, as if they were performing onstage in front of millions. This could create an incredibly stressful lifestyle, destructive to people’s well-being as well as to the functioning of society. If digital bureaucrats use a precise points system to keep tabs on everybody all the time, the emerging reputation market could annihilate privacy and control people far more tightly than the money market ever did.

  43. Of course, the YouTube algorithms were not themselves responsible for inventing lies and conspiracy theories or for creating extremist content. At least in 2017–18, those things were done by humans. The algorithms were responsible, however, for incentivizing humans to behave in such ways and for pushing the resulting content in order to maximize user engagement. Fisher documented numerous far-right activists who first became interested in extremist politics after watching videos that the YouTube algorithm auto-played for them.

  44. The people who manage Facebook, YouTube, TikTok, and other platforms routinely try to excuse themselves by shifting the blame from their algorithms to “human nature.” They argue that it is human nature that produces all the hate and lies on the platforms. The tech giants then claim that due to their commitment to free-speech values, they hesitate to censor the expression of genuine human emotions.

  45. Facebook and other social media platforms didn’t consciously set out to flood the world with fake news and outrage. But by telling their algorithms to maximize user engagement, this is exactly what they perpetrated.

  46. A more recent example of military victory leading to political defeat was provided by the American invasion of Iraq in 2003. The Americans won every major military engagement, but failed to achieve any of their long-term political aims. Their military victory didn’t establish a friendly regime in Iraq, or a favorable geopolitical order in the Middle East. The real winner of the war was Iran. American military victory turned Iraq from Iran’s traditional foe into Iran’s vassal, thereby greatly weakening the American position in the Middle East while making Iran the regional hegemon.

  47. the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals. If we define a misaligned goal to a pocket calculator, the consequences are trivial. But if we define a misaligned goal to a superintelligent machine, the consequences could be dystopian.

  48. For millennia, philosophers have been looking for a definition of an ultimate goal that will not depend on an alignment to some higher goal. They have repeatedly been drawn to two potential solutions, known in philosophical jargon as deontology and utilitarianism. Deontologists (from the Greek word deon, meaning “duty”) believe that there are some universal moral duties, or moral rules, that apply to everyone. These rules do not rely on alignment to a higher goal, but rather on their intrinsic goodness.

  49. Whereas deontologists struggle to find universal rules that are intrinsically good, utilitarians judge actions by their impact on suffering and happiness. The English philosopher Jeremy Bentham—another contemporary of Napoleon, Clausewitz, and Kant—said that the only rational ultimate goal is to minimize suffering in the world and maximize happiness.

  50. Kant further fulminated that because such acts are contrary to nature, they “make man unworthy of his humanity. He no longer deserves to be a person.” Kant, in fact, repackaged a Christian prejudice as a supposedly universal deontological rule, without providing empirical proof that homosexuality is indeed contrary to nature.

  51. The danger of utilitarianism is that if you have a strong enough belief in a future utopia, it can become an open license to inflict terrible suffering in the present. Indeed, this is a trick traditional religions discovered thousands of years ago. The crimes of this world could too easily be excused by the promises of future salvation.

  52. one of the most important things to realize about computers is that when a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans. These inter-computer realities may eventually become as powerful—and as dangerous—as human-made intersubjective myths.

  53. The database on which an AI is trained is a bit like a human’s childhood. Childhood experiences, traumas, and fairy tales stay with us throughout our lives. AIs too have childhood experiences. Algorithms might even infect one another with their biases, just as humans do.

  54. History is full of rigid caste systems that denied humans the ability to change, but it is also full of dictators who tried to mold humans like clay. Finding the middle path between these two extremes is a never-ending task.

  55. As long as democratic societies understand the computer network, their self-correcting mechanisms are our best guarantee against AI abuses. Thus the EU’s AI Act, proposed in 2021, singled out social credit systems like the one that stars in “Nosedive” as one of the few types of AI that are totally prohibited, because they might “lead to discriminatory outcomes and the exclusion of certain groups” and because “they may violate the right to dignity and non-discrimination and the values of equality and justice.” As with total surveillance regimes, so also with social credit systems, the fact that they could be created doesn’t mean that we must create them.

  56. The Russian Constitution makes grandiose promises about how “everyone shall be guaranteed freedom of thought and speech” (Article 29.1), how “everyone shall have the right freely to seek, receive, transmit, produce and disseminate information” (29.4), and how “the freedom of the mass media shall be guaranteed. Censorship shall be prohibited” (29.5). Hardly any Russian citizen is naive enough to take these promises at face value. But computers are bad at understanding doublespeak. A chatbot instructed to adhere to Russian law and values might read that constitution and conclude that freedom of speech is a core Russian value. Then, after spending a few days in Russian cyberspace and monitoring what is happening in the Russian information sphere, the chatbot might start criticizing the Putin regime for violating the core Russian value of freedom of speech. Humans too notice such contradictions but avoid pointing them out, due to fear. But what would prevent a chatbot from pointing out damning patterns? And how might Russian engineers explain to a chatbot that though the Russian Constitution guarantees all citizens freedom of speech and forbids censorship, the chatbot shouldn’t actually believe the constitution or ever mention the gap between theory and reality?
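A closing thought on notes 33, 45, and 47, which all circle the same mechanism: an optimizer handed a proxy goal ("maximize engagement") pursues it in ways its operators never intended. Below is a minimal, self-contained Python sketch of that dynamic. It is emphatically not Facebook's or YouTube's actual ranking code; the posts, the outrage/accuracy labels, and the weights are all invented for illustration.

```python
# A toy sketch, not any platform's real system: it illustrates how an
# algorithm given the single goal of maximizing engagement ends up
# promoting outrage. All posts, labels, and weights are made up.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    outrage: float   # 0..1, how inflammatory the post is (hypothetical label)
    accuracy: float  # 0..1, how truthful the post is (hypothetical label)

def predicted_engagement(post: Post) -> float:
    """Assumed engagement model: outrage drives clicks far more than accuracy."""
    return 0.8 * post.outrage + 0.2 * post.accuracy

def rank(posts: list[Post], objective) -> list[Post]:
    """Rank posts by whatever objective the operator specifies."""
    return sorted(posts, key=objective, reverse=True)

posts = [
    Post("Calm, accurate explainer", outrage=0.1, accuracy=0.9),
    Post("Hate-filled conspiracy theory", outrage=0.9, accuracy=0.1),
    Post("Sermon on compassion", outrage=0.0, accuracy=0.8),
]

# Misaligned objective: engagement alone. The conspiracy theory ranks first.
print("Objective: maximize engagement")
for p in rank(posts, predicted_engagement):
    print(f"  {predicted_engagement(p):.2f}  {p.title}")

# One possible realignment: subtract a penalty for outrage, so the goal
# given to the machine tracks the operator's actual goals more closely.
def aligned_score(post: Post) -> float:
    return predicted_engagement(post) - 1.0 * post.outrage

print("Objective: engagement minus outrage penalty")
for p in rank(posts, aligned_score):
    print(f"  {aligned_score(p):.2f}  {p.title}")
```

Under the engagement-only objective the conspiracy theory scores 0.74 and tops the feed; with the penalized objective it drops to the bottom. That is the alignment problem in miniature: the hard part is not optimizing a goal but specifying one that, when optimized, still matches what we actually want.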