The Gene - by Siddhartha Mukherjee
This is the second book I’ve read by Siddhartha Mukherjee, the first being The Emperor of All Maladies, his book about cancer. It begins with historical material I remember from my high school biology class, covering figures like Darwin, Lamarck, and Mendel, and gradually moves into more recent developments: CRISPR/Cas9, gene editing, and the future of the human race.
Here are some passages that I highlighted in the book:
Rajesh had begun to behave oddly, as if a wire had been tripped in his brain. The most striking change in his personality was his volatility: good news triggered uncontained outbursts of joy, often extinguished only through increasingly acrobatic bouts of physical exercise, while bad news plunged him into inconsolable desolation.
my father recollects an altered brother: fearful at times, reckless at others, descending and ascending steep slopes of mood, irritable one morning and overjoyed the next (that word: overjoyed. Used colloquially, it signals something innocent: an amplification of joy. But it also delineates a limit, a warning, an outer boundary of sobriety. Beyond overjoy, as we shall see, there is no over-overjoy; there is only madness and mania).
He had aged beyond his years. At forty-eight, he looked a decade older.
Three profoundly destabilizing scientific ideas ricochet through the twentieth century, trisecting it into three unequal parts: the atom, the byte, the gene.
Rather than a colossal biblical Flood, Lyell argued, there had been millions of floods; God had shaped the earth not through singular cataclysms but through a million paper cuts. For Darwin, Lyell’s central idea—of the slow heave of natural forces shaping and reshaping the earth, sculpting nature—would prove to be a potent intellectual spur.
The phenomenon was called variation—animals occasionally produced offspring with features different from the parental type.
This struggle for survival was the shaping hand. Death was nature’s culler, its grim shaper.
The best adapted—the “fittest”—survive (the phrase survival of the fittest was borrowed from the Malthusian economist Herbert Spencer).
Separated by oceans and continents, buffeted by very different intellectual winds, the two men had sailed to the same port.
On November 24, 1859, on a wintry Thursday morning, Charles Darwin’s book On the Origin of Species by Means of Natural Selection appeared in bookstores in England, priced at fifteen shillings a copy. Twelve hundred and fifty copies had been printed. As Darwin noted, stunned, “All copies were sold [on the] first day.”
For Darwin’s theory to work, heredity had to possess constancy and inconstancy, stability and mutation.
If heredity had no means of maintaining variance—of “fixing” the altered trait—then all alterations in characters would eventually vanish into colorless oblivion by virtue of blending.
Mendel termed these overriding traits dominant, while the traits that had disappeared were termed recessive.
The particles came in two variants, or two alleles: short versus tall (for height) or white versus violet (for flower color) and so forth.
When the dominant allele was present, the recessive allele seemed to disappear, but when a plant received two recessive alleles, the allele reiterated its character. Throughout, the information carried by an individual allele remained indivisible.
Bill by bill, and letter by letter, his scientific imagination was slowly choked by administrative work.
His study discussed transformation as a curiosity of microbial biology, but never explicitly mentioned the discovery of a potential chemical basis of heredity. The most important conclusion of the most important biochemical paper of the decade was buried, like a polite cough, under a mound of dense text.
Frederick Griffith had made genes move between organisms. Muller had altered genes using energy. A gene, whatever it was, was capable of motion, transmission, and of energy-induced change—properties generally associated with chemical matter.
Martin Niemöller, the German theologian, summarized the slippery march of evil in his often-quoted statement:
First they came for the Socialists, and I did not speak out—
Because I was not a Socialist.
Then they came for the Trade Unionists, and I did not speak out—
Because I was not a Trade Unionist.
Then they came for the Jews, and I did not speak out—
Because I was not a Jew.
Then they came for me—and there was no one left to speak out for me.
To a geneticist, the development of an organism could be described as the sequential induction (or repression) of genes and genetic circuits. Genes specified proteins that switched on genes that specified proteins that switched on genes—and so forth, all the way to the very first embryological cell. It was genes, all the way.
Genes make proteins that regulate genes. Genes make proteins that replicate genes. The third R of the physiology of genes is a word that lies outside common human vocabulary, but is essential to the survival of our species: recombination—the ability to generate new combinations of genes.
humans have about 37 trillion cells.
But in certain tissues, Kerr noted, dying cells seemed to activate specific structural changes in anticipation of death—as if turning on a “death subroutine.”
In worms, ced9 prevents cell death by sequestering the cell-death-related executioner proteins (hence the “undead” cells in the worm mutants). In human cells, the activation of BCL2 results in a cell in which the death cascade is blocked, creating a cell that is pathologically unable to die: cancer.
Genes operate in the same manner. Individual genes specify individual functions, but the relationship among genes allows physiology.
three bases of DNA are read together to encode one amino acid in a protein
But DNA, broken into pieces, degenerates into a garble of four bases—A, C, G, and T. You cannot read a book by dissolving all its words into alphabets. With DNA, as with words, the sequence carries the meaning. Dissolve DNA into its constituent bases, and it turns into a primordial four-letter alphabet soup.
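The triplet reading described above can be sketched in a few lines of code. This is a toy illustration, not the full genetic code: only a handful of the 64 codons are included, and the sequence is invented.

```python
# A minimal sketch of codon reading: each group of three DNA bases
# maps to one amino acid. Only a few codons of the 64 are shown here.
CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "TTT": "Phe", "GGC": "Gly", "AAA": "Lys",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three bases at a time and look up each codon."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3]
        amino = CODON_TABLE.get(codon, "?")
        if amino == "Stop":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # → ['Met', 'Phe', 'Gly', 'Lys']
```

The same code also makes the "alphabet soup" point concrete: shuffle the string and the triplet frames dissolve, so the lookup yields a different, meaningless product. Sequence, not composition, carries the message.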
Intergenic DNA and introns—spacers between genes and stuffers within genes—are thought to have sequences that allow genes to be regulated in context.
Using reverse transcriptase, every RNA in a cell could be used as a template to build its corresponding gene.
I believe in the inalienable right of all adult scientists to make absolute fools of themselves in private. —Sydney Brenner
Genes were no longer just the subjects of study, but the instruments of study. There is an illuminated moment in the development of a child when she grasps the recursiveness of language: just as thoughts can be used to generate words, she realizes, words can be used to generate thoughts. Recombinant DNA had made the language of genetics recursive. Biologists had spent decades trying to interrogate the nature of the gene—but now it was the gene that could be used to interrogate biology. We had graduated, in short, from thinking about genes, to thinking in genes.
The assigned ten minutes grew into a marathon meeting.
Rather than patenting insulin as “matter” or “manufacture,” it concentrated its efforts, boldly, on a variation of “method.”
Stripped to its bare essence, a medicinal chemical—a drug—is nothing more than a molecule that enables a therapeutic change in human physiology.
Proteins are thus poised to be some of the most potent and most discriminating medicines in the pharmacological world. But to make a protein, one needs its gene—and here recombinant DNA technology provided the crucial missing stepping-stone.
There is no one-to-one mapping of one gene and one illness. Even if you inherit the entire set of genes that causes NPH in one person, you may still need an accident or an environmental trigger to “release” it.
The “right to be born” could be rephrased as a right to be born with the right kind of genes.
Joseph Dancis was not just rewriting the past; he was also announcing the future. Even a casual reader of the extraordinary claim—that every parent had to shoulder the duty to create babies “who will not be a liability to society,” or that the right to be born without “genetic anomalies” was a fundamental right—might have detected the cry of a rebirth within it.
A woman could choose to be tested or not, choose to know the results or not, and choose to terminate or continue her pregnancy even after testing positive for a fetal abnormality. This was eugenics in its benevolent avatar. Its champions called it neo-eugenics or newgenics.
But the human genome has 3 billion base pairs—while a typical disease-linked gene mutation might result in the alteration of just one base pair in the genome.
discovering the genetic nature of an illness is not the same as identifying the actual gene that causes that illness. The pattern of inheritance of hemochromatosis, for instance, clearly suggests that a single gene governs the disease, and that the mutation is recessive—i.e., two defective copies of the gene (one from each parent) are necessary to cause the illness. But the pattern of inheritance tells us nothing about what the hemochromatosis gene is or what it does.
In the history of science and technology too, breakthroughs seem to come in two fundamental forms. There are scale shifts—where the crucial advance emerges as a result of an alteration of size or scale alone (the moon rocket, as one engineer famously pointed out, was just a massive jet plane pointed vertically at the moon). And there are conceptual shifts—in which the advance arises because of the emergence of a radical new concept or idea. In truth, the two modes are not mutually exclusive, but reinforcing.
As biologists realized in the early 1980s, cancer, then, was a “new” kind of genetic disease—the result of heredity, evolution, environment, and chance all mixed together.
It has 3,088,286,401 letters of DNA (give or take a few).
It encodes about 20,687 genes in total—only 1,796 more than worms, 12,000 fewer than corn, and 25,000 fewer genes than rice or wheat. The difference between “human” and “breakfast cereal” is not a matter of gene numbers, but of the sophistication of gene networks. It is not what we have; it is how we use it.
Medicine, the sociologist Everett Hughes once observed wryly, perceives the world through “mirror writing.” Illness is used to define wellness.
Since mutations accumulate over generations—i.e., over intergenerational time—the family with the greatest diversity in gene variations is the one with the most generations. The triplets have exactly the same genome; their genetic diversity is minimal. The great-grandfather and great-grandson pair, in contrast, have related genomes—but their genomes have the most differences. Evolution is a metronome, ticktocking time through mutations. Genetic diversity thus acts as a “molecular clock,” and variations can organize lineage relationships. The intergenerational time between any two family members is proportional to the extent of genetic diversity between them.
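The "molecular clock" idea above can be made concrete with a toy comparison: count the positions at which two sequences differ, and the count grows with the generations separating them. The sequences below are invented purely for illustration.

```python
# A toy illustration of the molecular clock: base differences between
# two genomes accumulate with intergenerational distance.
def genetic_distance(seq_a: str, seq_b: str) -> int:
    """Count positions where two equal-length sequences differ."""
    assert len(seq_a) == len(seq_b)
    return sum(a != b for a, b in zip(seq_a, seq_b))

ancestor   = "ACGTACGTACGT"
grandchild = "ACGTACGAACGT"  # one mutation accumulated
great_gc   = "ACCTACGAACGA"  # more mutations, more generations later

print(genetic_distance(ancestor, grandchild))  # → 1
print(genetic_distance(ancestor, great_gc))    # → 3
```

Identical triplets would score zero against each other; the great-grandfather and great-grandson pair score highest, which is exactly why diversity can be read as elapsed time.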
genetic diversity could be used to measure the oldest populations within a species: a tribe that has the most genetic diversity within it is older than a tribe with little or no diversity.
In November 2008, a seminal study led by Luigi Cavalli-Sforza, Marcus Feldman, and Richard Myers from Stanford University characterized 642,690 genetic variants in 938 individuals drawn from 51 subpopulations across the world. The second startling result about human origins emerges from this study: modern humans appear to have emerged exclusively from a rather narrow slice of earth, somewhere in sub-Saharan Africa, about one hundred to two hundred thousand years ago, and then migrated northward and eastward to populate the Middle East, Europe, Asia, and the Americas. “You get less and less variation the further you go from Africa,” Feldman wrote. “Such a pattern fits the theory that the first modern humans settled the world in stepping-stone fashion after leaving Africa less than 100,000 years ago. As each small group of people broke away to found a new region, it took only a sample of the parent population’s genetic diversity.”
The oldest human populations—their genomes peppered with diverse and ancient variations—are the San tribes of South Africa, Namibia, and Botswana, and the Mbuti Pygmies, who live deep in the Ituri forest in the Congo. The “youngest” humans, in contrast, are the indigenous North Americans who left Asia, and crossed into the Seward peninsula in Alaska through the icy cleft of the Bering Strait, some fifteen to thirty thousand years ago. This theory of human origin and migration, corroborated by fossil specimens, geological data, tools from archaeological digs, and linguistic patterns, has overwhelmingly been accepted by most human geneticists. It is called the Out of Africa theory, or the Recent Out of Africa model (the recent reflecting the surprisingly modern evolution of modern humans, and its acronym, ROAM, a loving memento to an ancient peripatetic urge that seems to rise directly out of our genomes).
[Mitochondria] The exclusively female origin of all the mitochondria in an embryo has an important consequence. All humans—male or female—must have inherited their mitochondria from their mothers, who inherited their mitochondria from their mothers, and so forth, in an unbroken line of female ancestry stretching indefinitely into the past.
For modern humans, that number has reached one: each of us can trace our mitochondrial lineage to a single human female who existed in Africa about two hundred thousand years ago. She is the common mother of our species. We do not know what she looked like, although her closest modern-day relatives are women of the San tribe from Botswana or Namibia. I find the idea of such a founding mother endlessly mesmerizing. In human genetics, she is known by a beautiful name—Mitochondrial Eve.
The problem with racial discrimination, though, is not the inference of a person’s race from their genetic characteristics. It is quite the opposite: it is the inference of a person’s characteristics from their race.
the vast proportion of genetic diversity (85 to 90 percent) occurs within so-called races (i.e., within Asians or Africans) and only a minor proportion (7 percent) between racial groups
This degree of intraracial variability makes “race” a poor surrogate for nearly any feature: in a genetic sense, an African man from Nigeria is so “different” from another man from Namibia that it makes little sense to lump them into the same category.
For race and genetics, then, the genome is a strictly one-way street. You can use genome to predict where X or Y came from. But, knowing where A or B came from, you can predict little about the person’s genome. Or: every genome carries a signature of an individual’s ancestry—but an individual’s racial ancestry predicts little about the person’s genome. You can sequence DNA from an African-American man and conclude that his ancestors came from Sierra Leone or Nigeria. But if you encounter a man whose great-grandparents came from Nigeria or Sierra Leone, you can say little about the features of this particular man. The geneticist goes home happy; the racist returns empty-handed.
Cavalli-Sforza, the Stanford geneticist, described the problem of racial classification as a “futile exercise” driven by cultural arbitration rather than genetic differentiation.
Told that they are being tested for “intelligence,” however, their scores collapse. The real variable being measured, then, is not intelligence but an aptitude for test taking, or self-esteem, or simply ego or anxiety. In a society where black men and women experience routine, pervasive, and insidious discrimination, such a propensity could become fully self-reinforcing: black children do worse at tests because they’ve been told that they are worse at tests, which makes them perform badly in tests and furthers the idea that they are less intelligent—ad infinitum.
That genes have anything to do with the determination of sex, gender, and gender identity is a relatively new idea in our history. The distinction between the three words is relevant to this discussion. By sex, I mean the anatomic and physiological aspects of male versus female bodies. By gender, I am referring to a more complex idea: the psychic, social, and cultural roles that an individual assumes. By gender identity, I mean an individual’s sense of self (as female versus male, as neither, or as something in between)
careful students of genetics knew that the Y chromosome was an inhospitable place for genes. Unlike any other chromosome, the Y is “unpaired”—i.e., it has no sister chromosome and no duplicate copy, leaving every gene on the chromosome to fend for itself. A mutation in any other chromosome can be repaired by copying the intact gene from the other chromosome. But a Y chromosome gene cannot be fixed, repaired, or recopied; it has no backup or guide. When the Y chromosome is assailed by mutations, it lacks a mechanism to recover information. The Y is thus pockmarked with the potshots and scars of history. It is the most vulnerable spot in the human genome.
humans biologically female but chromosomally male. “Women” born with “Swyer syndrome” were anatomically and physiologically female throughout childhood, but did not achieve female sexual maturity in early adulthood.
Women with Swyer syndrome are not “women trapped in men’s bodies.” They are women trapped in women’s bodies that are chromosomally male (except for just one gene). A mutation in that single gene, SRY, creates a (largely) female body—and, more crucially, a wholly female self. It is as artless, as plain, as binary, as leaning over the nightstand and turning a switch on or off.
These case reports finally put to rest the assumption, still unshakably prevalent in some circles, that gender identity can be created or programmed entirely, or even substantially, by training, suggestion, behavioral enforcement, social performance, or cultural interventions. It is now clear that genes are vastly more influential than virtually any other force in shaping sex identity and gender identity—although in limited circumstances a few attributes of gender can be learned through cultural, social, and hormonal reprogramming.
the growing consensus in medicine is that, aside from exceedingly rare exceptions, children should be assigned to their chromosomal (i.e., genetic) sex regardless of anatomical variations and differences—with the option of switching, if desired, later in life. As of this writing, none of these children have opted to switch from their gene-assigned sexes.
The SRY gene indubitably controls sex determination in an on/off manner. Turn SRY on, and an animal becomes anatomically and physiologically male. Turn it off, and the animal becomes anatomically and physiologically female.
But to enable more profound aspects of gender determination and gender identity, SRY must act on dozens of targets—turning them on and off, activating some genes and repressing others, like a relay race that moves a baton from hand to hand. These genes, in turn, integrate inputs from the self and the environment—from hormones, behaviors, exposures, social performance, cultural role-playing, and memory—to engender gender. What we call gender, then, is an elaborate genetic and developmental cascade, with SRY at the tip of the hierarchy, and modifiers, integrators, instigators, and interpreters below.
if sexual orientation was partly inherited, then a higher proportion of identical twins should both be gay compared to fraternal twins.
Among the fifty-six pairs of identical twins, both twins were gay in 52 percent of cases. Of the fifty-four pairs of nonidentical twins, 22 percent were both gay—lower than the fraction for identical twins, but still significantly higher than the estimate of 10 percent gay in the overall population.
Male homosexuality was not just genes, Bailey found. Influences such as families, friends, schools, religious beliefs, and social structure clearly modified sexual behavior—so much so that one identical twin identified as gay and the other as straight as much as 48 percent of the time.
the gay gene had to be carried on the X chromosome.
By comparing separated-at-birth twins against twins brought up in the same family, Bouchard could untwist the effects of genes and environments. The similarities between such twins could have nothing to do with nurture; they could only reflect hereditary influences—nature.
we need to ask the converse question: Why do identical twins raised in identical homes and families end up with different lives and become such different beings? Why do identical genomes become manifest in such dissimilar personhoods, with nonidentical temperaments, personalities, fates, and choices?
signals from neighboring cells—events of fate, as far as an individual cell is concerned—are also registered by the turning on and off of master-regulatory genes, leading to alterations in cell lineages.
when the children born to women who were pregnant during the famine grew up, they too had higher rates of obesity and heart disease.
It was an extreme form of parasitism: the egg cell became merely a host, or a vessel, for the genome of a normal cell and allowed that genome to develop into a perfectly normal adult animal.
The transfer of an adult frog nucleus (i.e., all its genes) into an empty egg worked: perfectly functional tadpoles were born, and each of these tadpoles carried a perfect replica of the genome of the adult frog.
Genes are turned “on” and “off” in response to these events, and epigenetic marks are gradually layered above genes.
the introduction of these four genes into a mature skin cell caused a small fraction of the cells to transform into something resembling an embryonic stem cell.
Myc, the rejuvenation factor, is no ordinary gene: it is one of the most forceful regulators of cell growth and metabolism known in biology. Activated abnormally, it can certainly coax an adult cell back into an embryo-like state, thereby enabling Yamanaka’s cell-fate reversal experiment (this function requires the collaboration of the three other genes found by Yamanaka). But myc is also one of the most potent cancer-causing genes known in biology; it is also activated in leukemias and lymphomas, and in pancreatic, gastric, and uterine cancer. As in some ancient moral fable, the quest for eternal youthfulness appears to come at a terrifying collateral cost. The very genes that enable a cell to peel away mortality and age can also tip its fate toward malignant immortality, perpetual growth, and agelessness—the hallmarks of cancer.
We can now understand the Dutch Hongerwinter, and its multigenerational effects, in mechanistic terms that involve both genes and epigenes. The acute starvation of men and women during those brutal months in 1945 indubitably altered the expression of genes involved in metabolism and storage. The first changes were transient—no more, perhaps, than the turning on and turning off of genes that respond to nutrients in the environment. But as the landscape of metabolism was frozen and reset by prolonged starvation—as transience hardened into permanence—more durable changes were imprinted in the genome. Hormones fanned out between organs, signaling the potential long-term deprivation of food and auguring a broader reformatting of gene expression. Proteins intercepted these messages within cells. Genes were shut down, one by one, then imprints were stamped on DNA to close them down further. Like houses shuttering against a storm, entire gene programs were barricaded shut. Methylation marks were added to genes. Histones may have been chemically modified to record the memory of starvation.
Environmental information can certainly be etched on the genome. But most of these imprints are recorded as “genetic memories” in the cells and genomes of individual organisms—not carried forward across generations.
When scientists underestimate complexity, they fall prey to the perils of unintended consequences. The parables of such scientific overreach are well-known: foreign animals, introduced to control pests, become pests in their own right; the raising of smokestacks, meant to alleviate urban pollution, releases particulate effluents higher in the air and exacerbates pollution; stimulating blood formation, meant to prevent heart attacks, thickens the blood and results in an increased risk of blood clots to the heart.
What was needed, perhaps, was a backup copy—a mirror image to protect the original or to restore the prototype if damaged. Perhaps this was the ultimate impetus to create a double-stranded nucleic acid. The data in one strand would be perfectly reflected in the other and could be used to restore anything damaged; the yin would protect the yang. Life thus invented its own hard drive.
Stem cells fulfill this function, especially after catastrophic cell loss. A stem cell is a unique type of cell that is defined by two properties. It can give rise to other functional cell types, such as nerve cells or skin cells, through differentiation. And it can renew itself—i.e., give rise to more stem cells, which can, in turn, differentiate to form the functional cells of an organ.
But embryonic stem cells, or ES cells, which arise from the inner sheath of an animal’s embryo, are vastly more potent; they can give rise to every cell type in the organism—blood, brains, intestines, muscles, bone, skin. Biologists use the word pluripotent to describe this property of ES cells.
ES cells also possess an unusual third characteristic—a quirk of nature. They can be isolated from the embryo of an organism and grown in petri dishes in the lab.
As the students had imagined it, the future of human genetics would be built on two fundamental elements. The first was “genetic diagnosis”—the idea that genes could be used to predict or determine illness, identity, choice, and destiny. The second was “genetic alteration”—that genes could be changed to change the future of diseases, choice, and destiny.
The future of a woman carrying a BRCA1 mutation is fundamentally changed by that knowledge—and yet it remains just as fundamentally uncertain. For some women, the genetic diagnosis is all-consuming; it is as if their lives and energies are spent anticipating cancer and imagining survivorship—from an illness that they have not yet developed. A disturbing new word, with a distinctly Orwellian ring, has been coined to describe these women: previvors—pre-survivors.
Gene hunting devolves into a spot-the-odd-man-out game on a gigantic scale: by comparing the genetic sequences of all the closely related family members, a mutation that appears in the affected individual but not in the unaffected relatives can be found.
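The spot-the-odd-one-out game described above reduces, in outline, to a set difference: keep only the variants carried by the affected individual and by none of the unaffected relatives. This is a minimal sketch with invented variant names; a real pipeline would work over millions of sites with quality filtering.

```python
# A toy sketch of "odd-one-out" gene hunting: variants present in the
# affected individual but absent from all unaffected relatives survive
# as candidate disease mutations. Variant labels are made up.
def candidate_mutations(affected: set[str],
                        unaffected: list[set[str]]) -> set[str]:
    """Subtract every relative's variants from the affected genome's."""
    shared_by_relatives = set().union(*unaffected)
    return affected - shared_by_relatives

affected = {"chr7:117559593A>G", "chr1:12345C>T", "chr9:55C>A"}
relatives = [
    {"chr1:12345C>T"},
    {"chr9:55C>A", "chr1:12345C>T"},
]
print(candidate_mutations(affected, relatives))
# Only the variant unique to the affected individual remains.
```

Comparing closely related family members is what makes this tractable: relatives share most benign variation, so the subtraction leaves a short candidate list rather than millions of differences.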
Human genetics has become progressively adept at creating what one might describe as a “backward catalog”—a rearview mirror—of a genetic disorder: Knowing that a child has a syndrome, what are the genes that are mutated? But to estimate penetrance and expressivity, we also need to create a “forward catalog”: If a child has a mutant gene, what are the chances that he or she will develop the syndrome? Is every gene fully predictive of risk?
The point is this: if you cannot separate the phenotype of mental illness from creative impulses, then you cannot separate the genotype of mental illness and creative impulse. The genes that “cause” one (bipolar disease) will “cause” another (creative effervescence).
As Edvard Munch put it, “[My troubles] are part of me and my art. They are indistinguishable from me, and [treatment] would destroy my art. I want to keep those sufferings.”
The prospect of a genetic diagnosis for schizophrenia and bipolar disorder thus involves confronting fundamental questions about the nature of uncertainty, risk, and choice. We want to eliminate suffering, but we also want to “keep those sufferings.”
Should we consider allowing parents to fully sequence their children’s genomes and potentially terminate pregnancies with such known devastating genetic mutations? We would certainly eliminate Erika’s mutation from the human gene pool—but we would eliminate Erika as well. I will not minimize the enormity of Erika’s suffering, or that of her family—but there is, indubitably, a deep loss in that. To fail to acknowledge the depth of Erika’s anguish is to reveal a flaw in our empathy. But to refuse to acknowledge the price to be paid in this trade-off is to reveal, conversely, a flaw in our humanity.
Astonishingly, if you remove a few cells from that embryo, the remaining cells divide and fill in the gap of missing cells, and the embryo continues to grow normally as if nothing had happened. For a moment in our history, we are actually quite like salamanders or, rather, like salamanders’ tails—capable of complete regeneration even after being cut by a fourth.
readers from India and China might note, with some shame and sobriety, that the largest “negative eugenics” project in human history was not the systemic extermination of Jews in Nazi Germany or Austria in the 1930s. That ghastly distinction falls on India and China, where more than 10 million female children are missing from adulthood because of infanticide, abortion, and neglect of female children.
Until recently, three unspoken principles have guided the arena of genetic diagnosis and intervention. First, diagnostic tests have largely been restricted to gene variants that are singularly powerful determinants of illness—i.e., highly penetrant mutations, where the likelihood of developing the disease is close to 100 percent (Down syndrome, cystic fibrosis, Tay-Sachs disease). Second, the diseases caused by these mutations have generally involved extraordinary suffering or fundamental incompatibilities with “normal” life. Third, justifiable interventions—the decision to abort a child with Down syndrome, say, or intervene surgically on a woman with a BRCA1 mutation—have been defined through social and medical consensus, and all interventions have been governed by complete freedom of choice.
Humans endowed with certain genomes are responsible for defining the criteria to define, intervene on, or even eliminate other humans endowed with other genomes. “Choice,” in short, seems like an illusion devised by genes to propagate the selection of similar genes.
Evidently, the conversation around genes and predilections has already slipped past the original boundaries—from high-penetrance genes, extraordinary suffering, and justifiable interventions—to genotype-driven social engineering.
A child born to a parent with schizophrenia, we now know, has a 13 to 30 percent chance of developing the disease by age sixty. If both parents are affected, the risk climbs to about 50 percent. With one uncle affected, a child runs a risk that is three- to fivefold higher than the general population.
Conceptually, gene therapy comes in two distinct flavors. The first involves modifying the genome of a nonreproductive cell—say a blood, brain, or muscle cell. The genetic modification of these cells affects their function, but it does not alter the human genome for more than one generation. The second, more radical, form of gene therapy is to modify a human genome so that the change affects reproductive cells.
The “seeker” and the “hitman” worked in concert: the Cas9 protein delivered its cuts to the genome only after the sequence had been matched by the recognition element. It was a classic combination of collaborators—spotter and executor, drone and rocket, Bonnie and Clyde.
Doudna and Charpentier published their data on the microbial defense system, called CRISPR/Cas9, in Science magazine in 2012.
If one man’s illness is another man’s normalcy, as this history teaches us, then one person’s understanding of enhancement may be another’s conception of emancipation (“why not make ourselves a little better?” as Watson asks).
Another scientist wrote of the Chinese approach, “Do first, think later.”
The genome is only a mirror for the breadth or narrowness of human imagination. It is Narcissus, reflected.
The SRY gene determines sexual anatomy and physiology in a strikingly autonomous manner; it is all nature. Gender identity, sexual preference, and the choice of sexual roles are determined by intersections of genes and environments—i.e., nature plus nurture.
The desire to homogenize and “normalize” humans must be counterbalanced against biological imperatives to maintain diversity and abnormalcy. Normalcy is the antithesis of evolution.
History repeats itself, in part because the genome repeats itself. And the genome repeats itself, in part because history does. The impulses, ambitions, fantasies, and desires that drive human history are, at least in part, encoded in the human genome. And human history has, in turn, selected genomes that carry these impulses, ambitions, fantasies, and desires. This self-fulfilling circle of logic is responsible for some of the most magnificent and evocative qualities in our species, but also some of the most reprehensible. It is far too much to ask ourselves to escape the orbit of this logic, but recognizing its inherent circularity, and being skeptical of its overreach, might protect the weak from the will of the strong, and the “mutant” from being annihilated by the “normal.”
It is a gargantuan twin study—except without twins: millions of virtual genetic “twins” are created computationally by matching genomes across space and time, and these permutations are then annotated against life events.