Hermann Hesse’s Arrested Development


November 15, 2018


The stories Hesse tells appeal to young people, because they keep faith with the powerful emotions of adolescence, which most adults forget or outgrow.

“It has to be said, there are no points to be won from liking Hesse nowadays.” This rueful assessment of the novelist Hermann Hesse, quoted in the opening pages of Gunnar Decker’s new biography, “Hesse: The Wanderer and His Shadow” (Harvard), appeared in an obituary in 1962; but it could just as well have been pronounced yesterday, or a hundred years ago. Ever since he published his first novel, in 1904, Hesse has been one of those odd writers who manage to be at the same time canonical—in 1946, he won the Nobel Prize in Literature—and almost perpetually unfashionable among critics. The great German modernists who were his contemporaries mostly disdained him: “A little man,” according to the poet Gottfried Benn; “He displays the foibles of a greater writer than he actually is,” the novelist Robert Musil said. In America today, Hesse is usually regarded by highbrows as a writer for adolescents. Liking him is a good sign at age fifteen, a bad one by age twenty.

For many readers, Hesse’s novels are among the first serious fiction they encounter—a literary gateway drug. This was particularly so during the international Hesse craze of the nineteen-sixties, when the books became passports to the counterculture and Timothy Leary advised, “Before your LSD session, read ‘Siddhartha’ and ‘Steppenwolf.’ ” But, long before then, adolescents were the core of Hesse’s readership, a fact that sometimes irritated him.

His first novel—“Peter Camenzind,” the tale of a moody, nature-loving young man who drops out of bourgeois society—was taken up as an inspiration by the Wandervogel, a back-to-nature youth movement that promoted what Hesse himself derided as “campfire Romanticism.” For Peter to inspire a mass of followers, Hesse complained, was a misunderstanding of the whole point of the character: “He does not want to follow the path trodden by many, but to resolutely plow his own furrow. . . . He is not made for the collective life.” That book was at least written by a young man about the problems of the young.

“Steppenwolf,” on the other hand, tells the story of an aging intellectual’s midlife crisis; you don’t need the clue offered by the initials of Harry Haller, the book’s unhappy hero, to make the identification with the author. It seems strange that such a book would become a bible of the sixties, inspiring the name of the band behind “Born to Be Wild.” Hesse didn’t live quite long enough to see what the sixties made of him, but he had seen similar cults before, and he didn’t trust them. “I often have cause to get a little annoyed at schoolboys reading and enthusing over ‘Steppenwolf,’ ” he wrote, in 1955. “After all, the fact is that I wrote this book shortly before my fiftieth birthday.”

Still, Hesse’s young readers, then and now, were not wrong to feel that he was speaking directly to them. The stories he tells appeal to young people because they keep faith with the powerful emotions of adolescence, which most adults forget or outgrow—the woundedness, the exaltation, the enormous demands on life. The young Emil Sinclair, the narrator of “Demian,” is a good example of Hesse’s totally unironic self-seriousness: “I have been and still am a seeker, but I have ceased to question stars and books. I have begun to listen to the teachings my blood whispers to me. My story is not a pleasant one; it is neither sweet nor harmonious, as invented stories are; it has the taste of nonsense and chaos, of madness and dreams—like the lives of all men who stop deceiving themselves.”

Many young men, in particular, see a glamorous reflection of themselves in the typical Hesse hero—a sensitive, brooding man who cannot find a place for himself in ordinary society. This figure might live in India in the age of the Buddha, like Siddhartha, or in Germany in the Jazz Age, like Harry Haller, or in the Middle Ages, like Goldmund in “Narcissus and Goldmund.” Whatever the setting, his path will generally feature the same landmarks. He will be plucked out of his childhood surroundings and sent to an élite school, where he will suffer deeply. He will rebel against conventional ideas of success and refuse to pursue any kind of career, combining downward mobility with spiritual striving. Often, like Peter Camenzind, he will turn to drink, regarding alcoholism as a kind of noble infirmity. “The god of wine loves me and tempts me to drink only when his spirit and mine enter into friendly dialogue,” Peter says.

Because the Hesse hero occupies a precarious position outside human society, he is at the same time extremely arrogant—Siddhartha refers to the normal human beings around him as “the child people”—and full of self-contempt. No wonder he is much given to thoughts of suicide, whether or not he actually commits it. For, as Hesse explains in “Steppenwolf,” “to call suicides only those who actually destroy themselves is false. . . . What is peculiar to the suicide is that his ego, rightly or wrongly, is felt to be an extremely dangerous, dubious, and doomed germ of nature; that he is always in his own eyes exposed to an extraordinary risk.”

The idea that one’s inner life is unusually dangerous and risky is one that most adults grow out of—partly because we get calmer with age, partly because we come to recognize the full reality of other people. But Hesse’s heroes are punk Peter Pans—they don’t grow up, and despise people who do, because they see maturation as a surrender to conformity and accommodation. Things that most people learn to put up with strike Harry Haller as the fetters of a living death:

Without really wanting to at all, they pay calls and carry on conversations, sit out their hours at desks and on office chairs; and it is all compulsory, mechanical and against the grain, and it could all be done or left undone just as well by machines; and indeed it is this never-ceasing machinery that prevents their being, like me, the critics of their own lives and recognizing the stupidity and shallowness, the hopeless tragedy and waste of the lives they lead.

Most people, in other words, are what Holden Caulfield, another favorite avatar of teen-age readers, called “phonies.” What torments Hesse is the difficulty of being authentic—of staying true to who you really are, despite the enormous pressures of alienation and conformity. “If I search retrospectively”—in his own writing—“for a common thread of meaning, then I can indeed find one,” Hesse wrote near the end of his life. “A defense of (sometimes even a desperate plea on behalf of) the human personality, the individual.”

 

Decker’s biography shows that Hesse’s life was an uneasy compromise between his spiritual absolutism, which pushed him in the direction of irascible isolation, and his human needs, which encumbered him with wives, children, and houses that he never quite wanted or accepted. Married three times, he was unhappy as a husband and as a father, and the characters in his books mostly shun both roles. His last novel, “The Glass Bead Game,” is a futuristic fantasy about an academy of scholars who are all male, and all single.

It is not surprising that Hesse would remain attuned to adolescence, since his teen-age years, in the eighteen-nineties, were the most dramatic and consequential period of his life. It was then that Hesse was first forced to confront the entire weight of the institutions ranged against him—family, church, school, society—and do battle with them in the name of defending his individuality. He won, but not without sustaining deep wounds; in a sense, his fiction is a series of reenactments of this primal struggle.

From a very young age, it was clear that there was a mismatch between Hesse and his family. He was born in 1877, in Calw, a small town in the Black Forest, in southwest Germany, where his father and grandfather worked together in a Christian publishing house. On both sides, he was descended from devout Pietists—members of a German Protestant sect that, like the Methodists in England, rejected the established church in favor of a fervently inward, evangelical striving for virtue. In Decker’s words, Pietism “regarded as the devil’s work everything that did not serve the ultimate purpose of preparing one for the kingdom of God in the hereafter.” When it came to child-rearing, this conviction translated, at least in the Hesse family, into a concerted effort to break the young Hermann’s will, to teach him the docility and submissiveness that God demanded.

Yet in Hermann this religious force met an immovable object. “I was the child of pious parents, whom I loved tenderly and would have done even more so had they not made me aware from a very early age of the Fourth Commandment. Unfortunately commandments have always had a catastrophic effect on me,” Hesse recalled in an autobiographical sketch. Compelled to honor his father and mother, he instinctively refused. In one incident recorded in his mother’s diary, the three-year-old Hesse put an iron nail in his mouth, and, when he was told he could die if he swallowed it, he stubbornly replied, “I don’t care! If I die and go to my grave, I’ll just take a couple of picture-books with me!” Some years later, his father contemplated sending him away “to an institution or to be raised by another family.” For his part, Hesse recalled that, as a child, he would dream of setting the family’s house on fire and of murdering his father.

These tensions boiled over in 1891, when the fourteen-year-old Hesse enrolled in Maulbronn Monastery, an élite state-run boarding school housed in a medieval abbey; its mission was to recruit the region’s brightest boys and turn them into Lutheran ministers. Getting into Maulbronn required passing a gruelling examination, an experience that marked Hesse so deeply that he returned to it in several novels. Indeed, many of his books are not just novels of education—the Bildungsroman that had been a classic genre in European literature since Goethe—but specifically novels of schooling. Each of the dormitories at Maulbronn, for instance, had a grandiose name; Hesse lived in Hellas, a tribute to the school’s conventional idolatry of ancient Greece. Fifteen years later, when he came to fictionalize his school days in the novel “Beneath the Wheel,” the main character goes to just such a school and lives in a dormitory called Hellas. And thirty-seven years after that, in “The Glass Bead Game,” Hesse told the story of Joseph Knecht, who once again lives in a dormitory called Hellas.

“Beneath the Wheel” assigns many of Hesse’s own experiences to Hans Giebenrath, a gifted boy who is emotionally destroyed by the pressure of studying to get into a Maulbronn-like school. He passes the examination, but only by cramming so intensively that his boyish love of life is extinguished. He is soon overcome by apathy and despair, and has to drop out; in the end he drowns in a river, possibly a suicide.

The conclusion of the book channels the self-pity that Hesse remembered so well: “All nausea, shame and suffering had passed from him; the cold bluish autumn night looked down on the dark shape of his drifting body and the dark water played with his hands and hair and bloodless lips.” (The very title of the book is an indictment, and “Beneath the Wheel” belongs with other German works of the period, such as Frank Wedekind’s “Spring Awakening” and Heinrich Mann’s “The Blue Angel,” as an exposé of a soul- and libido-crushing educational system.)

Hesse avoided Hans Giebenrath’s fate, but only barely. In March, 1892, he ran away from Maulbronn and was reported missing. He returned after just a day and, as Decker writes, truancy hardly sounds like an unprecedented crime for a fourteen-year-old. But the reaction from school and family was extreme. It speaks volumes about his parents’ religious sensibility, for instance, that his mother’s response to the news of his disappearance was to hope that he was dead: “I was very relieved when I finally got the feeling . . . that he was in God’s merciful hands,” she wrote in her diary.

Unfortunately, he returned alive, a bigger headache than ever. Hesse had to leave school, and his parents, unable to cope with him, resorted to having him committed to a mental asylum. Facing the prospect of indefinite, possibly lifelong incarceration, he bombarded his parents with heartbreaking letters: “I loathe everything here from the bottom of my heart. It is like it has been designed especially to show a young man how wretched life and all its aspects are.”

After several months, Hesse was released on a trial basis, and he was able to attend a local high school. But the damage to his relationship with his parents was permanent: when his mother died, in 1902, he refused to attend the funeral. And the damage to his career seemed equally irreparable. At Maulbronn, he was on a fast track to a prestigious and secure job as a minister or a teacher. Now college was out of the question, and Hesse became an apprentice to a bookseller. To his parents—often, surely, to himself—it must have looked as if he had failed for good.

But Hesse’s genius was to embrace this failure and make it his inspiration. “In the beginning was the myth” is the first sentence of “Peter Camenzind,” the book that rescued Hesse from poverty and obscurity; and many of his books are retellings of the same myth, one that Hesse devised to interpret his own unhappy existence. Indeed, Hesse’s novels are best understood as successive versions of a spiritual autobiography—a form that, ironically, was a staple of Pietist literature. “The only way I can conceive” of writing, Hesse once said, is “as an act of confession”—a statement that could have been endorsed by his paternal grandfather, a doctor who left behind a memoir in two volumes. In fact, in rebelling against his Pietist upbringing, Hesse ended up recapitulating its central themes: he never lost the habit of rigorous self-examination, the feeling of unworthiness, or the longing for an experience of the divine.

The difference was that he could not imagine finding that experience within Pietism. “If I had grown up in a respectable religious tradition, for example as a Catholic, I would probably have stuck to the faith throughout my life,” he explained wryly.

Instead, he was driven to look for spiritual wisdom in other traditions, always admiring figures who seemed to defy dogma and doctrine. Francis of Assisi was an early inspiration: Hesse wrote a short biography of the saint who preached to the animals and spoke of the sun and the moon as his brother and sister.

He soon found himself looking farther afield—especially to the East, to the religious traditions of India. This, too, was a kind of atavism—his maternal grandfather, a missionary, had spent many years in India, and his mother had partly grown up there. But, while they went to spread a Christian faith they knew was the true one, Hesse went as a seeker. In 1911, he made an impulsive journey to Ceylon and Singapore, which proved disappointing at the time—he could not get used to the climate—but laid the groundwork for his later book “Journey to the East,” which imagines a spiritual secret society that includes the great minds of Europe and Asia.


 

The book that connects Hesse with India for most readers, of course, is “Siddhartha.” Published in 1922, in the wake of a world war that had destroyed and discredited European civilization, “Siddhartha” takes refuge in a distant place and time—India in the age of the Buddha, in the fifth century B.C. In this short book, Hesse boils down his archetypal story to its mythic core. Once again, we meet a sensitive, gifted young man—Siddhartha, the son of a Brahman priest—who rejects his family, its religion, and its aspirations, and sets out to discover the truth for himself.

Along the way, he experiences the extremes of deprivation, as an ascetic, wandering monk, and of satiety, as the wealthy lover of the beautiful courtesan Kamala. But he remains unhappy in every condition, until he finds that the only true wisdom is nonattachment, a resigned acceptance of everything that happens. Life cannot be fixed in place; it flows, like the river where Siddhartha receives his revelation:

And when Siddhartha listened attentively to this river, to this thousand-voiced song, when he listened neither for the sorrow nor for the laughter, when he did not attach his soul to any one voice and enter into it with his ego but rather heard all of them, heard the whole, the oneness—then the great song of the thousand voices consisted only of a single word: Om, perfection.


“Siddhartha” appears to be a kind of wisdom writing—a teaching. Yet the central message of the book is the impossibility of learning anything that matters from a guru or teacher. Siddhartha’s revelation sounds very Buddhist, and Hesse borrowed the character’s name from Siddhartha Gautama, the founder of Buddhism. But, in the book’s most important scene, Siddhartha actually encounters the Buddha—and spurns him. While his more timid and conventional friend, Govinda, becomes a Buddhist monk, Siddhartha knows that any kind of religion—even a true and admirable one—is an obstacle to enlightenment. “No one will ever attain redemption through doctrine!” he exclaims. After all, the Buddha didn’t become the Buddha by following the Buddha; he forged his own unique path. Hesse’s moral is similar to that of a famous Zen koan: “If you meet the Buddha on the road, kill him.”

Hesse’s emphasis on self-reliance, with its echoes of Emerson—another writer fascinated by Eastern religions—helped to make him a trusted guide for a generation of readers whose faith in institutions was destroyed by the First World War. Indeed, Hesse’s reputation as a sage rests mainly on the books he wrote after the war—starting with “Demian,” in 1919, and continuing through “Siddhartha” and “Steppenwolf,” in the nineteen-twenties.

Although Hesse was a German subject, he was a resident of Switzerland—he lived there on and off during his early life, and permanently starting in 1912—and he viewed the war fever that infected Germany from an ironic distance. (He nonetheless volunteered for the German Army, but was rejected because of his weak vision, the result of a childhood fireworks accident.) Early in the war, Hesse published an essay in which, while he still expressed hope for a German victory, he insisted on the need to preserve humane values and communication between nations. “This disastrous world war should serve to drum into us more insistently than ever the realization that love is better than hate,” he wrote. Even so mild an avowal earned Hesse the permanent hostility of many Germans. For the rest of his life, he would be attacked by incensed nationalists, both in the press and in regular deliveries of hate mail.

By the same token, in the nineteen-thirties Hesse’s hostility to Hitler was automatic. Nazism, with its blood sacrifice of the individual to the state and the race, represented the opposite of everything he believed in. In March, 1933, seven weeks after Hitler took power, Hesse wrote to a correspondent in Germany, “It is the duty of spiritual types to stand alongside the spirit and not to sing along when the people start belting out the patriotic songs their leaders have ordered them to sing.” Still, while he hosted and helped many émigré writers—including Thomas Mann, a good friend—Hesse never threw himself into anti-Nazi politics. Decker points out that, in the nineteen-thirties, he made a quiet statement of resistance by reviewing and publicizing the work of banned Jewish authors, including Kafka. But, tellingly, his own books were not banned by the Nazis until 1943.

It was Thomas Mann who, at the end of the First World War, published a book called “Reflections of a Nonpolitical Man”; but the title would have applied much better to Hesse, for whom being nonpolitical was a first principle. After all, if the world and the self are illusions, it is delusive to believe that they can be redeemed. To those who wanted him to take a more public stand against Hitler, Hesse replied that anti-fascism was as much a betrayal of the self as fascism: “What’s it got to do with me?” he asked. “I can’t change a thing. What I can do, though, is offer a little succor to those who, like me, strive in everything that they think and do to undermine the whole filthy business of striving after power and political supremacy.”

This attitude to politics and history is characteristic of what Hegel called “the beautiful soul”—one who remains unstained by the world because he declines to engage with it. The phrase was invented by Goethe, who used it in his “Confessions of a Beautiful Soul,” a fictional memoir in which a Pietist noblewoman describes her spiritual life. Hesse, by analogy, might be called an ugly soul, one who is so occupied with his own spiritual distempers that the outside world barely makes an impression. This is also a key to Hesse’s appeal to young readers, who seldom see beyond the limits of the self. But the complete integrity of Hesse’s self-absorption is what guarantees the permanence of his work. As long as people struggle with the need to be themselves, and the difficulty of doing so, he will be a living presence—which is even better, perhaps, than being a great writer. ♦

This article appears in the print edition of the November 19, 2018, issue, with the headline “The Art of Failure.”

 

Jamal Khashoggi’s Final Words—for Other Journalists Like Him


October 20, 2018


On October 3rd, the day after Jamal Khashoggi disappeared, the Washington Post received a final column left behind with his assistant when he went off to Turkey to get married. It was, in seven hundred words, poignant and personal and epically appropriate, considering his fate. “The Arab world was ripe with hope during the spring of 2011. Journalists, academics and the general population were brimming with expectations of a bright and free Arab society within their respective countries,” he opined. “They expected to be emancipated from the hegemony of their governments and the consistent interventions and censorship of information.” Instead, rulers grew ever more repressive after the short-lived Arab Spring.

Today, hundreds of millions of people across the Middle East “are unable to adequately address, much less publicly discuss, matters that affect the region and their day-to-day lives,” Khashoggi wrote. They are either “uninformed or misinformed” by draconian censorship and fake state narratives. As the headline of his last published words asserted, “What the Arab world needs most is free expression.”

In his death, Khashoggi, a Saudi journalist and former government supporter who became a vocal and fearless critic of the current Saudi crown prince, has galvanized global attention far more than he was able to do during his life. The horrific details of his murder and dismemberment have had an effect he would never have imagined—putting into serious question the fate of a Saudi leader, the state of U.S.-Saudi relations, American foreign-policy goals in the world’s most volatile region, and even policies that have kept dictators in power. The repercussions are only beginning.

But Khashoggi was hardly a lone voice decrying political repression in the Middle East, as he acknowledged in his final Post column. Saudi Arabia may be the most cruel and ruthless government in the region, but it uses tactics embraced by dictators, sheikhs, and Presidents across twenty-two countries.

In 2014, Egypt’s military-dominated government seized all print copies of the newspaper Al-Masry Al-Youm, whose name means “The Egyptian Today.” Al-Masry Al-Youm is that rare private newspaper in the Arab world where young reporters once dared to question government policies in hard-hitting editorials and groundbreaking journalism. “The Egyptian government’s seizure of the entire print run of a newspaper, al-Masry al Youm, did not enrage or provoke a reaction from colleagues. These actions no longer carry the consequence of a backlash from the international community,” Khashoggi wrote. “Instead, these actions may trigger condemnation quickly followed by silence.”

The world, particularly the West, is partly culpable for looking the other way, he wrote. It is a tragic irony that the world is paying attention to Khashoggi’s death, yet still not making an issue of a sweeping problem that could determine the future of a region of twenty-two countries and four hundred million people. On Thursday, the U.S. Treasury Secretary, Steve Mnuchin, announced that he would not attend the Saudi investment conference known as “Davos in the Desert,” which is pivotal to the crown prince’s plans to modernize the kingdom’s oil-reliant economy. The British trade minister, the French and Dutch finance ministers, and the president of the International Monetary Fund also backed out after Khashoggi’s disappearance. But no foreign government is addressing the broader political practices in any other country, or any other case, in the region.

In his column, Khashoggi drew attention to imprisoned colleagues who receive no coverage. “My dear friend, the prominent Saudi writer Saleh al-Shehi, wrote one of the most famous columns ever published in the Saudi press,” Khashoggi noted. “He unfortunately is now serving an unwarranted five-year prison sentence for supposed comments contrary to the Saudi establishment.” Shehi, who had more than a million followers on Twitter, was charged with “insulting the royal court” for his statements about widespread government corruption in his columns for the newspaper Al Watan and on a local television program.


Michael Abramowitz, the President of Freedom House and a former national editor at the Washington Post, told me that Khashoggi rightly identified the broader stakes. “Khashoggi’s final column accurately pinpointed the appalling lack of political rights and civil liberties in much of the Arab world, especially the right to freely express oneself,” he said. Khashoggi began his last piece by citing Freedom House’s 2018 report—and the fact that only one Arab country, Tunisia, is ranked as “free.” Abramowitz told me, “What is especially sad is that, while we are properly focussed on the outrageous actions by the Saudi government to silence one critic, we must also remember that countless other bloggers, journalists, and writers have been jailed, censored, physically threatened, and even murdered—with little notice from the rest of the world. And, in some cases, notably Egypt, conditions have deteriorated.”


In the Gulf states, Human Rights Watch chronicled a hundred and forty cases—a number chosen based on the original character limit on Twitter, though there are actually many, many more—where governments have silenced peaceful critics simply for their online activism. Among the most famous is Raif Badawi, a young Saudi blogger who ran a Web site called the Saudi Liberal Network that dared to discuss the country’s rigid Islamic restrictions on culture. One post mocked the prohibition against observing Valentine’s Day, which, like all non-Muslim holidays, is banned in Saudi Arabia. In 2014, he was sentenced to ten years in prison, a thousand lashes, and a fine that exceeded a quarter million dollars. (I wrote about his case in 2015.)

Badawi’s sister Samar—who received the 2012 International Women of Courage Award at a White House ceremony hosted by Michelle Obama and Secretary of State Hillary Clinton—was arrested in July. When the Canadian Foreign Minister, Chrystia Freeland, tweeted her concern about the Badawi siblings, in August, the kingdom responded by expelling the Canadian Ambassador, recalling its envoy, freezing all new trade and investment, suspending flights by the state airline to Toronto, and ordering thousands of Saudi students to leave Canada. (I wrote about the episode that month.)

In Bahrain, Nabeel Rajab, one of the Arab world’s most prominent human-rights advocates, is languishing in jail after being sentenced to five years for tweeting about torture in the tiny sheikhdom and criticizing Saudi Arabia’s war in Yemen. In the United Arab Emirates, Ahmed Mansoor, who ran a Web site focused on reforms, was sentenced to ten years for social-media comments calling for reform.

“The Arab people are desperate for real news and information, and Arab governments are desperately trying to make sure they never get that,” Sarah Leah Whitson, the executive director of Human Rights Watch’s Middle East and North Africa division, told me. “Uncensored communication on social media promised journalists and writers in the Middle East the greatest hope to freely exchange ideas and information, but it’s also why Arab governments, so terrified of the voices of their own citizens, rushed to pass laws criminalizing online communications and jailing writers and activists for mere tweets.”


The wider world bought into the Saudi narrative that Mohammed bin Salman, the crown prince and de-facto ruler, was intent on opening up the kingdom. Perhaps tellingly, it is the free press elsewhere in the world that first asked questions about Khashoggi’s October 2nd disappearance, in the Saudi consulate in Istanbul, where he had gone to get papers so he could marry. “The world should take note that it is the free press, not the Saudi government or the White House, that has doggedly searched for the truth about what happened to Mr. Khashoggi,” the Democratic senator Patrick Leahy, of Vermont, said in a statement. “It reminds us, once again, that a free press is an essential check against tyranny, dishonesty, and impunity.”

 

Ambassador Nikki Haley’s Resignation–A Loss For US Diplomacy


October 13, 2018


by John Cassidy

https://www.newyorker.com

As Brett Kavanaugh was listening to his first legal arguments as a Justice on the Supreme Court, on Tuesday morning, and liberal America was getting even more angry and depressed, Nikki Haley popped up to announce that she’s resigning at the end of the year as the U.S. representative at the United Nations.

Sitting next to Donald Trump in the Oval Office, Haley said that it had been “an honor of a lifetime” to hold the U.N. job, which comes with a plush suite at the Waldorf Towers. Preëmpting the obvious question about why she is leaving the Administration at this juncture, she added, “No, I am not running for 2020.”

That didn’t prevent the publication of a slew of pieces speculating about Haley’s motivations, including one from my colleague Eric Lach. On Wednesday morning, one of the most-read pieces on the Washington Post’s Web site was headlined, “ ‘A rising star’: Haley poses a potential threat to Trump even if she doesn’t run in 2020.”

All this interest in Haley’s intentions is understandable. As an Indian-American woman, the daughter of immigrants, she stands out from the sea of white men at the helm of the Republican Party. That itself makes her a “story”—one that, someday, could threaten to knock Trump off the home pages. But Haley isn’t just a G.O.P. oddity. She’s a canny and ambitious politician who has the ability to shape-shift seamlessly.

In getting elected Governor of South Carolina as part of the Tea Party wave, in 2010, she campaigned against the “good old boys” who dominated politics in the Palmetto State. Then she worked alongside them.

In February, 2016, she called Trump “everything a governor doesn’t want in a President.” Four months later, she endorsed him. At the U.N., she enthusiastically defended some of the most brazenly reactionary and isolationist foreign policies that any modern-day U.S. Administration has put forward, and now, as her tenure comes to an end, an editorial in the Times says that she will be missed. Anybody who can simultaneously retain the support of Trump and the Times’s editorial board should never be underestimated.

Perhaps to quell some of the speculation about Haley’s political ambitions, people close to her leaked the suggestion that she’s looking to make money in the private sector. That may well be true (her most recent federal ethics report listed debts of up to a million dollars), but it doesn’t detract from the main takeaway from her resignation, which should provide some succor for anybody eager to see the back of Trump: he won’t be President forever, and some politically astute people, Haley included, are already looking ahead. Rather than waiting for the dénouement of the Trump story, which is unlikely to be pretty, she’s getting out while the getting is good.

That’s a smart move. A month from now, if the opinion polls are correct, Trump will be facing a Democrat-controlled House of Representatives and an inability to get any legislation passed without making concessions. To be sure, even if the midterms go the House Democrats’ way, they might overplay their hand, as the House Republicans did during the second term of the Clinton Administration, rushing to impeach the President and generating a backlash from voters. But that doesn’t have to happen. If Democratic leaders get their way, they will wait for Robert Mueller to file his report on the Russia investigation, and, in the meantime, torment the White House with subpoenas demanding the release of Trump’s financial records, including his tax returns.

Another potential danger to Trump is the economy, which is currently in the category of “as good as it gets.” Going into 2019, the fiscal stimulus from the December, 2017, tax-cut bill and the February, 2018, bipartisan spending deal will start to wear off, and G.D.P. growth will probably fall back. In addition, as the Federal Reserve continues to raise interest rates, the stock market, which has enjoyed a record-breaking run in the past ten years, could take a sustained dive at any moment. With the unemployment rate at 3.7 per cent and the Dow trading above twenty-five thousand, Trump’s approval rating is in the low forties. Where will it be if the market crashes and the economy falters? (The eight-hundred-point fall in the market on Wednesday could be an augur of things to come.)

Rather than languishing in depression, people opposed to Trump should follow Haley’s example and look forward. Handicapped by an antiquated and blatantly inequitable electoral system, the Democratic Party desperately needs to reverse at least part of the gains that Republicans have made away from the coasts, and outside of big cities, in the past thirty years.

There is another huge challenge in the nation’s courts, where Kavanaugh’s confirmation marked the culmination of a multi-decade conservative campaign to wrest control from moderate and liberal judges. But in democracies things can change. Things do change. And, in fewer than four weeks, there will be an invaluable opportunity to start the process.

This post has been updated to account for Wednesday’s stock-market decline.

What Termites Can Teach Us


September 11, 2018

In 1781, Henry Smeathman wrote a report for the Royal Society celebrating termites as “foremost on the list of the wonders of the creation” for “most closely imitating mankind in provident industry and regular government.” Termites, he wrote, surpassed “all other animals” in the “arts of building” by the same margin that “Europeans excel the least cultivated savages.”–Amia Srinivasan

Do you know why? Because there are no politicians messing up their harmonious existence. Do you think we can live in societies without politicians? Those who invented “democracy” did not study how termites are able to live without politics. –Din Merican


Roboticists are fascinated by their “swarm intelligence,” biologists by their ability to turn grass into energy. But can humans replicate their achievements?

New termite colonies are founded on windless evenings, at dusk, after the rain. Most termites have neither eyes nor wings, but every mature colony has a caste of translucent-winged seeing creatures called alates, which are nurtured by the colony’s workers until they are ready to propagate. When the time comes—given the right temperature and humidity—colonies release thousands of alates into the air, an event called “swarming.” Most of the nutrient-rich alates are eaten by animals as they glide to the ground. The few that survive shed their wings and pair off, male and female. Then they burrow into the earth, future kings and queens. The pair will remain there, alone in a dark hole, for the rest of their lives. They bite off the ends of their antennae, reducing their acute sensitivity; perhaps it’s a means of making more bearable a life wholly given over to procreation. They mate, and the queen begins to lay her eggs. She will lay millions in the course of her decades-long life—the longest life span of any insect. Her translucent white abdomen, constricted by the tight black bands of her exoskeleton, swells to the size of a human thumb, leaving her unable to move. Her tiny head and legs flail while her pulsating body is fed and cleaned by her offspring.

 

The South African naturalist and poet Eugène Marais described the queen’s fate in “The Soul of the White Ant” (first published, in Afrikaans, in 1934): “Although you will apparently be an immobile shapeless mass buried in a living grave, you will actually be a sensitive mainspring. You will become the feeling, the thinking, the seeing, of a life a thousand times greater and more important than yours could ever have become.”

Humans have often looked at insects and seen themselves, or the selves they would like to be. Early-modern European naturalists peered into termite mounds, anthills, and beehives and saw microcosms of well-ordered states: monarchs, soldiers, laborers. (There was no general recognition that bee “kings” were actually female “queens” until the sixteen-seventies, when a Dutch microscopist, Jan Swammerdam, pointed out that bee kings had ovaries.) In 1781, Henry Smeathman wrote a report for the Royal Society celebrating termites as “foremost on the list of the wonders of the creation” for “most closely imitating mankind in provident industry and regular government.” Termites, he wrote, surpassed “all other animals” in the “arts of building” by the same margin that “Europeans excel the least cultivated savages.”

According to Smeathman, the “perfect” alate caste “might very appositely be called the nobility or gentry, for they neither labour, or toil, or fight, being quite incapable of either,” but are instead devoted to founding new colonies. (In 1786, Smeathman published a plan for the settling of freed black slaves in a new colony, on the West African coast, where he had done his termite studies.) He viewed the laborers, meanwhile, as “voluntary subjects” who served the “happy pair” of king and queen. Just over a century later, in “Mutual Aid” (1902), the Russian thinker and revolutionary Peter Kropotkin exalted the coöperative habits of termites as a model, and a scientific basis, for Communism. In “Civilization and Its Discontents,” Freud presented the termite mound as an example of the perfect sublimation of the individual will to the demands of the group—a sublimation that, he said, would continue to elude mankind.

 

Some have seen in termites a darker vision for humanity, a warning rather than a guide. The early-twentieth-century American entomologist William Wheeler began as a believer in the political example of termites and ants, detecting in their colonies a Deweyan ethos, both communitarian and democratic. But, by the late nineteen-twenties, Wheeler had begun to worry that the social insects represented a sort of evolutionary cul-de-sac, which foretold “the eventual state of human society”: “very low intelligence combined with an intense and pugnacious solidarity of the whole.” For Wheeler, the harmony of insect society was made possible by its solution to what he called the “problem of the male.” Males, Wheeler said, are the “antisocial sex,” responsible for the “instability and mutual aggressiveness so conspicuous among the members of our own society.” Termites and ants, with their castes of sterile male workers and soldiers, had done away with the problem of the male. But humans could do so only at the cost of civilization, Wheeler warned, for “all progress . . . is initiated by a relatively small portion of the male population, whose restlessly questing intellects are really driven by the unsocial dominance impulses of their male mammalian constitution and not by any intense desire to improve society.” Among those products of male striving Wheeler counted “sciences, arts, technologies,” along with “philosophies, theologies, social utopias.” He did not appear to worry about what the termite life might mean for women, or about the possibility that the queen was not a queen at all but a slave.

 

Termites are insects of the infraorder Isoptera. They have bulbous, eyeless heads and teardrop-shaped bodies that are often translucent, exposing a swirl of guts and digesting plant matter. They are eusocial creatures—eusociality being the highest level of animal sociality recognized by sociobiologists, characterized by a division of reproductive labor between fertile and non-fertile castes, and by the collective care of the young. Until 2007, Isoptera was considered its own distinct order, and it had been classified that way for the previous hundred and seventy-five years. But phylogenetic studies confirmed that, despite appearances, termites are a kind of cockroach, and so Isoptera was reclassified under the cockroach order, Blattodea. This demotion has not helped the termite cause. Termites already suffer in the comparison with other eusocial insects: they lack the charisma of bees, with their summery associations and waggle dances, and do not receive the same recognition as ants for their work ethic and load-bearing capacities. They also have a reputation for destruction. In the United States, termites have been estimated to consume somewhere between $1.5 and $20 billion worth of property every year. At times they go straight for the cash: in 2011, termites consumed around ten million rupees’ worth of banknotes in a branch of the State Bank of India in Uttar Pradesh; two years later, termites munched part of the way through the savings of an elderly woman in Guangdong, who had wrapped four hundred thousand yuan in plastic and put it in a drawer.

The Australian Mastotermes darwiniensis, the oldest and one of the largest species of termite—most closely resembling the wood-eating cockroach from which termites are thought to have evolved—is reported to have performed legendary feats of chewing, including reducing a house to rubble while its owner was travelling for two weeks.

In fact, only twenty-eight of approximately twenty-six hundred identified species of termite are invasive pests. (If they all were, we would be in big trouble: collectively, termites outweigh humans ten to one.) What’s more, noninvasive termites are ecologically crucial, in irrigating land, protecting against drought, and enriching the soil. They may also have served as a crucial food source for our own australopithecine ancestors. And yet termites are generally unloved.

While I was reading Lisa Margonelli’s new book, “Underbug: An Obsessive Tale of Termites and Technology,” I discovered that everyone I knew had an unsavory termite tale. A friend who lives in Los Angeles is disgusted by the piles of black beads she finds near neat holes in her hardwood floors, which I unhelpfully identified as the fecal pellets, or “frass,” of dry-wood termites. Another friend, in Berkeley, swears that she can hear termites chewing when she closes her eyes at night, despite an exterminator’s assurances that her house is not infested. As a small child in suburban New Jersey, I discovered a piece of wood in our back yard that was covered in a maze of delicate etching. I was thrilled with the beauty of it: the smooth, shallow holes and grooves had the look of secret runes—the writing, I imagined, of Druids or fairies. I took it in to my mother. She told me that this was not magic but the sign of a termite infestation, and made me throw it out.

Termites may be hard to love, but they should be easy to admire. Termite mounds are among the largest structures built by any nonhuman animal. They reach as high as thirty feet, which, proportional to the insects’ tiny size, is the equivalent of our building something twice as tall as the 2,722-foot Burj Khalifa, in Dubai. The mounds are also fantastically beautiful, Gaudíesque structures, with rippling, soaring towers, in browns and oranges and reds. The interior of a termite mound is an intricate structure of interweaving tunnels and passageways, radiating chambers, galleries, archways, and spiral staircases. To build a mound, termites move vast quantities of mud and water; in the course of a year, eleven pounds of termites can move about three hundred and sixty-four pounds of dirt (in the form of mud balls) and thirty-three hundred pounds of water (which they suck into their bodies).
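As a quick sanity check on that scaling claim, here is the back-of-envelope arithmetic, using round figures assumed for illustration (a worker termite of roughly one centimetre, a human of 1.8 metres; neither number is from the article):

```latex
% Back-of-envelope check with assumed round figures: a ~1 cm worker
% building a 30-ft (~900 cm) mound stacks roughly 900 body lengths.
\[
  \frac{900\ \text{cm}}{1\ \text{cm}} \approx 900,
  \qquad
  900 \times 1.8\ \text{m} \approx 1{,}600\ \text{m} \approx 2 \times 828\ \text{m}.
\]
% Scaled to a 1.8 m human, that is a tower of about 1,600 m, roughly
% twice the 828 m (2,722-ft) Burj Khalifa, consistent with the claim.
```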

The point of all this construction is not to have a place to dwell—the colony lives in a nest a metre or two below the mound—but to be able to breathe. A termite colony, which may contain a million bugs, has about the same metabolic rate as a nine-hundred-pound cow, and, like cows (and humans), termites breathe in oxygen and expel carbon dioxide. The mound acts as a lung for the colony, managing the exchange of gases, leveraging small changes in wind speed to inhale and exhale. Also like lungs, a termite mound has a role as a secondary diffusion system, which carries oxygen to and carbon dioxide away from the far reaches of the underground termite nest. The mound functions as a humidifier, too, tightly regulating moisture levels across wet and dry seasons. Some termite species partly outsource their digestion through the practice of fungiculture—the farming of a grass-eating fungus, which they store, tend, and feed in an elaborate garden maze below the mound.

Termites appear to do all this without any centralized planning: there are no architects, engineers, or blueprints. Indeed, the termite mound is not so much a building as a body, a self-regulating organic process that continuously reacts to its changing environment, building and unbuilding itself. Its complex behavior emerges, as if by magic, from its simple constituents. It is generally agreed that individual termites are not particularly intelligent, lacking memory and the ability to learn. Put a few termites into a petri dish and they wander around aimlessly; put in forty and they start stampeding around the dish’s perimeter like a herd. But put enough termites together, in the right conditions, and they will build you a cathedral.

“Underbug” is more about humans who are preoccupied with termites than about termites themselves. Specifically, Margonelli is concerned with the sort of human whose interest in termites isn’t confined to wanting to kill them. (About half the scientific papers written about termites from 2000 to 2013 involve their extermination.) These entomologists, geneticists, synthetic biologists, mathematical biologists, microbial ecologists, roboticists, computer scientists, and physicists are drawn to termites for a variety of reasons, not all of which are compatible. Some of these scientists, the minority, simply appear to be seduced by termites, and want to understand how they do what they do. One such is J. Scott Turner, a physiologist who, before turning to termites, placed alligators in wind tunnels in order to understand how they regulate their body temperature. By pumping propane gas down termite mounds, he was able to show that they function as lungs, not as chimneys that allow hot air to escape, which had been the previous assumption. (Putting things into a mound and seeing what happens is a favored mode of termite experimentation; Turner and his team have also experimented with plastic beads and molten aluminum. One convenience of working with termites is that there are few regulations concerning their treatment.)

Turner is a proponent of what he calls the “extended organism” thesis. (It’s meant as a variant of, and ultimately as an alternative to, Richard Dawkins’s “extended phenotype” model.) In Turner’s view, the physical termite mound—with its mud tunnels and walls, digested wood and grass and fungus—is part of the termite, rather than part of the environment on which the termite acts. The entire mound—insects plus structure—is thus a living thing: a self-regulating physiological and cognitive system, with a sense of its own boundaries, a memory, and a kind of collective intentionality.

The extended-organism hypothesis also recalls an older idea: that the termite, bee, or ant colony is a “superorganism.” This term was coined by William Wheeler in 1911, though the idea dates back to Darwin, who saw the superorganism as a solution to the “problem” of eusociality. The problem is this: if natural selection favors those organisms which are best at reproducing, then how do castes of nonreproductive insects ever evolve? One way to address the problem is to regard the colony as a whole as the unit of selection. The sterile worker should be thought of not as an individual organism but as a “well-flavored vegetable,” in Darwin’s phrase, produced by the queen.

Today, most evolutionary theorists favor the “inclusive fitness” explanation of eusociality, a theory developed by W. D. Hamilton in the early nineteen-sixties. Hamilton showed mathematically that altruism can be a beneficial reproductive strategy for an organism, so long as the altruistic act benefits another organism to which it is sufficiently genetically similar. As a human being, the obvious way for me to reproduce my genes is to have biological children, who will inherit half of my genes. But I can also reproduce my genes by helping my sister, who shares on average half of my genetic material, nurture and protect her own children, who will share a quarter. If sacrificing my life will enable my sister to have more than twice as many children as I would have had, my sacrifice is “worth it,” from the perspective of my selfish genes. E. O. Wilson, though an early evangelist for Hamilton’s theory, has recently argued for a return to the superorganism as a solution to Darwin’s problem. In this, Wilson is very much in the minority; Richard Dawkins has called his criticisms of inclusive fitness “downright perverse.”
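For readers who want Hamilton’s point in symbols, the textbook statement of his rule is below, with the sibling example from the paragraph above plugged in (a sketch of the standard inequality, not Hamilton’s full derivation):

```latex
% Hamilton's rule: an altruistic act is favored by selection when
%     r * B > C
% r: relatedness to the beneficiary, B: extra offspring the beneficiary
% gains, C: offspring the altruist forgoes.
\[
  rB > C
\]
% The sibling example: forgoing my own n children (C = n) for a full
% sister (r = 1/2) pays off when (1/2)B > n, i.e., B > 2n: she must gain
% "more than twice as many children as I would have had."
```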

Most of the other scientists Margonelli follows are interested in termites as a means to human ends, and aim at simplifying their complexity to something replicable. Consider termites’ ability to convert dead plant matter into energy. They do this with the help of the hundreds, sometimes thousands, of species of microbes—bacteria and protists—that live in their guts, ninety per cent of which are found nowhere else on earth. Some of these microbes are themselves, like the termite superorganism, composite animals. The protist Trichonympha, found in some termite guts, is itself host to colonies of symbiotic bacteria. Termites and their gut microbes are thought to have coevolved between two hundred and fifty million and a hundred and fifty-five million years ago, when some cockroaches ingested wood-eating microbes, and then began sharing what entomologists politely call “woodshake”—a mixture of feces, microbes, and plant matter—among themselves, mouth to mouth, and mouth to anus.

 

This practice, known as “trophallaxis” (another of William Wheeler’s coinages), allows a communal pooling of digestive capacity, which can then be handed down from one generation to the next. (With the rise of fecal transplants to cure C. difficile infection and other gastrointestinal disorders, trophallaxis is gaining popularity among humans; the F.D.A. has, since 2013, officially classified human feces as a drug.) The Department of Energy says that the U.S. can produce 1.3 billion tons of dry biomass—from harvested trees, cornstalks, high-energy grasses, and the like—without taking anything away from regular agricultural uses. If humans can crack the code to termite digestion, the U.S. could turn the stuff into nearly a hundred billion gallons of biofuel a year—what’s sometimes called “grassoline”—and thereby reduce automobile emissions by eighty-six per cent.


The search for a termite-inspired grassoline is a major goal of the emerging field of synthetic biology, in which biological systems—metabolic pathways, cells, organisms—are reëngineered to produce things humans want, including biofuels and precursors of drugs. One of the field’s leaders is Jay Keasling, who runs the Department of Energy’s Joint BioEnergy Institute, or J.B.E.I. Keasling imagines a fully modular system of synthetic biology, with different companies producing different off-the-shelf parts—empty cell “bags,” the chromosomes with which to program them, the molecules to “boot” them up—that can readily be assembled to produce the desired chemical output. Manufacturing a termite biofuel would require identifying the genes for wood eating from the termite’s microbe colony and inserting them into a cellular bag. The first challenge is overcoming the fickleness of microbes: less than one per cent of them can be isolated and grown in a petri dish. This used to mean that it was nearly impossible to map the genomes of the termite’s wood-eating microbes. But in 2004 a team led by the Berkeley earth scientist Jill Banfield came up with “metagenomics,” a process of sequencing the genes of an entire microbial community at once. In 2007, Nature published a metagenomic analysis of gut microbes from a Costa Rican termite; puzzle-piecing together fifty-four million base pairs of DNA, researchers identified more than a thousand genes that might be for digesting wood. A termite biofuel seemed not far off.

Yet the synthetic biologists at J.B.E.I. still have not produced a grassoline that can compete with ordinary fossil fuels. (They have turned their attention to the production of other biofuels, including those in demand by the military.) Margonelli suggests two reasons for this failure. First, the termite’s gut turned out to be too complex to understand, let alone imitate. Phil Hugenholtz, one of the researchers who helped sequence the gut microbes of the Costa Rican termite, jokes that “you might as well go and hook your car to a bunch of termites.” Second, the biology itself seems to resist being reëngineered in the way that synthetic biologists would like. “What we’re doing,” Héctor García Martín, a physicist who works with Keasling, says, “is taking a bug”—like E. coli—“with no interest in producing biofuels and forcing it to produce them.” García Martín goes on to cite the microbiologist Carl Woese, who observed that, unlike electrons, cells have a history—something like memories of what they have metabolized in the past. These “memories” are encoded not in the cells’ DNA but somewhere else in their chemistry, so it may be misguided to think in terms of swapping genetic programs in and out of cell “bags.” The willingness, on the part of a physicist like García Martín, to talk about the “memories” and “interests” of biological systems is surprising. But it reflects a larger shift among synthetic biologists away from a belief in the fundamentally mechanical nature of life.

In 2014, Keasling and three other prominent synthetic biologists published a paper in the journal Cell, in which they declared it an “open question . . . whether biology is genuinely modular in an engineering sense”—that is, a predictable aggregation of rudimentary components—“or whether modularity is only a human construct that helps us understand biology.” But the spectre raised by termites, microbes, and other organisms that are at once simple and devilishly complex is that the very metaphor of modularity might be misleading: that, as long as we think of living systems as machines, we are guaranteed not to understand them.

Another reason termites interest engineers is that they are a paradigm of “swarm intelligence”—highly complex behavior that emerges from the interaction of individual units in the absence of a centralized command. Each termite is presumed to be governed by a set of simple rules, which dictate particular actions—crawl, turn, dig, stack a mud ball—in response to specific triggers from the environment or from other termites. But it’s unclear precisely what mechanism produces termites’ group intelligence—which chemical or physical signals trigger which actions, and by what rules.

Since 1959, the dominant theory has been “stigmergy,” first developed by the French biologist Pierre-Paul Grassé. The term comes from the Greek stigma (mark or sign) and ergon (work or action); the idea is that a trace left behind in the environment by one agent triggers further action by other agents, creating a positive-feedback loop. Stigmergy seeks to explain how extremely simple creatures, with no capacity for communication, can achieve the appearance of joint decision-making. In the case of termites (stigmergy has also been used to explain the complex emergent behavior of other simple creatures, such as multicellular bacteria), scientists speculate that the action-triggering “trace” is found in their saliva. A termite picks up a mud ball, gets some of its saliva on it, and drops it, presumably at random; other termites, triggered by the saliva scent, start stacking mud-and-saliva balls on top of the first ball, strengthening the signal; eventually, the mud balls turn into a wall or a pillar.
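
The positive-feedback loop at the heart of stigmergy is simple enough to simulate. The sketch below, written in Python for this piece, is not Grassé’s model or any published one; the agents, the one-dimensional strip of ground, and all the numbers are invented. Wandering “termites” drop mud wherever the scent of past deposits is strongest, and a few accidental drops snowball into a handful of tall pillars.

```python
import random

# A made-up, minimal stigmergy simulation: agents wander a strip of
# ground and drop mud balls; each deposit leaves a scent that makes
# further deposits at that spot more likely, so random drops snowball
# into pillars. All parameters are invented for illustration.

WIDTH = 60        # sites along the strip
AGENTS = 25       # wandering "termites"
STEPS = 2000
BASE_DROP = 0.01  # chance of dropping mud at an unscented site
GAIN = 0.2        # extra chance per unit of scent already present
DECAY = 0.99      # the scent slowly evaporates each step

def simulate(seed: int = 0) -> list[int]:
    random.seed(seed)
    scent = [0.0] * WIDTH   # the saliva-like "trace" left by deposits
    height = [0] * WIDTH    # mud stacked at each site
    positions = [random.randrange(WIDTH) for _ in range(AGENTS)]
    for _ in range(STEPS):
        for i, pos in enumerate(positions):
            # the trace in the environment, not any plan, decides where mud goes
            if random.random() < min(BASE_DROP + GAIN * scent[pos], 1.0):
                height[pos] += 1
                scent[pos] += 1.0   # positive feedback: deposits attract deposits
            positions[i] = (pos + random.choice((-1, 1))) % WIDTH
        scent = [s * DECAY for s in scent]
    return height

if __name__ == "__main__":
    heights = simulate()
    print("tallest pillars:", sorted(heights, reverse=True)[:5])
```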

In the nineteen-nineties, computer scientists began programming virtual termites that built “walls” via the principles of stigmergy. These virtual termites could build two-dimensional shapes, but they could not produce anything like the complex three-dimensional architecture of real termites. And though stigmergy might explain how termites build, it does not readily explain why they so often unbuild, dismantling and modifying their work as they go. Recent studies suggest that some individual termites have a tendency to lead, while others have a tendency to follow—meaning that what gets the stigmergic process going is not a random action but something more systematic. It also appears that termites are not so much industrious drones as they are denizens of a post-capitalist Utopia: in a petri dish of twenty-five termites, only five appear to work at any one time. It seems likely that stigmergy is, at best, just one of several mechanisms that produce the complex group behavior of termites. For many researchers, identifying these mechanisms is the key to the future of robotics and A.I.: not one smart machine but a hyper-smart flock of thousands of small, cheap, dumb machines.

In 2014, an issue of Science featured, on its cover, a piece on TERMES, a termite-inspired robot created by the computer scientist Radhika Nagpal and her team at Harvard’s Wyss Institute for Biologically Inspired Engineering. TERMES are adorable, semicircular, tissue-box-size robots that move on four “whegs” (short for “wheel legs,” a feature inspired by a cockroach’s climbing behavior) and lift and move blocks with their clawlike heads. Each TERMES robot is programmed with an algorithm that tells it what basic action (move forward, turn, pick up block, place block) to perform next, based on the input its sensors receive about its environment. By following a sequence of a hundred or so programmed steps, each robot can construct a preordained structure: a wall, a staircase, or a four-sided building. What is more, a group of TERMES, each programmed with the same set of individual instructions, will collectively build the same structure, without any centralized command or inter-robot communication; if one robot detects another in its way, it simply pauses until it stops sensing the other robot, and then gets back to its regular programming. The robots are built on the principle of what Nagpal and her team call “extended stigmergy”: the embedding of design information in the robots’ environment, rather than in the robots. Each building block, for example, can be given a unique label, allowing the robots to use the blocks as landmarks. In some versions of the TERMES system, the robots themselves tag the blocks as they build.
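
A rough guess at the flavor of such a controller, not the Wyss Institute’s actual code: in the Python sketch below, every robot runs the same sense-act loop. The sensor fields, block labels, and rule table are all invented; the only “communication” is what a robot can read off its immediate surroundings, including the labels on the blocks themselves.

```python
from dataclasses import dataclass

# A toy sense-act loop in the spirit of the TERMES description above.
# The sensor fields, block labels, and rule table are all invented;
# the real robots' algorithms are more elaborate. The point is the
# shape of the thing: no messages, no central command, just local
# readings (extended stigmergy) mapped to a handful of basic actions.

@dataclass
class Sensors:
    robot_ahead: bool    # another robot detected in the way
    block_label: str     # landmark label read from the block underfoot
    carrying_block: bool

# one shared rule table, identical in every robot
NEXT_ACTION = {
    ("start", False): "pick_up_block",
    ("start", True):  "move_forward",
    ("ramp",  True):  "move_forward",
    ("site",  True):  "place_block",
    ("site",  False): "turn_around",
}

def step(s: Sensors) -> str:
    if s.robot_ahead:
        return "pause"  # wait until the other robot clears, then resume
    return NEXT_ACTION.get((s.block_label, s.carrying_block), "turn_around")

if __name__ == "__main__":
    print(step(Sensors(True, "site", True)))     # pause
    print(step(Sensors(False, "site", True)))    # place_block
    print(step(Sensors(False, "start", False)))  # pick_up_block
```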

When the Science piece came out, there was a brief media frenzy, with some journalists predicting that TERMES would end up colonizing Mars, and others warning of the coming robot apocalypse. Still, TERMES are limited: they can build only on a black-and-white floor, in quiet rooms, and with magnetized blocks. Indeed, these are features of extended stigmergy: TERMES rely heavily on the orderliness of their environment to be able to build. Real termites, by contrast, are masters at responding to the novel and the unpredictable. “I don’t really know how to do that,” Nagpal says. What is not clear is whether TERMES ever will be termites—whether a more sophisticated version of stigmergy will eventually allow robots to mimic their biological models, or whether stigmergy, like modularity, is a framework that can take engineers only so far.

The Wyss Institute’s most famous robot is the RoboBee, a mechanical bee, smaller than a paper clip, that can take off, fly, and land. Although research for the RoboBee was funded by the National Science Foundation, its creator, Robert Wood, has previously been funded by DARPA and the Air Force. (J. Scott Turner, of the extended-organism thesis, has also been funded by the military.) An influential paper published by the Center for a New American Security, “Robotics on the Battlefield Part II: The Coming Swarm,” cites the RoboBee as evidence of the possibility of 3-D-printed, less-than-a-dollar-apiece drones that, in vast quantities, could “flood” civilian and combat areas as “smart clouds.” As Margonelli writes, “Everything termites do, the military would like to do, too.” The military would like to have weapons that are at once tiny (like termites) and massive (like swarms)—weapons that are easy to maneuver and hard to detect, but also smart and lethal. One researcher in Nagpal’s lab tells Margonelli, “We can’t stop the technology because it might be used for bad.”

Indeed, synthetic swarm intelligence is already with us. A few years ago, the U.S. Navy began testing swarms of autonomous, self-organizing robotic speedboats. In 2012, Human Rights Watch and the Harvard Law School’s International Human Rights Clinic called for a preëmptive, international ban on the development of fully autonomous weapons. The same year, the Department of Defense issued a directive that stopped far short of banning autonomous weapons, requiring only that a human be somehow involved whenever they are used to deliver lethal force.

Mark Hagerott, the former deputy director of the Center for Cyber Security Studies at the Naval Academy, favors stringent restrictions on the development of swarming weapons, including limits on size (no smaller than a human), fuel sources, and numbers. He worries that, with both semi-autonomous and autonomous weapons, it is increasingly difficult to identify the crucial place where finger meets trigger. This matters, Hagerott says, because this is the place where empathy is exercised, when it is exercised, during warfare.

What is less often mentioned by critics of autonomous weapons is that there is something valuable in the high casualty rate of conventional warfare. If war costs states nothing but money, what is there to hold them back? What will stop a bellicose government from pursuing its foreign projects, if there are no body bags to focus its citizens’ outrage?

The termite is no longer what it was to earlier observers: a model of what humans could be—more coöperative and harmonious, less competitive and aggressive. Instead, it has become a resource to be harnessed for the achievement of our own, already established, ends. ♦

 

This article appears in the print edition of the September 17, 2018, issue, with the headline “Busy Bodies.”
  • Amia Srinivasan is a contributing editor of the London Review of Books and an associate professor of philosophy at the University of Oxford. Her book of essays will be published next year.

Crazytown: A Bob Woodward Book, an Anonymous New York Times Op-Ed, and a Growing Crisis for the Trump Presidency


September 8, 2018

Crazytown: A Bob Woodward Book, an Anonymous New York Times Op-Ed, and a Growing Crisis for the Trump Presidency

The Republican “resistance” goes public (sort of), and everyone freaks out.

On Wednesday morning, the White House press secretary, Sarah Huckabee Sanders, walked out to address the cameras stationed in front of the West Wing and offered one of the week’s most unintentionally revealing comments. The first reports were out about the contents of Bob Woodward’s damning new account of the Trump Administration, “Fear: Trump in the White House,” and she was trying, not all that successfully, to downplay and deny the book’s sorry chronicle of internal chaos, dissension, and dismay within the White House over the President’s behavior. “I haven’t read a lot of his books,” she said, of Woodward, before going on to dismiss his latest as fiction.


Trump is the eighth President of the United States to have been subjected to the Woodward treatment, and, had Sanders read his previous works, she would have known exactly what to expect: a devastating reported account of the Trump Presidency that will be consulted as a first draft of the grim history it portrays long after the best-sellers by Michael Wolff and Omarosa Manigault Newman have been forgotten. Merely dismissing it as fiction was never going to work. The book begins with what Woodward calls “an administrative coup d’état”—Trump’s former chief economic adviser deciding to steal key papers off the President’s Oval Office desk in order to stop him from pulling out of a South Korean free-trade deal as tensions escalated with North Korea last summer. The book ends with the President’s lawyer John Dowd quitting his legal team, concluding that he could no longer represent someone he believed to be “a fucking liar.”

As Sanders spoke to reporters on Wednesday, her White House colleagues told journalists that they had only just managed to obtain a copy of the book, which does not go on sale to the public until next week. Up until then, they were left with the excerpts in the Washington Post, the New York Times, and CNN, which mostly emphasized, as Dwight Garner put it, in the Times, that “if this book has a single point to drive home, it is that the president of the United States is a congenital liar.” Trump’s lies, of course, have been thoroughly covered territory throughout his Presidency, especially because so many of them occur in public. Amazingly, it is no longer big news when the occupant of the Oval Office is shown to be callous, ignorant, nasty, and untruthful.

Reading through a copy of the book obtained by The New Yorker, however, I was struck by a different theme: what Woodward has written is not just the story of a deeply flawed President but also, finally, an account of what those surrounding him have chosen to do about it. Throughout much of Trump’s first year in office, the shorthand had it that, while Trump was an inexperienced, and possibly even dangerous, newcomer to politics, he could be managed by the grownups he had invited into his government. Many of those grownups are now gone, fired by Trump or pushed out by scandal, and for months they have been telling people how bad it was, or how much they did to stop the President, or how much worse it could have been. Here, then, are some of the details of what they claim to have done to control Trump: Gary Cohn, who was the economic adviser until he resigned, early this year, swiped that order to withdraw from a free-trade agreement with South Korea, and another one to unilaterally withdraw from NAFTA. H. R. McMaster, the soon-to-be-fired national-security adviser, pondered quitting after the President ranted against U.S. allies and threatened to pull troops out of Afghanistan, Iraq, and elsewhere, but he remained in the job and convinced Trump, at least temporarily, not to go through with it. James Mattis, the generally unflappable Defense Secretary, told the President, “We’re doing this in order to prevent World War III here,” during a contentious meeting that exasperated him so much, according to Woodward, that he later complained Trump behaved like a “fifth- or sixth-grader.” Another session in the Pentagon with the national-security team and Trump went so famously badly that, afterward, the Secretary of State at the time, Rex Tillerson, called Trump a “fucking moron,” a detail that Woodward confirms here for posterity.

This is hardly a flattering picture of what the internal pushback to Trump looks like. Many of Woodward’s sources come across as caricatures of Washington power brokers, scheming against one another as they jockey for Trump’s favor, shamelessly flattering the President, fuming about insults and threatening to quit but never actually doing so. I was about halfway through the book on Wednesday afternoon when the news cycle interrupted my reading: an anonymous Op-Ed by a “senior official in the Trump administration” had just been published by the Times, praising the “unsung heroes” inside the White House who have secretly been members of the same clandestine “resistance” chronicled in the Woodward book. “It may be cold comfort in this chaotic era,” Anonymous wrote, “but Americans should know that there are adults in the room. We fully recognize what is happening. And we are trying to do what’s right even when Donald Trump won’t.”

It was as if one of Woodward’s sources had chosen to publish a real-time epilogue in the pages of the Times. Reading the Op-Ed, I immediately thought of an amazing passage in the book, which quoted a summary of a national-security meeting written by a White House official (and which never even made it into the news accounts about the book). It said, “The president proceeded to lecture and insult the entire group about how they didn’t know anything when it came to defense or national security. It seems clear that many of the president’s senior advisers, especially those in the national security realm, are extremely concerned with his erratic nature, his relative ignorance, his inability to learn, as well as what they consider his dangerous views.”

Both the Op-Ed and the book convey the laments of conservatives who, in many respects, are fine with the Trump agenda but not with the man. That is, for now, what passes for the Republican wing of the resistance. So far, it is mostly underground, or perhaps still largely nonexistent; we don’t really know. The Republicans who control Capitol Hill have not joined it, or even made token moves toward addressing the significant concerns raised by members of their own party. Instead, the silence from the congressional G.O.P., awaiting its fate in the November midterm elections and still wary of crossing a President who remains popular with the Republican base, has been deafening.

Meanwhile, the anonymous article and the nameless accounts in Woodward’s book have already infuriated those who have publicly struggled against Trump. For instance, The Atlantic’s David Frum, a former Bush Administration speechwriter, called the Op-Ed “a cowardly coup from within the administration” and said it “threatens to inflame the president’s paranoia and further endanger American security.” Such objections to the nascent Republican resistance are understandable; this is not the principled public combat of a democratic system as envisaged by the Founders. It is secretive and uncomfortable, the stuff of office backstabbing and private betrayals. It is not enough for those who have staked out public opposition to Trump, and it likely never will be. It is about ego and vanity as much as patriotism and principle. Most of all, it is a reminder of the terrible dilemma that Trump has posed for Republicans since the moment he announced his campaign for President, and will pose as long as he is in office.

 

A few minutes after the Post published the first story about the Woodward book, I met with a foreign-policy expert who is well connected in the Trump national-security world. The news seemed very much consistent with what both of us had been hearing since the beginning of the Administration. His friends who have been on the inside, he told me, have invariably told him stories like those now turning up in the news: “As bad as you think it is, it’s actually worse.” Senator Ben Sasse, Republican of Nebraska, who is one of Trump’s most open critics on Capitol Hill, made a similar point to an interviewer on Thursday. “It’s just so similar to what so many of us hear from senior people around the White House, you know, three times a week,” Sasse said. “So it’s really troubling, and yet, in a way, not surprising.”

As Thomas Wright, a Brookings scholar who has emerged as one of the most insightful analysts of Trump’s foreign policy, told me, “It’s the first time, maybe in history, key advisers have gone into the Administration to stop the President, not to enable him”—and that was back in January. The call has always been coming from inside the building.

By Thursday morning, Chris Cillizza had posted on CNN’s Web site a list of thirteen Trump insiders who could have written the article: Don McGahn, the departing White House counsel (Trump kicked him out, via Twitter, just last week); Dan Coats, the director of national intelligence, who was publicly humiliated by Trump in the aftermath of Trump’s summit with Vladimir Putin; Kellyanne Conway, the White House counsellor whose own husband has emerged as a Trump-bashing tweeter; John Kelly, the White House chief of staff, reported by Woodward and others to have called Trump an “idiot”; Kirstjen Nielsen, the Kelly ally who heads the Department of Homeland Security and has been repeatedly dressed down by Trump in front of her colleagues; Attorney General Jeff Sessions, who has come under withering, near-daily attack by Trump; Mattis, the Secretary of Defense and the most respected member of the President’s Cabinet; Fiona Hill, the top Russia expert on the National Security Council; Vice-President Mike Pence; Nikki Haley, the Ambassador to the United Nations, who has publicly expressed differences with Trump on Russia; and even “Javanka”—Trump’s daughter Ivanka and son-in-law, Jared Kushner—and Trump’s own wife, the long-suffering Melania Trump.

Others I spoke with suggested less well-known names in the senior ranks of various foreign-policy and national-security jobs, or in top economic posts. Could it have been Treasury Secretary Steve Mnuchin, a former Democrat who has been humiliated by Trump from the start, according to the Woodward book? Or Larry Kudlow, the free-trader heading his National Economic Council? One prominent Washingtonian sent me a long e-mail laying out the case for John Bolton, the national-security adviser whose hawkish views on such subjects as North Korea and Russia have been repeatedly undercut by Trump. National Review published a case for Jon Huntsman, the 2012 Republican Presidential candidate now serving as Trump’s Ambassador to Moscow.

Just as quickly as the speculation about who did it, the denials started rolling in. The Vice-President, the director of National Intelligence, the Secretary of State, the Attorney General, the Defense Secretary, the Treasury Secretary, the Secretary of Homeland Security, the Ambassador to the United Nations, the director of the Office of Management and Budget, and several others I’m sure I’m missing felt compelled to put out statements on Thursday morning denying that they wrote the Op-Ed calling Trump “impetuous, adversarial, petty, and ineffective.” Even Ben Carson, the wacky former surgeon and 2016 Presidential candidate now serving as Trump’s Secretary of Housing and Urban Development, put out a statement saying he wasn’t the Op-Ed author, though no one, at least that I am aware of, had speculated that he was. And that was a day after the Woodward denials: the White House chief of staff denied calling Trump an “idiot”; the Secretary of Defense denied calling him a middle-schooler; and his former lawyer denied calling him a “fucking liar.”

The denials seemed like some of those pointless, if required, Washington rituals. After all, Mark Felt, the deputy director of the F.B.I. during the Nixon Administration, who had been Woodward’s original secret source about the Watergate scandal, denied publicly for years that he was “Deep Throat,” a fact pointed out on Twitter on Wednesday, as journalists circulated a copy of an old Wall Street Journal story with Felt’s denial as the lede. Felt revealed himself as Woodward’s source before he died, and Woodward later published a book all about their dealings, “The Secret Man,” another book that Sarah Huckabee Sanders presumably did not read but should have: at the center of the tale is the story of how the F.B.I., outraged by the flagrant lawlessness of the President, decided to use its powers to take Nixon on.

At 6:58 A.M. on Thursday, as Washington obsessed over the identity of this latest Anonymous and I read the last few pages of Woodward’s damaging account, Trump tweeted about one person he had worked closely with who had “unwavering faith” in him: the dictator of North Korea. “Thank you Chairman Kim,” he enthused. “We will get it done together!” It was plaintive and pathetic and, yes, more than a little bit crazy.

It’s mornings like these that Trump’s beleaguered staff must dread more than anything. Reince Priebus, Trump’s fired first chief of staff, called the White House bedroom where Trump did most of his tweeting “the devil’s workshop,” Woodward reported. But Priebus’s successor, John Kelly, came up with the phrase that probably best sums up the situation. At one point in “Fear,” Woodward quotes Kelly calling his post as Trump’s chief of staff “the worst job I’ve ever had,” and describing the Trump White House as “crazytown.” It would have been the perfect title for the book, and I suspect that Kelly’s memorable description of the Trump Presidency may well outlive Kelly’s accomplishments in the job.

For twenty months, Washington has been asking, Is this the crisis? Is this finally the constitutional confrontation we have been waiting for? The Trump Presidency, to those closely watching it, and to many of those participating in it, has always seemed unsustainable. And yet it has gone on, and will keep going on, until and unless something seismic happens in our politics—and our Congress—to change it. We don’t need to wonder when the crisis will hit; it already has. Every day since January 20, 2017, has been the crisis.

 

  • Susan B. Glasser is a staff writer at The New Yorker, where she writes a weekly column on life in Trump’s Washington.


Foreign Policy: Francis Fukuyama Postpones the End of History


September 3, 2018

Foreign Policy: Francis Fukuyama Postpones the End of History


The political scientist argues that the desire of identity groups for recognition is a key threat to liberalism.

In February, 1989, Francis Fukuyama gave a talk on international relations at the University of Chicago. Fukuyama was thirty-six years old, and on his way from a job at the RAND Corporation, in Santa Monica, where he had worked as an expert on Soviet foreign policy, to a post as the deputy director of policy planning at the State Department, in Washington.

It was a good moment for talking about international relations, and a good moment for Soviet experts especially, because, two months earlier, on December 7, 1988, Mikhail Gorbachev had announced, in a speech at the United Nations, that the Soviet Union would no longer intervene in the affairs of its Eastern European satellite states. Those nations could now become democratic. It was the beginning of the end of the Cold War.

At RAND, Fukuyama had produced focussed analyses of Soviet policy. In Chicago, he permitted himself to think big. His talk came to the attention of Owen Harries, an editor at a Washington journal called The National Interest, and Harries offered to publish it. The article was titled “The End of History?” It came out in the summer of 1989, and it turned the foreign-policy world on its ear.

Fukuyama’s argument was that, with the imminent collapse of the Soviet Union, the last ideological alternative to liberalism had been eliminated. Fascism had been killed off in the Second World War, and now Communism was imploding. In states, like China, that called themselves Communist, political and economic reforms were heading in the direction of a liberal order.

So, if you imagined history as the process by which liberal institutions—representative government, free markets, and consumerist culture—become universal, it might be possible to say that history had reached its goal. Stuff would still happen, obviously, and smaller states could be expected to experience ethnic and religious tensions and become home to illiberal ideas. But “it matters very little what strange thoughts occur to people in Albania or Burkina Faso,” Fukuyama explained, “for we are interested in what one could in some sense call the common ideological heritage of mankind.”

Hegel, Fukuyama said, had written of a moment when a perfectly rational form of society and the state would become victorious. Now, with Communism vanquished and the major powers converging on a single political and economic model, Hegel’s prediction had finally been fulfilled. There would be a “Common Marketization” of international relations and the world would achieve homeostasis.

Even among little magazines, The National Interest was little. Launched in 1985 by Irving Kristol, the leading figure in neoconservatism, it had by 1989 a circulation of six thousand. Fukuyama himself was virtually unknown outside the world of professional Sovietologists, people not given to eschatological reflection. But the “end of history” claim was picked up in the mainstream press, Fukuyama was profiled by James Atlas in the New York Times Magazine, and his article was debated in Britain and in France and translated into many languages, from Japanese to Icelandic. Some of the responses to “The End of History?” were dismissive; almost all of them were skeptical. But somehow the phrase found its way into post-Cold War thought, and it stuck.

One of the reasons for the stickiness was that Fukuyama was lucky. He got out about six months ahead of the curve—his article appearing before the Velvet Revolution, in Czechoslovakia, and before the dismantling of the Berlin Wall, in November, 1989. Fukuyama was betting on present trends continuing, always a high-risk gamble in the international-relations business.

Any number of things might have happened for Gorbachev’s promise not to cash out: political resistance within the Soviet Union, the refusal of the Eastern European puppet regimes to cede power, the United States misplaying its hand. But events in Europe unfolded more or less according to Fukuyama’s prediction, and, on December 26, 1991, the Soviet Union voted itself out of existence. The Cold War really was over.

Events in Asia were not so obliging. Fukuyama missed completely the suppression of the pro-democracy movement in China. There is no mention of the massacre in Tiananmen Square in “The End of History?,” presumably because the piece was in production when it happened, in June, 1989. This does not seem to have made a difference to the article’s reception, however. Almost none of the initial responses to the piece mentioned Tiananmen, either—even though many people already believed that China, not Russia, was the power that liberal democracies would have to reckon with in the future. “The End of History?” was a little Eurocentric.

There was also a seductive twist to Fukuyama’s argument. At the end of the article, he suggested that life after history might be sad. When all political efforts were committed to “the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands” (sounds good to me), we might feel nostalgia for the “courage, imagination, and idealism” that animated the old struggles for liberalism and democracy. This speculative flourish recalled the famous question that John Stuart Mill said he asked himself as a young man: If all the political and social reforms you believe in came to pass, would it make you a happier human being? That is always an interesting question.

Another reason that Fukuyama’s article got noticed may have had to do with his new job title. The office of policy planning at State had been created in 1947 by George Kennan, who was its first chief. In July of that year, Kennan published the so-called X article, “The Sources of Soviet Conduct,” in Foreign Affairs. It appeared anonymously—signed with an “X”—but once the press learned his identity the article was received as an official statement of American Cold War policy.

“The Sources of Soviet Conduct” defined the containment doctrine, according to which the aim of American policy was to keep the Soviet Union inside its box. The United States did not need to intervene in Soviet affairs, Kennan believed, because Communism was bound to collapse from its own inefficiency. Four decades later, when “The End of History?” appeared, that is exactly what seemed to be happening. That April, Kennan, then eighty-five, appeared before the Senate Foreign Relations Committee to declare that the Cold War was over. He received a standing ovation. Fukuyama’s article could thus be seen as a bookend to Kennan’s.

It was not the bookend Kennan would have written. Containment is a realist doctrine. Realists think that a nation’s foreign policy should be guided by dispassionate consideration of its own interests, not by moral principles, or by a belief that nations share a “harmony of interests.” To Kennan, it was of no concern to the United States what the Soviets did inside their own box. The only thing that mattered was that Communism not be allowed to expand.

The National Interest, as the name proclaims, is a realist foreign-policy journal. But Fukuyama’s premise was that nations do share a harmony of interests, and that their convergence on liberal political and economic models was mutually beneficial. Realism imagines nations to be in perpetual competition with one another; Fukuyama was saying that this was no longer going to be the case. He offered Cold War realists a kind of valediction: their mission, though philosophically misconceived, had been accomplished. Now they were out of a job. “Frank thought that what was happening spelled the end of the Realpolitik world,” Harries later said. It must have tickled him to have published Fukuyama’s article.

Twenty-nine years later, it seems that the realists haven’t gone anywhere, and that history has a few more tricks up its sleeve. It turns out that liberal democracy and free trade may actually be rather fragile achievements. (Consumerism appears safe for now.) There is something out there that doesn’t like liberalism, and is making trouble for the survival of its institutions.

 

Fukuyama thinks he knows what that something is, and his answer is summed up in the title of his new book, “Identity: The Demand for Dignity and the Politics of Resentment” (Farrar, Straus & Giroux). The demand for recognition, Fukuyama says, is the “master concept” that explains all the contemporary dissatisfactions with the global liberal order: Vladimir Putin, Osama bin Laden, Xi Jinping, Black Lives Matter, #MeToo, gay marriage, ISIS, Brexit, resurgent European nationalisms, anti-immigration political movements, campus identity politics, and the election of Donald Trump. It also explains the Protestant Reformation, the French Revolution, the Russian Revolution, Chinese Communism, the civil-rights movement, the women’s movement, multiculturalism, and the thought of Luther, Rousseau, Kant, Nietzsche, Freud, and Simone de Beauvoir. Oh, and the whole business begins with Plato’s Republic. Fukuyama covers all of this in less than two hundred pages. How does he do it?

Not well. Some of the problem comes from misunderstanding figures like Beauvoir and Freud; some comes from reducing the work of complex writers like Rousseau and Nietzsche to a single philosophical bullet point. A lot comes from the astonishingly blasé assumption—which was also the astonishingly blasé assumption of “The End of History?”—that Western thought is universal thought. But the whole project, trying to fit Vladimir Putin into the same analytic paradigm as Black Lives Matter and tracing them both back to Martin Luther, is far-fetched. It’s a case of Great Booksism: history as a chain of paper dolls cut out of books that only a tiny fraction of human beings have even heard of. Fukuyama is a smart man, but no one could have made this argument work.

Why is the desire for recognition—or identity politics, as Fukuyama also calls it—a threat to liberalism? Because it cannot be satisfied by economic or procedural reforms. Having the same amount of wealth as everyone else or the same opportunity to acquire it is not a substitute for respect. Fukuyama thinks that political movements that appear to be about legal and economic equality—gay marriage, for example, or #MeToo—are really about recognition and respect. Women who are sexually harassed in the workplace feel that their dignity has been violated, that they are being treated as less than fully human.

Fukuyama gives this desire for recognition a Greek name, taken from Plato’s Republic: thymos. He says that thymos is “a universal aspect of human nature that has always existed.” In the Republic, thymos is distinct from the two other parts of the soul that Socrates names: reason and appetite. Appetites we share with animals; reason is what makes us human. Thymos is in between.

The term has been defined in various ways. “Passion” is one translation; “spirit,” as in “spiritedness,” is another. Fukuyama defines thymos as “the seat of judgments of worth.” This seems a semantic overreach. In the Republic, Socrates associates thymos with children and dogs, beings whose reactions need to be controlled by reason. The term is generally taken to refer to our instinctive response when we feel we’re being disrespected. We bristle. We swell with amour propre. We honk the horn. We overreact.

Plato had Socrates divide the psyche into three parts in order to assign roles to the citizens of his imaginary republic. Appetite is the principal attribute of the plebes, passion of the warriors, and reason of the philosopher kings. The Republic is philosophy; it is not cognitive science. Yet Fukuyama adopts Plato’s heuristic and biologizes it. “Today we know that feelings of pride and self-esteem are related to levels of the neurotransmitter serotonin in the brain,” he says, and points to studies done with chimps (which Socrates would have counted as animals, but never mind).

But so what? Lots of feelings are related to changes in serotonin levels. In fact, every feeling we experience—lust, anger, depression, exasperation—has a correlate in brain chemistry. That’s how consciousness works. To say, as Fukuyama does, that “the desire for status—megalothymia—is rooted in human biology” is the academic equivalent of palmistry. You’re just making it up.

Fukuyama resorts to this tactic because he wants to do with the desire for recognition what he did with liberalism in “The End of History?” He wants to universalize it. This allows him to argue, for example, that the feelings that led to the rise of Vladimir Putin are exactly the same (albeit “on a larger scale”) as the feelings of a woman who complains that her potential is limited by gender discrimination. The woman can’t help it. She needs the serotonin, just like the Russians.

Hegel thought that the end of history would arrive when humans achieved perfect self-knowledge and self-mastery, when life was rational and transparent. Rationality and transparency are the values of classical liberalism. Rationality and transparency are supposed to be what make free markets and democratic elections work. People understand how the system functions, and that allows them to make rational choices.

The trouble with thymos is that it is not rational. People not only sacrifice worldly goods for recognition; they die for recognition. The choice to die is not rational. “Human psychology is much more complex than the rather simpleminded economic model suggests,” Fukuyama concludes.


But how was that model of the rational economic actor ever plausible? It’s not just that human beings are neurotic; it’s that, on the list of things human beings are neurotic about, money is close to the top. People hoard money; they squander it; they marry for it; they kill for it. Don’t economists ever read novels? Practically every realist novel, from Austen and Balzac to James and Wharton, is about people behaving badly around money. Free markets didn’t change that. They arguably made people even crazier.

And as with money so with most of life. The notion that we have some mental faculty called “reason” that functions independently of our needs, desires, anxieties, and superstitions is, well, Platonic. Right now, you are trying to decide whether to finish this piece or turn to the cartoon-caption contest. Which mental faculty are you using to make this decision? Which is responsible for your opinion of Donald Trump? How can you tell?

“Identity” can be read as a corrective to the position that Fukuyama staked out in “The End of History?” Universal liberalism isn’t impeded by ideology, like fascism or communism, but by passion. Liberalism remains the ideal political and economic system, but it needs to find ways to accommodate and neutralize this pesky desire for recognition. What is odd about Fukuyama’s dilemma is that, in the philosophical source for his original theory about the end of history, recognition was not a problem. Recognition was, in fact, the means to get there.

That source was not Hegel. As Fukuyama stated explicitly in “The End of History?,” he was adopting an interpretation of Hegel made in the nineteen-thirties by a semi-obscure intellectual adventurer named Alexandre Kojève. How, fifty years later, Kojève’s ideas got into the pages of a Washington policy journal is an unusual story of intellectual musical chairs.

Kojève was born in 1902 into a well-off Moscow family, and he was raised in a cultivated atmosphere. The painter Wassily Kandinsky was an uncle. Kojève was a prodigious intellect; by the time he was eighteen, he was fluent in Russian, German, French, and English, and read Latin. Later, he learned Sanskrit, Chinese, and Tibetan in order to study Buddhism. In 1918, he went to prison for some sort of black-market transaction. After he got out, he and a friend managed to cross the closed Soviet border into Poland, where they were briefly jailed on suspicion of espionage. With the pointed encouragement of Polish authorities, Kojève left for Germany. He studied philosophy with Karl Jaspers at Heidelberg and lived as a bon vivant in Weimar Berlin. In 1926, he moved to Paris, where he continued to live the high life while writing a dissertation that dealt with quantum physics.

Kojève had invested his inheritance in the French company that made La Vache Qui Rit cheese, but he lost everything in the stock-market crash. In 1933, in need of income, he accepted a friend’s offer to take over a seminar on Hegel at the École Pratique des Hautes Études. He ended up running the course for six years.

People who were around Kojève seem to have regarded him as a kind of magician. In the Hegel seminar, he taught just one text, “The Phenomenology of Spirit,” first published in 1807. He would read a passage aloud in German (the book had not been translated into French) and then, extemporaneously and in perfect French (with an enchanting Slavic accent), provide his own commentary. People found him eloquent, brilliant, mesmerizing. Enrollment was small, around twenty, but a number of future intellectual luminaries, like Hannah Arendt and Jacques Lacan, either took the class or sat in on it.

For Kojève, the key concept in Hegel’s “Phenomenology” was recognition. Human beings want the recognition of other human beings in order to become self-conscious—to know themselves as autonomous individuals. As Kojève put it, humans desire, and what they desire is either something that other humans desire or the desire of other humans. “Human history,” he said, “is the history of desired desires.” What makes this complicated is that in the struggle for recognition there are winners and losers. The terms Hegel used for these can be translated as lords and servants, but also as masters and slaves, which are the terms Kojève used. The master wins the recognition of the slave, but his satisfaction is empty, since he does not recognize the slave as human in turn. The slave, lacking recognition from the master, must seek it in some other way.

Kojève thought that the other way was through labor. The slave achieves his sense of self by work that transforms the natural world into a human world. But the slave is driven to labor in the first place because of the master’s refusal to recognize him. This “master-slave dialectic” is the motor of human history, and human history comes to an end when there are no more masters or slaves, and all are recognized equally.

This is the idea that Marx had adopted to describe history as the history of class struggle. That struggle also has winners and losers, and its penultimate phase was the struggle between property owners (the bourgeoisie) and workers (the proletariat). The struggle would come to an end with the overthrow of capitalism and the arrival of a classless society—communism. Kojève called himself, mischievously or not, a Communist, and people listening to him in the nineteen-thirties would have understood this to be the subtext of his commentary. Equality of recognition was history’s goal, whether that meant Communist equality or liberal equality. People would stop killing one another in the name of dignity and self-respect, and life would probably be boring.

After the war, Kojève’s lectures were published as “Introduction to the Reading of Hegel,” a book that went through many printings in France. By then, he had stopped teaching and had become an official in the French Ministry of Economic Affairs, where he played an influential behind-the-scenes role in establishing the General Agreement on Tariffs and Trade (GATT) and the European Economic Community, the forerunner of the European Union—in other words, Common Marketization. He liked to say that he was presiding over the end of history.

In 1953, Allan Bloom, then a graduate student at the University of Chicago, met Kojève in Paris, at his office in the ministry. (The connection was presumably made through the émigré political theorist Leo Strauss, who was teaching at Chicago and who carried on a long correspondence with Kojève.) “I was seduced,” Bloom later said. He began studying with Kojève, and their meetings continued until Kojève’s death, in 1968. In 1969, Bloom arranged for the publication of the first English translation of the Hegel lectures and contributed an introduction. He was then a professor at Cornell.

Fukuyama entered Cornell as a freshman in 1970. He lived in Telluride House, a selective academic society for students and faculty, where Bloom was a resident. Fukuyama enrolled in Bloom’s freshman course on Greek philosophy, and, according to Atlas, he and Bloom “shared meals and talked philosophy until all hours.”

As it happened, that was Bloom’s last year at Cornell. He resigned in disgust at the way the administration had handled the occupation of a university building by armed students from the Afro-American Society. Fukuyama graduated in 1974 with a degree in classics. Following an excursus into the world of poststructuralist theory at Yale and in Paris, he switched his field to political science and entered Harvard’s government department, where he received his Ph.D. in 1979. Then he went to RAND.

By then, Bloom was back at the University of Chicago, as a professor in the Committee on Social Thought. In 1982, he published an article on the condition of higher education in William F. Buckley’s National Review. He did not think the condition was good. Encouraged by his friend Saul Bellow, he decided to turn the article into a book. “The Closing of the American Mind,” which Simon & Schuster brought out in February, 1987, launched a campaign of criticism of American higher education that has taken little time off since.

“The Closing of the American Mind” is a Great Booksist attempt to account for the rise of cultural relativism, which Bloom thought was the bane of American higher education. Almost no one at Simon & Schuster had great hopes for sales. There is a story, possibly apocryphal, that when the editor who signed the book, Erwin Glikes, left the firm to run the Free Press he was invited to take Bloom’s book, not yet published, with him, and he declined.

If so, he missed out on one of the publishing phenomena of the decade. After a slow start, “The Closing of the American Mind” went to No. 1 on the Times best-seller list and stayed there for two and a half months. By March, 1988, it had sold a million hardcover copies in the United States alone. It made Bloom a rich man.

It was Bloom, along with another professor at Chicago, Nathan Tarcov, who invited Fukuyama to give his February, 1989, talk on international relations. If Fukuyama had not already been thinking about it, it is easy to imagine him deciding that, under the circumstances, it might be interesting to say something Kojèvean.

When “The End of History?” ran in The National Interest that summer, Bloom had become a star in the neoconservative firmament, and his was the first of six responses that the magazine printed to accompany the article. Bloom called it “bold and brilliant.” Possibly seeing the way the wind was blowing, Glikes offered Fukuyama six hundred thousand dollars to turn his article into a book. “The End of History and the Last Man” was published by the Free Press in 1992.

The book was a best-seller, but not a huge one, maybe because the excitement about the end of the Cold War had cooled. Fukuyama had taken his time writing it. “The End of History and the Last Man” is not a journal article on steroids. It is a thoughtful examination of the questions raised by the piece in The National Interest, and one of those questions is the problem of thymos, which occupies much of the book. A lot of “Identity” is a recap of what Fukuyama had already said there.

The importance of recognition has been emphasized by writers other than Kojève. The Canadian philosopher Charles Taylor, for example, in “Sources of the Self,” published in 1989, the same year as “The End of History?,” argued that the modern idea of the self involved a cultural shift from the concept of honor, which is something for the few, to dignity, which is aspired to by all. In 1992, in the essay “The Politics of Recognition,” Taylor analyzed the advent of multiculturalism in terms similar to the ones Fukuyama uses in “Identity.” (Taylor, too, is a Hegel expert.)

Fukuyama acknowledges that identity politics has done some good, and he says that people on the right exaggerate the prevalence of political correctness and the effects of affirmative action. He also thinks that people on the left have become obsessed with cultural and identitarian politics, and have abandoned social policy. But he has surprisingly few policy suggestions himself.

He has no interest in the solution that liberals typically adopt to accommodate diversity: pluralism and multiculturalism. Taylor, for example, has championed the right of the Québécois to pass laws preserving a French-language culture in their province. Fukuyama concedes that people need a sense of national identity, whether ethnic or creedal, but otherwise he remains an assimilationist and a universalist. He wants to iron out differences, not protect them. He suggests measures like a mandatory national-service requirement and a more meaningful path to citizenship for immigrants.

It’s unfortunate that Fukuyama has hung his authorial hat on meta-historical claims. In other books—notably “The Great Disruption” (1999) and a two-volume world history, “The Origins of Political Order” (2011) and “Political Order and Political Decay” (2014)—he attends to differences among civilizations and uses empirical data to explain social trends. But thymos is too clumsy an instrument to be much help in understanding contemporary politics.

Wouldn’t it be important to distinguish people who ultimately don’t want differences to matter, like the people involved in #MeToo and Black Lives Matter, from people who ultimately do want them to matter, like ISIS militants, Brexit voters, or separatist nationalists? And what about people who are neither Mexican nor immigrants and who feel indignation at the treatment of Mexican immigrants? Black Americans risked their lives for civil rights, but so did white Americans. How would Socrates classify that behavior? Borrowed thymos?

It might also be good to replace the linear “if present trends continue” conception of history as a steady progression toward some stable state with the dialectical conception of history that Hegel and Kojève in fact used. Present trends don’t continue. They produce backlashes and reshufflings of the social deck. The identities that people embrace today are the identities their children will want to escape from tomorrow. History is somersaults all the way to the end. That’s why it’s so hard to write, and so hard to predict. Unless you’re lucky. ♦

 

This article appears in the print edition of the September 3, 2018, issue, with the headline “What Identity Demands.”

  • Louis Menand, a staff writer since 2001, was awarded the National Humanities Medal in 2016.