The relationship between George W. Bush and his father, George H. W. Bush, might just be the most dissected filial relationship in modern history — compared, variously, to Shakespearean history, Greek tragedy and opéra bouffe. In his new book, the 43rd president draws an affectionate portrait of the 41st president that’s short on factual revelations and long on emotion.
In “41,” Mr. Bush sheds little new light on his fateful decision to invade Iraq in 2003 or on other pivotal moments of his presidency, nor does he tell us much about his father’s tenure in the White House that we didn’t already know. Instead, he’s written what he calls a “love story” about his dad. At its best, the book has the qualities of the younger Mr. Bush’s recent and much-talked-about paintings: It’s folksy, sharply observed and surprisingly affecting, especially for someone not exactly known for introspection. At its worst, the book reads like a banquet-dinner-type testimonial about his father, with transparent efforts to spin or sidestep important questions about his own time in office.
Since George W. Bush stepped onto the national stage, journalists, other politicians and even family members have been comparing and contrasting father and son. Whereas Bush senior was famous for his self-effacing New England manners and quiet diplomacy, Bush junior became known as a proud, outspoken gut player, with Texas swagger. Whereas Bush senior’s policies were grounded in foreign policy realism and old-school Republican moderation, Bush junior’s tilted toward neoconservatism and a drive to export democracy and remake the world. Bush senior was not crazy about “the vision thing,” whereas Bush junior was big on big ideas.
“On everything from taxes to Iraq,” the New York Times columnist Maureen Dowd wrote in 2002, “the son has tried to use his father’s failures in the eyes of conservatives as a reverse playbook.” When Bush 41 went to war against Saddam Hussein in 1991 (after Iraq’s invasion of Kuwait), he made a decision not to go on to Baghdad and topple Iraq’s dictator, later explaining that if we had gone in and created “more instability in Iraq, I think it would have been very bad for the neighborhood.”
The younger Mr. Bush writes, somewhat defensively here, that in ordering the invasion of Iraq in 2003, he “was not trying ‘to finish what my father had begun,’ as some have suggested. My motivation was to protect the United States of America, as I had sworn an oath to do.”
He also elaborates on a surprising statement he once made to Bob Woodward — that he couldn’t remember consulting his father about his decision to go to war. In “41,” he says: “I never asked Dad what I should do. We both knew that this was a decision that only the president can make. We did talk about the issue, however. Over Christmas 2002, at Camp David, I gave Dad an update on our strategy.”
His father, he says, replied: “You know how tough war is, son, and you’ve got to try everything you can to avoid war. But if the man won’t comply, you don’t have any other choice.”
The oddly dysfunctional inability of father and son to discuss policy and politics — out of fear, it seems, of meddling or stepping on each other’s toes — is a recurrent theme in this book. The younger Mr. Bush says his father did not directly caution him against running for Congress in the late ’70s, but instead sent him to talk with a friend who told him he couldn’t win. (He didn’t.)
To many concerned about the war drums beating within the younger Bush’s White House in 2002, a similar indirection seemed to be at work when the elder Bush’s former national security adviser and close friend, Brent Scowcroft, wrote an op-ed piece in The Wall Street Journal warning that an attack on Saddam Hussein could “seriously jeopardize, if not destroy, the global counterterrorist campaign we have undertaken.”
George W. Bush also writes that his father had little to say, in 1993, about his decision to run for governor of Texas, and that he didn’t ask his father whether he should run for president in the 2000 election, adding, “I knew he would support whatever choice I made.”
Biographers and journalists have often observed that the young George W. Bush (whose hard-drinking, irresponsible youth had made him a black sheep in the family next to Jeb, the golden boy) frequently felt overshadowed by his father. And they have speculated that, as president, he was driven to outdo his dad by taking Saddam Hussein down for good, and by winning a second term — arguments the Bush family has dismissed as psychobabble.
In “41,” the younger Mr. Bush talks at length about his dad’s early success. (“Few could claim the trifecta of war hero, Phi Beta Kappa and captain of the baseball team” at Yale, he writes.) And there is certainly fodder for readers searching for clues to an Oedipal rivalry. Mr. Bush says that his father’s college dreams of a baseball career were foiled because “he didn’t have a big enough bat to make the major leagues,” and also frets about his well-mannered father looking “weak” in a debate against Ronald Reagan, recalling a press account that said he showed “the backbone of a jellyfish.”
He writes, however, that his dad gave him “unconditional love,” and that he and his siblings felt “there was no point in competing with our father — no point in rebelling against him — because he would love us no matter what.” He celebrates his father’s well-known generosity, his talent for friendship and his willingness to take risks (from enlisting at the age of 18, not long after Pearl Harbor, to moving to Texas after college, to diving into politics after a stint in the oil business).
Like many, 43 hails 41 for his diplomatic handling of the end of the Cold War, reaching out to the Soviet leader Mikhail S. Gorbachev and wisely refusing to gloat over the fall of the Berlin Wall. In some ways, the younger Mr. Bush says, his father was “like Winston Churchill, who had been tossed out of office in 1945 just months after prevailing in World War II.”
The most persuasive sections of this book deal not with the political, but with the personal. Mr. Bush’s writing doesn’t have the earnest charm of his father’s letters (“All the Best, George Bush”) or the literary gifts displayed by his wife, Laura, in her memoir, “Spoken From the Heart.” But unlike his earlier books (his perfunctory 1999 campaign memoir, “A Charge to Keep,” and his dogged 2010 autobiography, “Decision Points”), this volume comes close to capturing Mr. Bush’s distinctive voice — by turns jokey and sentimental, irreverent and sincere.
There is very little here about his other siblings (his brother Jeb, the potential presidential candidate, is mentioned only in brief asides), but the passages devoted to his younger sister Robin’s death from leukemia in 1953 are heartfelt and moving.
“In one of her final moments with my father,” Mr. Bush writes, “Robin looked up at him with her beautiful blue eyes and said, ‘I love you more than tongue can tell.’ Dad would repeat those words for the rest of his life.”
As for Mr. Bush’s descriptions of the West Texas world that greeted him and his parents in the 1950s, they are evocative in a way that attests to his painterly eye. “We lived briefly at a hotel and then moved into a new 847-square-foot house on the outskirts of town,” he recalls. “The neighborhood was called Easter Egg Row, because the developers had chosen vibrant paint colors to help residents tell the houses apart. Our Easter egg at 405 East Maple was bright blue.”
Although George senior’s failure to win a second term in the White House led to a sense of despondency, his son writes, he would find “something positive about his defeat in 1992 — it had given rise to the political careers of two people” (that is, the author and Jeb) “whom he had raised and loves.” Had his dad been re-elected that year, the younger Mr. Bush says, “I would not have run for governor” of Texas in 1994 — nor, presumably, run for president and ascended to the White House in the too-close-to-call election of 2000 that went to the Supreme Court. History works in strange ways.
The popular belief that religion is the cause of the world’s bloodiest conflicts is central to our modern conviction that faith and politics should never mix. But the messy history of their separation suggests it was never so simple.
As we watch the fighters of the Islamic State (Isis) rampaging through the Middle East, tearing apart the modern nation-states of Syria and Iraq created by departing European colonialists, it may be difficult to believe we are living in the 21st century.
The sight of throngs of terrified refugees and the savage and indiscriminate violence is all too reminiscent of barbarian tribes sweeping away the Roman empire, or the Mongol hordes of Genghis Khan cutting a swath through China, Anatolia, Russia and eastern Europe, devastating entire cities and massacring their inhabitants.
Only the wearily familiar pictures of bombs falling yet again on Middle Eastern cities and towns – this time dropped by the United States and a few Arab allies – and the gloomy predictions that this may become another Vietnam, remind us that this is indeed a very modern war.
The ferocious cruelty of these jihadist fighters, quoting the Qur’an as they behead their hapless victims, raises another distinctly modern concern: the connection between religion and violence. The atrocities of Isis would seem to prove that Sam Harris, one of the loudest voices of the “New Atheism”, was right to claim that “most Muslims are utterly deranged by their religious faith”, and to conclude that “religion itself produces a perverse solidarity that we must find some way to undercut”.
Many will agree with Richard Dawkins, who wrote in The God Delusion that “only religious faith is a strong enough force to motivate such utter madness in otherwise sane and decent people”. Even those who find these statements too extreme may still believe, instinctively, that there is a violent essence inherent in religion, which inevitably radicalises any conflict – because once combatants are convinced that God is on their side, compromise becomes impossible and cruelty knows no bounds.
Despite the valiant attempts by Barack Obama and David Cameron to insist that the lawless violence of Isis has nothing to do with Islam, many will disagree. They may also feel exasperated. In the west, we learned from bitter experience that the fanatical bigotry which religion seems always to unleash can only be contained by the creation of a liberal state that separates politics and religion.
Never again, we believed, would these intolerant passions be allowed to intrude on political life. But why, oh why, have Muslims found it impossible to arrive at this logical solution to their current problems? Why do they cling with perverse obstinacy to the obviously bad idea of theocracy? Why, in short, have they been unable to enter the modern world? The answer must surely lie in their primitive and atavistic religion. But perhaps we should ask, instead, how it came about that we in the west developed our view of religion as a purely private pursuit, essentially separate from all other human activities, and especially distinct from politics.
After all, warfare and violence have always been a feature of political life, and yet we alone drew the conclusion that separating the church from the state was a prerequisite for peace. Secularism has become so natural to us that we assume it emerged organically, as a necessary condition of any society’s progress into modernity. Yet it was in fact a distinct creation, which arose as a result of a peculiar concatenation of historical circumstances; we may be mistaken to assume that it would evolve in the same fashion in every culture in every part of the world.
We now take the secular state so much for granted that it is hard for us to appreciate its novelty, since before the modern period, there were no “secular” institutions and no “secular” states in our sense of the word. Their creation required the development of an entirely different understanding of religion, one that was unique to the modern west. No other culture has had anything remotely like it, and before the 18th century, it would have been incomprehensible even to European Catholics. The words in other languages that we translate as “religion” invariably refer to something vaguer, larger and more inclusive.
The Arabic word din signifies an entire way of life, and the Sanskrit dharma covers law, politics, and social institutions as well as piety. The Hebrew Bible has no abstract concept of “religion”; and the Talmudic rabbis would have found it impossible to define faith in a single word or formula, because the Talmud was expressly designed to bring the whole of human life into the ambit of the sacred. The Oxford Classical Dictionary firmly states: “No word in either Greek or Latin corresponds to the English ‘religion’ or ‘religious’.” In fact, the only tradition that satisfies the modern western criterion of religion as a purely private pursuit is Protestant Christianity, which, like our western view of “religion”, was also a creation of the early modern period.
Traditional spirituality did not urge people to retreat from political activity. The prophets of Israel had harsh words for those who assiduously observed the temple rituals but neglected the plight of the poor and oppressed. Jesus’s famous maxim to “Render unto Caesar the things that are Caesar’s” was not a plea for the separation of religion and politics. Nearly all the uprisings against Rome in first-century Palestine were inspired by the conviction that the Land of Israel and its produce belonged to God, so that there was, therefore, precious little to “give back” to Caesar.
When Jesus overturned the money-changers’ tables in the temple, he was not demanding a more spiritualised religion. For 500 years, the temple had been an instrument of imperial control and the tribute for Rome was stored there. Hence for Jesus it was a “den of thieves”. The bedrock message of the Qur’an is that it is wrong to build a private fortune but good to share your wealth in order to create a just, egalitarian and decent society. Gandhi would have agreed that these were matters of sacred import: “Those who say that religion has nothing to do with politics do not know what religion means.”
The myth of religious violence
Before the modern period, religion was not a separate activity, hermetically sealed off from all others; rather, it permeated all human undertakings, including economics, state-building, politics and warfare. Before 1700, it would have been impossible for people to say where, for example, “politics” ended and “religion” began. The Crusades were certainly inspired by religious passion but they were also deeply political: Pope Urban II let the knights of Christendom loose on the Muslim world to extend the power of the church eastwards and create a papal monarchy that would control Christian Europe.
The Spanish inquisition was a deeply flawed attempt to secure the internal order of Spain after a divisive civil war, at a time when the nation feared an imminent attack by the Ottoman empire. Similarly, the European wars of religion and the thirty years war were certainly exacerbated by the sectarian quarrels of Protestants and Catholics, but their violence reflected the birth pangs of the modern nation-state.
It was these European wars, in the 16th and 17th centuries, that helped create what has been called “the myth of religious violence”. It was said that Protestants and Catholics were so inflamed by the theological passions of the Reformation that they butchered one another in senseless battles that killed 35% of the population of central Europe. Yet while there is no doubt that the participants certainly experienced these wars as a life-and-death religious struggle, this was also a conflict between two sets of state-builders: the princes of Germany and the other kings of Europe were battling against the Holy Roman Emperor, Charles V, and his ambition to establish a trans-European hegemony modelled after the Ottoman empire.
If the wars of religion had been solely motivated by sectarian bigotry, we should not expect to have found Protestants and Catholics fighting on the same side, yet in fact they often did so. Thus Catholic France repeatedly fought the Catholic Habsburgs, who were regularly supported by some of the Protestant princes.
In the French wars of religion (1562–98) and the thirty years war, combatants crossed confessional lines so often that it was impossible to talk about solidly “Catholic” or “Protestant” populations. These wars were neither “all about religion” nor “all about politics”. Nor was it a question of the state simply “using” religion for political ends. There was as yet no coherent way to divide religious causes from social causes.
People were fighting for different visions of society, but they would not, and could not, have distinguished between religious and temporal factors in these conflicts. Until the 18th century, dissociating the two would have been like trying to take the gin out of a cocktail.
These developments required a new understanding of religion. It was provided by Martin Luther, who was the first European to propose the separation of church and state. Medieval Catholicism had been an essentially communal faith; most people experienced the sacred by living in community. But for Luther, the Christian stood alone before his God, relying only upon his Bible.
Luther’s acute sense of human sinfulness led him, in the early 16th century, to advocate the absolute states that would not become a political reality for another hundred years. For Luther, the state’s prime duty was to restrain its wicked subjects by force, “in the same way as a savage wild beast is bound with chains and ropes”. The sovereign, independent state reflected this vision of the independent and sovereign individual. Luther’s view of religion, as an essentially subjective and private quest over which the state had no jurisdiction, would be the foundation of the modern secular ideal.
But Luther’s response to the peasants’ war in Germany in 1525, during the early stages of the wars of religion, suggested that a secularised political theory would not necessarily be a force for peace or democracy. The peasants, who were resisting the centralising policies of the German princes – which deprived them of their traditional rights – were mercilessly slaughtered by the state. Luther believed that they had committed the cardinal sin of mixing religion and politics: suffering was their lot, and they should have turned the other cheek, and accepted the loss of their lives and property.
“A worldly kingdom,” he insisted, “cannot exist without an inequality of persons, some being free, some imprisoned, some lords, some subjects.” So, Luther commanded the princes, “Let everyone who can, smite, slay and stab, secretly or openly, remembering that nothing can be more poisoned, hurtful, or devilish than a rebel.”
Dawn of the liberal state
By the late 17th century, philosophers had devised a more urbane version of the secular ideal. For John Locke it had become self-evident that “the church itself is a thing absolutely separate and distinct from the commonwealth. The boundaries on both sides are fixed and immovable.” The separation of religion and politics – “perfectly and infinitely different from each other” – was, for Locke, written into the very nature of things. But the liberal state was a radical innovation, just as revolutionary as the market economy that was developing in the west and would shortly transform the world. Because of the violent passions it aroused, Locke insisted that the segregation of “religion” from government was “above all things necessary” for the creation of a peaceful society.
Hence Locke was adamant that the liberal state could tolerate neither Catholics nor Muslims, condemning their confusion of politics and religion as dangerously perverse. Locke was a major advocate of the theory of natural human rights, originally pioneered by the Renaissance humanists, which he defined as the rights to life, liberty and property. But secularisation emerged at a time when Europe was beginning to colonise the New World, and it would come to exert considerable influence on the way the west viewed those it had colonised – much as in our own time, the prevailing secular ideology perceives Muslim societies that seem incapable of separating faith from politics to be irredeemably flawed.
This introduced an inconsistency, since for the Renaissance humanists there could be no question of extending these natural rights to the indigenous inhabitants of the New World. Indeed, these peoples could justly be penalised for failing to conform to European norms. In the 16th century, Alberico Gentili, a professor of civil law at Oxford, argued that land that had not been exploited agriculturally, as it was in Europe, was “empty” and that “the seizure of [such] vacant places” should be “regarded as law of nature”.
Locke agreed that the native peoples had no right to life, liberty or property. The “kings” of America, he decreed, had no legal right of ownership to their territory. He also endorsed a master’s “Absolute, arbitrary, despotical power” over a slave, which included “the power to kill him at any time”. The pioneers of secularism seemed to be falling into the same old habits as their religious predecessors.
Secularism was designed to create a peaceful world order, but the church was so intricately involved in the economic, political and cultural structures of society that the secular order could only be established with a measure of violence. In North America, where there was no entrenched aristocratic government, the disestablishment of the various churches could be accomplished with relative ease. But in France, the church could be dismantled only by an outright assault; far from being experienced as a natural and essentially normative arrangement, the separation of religion and politics could be experienced as traumatic and terrifying.
During the French revolution, one of the first acts of the new national assembly on November 2, 1789, was to confiscate all church property to pay off the national debt: secularisation involved dispossession, humiliation and marginalisation. This segued into outright violence during the September massacres of 1792, when the mob fell upon the jails of Paris and slaughtered between two and three thousand prisoners, many of them priests.
Early in 1794, four revolutionary armies were dispatched from Paris to quell an uprising in the Vendée against the anti-Catholic policies of the regime. Their instructions were to spare no one. At the end of the campaign, General François-Joseph Westermann reportedly wrote to his superiors: “The Vendée no longer exists. I have crushed children beneath the hooves of our horses, and massacred the women … The roads are littered with corpses.”
Ironically, no sooner had the revolutionaries rid themselves of one religion than they invented another. Their new gods were liberty, nature and the French nation, which they worshipped in elaborate festivals choreographed by the artist Jacques-Louis David. The same year that the goddess of reason was enthroned on the high altar of Notre Dame cathedral, the reign of terror plunged the new nation into an irrational bloodbath, in which some 17,000 men, women and children were executed by the state.
To die for one’s country
When Napoleon’s armies occupied Prussia in 1807, the philosopher Johann Gottlieb Fichte urged his countrymen to lay down their lives for the Fatherland – a manifestation of the divine and the repository of the spiritual essence of the Volk. If we define the sacred as that for which we are prepared to die, what Benedict Anderson called the “imagined community” of the nation had come to replace God. It is now considered admirable to die for your country, but not for your religion.
As the nation-state came into its own in the 19th century along with the industrial revolution, its citizens had to be bound tightly together and mobilised for industry. Modern communications enabled governments to create and propagate a national ethos, and allowed states to intrude into the lives of their citizens more than had ever been possible. Even if they spoke a different language from their rulers, subjects now belonged to the “nation,” whether they liked it or not.
John Stuart Mill regarded this forcible integration as progress; it was surely better for a Breton, “the half-savage remnant of past times”, to become a French citizen than “sulk on his own rocks”. But in the late 19th century, the British historian Lord Acton feared that the adulation of the national spirit that laid such emphasis on ethnicity, culture and language, would penalise those who did not fit the national norm: “According, therefore, to the degree of humanity and civilisation in that dominant body which claims all the rights of the community, the inferior races are exterminated or reduced to servitude, or put in a condition of dependence.”
The Enlightenment philosophers had tried to counter the intolerance and bigotry that they associated with “religion” by promoting the equality of all human beings, together with democracy, human rights, and intellectual and political liberty, modern secular versions of ideals which had been promoted in a religious idiom in the past. The structural injustice of the agrarian state, however, had made it impossible to implement these ideals fully. The nation-state made these noble aspirations practical necessities.
More and more people had to be drawn into the productive process and needed at least a modicum of education. Eventually they would demand the right to participate in the decisions of government. It was found by trial and error that those nations that democratised forged ahead economically, while those that confined the benefits of modernity to an elite fell behind.
Innovation was essential to progress, so people had to be allowed to think freely, unconstrained by their class, guild or church. Governments needed to exploit all their human resources, so outsiders, such as Jews in Europe and Catholics in England and America, were brought into the mainstream.
Yet this toleration was only skin-deep, and as Lord Acton had predicted, an intolerance of ethnic and cultural minorities would become the achilles heel of the nation-state. Indeed, the ethnic minority would replace the heretic (who had usually been protesting against the social order) as the object of resentment in the new nation-state.
Thomas Jefferson, one of the leading proponents of the Enlightenment in the United States, instructed his secretary of war in 1807 that Native Americans were “backward peoples” who must either be “exterminated” or driven “beyond our reach” to the other side of the Mississippi “with the beasts of the forest”. The following year, Napoleon issued the “infamous decrees”, ordering the Jews of France to take French names, privatise their faith, and ensure that at least one in three marriages per family was with a gentile.
Increasingly, as national feeling became a supreme value, Jews would come to be seen as rootless and cosmopolitan. In the late 19th century, there was an explosion of antisemitism in Europe, which undoubtedly drew upon centuries of Christian prejudice, but gave it a scientific rationale, claiming that Jews did not fit the biological and genetic profile of the Volk, and should be eliminated from the body politic as modern medicine cuts out a cancer.
When secularisation was implemented in the developing world, it was experienced as a profound disruption – just as it had originally been in Europe. Because it usually came with colonial rule, it was seen as a foreign import and rejected as profoundly unnatural. In almost every region of the world where secular governments have been established with a goal of separating religion and politics, a counter-cultural movement has developed in response, determined to bring religion back into public life.
What we call “fundamentalism” has always existed in a symbiotic relationship with a secularisation that is experienced as cruel, violent and invasive. All too often an aggressive secularism has pushed religion into a violent riposte. Every fundamentalist movement that I have studied in Judaism, Christianity and Islam is rooted in a profound fear of annihilation, convinced that the liberal or secular establishment is determined to destroy their way of life. This has been tragically apparent in the Middle East.
Very often modernising rulers have embodied secularism at its very worst and have made it unpalatable to their subjects. Mustafa Kemal Ataturk, who founded the secular republic of Turkey in 1923, is often admired in the west as an enlightened Muslim leader, but for many in the Middle East he epitomised the cruelty of secular nationalism.
He hated Islam, describing it as a “putrefied corpse”, and suppressed it in Turkey by outlawing the Sufi orders and seizing their properties, closing down the madrasas and appropriating their income. He also abolished the beloved institution of the caliphate, which had long been a dead-letter politically but which symbolised a link with the Prophet. For groups such as al-Qaida and Isis, reversing this decision has become a paramount goal.
Ataturk also continued the policy of ethnic cleansing that had been initiated by the last Ottoman sultans; in an attempt to control the rising commercial classes, they systematically deported the Armenian and Greek-speaking Christians, who comprised 90% of the bourgeoisie. The Young Turks, who seized power in 1909, espoused the antireligious positivism associated with Auguste Comte and were also determined to create a purely Turkic state.
During the first world war, approximately one million Armenians were slaughtered in the first genocide of the 20th century: men and youths were killed where they stood, while women, children and the elderly were driven into the desert where they were raped, shot, starved, poisoned, suffocated or burned to death.
Clearly inspired by the new scientific racism, Mehmet Resid, known as the “execution governor”, regarded the Armenians as “dangerous microbes” in “the bosom of the Fatherland”. Ataturk completed this racial purge. For centuries Muslims and Christians had lived together on both sides of the Aegean; Ataturk partitioned the region, deporting Greek Christians living in what is now Turkey to Greece, while Turkish-speaking Muslims in Greece were sent the other way.
The fundamentalist reaction
Secularising rulers such as Ataturk often wanted their countries to look modern, that is, European. In Iran in 1928, Reza Shah Pahlavi issued the laws of uniformity of dress: his soldiers tore off women’s veils with bayonets and ripped them to pieces in the street. In 1935, the police were ordered to open fire on a crowd who had staged a peaceful demonstration against the dress laws in one of the holiest shrines of Iran, killing hundreds of unarmed civilians. Policies like this made veiling, which has no Qur’anic endorsement, an emblem of Islamic authenticity in many parts of the Muslim world.
Following the example of the French, Egyptian rulers secularised by disempowering and impoverishing the clergy. Modernisation had begun in the Ottoman period under the governor Muhammad Ali, who starved the Islamic clergy financially, taking away their tax-exempt status, confiscating the religiously endowed properties that were their principal source of income, and systematically robbing them of any shred of power. When the reforming army officer Gamal Abdul Nasser came to power in 1952, he changed tack and turned the clergy into state officials.
For centuries, they had acted as a protective bulwark between the people and the systemic violence of the state. Now Egyptians came to despise them as government lackeys. This policy would ultimately backfire, because it deprived the general population of learned guidance that was aware of the complexity of the Islamic tradition. Self-appointed freelancers, whose knowledge of Islam was limited, would step into the breach, often to disastrous effect.
If some Muslims today fight shy of secularism, it is not because they have been brainwashed by their faith but because they have often experienced efforts at secularisation in a particularly virulent form. Many regard the west’s devotion to the separation of religion and politics as incompatible with admired western ideals such as democracy and freedom. In 1992, a military coup in Algeria ousted a president who had promised democratic reforms, and imprisoned the leaders of the Islamic Salvation Front (FIS), which seemed certain to gain a majority in the forthcoming elections.
Had the democratic process been thwarted in such an unconstitutional manner in Iran or Pakistan, there would have been worldwide outrage. But because an Islamic government had been blocked by the coup, there was jubilation in some quarters of the western press – as if this undemocratic action had instead made Algeria safe for democracy. In rather the same way, there was an almost audible sigh of relief in the west when the Muslim Brotherhood was ousted from power in Egypt last year. But there has been less attention to the violence of the secular military dictatorship that has replaced it, which has exceeded the abuses of the Mubarak regime.
After a bumpy beginning, secularism has undoubtedly been valuable to the west, but we would be wrong to regard it as a universal law. It emerged as a particular and unique feature of the historical process in Europe; it was an evolutionary adaptation to a very specific set of circumstances. In a different environment, modernity may well take other forms.
Many secular thinkers now regard “religion” as inherently belligerent and intolerant, and an irrational, backward and violent “other” to the peaceable and humane liberal state – an attitude with an unfortunate echo of the colonialist view of indigenous peoples as hopelessly “primitive”, mired in their benighted religious beliefs.
There are consequences to our failure to understand that our secularism, and its understanding of the role of religion, is exceptional. When secularisation has been applied by force, it has provoked a fundamentalist reaction – and history shows that fundamentalist movements which come under attack invariably grow even more extreme. The fruits of this error are on display across the Middle East: when we look with horror upon the travesty of Isis, we would be wise to acknowledge that its barbaric violence may be, at least in part, the offspring of policies guided by our disdain. •
• Karen Armstrong’s Fields of Blood: Religion and the History of Violence is published today by Bodley Head.
The most interesting news in former Defense Secretary Leon E. Panetta’s memoir, “Worthy Fights,” concerns his disagreements with the Obama White House over Syria, Iraq and the budget crisis — disagreements that have been outlined in recent interviews and in testimony before Congress.
Still, Mr. Panetta elaborates on such subjects here, and these passages — in what is otherwise an often opaque and evasive book — shed light on the distressing events now unfolding in the Middle East as the Islamic State, also known as ISIS, rolls through large sections of Syria and Iraq. They also illuminate decisions made by the Obama administration, which, in the view of Mr. Panetta and many military observers, contributed to (or at least failed to help inhibit) these sobering developments.
In “Worthy Fights,” Mr. Panetta reminds us that two years ago, he — along with David H. Petraeus, then the Director of the Central Intelligence Agency, and Secretary of State Hillary Rodham Clinton — supported a plan to arm moderate Syrian rebels. In an interview last month with “60 Minutes,” Mr. Panetta said he thought that such a plan “would’ve helped. And I think in part, we pay the price for not doing that in what we see happening with ISIS” today. Here, he writes: “If we don’t prevent these Sunni extremists from taking over large swaths of territory in the Middle East, it will be only a matter of time before they turn their sights on us.”
Mr. Panetta also writes that he advocated leaving a small American force to help preserve “the fragile stability” that was “barely holding” Iraq together in 2011. This position was shared by members of the Joint Chiefs of Staff and military commanders in the region, he writes. But “the president’s team at the White House pushed back.”
Those “on our side of the debate,” Mr. Panetta goes on, “viewed the White House as so eager to rid itself of Iraq that it was willing to withdraw rather than lock in arrangements that would preserve our influence and interests.” And “without the president’s active advocacy,” he says, a deal failed to emerge with Nuri Kamal al-Maliki, then the Iraqi Prime Minister, to keep a modest number of American troops there.
To this day, Mr. Panetta says he believes “that a small, focused U.S. troop presence in Iraq could have effectively advised the Iraqi military on how to deal with Al Qaeda’s resurgence and the sectarian violence that has engulfed the country.” Instead, the last American troops left in December 2011, and at the start of this year, trucks flying the black flag of Al Qaeda rolled into Falluja and Ramadi, where American soldiers fought and died in some of the war’s bloodiest battles.
It is in these sections of the book, dealing with Iraq, Syria and presidential leadership (or its lack), that Mr. Panetta is most plain-spoken and impassioned. In other chapters, he writes more as the genial congressman he was for 16 years, dispensing a mix of reminiscence and spin, as well as boilerplate accounts of his work toward a balanced budget as director of the Office of Management and Budget and as Bill Clinton’s chief of staff. From 2009 through mid-2011, he served as the Obama administration’s first C.I.A. director, overseeing the American operation that led to the death of Osama bin Laden.
In this book, Mr. Panetta skims over crucial Defense Department issues, including systemic problems in veterans’ hospitals, and a military stretched thin during two long wars in Iraq and Afghanistan. He is even more evasive when it comes to discussing the C.I.A., often rationalizing or sidestepping troubling questions about the agency’s use of “enhanced interrogation” during the Bush years and its growing reliance, under President Obama, on drone warfare and targeted killings.
Having once accused the Bush administration of turning the country into “a nation of armchair torturers,” Mr. Panetta — who had little background in intelligence or military affairs — was initially greeted with suspicion by the agency when he arrived. But, as Mark Mazzetti, a national security correspondent for The New York Times wrote in his incisive book, “The Way of the Knife,” Mr. Panetta quickly “became a C.I.A. champion, beloved by many at Langley but criticized by others who said that, like so many C.I.A. directors before him, he had been co-opted by the agency’s clandestine branch.”
Though President Obama overruled him, Mr. Panetta argued against declassifying and releasing internal memos detailing the early Bush-era interrogation methods that he had once publicly condemned.
In “Worthy Fights,” Mr. Panetta writes that “it seemed wrong to me to ask a public servant to take a risk for his country and assure him that it was both legal and approved, then, years later, to suggest that he had done something wrong.” He also takes issue with critics who have questioned the utility of what he calls “unsavory techniques,” asserting that “harsh interrogation did cause some prisoners to yield to their captors and produced leads that helped our government understand Al Qaeda’s organization, methods and leadership.”
In this book’s pages, there is no substantial exploration of the intelligence lapses that contributed to the Obama administration’s failure to anticipate the Arab Spring or understand its fallout in Egypt, Libya and Syria.
Nor is there any real analysis of why the White House seems to have been caught off guard by the Islamic State’s swift advance and the collapse of the Iraqi Army. These developments took place after Mr. Panetta left government, but readers cannot help wishing he had weighed in here on the debate over whether this was primarily a problem with intelligence or a problem with policy-making in the White House.
While he neglects such important matters in “Worthy Fights,” Mr. Panetta does take time to argue that James R. Clapper Jr., the director of national intelligence — who misled a congressional hearing when he said that the National Security Agency was not gathering data on millions of Americans — “may be the perfect person to serve” in that “difficult position,” praising him as “deft and scrupulous.”
When it comes to the Obama administration’s proclivity for trying to centralize decision-making in the White House, Mr. Panetta echoes observations made by journalists (like James Mann, the author of “The Obamians”) and other administration insiders, like his predecessor at the Pentagon, Robert M. Gates (in his candid memoir, “Duty”).
Here, Mr. Panetta writes that the centralization of authority in the White House meant that cabinet members and agency heads “were rarely encouraged to take their own initiative or lobby for priorities,” and senior officials “who knew the most about certain subjects were excluded from important public debates, skewing the conversation in ways that sometimes did the administration’s policies a disservice.”
It was believed “among those close to him,” Mr. Panetta adds, that the president had not found his “time as a senator very rewarding” and tended “to be disdainful of Congress generally.” Mr. Panetta says he never “witnessed that disdain directly, but I did pick up evidence of it within his senior staff.”
Mr. Panetta also has some sharp things to say about Mr. Obama’s presidential leadership, rebuking him for his policy flip-flops on Syria. First, Mr. Panetta notes, Mr. Obama indicated he was leaning toward limited military action after concluding that President Bashar al-Assad’s forces had unleashed a devastating chemical attack against their own people (an action Mr. Obama had earlier warned would cross a “red line”); then he backed off, “agreeing to submit the matter to Congress,” which was, “as he well knew, an almost certain way to scotch any action.”
In Mr. Panetta’s view, this was “a blow to American credibility,” sending “the wrong message to the world”: “The power of the United States rests on its word, and clear signals are important both to deter adventurism and to reassure allies that we can be counted on.”
Echoing a complaint frequently heard within the Beltway, Mr. Panetta also laments what he regards as the president’s sometimes passive or disengaged approach to governing. He argues that Mr. Obama’s failure to lead Congress out of the sequester standoff is a prime example of his “reticence to engage his opponents and rally support for his cause.” At times, Mr. Panetta writes, Mr. Obama “avoids the battle, complains and misses opportunities,” giving “his opponents room to shape the contours of his presidency.”
As for the bin Laden raid, Mr. Panetta’s description not only lacks the visceral detail and immediacy of “No Easy Day” — a firsthand account of the raid by Matt Bissonnette (a.k.a. Mark Owen), a member of the SEAL team that took down the Qaeda leader — but also declines to give us a palpable sense of what was going on during the raid at Langley and the White House.
Mr. Panetta does, however, have a favorite joke, which he says he never had a chance to deliver before in public: “Looking back on my career, I’ve been a Republican, a congressman, a White House chief of staff, and a defense secretary. Come to think of it, I’ve done everything that Dick Cheney has done. Except the guy I made sure got shot in the face was Osama bin Laden.”
Only the credulous or the craven might consider a British politician their hero. I plead guilty, but only on one count. It is nearly a decade since Robin Cook’s sudden death. Parliament was robbed of a rare voice of principle, a man who combined erudition and acerbic wit with a forensic ability to assimilate and distil information to devastating effect.
Cook’s political career was punctuated by great moments, from the demolition of John Major over the Scott inquiry in 1996 to the demolition of his own Labour government, again over Iraq, in 2003. His intolerance of Whitehall deceit was matched by impatience towards those who couldn’t keep up with him. Cook’s refusal to schmooze – he would much rather go to the horse-racing – prevented him from getting to the very top, but he left his mark in a way that many of his colleagues and time-servers have not.
He may be best remembered for leading the opposition to Tony Blair’s great foreign misadventure, but Cook was actually an advocate of military action in defence of human rights, while trying (and largely failing) to curb arms sales. A fierce advocate of centre-left values, he was at the same time rarely tribal, and embraced the unfashionable cause of electoral reform.
I remember a trip we made not long after he’d been made foreign secretary. Fresh from giving a public dressing-down to Croatia’s nationalist President, he flew back to Scotland and straight to a constituency surgery.
He spent a couple of hours listening to a long line of concerns ranging from domestic violence to leaky roofs to housing benefit, writing down various points long-hand in his notebook. He was painstaking in the detail, but he saw in these examples a bigger picture. Even during this so-called time of plenty, long before the financial crash, he warned of the dangers of society’s stratification. He was always very aware of inequality.
I was thinking of Cook while putting the finishing touches to my study of 2,000 years of the global super-rich. Having been immersed in acquisitiveness, narcissism and the odd show of noblesse oblige, it is worth remembering that it doesn’t have to be this way.
• John Kampfner’s The Rich is published by Little, Brown.
During the H-bomb testing frenzy of the 1950s, a RAND Corporation researcher named Paul Baran became concerned about the fragility of America’s communications networks.
The era’s telephone systems required users to connect to a handful of major hubs, which the Soviets would doubtless target in the early hours of World War III. So Baran dreamed up a less vulnerable alternative: a decentralized network that resembled a vast fishnet, with an array of small nodes that were each linked to a few others.
These nodes could not only receive signals but also route them along to their neighbors, thereby creating countless possible paths for data to keep flowing should part of the network be destroyed. This data would travel across the structure in tiny chunks, called “packets,” that would self-assemble into coherent wholes upon reaching their destinations.
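Those two ideas — a redundant mesh that routes around damage, and messages broken into packets that reassemble at their destination — lend themselves to a toy illustration. The short Python sketch below is purely illustrative, not Baran’s actual design or any real protocol; the node names, the message and the packet size are invented for the example.

from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search for any surviving route from src to dst."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # network partitioned: no route survives

def send(links, src, dst, message, chunk=4):
    # Split the message into small numbered packets...
    packets = [(i, message[i:i + chunk]) for i in range(0, len(message), chunk)]
    received = []
    for seq, data in packets:
        route = shortest_path(links, src, dst)  # each packet finds its own path
        if route is None:
            return None
        received.append((seq, data))
    # ...and reassemble them in order at the destination.
    return "".join(data for seq, data in sorted(received))

# A toy "fishnet" of nodes, each linked to a few neighbours.
mesh = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D", "E"},
    "D": {"B", "C", "F"}, "E": {"C", "F"}, "F": {"D", "E"},
}

print(send(mesh, "A", "F", "attack at dawn"))  # delivered via some route
mesh.pop("D")                                  # knock out a hub node...
for neighbours in mesh.values():
    neighbours.discard("D")
print(send(mesh, "A", "F", "attack at dawn"))  # ...later packets reroute via C-E

Knocking out the hub simply pushes the remaining packets onto whatever routes survive, which is precisely the resilience Baran was after.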
When Baran pitched his concept to AT&T, he was confident the company would grasp the wisdom of building a network that could withstand a nuclear attack. But as Walter Isaacson recounts in “The Innovators,” his sweeping and surprisingly tenderhearted history of the digital age, AT&T’s executives reacted as if Baran had asked them to get into the unicorn-breeding business. They explained at length why his “packet-switching” network was a physical impossibility, at one point calling in 94 separate technicians to lecture Baran on the limits of the company’s hardware. “When it was over, the AT&T executives asked Baran, ‘Now do you see why packet switching wouldn’t work?’ ” Isaacson writes. “To their great disappointment, Baran simply replied, ‘No.’ ”
AT&T thus blew its chance to loom large in technological lore, for packet switching went on to become a keystone of the Internet. But the company can take solace in the fact that it was hardly alone in letting knee-jerk negativity blind it to a tremendous digital opportunity: Time and again in “The Innovators,” powerful entities shrug their shoulders when presented with zillion-dollar ideas. Fortunately for those of us who now feel adrift when our iPads and 4G phones are beyond arm’s reach, the Paul Barans of the world are not easily discouraged.
Stubbornness is just one of the personality traits ubiquitous among the brilliant subjects of “The Innovators.” Isaacson identifies several other virtues that were essential to his geeky heroes’ success, none of which will surprise those familiar with Silicon Valley’s canon of self-help literature: The digital pioneers all loathed authority, embraced collaboration and prized art as much as science. Though its lessons may be prosaic, the book is still absorbing and valuable, and Isaacson’s outsize narrative talents are on full display. Few authors are more adept at translating technical jargon into graceful prose, or at illustrating how hubris and greed can cause geniuses to lose their way.
Having chosen such an ambitious project to follow his 2011 biography of the Apple co-founder Steve Jobs, Isaacson is wise to employ a linear structure that gives “The Innovators” a natural sense of momentum. The book begins in the 1830s with the prescient Ada Lovelace, Lord Byron’s mathematically gifted daughter, who envisioned a machine that could perform varied tasks in response to different algorithmic instructions. (Isaacson takes pains throughout to salute the unheralded contributions of female programmers.) The story then skips ahead to the eve of World War II, when engineers scrambled to build machines capable of calculating the trajectories of missiles and shells.
One of these inventors was John Mauchly, a driven young professor at Ursinus College. In June 1941, he paid a visit to Ames, Iowa, where an electrical engineer named John Atanasoff had cobbled together an electronic calculator “that could process and store data at a cost of only $2 per digit” — a seemingly magical feat. Against the advice of his wife, who suspected that Mauchly was a snake, Atanasoff proudly showed off his ragtag creation. Soon thereafter, Mauchly incorporated some of Atanasoff’s ideas into Eniac, the 27-ton machine widely hailed as the world’s first true computer. The bitter patent fight that ensued would last until 1973, with Atanasoff emerging victorious.
Mauchly is often demonized for stealing from that most romantic of tech archetypes, the “lone tinkerer in a basement” who sketched out brainstorms on cocktail napkins. But Isaacson contends that men like Atanasoff receive too much adulation, for an ingenious idea is worthless unless it can be executed on a massive scale. If Mauchly hadn’t come to Iowa to “borrow” his work, Atanasoff would have been “a forgotten historical footnote” rather than a venerated father of modern computing.
Isaacson is not nearly as sympathetic in discussing the sins of William Shockley, who shared a 1956 Nobel Prize in Physics for co-inventing the transistor. Shockley is the book’s arch-villain, a glory hog whose paranoid tendencies destroyed the company that bore his name. (He once forced all his employees to take lie-detector tests to determine if someone had sabotaged the office.) His eight best researchers quit and went on to found Fairchild Semiconductor, arguably the most seminal company in digital history; Shockley, meanwhile, devolved into a raving proponent of odious theories about race and intelligence.
The digital revolution germinated not only at button-down Silicon Valley firms like Fairchild, but also in the hippie enclaves up the road in San Francisco. The intellectually curious denizens of these communities “shared a resistance to power elites and a desire to control their own access to information.” Their freewheeling culture would give rise to the personal computer, the laptop and the concept of the Internet as a tool for the Everyman rather than scientists. Though Isaacson is clearly fond of these unconventional souls, his description of their world suffers from a certain anthropological detachment.
Perhaps because he’s accustomed to writing biographies of men who operated inside the corridors of power — Benjamin Franklin, Henry Kissinger, Jobs — Isaacson seems a bit baffled by committed outsiders like Stewart Brand, an LSD-inspired futurist who predicted the democratization of computing. He also does himself no favors by frequently citing the work of John Markoff and Tom Wolfe, two writers who have produced far more intimate portraits of ’60s counterculture.
Yet this minor shortcoming is quickly forgiven when “The Innovators” segues into its rollicking last act, in which hardware becomes commoditized and software goes on the ascent. The star here is Bill Gates, whom Isaacson depicts as something close to a punk — a spoiled brat and compulsive gambler who “was rebellious just for the hell of it.”
Like Paul Baran before him, Gates encountered an appalling lack of vision in the corporate realm — in his case at IBM, which failed to realize that its flagship personal computer would be cloned into oblivion if the company permitted Microsoft to license the machine’s MS-DOS operating system at will. Gates pounced on this mistake with a feral zeal that belies his current image as a sweater-clad humanitarian.
“The Innovators” cannot really be faulted for the hastiness of its final pages, in which Isaacson provides brief and largely unilluminating glimpses at Twitter, Wikipedia and Google. There is no organic terminus for the book’s narrative, since digital technology did not cease to evolve the moment Isaacson handed in his manuscript. As a result, any ending was doomed to feel dated. (There is, for example, but a single passing mention of the digital currency Bitcoin.)
But even at its most rushed, the book evinces a genuine affection for its subjects that makes it tough to resist. Isaacson confesses early on that he was once “an electronics geek who loved Heathkits and ham radios,” and that background seems to have given him keen insight into how youthful passion transforms into professional obsession. His book is thus most memorable not for its intricate accounts of astounding breakthroughs and the business dramas that followed, but rather for the quieter moments in which we realize that the most primal drive for innovators is a need to feel childlike joy.
The economics profession has not, to say the least, covered itself in glory these past six years. Hardly any economists predicted the 2008 crisis — and the handful who did tended to be people who also predicted crises that didn’t happen. More significant, many, and arguably most, economists were claiming, right up to the moment of collapse, that nothing like this could even happen. Furthermore, once crisis struck, economists seemed unable to agree on a response. They’d had 75 years since the Great Depression to figure out what to do if something similar happened again, but the profession was utterly divided when the moment of truth arrived.
In “Seven Bad Ideas: How Mainstream Economists Have Damaged America and the World,” Jeff Madrick — a contributing editor at Harper’s Magazine and a frequent writer on matters economic — argues that the professional failures since 2008 didn’t come out of the blue but were rooted in decades of intellectual malfeasance.
As a practicing and, I’d claim, mainstream economist myself, I’m tempted to quibble. How “mainstream,” really, are the bad ideas he attacks? How much of the problem is bad economic ideas per se as opposed to economists who have proved all too ready to drop their own models — in effect, reject their own ideas — when their models conflict with their political leanings? And was it the ideas of economists or the prejudices of politicians that led to so much bad policy?
I’ll return to those quibbles later, but Madrick’s basic theme is surely right. His bad ideas are definitely out there, have been expressed by plenty of economists, and have indeed done a lot of harm.
So what are the seven bad ideas? Actually, they aren’t all that distinct. In particular, bad idea No. 1 — the Invisible Hand — is pretty hard to distinguish from bad idea No. 3, Milton Friedman’s case against government intervention, and segues fairly seamlessly into bad idea No. 7, globalization as something that is always good. As an aside, this sometimes makes Madrick’s argument more disjointed than I’d like, with key propositions spread across nonconsecutive chapters. But there is an important point here, and Madrick has clarified my own thinking on the subject.
Adam Smith used the phrase “invisible hand” only once in “The Wealth of Nations,” and he probably didn’t mean to say what most people now think he said. But never mind: Today the phrase is almost always used to mean the proposition that market economies can be trusted to get everything, or almost everything, right without more than marginal government intervention.
Is this belief well grounded in theory and evidence? No. As Madrick makes clear, many economists have, consciously or unconsciously, engaged in a game of bait and switch. On one side, we have elegant mathematical models showing that under certain conditions an unregulated free-market economy will produce an efficient “general equilibrium,” in the sense that nobody could be made better off without making anyone worse off. Yet as Madrick says, these assumed conditions — including the assumption that people “are rational decision makers, and that they have all the price and product information they need” — are manifestly not met in practice. What, then, do the elegant models tell us about the real world?
Well, in a different chapter Madrick recalls Milton Friedman’s dictum that economic models should be judged not by the realism of their assumptions but by the accuracy of their predictions. This lets general equilibrium off the hook, sort of. But has the proposition that free markets get it right ever been vetted for predictive accuracy? Of course not.
Friedman’s own polemics on behalf of free markets consist mainly of “assertions based on how free markets may work according to the Invisible Hand,” Madrick writes, with hardly any evidence presented that they actually work that way. In other words, economists arguing for free markets and limited government try to have it both ways: They claim that their doctrine is a deep insight derived from first principles, but dismiss as irrelevant the overwhelming evidence that these assumed principles don’t hold in practice.
Matters are even worse when it comes to the performance of financial markets. Here the proposition that markets should get it right — that major speculative bubbles can’t happen (bad idea No. 5) — doesn’t just depend on conditions that clearly don’t hold in practice, but is directly contradicted by evidence on herd behavior and excess volatility. Yet “efficient markets theory” has maintained its academic dominance. Eugene Fama of the University of Chicago, the father of efficient markets, still denies that financial bubbles even exist — and last year he shared a Nobel in economic science.
Still, all of these failings of mainstream economics were obvious long before the 2008 crisis. What has really come as news is the seeming inability of economists to agree on a policy response to mass unemployment. And here is where my quibbles with Madrick get louder.
No. 2 on Madrick’s bad idea list is Say’s Law, which states that savings are automatically invested, so that there cannot be an overall shortfall in demand. A further implication of Say’s Law is that government stimulus can never do any good, because deficit spending by the public sector will always crowd out an equal amount of private spending.
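To see the crowding-out logic in one step (a textbook sketch of the reasoning, not Madrick’s own notation): in a closed economy, output is spent as Y = C + I + G. If Say’s Law holds, output Y is always at its full-employment level and consumption C is pinned down by after-tax income, so private investment must equal I = Y − C − G, and every extra dollar of government purchases G displaces exactly one dollar of I. The stimulus debate is, at bottom, a debate over whether Y really is fixed in a depressed economy.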
But is this “mainstream economics”? Madrick cites two University of Chicago professors, Casey Mulligan and John Cochrane, who did indeed echo Say’s Law when arguing against the Obama stimulus. But these economists were outliers within the profession. Chicago’s own business school regularly polls a representative sample of influential economists for their views on policy issues; when it asked whether the Obama stimulus had reduced the unemployment rate, 92 percent of the respondents said that it had. Madrick is able to claim that Say’s Law is pervasive in mainstream economics only by lumping it together with a number of other concepts that, correct or not, are actually quite different.
Now, it’s true that the relative handful of economists claiming that stimulus can’t possibly work, or that slashing government spending is actually expansionary, have a much higher profile than their numbers or their influence within the profession warrants. Why? Partly, the answer is that the news media — especially but not only partisan media like The Wall Street Journal’s editorial page — have promoted the views of economists they like for political reasons. Partly, also, it’s because politicians listen to economists who tell them what they want to hear. I’m not saying that mainstream economists bear none of the blame; the decades-long retreat from Keynes has undoubtedly allowed old fallacies to make a comeback. But austerity mania has to a large extent spread despite mainstream economics, not because of it.
I’d make a further observation here: Academic economists have much less influence in Europe than they do in America. Yet the policy response to the crisis, while poor on this side of the Atlantic, has been much worse on the other. Politicians don’t need bad advice from economists in order to go off the rails.
Such quibbles aside, “Seven Bad Ideas” tells us an important and broadly accurate story about what went wrong. Economists presented as reality an idealized vision of free markets, dressed up in fancy math that gave it a false appearance of rigor. As a result, the world was unprepared when markets went bad. Economic ideas, declared John Maynard Keynes, are “dangerous for good or evil.” And in recent years, sad to say, evil has had the upper hand.