An American “Centrist”
by Dennis H. Wrong
The End of Ideology, by Daniel Bell. The Free Press. 416 pp. $7.50.
For nearly two decades now articles and reviews by Daniel Bell have been appearing in our better journals of ideas and opinion. He has been so ubiquitous a figure, expressing himself on so many subjects, that readers must have occasionally wondered if there were more than one person writing under the name. Bell has in fact had several different careers: youthful radical journalist in the early 40’s, teacher of social science, labor editor of Fortune, globe-trotter for international committees of intellectuals. Now that he has returned to academic life as associate professor of sociology at Columbia, the publication of this collection of his more ambitious essays suggests an effort to indicate his intellectual resting places.
Range, variety, and versatility are the talents with which Daniel Bell is commonly credited. Yet the grouping together of his essays in The End of Ideology reveals his intellectual concerns to be rather more consistent and narrowly focused than regular readers of his magazine pieces might have anticipated. Two main themes predominate. One is what used to be called “American exceptionalism”: the view that political and sociological concepts derived from the study of European societies seriously distort our vision if applied to the American scene. The other is the “end of ideology” of the title: the abating in our prosperous post-bourgeois era of the ideological conflicts between left and right which have for so long dominated Western politics. Marx and Dewey, especially as interpreted by Sidney Hook, to whom the book is dedicated, are Bell’s chief intellectual mentors in developing, exploring, and illustrating these themes in their bearing on a range of topics including the bureaucratization of American capitalism, the decline in the militancy of the labor movement, the inadequacies of Mills’s theory of an American “power elite,” the rapid swings of the intellectual Zeitgeist since the 30’s, and the continuing boredom and emptiness of industrial work and what might be done to alleviate it.
It is possible to discern a certain formal inconsistency between the two themes. If the end of ideology applies to the Western world as a whole and reflects the stability of mature, increasingly egalitarian industrial societies, then the American immunity from divisive ideological passions which Bell consistently emphasizes as the unique virtue of our political system is no longer exceptional. There is, in fact, a dialectic between European and American experience that Bell misses—the much advertised postwar “Americanization” of Europe and the belated American adoption of reforms long advocated and in some cases long instituted by European socialist parties both helped dampen the fervors of the 30’s, the decade whose epitaph Bell is writing once more in so many of these essays. In an often penetrating discussion of the “New Left” here and abroad—Dissent, the British “angries,” the post-Hungary defectors from Communism on the continent—Bell fails to note the paradox that American radicals denounce mass culture in accents echoing European elitist or “Establishment” values, while young British leftists manifest immense sympathy and curiosity about American culture and are envious of the fluidity of status and the absence of stuffy, gilded institutions like the monarchy on this side of the Atlantic. Each group seems to wish that its own country resembled more closely its image of the other.
Except for this discussion of contemporary radicalism abroad, a review of theories of Soviet society, and some reflections on the thought of the early Marx, the essays in The End of Ideology deal with American life. Bell is perhaps our most conscientious and reliable historian of the return from the 30’s, of the assimilation of once-radical intellectuals and trade unionists into a society which they succeeded in modifying without transforming. But by now we know this story so well in so many of its ramifications that several of these essays have inevitably lost the flavor of originality they had when they first appeared. One is struck, nevertheless, with how good the best of them are, a category in which I would place those dealing with the U.S. economy. Probing the point at which economic problems of wage determination, national mobilization policy, technological rationalization, and shifts in business leadership and the sources of investment funds become or touch on political conflicts, Bell refuses to be frightened by the formidable abstractions or the arrogant professionalism of the economists and determinedly locates what they have to say in its larger social and historical context.
I have more reservations about his treatment of political and cultural issues. Dwight Macdonald recently described Bell as a congenital centrist. If so, and Bell good-naturedly accepted the designation, we are here confronted with the spectacle of a centrist engaged in finding his own image reflected in the society around him, a society founded, he argues, on the politics of moderation and the culture of the middle class, yet strong and vital enough to blunt the critical assaults of both the cultural aristocrat and the Utopian radical. Bell is aware of the intensity with which many intellectuals “ache for the lost Arcadia” of the 30’s; he recognizes the widespread hunger for heroism, passion, a transfiguring cause. Nor is he lacking in sympathy for such an outlook, observing that the young intellectual unavoidably feels that “the middle way is for the middle-aged, not for him.” And he is careful to note that “a repudiation of ideology, to be meaningful, must mean not only a criticism of the Utopian order but of existing society as well.”
Yet there is an irritating cageyness about all this. For Bell will not finally concede the reality of anything out there in the world to justify rebellion and rejection. His tone becomes avuncular: ah yes, young intellectuals naturally want to be fired with romantic passions—that is a rite de passage of their vocation. But unhappily history is unable to accommodate them, for the Age of Ideology has ended and anyway we have learned that “the tendency to convert concrete issues into ideological problems, to invest them with moral color and high emotional charge, is to invite conflicts which can only damage a society.”
Now if one equates “ideology” with secular messianism few will want to deny the essential rightness of this judgment after all that has happened in this century. The trouble is that it is almost impossible to believe Bell’s claim that, for himself at least, it represents a bitter, hard-won wisdom, that in giving up the delights of ideology he is really surrendering something for which he has a strong appetite. His references to the pessimism, disenchantment, et al. of his “generation” (he is a great player of the generations game) therefore ring hollow, and he simply sounds smug when urging the young to renounce this evil fruit. It is all very well to affirm with Machiavelli that “men commit the error of not knowing when to limit their hopes” (this quotation heads the final and title chapter of The End of Ideology), but the reality and indestructibility of their hopes need to be insisted on with equal force. Especially if one sees history as tragic.
But these are essentially matters of sensibility and are perhaps of secondary importance in what is not, after all, a personal testament but a series of uncommonly knowledgeable political and sociological interpretations. I find, however, that on at least two occasions Bell’s centrism and “moderationism” lead him rather seriously astray on substantive questions.
In an essay which has already acquired justified celebrity as one of the very few available critiques of the fashionable notion of “mass society,” Bell ably traces the origin of the idea to European reactionary and aristocratic-elitist thought. He catches Jaspers, Ortega, and other mass theorists in a number of extreme and romantic overstatements before turning to a defense of American society against the charges of social atomization, cultural mediocrity, and compulsive conformism leveled by these thinkers and their epigoni. So the American habit of establishing and joining a multitude of voluntary associations serving all conceivable purposes is invoked to refute the view that we are a rootless, alienated mass, although the very disposition to create such “artificial” social groups might as readily be considered evidence for as against our rootlessness. And then we are told for the umpteenth time how many good books are sold and how many symphony orchestras flourish in the United States, as if these conceivable indications of the average cultural level were in any way relevant to the arguments of Ortega and T. S. Eliot which bear solely on the opportunities for high culture. While Bell, finally, is right to chide leftist critics of mass culture for ignoring the possible conflict between the claims of cultural excellence and social justice, he himself then suppresses the issue by descending to the sorriest level of apologetics in defense of American culture.
Bell argues that socialism failed as a movement in the United States because American socialists of all breeds never resolved the contradiction between ethical idealism and the requirements of effective political action. Perhaps by stressing this somewhat esoteric consideration he is simply trying to avoid another rundown of the usual causes cited to explain the failure of American socialism: the lack of a feudal past, the influx of immigrants, the high standard of living, and so on. But he manages seriously to distort and oversimplify the thought of Max Weber, from whom he borrows the notion of an inevitable tension between ethics and politics. In his famous essay “Politics as a Vocation,” Weber contrasted an “ethic of responsibility” with an “ethic of absolute ends.” By the former he meant the recognition that to make changes in the world one must take into account its imperfection and be prepared to use ethically questionable means, i.e. force, for all politics involves the use of force at some level. The proponent of absolute ends, on the other hand, refuses to separate means and ends: he may simply recommend exemplary conduct or he may argue that “from good comes only good, but from evil only evil follows.” Arthur Koestler’s dichotomy of the Yogi and the Commissar refers to the extreme versions of each ethic.
Bell quite illegitimately identifies the Communist, the fanatic, the totalitarian extremist with the ethic of absolute ends. In fact the Communist does not, like the saint, “live his end,” but is at the very opposite pole, completely dissociating means from ends, and ultimately, in his worship of the “organizational weapon” of the party, collapsing the latter into the former. Bell wants to reserve rational, expedient, “responsible” conduct for the politics of compromise and moderation which he favors, so he stands Weber’s distinction on its head and obscures the fundamental dilemma of means and ends in politics which Weber grasped more profoundly than anyone else, the dilemma, in Weber’s words, that “if one makes any concessions at all to the principle that the end justifies the means, it is not possible to bring an ethic of ultimate ends and an ethic of responsibility under one roof or to decree ethically which end should justify which means.” Koestler’s pseudo-tragic polarity is also a false one because, though a man acts by following an ethic of responsibility, unless he is a totalitarian, somewhere “he reaches the point where he says: ‘Here I stand; I can do no other.’ ”
Perhaps it is a measure of the ultimate limitations of centrism as a political philosophy that it so obscures the central, utterly irresolvable dilemma of politics. But this is not to deny the illumination it casts on particular issues and these essays amply attest to that.
In London, in the nineteen-thirties, the émigré Hungarian intellectual Karl Polanyi was known among his friends as “the apocalyptic chap.” His gloom was understandable. Nearly fifty, he’d had to leave his wife, daughter, and mother behind in Vienna shortly after Austria lurched toward fascism, in 1933. Although he had long edited and contributed to the prestigious Viennese weekly The Austrian Economist, which published such celebrated figures as Friedrich Hayek and Joseph Schumpeter, he had come to discount his career as a thing of “theoretical and practical barrenness,” and blamed himself for failing to diagnose his era’s crucial political conflict. As so often for refugees, money was tight. Despite letters of reference from eminent historians, Polanyi failed to land a professorship or a fellowship, though he did manage to earn thirty-seven pounds co-editing an anti-fascist anthology, which featured essays by W. H. Auden and Reinhold Niebuhr. In his own contribution to the book, he argued that fascism strips democratic politics away from human society so that “only economic life remains,” a skeleton without flesh.
In 1937, he taught in adult-education programs in Kent and Sussex, commuting by bus or train and spending the night at a student’s house if it got too late to return home. The subject was British economic history, which he hadn’t much studied before. As he learned how capitalism had challenged the political system of Great Britain, the first nation in the world to industrialize, he decided that it was no accident that fascism was infecting countries as disparate as Japan, Croatia, and Portugal. Fascism shouldn’t be “ascribed to local causes, national mentalities, or historical backgrounds,” he came to believe. It shouldn’t even be thought of as a political movement. It was, rather, an “ever-given political possibility”—a reflex that could occur in any polity experiencing a certain kind of pain. In Polanyi’s opinion, whenever the profit-making impulse becomes deadlocked with the need to shield people from its harmful side effects, voters are tempted by the “fascist solution”: reconcile profit and security by forfeiting civic freedom. The insight became the keystone of his masterpiece, “The Great Transformation,” which was published in 1944, as the world was coming to terms with the destruction that fascism had wrought.
Today, as in the nineteen-thirties, strongmen are ascendant worldwide, purging civil servants, subverting the judiciary, and bullying the press. In a sweeping, angry new book, “Can Democracy Survive Global Capitalism?” (Norton), the journalist, editor, and Brandeis professor Robert Kuttner champions Polanyi as a neglected prophet. Like Polanyi, he believes that free markets can be crueller than citizens will tolerate, inflicting a distress that he thinks is making us newly vulnerable to the fascist solution. In Kuttner’s description, however, today’s political impasse is different from that of the nineteen-thirties. It is being caused not by a stalemate between leftist governments and a reactionary business sector but by leftists in government who have reneged on their principles. Since the demise of the Soviet Union, Kuttner contends, America’s Democrats, Britain’s Labour Party, and many of Europe’s social democrats have consistently tacked rightward, relinquishing concern for ordinary workers and embracing the power of markets; they have sided with corporations and investors so many times that, by now, workers no longer feel represented by them. When strongmen arrived promising jobs and a shared sense of purpose, working-class voters were ready for the message.
Born in 1886 in Vienna, Karl Polanyi grew up in Budapest, in an assimilated, highly cultured Jewish family. Polanyi’s father, an engineer who became a railroad contractor, was so conscientious that when his business failed, around 1900, he repaid the shareholders, plunging the family into genteel poverty. Polanyi’s mother founded a women’s college, hosted a salon, and had a somewhat chaotic personality that a daughter-in-law once likened to “a book not yet written.” At home, as Gareth Dale recounts in a thoughtful 2016 biography, the family spoke German, French, and a little Hungarian; Karl also learned English, Latin, and Greek as a child. “I was taught tolerance not only by Goethe,” he later recalled, “but also, with seemingly mutually exclusive accents, by Dostoyevsky and John Stuart Mill.”
After university, Polanyi helped to found Hungary’s Radical Citizens’ Party, which called for land redistribution, free trade, and extended suffrage. But he remained enough of a traditionalist to enlist as a cavalry officer shortly after the First World War broke out. At the front, where, he said, “the Russian winter and the blackish steppe made me feel sick at heart,” he read “Hamlet” obsessively, and wrote letters home asking his family to send volumes of Marx, Flaubert, and Locke. After the war, the Radical Citizens took power, but they fumbled it. In the short-lived Communist government that followed, Polanyi was offered a position in the culture ministry by his friend György Lukács, later a celebrated Marxist literary critic.
When the Communists fell, pogroms broke out, and Polanyi fled to Vienna. “He looked like one who looks back on life, not forward to it,” Ilona Duczynska, who became his wife, remembered. Duczynska was a Communist engineer, ten years younger than he was. She had smuggled tsarist diamonds out of Russia in a tube of toothpaste and once borrowed a pistol to assassinate Hungary’s Prime Minister, though he resigned before she could shoot him. She and Polanyi married in 1923 and soon had a daughter.
These were the days of so-called Red Vienna, when the city’s socialist government was providing apartments for the working class and opening new libraries and kindergartens. Polanyi held informal seminars on socialist economics at home. He started writing for The Austrian Economist in 1924, and he was promoted to editor-in-chief a few months before the right-wing takeover sent him into exile. Duczynska remained in Vienna, going underground with a militia, but, in 1936, she, too, emigrated, taking a job as a cook in a London boarding house. In 1940, Bennington College offered Polanyi a lectureship, and he left for Vermont, where his family soon joined him and he began to turn his lecture notes into a book. “Not since 1920 did I have a time so rich in study and development,” he wrote.
Polanyi starts “The Great Transformation” by giving capitalism its due. For all but eighteen months of the century prior to the First World War, he writes, a web of international trade and investment kept peace among Europe’s great powers. Money crossed borders easily, thanks to the gold standard, a promise by each nation’s central bank to sell gold at a fixed price in its own currency. This both harmonized trade between countries and stabilized relative currency values. If a nation started to sell more goods than it bought, gold streamed in, expanding the money supply, heating up the economy, and raising prices high enough to discourage foreign buyers—at which point, in a correction so smooth it almost seemed natural, exports sank back down to pre-boom levels. The trouble was that the system could be gratuitously cruel. If a country went into a recession or its currency weakened, the only remedy was to attract foreign money by forcing prices down, cutting government spending, or raising interest rates—which, in effect, meant throwing people out of work. “No private suffering, no restriction of sovereignty, was deemed too great a sacrifice for the recovery of monetary integrity,” Polanyi wrote.
The system was sustainable politically only as long as those whose lives it ruined didn’t have a say. But, in the late nineteenth and early twentieth centuries, the right to vote spread. In the twenties and thirties, governments began trying to protect citizens’ jobs from shifts in international prices by raising tariffs, so that, in the system’s final years, it hardened national borders instead of opening them, and engendered what Polanyi called a “new crustacean type of nation,” which turned away from international trade, making first one world war, and then another, inevitable.
In Vienna, Polanyi had heard socialism dismissed as utopian, on the ground that no central authority could efficiently manage millions of different wishes, resources, and capabilities. In “The Great Transformation,” he swivelled this popgun around. What was utopian, he declared, was “the concept of a self-regulating market.” Human life wasn’t as orderly as mathematics, and only a goggle-eyed idealist would think it wise to lash people to a mechanism like the gold standard and then turn the crank. For most of human history, he observed, money and the exchange of goods had been embedded within culture, religion, and politics. The experiment of subordinating a nation to a self-adjusting market hadn’t even been attempted until Britain tried it, in the mid-eighteen-thirties, and that effort had required a great deal of coördination and behind-the-scenes management. “Laissez-faire,” Polanyi earnestly joked, “was planned.”
On the other hand, Polanyi believed that resistance to market forces, which he dubbed “the countermovement,” truly was spontaneous and ad hoc. He pointed to the motley of late-nineteenth-century measures—inspecting food and drink, subsidizing irrigation, regulating coal-mine ventilation, requiring vaccinations, protecting juvenile chimney sweeps, and so on—that were instituted to housebreak capitalism. Because such restraints went against the laws of supply and demand, they were despised by defenders of laissez-faire, who, Polanyi noticed, usually argued “that the incomplete application of its principles was the reason for every and any difficulty laid to its charge.” But what was the alternative? Once the laissez-faire machine started running, it cheerfully annihilated the people and the natural environment that it made use of, unless it was restrained.
Polanyi offered the example of the enclosure movement in sixteenth-century England, when landowners tore down villages and turned common lands into private pastures. The changes brought efficiencies that raised the land’s food yield as well as its value, in the long term improving life for everyone. Enclosure was a good thing, in other words; the numbers said so. In the short term, however, it dispossessed peasants who couldn’t immediately improvise a new living, and it was only because of a countermovement—led in piecemeal fashion by the monarchy, in a long, losing battle with Parliament—that more people didn’t die of exposure and starvation. If you argued that resistance did not compute, you would be right, but the countermovement, though it couldn’t stop progress, shielded people by slowing it down. It made enclosure so gradual that, even three centuries later, the poet John Clare was lamenting its advance in his sonnets.
In the nineteen-thirties, when Polanyi was first formulating his critique, the British economist John Maynard Keynes was likewise arguing that capitalist economies aren’t self-adjusting. The markets for labor, goods, and money, he showed, don’t find equilibriums independently but through interactions with one another that can have unfortunate, counterintuitive side effects. In hard times, economies tend to retrench, just when stimulus is most needed; the richer they get, the less likely they are to invest enough to sustain their wealth. During the Depression, Keynes made the case that governments should deficit-spend their way out of recessions. By the time Polanyi’s book was published, the Keynesian view had become orthodoxy. For the next few decades, the world’s leading economies were tightly managed by their governments. America’s top marginal tax rate stayed at ninety-one per cent until 1964, and anti-usury laws kept a ceiling on interest rates until the late seventies. The memory of the financial chaos of the thirties, and of the fascism that it gave rise to, was still vivid, and the Soviet Union loomed as an alternative, should the Western democracies fail to treat their workers well.
In terms of international monetary systems, too, Keynesianism held sway. In 1944, at the Bretton Woods Conference, Keynes helped to negotiate a way of harmonizing exchange rates that gave national governments enough elbow room to boost their domestic economies when necessary. Only America continued to redeem its currency with gold. Other nations pegged their currencies to the dollar (making it their reserve currency), but they were free to adjust their currencies’ values within limits when the need arose. Countries were allowed, and sometimes even required, to impose capital controls, measures that limited the cross-border flow of investment capital. With investors unable to yank money suddenly from one country to another, governments were free to spur growth with low interest rates and to spend on social programs without fear that inflation-averse capitalists would sell off their nations’ bonds. So weak was the political power of investors that France, Britain, and America let inflation shrink the value of their war debts considerably. In France, the economist Thomas Piketty has quipped, the period amounted to “capitalism without capitalists.”
The result—highly inconvenient for free-market fundamentalists—was prosperity. In the three decades following the Second World War, per-capita output grew faster in Western Europe and North America than ever before or since. There were no significant banking or financial crises. The real income of Europeans rose as much as it had in the previous hundred and fifty years, and American unemployment, which had ranged between fourteen and twenty-five per cent in the thirties, dropped to an average of 4.6 per cent in the fifties. The new wealth was widely shared, too; income inequality plummeted across the developed world. And with the plenty came calm. The economic historian Barry Eichengreen, in his new book, “The Populist Temptation” (Oxford), reports that in twenty advanced nations no populist leader—which he defines as a politician who is “anti-elite, authoritarian, and nativist”—took office during this golden era, and that a far narrower share of votes went to extremist parties than before or after.
“This was the road once taken,” Kuttner writes. “There was no economic need for a different one.” Nevertheless, we strayed—or, rather, in Kuttner’s telling, we were driven off the road after capitalists grabbed the steering wheel away from the Keynesians. The year 1973, in his opinion, marked “the end of the postwar social contract.” Politicians began snipping away restraints on investors and financiers, and the economy returned to spasming and sputtering. Between 1973 and 1992, per-capita income growth in the developed world fell to half of what it had been between 1950 and 1973. Income inequality rebounded. By 2010, the real median earnings of prime-age American workingmen were four per cent lower than they had been in 1970. American women’s earnings rose for a bit longer, as more women made their way into the workforce, but declined after 2000. And, as Polanyi would have predicted, faith in democracy slipped. Kuttner warns that support for right-wing extremists in Western Europe is even higher today than it was in the nineteen-thirties.
But was Keynesianism pushed, or did it stumble? Kuttner’s indignation about its fall from grace is more straightforward than the course of events that led to it. In the years following the Second World War, Europe was swimming with dollars, thanks to the Marshall Plan and American military aid to Europe. Beyond America’s jurisdiction, those dollars slipped free of its capital controls, and in the nineteen-sixties investors began to sling them from country to country as impetuously as in the days before Bretton Woods, punitively dumping the bonds of any government that tried to run an interest rate lower than those of its peers. The cost of the Vietnam War sparked inflation in America, and the dollar’s second life as the world’s reserve currency risked pushing the inflation even higher. When America fell into recession in 1970, the Federal Reserve tried to boost the country out of it by dropping interest rates, and America became a target of opportunity for speculators: capital fled the country, taking gold with it. By May, 1971, the United States was facing its first merchandise trade deficit since 1893, an indication that the high dollar was discouraging foreign buyers. Unwilling to pacify investors by inflicting austerity on voters, President Richard Nixon uncoupled the dollar from gold, ending the Bretton Woods agreement. Then, in October, 1973, Arab nations, upset about America’s solidarity with Israel during the Yom Kippur War, embargoed oil sales to the United States, and the price of crude nearly quadrupled in the space of three months. Food prices skyrocketed, and, as wallets were pinched, the country tumbled into another recession.
At this juncture, a new economic monster appeared: stagflation, a chimera of inflation, recession, and unemployment. Keynesian economists, who didn’t think that high unemployment and inflation could coëxist, were at a loss for how to handle it. The predicament provided an opening for their critics, most notably Milton Friedman, who argued that incessant government stimulation of the economy risked promoting not only inflation but the expectation of inflation, which could then spiral out of control. Friedman declared Keynesianism discredited and demanded that the government refrain from tampering with the economy, other than to manage the money supply.
In 1974, Alan Greenspan, President Gerald Ford’s economic adviser and an acolyte of Ayn Rand, likewise urged resisting political pressure to help the economy grow. “Inflation is our domestic public enemy No. 1,” Ford declared, and the Federal Reserve raised interest rates. Five years later, when a revolution in Iran set off a second spike in oil prices, a new round of inflation, and yet another recession, President Jimmy Carter’s Federal Reserve chair, Paul Volcker, raised interest rates again and again, to as high as twenty per cent. By 1982, America’s G.D.P. was shrinking 2.2 per cent a year, and unemployment was higher than it had been since the Great Depression. The nation had gone back to stabilizing its currency the old-fashioned way—by throwing people out of work—and utopian faith in self-regulating free markets had made a comeback. Kuttner thinks that this was a terrible mistake, arguing that the inflation of the seventies was limited to particular sectors of the economy such as food and oil. That sounds a little like special pleading. It’s not clear how Ford and Carter could have resisted the pressure they were under to find a new policy solution once it was clear that the old one wasn’t working.
In time, Keynesians adapted their models—one adjustment took into account Friedman’s discovery of the dangers posed by the expectation of inflation—and the resulting synthesis, New Keynesianism, is now canonical. Both the Bush and the Obama Administrations adopted Keynesian policies in response to the financial crisis of 2008. But when stagflation flummoxed the Keynesians it cost them their near-monopoly on political advice-giving, and laissez-faire was rereleased into the political sphere. In January, 1974, the United States removed constraints on sending capital abroad. A 1978 Supreme Court decision overturned most state laws against usury. By the early twenty-first century, Kuttner charges, every New Deal regulation on finance was either “repealed or weakened by non-enforcement.” Starting in the eighties, developing nations found free-market doctrine written into their loan agreements: bankers refused to extend credit unless the nations promised to lift capital controls, balance their budgets, limit taxes and social spending, and aim to sell more goods abroad—an uncanny replica of the austerity terms enforced under the gold standard. The set of policies became known as the Washington Consensus. The idea was pain in the short term for the sake of progress in the long term, but a 2011 meta-analysis was unable to find statistically significant evidence that the trade-off is worth it. Even if it is worth it, Polanyi would have recommended tempering the short-term pain. From 2010, when austerity measures were first imposed on Greece, to 2016, its G.D.P. declined 35.6 per cent, according to the World Bank. A federally appointed panel is now pushing for a similar approach in Puerto Rico.
There is no shortage of villains in Kuttner’s narrative: financial deregulation; supply-side tax cuts; the decline of trade unions; the Democratic Party, which, by zigging left on identity politics and zagging right on economics, left conservative white working-class voters amenable to Donald Trump. Perhaps the most vexed issue Kuttner discusses, however, is trade policy—whether American workers should be protected against cheap foreign goods and labor.
The contours of the problem call to mind Polanyi’s account of enclosures in early-modern England. Half an hour with a supply-and-demand graph shows that free trade is better for every nation, developed or developing, no matter how much an individual businessperson might wish for a special tariff to protect her line of work. In a 2012 survey, eighty-five per cent of economists agreed that, in the long run, the boons of free trade “are much larger than any effects on employment.” But although free trade benefits a country over all, it almost always benefits some citizens more than—and even at the expense of—others. The proportion of low-skilled labor in America is smaller than in most countries that trade with America; economic theory therefore predicts that international trade will, on aggregate, make low-skilled workers in the United States worse off. The U.S. government has, since 1962, compensated workers laid off because of free trade, but the benefit has never been adequate; only four people were certified to receive it during its first decade. In a 2016 paper, “The China Shock,” the economists David H. Autor, David Dorn, and Gordon H. Hanson wrote that, for every additional hundred dollars of Chinese goods imported to an area, a manufacturing worker is likely to lose fifty-five dollars of income, while gaining only six dollars in government help.
In a laissez-faire utopia, dislodged workers would relocate or take jobs in other industries, but workers hurt by rivalry with China are doing neither. Maybe they don’t have the resources to move; maybe the flood of Chinese-made goods is so extensive that there are no unaffected manufacturing sectors for them to switch into. The authors of “The China Shock” calculate that, between 1999 and 2011, trade with China destroyed between two million and 2.4 million American jobs; Kuttner quotes even higher estimates. NAFTA, meanwhile, lowered the wage growth of American high-school dropouts in affected industries by sixteen percentage points. In “Why Liberalism Failed” (Yale), the political scientist Patrick J. Deneen denounces the assumption that “increased purchasing power of cheap goods will compensate for the absence of economic security.”
Kuttner follows Polanyi in attacking free-market claims of mathematical purity. “Literally no nation has industrialized by relying on free markets,” he writes. In 1791, Alexander Hamilton recommended that America encourage new branches of manufacturing by taxing imports and subsidizing domestic production. Even Britain, the world’s first great champion of free trade, started off protectionist. Kuttner believes that America stopped supporting its manufacturing sector partly because it got into the habit, during the Cold War, of rewarding foreign allies with access to American consumers, and eventually decided that exports of financial services, rather than of manufactured goods, would be the country’s future. Toward the end of the century, as American manufacturers saw the writing on the wall, they shifted production abroad.
Kuttner doesn’t give a full hearing to the usual reply by defenders of laissez-faire, which is that a transition from goods to services is inevitable in a maturing economy—that the efficiency of American manufacturing means that it would likely be shedding workers no matter what the government did. Even Eichengreen, a critic of globalization, notes, in “The Populist Temptation,” that, if you graph the share of the German workforce employed in manufacturing from 1970 to 2012, you see a steady, grim decline very similar to that of its American counterpart, despite the fact that Germany has long spent heavily on apprenticeship and vocational training. The industrial revolution created widely shared wealth almost magically at its dawn: when an unemployed farmworker took a job in a factory, his power to make things multiplied, along with his earning power, without his having to learn much. But, as factories grew more efficient, fewer workers were needed to run them. One study has attributed eighty-seven per cent of lost manufacturing jobs to improved productivity.
When a worker leaves a factory, her power to create wealth stops being multiplied. The only way to increase it again is through education—by teaching her to become a sommelier, say, or an anesthesiologist. But efficiency gains are notoriously harder to come by in service industries than in manufacturing ones. There are only so many leashes a dog walker can hold at one time. As a result, if an economy deindustrializes without securing a stable manufacturing core, its productivity may erode. The dynamic has caused stagnation in Latin America and sub-Saharan Africa, and there are signs of a comparable weakening of America’s earning power.
Meanwhile, in the factories that remain, machines have grown more complex; the few workers they employ need to be better educated, further widening the gap between educated and uneducated workers. Kuttner dismisses this labor-skills explanation for job loss as an “alibi” with “an insulting subtext”: “If your economic life has gone to hell, it’s your fault.” This is intemperate but, in Kuttner’s defense, he has been warning American politicians to protect manufacturing jobs since 1991, and has been enlisting Polanyi in the cause for at least as long. Moreover, he has a point: to talk about productivity-induced job loss when challenged to explain trade-induced job loss is to change the subject. Economists estimate that advances in automation explain only thirty to forty per cent of the premium that a college degree now adds to wages. And though Eichengreen is right about manufacturing’s declining share of the German workforce, it still stood at twenty per cent in 2012, which is roughly where the American share stood three decades earlier, and the German decline has been less steep. Somehow, Germany’s concern for its manufacturing workforce made a difference.
In any case, if one’s concern is populism, it may not matter whether jobs have been lost to trade competition or to automation. In areas where more industrial robots have been introduced, one analysis shows, voters were more likely to choose Trump in 2016. According to another analysis, if competition with Chinese imports had been somehow halved, Michigan, Wisconsin, and Pennsylvania would likely have chosen Hillary Clinton that year. Economic explanations like these have been challenged. In April, the political scientist Diana C. Mutz published a paper finding that Trump voters were no more likely than Clinton ones to have suffered a personal financial setback; she concluded that Trump’s victory was more likely caused by white anxiety about loss of status and social dominance. But it’s not surprising that Trump voters weren’t basing their decisions on their personal circumstances, because voters almost never do. And Mutz’s own results showed that the factors most likely to lead to a Trump vote included pessimism about the economy and preferring Trump’s position on China to Clinton’s. It may not be possible to untangle economic anxiety and a more tribal mind-set.
Casting about for a Polanyi-style countermovement to temper the ruthlessness of laissez-faire, Kuttner doesn’t rule out tariffs. They’re economically inefficient, but so are unions, and, for a follower of Polanyi, efficiency isn’t the only consideration. A decision about a nation’s economic life, the Harvard economist Dani Rodrik writes, in “Straight Talk on Trade” (Princeton), “may entail trading off competing social objectives—such as stability versus innovation—or making distributional choices”; that is, deciding who gains at whose expense. Such a decision should therefore be made by elected politicians rather than by economists. America imposed export quotas on Japan in the seventies and eighties, to the alarm of headline writers at the time: “Protectionist Threat,” the Times warned. But Rodrik, looking back, judges the measures to have been reasonable ad-hoc defenses—“necessary responses to the distributional and adjustment challenges posed by the emergence of new trade relationships.”
Trump’s chief trade negotiator served on the Reagan team that administered quotas against Japan. A similar approach today, however, seems unlikely to work on China, whose economy is much more messily enmeshed with America’s. You probably can’t name as many Chinese brands as Japanese ones, even though you probably buy more Chinese-made products, because they are sold to Americans by American companies. American workers may wish they had been shielded from the effects of trade with China, but American businesses, by and large, don’t. Perhaps that’s why Trump has escalated from a tariff on steel and aluminum to erratic threats of a trade war. To achieve his campaign goal of bringing manufacturing jobs home from China, he will have to not only impose tariffs but also convince multinationals that the tariffs will stay in place beyond the end of his Administration. Only then will executives calculate that they can’t just wait it out—that they have no choice but to incur the enormous costs and capital losses of abandoning investments in China and making new ones here. It’s hard to imagine such a scheme working, unless Trump establishes a political command over the private sector not seen in America since the forties. That can’t be ruled out, given the state of affairs in Russia, China, Hungary, and Turkey, but it seems more likely that Trump’s bluster will merely motivate businesses to be deferential to him, in pursuit of favorable treatment.
“Basically there are two solutions,” Polanyi wrote in 1935. “The extension of the democratic principle from politics to economics, or the abolition of the democratic ‘political sphere’ altogether.” In other words, socialism or fascism. The choice may not be so stark, however. During America’s golden age of full employment, the economy came, in structural terms, as close as it ever has to socialism, but it remained capitalist at its core, despite the government’s restraining hand. The result was that workers shared directly in the country’s growing wealth, whereas today proposals for fostering greater financial equality hinge on taxing winners in order to fund programs that compensate losers. Such redistributive measures, Kuttner observes, are only “second bests.” They don’t do much for social cohesion: winners resent the loss of earnings; losers, the loss of dignity.
Can we return to an equality in workers’ primary incomes rather than to one brought about by secondary redistribution? In a recent essay for the journal Democracy, the Roosevelt Institute fellow Jennifer Harris recommends reimagining international trade as an engine for this rather than as an obstacle to it. When negotiating trade deals, for instance, governments could make going to bat for multinationals conditional on their agreeing to, say, pay their workers a higher fraction of what they pay executives.
Failing that, we’d be better off with redistributive programs that are universal—parental leave, national health care—rather than targeted. Benefits available to everyone help people without making them feel like charity cases. Kuttner reports great things from Scandinavia, where governments support workers directly—through wage subsidies, retraining sabbaticals, and temporary public jobs—rather than by constraining employers’ power to fire people. “We won’t protect jobs,” Sweden’s labor minister recently told the Times. “But we will protect workers.” Income inequality in Scandinavia is lower than here, and a larger proportion of citizens work. Maybe a government can insure higher pay for its workers by treating them as if they were, in and of themselves, valuable. True, Denmark’s spending on its labor policies has at times risen to as high as 4.5 per cent of its G.D.P., more than the share America spends on defense, and studies show that diverse countries such as ours find it harder to muster social altruism than more racially and culturally homogenous ones do. Nonetheless, programs like Social Security and Medicare, instituted when a communitarian ethic was still strong in American politics, remain popular. Why not try for more? It might make sense even if the numbers don’t add up. ♦
In the early years of the American republic, James Madison warned his fellow countrymen that their chosen system of governance would only survive if they adhered to the principles of representation and kept factionalism in check. In the era of Donald Trump, it would seem that these two conditions are no longer being met.
BERKELEY – From the very beginning of the American experiment, Alexander Hamilton, one of the new country’s founders, had serious doubts about democracy. “It is impossible to read the history of the petty republics of Greece and Italy without feeling sensations of horror and disgust at the … state of perpetual vibration between the extremes of tyranny and anarchy,” he wrote in The Federalist Papers No. 9.
But Hamilton went on to praise such principles as, “The regular distribution of power into distinct departments; the introduction of legislative balances and checks; the institution of courts composed of judges holding their offices during good behavior; the representation of the people in the legislature.” These, he wrote, “are means, and powerful means, by which the excellences of republican government may be retained and its imperfections lessened or avoided.”
And yet those improvements in the “science of politics” that Hamilton identified could apply just as well to monarchies as to republics, and in fact emerged from monarchies. The Plantagenet kings who ruled England between the twelfth and fifteenth centuries professionalized the judiciary, and established the precedent of securing parliamentary consent before levying taxes. Likewise, the professional bureaucracy and distribution of power that one would expect to find in a republic were also enshrined in the Council of the Indies and the Council of Castile under the sixteenth-century Spanish monarch Philip II.
If Hamilton’s favored political institutions had just as much potential to improve monarchy as to improve republicanism, then why did he have so much confidence in the latter form of governance? He never addressed that question, but another founder, James Madison, devoted considerable attention to it.
Judging by his contributions to The Federalist Papers, Madison’s position revolved around two core ideas: “representation,” which he welcomed; and “faction,” which he warned against. With respect to representation, Madison surmised that, “The public voice, pronounced by the representatives of the people, will be more consonant to the public good than if pronounced by the people themselves.”
Madison expected elected representatives to look outward, assessing the people’s interests and drawing on their knowledge and ideas. But he also hoped that elected officials would look inward, to the government and to one another, to ensure that policies were well crafted. Through prudent representation, a republican form of government can enjoy the advantages of professionalization and expertise, as well as new ideas from society, as it pursues the public interest.
At the same time, Madison stressed the importance of avoiding factionalism, which he defined as, “some common impulse of passion, or of interest, adverse to the rights of other citizens, or to the permanent and aggregate interests of the community.” A monarchy or aristocracy, of course, is nothing but a faction – one that is firmly in control and under little pressure to work for the public interest or consider new ideas. But in a republic, Madison observed, a faction could rule only if it commanded an electoral majority. That is why, when “you take in a greater variety of parties and interests,” he wrote, “you make it less probable that a majority of the whole will have a common motive to invade the rights of other citizens.”
The problem, of course, is that majorities with a malign “common motive” emerge nonetheless. That is how the US got the near-century-long period of “Jim Crow” racial persecution following the Civil War, the herding of Japanese-Americans into concentration camps during World War II, and other shameful episodes.
Or consider what today we would call the ethnic cleansing of Cherokee land in the early nineteenth century – an act of state-sanctioned forced migration known as the “Trail of Tears.” When the US Supreme Court ruled in 1832 that the Cherokee were in fact a sovereign nation, then-President Andrew Jackson simply ignored it. “The decision of the Supreme Court has fell still born,” he told Brigadier General John Coffee, and “cannot coerce Georgia to yield to its mandate.”
Jackson thus rejected a decision handed down by what Hamilton would call “judges holding their offices during good behavior.” In doing so, he confirmed Madison’s fear that if bureaucracy, established procedure, and deliberation cannot transcend the passions of a majority faction, then there can be no “republican remedy for the diseases most incident to republican government.”
Meanwhile, it has been more than a century since the constitutional and semi-constitutional monarchies of Europe faced their own political crises. In the event, they did not move toward centralized socialist dictatorships or strongman plebiscitary ethnocracies, but rather toward representative parliamentary democracy.
The American experiment has not yet reached a point of existential crisis. But there can be little doubt that the US in the Trump era is experiencing the problems that Madison foresaw when he warned that “enlightened statesmen” capable of making “clashing interests … subservient to the public good … will not always be at the helm.”
The two primary advantages of republican democracy that Madison identified – prudent, informed representation and the transcendence of factionalism – seem to have gone missing. For republican democracy to remain the best form of government, they will need to be rediscovered. – J. Bradford DeLong
“Perpetual peace… is nothing more than an idea, an end which, like the “greatest good” or “absolute knowledge,” can never be achieved, but that serves as the constant object of our activity. The moral law demands that we strive for the establishment of justice in the relation among civil states through the creation of a liberal world order. But, as Kant writes, there are empirical causes that disrupt this task—the competition of states, the ebbing and flowing of empires, the spirit of acquisitiveness, and so on. Try as the spirit may, the body will subvert it, and so the idea in its purity can never become an actuality, just as the form of a universal concept is always corrupted by the particularity of its material instance.”–Andrew Beddow
“Man is essentially prone to evil inclinations.” –Immanuel Kant
Perhaps more than any other philosopher, Immanuel Kant has suffered the praise of having been labeled an historical optimist. Kant’s writings on international affairs have been misconstrued as a defense of both the desirability and the inevitability of a world federation of democracies bound by law. This rendering of Kant has been aided by selective attention to a few key works and a cultivated ignorance of his more pessimistic remarks on the human condition elsewhere. The truth is that Kant’s theory of international affairs is complicated, eluding clean classification as “realist” or “liberal,” but altogether gloomier than is commonly thought. Although the optimist may dream of perpetual peace, no such harmony can be hewn from “the crooked timber of humanity.” Kant’s view of history is in fact quite pessimistic, and his prescriptions for politics are in many ways consistent with the realist paradigm. Mankind faces a Sisyphean task: an unending struggle between the universal and particular, harmony and multiplicity, sympathy and antagonism.
The case for Kant-as-liberal-optimist is straightforward: in Perpetual Peace and the Doctrine of Right, he speaks of the condition of injustice in which states find themselves, arranged in an international anarchy with no higher authority. Like the state of nature among individuals, states are subjected to the unilateral whims of their potential adversaries, a condition formally incompatible with the duty to respect each as an end-in-itself. Just as men must overcome this anarchic condition of injustice by establishing a civil state, states must institute an international legal order in the form of a federation of states submitting to a common adjudicative authority. Only then can coercion become regulated in the international sphere and represent the omnilateral will of the human race, just as the state represents the will of its people.
If only states were to submit to this regime, war might be done away with. This, however, is possible only in a federation of republics, since they alone will be constituted in a way to abolish the motive for war. Here is the origin of the “democratic peace” doctrine Kant purportedly advanced: for any state with a democratic constitution, “the consent of the citizens . . . is required to determine . . . ‘Whether there shall be war or not?’ Hence, nothing is more natural than that they should be very loth to enter upon so undesirable an undertaking, for . . . they would necessarily be resolving to bring upon themselves all the horrors of War.” Only a democratic world order, in which each state’s population internalizes the costs of its own behavior, can organize itself into a liberal world order in which states are regulated by law.
And yet Kant is elsewhere less sanguine about the prospects for peace. He claims in the Doctrine of Right that “perpetual peace, the ultimate goal of the whole right of nations, is indeed an unachievable idea”—“idea,” as we will soon see, being the operative word. He even characterizes the history of the human race as afflicted by an unavoidable antagonism which cannot—and indeed should not—be cured. Is Kant simply an incoherent thinker, at times a hopeful liberal, elsewhere a pessimist?
We must first understand Kant’s conception of humanity as participating in two worlds, the sensuous and intelligible. Man is at once rational and free, therefore subject to laws of reason and accountable for his actions, and also prone to sensual inclinations, therefore liable to disobey his rational duties when given over to temptation. To say that man ought to act in some way is not to say that he will, but only that, from the practical point of view of free choice, he can and should. A purely rational being, a Holy Will, devoid of sensuous content, could only ever affirm the laws of reason. Animals, on the other hand, obey their instincts and are incapable of freedom and, for this reason, morality also. Man alone is capable of both recognizing the moral law and choosing to disobey it, and it is for this reason that Kant claims that people are by nature evil.
Whether by weakness of will, impurity of motive, or the radical evil of freely choosing immorality, man lives in a fallen condition, inflicting himself on humanity with a cruelty unseen even in beasts. As evidence, Kant offers “international affairs where civilized nations stand vis-à-vis one another in a relationship of a raw state of nature (a continuing state of war), and are firmly resolved never to depart from that . . . This is so much the case that the philosophical chiliasm, who hoped for a state of perpetual peace based on a federation of nations as a world republic, was ridiculed as mentally raptured.” Concupiscence, the inclination toward sin that rests at the base of man’s soul, is incurable, playing itself out time and again in the bloody annals of history. Kant’s judgment is fatalistic, for “as long as a state has another adjacent state which it might hope to subdue, it strives to enlarge itself through the subjugation of this other, and therefore also to make itself a universal monarchy . . . But this monster . . . after having swallowed all neighboring states, finally dissolves of itself and separates through uproar and discord into many smaller states which, instead of striving toward a union of states (a republic of free, allied peoples), simply and of itself starts the process all over again in order never to cease the war.”
What are we to make of Kant’s simultaneous injunction to establish liberal order and his pessimism about its prospects? At times it seems as though Kant’s views, especially in Perpetual Peace and the Doctrine of Right, entertain the genuine possibility of perpetual peace through democratization. But this is mistaken, resting on a misunderstanding of Kant’s notion of republican government, a condition that is, like perpetual peace, an unachievable idea. Where the Kantian optimists go wrong is in conflating the regulative with the constitutive.
In his Critique of Pure Reason, Kant distinguishes between the regulative and constitutive use of ideas. The latter, he claims, is the mistake of all systems of dogmatic metaphysics—to take the principles of reason as actually constituting objects of the world of appearances apart from one’s own judgment. Space and time are constitutive categories, but ideas are not—phenomena are ordered according to these categories, and every object given in intuition conforms to them a priori. Regulative ideas, by contrast, are merely principles of judgment that, although not formative elements of the world of appearances, must nonetheless be presupposed to render man’s activities coherent. The unity of nature is one such regulative principle—we cannot know as a matter of sense certainty that all sensible particulars are integrated in a lawful whole, but such a unity must be assumed in order to make scientific inquiry possible.
There are practical regulative ideas as well, the presupposition of which is necessary for action, but which are not to be thought constitutive of reality. The existence of God and the immortality of the soul are such ideas, which must be assumed in order to regulate our actions but for which we have no theoretical proof. When Kant speaks of an idea, he is speaking in this sense: a formal principle exceeding all bounds of experience, that is not an empirical law governing the world of things, but is instead a necessary postulate to guide our activities. The idea is not an object of possible knowledge, but a necessary assumption that renders knowledge (or, in this case, action) coherent.
So what of the idea of perpetual peace and the prospects of a liberal-democratic world order? Kant writes:
Now morally practical reason pronounces in us its irresistible veto: there is to be no war . . . So the question is no longer whether perpetual peace is something real or a fiction, and whether we are not deceiving ourselves in our theoretical judgment when we assume that it is real. Instead, we must act as if it is something real, though perhaps it is not; we must work toward establishing perpetual peace and the kind of constitution that seems to us most conducive to it . . . And even if the complete realization of this objective always remains a pious wish, still we are certainly not deceiving ourselves in adopting the maxim of working incessantly towards it.
Perpetual peace, as Kant earlier stated, is nothing more than an idea, an end which, like the “greatest good” or “absolute knowledge,” can never be achieved, but that serves as the constant object of our activity. The moral law demands that we strive for the establishment of justice in the relation among civil states through the creation of a liberal world order. But, as Kant writes, there are empirical causes that disrupt this task—the competition of states, the ebbing and flowing of empires, the spirit of acquisitiveness, and so on. Try as the spirit may, the body will subvert it, and so the idea in its purity can never become an actuality, just as the form of a universal concept is always corrupted by the particularity of its material instance.
But what of democratic peace? Can the procedures of democratic decision making offer a constitutional solution to the problem of war? This, too, seems to misunderstand Kant’s thesis. Although Kant does raise the hope that various political developments may abolish war, that is, by his own admission, “only opinion and mere hypothesis, and it is uncertain, like all theories which aim at stating the only suitable natural cause for a proposed effect that is not wholly in our own power.” Kant is, at most, a skeptical proponent. His stronger claim, that republics cannot wage aggressive war, is not so much a claim about the empirical proclivities of democracies as it is a claim that the concept of a republican government excludes the motives of aggressive war from political decision making. Kant’s conception of a republic, a Rechtsstaat, is not essentially democratic, but law-ordered, such that it only ever acts according to the public will. In such a state, in which the concept of law alone determines policy, there could never exist grounds for an aggressive war, since such a war would only be fought on the basis of sensuous inclinations not considered by a pure system of law.
Thus the idea of perpetual peace is simply a consequence of the idea of a republican constitution considered in connection with a world of other republican constitutions. In such a world, there is no motive for war, as each state aims only at the law. Yet, as soon as one state gives over to temptation, this strict compliance is threatened and competition resurges. Such a risk is an unavoidable stain upon the human condition, as Kant remarks that man is essentially prone to evil inclinations. The drive to sin is ineradicable, so we are left with the conclusion that “Human Nature appears nowhere less amiable than in the relation of whole nations to each other,” and, for this reason, “No State is for a moment secure against another . . . the will to subdue each other or to reduce their power, is always rampant.”
The alternative to a world federation is a balance of power, but this, too, is a “mere chimera” prone to instability at the slightest change. The balance of power is the only hope for peace, and, Kant claims, states are entitled to wage war for its sake, but this is an ephemeral hope. History will be characterized not by a final stage of cooperation but by a succession of violent convulsions, the rise and fall of empires. We are left, at best, with a cynical liberalism, one steeped in an appreciation of Realpolitik.
Is this not cause for despair? Far from it—Kant tells us to appreciate the struggle, to assume a cosmopolitan view of history from which the seeming chaos takes on new meaning. The sublime nature of the human condition is not a static equilibrium, but a primordial discord of the soul, in the words of Faulkner, “the human heart in conflict with itself.” All great history is the unfolding of the spirit through its particular forms, a flux that must be seen as ordered by the light of universal law, and a universal that must be viewed as manifested in its particular flawed instances. Kant remarks that Nature employs this strife in order to draw out the capacities of man, raising him above the dumb animals by his being subjected to the struggle, from which he develops Kultur. Just as the pure form or concept—a universal beyond the world of appearance—must manifest itself in the multiplicity of particular objects to become actual, so too must the human spirit be instantiated in a rich diversity of civilizations in order to know itself as law. There is no tree apart from the plurality of existing trees, and yet they achieve their potential and grow taller only out of mutual competition for sunlight. So too does mankind perfect itself through antagonism, through “unsocial sociability.”
Within the movements of history, men act according to their own plans, without any grand design directing them, entering into conflict with one another. But, from the cosmopolitan perspective, seeming order emerges, or, at least, we become able to conceive of the fluctuations of the development of the human race in terms of such an orderly process. The ideal reconstruction of the instants of human development in terms of historical science involves, at each stage, rendering the individual act something more, a participation in a historical moment, reflecting the condition of spirit.
It would be the height of hubris to assume such a point of view in action, to take oneself to be the world-historical figure carrying humanity from one stage of development to the next. For, again, to do so would be to mistake the regulative for the constitutive, to take the idea to be operative in the world like the hand of providence. This is the failing of those who assume the end of history is in sight, and, worse, those who mistakenly assume that such an end can be brought about, not as an abstract goal of the human race, but as the project of some determinate generation. Perpetual peace is not an apparent reality, but an idea through which the developmental stages of humanity become synthetically unified in the history of the human race.
Far from a naïve Whig history, hurtling deterministically toward liberal utopia, Kant's Perpetual Peace is but the form of history, the regulative idea that stands at the end of historical judgment and guides political practice. He is as much a skeptic as a rationalist, for whom the pure idea becomes real only in the endless unfolding of its particulars. Harmony is manifested only in its diversity, driven by strife pregnant with cultural significance. Thus, in Kant, we find not the philosopher of the end of history, but of history without end.
Pictured: Past Secretaries of State Clinton, Albright, Kissinger, Kerry, Baker and Powell.
The Singapore summit is indeed historic, first because just a few weeks ago we were closer to a nuclear war than to even the semblance of a peace process. The way we got here is surprising, because it did not obey the usual rules.
A few days ago, during the G7 summit held in Canada, US President Donald Trump stood by his decisions on tariffs and his positions on the trade deficit. These stances followed his decisions to pull out of the Paris climate change agreement and the Iranian "nuclear deal." It is clear that the new US administration has challenged the alliances inherited from the Cold War. President Trump, a businessman rather than a politician (one of the reasons he was elected), is simply asking America's trading partners for "free, fair and reciprocal" agreements. It is probably not unusual to feel affronted when asked for money, or to regard the person asking for it as mercenary or adversarial. That does not mean the feeling is always justified.
Pictured: Donald Trump and other heads of state deliberate at the G7 summit on June 9, 2018 in Charlevoix, Canada. (Photo by Jesco Denzel /Bundesregierung via Getty Images)
In short, President Trump's arguments, which recur like a leitmotif, come back to the economics of each issue. NATO? Why should it be taken as normal that, in order to defend Europe, the American taxpayer bears the heaviest share of the cost? Free trade? Why should America run a trade deficit with so many countries? Climate change? The results of the Paris climate change conference, COP 21, were apparently not only costly but questionable, and to critics looked like a list of unenforceable promises that would not come due until 2030, if ever.
A new paradigm is taking shape on the international scene: for the first time, US domestic policy is to prevail over the country's so-called "strategic" role, sometimes possibly to the detriment of allies.
"The unspoken objective is to constrain the U.S., and to transfer authority from national governments to international bodies. The specifics of each case differ, but the common theme is diminished American sovereignty, submitting the United States to authorities that ignore, outvote or frustrate its priorities…. While many European Union governments seem predisposed to relinquish sovereignty, there is scant hint of similar enthusiasm in America…. By reasserting their sovereignty, the British are in the process of escaping, among other things, the European Court of Justice and the European Court of Human Rights." — Ambassador John R. Bolton, Wall Street Journal, March 7, 2017.
Unfortunately, Europe is the first to suffer from this new reality. But is the European Union able to mount a showdown? Probably not. The populist wave flooding the EU countries is primarily the result of the social impact of the fiscal policy imposed by Germany. While the US is effectively at full employment, Europe's sluggish growth has had a near-zero effect on its unemployment rate. With 27 members, the rule of "one country, one vote," and a possibly outdated view of how to incentivize growth and finance pensions, Europe has been slowing even the possibility of progress on issues such as immigration or common defense. Europe is fractured, all the more so because no solution seems to be on the horizon.
The group called the European Union does not weigh much against the forced march of Donald Trump. In international relations, the US President believes only in bilateral agreements. The extraterritorial reach of US sanctions has taken the agreement with Iran out of the equation; the big French and German companies have already withdrawn from it.
Diplomacy is changing before our eyes. “The Western camp,” it seems, is becoming nothing more than a specter that does not rest on any on-the-ground reality.
Inevitably, each power will have to adapt, according to its own interests. As Europeans continue to cast their votes, these adjustments may, in turn, feed current divisions even more.
Ahmed Charai is a Moroccan publisher. He serves on the board of directors of the Atlantic Council, is an international counselor of the Center for Strategic and International Studies, and is a member of the advisory boards of the Center for the National Interest in Washington and the Gatestone Institute in New York.
In the middle of the twentieth century, people feared that advances in computers and communications would lead to the type of centralized control depicted in George Orwell’s 1984. Today, billions of people have eagerly put Big Brother in their pockets.
CAMBRIDGE – It is frequently said that we are experiencing an information revolution. But what does that mean, and where is the revolution taking us?
Information revolutions are not new. In 1439, Johannes Gutenberg’s printing press launched the era of mass communication. Our current revolution, which began in Silicon Valley in the 1960s, is bound up with Moore’s Law: the number of transistors on a computer chip doubles every couple of years.
By the beginning of the twenty-first century, computing power cost one-thousandth of what it did in the early 1970s. Now the Internet connects almost everything. In mid-1993, there were about 130 websites in the world; by 2000, that number had surpassed 15 million. Today, more than 3.5 billion people are online; experts project that, by 2020, the “Internet of Things” will connect 20 billion devices. Our information revolution is still in its infancy.
The key characteristic of the current revolution is not the speed of communications; instantaneous communication by telegraph dates back to the mid-nineteenth century. The crucial change is the enormous reduction in the cost of transmitting and storing information. If the price of an automobile had declined as rapidly as the price of computing power, one could buy a car today for the same price as a cheap lunch. When a technology’s price declines so rapidly, it becomes widely accessible, and barriers to entry fall. For all practical purposes, the amount of information that can be transmitted worldwide is virtually infinite.
The cost of information storage has also declined dramatically, enabling our current era of big data. Information that once would fill a warehouse now fits in your shirt pocket.
In the middle of the twentieth century, people feared that the computers and communications of the current information revolution would lead to the type of centralized control depicted in George Orwell’s dystopian novel 1984. Big Brother would monitor us from a central computer, making individual autonomy meaningless.
Instead, as the cost of computing power has decreased and computers have shrunk to the size of smart phones, watches, and other portable devices, their decentralizing effects have complemented their centralizing effects, enabling peer-to-peer communication and mobilization of new groups. Yet, ironically, this technological trend has also decentralized surveillance: billions of people nowadays voluntarily carry a tracking device that continually violates their privacy as it searches for cell towers. We have put Big Brother in our pockets.
Likewise, ubiquitous social media generate new transnational groups, but also create opportunities for manipulation by governments and others. Facebook connects more than two billion people, and, as Russian meddling in the 2016 US presidential election showed, these connections and groups can be exploited for political ends. Europe has tried to establish rules for privacy protection with its new General Data Protection Regulation, but its success is still uncertain. In the meantime, China is combining surveillance with the development of social credit rankings that will restrict personal freedoms such as travel.
Information provides power, and more people have access to more information than ever before, for good and for ill. That power can be used not only by governments, but also by non-state actors ranging from large corporations and non-profit organizations to criminals, terrorists, and informal ad hoc groups.
This does not mean the end of the nation-state. Governments remain the most powerful actors on the global stage; but the stage has become more crowded, and many of the new players can compete effectively in the realm of soft power. A powerful navy is important in controlling sea-lanes; but it does not provide much help on the Internet. In nineteenth-century Europe, the mark of a great power was its ability to prevail in war, but, as the American analyst John Arquilla has pointed out, in today’s global information age, victory often depends not on whose army wins, but on whose story wins.
Public diplomacy and the power to attract and persuade become increasingly important, but public diplomacy is changing. Long gone are the days when foreign service officers carted film projectors to the hinterlands to show movies to isolated audiences, or people behind the Iron Curtain huddled over short-wave radios to listen to the BBC. Technological advances have led to an explosion of information, and that has produced a “paradox of plenty”: an abundance of information leads to scarcity of attention.
When people are overwhelmed by the volume of information confronting them, it is hard to know what to focus on. Attention, not information, becomes the scarce resource. The soft power of attraction becomes an even more vital power resource than in the past, but so does the hard, sharp power of information warfare. And as reputation becomes more vital, political struggles over the creation and destruction of credibility multiply. Information that appears to be propaganda may not only be scorned, but may also prove counterproductive if it undermines a country’s reputation for credibility.
During the Iraq War, for example, the treatment of prisoners at Abu Ghraib and Guantanamo Bay in a manner inconsistent with America’s declared values led to perceptions of hypocrisy that could not be reversed by broadcasting images of Muslims living well in America. Similarly, President Donald Trump’s tweets that prove to be demonstrably false undercut American credibility and reduce its soft power.
The effectiveness of public diplomacy is judged by the number of minds changed (as measured by interviews or polls), not dollars spent. It is worth noting that polls and the Portland Soft Power 30 index show a decline in American soft power since the beginning of the Trump administration. Tweets can help to set the global agenda, but they do not produce soft power if they are not credible.
Now the rapidly advancing technology of artificial intelligence or machine learning is accelerating all of these processes. Robotic messages are often difficult to detect. But it remains to be seen whether credibility and a compelling narrative can be fully automated.
Joseph S. Nye, Jr., a former US assistant secretary of defense and chairman of the US National Intelligence Council, is University Professor at Harvard University. He is the author of Is the American Century Over?