As ex-Prime Minister, he has a duty to offer a solution on Brexit, but lacks the guts
“David Cameron is a former PM. He not only has the right to offer his solution but a duty. If he is to earn the right to a hearing, however, he must first find not only self-knowledge and courage, but an un-English seriousness of purpose he has evaded all his life.”–Nick Cohen
John Major, Tony Blair and Gordon Brown have warned of the dangers of Brexit. But where is the former Prime Minister who called the referendum that will blight Britain for as far ahead as anyone can see? Whatever happened to that likely lad? David Cameron doesn’t want to talk about it, one of his friends tells me. “He doesn’t defend the referendum, but won’t say he made a mistake either. Europe is like a family scandal. We know what’s happened but we don’t say a word: it’s his no-go zone.”
At a personal level, the consequences swirl around him. I may be exhausting your capacity for compassion but the smallest of the casualties of Brexit has been the good fellowship of the Chipping Norton set. Naturally, the Cotswolds’ wealthy Leavers are grateful. But Cameron must resent them. He must know that he has been the useful idiot who succumbed to the demands of Rupert Murdoch’s Rebekah Brooks, a member of the local nouveau gentry by virtue of her converted barn, in the crashingly stupid belief that no harm would come from his surrender.
Invitations to “kitchen suppers” from Remainers, however, can only include Samantha Cameron’s name – if they are extended at all. Tania Rotherwick invited the Camerons to her pool at the magnificent Cornbury Park estate before she split from her husband and Cameron split Britain from Europe. She is now particularly contemptuous, I hear.
Cameron’s memoirs were meant to be published this month but have been delayed until next year. The early signs are ominous. A book has to be coherent if it is to find a readership: its opening must prefigure its conclusion. As described in the publishing press, Cameron’s effort will have no consistency. He will tell the story of the formation of the coalition, his contributions to economic, welfare and foreign policy, his surprise victory in the 2015 election and then – as if from nowhere – the conventional memoir will end with the author carelessly deciding he will settle the European question, without planning a campaign or preparing an argument and, instead, launching a crisis that will last for decades. Nothing will make sense. Nothing will hang together. It’s as if a romcom were to conclude with serial killers murdering the cooing lovers or Hilary Mantel were to have aliens invade Tudor England on the last page of her Thomas Cromwell trilogy.
The book Cameron cannot write would accept that his political battles and achievements were as nothing when set against his decision to appeal to the worst of the Tory party. It would begin with Cameron honouring the decision that won him the Conservative leadership in 2005. He would confess that he should have known better than to pull the Conservatives out of the centre-right group in the European parliament and align them with Law and Justice, the know-nothing Polish nationalists who are reducing their country to an ill-governed autocracy. The manoeuvre was pure Cameron: tactics above strategy; appeasement instead of confrontation.
The pattern continued throughout his premiership. He thought he could buy off the right by refusing to explain the benefits of EU membership to the voters. At one point in 2014 he threatened to leave the EU. He then turned around in 2016 and asked the public to believe that leaving would be a disaster and was surprised when 17.4 million men and women he had never treated as adults worthy of inclusion in a serious conversation ignored him.
If he were being honest, Cameron would admit too that Brexit ought to bring an end to a British or, to be specific, English style that is by no means confined to the upper class, but was everywhere present among the public-school boys who ruled us.
I mean the ironic style that gives us our famously impenetrable sense of humour (which we will need now the rest of the world is laughing at us). The perfidious style that allows us to hide behind masks and has made England superb at producing brilliant actors for the West End but hopeless at producing practical politicians for Westminster. The teasing style of speaking in codes that benighted foreigners can never understand, however well they speak English. The cliquey style that treats England as a club, not a country, and allowed Jeremy Corbyn to say that Jews cannot “understand English irony”, however long their ancestors have lived here.
The deferential style that allowed one Etonian to lead the Remain campaign and another to lead the Leave campaign and for the English to not even see why that was wrong. The life’s-a-game-you-shouldn’t-take-too-seriously style that inspired Cameron to say he holds “no grudges” against Boris Johnson now the match is over and the covers back on the pitch.
The gentleman amateur style that convinced Cameron he could treat a momentous decision like an Oxford essay crisis and charm the electorate into agreeing with him in a couple of weeks, as if voters were a sherry-soaked don who could be won round with a few clever asides. The effortlessly superior style that never makes the effort to ask what the hell the English have to feel superior about. The gutless, dilettantish and fatally flippant style that has dominated England for so long and failed it so completely. The time for its funeral has long passed.
A politician who bumped into Cameron said he thinks the referendum result must be respected, but that Britain should protect living standards by going for the softest Brexit imaginable and staying in the single market. This is a compromise well to the “left” of Theresa May and Corbyn’s plans and is worth discussing. Whatever his critics say, David Cameron is a former PM. He not only has the right to offer his solution but a duty. If he is to earn the right to a hearing, however, he must first find not only self-knowledge and courage, but an un-English seriousness of purpose he has evaded all his life.
The Trump administration’s most significant and lasting decisions will be about U.S. policy toward China. Far more consequential than even the Supreme Court’s composition or immigration policy is whether the 21st century will be marked by conflict or cooperation between the two most prosperous and powerful countries on the planet. The last time there was such a question — when Britain confronted a rising Germany a century ago — it did not work out so well.
Since the end of the Cold War, we have lived in an era of almost no genuine great-power competition, which has led to the emergence of a dynamic global economy and a huge expansion of international trade, travel, culture and contact. All this happened under the United States’ uncontested supremacy — military, political, economic and cultural.
That age is over. Twenty-five years ago, China made up less than 2 percent of the global gross domestic product. Today that figure is 15 percent, second only to the United States’ 24 percent. In the next decade or so, the Chinese economy will surpass the size of America’s. Already, nine of the 20 most valuable technology companies in the world are based in China. Beijing has also become far more active on the global stage, ramping up its defense spending, foreign aid and international cultural missions. Its Belt and Road Initiative, a program of infrastructure investment in dozens of countries, will ultimately be at least seven times larger than the Marshall Plan, if not far more, in inflation-adjusted terms.
The Trump administration has many of the right instincts on China. Beijing has taken advantage of free trade and the United States’ desire to integrate China into the global system. The administration is right to push back and try to get a fundamentally different attitude from China on trade. But instincts do not make for a grand strategy.
Were Washington to be more strategic, it would have allied with Europe, Japan and Canada on trade and presented China with a united front, almost guaranteeing that Beijing would have to acquiesce. It would have embraced the Trans-Pacific Partnership as a way to provide Pacific countries an alternative to the Chinese economic system. But in place of a China strategy, we have a series of contradictory initiatives and rhetoric.
In fact, the administration seems divided on the broader issue of U.S.-China relations. On one side are people such as Treasury Secretary Steven Mnuchin, who want to use tough talk and tariffs to extract a better deal from China, while staying within the basic framework of the international system. Others, such as trade adviser Peter Navarro, would prefer that the United States and China were far less intertwined. This would undoubtedly mean a more mercantilist world economy and a more tense international order. There is a similar split among geopoliticians, with the Pentagon being more hawkish (not least because it ensures huge budgets) and the State Department more conciliatory.
Vice President Pence recently gave a fiery speech that came close to declaring that we are in a new Cold War with China. An outright labeling of China as the enemy would be a seismic shift in U.S. strategy and would certainly trigger a Chinese response. It could lead us to a divided, unstable and less prosperous world. Here’s hoping the Trump administration has thought through the dangers of such a confrontational approach.
History tells us that if China is indeed now the United States’ main rival for superpower status, the best way to handle such a challenge lies less in tariffs and military threats and more in revitalization at home. The United States prevailed over the Soviet Union not because it waged war in Vietnam or funded the contras in Nicaragua, but because it had a fundamentally more vibrant and productive political-economic model. The Soviet threat pushed the United States to build the interstate highway system, put a man on the moon, and lavishly fund science and technology.
The former head of Google China, Kai-Fu Lee, has written an important book arguing that China is likely to win the race for artificial intelligence — the crucial technology of the 21st century. He points out that China’s companies are highly innovative, its government is willing to make big bets for the long term, and its entrepreneurs are driven and determined.
Tariffs and military maneuvers might be fine at a tactical level, but they don’t address the core challenge. The United States desperately needs to rebuild its infrastructure, fix its educational system, spend money on basic scientific research and solve the political dysfunction that has made its model less appealing around the world. If China is a threat, that’s the best response.
Having unilaterally reimposed sanctions on Iran, US President Donald Trump’s administration is threatening to penalize companies doing business with the Islamic Republic by denying them access to US banks. But that could hasten the dollar’s demise as the main global currency.
Trump is squandering US leverage
BRUSSELS – US President Donald Trump’s unilateralism is reshaping the world in profound and irreversible ways. He is undermining the working of multilateral institutions. Other countries, for their part, no longer regard the United States as a reliable alliance partner and feel impelled to develop their own geopolitical capabilities.
Now the Trump administration is eroding the dollar’s global role. Having unilaterally reimposed sanctions on Iran, it is threatening to penalize companies doing business with the Islamic Republic by denying them access to US banks.
The threat is serious because US banks are the main source of dollars used in cross-border transactions. According to the Society for Worldwide Interbank Financial Telecommunication (SWIFT), dollars are used in nearly half of all cross-border payments, a share far greater than the weight of the US in the world economy.
In response to the Trump administration’s stance, Germany, France, and Britain, together with Russia and China, have announced plans to circumvent the dollar, US banks, and US government scrutiny. “Plans” may be a bit strong, given that few details have been provided. But the three countries have described in general terms the creation of a stand-alone financial entity, owned and organized by the governments in question, to facilitate transactions between Iran and foreign companies.
Those companies will presumably settle their claims in euros, not dollars, freeing them from dependence on US banks. And insofar as the Europeans’ special-purpose financial vehicle also bypasses SWIFT, it will be hard for the US to track transactions between Iran and foreign companies and impose penalties.
Is this scheme viable? While there is no purely technical obstacle to creating an alternative payments channel, doing so is certain to enrage Trump, who will presumably respond with another round of tariffs against the offending countries. Such, unfortunately, is the price of political independence, at least for now.
Having learned a painful lesson about dependence on the dollar, will other countries move away from it more generally? The fact that the dollar is used so widely makes doing so difficult. Banks and companies prefer using dollars because so many other banks and companies use dollars and expect their counterparties to do likewise. Shifting to another currency would require coordinated action. But with the governments of three large European countries having announced just such coordination, such a scenario can no longer be excluded.
It is worth recalling how the dollar gained international prominence in the first place. Before 1914, it played essentially no international role. But a geopolitical shock, together with an institutional change, transformed the dollar’s status.
The geopolitical shock was World War I, which made it hard for neutral countries to transact with British banks and settle their accounts using sterling. The institutional change was the Federal Reserve Act, which created an entity that enhanced the liquidity of markets in dollar-denominated credits and allowed US banks to operate abroad for the first time. By the early 1920s the dollar had matched and, on some dimensions, surpassed sterling as the principal vehicle for international transactions.
This precedent suggests that 5-10 years is a plausible time frame over which the US could lose what Valéry Giscard d’Estaing, then France’s Finance Minister, famously called the “exorbitant privilege” afforded it by issuing the world’s main international currency. This doesn’t mean that foreign banks and companies will shun the dollar entirely. US financial markets are large and liquid and are likely to remain so. US banks operate globally. In particular, foreign companies will continue to use dollars in transactions with the US itself.
But in an era of US unilateralism, they will want to hedge their bets. If the geopolitical shock of Trump’s unilateralism spurs an institutional innovation that makes it easier for European banks and companies to make payments in euros, then the transformation could be swift (as it were). If Iran receives euros rather than dollars for its oil exports, it will use those euros to pay for merchandise imports. With companies elsewhere earning euros rather than dollars, there will be less reason for central banks to hold dollars in order to intervene in the foreign exchange market and stabilize the local currency against the greenback. At this point, there would be no going back.
One motivation for establishing the euro was to free Europe from excessive dependence on the dollar. This is likewise one of China’s motivations for seeking to internationalize the renminbi. So far, the success of both efforts has been mixed, at best. In threatening to punish Europe and China, Trump is, ironically, helping them to achieve their goals.
Moreover, Trump is squandering US leverage. Working with the Europeans and the Chinese, he could have threatened Iran, and companies doing business there, with comprehensive and effective sanctions had there been evidence that the country was failing to live up to its denuclearization obligations. But working together to ensure Iran’s compliance was, of course, precisely what the Joint Comprehensive Plan of Action, renounced by the Trump administration earlier this year, was established to do.
The United States has had the world’s largest trade deficit for almost half a century. In 2017, the US trade deficit in goods and services was $566 billion; without services, the merchandise account deficit was $810 billion.
The largest US trade deficit is with China, amounting to $375 billion, rising dramatically from an average of $34 billion in the 1990s. In 2017, its trade deficit with Japan was $69 billion, and with Germany, $65 billion. The US also has trade deficits with both its NAFTA partners, including $71 billion with Mexico.
President Trump wants to reduce these deficits with protectionist measures. In March 2018, he imposed a 25% tariff on steel imports and a 10% tariff on aluminium, a month after imposing tariffs and quotas on imported solar panels and washing machines. On 10 July, the US listed Chinese imports worth $200 billion annually that will face 10% tariffs, probably from September, following 25% tariffs on $34 billion of such imports from 7 July.
Do US trade deficits reflect weakness?
The usual explanation for bilateral trade deficits is price differentials. However, the US accuses its surplus trading partners of ‘unfair’ trade practices, such as currency manipulation, wage suppression and government subsidies to boost exports, besides blocking US imports.
Trump views most trade deals such as NAFTA as unfair. His team insists that renegotiating trade deals, ‘buying American’, a strong dollar and confronting China will shrink US trade deficits.
But the country’s overall trade deficit, offset by capital inflows, is related to the gap between its savings and investments. The US spends more than it produces, thus importing foreign goods and services. Cheap credit fuels debt-financed consumption, increasing the trade deficit.
Total US household debt rose to $13.2 trillion in the first quarter of 2018, the 15th consecutive quarter of growth in the mortgage, student, auto and credit card loan categories. American household debt amounted to roughly two-thirds of GDP in 2017.
US government budget deficits have also been growing. From 67.7% of GDP in 2008, US government debt rose to 105.4% in 2017. The federal budget deficit was $665 billion in FY2017, rising 14% from $585 billion in FY2016.
The US budget deficit was 3.5% of GDP in 2017. According to the US Congressional Budget Office, it will surpass $1 trillion by 2020, two years sooner than previously projected, due to Trump tax cuts and spending increases.
The growing US economy may also increase the trade deficit, as consumers spend more on imported goods and services. The stronger dollar has made foreign products cheaper for American consumers while making US exports more expensive for foreigners.
These underlying economic forces have become more important than policies in raising the overall trade deficit, while bilateral deficits reflect specific commercial relations with particular countries. Thus, disrupting bilateral trade relations may only shift the trade deficit to others.
Have the cake and eat it?
So, why does the US have a structural trade deficit? As the de facto international ‘reserve currency’ after the Second World War, the US has provided the rest of the world with liquidity. Its perceived military strength means it is also seen as a safe place to keep financial assets. Of about $10 trillion in global reserves in 2016, for example, around three-fifths were held in US dollars.
US supply of international liquidity by issuing the global reserve currency offers several economic advantages. It also earns seigniorage from issuing the main currency used around the world, due to the difference between the face value of a currency note and the cost of issuing it.
With growing foreign demand for dollars, the US can run deficits almost indefinitely by creating more debt or selling assets. Demand for dollar-denominated assets, for example, US Treasury bonds, raises their prices, lowering interest rates, to finance both consumption and investment.
While foreign investors buy low-yielding, short-term US assets, Americans can invest abroad in higher-yielding, long-term assets. The US usually reaps higher returns on such investments than it pays for debt, labelled America’s ‘exorbitant privilege’.
Thus, for the US to enjoy the ‘exorbitant privilege’ of the dollar’s role as the major reserve currency, it must run a chronic trade deficit. Therefore, giving up the dollar’s global reserve currency status will have major implications for the US economy, finances and living standards.
Can the US win Trump’s trade war?
Barry Eichengreen noted that countries in military alliances with reserve-currency-issuing countries hold about 30% more of the partner’s currency in their foreign-exchange reserves than countries not in such alliances. Trump, however, has prioritized reducing trade deficits while disrupting the very political alliances that help sustain the dollar’s dominance.
As the US retreats from the global diplomatic stage, use of other reserve currencies, including China’s renminbi, has been growing, especially in Europe and Africa. Thus, ironically, as Trump wages trade wars on both foes and friends, China will probably gain, both geo-politically and economically.
The resulting global economic shift will not only hurt the US dollar and economy through the exchange rate and borrowing costs, but also its geopolitical dominance.
Anis Chowdhury, Adjunct Professor at Western Sydney University (Australia), held senior United Nations positions in New York and Bangkok.
Jomo Kwame Sundaram, a former economics professor, was United Nations Assistant Secretary-General for Economic Development. In 2007, he was awarded the Wassily Leontief Prize for Advancing the Frontiers of Economic Thought. He was recently appointed a member of Prime Minister Dr. Mahathir Mohamad’s Eminent Persons Council on Strategy and Policy.
In London, in the nineteen-thirties, the émigré Hungarian intellectual Karl Polanyi was known among his friends as “the apocalyptic chap.” His gloom was understandable. Nearly fifty, he’d had to leave his wife, daughter, and mother behind in Vienna shortly after Austria lurched toward fascism, in 1933. Although he had long edited and contributed to the prestigious Viennese weekly The Austrian Economist, which published such celebrated figures as Friedrich Hayek and Joseph Schumpeter, he had come to discount his career as a thing of “theoretical and practical barrenness,” and blamed himself for failing to diagnose his era’s crucial political conflict. As so often for refugees, money was tight. Despite letters of reference from eminent historians, Polanyi failed to land a professorship or a fellowship, though he did manage to earn thirty-seven pounds co-editing an anti-fascist anthology, which featured essays by W. H. Auden and Reinhold Niebuhr. In his own contribution to the book, he argued that fascism strips democratic politics away from human society so that “only economic life remains,” a skeleton without flesh.
In 1937, he taught in adult-education programs in Kent and Sussex, commuting by bus or train and spending the night at a student’s house if it got too late to return home. The subject was British economic history, which he hadn’t much studied before. As he learned how capitalism had challenged the political system of Great Britain, the first nation in the world to industrialize, he decided that it was no accident that fascism was infecting countries as disparate as Japan, Croatia, and Portugal. Fascism shouldn’t be “ascribed to local causes, national mentalities, or historical backgrounds,” he came to believe. It shouldn’t even be thought of as a political movement. It was, rather, an “ever-given political possibility”—a reflex that could occur in any polity experiencing a certain kind of pain. In Polanyi’s opinion, whenever the profit-making impulse becomes deadlocked with the need to shield people from its harmful side effects, voters are tempted by the “fascist solution”: reconcile profit and security by forfeiting civic freedom. The insight became the keystone of his masterpiece, “The Great Transformation,” which was published in 1944, as the world was coming to terms with the destruction that fascism had wrought.
Today, as in the nineteen-thirties, strongmen are ascendant worldwide, purging civil servants, subverting the judiciary, and bullying the press. In a sweeping, angry new book, “Can Democracy Survive Global Capitalism?” (Norton), the journalist, editor, and Brandeis professor Robert Kuttner champions Polanyi as a neglected prophet. Like Polanyi, he believes that free markets can be crueller than citizens will tolerate, inflicting a distress that he thinks is making us newly vulnerable to the fascist solution. In Kuttner’s description, however, today’s political impasse is different from that of the nineteen-thirties. It is being caused not by a stalemate between leftist governments and a reactionary business sector but by leftists in government who have reneged on their principles. Since the demise of the Soviet Union, Kuttner contends, America’s Democrats, Britain’s Labour Party, and many of Europe’s social democrats have consistently tacked rightward, relinquishing concern for ordinary workers and embracing the power of markets; they have sided with corporations and investors so many times that, by now, workers no longer feel represented by them. When strongmen arrived promising jobs and a shared sense of purpose, working-class voters were ready for the message.
Born in 1886 in Vienna, Karl Polanyi grew up in Budapest, in an assimilated, highly cultured Jewish family. Polanyi’s father, an engineer who became a railroad contractor, was so conscientious that when his business failed, around 1900, he repaid the shareholders, plunging the family into genteel poverty. Polanyi’s mother founded a women’s college, hosted a salon, and had a somewhat chaotic personality that a daughter-in-law once likened to “a book not yet written.” At home, as Gareth Dale recounts in a thoughtful 2016 biography, the family spoke German, French, and a little Hungarian; Karl also learned English, Latin, and Greek as a child. “I was taught tolerance not only by Goethe,” he later recalled, “but also, with seemingly mutually exclusive accents, by Dostoyevsky and John Stuart Mill.”
After university, Polanyi helped to found Hungary’s Radical Citizens’ Party, which called for land redistribution, free trade, and extended suffrage. But he remained enough of a traditionalist to enlist as a cavalry officer shortly after the First World War broke out. At the front, where, he said, “the Russian winter and the blackish steppe made me feel sick at heart,” he read “Hamlet” obsessively, and wrote letters home asking his family to send volumes of Marx, Flaubert, and Locke. After the war, the Radical Citizens took power, but they fumbled it. In the short-lived Communist government that followed, Polanyi was offered a position in the culture ministry by his friend György Lukács, later a celebrated Marxist literary critic.
When the Communists fell, pogroms broke out, and Polanyi fled to Vienna. “He looked like one who looks back on life, not forward to it,” Ilona Duczynska, who became his wife, remembered. Duczynska was a Communist engineer, ten years younger than he was. She had smuggled tsarist diamonds out of Russia in a tube of toothpaste and once borrowed a pistol to assassinate Hungary’s Prime Minister, though he resigned before she could shoot him. She and Polanyi married in 1923 and soon had a daughter.
These were the days of so-called Red Vienna, when the city’s socialist government was providing apartments for the working class and opening new libraries and kindergartens. Polanyi held informal seminars on socialist economics at home. He started writing for The Austrian Economist in 1924, and he was promoted to editor-in-chief a few months before the right-wing takeover sent him into exile. Duczynska remained in Vienna, going underground with a militia, but, in 1936, she, too, emigrated, taking a job as a cook in a London boarding house. In 1940, Bennington College offered Polanyi a lectureship, and he left for Vermont, where his family soon joined him and he began to turn his lecture notes into a book. “Not since 1920 did I have a time so rich in study and development,” he wrote.
Polanyi starts “The Great Transformation” by giving capitalism its due. For all but eighteen months of the century prior to the First World War, he writes, a web of international trade and investment kept peace among Europe’s great powers. Money crossed borders easily, thanks to the gold standard, a promise by each nation’s central bank to sell gold at a fixed price in its own currency. This both harmonized trade between countries and stabilized relative currency values. If a nation started to sell more goods than it bought, gold streamed in, expanding the money supply, heating up the economy, and raising prices high enough to discourage foreign buyers—at which point, in a correction so smooth it almost seemed natural, exports sank back down to pre-boom levels. The trouble was that the system could be gratuitously cruel. If a country went into a recession or its currency weakened, the only remedy was to attract foreign money by forcing prices down, cutting government spending, or raising interest rates—which, in effect, meant throwing people out of work. “No private suffering, no restriction of sovereignty, was deemed too great a sacrifice for the recovery of monetary integrity,” Polanyi wrote.
The system was sustainable politically only as long as those whose lives it ruined didn’t have a say. But, in the late nineteenth and early twentieth centuries, the right to vote spread. In the twenties and thirties, governments began trying to protect citizens’ jobs from shifts in international prices by raising tariffs, so that, in the system’s final years, it hardened national borders instead of opening them, and engendered what Polanyi called a “new crustacean type of nation,” which turned away from international trade, making first one world war, and then another, inevitable.
In Vienna, Polanyi had heard socialism dismissed as utopian, on the ground that no central authority could efficiently manage millions of different wishes, resources, and capabilities. In “The Great Transformation,” he swivelled this popgun around. What was utopian, he declared, was “the concept of a self-regulating market.” Human life wasn’t as orderly as mathematics, and only a goggle-eyed idealist would think it wise to lash people to a mechanism like the gold standard and then turn the crank. For most of human history, he observed, money and the exchange of goods had been embedded within culture, religion, and politics. The experiment of subordinating a nation to a self-adjusting market hadn’t even been attempted until Britain tried it, in the mid-eighteen-thirties, and that effort had required a great deal of coördination and behind-the-scenes management. “Laissez-faire,” Polanyi earnestly joked, “was planned.”
On the other hand, Polanyi believed that resistance to market forces, which he dubbed “the countermovement,” truly was spontaneous and ad hoc. He pointed to the motley of late-nineteenth-century measures—inspecting food and drink, subsidizing irrigation, regulating coal-mine ventilation, requiring vaccinations, protecting juvenile chimney sweeps, and so on—that were instituted to housebreak capitalism. Because such restraints went against the laws of supply and demand, they were despised by defenders of laissez-faire, who, Polanyi noticed, usually argued “that the incomplete application of its principles was the reason for every and any difficulty laid to its charge.” But what was the alternative? Once the laissez-faire machine started running, it cheerfully annihilated the people and the natural environment that it made use of, unless it was restrained.
Polanyi offered the example of the enclosure movement in sixteenth-century England, when landowners tore down villages and turned common lands into private pastures. The changes brought efficiencies that raised the land’s food yield as well as its value, in the long term improving life for everyone. Enclosure was a good thing, in other words; the numbers said so. In the short term, however, it dispossessed peasants who couldn’t immediately improvise a new living, and it was only because of a countermovement—led in piecemeal fashion by the monarchy, in a long, losing battle with Parliament—that more people didn’t die of exposure and starvation. If you argued that resistance did not compute, you would be right, but the countermovement, though it couldn’t stop progress, shielded people by slowing it down. It made enclosure so gradual that, even three centuries later, the poet John Clare was lamenting its advance in his sonnets.
In the nineteen-thirties, when Polanyi was first formulating his critique, the British economist John Maynard Keynes was likewise arguing that capitalist economies aren’t self-adjusting. The markets for labor, goods, and money, he showed, don’t find equilibriums independently but through interactions with one another that can have unfortunate, counterintuitive side effects. In hard times, economies tend to retrench, just when stimulus is most needed; the richer they get, the less likely they are to invest enough to sustain their wealth. During the Depression, Keynes made the case that governments should deficit-spend their way out of recessions. By the time Polanyi’s book was published, the Keynesian view had become orthodoxy. For the next few decades, the world’s leading economies were tightly managed by their governments. America’s top marginal tax rate stayed at ninety-one per cent until 1964, and anti-usury laws kept a ceiling on interest rates until the late seventies. The memory of the financial chaos of the thirties, and of the fascism that it gave rise to, was still vivid, and the Soviet Union loomed as an alternative, should the Western democracies fail to treat their workers well.
In terms of international monetary systems, too, Keynesianism held sway. In 1944, at the Bretton Woods Conference, Keynes helped to negotiate a way of harmonizing exchange rates that gave national governments enough elbow room to boost their domestic economies when necessary. Only America continued to redeem its currency with gold. Other nations pegged their currencies to the dollar (making it their reserve currency), but they were free to adjust their currencies’ values within limits when the need arose. Countries were allowed, and sometimes even required, to impose capital controls, measures that limited the cross-border flow of investment capital. With investors unable to yank money suddenly from one country to another, governments were free to spur growth with low interest rates and to spend on social programs without fear that inflation-averse capitalists would sell off their nations’ bonds. So weak was the political power of investors that France, Britain, and America let inflation shrink the value of their war debts considerably. In France, the economist Thomas Piketty has quipped, the period amounted to “capitalism without capitalists.”
The result—highly inconvenient for free-market fundamentalists—was prosperity. In the three decades following the Second World War, per-capita output grew faster in Western Europe and North America than ever before or since. There were no significant banking or financial crises. The real income of Europeans rose as much as it had in the previous hundred and fifty years, and American unemployment, which had ranged between fourteen and twenty-five per cent in the thirties, dropped to an average of 4.6 per cent in the fifties. The new wealth was widely shared, too; income inequality plummeted across the developed world. And with the plenty came calm. The economic historian Barry Eichengreen, in his new book, “The Populist Temptation” (Oxford), reports that in twenty advanced nations no populist leader—which he defines as a politician who is “anti-elite, authoritarian, and nativist”—took office during this golden era, and that a far narrower share of votes went to extremist parties than before or after.
“This was the road once taken,” Kuttner writes. “There was no economic need for a different one.” Nevertheless, we strayed—or, rather, in Kuttner’s telling, we were driven off the road after capitalists grabbed the steering wheel away from the Keynesians. The year 1973, in his opinion, marked “the end of the postwar social contract.” Politicians began snipping away restraints on investors and financiers, and the economy returned to spasming and sputtering. Between 1973 and 1992, per-capita income growth in the developed world fell to half of what it had been between 1950 and 1973. Income inequality rebounded. By 2010, the real median earnings of prime-age American workingmen were four per cent lower than they had been in 1970. American women’s earnings rose for a bit longer, as more women made their way into the workforce, but declined after 2000. And, as Polanyi would have predicted, faith in democracy slipped. Kuttner warns that support for right-wing extremists in Western Europe is even higher today than it was in the nineteen-thirties.
But was Keynesianism pushed, or did it stumble? Kuttner’s indignation about its fall from grace is more straightforward than the course of events that led to it. In the years following the Second World War, Europe was swimming with dollars, thanks to the Marshall Plan and American military aid to Europe. Beyond America’s jurisdiction, those dollars slipped free of its capital controls, and in the nineteen-sixties investors began to sling them from country to country as impetuously as in the days before Bretton Woods, punitively dumping the bonds of any government that tried to run an interest rate lower than those of its peers. The cost of the Vietnam War sparked inflation in America, and the dollar’s second life as the world’s reserve currency risked pushing the inflation even higher. When America fell into recession in 1970, the Federal Reserve tried to boost the country out of it by dropping interest rates, and America became a target of opportunity for speculators: capital fled the country, taking gold with it. By May, 1971, the United States was facing its first merchandise trade deficit since 1893, an indication that the high dollar was discouraging foreign buyers. Unwilling to pacify investors by inflicting austerity on voters, President Richard Nixon uncoupled the dollar from gold, ending the Bretton Woods agreement. Then, in October, 1973, Arab nations, upset about America’s solidarity with Israel during the Yom Kippur War, embargoed oil sales to the United States, and the price of crude nearly quadrupled in the space of three months. Food prices skyrocketed, and, as wallets were pinched, the country tumbled into another recession.
At this juncture, a new economic monster appeared: stagflation, a chimera of inflation, recession, and unemployment. Keynesian economists, who didn’t think that high unemployment and inflation could coëxist, were at a loss for how to handle it. The predicament provided an opening for their critics, most notably Milton Friedman, who argued that incessant government stimulation of the economy risked promoting not only inflation but the expectation of inflation, which could then spiral out of control. Friedman declared Keynesianism discredited and demanded that the government refrain from tampering with the economy, other than to manage the money supply.
In 1974, Alan Greenspan, President Gerald Ford’s economic adviser and an acolyte of Ayn Rand, likewise urged resisting political pressure to help the economy grow. “Inflation is our domestic public enemy No. 1,” Ford declared, and the Federal Reserve raised interest rates. Five years later, when a revolution in Iran set off a second spike in oil prices, a new round of inflation, and yet another recession, President Jimmy Carter’s Federal Reserve chair, Paul Volcker, raised interest rates again and again, to as high as twenty per cent. By 1982, America’s G.D.P. was shrinking 2.2 per cent a year, and unemployment was higher than it had been since the Great Depression. The nation had gone back to stabilizing its currency the old-fashioned way—by throwing people out of work—and utopian faith in self-regulating free markets had made a comeback. Kuttner thinks that this was a terrible mistake, arguing that the inflation of the seventies was limited to particular sectors of the economy such as food and oil. That sounds a little like special pleading. It’s not clear how Ford and Carter could have resisted the pressure they were under to find a new policy solution once it was clear that the old one wasn’t working.
In time, Keynesians adapted their models—one adjustment took into account Friedman’s discovery of the dangers posed by the expectation of inflation—and the resulting synthesis, New Keynesianism, is now canonical. Both the Bush and the Obama Administrations adopted Keynesian policies in response to the financial crisis of 2008. But when stagflation flummoxed the Keynesians it cost them their near-monopoly on political advice-giving, and laissez-faire was rereleased into the political sphere. In January, 1974, the United States removed constraints on sending capital abroad. A 1978 Supreme Court decision overturned most state laws against usury. By the early twenty-first century, Kuttner charges, every New Deal regulation on finance was either “repealed or weakened by non-enforcement.” Starting in the eighties, developing nations found free-market doctrine written into their loan agreements: bankers refused to extend credit unless the nations promised to lift capital controls, balance their budgets, limit taxes and social spending, and aim to sell more goods abroad—an uncanny replica of the austerity terms enforced under the gold standard. The set of policies became known as the Washington Consensus. The idea was pain in the short term for the sake of progress in the long term, but a 2011 meta-analysis was unable to find statistically significant evidence that the trade-off is worth it. Even if it is worth it, Polanyi would have recommended tempering the short-term pain. From 2010, when austerity measures were first imposed on Greece, to 2016, its G.D.P. declined 35.6 per cent, according to the World Bank. A federally appointed panel is now pushing for a similar approach in Puerto Rico.
There is no shortage of villains in Kuttner’s narrative: financial deregulation; supply-side tax cuts; the decline of trade unions; the Democratic Party, which, by zigging left on identity politics and zagging right on economics, left conservative white working-class voters amenable to Donald Trump. Perhaps the most vexed issue Kuttner discusses, however, is trade policy—whether American workers should be protected against cheap foreign goods and labor.
The contours of the problem call to mind Polanyi’s account of enclosures in early-modern England. Half an hour with a supply-and-demand graph shows that free trade is better for every nation, developed or developing, no matter how much an individual businessperson might wish for a special tariff to protect her line of work. In a 2012 survey, eighty-five per cent of economists agreed that, in the long run, the boons of free trade “are much larger than any effects on employment.” But although free trade benefits a country over all, it almost always benefits some citizens more than—and even at the expense of—others. The proportion of low-skilled labor in America is smaller than in most countries that trade with America; economic theory therefore predicts that international trade will, on aggregate, make low-skilled workers in the United States worse off. The U.S. government has, since 1962, compensated workers laid off because of free trade, but the benefit has never been adequate; only four people were certified to receive it during its first decade. In a 2016 paper, “The China Shock,” the economists David H. Autor, David Dorn, and Gordon H. Hanson wrote that, for every additional hundred dollars of Chinese goods imported to an area, a manufacturing worker is likely to lose fifty-five dollars of income, while gaining only six dollars in government help.
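The "China Shock" figures reduce to a simple net calculation: for every hundred dollars of import exposure, fifty-five dollars of income lost against six dollars of government help gained. The sketch below just applies those per-hundred-dollar estimates; the thousand-dollar exposure figure in the usage line is a made-up input for illustration.

```python
# Net effect on a manufacturing worker, using the per-$100 estimates
# reported in the 2016 "China Shock" paper: roughly $55 of income lost
# and $6 of government help gained per $100 of Chinese imports.

INCOME_LOSS_PER_100 = 55.0
GOVT_HELP_PER_100 = 6.0

def net_impact(import_exposure_dollars):
    """Return the worker's net dollar loss for a given import exposure."""
    units = import_exposure_dollars / 100.0
    return units * (INCOME_LOSS_PER_100 - GOVT_HELP_PER_100)

# A hypothetical $1,000 of local import exposure leaves the worker
# about $490 worse off even after compensation.
loss = net_impact(1000)
```

The point of the arithmetic is the gap: compensation offsets barely a ninth of the loss, which is why the benefit has "never been adequate."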
In a laissez-faire utopia, dislodged workers would relocate or take jobs in other industries, but workers hurt by rivalry with China are doing neither. Maybe they don’t have the resources to move; maybe the flood of Chinese-made goods is so extensive that there are no unaffected manufacturing sectors for them to switch into. The authors of “The China Shock” calculate that, between 1999 and 2011, trade with China destroyed between two million and 2.4 million American jobs; Kuttner quotes even higher estimates. NAFTA, meanwhile, lowered the wage growth of American high-school dropouts in affected industries by sixteen percentage points. In “Why Liberalism Failed” (Yale), the political scientist Patrick J. Deneen denounces the assumption that “increased purchasing power of cheap goods will compensate for the absence of economic security.”
Kuttner follows Polanyi in attacking free-market claims of mathematic purity. “Literally no nation has industrialized by relying on free markets,” he writes. In 1791, Alexander Hamilton recommended that America encourage new branches of manufacturing by taxing imports and subsidizing domestic production. Even Britain, the world’s first great champion of free trade, started off protectionist. Kuttner believes that America stopped supporting its manufacturing sector partly because it got into the habit, during the Cold War, of rewarding foreign allies with access to American consumers, and eventually decided that exports of financial services, rather than of manufactured goods, would be the country’s future. Toward the end of the century, as American manufacturers saw the writing on the wall, they shifted production abroad.
Kuttner doesn’t give a full hearing to the usual reply by defenders of laissez-faire, which is that a transition from goods to services is inevitable in a maturing economy—that the efficiency of American manufacturing means that it would likely be shedding workers no matter what the government did. Even Eichengreen, a critic of globalization, notes, in “The Populist Temptation,” that, if you graph the share of the German workforce employed in manufacturing from 1970 to 2012, you see a steady, grim decline very similar to that of its American counterpart, despite the fact that Germany has long spent heavily on apprenticeship and vocational training. The industrial revolution created widely shared wealth almost magically at its dawn: when an unemployed farmworker took a job in a factory, his power to make things multiplied, along with his earning power, without his having to learn much. But, as factories grew more efficient, fewer workers were needed to run them. One study has attributed eighty-seven per cent of lost manufacturing jobs to improved productivity.
When a worker leaves a factory, her power to create wealth stops being multiplied. The only way to increase it again is through education—by teaching her to become a sommelier, say, or an anesthesiologist. But efficiency gains are notoriously harder to come by in service industries than in manufacturing ones. There are only so many leashes a dog walker can hold at one time. As a result, if an economy deindustrializes without securing a stable manufacturing core, its productivity may erode. The dynamic has caused stagnation in Latin America and sub-Saharan Africa, and there are signs of a comparable weakening of America’s earning power.
Meanwhile, in the factories that remain, machines have grown more complex; the few workers they employ need to be better educated, further widening the gap between educated and uneducated workers. Kuttner dismisses this labor-skills explanation for job loss as an “alibi” with “an insulting subtext”: “If your economic life has gone to hell, it’s your fault.” This is intemperate but, in Kuttner’s defense, he has been warning American politicians to protect manufacturing jobs since 1991, and has been enlisting Polanyi in the cause for at least as long. Moreover, he has a point: to talk about productivity-induced job loss when challenged to explain trade-induced job loss is to change the subject. Economists estimate that advances in automation explain only thirty to forty per cent of the premium that a college degree now adds to wages. And though Eichengreen is right about manufacturing’s declining share of the German workforce, it still stood at twenty per cent in 2012, which is roughly where the American share stood three decades earlier, and the German decline has been less steep. Somehow, Germany’s concern for its manufacturing workforce made a difference.
In any case, if one’s concern is populism, it may not matter whether jobs have been lost to trade competition or to automation. In areas where more industrial robots have been introduced, one analysis shows, voters were more likely to choose Trump in 2016. According to another analysis, if competition with Chinese imports had been somehow halved, Michigan, Wisconsin, and Pennsylvania would likely have chosen Hillary Clinton that year. Economic explanations like these have been challenged. In April, the political scientist Diana C. Mutz published a paper finding that Trump voters were no more likely than Clinton ones to have suffered a personal financial setback; she concluded that Trump’s victory was more likely caused by white anxiety about loss of status and social dominance. But it’s not surprising that Trump voters weren’t basing their decisions on their personal circumstances, because voters almost never do. And Mutz’s own results showed that the factors most likely to lead to a Trump vote included pessimism about the economy and preferring Trump’s position on China to Clinton’s. It may not be possible to untangle economic anxiety and a more tribal mind-set.
Casting about for a Polanyi-style countermovement to temper the ruthlessness of laissez-faire, Kuttner doesn’t rule out tariffs. They’re economically inefficient, but so are unions, and, for a follower of Polanyi, efficiency isn’t the only consideration. A decision about a nation’s economic life, the Harvard economist Dani Rodrik writes, in “Straight Talk on Trade” (Princeton), “may entail trading off competing social objectives—such as stability versus innovation—or making distributional choices”; that is, deciding who gains at whose expense. Such a decision should therefore be made by elected politicians rather than by economists. America imposed export quotas on Japan in the seventies and eighties, to the alarm of headline writers at the time: “Protectionist Threat,” the Times warned. But Rodrik, looking back, judges the measures to have been reasonable ad-hoc defenses—“necessary responses to the distributional and adjustment challenges posed by the emergence of new trade relationships.”
Trump’s chief trade negotiator served on the Reagan team that administered quotas against Japan. A similar approach today, however, seems unlikely to work on China, whose economy is much more messily enmeshed with America’s. You probably can’t name as many Chinese brands as Japanese ones, even though you probably buy more Chinese-made products, because they are sold to Americans by American companies. American workers may wish they had been shielded from the effects of trade with China, but American businesses, by and large, don’t. Perhaps that’s why Trump has escalated from a tariff on steel and aluminum to erratic threats of a trade war. To achieve his campaign goal of bringing manufacturing jobs home from China, he will have to not only impose tariffs but also convince multinationals that the tariffs will stay in place beyond the end of his Administration. Only then will executives calculate that they can’t just wait it out—that they have no choice but to incur the enormous costs and capital losses of abandoning investments in China and making new ones here. It’s hard to imagine such a scheme working, unless Trump establishes a political command over the private sector not seen in America since the forties. That can’t be ruled out, given the state of affairs in Russia, China, Hungary, and Turkey, but it seems more likely that Trump’s bluster will merely motivate businesses to be deferential to him, in pursuit of favorable treatment.
“Basically there are two solutions,” Polanyi wrote in 1935. “The extension of the democratic principle from politics to economics, or the abolition of the democratic ‘political sphere’ altogether.” In other words, socialism or fascism. The choice may not be so stark, however. During America’s golden age of full employment, the economy came, in structural terms, as close as it ever has to socialism, but it remained capitalist at its core, despite the government’s restraining hand. The result was that workers shared directly in the country’s growing wealth, whereas today proposals for fostering greater financial equality hinge on taxing winners in order to fund programs that compensate losers. Such redistributive measures, Kuttner observes, are only “second bests.” They don’t do much for social cohesion: winners resent the loss of earnings; losers, the loss of dignity.
Can we return to an equality in workers’ primary incomes rather than to one brought about by secondary redistribution? In a recent essay for the journal Democracy, the Roosevelt Institute fellow Jennifer Harris recommends reimagining international trade as an engine for this rather than as an obstacle to it. When negotiating trade deals, for instance, governments could make going to bat for multinationals conditional on their agreeing to, say, pay their workers a higher fraction of what they pay executives.
Failing that, we’d be better off with redistributive programs that are universal—parental leave, national health care—rather than targeted. Benefits available to everyone help people without making them feel like charity cases. Kuttner reports great things from Scandinavia, where governments support workers directly—through wage subsidies, retraining sabbaticals, and temporary public jobs—rather than by constraining employers’ power to fire people. “We won’t protect jobs,” Sweden’s labor minister recently told the Times. “But we will protect workers.” Income inequality in Scandinavia is lower than here, and a larger proportion of citizens work. Maybe a government can insure higher pay for its workers by treating them as if they were, in and of themselves, valuable. True, Denmark’s spending on its labor policies has at times risen to as high as 4.5 per cent of its G.D.P., more than the share America spends on defense, and studies show that diverse countries such as ours find it harder to muster social altruism than more racially and culturally homogeneous ones do. Nonetheless, programs like Social Security and Medicare, instituted when a communitarian ethic was still strong in American politics, remain popular. Why not try for more? It might make sense even if the numbers don’t add up. ♦
COMMENT: This is going to be the shortest comment I intend to make. Stop wasting public funds by going into another car project. We have invested and lost millions on Proton. But you are free, Dr. Mahathir, to put your own money into your proposed joint Indonesia-Malaysia car for the ASEAN market.–Din Merican
Mahathir indicates possibility of a Malaysia-Indonesia car
Prime Minister Dr Mahathir Mohamad on Friday spoke of the possibility of reviving the proposed project of a Malaysia-Indonesia car for the Asean market.
He said the idea was brought up when he test-drove a Proton car in Malaysia in February 2015 with visiting Indonesian President Joko Widodo sitting beside him.
“I was no longer the prime minister then,” he said.
Mahathir was the prime minister from 1981 to 2003 and became the premier for the second time on May 10, 2018.
“I drove the car at a speed of 180 km per hour on the Sepang race circuit. The President (Joko Widodo) did not complain at all (when the car was driven at that speed),” Mahathir said at the joint press conference with Jokowi, as the Indonesian President is fondly called, in conjunction with his official visit to Indonesia.
Jokowi had recalled the test drive when he spoke earlier at the press conference and said he had no cause for worry because the person behind the wheel was Mahathir.
“I was not afraid because the driver was Mahathir,” he said.