Trump and the Truth About Climate Change


July 22, 2017


by Joseph E. Stiglitz

http://www.project-syndicate.com


Under President Donald Trump’s leadership, the United States took another major step toward establishing itself as a rogue state on June 1, when it withdrew from the Paris climate agreement. For years, Trump has indulged the strange conspiracy theory that, as he put it in 2012, “The concept of global warming was created by and for the Chinese in order to make US manufacturing non-competitive.” But this was not the reason Trump advanced for withdrawing the US from the Paris accord. Rather, the agreement, he alleged, was bad for the US and implicitly unfair to it.

While fairness, like beauty, is in the eye of the beholder, Trump’s claim is difficult to justify. On the contrary, the Paris accord is very good for America, and it is the US that continues to impose an unfair burden on others.

Historically, the US has added disproportionately to the rising concentration of greenhouse gases in the atmosphere, and among large countries it remains the biggest per capita emitter of carbon dioxide by far – more than twice China’s rate and nearly 2.5 times Europe’s in 2013 (the latest year for which the World Bank has reported complete data). With its high income, the US is in a far better position to adapt to the challenges of climate change than poor countries like India and China, let alone a low-income country in Africa.


After six months in office, Trump has shown himself incapable of advancing his agenda; he has been unable to engage the issues that demand his leadership.

In fact, the major flaw in Trump’s reasoning is that combating climate change would strengthen the US, not weaken it. Trump is looking toward the past – a past that, ironically, was not that great. His promise to restore coal-mining jobs (which now number 51,000, less than 0.04% of the country’s non-farm employment) overlooks the harsh conditions and health risks endemic in that industry, not to mention the technological advances that would continue to reduce employment in the industry even if coal production were revived.

In fact, far more jobs are being created in solar panel installation than are being lost in coal. More generally, moving to a green economy would increase US income today and economic growth in the future. In this, as in so many things, Trump is hopelessly mired in the past.

Just a few weeks before Trump’s decision to withdraw from the Paris accord, the global High-Level Commission on Carbon Prices, which I co-chaired with Nicholas Stern, highlighted the potential of a green transition. The Commission’s report, released at the end of May, argues that reducing CO2 emissions could result in an even stronger economy.

The logic is straightforward. A key problem holding back the global economy today is deficient aggregate demand. At the same time, many countries’ governments face revenue shortfalls. But we can address both issues simultaneously and reduce emissions by imposing a charge (a tax) for CO2 emissions.

It is always better to tax bad things than good things. By taxing CO2, firms and households would have an incentive to retrofit for the world of the future. The tax would also provide firms with incentives to innovate in ways that reduce energy usage and emissions – giving them a dynamic competitive advantage.

The Commission analyzed the level of carbon price that would be required to achieve the goals set forth in the Paris climate agreement – a far higher price than in most of Europe today, but still manageable. The commissioners pointed out that the appropriate price may differ across countries. In particular, they noted, a better regulatory system – one that restrains coal-fired power generation, for example – reduces the burden that must be placed on the tax system.

Interestingly, one of the world’s best-performing economies, Sweden, has already adopted a carbon tax at a rate substantially higher than that discussed in our report. And the Swedes have simultaneously sustained their strong growth without US-level emissions.

America under Trump has gone from being a world leader to an object of derision. In the aftermath of Trump’s withdrawal of the US from the Paris accord, a large sign was hung over Rome’s city hall: “The Planet First.” Likewise, France’s new president, Emmanuel Macron, poked fun at Trump’s campaign slogan, declaring “Make Our Planet Great Again.”


But the consequences of Trump’s actions are no laughing matter. If the US continues to emit as it has, it will continue to impose enormous costs on the rest of the world, including on much poorer countries. Those who are being harmed by America’s recklessness are justifiably angry.

Fortunately, large parts of the US, including the most economically dynamic regions, have shown that Trump is, if not irrelevant, at least less relevant than he would like to believe. Large numbers of states and corporations have announced that they will proceed with their commitments – and perhaps go even further, offsetting the failures of other parts of the US.

In the meantime, the world must protect itself against rogue states. Climate change poses an existential threat to the planet that is no less dire than that posed by North Korea’s nuclear ambitions. In both cases, the world cannot escape the inevitable question: what is to be done about countries that refuse to do their part in preserving our planet?

The Quiet Demise of Austerity


July 21, 2017


by James McCormack

James McCormack is Managing Director and Global Head of the Sovereign and Supranational Group at Fitch Ratings.

https://www.project-syndicate.org


It has been several years since policymakers seriously discussed the merits of fiscal austerity. Debates about the potential advantages of using stimulus to boost short-term economic growth, or about the threat of government debt reaching such a level as to inhibit medium-term growth, have gone silent.

There is no mistaking which side won, and why. Austerity is dead. And as conventional politicians continue to take rearguard action against populist upstarts, they will likely embrace more fiscal-policy easing – or at least avoid tightening – to reap near-certain short-term economic gains. At the same time, they are not likely to heed warnings of the medium-term consequences of higher debt levels, given widespread talk of interest rates remaining “lower for longer.”

One way to confirm that an international fiscal-policy consensus has emerged is to review policymakers’ joint statements. The last time the G7 issued a communiqué noting the importance of fiscal consolidation was at the Lough Erne Summit in 2013, when it was still the G8.

Since then, joint statements have contained amorphous proposals to implement “fiscal strategies flexibly to support growth” and ensure that debt-to-GDP ratios are sustainable. Putting debt on a sustainable path presumably means that it will not increase without interruption. But in the absence of a definite timeframe, debt levels can undergo lengthy deviations, the sustainability of which is open to interpretation.

Objections to austerity were understandable in the period following the 2008 financial crisis. Fiscal policy was being tightened when growth was languishing below 2% (after bouncing back in 2010), and sizeable negative output gaps suggested that overall employment would be slow to recover.

In late 2012, at the peak of the post-crisis austerity debate, advanced economies were in the midst of a multi-year tightening equivalent to more than one percentage point of GDP annually, according to cyclically-adjusted primary balance data from the International Monetary Fund.


But just as fiscal policy was being tightened when cyclical economic conditions seemed to call for easing, it is now being eased when conditions seem to call for tightening. The output gap in advanced economies has all but disappeared, inflation is picking up, and world economic growth is forecast to be its strongest since 2010.

In 2013, Japan was the only advanced economy to loosen fiscal policy. But this year, the United Kingdom appears to be the only one preparing to tighten its policy – and that is assuming recent political ruptures haven’t altered its fiscal orientation, which will be reflected in the Chancellor of the Exchequer’s Autumn Statement.

Most observers would agree that government debt levels are uncomfortably high in many advanced economies, so it would be prudent for policymakers to discuss strategies for bringing them down. Moreover, there are several options for doing this, some of which are easier or more effective than others.

In the end, government deleveraging is about the relationship between economic growth and interest rates. The higher the growth rate relative to interest rates, the lower the level of fiscal consolidation needed to stabilize or reduce debt as a share of GDP.
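That relationship can be sketched with the standard debt-dynamics identity – a textbook formula, not one given in the article, and the numbers below are purely illustrative assumptions:

```python
# Standard debt-dynamics identity (textbook formula; numbers are illustrative):
#   b' = b * (1 + r) / (1 + g) - pb
# where b is debt/GDP, r the nominal interest rate, g nominal growth,
# and pb the primary balance as a share of GDP.

def required_primary_balance(b: float, r: float, g: float) -> float:
    """Primary balance (share of GDP) that holds the debt ratio constant."""
    return b * (r - g) / (1 + g)

debt_ratio = 1.00  # debt at 100% of GDP (hypothetical)

# Growth above interest rates: the ratio falls even with a small primary deficit.
print(required_primary_balance(debt_ratio, r=0.02, g=0.04))  # negative

# Interest rates above growth: a primary surplus is needed just to stand still.
print(required_primary_balance(debt_ratio, r=0.04, g=0.02))  # positive
```

The sign flip is the whole point of the paragraph above: while growth outpaces interest rates, governments can stabilize or reduce debt ratios with little or no consolidation; once rates exceed growth, consolidation becomes unavoidable.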

As economic growth continues to pick up while interest rates lag, at least outside the US, fiscal authorities will have further opportunities to reduce debt, and create fiscal space for stimulus measures when the next cyclical downturn inevitably arrives. But policymakers are not doing this, which suggests that they have prioritized largely political considerations over fiscal prudence.

After the recent elections in the Netherlands and France, a growing chorus is now proclaiming that “peak populism” has passed. But one could argue just as easily that populist ideals are being absorbed into more mainstream political and economic agendas. As a result, politicians, particularly in Europe, have no choice but to favor inclusive growth policies and scrutinize the potential impact that a given policy could have on the income distribution.

This political environment is hardly conducive to fiscal consolidation. Any tax increases or spending cuts will have to be designed exceptionally well – perhaps impossibly so – for leaders to avoid a populist backlash. Some people will always lose more than others from fiscal consolidation, and deciding who those people are is never a pleasant exercise.

So far, those decisions are being delayed on political grounds. But the economic implications of high government debt cannot be ignored forever. Monetary policy is already starting to change in the US, and it could be on the verge of changing globally. One way or another, fiscal authorities will have to confront challenging tradeoffs in the years ahead.

A New Course for Economic Liberalism


July 17, 2017


by Sebastian Buckup

Sebastian Buckup is Head of Programming at the World Economic Forum.

How policymakers can manage the opposing forces of economic diffusion and concentration.


The new man in France: President Emmanuel Macron

Since the Agrarian Revolution, technological progress has always fueled opposing forces of diffusion and concentration. Diffusion occurs as old powers and privileges corrode; concentration occurs as the power and reach of those who control new capabilities expands. The so-called Fourth Industrial Revolution will be no exception in this regard.

Already, the tension between diffusion and concentration is intensifying at all levels of the economy. Throughout the 1990s and early 2000s, trade grew twice as fast as GDP, lifting hundreds of millions out of poverty. Thanks to the globalization of capital and knowledge, countries were able to shift resources to more productive and higher-paying sectors. All of this contributed to the diffusion of market power.


But this diffusion occurred in parallel with an equally stark concentration. At the sectoral level, a couple of key industries – most notably, finance and information technology – secured a growing share of profits. In the United States, for example, the financial sector generates just 4% of employment, but accounts for more than 25% of corporate profits. And half of US companies that generate profits of 25% or more are tech firms.

The same has occurred at the organizational level. The most profitable 10% of US businesses are eight times more profitable than the average firm. In the 1990s, the multiple was only three.

Such concentration effects go a long way toward explaining rising economic inequality. Research by Cesar Hidalgo and his colleagues at MIT reveals that, in countries where sectoral concentration has declined in recent decades, such as South Korea, income inequality has fallen. In those where sectoral concentration has intensified, such as Norway, inequality has risen.

A similar trend can be seen at the organizational level. A recent study by Erling Barth, Alex Bryson, James Davis, and Richard Freeman showed that the dispersion of individual pay since the 1970s is associated with pay differences between, not within, companies. The Stanford economists Nicholas Bloom and David Price confirmed this finding, and argue that virtually the entire increase in income inequality in the US is rooted in the growing gap in average wages paid by firms.

Such outcomes are the result not just of inevitable structural shifts, but also of decisions about how to handle those shifts. In the late 1970s, as neoliberalism took hold, policymakers became less concerned about big firms converting profits into political influence, and instead worried that governments were protecting uncompetitive companies.

With this in mind, policymakers began to dismantle the economic rules and regulations that had been implemented after the Great Depression, and encouraged vertical and horizontal mergers. These decisions played a major role in enabling a new wave of globalization, which increasingly diffused growth and wealth across countries, but also laid the groundwork for the concentration of income and wealth within countries.

The growing “platform economy” is a case in point. In China, the e-commerce giant Alibaba is leading a massive effort to connect rural areas to national and global markets, including through its consumer-to-consumer platform Taobao. That effort entails substantial diffusion: in more than 1,000 rural Chinese communities – so-called “Taobao Villages” – over 10% of the population now makes a living by selling products on Taobao. But, as Alibaba helps to build an inclusive economy comprising millions of mini-multinationals, it is also expanding its own market power.

Policymakers now need a new approach that resists excessive concentration, which may create efficiency gains, but also allows firms to hoard profits and invest less. Of course, Joseph Schumpeter famously argued that one need not worry too much about monopoly rents, because competition would quickly erase the advantage. But corporate performance in recent decades paints a different picture: 80% of the firms that made a return of 25% or more in 2003 were still doing so ten years later. (In the 1990s, that share stood at about 50%.)

To counter such concentration, policymakers should, first, implement smarter competition laws that focus not only on market share or pricing power, but also on the many forms of rent extraction, from copyright and patent rules that allow incumbents to cash in on old discoveries to the misuse of network centrality. The question is not “how big is too big,” but how to differentiate between “good” and “bad” bigness. The answer hinges on the balance businesses strike between value capture and creation.

Moreover, policymakers need to make it easier for startups to scale up. A vibrant entrepreneurial ecosystem remains the most effective antidote to rent extraction. Digital ledger technologies, for instance, have the potential to curb the power of large oligopolies more effectively than heavy-handed policy interventions. Yet economies must not rely on markets alone to bring about the “churn” that capitalism so badly needs. Indeed, even as policymakers pay lip service to entrepreneurship, the number of startups has declined in many advanced economies.

Finally, policymakers must move beyond the neoliberal conceit that those who work hard and play by the rules are those who will rise. After all, the flipside of that perspective, which rests on a fundamental belief in the equalizing effect of the market, is what Michael Sandel calls our “meritocratic hubris”: the misguided idea that success (and failure) is up to us alone.

This implies that investments in education and skills training, while necessary, will not be sufficient to reduce inequality. Policies that tackle structural biases head-on – from minimum wages to, potentially, universal basic income schemes – are also needed.

Neoliberal economics has reached a breaking point, causing the traditional left-right political divide to be replaced by a different split: between those seeking forms of growth that are less inclined toward extreme concentration and those who want to end concentration by closing open markets and societies. Both sides challenge the old orthodoxies; but while one seeks to remove the “neo” from neoliberalism, the other seeks to dismantle liberalism altogether.

The neoliberal age has had its day. It is time to define what comes next.


Globalisation: The Rise and Fall of an Idea That Swept the World


July 15, 2017


It’s not just a populist backlash – many economists who once swore by free trade have changed their minds, too. How had they got it so wrong?

by Nikil Saval

https://www.theguardian.com


The annual January gathering of the World Economic Forum in Davos is usually a placid affair: a place for well-heeled participants to exchange notes on global business opportunities, or powder conditions on the local ski slopes, while cradling champagne and canapés. This January, the ultra-rich and the sparkling wine returned, but by all reports the mood was one of anxiety, defensiveness and self-reproach.

The future of economic globalisation, for which the Davos men and women see themselves as caretakers, had been shaken by a series of political earthquakes. “Globalisation” can mean many things, but what lay in particular doubt was the long-advanced project of increasing free trade in goods across borders. The previous summer, Britain had voted to leave the largest trading bloc in the world. In November, the unexpected victory of Donald Trump, who vowed to withdraw from major trade deals, appeared to jeopardise the trading relationships of the world’s richest country. Elections in France and Germany suddenly seemed to bear the possibility of anti-globalisation parties garnering better results than ever before. The barbarians weren’t at the gates to the ski-lifts yet – but they weren’t very far.

In a panel titled Governing Globalisation, economist Dambisa Moyo, otherwise a well-known supporter of free trade, forthrightly asked the audience to accept that “there have been significant losses” from globalisation. “It is not clear to me that we are going to be able to remedy them under the current infrastructure,” she added. Christine Lagarde, the head of the International Monetary Fund, called for a policy hitherto foreign to the World Economic Forum: “more redistribution”. After years of hedging or discounting the malign effects of free trade, it was time to face facts: globalisation caused job losses and depressed wages, and the usual Davos proposals – such as instructing affected populations to accept the new reality – weren’t going to work. Unless something changed, the political consequences were likely to get worse.

The backlash to globalisation has helped fuel the extraordinary political shifts of the past 18 months. During the close race to become the Democratic party candidate, Senator Bernie Sanders relentlessly attacked Hillary Clinton on her support for free trade. On the campaign trail, Donald Trump openly proposed tilting the terms of trade in favour of American industry. “Americanism, not globalism, shall be our creed,” he bellowed at the Republican national convention last July. The vote for Brexit was strongest in the regions of the UK devastated by the flight of manufacturing. At Davos in January, British Prime Minister Theresa May, the leader of the party of capital and inherited wealth, improbably picked up the theme, warning that, for many, “talk of greater globalisation … means their jobs being outsourced and wages undercut.” Meanwhile, the European far right has been warning against free movement of people as well as goods. Following her qualifying victory in the first round of France’s presidential election, Marine Le Pen warned darkly that “the main thing at stake in this election is the rampant globalisation that is endangering our civilisation.”

It was only a few decades ago that globalisation was held by many, even by some critics, to be an inevitable, unstoppable force. “Rejecting globalisation,” the American journalist George Packer has written, “was like rejecting the sunrise.” Globalisation could take place in services, capital and ideas, making it a notoriously imprecise term; but what it meant most often was making it cheaper to trade across borders – something that seemed to many at the time to be an unquestionable good.

In practice, this often meant that industry would move from rich countries, where labour was expensive, to poor countries, where labour was cheaper. People in the rich countries would either have to accept lower wages to compete, or lose their jobs. But no matter what, the goods they formerly produced would now be imported, and be even cheaper. And the unemployed could get new, higher-skilled jobs (if they got the requisite training). Mainstream economists and politicians upheld the consensus about the merits of globalisation, with little concern that there might be political consequences.

Back then, economists could calmly chalk up anti-globalisation sentiment to a marginal group of delusional protesters, or disgruntled stragglers still toiling uselessly in “sunset industries”. These days, as sizable constituencies have voted in country after country for anti-free-trade policies, or candidates that promise to limit them, the old self-assurance is gone. Millions have rejected, with uncertain results, the punishing logic that globalisation could not be stopped. The backlash has swelled a wave of soul-searching among economists, one that had already begun to roll ashore with the financial crisis. How did they fail to foresee the repercussions?

Anti-Globalisation protesters in Seattle, 1999. Photograph: Eric Draper/AP

In the heyday of the globalisation consensus, few economists questioned its merits in public. But in 1997, the Harvard economist Dani Rodrik published a slim book that created a stir. Appearing just as the US was about to enter a historic economic boom, Rodrik’s book, Has Globalization Gone Too Far?, sounded an unusual note of alarm.

Rodrik pointed to a series of dramatic recent events that challenged the idea that growing free trade would be peacefully accepted. In 1995, France had adopted a programme of fiscal austerity in order to prepare for entry into the eurozone; trade unions responded with the largest wave of strikes since 1968. In 1996, only five years after the end of the Soviet Union – with Russia’s once-protected markets having been forcibly opened, leading to a sudden decline in living standards – a communist won 40% of the vote in Russia’s presidential elections. That same year, two years after the passing of the North American Free Trade Agreement (NAFTA), one of the most ambitious multinational deals ever accomplished, a white nationalist running on an “America first” programme of economic protectionism did surprisingly well in the presidential primaries of the Republican party.

What was the pathology of which all of these disturbing events were symptoms? For Rodrik, it was “the process that has come to be called ‘globalisation’”. Since the 1980s, and especially following the collapse of the Soviet Union, lowering barriers to international trade had become the axiom of countries everywhere. Tariffs had to be slashed and regulations spiked. Trade unions, which kept wages high and made it harder to fire people, had to be crushed. Governments vied with each other to make their country more hospitable – more “competitive” – for businesses. That meant making labour cheaper and regulations looser, often in countries that had once tried their hand at socialism, or had spent years protecting “homegrown” industries with tariffs.

These moves were generally applauded by economists. After all, their profession had long embraced the principle of comparative advantage – simply put, the idea that countries will trade with each other in order to gain what each lacks, thereby benefiting both. In theory, then, the globalisation of trade in goods and services would benefit consumers in rich countries by giving them access to inexpensive goods produced by cheaper labour in poorer countries, and this demand, in turn, would help grow the economies of those poorer countries.
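The principle of comparative advantage can be made concrete with Ricardo's classic two-good illustration; the unit labour costs below are hypothetical, not figures from the article:

```python
# Ricardo's comparative-advantage logic with hypothetical unit labour costs:
# hours of labour needed to produce one unit of each good.
costs = {
    "Home":    {"cloth": 1, "wine": 2},   # Home is absolutely better at both
    "Foreign": {"cloth": 6, "wine": 3},
}

def specialization(costs):
    """Assign each country the good in which its opportunity cost is lower."""
    # Opportunity cost of one unit of cloth, measured in wine forgone.
    oc = {name: c["cloth"] / c["wine"] for name, c in costs.items()}
    cloth_maker = min(oc, key=oc.get)  # cloth is relatively cheaper here
    wine_maker = max(oc, key=oc.get)   # wine is relatively cheaper here
    return {cloth_maker: "cloth", wine_maker: "wine"}

# Even though Home is absolutely more productive in both goods, each country
# gains by specializing along comparative (not absolute) advantage.
print(specialization(costs))  # {'Home': 'cloth', 'Foreign': 'wine'}
```

The theory says nothing, however, about how the gains are distributed within each country – which is precisely the gap Rodrik's critique, discussed below, points to.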

Construction workers in Beijing, China. Photograph: Ng Han Guan/AP

But the social cost, in Rodrik’s dissenting view, was high – and consistently underestimated by economists. He noted that since the 1970s, lower-skilled European and American workers had endured a major fall in the real value of their wages, which dropped by more than 20%. Workers were suffering more spells of unemployment, more volatility in the hours they were expected to work.

While many economists attributed much of the insecurity to technological change – sophisticated new machines displacing low-skilled workers – Rodrik suggested that the process of globalisation should shoulder more of the blame. It was, in particular, the competition between workers in developing and developed countries that helped drive down wages and job security for workers in developed countries. Over and over, they would be held hostage to the possibility that their business would up and leave, in order to find cheap labour in other parts of the world; they had to accept restraints on their salaries – or else. Opinion polls registered their strong levels of anxiety and insecurity, and the political effects were becoming more visible. Rodrik foresaw that the cost of greater “economic integration” would be greater “social disintegration”. The inevitable result would be a huge political backlash.

As Rodrik would later recall, other economists tended to dismiss his arguments – or fear them. Paul Krugman, who would win the Nobel prize in 2008 for his earlier work in trade theory and economic geography, privately warned Rodrik that his work would give “ammunition to the barbarians”.

It was a tacit acknowledgment that pro-globalisation economists, journalists and politicians had come under growing pressure from a new movement on the left, who were raising concerns very similar to Rodrik’s. Over the course of the 1990s, an unwieldy international coalition had begun to contest the notion that globalisation was good. Called “anti-globalisation” by the media, and the “alter-globalisation” or “global justice” movement by its participants, it tried to draw attention to the devastating effect that free trade policies were having, especially in the developing world, where globalisation was supposed to be having its most beneficial effect. This was a time when figures such as the New York Times columnist Thomas Friedman had given the topic a glitzy prominence by documenting his time among what he gratingly called “globalutionaries”: chatting amiably with the CEO of Monsanto one day, gawking at lingerie manufacturers in Sri Lanka the next. Activists were intent on showing a much darker picture, revealing how the record of globalisation consisted mostly of farmers pushed off their land and the rampant proliferation of sweatshops. They also implicated the highest world bodies in their critique: the G7, World Bank and IMF. In 1999, the movement reached a high point when a unique coalition of trade unions and environmentalists managed to shut down the meeting of the World Trade Organization in Seattle.

In a state of panic, economists responded with a flood of columns and books that defended the necessity of a more open global market economy, in tones ranging from grandiose to sarcastic. In January 2000, Krugman used his first piece as a New York Times columnist to denounce the “trashing” of the WTO, calling it “a sad irony that the cause that has finally awakened the long-dormant American left is that of – yes! – denying opportunity to third-world workers”.

Where Krugman was derisive, others were solemn, putting the contemporary fight against the “anti-globalisation” left in a continuum of struggles for liberty. “Liberals, social democrats and moderate conservatives are on the same side in the great battles against religious fanatics, obscurantists, extreme environmentalists, fascists, Marxists and, of course, contemporary anti-globalisers,” wrote the Financial Times columnist and former World Bank economist Martin Wolf in his book Why Globalization Works. Language like this lent the fight for globalisation the air of an epochal struggle. More common was the rhetoric of figures such as Friedman, who in his book The World is Flat mocked the “pampered American college kids” who, “wearing their branded clothing, began to get interested in sweatshops as a way of expiating their guilt”.

Arguments against the global justice movement rested on the idea that the ultimate benefits of a more open and integrated economy would outweigh the downsides. “Freer trade is associated with higher growth and … higher growth is associated with reduced poverty,” wrote the Columbia University economist Jagdish Bhagwati in his book In Defense of Globalization. “Hence, growth reduces poverty.” No matter how troubling some of the local effects, the implication went, globalisation promised a greater good.



The fact that proponents of globalisation now felt compelled to spend much of their time defending it indicates how much visibility the global justice movement had achieved by the early 2000s. Still, over time, the movement lost ground, as a policy consensus settled in favour of globalisation. The proponents of globalisation were determined never to let another gathering be interrupted. They stopped meeting in major cities, and security everywhere was tightened. By the time of the invasion of Iraq, the world’s attention had turned from free trade to George Bush and the “war on terror,” leaving the globalisation consensus intact.


Above all, there was a widespread perception that globalisation was working as it was supposed to. The local adverse effects that activists pointed to – sweatshop labour, starving farmers – were increasingly obscured by the staggering GDP numbers and fantastical images of gleaming skylines coming out of China. With some lonely exceptions – such as Rodrik and the former World Bank chief economist and Columbia University professor Joseph Stiglitz – the pursuit of freer trade became a consensus position for economists, commentators and the vast majority of mainstream politicians, to the point where the benefits of free trade seemed to command blind adherence. In a 2006 TV interview, Thomas Friedman was asked whether there was any free trade deal he would not support. He replied that there wasn’t, admitting, “I wrote a column supporting the CAFTA, the Caribbean Free Trade initiative. I didn’t even know what was in it. I just knew two words: free trade.”

In the wake of the financial crisis, the cracks began to show in the consensus on globalisation, to the point that, today, there may no longer be a consensus. Economists who were once ardent proponents of globalisation have become some of its most prominent critics. Erstwhile supporters now concede, at least in part, that it has produced inequality, unemployment and downward pressure on wages. Nuances and criticisms that economists only used to raise in private seminars are finally coming out in the open.

A few months before the financial crisis hit, Krugman was already confessing to a “guilty conscience”. In the 1990s, he had been very influential in arguing that global trade with poor countries had only a small effect on workers’ wages in rich countries. By 2008, he was having doubts: the data seemed to suggest that the effect was much larger than he had suspected.

In the years that followed, the crash, the crisis of the eurozone and the worldwide drop in the price of oil and other commodities combined to put a huge dent in global trade. Since 2012, the IMF reported in its October 2016 World Economic Outlook, trade has grown at just 3% a year – less than half the average rate of the previous three decades. That month, Martin Wolf argued in a column that globalisation had “lost dynamism”, owing to a slackening of the world economy, the “exhaustion” of new markets to exploit and a rise in protectionist policies around the world.


In an interview earlier this year, Wolf suggested to me that, though he remained convinced globalisation had not been the decisive factor in rising inequality, he had nonetheless not fully foreseen when he was writing Why Globalization Works how “radical the implications” of worsening inequality “might be for the US, and therefore the world”. Among these implications appears to be a rising distrust of the establishment that is blamed for the inequality. “We have a very big political problem in many of our countries,” he said. “The elites – the policy making business and financial elites – are increasingly disliked. You need to make policy which brings people to think again that their societies are run in a decent and civilised way.”

That distrust of the establishment has had highly visible political consequences: Farage, Trump, and Le Pen on the right; but also in new parties on the left, such as Spain’s Podemos, and curious populist hybrids, such as Italy’s Five Star Movement. As in 1997, but to an even greater degree, the volatile political scene reflects public anxiety over “the process that has come to be called ‘globalisation’”. If the critics of globalisation could be dismissed before because of their lack of economics training, or ignored because they were in distant countries, or kept out of sight by a wall of police, their sudden political ascendancy in the rich countries of the west cannot be so easily discounted today.

Over the past year, the opinion pages of prestigious newspapers have been filled with belated, rueful comments from the high priests of globalisation – the men who appeared to have defeated the anti-globalisers two decades earlier. Perhaps the most surprising such transformation has been that of Larry Summers. Possessed of a panoply of elite titles – former Chief Economist of the World Bank, former Treasury Secretary, President emeritus of Harvard, former Economic Adviser to President Barack Obama – Summers was renowned in the 1990s and 2000s for being a blustery proponent of globalisation. For Summers, it seemed, market logic was so inexorable that its dictates prevailed over every social concern. In an infamous World Bank memo from 1991, he held that the cheapest way to dispose of rich countries’ toxic waste was to dump it in poor countries, where the economic cost of the resulting harm would be lowest. “The laws of economics, it’s often forgotten, are like the laws of engineering,” he said in a speech that year at a World Bank-IMF meeting in Bangkok. “There’s only one set of laws and they work everywhere. One of the things I’ve learned in my short time at the World Bank is that whenever anybody says, ‘But economics works differently here,’ they’re about to say something dumb.”

Over the last two years, a different, in some ways unrecognisable Larry Summers has been appearing in newspaper editorial pages. More circumspect in tone, this humbler Summers has been arguing that economic opportunities in the developing world are slowing, and that the already rich economies are finding it hard to get out of the crisis. Barring some kind of breakthrough, Summers says, an era of slow growth is here to stay.

In Summers’s recent writings, this sombre conclusion has often been paired with a surprising political goal: advocating for a “responsible nationalism”. Now he argues that politicians must recognise that “the basic responsibility of government is to maximise the welfare of citizens, not to pursue some abstract concept of the global good”.

One curious thing about the pro-globalisation consensus of the 1990s and 2000s, and its collapse in recent years, is how closely the cycle resembles a previous era. Pursuing free trade has always produced displacement and inequality – and political chaos, populism and retrenchment to go with it. Every time the social consequences of free trade are overlooked, political backlash follows. But free trade is only one of many forms that economic integration can take. History seems to suggest, however, that it might be the most destabilising one.

Nearly all economists and scholars of globalisation like to point to the fact that the economy was rather globalised by the early 20th century. As European countries colonised Asia and sub-Saharan Africa, they turned their colonies into suppliers of raw materials for European manufacturers, as well as markets for European goods. Meanwhile, the economies of the colonisers were also becoming free-trade zones for each other. “The opening years of the 20th century were the closest thing the world had ever seen to a free world market for goods, capital and labour,” writes the Harvard Professor of Government Jeffry Frieden in his standard account, Global Capitalism: Its Fall and Rise in the 20th Century. “It would be a hundred years before the world returned to that level of globalisation.”


In addition to military force, what underpinned this convenient arrangement for imperial nations was the gold standard. Under this system, each national currency had an established gold value: the British pound sterling was backed by 113 grains of pure gold; the US dollar by 23.22 grains, and so on. This meant that exchange rates, too, were fixed: a British pound was always equal to 4.87 dollars. The stability of exchange rates meant that the cost of doing business across borders was predictable. Just as in the eurozone today, you could count on the value of the currency holding steady, so long as the storehouse of gold remained more or less constant.
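Since a gold-standard exchange rate is simply the ratio of the two currencies’ gold contents, the quoted pound-dollar figure can be checked directly; a minimal sketch, using only the grain values given in the text:

```python
# Under the gold standard, an exchange rate is just the ratio of
# the two currencies' gold contents (grain figures as quoted above).
GRAINS_PER_POUND = 113.0    # grains of pure gold behind one pound sterling
GRAINS_PER_DOLLAR = 23.22   # grains of pure gold behind one US dollar

rate = GRAINS_PER_POUND / GRAINS_PER_DOLLAR
print(f"1 pound = {rate:.2f} dollars")  # 1 pound = 4.87 dollars
```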

When there were gold shortages – as there were in the 1870s – the system stopped working. To protect the sanctity of the standard under conditions of stress, central bankers across Europe and the US tightened access to credit and deflated prices. This left financiers in a decent position, but crushed farmers and the rural poor, for whom falling prices meant starvation. Then as now, economists and mainstream politicians largely overlooked the darker side of the economic picture.

In the US, this fuelled one of the world’s first self-described “populist” revolts, leading to the nomination of William Jennings Bryan as the Democratic party candidate in 1896. At his nominating convention, he gave a famous speech lambasting gold backers: “You shall not press down upon the brow of labour this crown of thorns, you shall not crucify mankind upon a cross of gold.” Then as now, financial elites and their supporters in the press were horrified. “There has been an upheaval of the political crust,” the Times of London reported, “and strange creatures have come forth.”

Businessmen were so distressed by Bryan that they backed the Republican candidate, William McKinley, who won partly by outspending Bryan five to one. Meanwhile, gold was bolstered by the discovery of new reserves in colonial South Africa. But the gold standard could not survive the first world war and the Great Depression. By the 1930s, unionisation had spread to more industries and there was a growing worldwide socialist movement. Protecting gold would mean mass unemployment and social unrest. Britain went off the gold standard in 1931, while Franklin Roosevelt took the US off it in 1933; France and several other countries would follow in 1936.

The prioritisation of finance and trade over the welfare of people had come momentarily to an end. But this wasn’t the end of the global economic system.

The trade system that followed was global, too, with high levels of trade – but it took place on terms that often allowed developing countries to protect their industries. Because free traders regard protectionism as inherently bad, the success of this postwar system has been largely under-recognised.

Over the course of the 1930s and 40s, liberals – John Maynard Keynes among them – who had previously regarded departures from free trade as “an imbecility and an outrage” began to lose their religion. “The decadent international but individualistic capitalism, in the hands of which we found ourselves after the war, is not a success,” Keynes found himself writing in 1933. “It is not intelligent, it is not beautiful, it is not just, it is not virtuous – and it doesn’t deliver the goods. In short, we dislike it, and we are beginning to despise it.” He claimed sympathies “with those who would minimise, rather than with those who would maximise, economic entanglement among nations,” and argued that goods “be homespun whenever it is reasonably and conveniently possible”.

The international systems that chastened figures such as Keynes helped produce in the next few years – especially the Bretton Woods agreement and the General Agreement on Tariffs and Trade (GATT) – set the terms under which the new wave of globalisation would take place.

The key to the system’s viability, in Rodrik’s view, was its flexibility – something absent from contemporary globalisation, with its one-size-fits-all model of capitalism. Bretton Woods stabilised exchange rates by pegging the dollar loosely to gold, and other currencies to the dollar. GATT consisted of rules governing free trade – negotiated by participating countries in a series of multinational “rounds” – that left many areas of the world economy, such as agriculture, untouched or unaddressed. “GATT’s purpose was never to maximise free trade,” Rodrik writes. “It was to achieve the maximum amount of trade compatible with different nations doing their own thing. In that respect, the institution proved spectacularly successful.”

Partly because GATT was not always dogmatic about free trade, it allowed most countries to figure out their own economic objectives, within a somewhat international ambit. When nations contravened the agreement’s terms on specific areas of national interest, they found that it “contained loopholes wide enough for an elephant to pass”, in Rodrik’s words. If a nation wanted to protect its steel industry, for example, it could claim “injury” under the rules of GATT and raise tariffs to discourage steel imports: “an abomination from the standpoint of free trade”. Such loopholes were useful for countries that were recovering from the war and needed to build up their own industries via tariffs – duties imposed on particular imports. Meanwhile, from 1948 to 1990, world trade grew at an annual average of nearly 7% – faster than during the post-communist years, which we think of as the high point of globalisation. “If there was a golden era of globalisation,” Rodrik has written, “this was it.”
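To give a sense of what that pace of expansion means, growth of 7% a year compounded over 1948–1990 multiplies trade roughly seventeen-fold; a quick illustrative calculation (the 7% figure is the article’s, the compounding arithmetic is mine):

```python
# Compound growth: world trade expanding ~7% a year, 1948-1990.
rate = 0.07
years = 1990 - 1948              # 42 years
multiple = (1 + rate) ** years   # total expansion over the period
print(f"~{multiple:.0f}x over {years} years")  # ~17x over 42 years
```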

GATT, however, failed to cover many of the countries in the developing world. These countries eventually created their own system, the United Nations conference on trade and development (UNCTAD). Under this rubric, many countries – especially in Latin America, the Middle East, Africa and Asia – adopted a policy of protecting homegrown industries by replacing imports with domestically produced goods. It worked poorly in some places – India and Argentina, for example, where the trade barriers were too high, resulting in factories that cost more to set up than the value of the goods they produced – but remarkably well in others, such as east Asia, much of Latin America and parts of sub-Saharan Africa, where homegrown industries did spring up. Though many later economists and commentators would dismiss the achievements of this model, it theoretically fit Larry Summers’s recent rubric on globalisation: “the basic responsibility of government is to maximise the welfare of citizens, not to pursue some abstract concept of the global good.”

The critical turning point – away from this system of trade balanced against national protections – came in the 1980s. Flagging growth and high inflation in the west, along with growing competition from Japan, opened the way for a political transformation. The elections of Margaret Thatcher and Ronald Reagan were seminal, putting free-market radicals in charge of two of the world’s five biggest economies and ushering in an era of “hyperglobalisation”. In the new political climate, economies with large public sectors and strong governments within the global capitalist system were no longer seen as aids to the system’s functioning, but impediments to it.

Not only did these ideologies take hold in the US and the UK; they seized international institutions as well. GATT gave way to the World Trade Organization (WTO) in 1995, and the new rules the body negotiated began to cut more deeply into national policies. Its international trade rules sometimes undermined national legislation. The WTO’s appellate court intervened relentlessly in member nations’ tax, environmental and regulatory policies, including those of the United States: the US’s fuel emissions standards were judged to discriminate against imported gasoline, and its ban on imported shrimp caught without turtle-excluding devices was overturned. If national health and safety regulations were stricter than WTO rules necessitated, they could only remain in place if they were shown to have “scientific justification”.

The purest version of hyper-globalisation was tried out in Latin America in the 1980s. Known as the “Washington Consensus”, this model usually involved loans from the IMF that were contingent on those countries lowering trade barriers and privatising many of their nationally held industries. Well into the 1990s, economists were proclaiming the indisputable benefits of openness. In an influential 1995 paper, Jeffrey Sachs and Andrew Warner wrote: “We find no cases to support the frequent worry that a country might open and yet fail to grow.”

But the Washington Consensus delivered poor results: most countries did worse than before. Growth faltered, and citizens across Latin America revolted against attempted privatisations of water and gas. In Argentina, which followed the Washington Consensus to the letter, a grave crisis came to a head in 2002, precipitating an economic collapse and massive street protests that forced out the government that had pursued privatising reforms. Argentina’s revolt presaged a left-populist upsurge across the continent: from 1999 to 2007, left-wing leaders and parties took power in Brazil, Venezuela, Bolivia and Ecuador, all of them campaigning against the Washington Consensus on globalisation. These revolts were a preview of the backlash of today.

Rodrik – perhaps the contemporary economist whose views have been most amply vindicated by recent events – was himself a beneficiary of protectionism in Turkey. His father’s ballpoint pen company was sheltered under tariffs, and achieved enough success to allow Rodrik to attend Harvard in the 1970s as an undergraduate. This personal understanding of the mixed nature of economic success may be one of the reasons why his work runs against the broad consensus of mainstream economics writing on globalisation.

“I never felt that my ideas were out of the mainstream,” Rodrik told me recently. Instead, it was that the mainstream had lost touch with the diversity of opinions and methods that already existed within economics. “The economics profession is strange in that the more you move away from the seminar room to the public domain, the more the nuances get lost, especially on issues of trade.” He lamented the fact that while, in the classroom, the models of trade discuss losers and winners, and, as a result, the necessity of policies of redistribution, in practice, an “arrogance and hubris” had led many economists to ignore these implications. “Rather than speaking truth to power, so to speak, many economists became cheerleaders for globalisation.”

In his 2011 book The Globalization Paradox, Rodrik concluded that “we cannot simultaneously pursue democracy, national determination, and economic globalisation.” The results of the 2016 elections and referendums amply bore out this thesis, with millions voting to push back, for better or for worse, against the campaigns and institutions that promised more globalisation. “I’m not at all surprised by the backlash,” Rodrik told me. “Really, nobody should have been surprised.”

But what, in any case, would “more globalisation” look like? For the same economists and writers who have started to rethink their commitments to greater integration, it doesn’t mean quite what it did in the early 2000s. It’s not only the discourse that’s changed: globalisation itself has changed, developing into a more chaotic and unequal system than many economists predicted. The benefits of globalisation have been largely concentrated in a handful of Asian countries. And even in those countries, the good times may be running out.

Statistics from Global Inequality, a 2016 book by the development economist Branko Milanović, indicate that in relative terms the greatest benefits of globalisation have accrued to a rising “emerging middle class”, based preponderantly in China. But the cons are there, too: in absolute terms, the largest gains have gone to what is commonly called “the 1%” – half of whom are based in the US. Economist Richard Baldwin has shown in his recent book, The Great Convergence, that nearly all of the gains from globalisation have been concentrated in six countries.

Barring some political catastrophe in which right-wing populism continued to gain ground – a scenario in which globalisation would be the least of our problems, and one Wolf admitted he was “not at all sure” could be ruled out – globalisation was always going to slow; in fact, it already has. One reason, says Wolf, is that “a very, very large proportion of the gains from globalisation – by no means all – have been exploited. We have a more open world economy to trade than we’ve ever had before.” Citing The Great Convergence, Wolf noted that supply chains have already expanded, and that future developments, such as automation and the use of robots, looked likely to undermine the promise of a growing industrial workforce. Today, he argued, the political priorities are less about trade and more about the challenge of retraining workers, as technology renders old jobs obsolete and transforms the world of work.

Rodrik, too, believes that globalisation, whether reduced or increased, is unlikely to produce the kind of economic effects it once did. For him, this slowdown has something to do with what he calls “premature deindustrialisation”. In the past, the simplest model of globalisation suggested that rich countries would gradually become “service economies”, while emerging economies picked up the industrial burden. Yet recent statistics show the world as a whole is deindustrialising. Countries that one would have expected to have more industrial potential are going through the stages of automation more quickly than previously developed countries did, and thereby failing to develop the broad industrial workforce seen as a key to shared prosperity.

For both Rodrik and Wolf, the political reaction to globalisation carried deep uncertainty. “I really have found it very difficult to decide whether what we’re living through is a blip, or a fundamental and profound transformation of the world – at least as significant as the one brought about by the first world war and the Russian revolution,” Wolf told me. He agreed with economists such as Summers that shifting away from the earlier emphasis on globalisation had now become a political priority; that to pursue still greater liberalisation was like showing “a red rag to a bull” in terms of what it might do to the already compromised political stability of the western world.

Rodrik pointed to a belated emphasis, both among political figures and economists, on the necessity of compensating those displaced by globalisation with retraining and more robust welfare states. But pro-free-traders had a history of cutting compensation: Bill Clinton signed NAFTA into law, but failed to expand safety nets. “The issue is that the people are rightly not trusting the centrists who are now promising compensation,” Rodrik said. “One reason that Hillary Clinton didn’t get any traction with those people is that she didn’t have any credibility.”

Rodrik felt that economics commentary failed to register the gravity of the situation: that there were increasingly few avenues for global growth, and that much of the damage done by globalisation – economic and political – is irreversible. “There is a sense that we’re at a turning point,” he said. “There’s a lot more thinking about what can be done. There’s a renewed emphasis on compensation – which, you know, I think has come rather late.”

https://www.theguardian.com/world/2017/jul/14/globalisation-the-rise-and-fall-of-an-idea-that-swept-the-world

How Effective is Economic Theory?


June 28, 2017

How Effective is Economic Theory?

by Arnold Kling
In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.–Arnold Kling
 

In 1980, following a decade of high inflation and unemployment — a combination that economists had previously thought to be impossible over extended periods — The Public Interest ran a special issue titled “The Crisis in Economic Theory.” Today, there is little talk of a crisis in economic theory. But in the past decade, we have experienced a financial crisis and subsequent decline in employment that also followed a path economists had previously thought to be impossible. Economists seem more confident than they did in 1980, but are they more deserving of confidence? If anything, some of the questions confronting economics should run deeper now than then.

In fact, the basic question of how economics should understand itself now demands urgent attention. Since the American Economic Association was founded in the 1880s, economists in this country have sought special status as scientifically grounded policy experts. Over the past 50 years, in particular, they have largely attained that status. Whether they deserve it is less clear. And what a scientific economics would really look like is not nearly as clear as some economists now imagine, either.

And it’s not just the practice: Even the ideal of economics as a science now demands serious scrutiny. If economic theory is not in crisis, maybe it deserves to be.

EFFECTIVE THEORY

Instead of “science,” we might want to think about economics in terms of “effective theory.” As explained by Harvard physicist Lisa Randall,

Effective theory is a valuable concept when we ask how scientific theories advance, and what we mean when we say something is right or wrong. Newton’s laws work extremely well. They are sufficient to devise the path by which we can send a satellite to the far reaches of the Solar System and to construct a bridge that won’t collapse. Yet we know quantum mechanics and relativity are the deeper underlying theories. Newton’s laws are approximations that work at relatively low speeds and for large macroscopic objects. What’s more is that an effective theory tells us precisely its limitations — the conditions and values of parameters for which the theory breaks down. The laws of the effective theory succeed until we reach its limitations when these assumptions are no longer true or our measurements or requirements become increasingly precise.

Whereas the term “science” often is used to connote absolute truth in an almost religious sense, effective theory is provisional. When we are certain that in a particular context a theory will work, then and only then is the theory effective.

Effective theory consists of verifiable knowledge. To be verifiable, a finding must be arrived at by methods that are generally viewed as robust. Any researcher who tries to replicate a finding using appropriate methods should be able to confirm it. The strongest confirmation of the effectiveness of a theory comes from prediction and control. Lisa Randall’s example of sending a spacecraft to the far reaches of the solar system illustrates such confirmation.

This notion of effective theory sets a useful standard for considering economics. Economists are not without knowledge. We know that restrictions on trade tend to help narrow interests at the expense of broader prosperity. We know that market prices are important for coordinating specialization and division of labor in a complex economy. We know that the profit incentive promotes the introduction of improved products and processes, and that our high level of well-being results from the cumulative effect of such improvements. We know that government control over prices and production, as in communist countries, leads to inefficiency and corruption. We know that the laws of supply and demand tend to frustrate efforts to make goods more “affordable” by subsidizing them or to lower “costs” by fixing prices.
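The last of these claims – that price controls frustrate themselves – can be made concrete with a toy linear market; a hedged sketch, in which the demand and supply curves and the price ceiling are invented purely for illustration:

```python
# Toy linear market: quantity demanded falls with price, supplied rises.
def demand(p):
    return 100 - 2 * p   # quantity demanded at price p

def supply(p):
    return 3 * p         # quantity supplied at price p

# The market clears where demand equals supply: 100 - 2p = 3p, so p = 20.
equilibrium_price = 20
assert demand(equilibrium_price) == supply(equilibrium_price) == 60

# A legal price ceiling below equilibrium creates a shortage:
ceiling_price = 10
shortage = demand(ceiling_price) - supply(ceiling_price)
print(shortage)  # 50: demand outstrips supply at the capped price
```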

But policymakers have goals that go far beyond or run counter to such basic principles. They want to steer the economy using fiscal stimulus. They want to shape complex and important markets, including those of health insurance and home mortgages. It is doubtful that the effectiveness of economic theory is equal to such tasks.

Most scholarly research in economics is ultimately motivated by the unrealistic goal of providing effective theory to implement such technocratic objectives. But the resulting economic theory cannot be applied with the same confidence as Newtonian physics. Even worse is the fact that economists, unlike physicists, are not clear about the limits of the effectiveness of their theories. In short, when it comes to effective theory, economists promise more than they can deliver.

Over the last 50 years, questions about the effectiveness of economic theory have revolved around five interlocking subjects in particular: mathematical modeling, homo economicus, objectivity, testing procedures, and the particular status of the sub-discipline of macroeconomics.

With mathematical modeling, the question concerns the tradeoff between rigor and relevance. Mathematical models are considered more rigorous than verbal arguments, but the process of modeling serves to narrow the scope of economic thinking. Are mathematical economists ignoring important topics and missing insights that a nonmathematical economics might explore?

With homo economicus, the question concerns the advantages and disadvantages of assuming that the individual behaves with economic rationality. This assumption appears to be a very powerful tool for prediction and control of economic behavior. But what are the limits to its applicability?

With objectivity, the question concerns the relationship between facts and values, or between analysis and policy preferences. Physicists and astronomers are almost never accused of letting a partisan political outlook affect their views on the phenomena that they study. Can economists aspire to that same level of objectivity?

With testing procedures, the question concerns the ability of economists to bring persuasive evidence to bear on questions of theory. In the history of the natural sciences, hypotheses have been confirmed or discarded on the basis of decisive experiments. How can economists obtain verifiable knowledge when dealing with phenomena that are not as readily subject to experimental methods?

With macroeconomics, the question is whether economists really can predict and control the overall behavior of unemployment and inflation in a nation’s economy. Economists became optimistic about their prospects for doing this during periods of favorable economic performance, such as the mid-1960s or the two decades that preceded the financial crisis of 2008. But these episodes were rudely interrupted by the unexpected convulsions of the Great Stagflation of the 1970s and the Great Recession that followed the financial crisis. Is macroeconomic theory a success or a failure?

Economists’ views about these questions have shifted over the past five decades. It is interesting to compare what they were saying in 1966 with what they were saying in 1980 and what they have been saying more recently. Such comparisons might help us consider what we should expect of economics in the years to come.

TECHNOCRATIC OPTIMISM

In 1966, several economists took up the five interlocking questions noted above in a book of essays entitled The Structure of Economic Science, edited by Sherman Roy Krupp. In an essay on the subject of mathematical modeling, William Baumol summarized the tradeoff between rigor and relevance.

Strong mathematical results must, therefore, be viewed by the practitioner with somewhat mixed feelings. At best they represent a relevant revelation about his problem; at worst they may cast doubt upon the appropriateness of his assumptions.

As of 1966, there were still economists who opposed mathematical modeling, but they were in retreat and aging out of the profession. As Martin Bronfenbrenner put it in his essay in the Krupp volume,

Until perhaps a generation ago, it might have been necessary to justify the use of mathematics in economics and the other social sciences….The shoe now threatens, indeed, to pass to the other foot, with the non-mathematical practitioner laboring under a darkening suspicion…that nothing he does will be worthwhile unless formulated mathematically and subjected to statistical testing.

On the issue of relevance, Bronfenbrenner wrote that any mathematical model requires what he called an “applicability theorem,” showing that the conditions under which it is true obtain in the real world. “And since it relates to the actual world, an applicability theorem may be highly probable but is never absolutely certain.”

If Lisa Randall is correct, then physicists usually know when Newton’s theories are effective and when they are not. In contrast, economists often do not know when their theories are effective. How many firms must there be for the theory of “perfect competition” to be applicable? What assumptions are required in order for workers’ wages to be tied closely to productivity, and do these assumptions hold in practice?


On the topic of homo economicus, James Buchanan wrote in his contribution to the volume that “the central predictive proposition of economics….amounts to saying that individuals, when confronted with effective choice, will choose more rather than less.”

Previously, Milton Friedman had argued in favor of the stronger definition of homo economicus, in which individuals and firms optimize over their choices. In his book Essays in Positive Economics, published in 1953, Friedman argued that this assumption could be justified on the basis of its ability to predict economic outcomes. He developed a famous analogy in which an observer is asked to predict the behavior of a billiard player. Although the billiard player does not employ the laws of physics, Friedman argued that the observer can use the laws of physics to predict how the billiard player will line up his shot. By analogy, Friedman argued that the economist can use mathematical optimization models to predict how consumers and firms will behave.

Nobel Laureate in Economics–Milton Friedman, University of Chicago

As of 1966, the profession had largely defaulted to Friedman’s view. The main alternative was Herbert Simon’s “bounded rationality” or “satisficing,” in which economic agents are assumed to stop short of total optimization. Although many economists were intrigued by Simon’s ideas, which helped him to win a Nobel Prize in 1978, the overwhelming body of economic research ignored them and continued to assume optimization.

Concerning objectivity, many economists believed in a form of positivism, in which facts could be separated from values. The positivist position is that technical expertise is separate from preferences. The public expresses preferences about outcomes, and the technical expert then prescribes policies to achieve those outcomes. Buchanan argued firmly against economists interjecting their own policy preferences into their work:

[The economist] verges dangerously on irresponsible action when he allows his zeal for social progress, as he conceives this, to take precedence over his search for and respect of scientific truth, as determined by the consensus of his peers….If the economist can learn from his colleagues in the physical sciences…that the respect for truth takes precedence above all else and that it is the final value judgment that must pervade all science, he may, yet, rescue the discipline from its currently threatened rush into absurdity, oblivion, and disrepute.

Such an attitude is also essential to distinguishing between technical and political disputes in economics. Bronfenbrenner wrote,

Does not the economists’ notorious failure to agree suggest or prove the “prescientific” character of economics as such? I follow my professional bias (vested interest?) on the negative side of this proposition. Much of the disagreement is inevitable since it centers around economic values and policy recommendations and involves normative rather than positive economics….

This is not to deny the existence of disagreements in positive economics, of which there are plenty….Yet we have faith that most if not all such positive disagreements will eventually be resolved, as parallel disagreements have been resolved in the natural sciences.

Like many other economists at that time, Buchanan and Bronfenbrenner believed that values and scientific investigation could be separate, and that economists are on firmer ground when they stick with scientific investigation.

With regard to testing procedures, there were doubts expressed in 1966 by two heterodox thinkers, Emile Grunberg and Kenneth Boulding. Grunberg wrote that, “In fact, the history of the social sciences shows no clearcut case in which a theory has been disconfirmed by contradictory evidence.”

He went on to suggest that the reason for this is that social scientists work with open systems, in which the number of factors that could affect an outcome is intractably large. In contrast, physical scientists are able to work with closed systems in which every factor can be accounted for. Boulding wrote,

All predictions, even in the physical sciences, are really conditional predictions. They say that if the system remains unchanged and the parameters of the system remain unchanged, then such and such will be the state of the system at certain times in the future. If the system does change, of course the prediction will be falsified, and this is what happens in social systems all the time….What this means is that the failure of prediction in social systems does not lead to the improvement of our knowledge of these systems, simply because there is nothing there to know….[T]he possibility that our knowledge of society is sharply limited by the unknowable is something that must be taken into consideration.

While these comments were prescient, they were ignored at the time they were written. Instead, economists were confident that their statistical techniques were capable of sifting among hypotheses to find reliable ones. In fact, the mid-1960s was when economists were particularly optimistic about econometrics, and especially the technique of multiple regression. Multiple regression was thought to be a way to achieve with non-experimental data the ideal of controlling for extraneous influences.

For example, suppose that you want to examine whether private schools outperform public schools, using student test scores as the metric. However, you know that many factors affect test scores, including each student’s ability and family environment. With multiple regression, the investigator introduces variables representing these other factors into the statistical analysis, and in theory this means that those variables are controlled for.
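The logic of controlling for confounders can be sketched with simulated data (all numbers and variable names here are hypothetical, chosen only to illustrate the technique):

```python
import numpy as np

# Hypothetical setup: test scores depend on student ability and family
# income, NOT on school type -- but private-school attendance is
# correlated with income, so a naive comparison is misleading.
rng = np.random.default_rng(0)
n = 500
ability = rng.normal(size=n)
income = rng.normal(size=n)
private = (income + rng.normal(size=n) > 0).astype(float)  # correlated with income
score = 10 * ability + 5 * income + rng.normal(size=n)     # true private effect: zero

# Naive regression of score on school type alone: biased upward.
X_naive = np.column_stack([np.ones(n), private])
b_naive, *_ = np.linalg.lstsq(X_naive, score, rcond=None)

# Multiple regression: add the confounders as control variables.
X_ctrl = np.column_stack([np.ones(n), private, ability, income])
b_ctrl, *_ = np.linalg.lstsq(X_ctrl, score, rcond=None)

print(f"naive 'private school effect':     {b_naive[1]:.2f}")
print(f"effect after controlling for both: {b_ctrl[1]:.2f}")
```

With the controls included, the estimated private-school coefficient falls to roughly zero, its true value in this simulation; without them, the income channel masquerades as a school effect.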

Multiple regression requires extensive computation, so it was largely impractical before the advent of computers. By the late 1960s, many universities had mainframe computers that could handle these calculations, and multiple regression was rising in popularity. Multiple regression and computers were a particular boon to macroeconomists. As of 1966, many economists were very optimistic that large-scale macroeconometric models of the economy would prove useful in prediction and control.

In fact, as of 1966, the consensus Keynesian view of macroeconomics was so widely accepted that macroeconomics was not controversial. In the Krupp volume, the special topic of macroeconomics only came up in one essay, by Fritz Machlup. He wrote,

Anyone who has done empirical work with national-income statistics or foreign-trade statistics is aware of thousands and thousands of arbitrary decisions that the statisticians had to make in executing the operations dictated or suggested by one of the large variety of definitions accepted for the terms in question. One cannot expect with any confidence that any of the theories connecting the pure constructs of the relevant aggregative magnitudes will be borne out by an examination of their operational counterparts.

Although practically a footnote in the ’60s, this concern was a sign of things to come.

THE CRISIS OF 1980

Sociologist Daniel Bell

Fourteen years after the Krupp volume, the situation had changed dramatically. In 1980, when The Public Interest put out its special issue on “The Crisis in Economic Theory,” the title hardly needed to be justified. As Daniel Bell put it in his contribution,

Today, there is general agreement that government economic management and policy is in disarray. Many economists argue that prescriptions derived from previous historical situations no longer apply, but there is little consensus as to new prescriptions.

By this time, macroeconomics was the most troubled sub-discipline of economics. Among professional economists as well as laymen, Keynesian economics had been discredited by the Great Stagflation, in which unemployment and inflation both soared to levels far above those seen in the 1960s. This experience suggested that the Keynesians could neither predict nor control the economy.

And yet, in terms of our first methodological question, concerning the role of mathematics, 1980 might have been the high point for the belief that insight would come from greater mathematical sophistication. I call the late 1970s, which is when I did my graduate work, the era of “peak math.”

In the 1970s, two of the most prestigious journals for a young economist were Econometrica and the Journal of Economic Theory, which published the most mathematically difficult articles. In the 1970s, the five recipients of the John Bates Clark Medal, a highly prestigious award given to an American economist under 40, collectively had published 25 articles in Econometrica and nine in the Journal of Economic Theory by the year they received the award (they published more in those journals subsequently).

In the 1980 Public Interest volume, two of the premier mathematical economists of the time, Kenneth Arrow and Frank Hahn, both used their essays to provide a perspective on macroeconomics. They argued that it was possible to reconcile microeconomic reasoning with Keynesian macroeconomic theory. But there was no discussion in the Public Interest volume of the role of math per se.

The idea of homo economicus was questioned by Bell:

Since men act variously by habit and custom, irrationally or zealously, by conscious design to change institutions or redesign social arrangements, there is no intrinsic order, there are no “economic Laws” constituting the “structure” of the economy; there are only different patterns of historical behavior. Thus, economics, and economic theory, cannot be a “closed system.”

However, within the economics profession, the fashion was quite the opposite. Many economists, represented in the Public Interest volume by Mark Willes, thought that the problem with Keynesian economics was that it did not impute enough rationality to economic man. Following Robert Lucas, Jr., these economists argued that macroeconomic models had to assume rationality in the way that individuals formed expectations about the future. Models that employed rational expectations also happened to be mathematically difficult; one of Lucas’s most important papers appeared in the Journal of Economic Theory in 1972.

There was considerable distance between the macroeconomic views of Willes and those of Arrow and Hahn, and even more distance from those of Paul Davidson, who represented the “post-Keynesian” (further to the left) school in the issue. And yet nowhere in the volume is there a discussion of the issue of bias.

Elsewhere, economists were becoming aware of the bias in macroeconomics. Robert Hall coined the term “freshwater economics vs. saltwater economics” to characterize the contrast between the views prevalent at the Universities of Chicago, Minnesota, and Rochester on the one hand, and those prevalent at Harvard, MIT, Yale, Stanford, and Berkeley on the other. The two schools of thought had both different beliefs about how the economy works and different ideological predilections. Freshwater economists believed that attempts to control unemployment and output using monetary and fiscal policy were ineffective, and they also tended to believe in conservative economic policy. Saltwater economists took the opposite view. However, neither would have admitted that their political inclinations had any effect on their beliefs about the effectiveness of discretionary fiscal and monetary policy.

Regarding the question of testing procedures, the economics profession was roiled in that period by the “Lucas critique” as applied to macroeconometric models. In 1976, Lucas argued that, under rational expectations, a model that had a robust statistical fit with the past could nonetheless break down completely going forward.

The Lucas critique grabbed the spotlight in the late 1970s and beyond. However, another critique would prove to have greater significance. In 1983, Edward Leamer published an article titled “Let’s Take the Con Out of Econometrics.” He wrote,

The econometric art as it is practiced at the computer terminal involves fitting many, perhaps thousands, of statistical models. One or several that the researcher finds pleasing are selected for reporting purposes. This searching for a model is often well intentioned, but there can be no doubt that such a specification search invalidates the traditional theories of inference.

When considering, for instance, whether private schools or public schools are more effective, the investigator can choose which factors to control for and how to specify the variables. In practice, each investigator iterates through many plausible choices before selecting the one to report. The economist behaves like an experimenter who is able to tweak the conditions of the experiment to obtain a desired result. This is not conducive to reliability.
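Leamer's point can be made concrete with a hypothetical simulation: generate an outcome that is pure noise, search across many candidate regressors, and report only the best-fitting specification. The "discovered" relationship looks impressive even though nothing is there.

```python
import numpy as np

# Hypothetical data: the outcome is pure noise, unrelated to every
# candidate regressor. A specification search still finds a "result."
rng = np.random.default_rng(1)
n, k = 100, 100                     # 100 observations, 100 candidate regressors
X = rng.normal(size=(n, k))
y = rng.normal(size=n)              # outcome unrelated to all of them

best_t = 0.0
for j in range(k):                  # the "specification search"
    x = X[:, j]
    b = x @ y / (x @ x)             # slope of a no-intercept simple regression
    resid = y - b * x
    se = np.sqrt(resid @ resid / (n - 1) / (x @ x))
    best_t = max(best_t, abs(b / se))

# The best t-statistic across specifications tends to look "significant,"
# but the usual inference theory assumed ONE specification chosen in advance.
print(f"best |t| across {k} specifications: {best_t:.2f}")
```

The reported statistic is the maximum over many draws, not a single draw, which is exactly why the traditional theories of inference no longer apply.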

All of these were serious challenges. But in 1980, the most significant by far was the apparent failure of macroeconomics to provide a reliable means of predicting and directing the behavior of the economy. This was the essence of the crisis.

AFTER THE CRISIS

As this is being written, a half-century after the Krupp volume, mathematical modeling is still the standard in the major economics journals. But there is nothing like the same faith in higher mathematics that characterized the “peak math” era of the 1970s. The five Clark medalists from 2011-2015 published a total of four papers in Econometrica and none in the Journal of Economic Theory.

Economists no longer insist that homo economicus be modeled as rational. Instead, there is a popular field known as behavioral economics, which studies the biases and heuristics that affect individual decision-making and attempts to trace through the economic implications of these deviations from rationality.

Economist Paul Romer

Economists continue to preach the positivist ideal of scientific objectivity, without questioning whether it is achievable. However, the problems of bias are occasionally aired. In 2015, Paul Romer wrote a provocative essay entitled “Mathiness in the Theory of Economic Growth.” Despite its title, it is by no means a criticism of the use of math in economics. Rather, Romer complained that some economists are producing biased theory in mathematical guise.

The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.

Recall from the Krupp volume the phrase “applicability theorem,” meaning the manner of connecting the mathematical model to real-world observables. In physics, this process seems to be straightforward. But in economics, there is room for disagreement about the circumstances to which a mathematical model applies. Romer’s view is that certain economists, primarily of the freshwater school, are guilty of abusing assumptions about how their equations connect with reality. They make interpretations that suit their political biases, but otherwise their interpretations are not justified.

Others take Romer’s concern much further. For example, Noah Smith wrote of Romer,

He singles out Lucas, Prescott and a few others for having tenuous or sloppy links between mathematical elements and the real world. But from what I can see, such tenuous and sloppy links are the rule in macro fields.

There are two problems embedded in these mathiness critiques. One problem, emphasized by both Romer and Smith, is that theorists produce papers with what we might term false applicability theorems. That is, because the concepts or assumptions clearly do not relate to the real world, they produce insights that have no practical value. The second problem, emphasized by Romer, is that the insights are not only inapplicable to the real world but are driven by personal biases of the authors, not by an attempt to arrive at scientific truth.

Economists’ testing procedures have changed dramatically in recent decades. Rather than rely on multiple regression, economists often look for “natural experiments.” For example, in assessing the quality of charter schools, some economists have used the fact that in some cases charter-school students are selected by lottery from qualified applicants. This creates a “natural experiment,” in which the students who were chosen in the lottery to attend a charter school can be compared to presumably similar students who participated in the lottery but were not chosen and therefore wound up in public schools.
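The appeal of the lottery design is that randomization makes winners and losers comparable, so a simple difference in means is an unbiased estimate. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np

# Hypothetical applicant pool: students differ in unobserved ability,
# but the lottery assigns charter seats at random, so winners and
# losers are comparable groups.
rng = np.random.default_rng(2)
n = 10_000
ability = rng.normal(size=n)
won = rng.random(size=n) < 0.5           # random lottery draw
true_effect = 3.0                        # assumed charter-school effect
score = 50 + 10 * ability + true_effect * won + rng.normal(size=n)

# Because assignment is random, a difference in mean scores between
# lottery winners and losers estimates the charter effect directly,
# with no need to control for ability.
estimate = score[won].mean() - score[~won].mean()
print(f"lottery-based estimate of the charter effect: {estimate:.2f}")
```

The estimate lands near the assumed effect of 3.0 even though ability, the dominant driver of scores, is never observed or controlled for.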

In addition, some economists now use actual experiments. They put experimental subjects in situations that involve economic decision-making and test how subjects respond under varying experimental conditions. This approach has been particularly helpful in behavioral economics, where it borrows key techniques from experimental psychology. Note, however, that neither actual experiments nor natural experiments are readily applicable to macroeconomics. The problem of arriving at reliable tests of macroeconomic hypotheses is still unsolved.

Unlike in 1980, however, macroeconomists today are fairly well satisfied with their field, in spite of (or perhaps because of) the inability to rigorously test their theories. The “macro wars” of the 1970s gave way to a consensus view that monetary policy could stabilize inflation, and that this in turn would limit the severity of recessions. The Great Moderation that prevailed from the mid-1980s until 2008 appeared to confirm this view.

Although this complacent consensus was clearly falsified by the deep recession and painfully slow recovery that followed the financial crisis, economists migrated fairly easily to a new consensus about this episode. In this view, the loss of wealth due to the collapse of housing prices caused households to sharply curtail spending at the same time that the extreme leverage of some major financial institutions and the tight interconnections among financial firms caused markets to “seize up” when investors lost confidence in mortgage securities, leading to widespread declines in the availability of credit. The fact that short-term interest rates dropped to the “zero bound” in the wake of the crisis then forced policymakers to undertake creative steps to prop up demand, including “quantitative easing” by the Federal Reserve and the stimulus enacted early in the Obama administration. Had these steps not been taken, the crisis would have been far worse, with unemployment perhaps approaching the levels reached in the Great Depression.

We cannot re-run history without the stimulus and without quantitative easing. There is no way to test the claim that there would have been another Great Depression without those policies. But there are serious reasons for disputing this consensus explanation. In my own view, for instance, there is no simple monetary or fiscal solution to unemployment. Instead, I believe that unemployment results from the fragility of the intricate patterns of specialization and trade that emerge in the economy. Sometimes, patterns of specialization that were profitable yesterday are not profitable today, and some people will be without jobs until new patterns can be discovered. In this view, the path that the economy took in 2008 and its aftermath was not decisively affected by fiscal and monetary policy.

But my views are heterodox. In the academic community, macroeconomics is not nearly as contentious or acrimonious as it was when the economy took an unexpected turn in the 1970s — despite the equally unexpected turn it has taken in the last decade.

THINGS TO COME

Although economics is not now in a state of abject crisis, as it was in the late ’70s, it is nonetheless likely to be entering a period of great change on all five of the disciplinary challenges we have been tracing.

First, there is reason to believe that in the coming years economists will reluctantly come to recognize the importance of mental-cultural factors as determinants of economic outcomes, reducing the power of mathematical modeling as an approach. There is really no avoiding some movement in the direction of understanding economics as an interpretive discipline, a little like history. In trying to interpret the decline in labor-force participation of working-age males over the past two decades, or to understand the phenomenon of many retail firms offering special deals on “Black Friday,” there is certainly some room to use mathematical models to aid the analysis. But they are neither necessary for coming up with interpretations nor sufficient to render one interpretation superior to all others. In examining subjects like these, economists could greatly reduce their usage of mathematical expression without losing anything in terms of effective theory.

In the 1980 Public Interest volume, Israel Kirzner wrote,

Economic theory needs to be reconstructed so as to recognize at each stage the manner in which changes in external phenomena modify economic activity strictly through the filter of the human mind. Economic consequences, that is, dare not be linked functionally and mechanically to external changes, as if the consequences emerge independently of the way in which the external changes are perceived, of the way in which these changes affect expectations, and of the way in which these changes are discovered at all. [Emphasis in original.]

Kirzner is a practitioner of “Austrian” economics, which was heterodox then and remains so now. But a number of “Austrian” ideas are likely to gradually penetrate the orthodoxy, particularly the emphasis on the role of non-material factors in affecting economic phenomena.

Social scientists are inclined toward materialistic explanations. They want to explain economic phenomena on the basis of resource endowments and technical capacities. They want to explain voting behavior on the basis of demographic and economic factors. The alternative to this materialistic reductionism is to say that ideas matter. It turns out that one cannot explain the tremendous rise in economic growth in the past two centuries on the basis of capital accumulation alone. The remarkable gains in the standard of living have been mostly due to the development and application of new ideas for products and production methods.

Another non-material factor is cultural norms and social institutions. One cannot explain differences in wealth across countries simply on the basis of resources. It is not that South Korea is resource-rich relative to North Korea, or that Israel is resource-rich relative to its Arab neighbors. South Korea and Israel have political and cultural institutions that are friendlier toward enterprise, and that is what accounts for their relatively strong economic performance.

Economists prefer to examine people as individuals. However, individuals get their ideas mostly from other people. The world of mental phenomena is predominantly a cultural world. And these mental-cultural factors in social behavior make economics less deterministic and less individualistic than many economists would prefer it to be. Like it or not, this reduces the advantage of mathematical modeling relative to verbal reasoning.

Another reason to suppose that mathematical modeling will wane is the shifting media landscape. As of now, academic economists still must publish in journals to be successful, and these require mathematical modeling. However, in the age of the internet, the print journal is a very inefficient forum for disseminating ideas. As economists increasingly make use of other forums, including social media, this may break the lock that print journals currently hold on career prospects. That in turn could facilitate more variety in the means of expression, breaking the monopoly currently held by mathematical modeling.

Second, and very much related to the likely decline in the prominence of modeling, economists are likely also to reluctantly come to recognize that, because cultural factors matter, the simple model of the individual homo economicus has only limited applicability.

Psychologist Daniel Kahneman, Presidential Medal of Freedom Awardee

The biggest threat to the assumption of homo economicus is not alternative theories of individual psychology, such as those in behavioral economics. In fact, behavioral economics has been caught up in what in psychology is known as the “replication crisis.” Rather, the need to go beyond the assumption of homo economicus will mostly arise from a recognition of the importance of culture as a determinant of behavior. Economists will need to see economic decisions as embedded in cultural circumstances. In order to understand economic phenomena, we will have to pay attention to the role of beliefs and social norms.

Because ideas and cultural context matter, there are many potential causal factors in economic phenomena. Those curmudgeons who argued that economics is not a “closed system” were correct. It is up to each economist to choose which causal factors to study and which to ignore. Unfortunately, this means that it is possible for different economists to arrive at — and to stick with — different conclusions based on predilections.

And this points to the third plausible development in economic theory. There is a very real possibility that over the next 20 years academic economics will congeal into a discipline, like sociology today, which is definitively shaped by an ideologically driven point of view. Among highly educated people, ideological polarization is increasing. Economists have always had their biases about which sorts of theories seemed reasonable; some of these biases are idiosyncratic, as when one economist is inclined to believe that labor demand responds very little to a change in wage rates and another is inclined to believe that labor demand responds a great deal. But going forward, biases are likely to increasingly be driven by political viewpoints rather than by other considerations.

This will be evident in beliefs of economists that are politically consistent but analytically contradictory. For example, it is politically consistent for someone on the left to believe that a rise in the minimum wage would not reduce hiring and also that more immigration would not depress wages. Analytically, however, these are opposite views. The minimum-wage increase will not reduce hiring if one treats labor demand as highly inelastic (so that a small change in hiring will be associated with a given change in wages). Increased immigration will not depress wages if one treats labor demand as highly elastic (so that a large change in hiring will be associated with a given change in wages). I think we are already starting to see economists opt for political consistency at the expense of analytical consistency.
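The contradiction can be shown with back-of-the-envelope arithmetic (the percentage figures below are hypothetical, chosen only to illustrate the logic):

```python
# Labor-demand elasticity e = (% change in employment) / (% change in wage),
# measured along the demand curve.

# "A 10% minimum-wage rise barely reduces hiring (say, by 1%)" implies:
min_wage_elasticity = -1 / 10        # e = -0.1: highly inelastic demand

# "A 5% immigration-driven rise in employment barely lowers wages
# (say, by 0.5%)" implies:
immigration_elasticity = 5 / -0.5    # e = -10: highly elastic demand

# The two politically consistent beliefs imply contradictory elasticities
# for the same labor-demand curve.
print(f"implied elasticity, minimum-wage claim: {min_wage_elasticity}")
print(f"implied elasticity, immigration claim:  {immigration_elasticity}")
```

One belief requires labor demand to be far less elastic than unity, the other requires it to be far more elastic; no single demand curve satisfies both.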

This more political profession is very likely to point toward the left. Economists are part of an academic community in which peer pressure and community values push left. It is inevitable that the social life of an academic is going to involve interacting with people from other disciplines who are overwhelmingly on the left. This makes it uncomfortable on campus to espouse the free-market views that one used to hear from conservatives like Milton Friedman.

There are signs that the momentum within the profession is toward the left. For example, of the five major economics journals, the one that has experienced the largest increase in impact in recent decades has been the Quarterly Journal of Economics, associated with Harvard and its interventionist economics. Some of this has come at the expense of the Journal of Political Economy, associated with the University of Chicago and its free-market economics.

The left has a more appealing and more unified narrative of the financial crisis. Economists on the left treat the crisis as a product of individual irrationality on the part of home buyers, recklessness and greed on the part of bankers, and laxity on the part of financial regulators. In contrast, the right is split on its narrative. Peter Wallison blames housing policy and the actions of Freddie Mac and Fannie Mae. John Taylor blames loose money in the years leading up to the crisis. Monetarists, notably Scott Sumner and Robert Hetzel, blame tight money during the crisis.

I myself am inclined toward mental-cultural explanations. All of the participants in the run-up to the crisis, including the regulators, were embedded in a culture that saw housing as a socially desirable, low-risk investment. Regulators thought that banks were safer holding mortgage-backed securities than other assets, and they used capital regulations to steer banks in that direction. Although a few regulators expressed doubts about sub-prime mortgage lending, the general view was that, if anything, the availability of mortgage loans was too restricted. The same investment bankers who now are viewed as reckless were at the time regarded as experts in risk management.

In addition, the election of Donald Trump as President may lead even conservative economists to want to distance themselves from the right, at least as Trump defines it. Economists on the right are likely to be uncomfortable with Trump both in substance, particularly on trade and immigration, and in style. So we are likely to see less outspoken conservatism from academic economists than we would have if someone else, Democrat or Republican, had been elected in 2016. And the result will be an intensification of all the other trends pushing the profession to the left.

Fourth, it seems unavoidable that economists will reluctantly come to recognize that they deal in the realm of patterns and stories, rather than decisive hypothesis testing. It simply isn’t possible for economists and other social scientists to achieve the same rigor as is found in the natural sciences, and economists seem increasingly to be accepting this reality. One favorable sign is the increased focus on replicability of empirical work in economics. Many top journals are imposing requirements for transparency of data. There are also economists who are advocating that studies be “registered” in advance, so that scholars can track the studies that are not published because they fail to find “interesting” results.

While the goal of these sorts of efforts is often to chase the Holy Grail of total reliability, they may well have the interim effect of making it clear that existing studies lack reliability. Eventually, more economists may be willing to acknowledge that the Holy Grail is not attainable.

Edward Leamer titled one of his books Macroeconomic Patterns and Stories. In the introduction, he wrote,

You may want to substitute the more familiar scientific words “theory and evidence” for “patterns and stories.” Do not do that. With the phrase “theory and evidence” come hidden stow-away after-the-fact myths about how we learn and how much we can learn. The words “theory and evidence” suggest an incessant march toward a level of scientific certitude that cannot be attained in the study of the complex self-organizing human system that we call the economy. The words “patterns and stories” much more accurately convey our level of knowledge, now, and in the future as well. It is literature, not science. [Emphasis in the original.]

In 2009, when this book was published, Leamer’s views were contrarian and not widely shared. But as economists come to acknowledge the mental-cultural determinants of economic phenomena, and the complexity that this creates, they may come around to acknowledging the limited applicability of scientific methods in economics.

Finally, we come to the special case of macroeconomists, who were in an acknowledged state of crisis in 1980 but are oddly complacent today. My own view is that the attempt to interpret economic phenomena in aggregative terms, as if all workers were identical and all investment were in machinery, is proving untenable. Workers differ markedly in the nature of their skills and in the market value of those skills. Firms are investing not just in plants and equipment but in new products and processes. They are increasingly hiring people to develop organizational capital rather than to produce output.

As a result, we may come to see diminished interest in looking at the economy in aggregate terms — that is, as possessed of a single price level, unemployment rate, productivity-growth rate, and the like. Instead, there will be much more research done on divergences: among regions, among industries, among demographic groups, and so on.

Already, economists are aware that prices have been rising relatively rapidly in major service sectors, such as education and health care, while prices have been rising slowly or even falling in major goods industries, such as home electronics and computers. We are aware that wage and employment prospects continue to diverge between college graduates and those with less education, and between college graduates with STEM degrees and other degrees. Housing and labor markets in coastal cities look very different from those in Midwestern towns.

A few economists, led by Daron Acemoglu, have started to look at industry linkages. They are finding that various industry clusters respond to events in different ways. This increasing study of divergence could point toward the death of macroeconomic modeling as I learned it in graduate school and as it has until recently been taught. Where the tradition is to speak of the “representative agent,” the trend will increasingly be toward “I contain multitudes.” This ought to lead economists to focus on the processes by which new patterns of specialization and trade are formed and old patterns are rendered unsustainable. This research will offer insights in precisely the areas in which traditional macroeconomics is stale and unreliable.

DIVERSITY GAINED, DIVERSITY LOST

In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.

This reality is far from new. But economists are still grappling with its implications. They seem to resist one implication in particular: that the claim of economists to scientific expertise is no longer tenable.

Professional economists are increasingly aware of the mental-cultural factors that affect economic behavior. As a result, they are willing to broaden their methods beyond the strict reliance on mathematical derivations and multiple regression that prevailed 40 years ago. But if economics has become less of a monoculture with respect to methods, it is now more uniform in its support for Federal Reserve technocratic efforts and for economic activism in general. In 1980, the critics of Keynesian economic policies enjoyed respect in the leading economics departments and journals. This is much less true today.

Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.

In this respect, the barriers to effective theory in economics are different and perhaps more worrisome than was the case in 1980. The contemporary state of economic theory reflects a broader crisis in the social sciences and a deepening cleavage between the college campus and the rest of society.

Arnold Kling is an adjunct scholar with the Cato Institute and a member of the Financial Markets Working Group at the Mercatus Center at George Mason University. 

 

Xi Jinping’s Marco Polo Strategy


June 13, 2017

Xi Jinping’s Marco Polo Strategy

by Joseph S. Nye*

http://www.project-syndicate.org

*Joseph S. Nye, Jr., a former US assistant secretary of defense and chairman of the US National Intelligence Council, is University Professor at Harvard University. He is the author of Is the American Century Over?

Last month, Chinese President Xi Jinping presided over a heavily orchestrated “Belt and Road” forum in Beijing. The two-day event attracted 29 heads of state, including Russia’s Vladimir Putin, and 1,200 delegates from over 100 countries. Xi called China’s Belt and Road Initiative (BRI) the “project of the century.” The 65 countries involved comprise two-thirds of the world’s land mass and include some four and a half billion people.


Originally announced in 2013, Xi’s plan to integrate Eurasia through a trillion dollars of investment in infrastructure stretching from China to Europe, with extensions to Southeast Asia and East Africa, has been termed China’s new Marshall Plan as well as its bid for a grand strategy. Some observers also saw the Forum as part of Xi’s effort to fill the vacuum left by Donald Trump’s abandonment of Barack Obama’s Trans-Pacific Partnership trade agreement.

China’s ambitious initiative would provide badly needed highways, rail lines, pipelines, ports, and power plants in poor countries. It would also encourage Chinese firms to increase their investments in European ports and railways. The “belt” would include a massive network of highways and rail links through Central Asia, and the “road” refers to a series of maritime routes and ports between Asia and Europe.

Marco Polo would be proud. And if China chooses to use its surplus financial reserves to create infrastructure that helps poor countries and enhances international trade, it will be providing what can be seen as a global public good.

Of course, China’s motives are not purely benevolent. Reallocating China’s large foreign-exchange assets away from low-yield US Treasury bonds to higher-yield infrastructure investment makes sense, and creates alternative markets for Chinese goods. With Chinese steel and cement firms suffering from overcapacity, Chinese construction firms will profit from the new investment. And as Chinese manufacturing moves to less accessible provinces, improved infrastructure connections to international markets fit China’s development needs.

But is the BRI more public relations smoke than investment fire? According to the Financial Times, investment in Xi’s initiative declined last year, raising doubts about whether commercial enterprises are as committed as the government. Five trains full of cargo leave Chongqing for Germany every week, but only one full train returns.

Shipping goods overland from China to Europe is still twice as expensive as trade by sea. As the FT puts it, the BRI is “unfortunately less of a practical plan for investment than a broad political vision.” Moreover, there is a danger of debt and unpaid loans from projects that turn out to be economic “white elephants,” and security conflicts could bedevil projects that cross so many sovereign borders. India is not happy to see a greater Chinese presence in the Indian Ocean, and Russia, Turkey, and Iran have their own agendas in Central Asia.

Xi’s vision is impressive, but will it succeed as a grand strategy? China is betting on an old geopolitical proposition. A century ago, the British geopolitical theorist Halford Mackinder argued that whoever controlled the world island of Eurasia would control the world. American strategy, in contrast, has long favored the geopolitical insights of the nineteenth-century admiral Alfred Mahan, who emphasized sea power and the rimlands.

At World War II’s end, George F. Kennan adapted Mahan’s approach to develop his Cold War strategy of containment of the Soviet Union, arguing that if the US allied with the islands of Britain and Japan and the peninsula of Western Europe at the two ends of Eurasia, it could create a balance of global power that would be favorable to American interests. The Pentagon and State Department are still organized along these lines, with scant attention paid to Central Asia.

Much has changed in the age of the Internet, but geography still matters, despite the alleged death of distance. In the nineteenth century, much of the geopolitical rivalry among the Great Powers revolved around the “Eastern Question” of who would control the area ruled by the crumbling Ottoman Empire. Infrastructure projects like the Berlin-to-Baghdad railway roused tensions among those powers. Will such geopolitical struggles now be replaced by the “Eurasian Question”?

With the BRI, China is betting on Mackinder and Marco Polo. But the overland route through Central Asia will revive the nineteenth-century “Great Game” for influence that embroiled Britain and Russia, as well as former empires like Turkey and Iran. At the same time, the maritime “road” through the Indian Ocean accentuates China’s already fraught rivalry with India, with tensions building over Chinese ports and roads through Pakistan.

The US is betting more on Mahan and Kennan. Asia has its own balance of power, and neither India nor Japan nor Vietnam wants Chinese domination. They see America as part of the solution. American policy is not containment of China – witness the massive flows of trade and students between the countries. But as China, enthralled by a vision of national greatness, engages in territorial disputes with its maritime neighbors, it tends to drive them into America’s arms.

Indeed, China’s real problem is “self-containment.” Even in the age of the Internet and social media, nationalism remains a most powerful force.

Overall, the United States should welcome China’s BRI. As Robert Zoellick, a former US Trade Representative and World Bank president, has argued, if a rising China contributes to the provision of global public goods, the US should encourage the Chinese to become a “responsible stakeholder.” Moreover, there can be opportunities for American companies to benefit from BRI investments.

The US and China have much to gain from cooperation on a variety of transnational issues like monetary stability, climate change, cyber rules of the road, and anti-terrorism. And while the BRI will provide China with geopolitical gains as well as costs, it is unlikely to be as much of a game changer in grand strategy as some analysts believe. A more difficult question is whether the US can live up to its part.