June 28, 2017
How Effective is Economic Theory?
In 1980, following a decade of high inflation and unemployment — a combination that economists had previously thought to be impossible over extended periods — The Public Interest ran a special issue titled “The Crisis in Economic Theory.” Today, there is little talk of a crisis in economic theory. But in the past decade, we have experienced a financial crisis and subsequent decline in employment that also followed a path economists had previously thought to be impossible. Economists seem more confident than they did in 1980, but are they more deserving of confidence? If anything, some of the questions confronting economics should run deeper now than then.
In fact, the basic question of how economics should understand itself now demands urgent attention. Since the American Economic Association was founded in the 1880s, economists in this country have sought special status as scientifically grounded policy experts. Over the past 50 years, in particular, they have largely attained that status. Whether they deserve it is less clear. And what a scientific economics would really look like is not nearly as clear as some economists now imagine, either.
And it’s not just the practice: Even the ideal of economics as a science now demands serious scrutiny. If economic theory is not in crisis, maybe it deserves to be.
Instead of “science,” we might want to think about economics in terms of “effective theory.” As explained by Harvard physicist Lisa Randall,
Effective theory is a valuable concept when we ask how scientific theories advance, and what we mean when we say something is right or wrong. Newton’s laws work extremely well. They are sufficient to devise the path by which we can send a satellite to the far reaches of the Solar System and to construct a bridge that won’t collapse. Yet we know quantum mechanics and relativity are the deeper underlying theories. Newton’s laws are approximations that work at relatively low speeds and for large macroscopic objects. What’s more is that an effective theory tells us precisely its limitations — the conditions and values of parameters for which the theory breaks down. The laws of the effective theory succeed until we reach its limitations when these assumptions are no longer true or our measurements or requirements become increasingly precise.
Whereas the term “science” often is used to connote absolute truth in an almost religious sense, effective theory is provisional. When we are certain that in a particular context a theory will work, then and only then is the theory effective.
Effective theory consists of verifiable knowledge. To be verifiable, a finding must be arrived at by methods that are generally viewed as robust. Any researcher who tries to replicate a finding using appropriate methods should be able to confirm it. The strongest confirmation of the effectiveness of a theory comes from prediction and control. Lisa Randall’s example of sending a spacecraft to the far reaches of the solar system illustrates such confirmation.
This notion of effective theory sets a useful standard for considering economics. Economists are not without knowledge. We know that restrictions on trade tend to help narrow interests at the expense of broader prosperity. We know that market prices are important for coordinating specialization and division of labor in a complex economy. We know that the profit incentive promotes the introduction of improved products and processes, and that our high level of well-being results from the cumulative effect of such improvements. We know that government control over prices and production, as in communist countries, leads to inefficiency and corruption. We know that the laws of supply and demand tend to frustrate efforts to make goods more “affordable” by subsidizing them or to lower “costs” by fixing prices.
But policymakers have goals that go far beyond or run counter to such basic principles. They want to steer the economy using fiscal stimulus. They want to shape complex and important markets, including those of health insurance and home mortgages. It is doubtful that the effectiveness of economic theory is equal to such tasks.
Most scholarly research in economics is ultimately motivated by the unrealistic goal of providing effective theory to implement such technocratic objectives. But the resulting economic theory cannot be applied with the same confidence as Newtonian physics. Even worse is the fact that economists, unlike physicists, are not clear about the limits of the effectiveness of their theories. In short, when it comes to effective theory, economists promise more than they can deliver.
Over the last 50 years, questions about the effectiveness of economic theory have revolved around five interlocking subjects in particular: mathematical modeling, homo economicus, objectivity, testing procedures, and the particular status of the sub-discipline of macroeconomics.
With mathematical modeling, the question concerns the tradeoff between rigor and relevance. Mathematical models are considered more rigorous than verbal arguments, but the process of modeling serves to narrow the scope of economic thinking. Are mathematical economists ignoring important topics and missing insights that a nonmathematical economics might explore?
With homo economicus, the question concerns the advantages and disadvantages of assuming that the individual behaves with economic rationality. This assumption appears to be a very powerful tool for prediction and control of economic behavior. But what are the limits to its applicability?
With objectivity, the question concerns the relationship between facts and values, or between analysis and policy preferences. Physicists and astronomers are almost never accused of letting a partisan political outlook affect their views on the phenomena that they study. Can economists aspire to that same level of objectivity?
With testing procedures, the question concerns the ability of economists to bring persuasive evidence to bear on questions of theory. In the history of the natural sciences, hypotheses have been confirmed or discarded on the basis of decisive experiments. How can economists obtain verifiable knowledge when dealing with phenomena that are not as readily subject to experimental methods?
With macroeconomics, the question is whether economists really can predict and control the overall behavior of unemployment and inflation in a nation’s economy. Economists became optimistic about their prospects for doing this during periods of favorable economic performance, such as the mid-1960s or the two decades that preceded the financial crisis of 2008. But these episodes were rudely interrupted by the unexpected convulsions of the Great Stagflation of the 1970s and the Great Recession that followed the financial crisis. Is macroeconomic theory a success or a failure?
Economists’ views about these questions have shifted over the past five decades. It is interesting to compare what they were saying in 1966 with what they were saying in 1980 and what they have been saying more recently. Such comparisons might help us consider what we should expect of economics in the years to come.
In 1966, several economists took up the five interlocking questions noted above in a book of essays entitled The Structure of Economic Science, edited by Sherman Roy Krupp. In an essay on the subject of mathematical modeling, William Baumol summarized the tradeoff between rigor and relevance.
Strong mathematical results must, therefore, be viewed by the practitioner with somewhat mixed feelings. At best they represent a relevant revelation about his problem; at worst they may cast doubt upon the appropriateness of his assumptions.
As of 1966, there were still economists who opposed mathematical modeling, but they were in retreat and aging out of the profession. As Martin Bronfenbrenner put it in his essay in the Krupp volume,
Until perhaps a generation ago, it might have been necessary to justify the use of mathematics in economics and the other social sciences….The shoe now threatens, indeed, to pass to the other foot, with the non-mathematical practitioner laboring under a darkening suspicion…that nothing he does will be worthwhile unless formulated mathematically and subjected to statistical testing.
On the issue of relevance, Bronfenbrenner wrote that any mathematical model requires what he called an “applicability theorem,” showing that the conditions under which it is true obtain in the real world. “And since it relates to the actual world, an applicability theorem may be highly probable but is never absolutely certain.”
If Lisa Randall is correct, then physicists usually know when Newton’s theories are effective and when they are not. In contrast, economists often do not know when their theories are effective. How many firms must there be for the theory of “perfect competition” to be applicable? What assumptions are required in order for workers’ wages to be tied closely to productivity, and do these assumptions hold in practice?
On the topic of homo economicus, James Buchanan wrote in his contribution to the volume that “the central predictive proposition of economics….amounts to saying that individuals, when confronted with effective choice, will choose more rather than less.”
Previously, Milton Friedman had argued in favor of the stronger definition of homo economicus, in which individuals and firms optimize over their choices. In his book Essays in Positive Economics, published in 1953, Friedman argued that this assumption could be justified on the basis of its ability to predict economic outcomes. He developed a famous analogy in which an observer is asked to predict the behavior of a billiard player. Although the billiard player does not employ the laws of physics, Friedman argued that the observer can use the laws of physics to predict how the billiard player will line up his shot. By analogy, Friedman argued that the economist can use mathematical optimization models to predict how consumers and firms will behave.
As of 1966, the profession had largely defaulted to Friedman’s view. The main alternative was Herbert Simon’s “bounded rationality” or “satisficing,” in which economic agents are assumed to stop short of total optimization. Although many economists were intrigued by Simon’s ideas, which helped him to win a Nobel Prize in 1978, the overwhelming body of economic research ignored them and continued to assume optimization.
Concerning objectivity, many economists believed in a form of positivism, in which facts could be separated from values. The positivist position is that technical expertise is separate from preferences. The public expresses preferences about outcomes, and the technical expert then prescribes policies to achieve those outcomes. Buchanan argued firmly against economists interjecting their own policy preferences into their work:
[The economist] verges dangerously on irresponsible action when he allows his zeal for social progress, as he conceives this, to take precedence over his search for and respect of scientific truth, as determined by the consensus of his peers….If the economist can learn from his colleagues in the physical sciences…that the respect for truth takes precedence above all else and that it is the final value judgment that must pervade all science, he may, yet, rescue the discipline from its currently threatened rush into absurdity, oblivion, and disrepute.
Such an attitude is also essential to distinguishing between technical and political disputes in economics. Bronfenbrenner wrote,
Does not the economists’ notorious failure to agree suggest or prove the “prescientific” character of economics as such? I follow my professional bias (vested interest?) on the negative side of this proposition. Much of the disagreement is inevitable since it centers around economic values and policy recommendations and involves normative rather than positive economics….
This is not to deny the existence of disagreements in positive economics, of which there are plenty….Yet we have faith that most if not all such positive disagreements will eventually be resolved, as parallel disagreements have been resolved in the natural sciences.
Like many other economists at that time, Buchanan and Bronfenbrenner believed that values and scientific investigation could be separate, and that economists are on firmer ground when they stick with scientific investigation.
With regard to testing procedures, there were doubts expressed in 1966 by two heterodox thinkers, Emile Grunberg and Kenneth Boulding. Grunberg wrote that, “In fact, the history of the social sciences shows no clear-cut case in which a theory has been disconfirmed by contradictory evidence.”
He went on to suggest that the reason for this is that social scientists work with open systems, in which the number of factors that could affect an outcome is intractably large. In contrast, physical scientists are able to work with closed systems in which every factor can be accounted for. Boulding wrote,
All predictions, even in the physical sciences, are really conditional predictions. They say that if the system remains unchanged and the parameters of the system remain unchanged, then such and such will be the state of the system at certain times in the future. If the system does change, of course the prediction will be falsified, and this is what happens in social systems all the time….What this means is that the failure of prediction in social systems does not lead to the improvement of our knowledge of these systems, simply because there is nothing there to know….[T]he possibility that our knowledge of society is sharply limited by the unknowable is something that must be taken into consideration.
While these comments were prescient, they were ignored at the time they were written. Instead, economists were confident that their statistical techniques were capable of sifting among hypotheses to find reliable ones. In fact, the mid-1960s was when economists were particularly optimistic about econometrics, and especially the technique of multiple regression. Multiple regression was thought to be a way to achieve with non-experimental data the ideal of controlling for extraneous influences.
For example, suppose that you want to examine whether private schools outperform public schools, using student test scores as the metric. However, you know that many factors affect test scores, including each student’s ability and family environment. With multiple regression, the investigator introduces variables representing these other factors into the statistical analysis, and in theory this means that those variables are controlled for.
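The logic of controlling for a confounding variable can be made concrete with a small simulation. The sketch below is purely illustrative: the data are invented, and the numbers (income driving both school choice and scores, a true school effect of zero) are assumptions chosen to show how a naive comparison can mislead while a regression with a control variable does not.

```python
# Hypothetical illustration: does private schooling raise test scores once
# we control for family income? All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

income = rng.normal(50, 10, n)                    # family income (simulated)
# Assumption: higher-income families are likelier to choose private school.
private = (income + rng.normal(0, 10, n) > 55).astype(float)
# Assumed true model: income drives scores; school type has zero effect.
score = 2.0 * income + rng.normal(0, 5, n)

# Naive regression of score on school type alone:
X_naive = np.column_stack([np.ones(n), private])
b_naive, *_ = np.linalg.lstsq(X_naive, score, rcond=None)

# Multiple regression: add income as a control variable.
X_ctrl = np.column_stack([np.ones(n), private, income])
b_ctrl, *_ = np.linalg.lstsq(X_ctrl, score, rcond=None)

print(f"naive 'private school effect':       {b_naive[1]:.1f}")
print(f"effect after controlling for income: {b_ctrl[1]:.1f}")
```

The naive coefficient is large only because private-school students come from higher-income families; once income enters the regression, the estimated school effect collapses toward its true value of zero. This is the sense in which, in theory, the extra variables are "controlled for."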
Multiple regression requires extensive computation, so it was largely impractical before the advent of computers. By the late 1960s, many universities had mainframe computers that could handle these calculations, and multiple regression was rising in popularity. Multiple regression and computers were a particular boon to macroeconomists. As of 1966, many economists were very optimistic that large-scale macroeconometric models of the economy would prove useful in prediction and control.
In fact, as of 1966, the consensus Keynesian view of macroeconomics was so widely accepted that macroeconomics was not controversial. In the Krupp volume, the special topic of macroeconomics only came up in one essay, by Fritz Machlup. He wrote,
Anyone who has done empirical work with national-income statistics or foreign-trade statistics is aware of thousands and thousands of arbitrary decisions that the statisticians had to make in executing the operations dictated or suggested by one of the large variety of definitions accepted for the terms in question. One cannot expect with any confidence that any of the theories connecting the pure constructs of the relevant aggregative magnitudes will be borne out by an examination of their operational counterparts.
Although practically a footnote in the ’60s, this concern was a sign of things to come.
THE CRISIS OF 1980
Fourteen years after the Krupp volume, the situation had changed dramatically. In 1980, when The Public Interest put out its special issue on “The Crisis in Economic Theory,” the title hardly needed to be justified. As Daniel Bell put it in his contribution,
Today, there is general agreement that government economic management and policy is in disarray. Many economists argue that prescriptions derived from previous historical situations no longer apply, but there is little consensus as to new prescriptions.
By this time, macroeconomics was the most troubled sub-discipline of economics. Among professional economists as well as laymen, Keynesian economics had been discredited by the Great Stagflation, in which unemployment and inflation both soared to levels far above those seen in the 1960s. This experience suggested that the Keynesians could neither predict nor control the economy.
And yet, in terms of our first methodological question, concerning the role of mathematics, 1980 might have been the high point for the belief that insight would come from greater mathematical sophistication. I call the late 1970s, which is when I did my graduate work, the era of “peak math.”
In the 1970s, two of the most prestigious journals for a young economist were Econometrica and the Journal of Economic Theory, which published the most mathematically difficult articles. In the 1970s, the five recipients of the John Bates Clark Medal, a highly prestigious award given to an American economist under 40, collectively had published 25 articles in Econometrica and nine in the Journal of Economic Theory by the year they received the award (they published more in those journals subsequently).
In the 1980 Public Interest volume, two of the premier mathematical economists of the time, Kenneth Arrow and Frank Hahn, both used their essays to provide a perspective on macroeconomics. They argued that it was possible to reconcile microeconomic reasoning with Keynesian macroeconomic theory. But there was no discussion in the Public Interest volume of the role of math per se.
The idea of homo economicus was questioned by Bell:
Since men act variously by habit and custom, irrationally or zealously, by conscious design to change institutions or redesign social arrangements, there is no intrinsic order, there are no “economic Laws” constituting the “structure” of the economy; there are only different patterns of historical behavior. Thus, economics, and economic theory, cannot be a “closed system.”
However, within the economics profession, the fashion was quite the opposite. Many economists, represented in the Public Interest volume by Mark Willes, thought that the problem with Keynesian economics was that it did not impute enough rationality to economic man. Following Robert Lucas, Jr., these economists argued that macroeconomic models had to assume rationality in the way that individuals formed expectations about the future. Models that employed rational expectations also happened to be mathematically difficult; one of Lucas’s most important papers appeared in the Journal of Economic Theory in 1972.
There was considerable distance between the macroeconomic views of Willes and those of Arrow and Hahn, and even more distance from those of Paul Davidson, who represented the “post-Keynesian” (further to the left) school in the issue. And yet nowhere in the volume is there a discussion of the issue of bias.
Elsewhere, economists were becoming aware of the bias in macroeconomics. Robert Hall coined the term “freshwater economics vs. saltwater economics” to characterize the contrast between the views prevalent at the Universities of Chicago, Minnesota, and Rochester on the one hand, and those prevalent at Harvard, MIT, Yale, Stanford, and Berkeley on the other. The two schools of thought had both different beliefs about how the economy works and different ideological predilections. Freshwater economists believed that attempts to control unemployment and output using monetary and fiscal policy were ineffective, and they also tended to believe in conservative economic policy. Saltwater economists took the opposite view. However, neither would have admitted that their political inclinations had any effect on their beliefs about the effectiveness of discretionary fiscal and monetary policy.
Regarding the question of testing procedures, the economics profession was roiled in that period by the “Lucas critique” as applied to macroeconometric models. In 1976, Lucas argued that, under rational expectations, a model that had a robust statistical fit with the past could nonetheless break down completely going forward.
The Lucas critique grabbed the spotlight in the late 1970s and beyond. However, another critique would prove to have greater significance. In 1983, Edward Leamer published an article titled “Let’s Take the Con Out of Econometrics.” He wrote,
The econometric art as it is practiced at the computer terminal involves fitting many, perhaps thousands, of statistical models. One or several that the researcher finds pleasing are selected for reporting purposes. This searching for a model is often well intentioned, but there can be no doubt that such a specification search invalidates the traditional theories of inference.
When considering, for instance, whether private schools or public schools are more effective, the investigator can choose which factors to control for and how to specify the variables. In practice, each investigator iterates through many plausible choices before selecting the one to report. The economist behaves like an experimenter who is able to tweak the conditions of the experiment to obtain a desired result. This is not conducive to reliability.
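Leamer's point can be demonstrated with a few lines of simulation. In the hedged sketch below, the outcome and all candidate regressors are pure noise, so no specification is genuinely informative; yet searching across specifications and reporting the best one reliably manufactures a "statistically significant" result. The setup (200 candidate regressors, 100 observations) is an invented example, not Leamer's own.

```python
# Illustration of specification search: with pure noise data, fitting many
# models and reporting the best one produces spurious "significance."
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 200                    # 100 observations, 200 candidate regressors

y = rng.normal(size=n)             # outcome: pure noise
X = rng.normal(size=(n, k))        # candidate regressors: also pure noise

# Regress y on each candidate separately; record each |t|-statistic.
t_stats = []
for j in range(k):
    x = X[:, j]
    b = (x @ y) / (x @ x)                          # slope of y on x
    resid = y - b * x
    se = np.sqrt((resid @ resid) / (n - 1)) / np.sqrt(x @ x)
    t_stats.append(abs(b / se))

best = max(t_stats)
print(f"best |t| over {k} noise specifications: {best:.2f}")
```

Under honest, one-shot inference, |t| > 2 is conventionally "significant" at roughly the 5 percent level; the maximum over hundreds of noise regressors will almost always clear that bar. Reporting only the winning specification is exactly the invalidation of traditional inference that Leamer described.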
All of these were serious challenges. But in 1980, the most significant by far was the apparent failure of macroeconomics to provide a reliable means of predicting and directing the behavior of the economy. This was the essence of the crisis.
AFTER THE CRISIS
As this is being written, a half-century after the Krupp volume, mathematical modeling is still the standard in the major economics journals. But there is nothing like the same faith in higher mathematics that characterized the “peak math” era of the 1970s. The five Clark medalists from 2011 to 2015 published a total of four papers in Econometrica and none in the Journal of Economic Theory.
Economists no longer insist that homo economicus be modeled as rational. Instead, there is a popular field known as behavioral economics, which studies the biases and heuristics that affect individual decision-making and attempts to trace through the economic implications of these deviations from rationality.
Economists continue to preach the positivist ideal of scientific objectivity, without questioning whether it is achievable. However, the problems of bias are occasionally aired. In 2015, Paul Romer wrote a provocative essay entitled “Mathiness in the Theory of Economic Growth.” Despite its title, it is by no means a criticism of the use of math in economics. Rather, Romer complained that some economists are producing biased theory in mathematical guise.
The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.
Recall from the Krupp volume the phrase “applicability theorem,” meaning the manner of connecting the mathematical model to real-world observables. In physics, this process seems to be straightforward. But in economics, there is room for disagreement about the circumstances to which a mathematical model applies. Romer’s view is that certain economists, primarily of the freshwater school, are guilty of abusing assumptions about how their equations connect with reality. They make interpretations that suit their political biases, but otherwise their interpretations are not justified.
Others take Romer’s concern much further. For example, Noah Smith wrote of Romer,
He singles out Lucas, Prescott and a few others for having tenuous or sloppy links between mathematical elements and the real world. But from what I can see, such tenuous and sloppy links are the rule in macro fields.
There are two problems embedded in these mathiness critiques. One problem, emphasized by both Romer and Smith, is that theorists produce papers with what we might term false applicability theorems. That is, because the concepts or assumptions clearly do not relate to the real world, they produce insights that have no practical value. The second problem, emphasized by Romer, is that the insights are not only inapplicable to the real world but are driven by personal biases of the authors, not by an attempt to arrive at scientific truth.
Economists’ testing procedures have changed dramatically in recent decades. Rather than rely on multiple regression, economists often look for “natural experiments.” For example, in assessing the quality of charter schools, some economists have used the fact that in some cases charter-school students are selected by lottery from qualified applicants. This creates a “natural experiment,” in which the students who were chosen in the lottery to attend a charter school can be compared to presumably similar students who participated in the lottery but were not chosen and therefore wound up in public schools.
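The logic of the lottery comparison can be sketched in a few lines. In the simulation below, everything is invented for illustration: student ability is unobserved, the lottery is random by construction, and the "true" charter effect is an assumed number. Because winners and losers are comparable on average, a simple difference in mean scores recovers that effect even though ability is never measured.

```python
# Hedged sketch of a lottery-based "natural experiment": random assignment
# makes winners and losers comparable, so a difference in means estimates
# the charter effect. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 2000

ability = rng.normal(0, 1, n)          # unobserved student ability
won = rng.integers(0, 2, n)            # lottery outcome: random by design
true_effect = 3.0                      # assumed charter effect (invented)
score = 70 + 5 * ability + true_effect * won + rng.normal(0, 5, n)

estimate = score[won == 1].mean() - score[won == 0].mean()
print(f"lottery-based estimate of charter effect: {estimate:.1f}")
```

Note what the randomization buys: no control variables are needed, because ability is (on average) balanced across the two groups. This is why natural experiments sidestep the specification-search problem that afflicts multiple regression.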
In addition, some economists now use actual experiments. They put experimental subjects in situations that involve economic decision-making and test how subjects respond under varying experimental conditions. This approach has been particularly helpful in behavioral economics, where it borrows key techniques from experimental psychology. Note, however, that neither actual experiments nor natural experiments are readily applicable to macroeconomics. The problem of arriving at reliable tests of macroeconomic hypotheses is still unsolved.
Unlike in 1980, however, macroeconomists today are fairly well satisfied with their field, in spite of (or perhaps because of) the inability to rigorously test their theories. The “macro wars” of the 1970s gave way to a consensus view that monetary policy could stabilize inflation, and that this in turn would limit the severity of recessions. The Great Moderation that prevailed from the mid-1980s until 2008 appeared to confirm this view.
Although this complacent consensus was clearly falsified by the deep recession and painfully slow recovery that followed the financial crisis, economists migrated fairly easily to a new consensus about this episode. In this view, the collapse of housing prices destroyed household wealth, causing households to sharply curtail spending. At the same time, the extreme leverage of some major financial institutions and the tight interconnections among financial firms caused markets to “seize up” when investors lost confidence in mortgage securities, leading to widespread declines in the availability of credit. The fact that short-term interest rates dropped to the “zero bound” in the wake of the crisis then forced policymakers to undertake creative steps to prop up demand, including “quantitative easing” by the Federal Reserve and the stimulus enacted early in the Obama administration. Had these steps not been taken, the crisis would have been far worse, with unemployment perhaps approaching the levels reached in the Great Depression.
We cannot re-run history without the stimulus and without quantitative easing. There is no way to test the claim that there would have been another Great Depression without those policies. But there are serious reasons for disputing this consensus explanation. In my own view, for instance, there is no simple monetary or fiscal solution to unemployment. Instead, I believe that unemployment results from the fragility of the intricate patterns of specialization and trade that emerge in the economy. Sometimes, patterns of specialization that were profitable yesterday are not profitable today, and some people will be without jobs until new patterns can be discovered. In this view, the path that the economy took in 2008 and its aftermath was not decisively affected by fiscal and monetary policy.
But my views are heterodox. In the academic community, macroeconomics is not nearly as contentious or acrimonious as it was when the economy took an unexpected turn in the 1970s — despite the equally unexpected turn it has taken in the last decade.
THINGS TO COME
Although economics is not now in a state of abject crisis, as it was in the late ’70s, it is nonetheless likely to be entering a period of great change on all five of the disciplinary challenges we have been tracing.
First, there is reason to believe that in the coming years economists will reluctantly come to recognize the importance of mental-cultural factors as determinants of economic outcomes, reducing the power of mathematical modeling as an approach. There is really no avoiding some movement in the direction of understanding economics as an interpretive discipline, a little like history. In trying to interpret the decline in labor-force participation of working-age males over the past two decades, or to understand the phenomenon of many retail firms offering special deals on “Black Friday,” there is certainly some room to use mathematical models to aid the analysis. But they are neither necessary for coming up with interpretations nor sufficient to render one interpretation superior to all others. In examining subjects like these, economists could greatly reduce their usage of mathematical expression without losing anything in terms of effective theory.
In the 1980 Public Interest volume, Israel Kirzner wrote,
Economic theory needs to be reconstructed so as to recognize at each stage the manner in which changes in external phenomena modify economic activity strictly through the filter of the human mind. Economic consequences, that is, dare not be linked functionally and mechanically to external changes, as if the consequences emerge independently of the way in which the external changes are perceived, of the way in which these changes affect expectations, and of the way in which these changes are discovered at all. [Emphasis in original.]
Kirzner is a practitioner of “Austrian” economics, which was heterodox then and remains so now. But a number of “Austrian” ideas are likely to gradually penetrate the orthodoxy, particularly the emphasis on the role of non-material factors in affecting economic phenomena.
Social scientists are inclined toward materialistic explanations. They want to explain economic phenomena on the basis of resource endowments and technical capacities. They want to explain voting behavior on the basis of demographic and economic factors. The alternative to this materialistic reductionism is to say that ideas matter. It turns out that one cannot explain the tremendous rise in economic growth in the past two centuries on the basis of capital accumulation alone. The remarkable gains in the standard of living have been mostly due to the development and application of new ideas for products and production methods.
Another non-material factor is cultural norms and social institutions. One cannot explain differences in wealth across countries simply on the basis of resources. It is not that South Korea is resource-rich relative to North Korea, or that Israel is resource-rich relative to its Arab neighbors. South Korea and Israel have political and cultural institutions that are friendlier toward enterprise, and that is what accounts for their relatively strong economic performance.
Economists prefer to examine people as individuals. However, individuals get their ideas mostly from other people. The world of mental phenomena is predominantly a cultural world. And these mental-cultural factors in social behavior make economics less deterministic and less individualistic than many economists would prefer it to be. Like it or not, this reduces the advantage of mathematical modeling relative to verbal reasoning.
Another reason to suppose that mathematical modeling will wane is the shifting media landscape. As of now, academic economists still must publish in journals to be successful, and these journals require mathematical modeling. In the age of the internet, however, the print journal is a very inefficient forum for disseminating ideas. As economists increasingly make use of other forums, including social media, this may loosen the lock that print journals currently hold on career prospects. That in turn could allow more variety in the means of expression, breaking the monopoly currently held by mathematical modeling.
Second, and very much related to the likely decline in the prominence of modeling, economists are also likely to recognize, however reluctantly, that, because cultural factors matter, the simple model of the individual homo economicus has only limited applicability.
The biggest threat to the assumption of homo economicus is not alternative theories of individual psychology, such as those in behavioral economics. In fact, behavioral economics has been caught up in what in psychology is known as the “replication crisis.” Rather, the need to go beyond the assumption of homo economicus will mostly arise from a recognition of the importance of culture as a determinant of behavior. Economists will need to see economic decisions as embedded in cultural circumstances. In order to understand economic phenomena, we will have to pay attention to the role of beliefs and social norms.
Because ideas and cultural context matter, there are many potential causal factors in economic phenomena. Those curmudgeons who argued that economics is not a “closed system” were correct. It is up to each economist to choose which causal factors to study and which to ignore. Unfortunately, this means that it is possible for different economists to arrive at — and to stick with — different conclusions based on predilections.
And this points to the third plausible development in economic theory. There is a very real possibility that over the next 20 years academic economics will congeal into a discipline, like sociology today, which is definitively shaped by an ideologically driven point of view. Among highly educated people, ideological polarization is increasing. Economists have always had their biases about which sorts of theories seemed reasonable; some of these biases are idiosyncratic, as when one economist is inclined to believe that labor demand responds very little to a change in wage rates and another is inclined to believe that labor demand responds a great deal. But going forward, biases are likely to increasingly be driven by political viewpoints rather than by other considerations.
This will be evident in beliefs of economists that are politically consistent but analytically contradictory. For example, it is politically consistent for someone on the left to believe that a rise in the minimum wage would not reduce hiring and also that more immigration would not depress wages. Analytically, however, these are opposite views. The minimum-wage increase will not reduce hiring if one treats labor demand as highly inelastic (so that a small change in hiring will be associated with a given change in wages). Increased immigration will not depress wages if one treats labor demand as highly elastic (so that a large change in hiring will be associated with a given change in wages). I think we are already starting to see economists opt for political consistency at the expense of analytical consistency.
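The tension can be made concrete with a little arithmetic. The sketch below uses a constant-elasticity labor-demand curve and entirely made-up elasticity numbers (the function names, the elasticities, and the size of the shocks are all hypothetical illustrations, not estimates from the literature); it shows that the elasticity assumption needed to make a minimum-wage hike harmless is the very assumption that makes immigration depress wages, and vice versa.

```python
# Hypothetical illustration: constant-elasticity labor demand, L(w) = A * w**(-eps).
# All numbers below are invented for the sake of the example.

def employment_change_from_wage_floor(eps, wage_increase):
    """Percent change in hiring when a binding wage floor raises w by wage_increase."""
    return (1 + wage_increase) ** (-eps) - 1

def wage_change_from_supply_shift(eps, eta, supply_shift):
    """Approximate percent change in the equilibrium wage when labor supply
    shifts out by supply_shift, with demand elasticity eps and supply elasticity eta."""
    return -supply_shift / (eps + eta)

# Inelastic demand (eps = 0.1): a 10% minimum-wage hike barely cuts hiring,
# but the same assumption implies a 5% immigration-driven supply shift
# depresses wages substantially.
print(employment_change_from_wage_floor(0.1, 0.10))   # small hiring loss
print(wage_change_from_supply_shift(0.1, 0.3, 0.05))  # large wage decline

# Elastic demand (eps = 3.0): immigration barely moves wages,
# but the same assumption implies the wage hike sharply reduces hiring.
print(wage_change_from_supply_shift(3.0, 0.3, 0.05))  # small wage decline
print(employment_change_from_wage_floor(3.0, 0.10))   # large hiring loss
```

One elasticity cannot do both jobs at once: holding both politically convenient beliefs requires, in effect, switching the assumed slope of the demand curve between the two questions.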
This more political profession is very likely to point toward the left. Economists are part of an academic community in which peer pressure and community values push left. It is inevitable that the social life of an academic is going to involve interacting with people from other disciplines who are overwhelmingly on the left. This makes it uncomfortable on campus to espouse the free-market views that one used to hear from conservatives like Milton Friedman.
There are signs that the momentum within the profession is toward the left. For example, of the five major economics journals, the one that has experienced the largest increase in impact in recent decades has been the Quarterly Journal of Economics, associated with Harvard and its interventionist economics. Some of this has come at the expense of the Journal of Political Economy, associated with the University of Chicago and its free-market economics.
The left has a more appealing and more unified narrative of the financial crisis. Economists on the left treat the crisis as a product of individual irrationality on the part of home buyers, recklessness and greed on the part of bankers, and laxity on the part of financial regulators. In contrast, the right is split on its narrative. Peter Wallison blames housing policy and the actions of Freddie Mac and Fannie Mae. John Taylor blames the loose monetary policy of the years leading up to the crisis. Monetarists, notably Scott Sumner and Robert Hetzel, instead blame tight money during the crisis itself.
I myself am inclined toward mental-cultural explanations. All of the participants in the run-up to the crisis, including the regulators, were embedded in a culture that saw housing as a socially desirable, low-risk investment. Regulators thought that banks were safer holding mortgage-backed securities than other assets, and they used capital regulations to steer banks in that direction. Although a few regulators expressed doubts about sub-prime mortgage lending, the general view was that, if anything, the availability of mortgage loans was too restricted. The same investment bankers who now are viewed as reckless were at the time regarded as experts in risk management.
In addition, the election of Donald Trump as President may lead even conservative economists to want to distance themselves from the right, at least as Trump defines it. Economists on the right are likely to be uncomfortable with Trump both in substance, particularly on trade and immigration, and in style. So we are likely to see less outspoken conservatism from academic economists than we would have if someone else, Democrat or Republican, had been elected in 2016. And the result will be an intensification of all the other trends pushing the profession to the left.
Fourth, it seems unavoidable that economists will reluctantly come to recognize that they deal in the realm of patterns and stories, rather than decisive hypothesis testing. It simply isn’t possible for economists and other social scientists to achieve the same rigor as is found in the natural sciences, and economists seem increasingly to be accepting this reality. One favorable sign is the increased focus on replicability of empirical work in economics. Many top journals are imposing requirements for transparency of data. There are also economists who are advocating that studies be “registered” in advance, so that scholars can track the studies that are not published because they fail to find “interesting” results.
While the goal of these sorts of efforts is often to chase the Holy Grail of total reliability, they may well have the interim effect of making it clear that existing studies lack reliability. Eventually, more economists may be willing to acknowledge that the Holy Grail is not attainable.
Edward Leamer titled one of his books Macroeconomic Patterns and Stories. In the introduction, he wrote,
You may want to substitute the more familiar scientific words “theory and evidence” for “patterns and stories.” Do not do that. With the phrase “theory and evidence” come hidden stow-away after-the-fact myths about how we learn and how much we can learn. The words “theory and evidence” suggest an incessant march toward a level of scientific certitude that cannot be attained in the study of the complex self-organizing human system that we call the economy. The words “patterns and stories” much more accurately convey our level of knowledge, now, and in the future as well. It is literature, not science. [Emphasis in the original.]
In 2009, when this book was published, Leamer’s views were contrarian and not widely shared. But as economists come to acknowledge the mental-cultural determinants of economic phenomena, and the complexity that this creates, they may come around to acknowledging the limited applicability of scientific methods in economics.
Finally, we come to the special case of macroeconomists, who were in an acknowledged state of crisis in 1980 but are oddly complacent today. My own view is that the attempt to interpret economic phenomena in aggregative terms, as if all workers were identical and all investment were in machinery, is proving untenable. Workers differ markedly in the nature of their skills and in the market value of those skills. Firms are investing not just in plants and equipment but in new products and processes. They are increasingly hiring people to develop organizational capital rather than to produce output.
As a result, we may come to see diminished interest in looking at the economy in aggregate terms — that is, as possessed of a single price level, unemployment rate, productivity-growth rate, and the like. Instead, there will be much more research done on divergences: among regions, among industries, among demographic groups, and so on.
Already, economists are aware that prices have been rising relatively rapidly in major service sectors, such as education and health care, while prices have been rising slowly or even falling in major goods industries, such as home electronics and computers. We are aware that wage and employment prospects continue to diverge between college graduates and those with less education, and between college graduates with STEM degrees and other degrees. Housing and labor markets in coastal cities look very different from those in Midwestern towns.
A few economists, led by Daron Acemoglu, have started to look at industry linkages. They are finding that various industry clusters respond to events in different ways. This increasing study of divergence could point toward the death of macroeconomic modeling as I learned it in graduate school and as it has until recently been taught. Where the tradition is to speak of the “representative agent,” the trend will increasingly be toward “I contain multitudes.” This ought to lead economists to focus on the processes by which new patterns of specialization and trade are formed and old patterns are rendered unsustainable. This research will offer insights in precisely the areas in which traditional macroeconomics is stale and unreliable.
DIVERSITY GAINED, DIVERSITY LOST
In the end, can we really have effective theory in economics? If by effective theory we mean theory that is verifiable and reliable for prediction and control, the answer is likely no. Instead, economics deals in speculative interpretations and must continue to do so.
This reality is far from new. But economists are still grappling with its implications. They seem to resist one implication in particular: that the claim of economists to scientific expertise is no longer tenable.
Professional economists are increasingly aware of the mental-cultural factors that affect economic behavior. As a result, they are willing to broaden their methods beyond the strict reliance on mathematical derivations and multiple regression that prevailed 40 years ago. But if economics has become less of a monoculture with respect to methods, it is now more uniform in its support for Federal Reserve technocratic efforts and for economic activism in general. In 1980, the critics of Keynesian economic policies enjoyed respect in the leading economics departments and journals. This is much less true today.
Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.
In this respect, the barriers to effective theory in economics are different and perhaps more worrisome than was the case in 1980. The contemporary state of economic theory reflects a broader crisis in the social sciences and a deepening cleavage between the college campus and the rest of society.