Academics can no longer afford to pat themselves on the back and celebrate their own privileges. If they are to defend the freedom of their enterprise, they must restore dialogue with the broader public and ensure that the relevance of their research – and how research actually occurs – is well understood.
CAMBRIDGE – Academic freedom is a precious commodity, critical to ensure that discovery of the truth is not encumbered by political or ideological forces. But this does not mean that intellectuals should hide in academic bunkers that, by protecting us from criticism by “non-experts,” allow ego to flourish and enable a focus on questions that are not actually relevant to anyone else. We experts should have to explain ourselves.
This means, first and foremost, that researchers should be communicating their results in a way that supports accountability and confirms that public funds and education benefits are being used in ways that are in taxpayers’ interests. The duty to communicate findings also ensures that the public is educated, not only about the topic itself, but also about the way research actually works.
Scholarly books and journals often give the impression that the truth is revealed through a neat, orderly, and logical process. But research is far from being a pristine landscape; in fact, it resembles a battlefield, littered with miscalculations, failed experiments, and discarded assumptions. The path to truth is often convoluted, and those who travel along it often must navigate fierce competition and professional intrigue.
Some argue that it is better to hide this reality from the public, in order to maintain credibility. For example, in 2014, physicists collaborating on a project known as BICEP2 thought that they had detected gravitational waves from the beginning of the universe. It was later realized that the signal they had detected could be entirely attributed to interstellar dust.
Some of my colleagues worried that this revelation would undermine faith in other scientific predictions, such as those involving climate change. But would hiding the truth from the public really do more for scientific and academic credibility than cultivating a culture of transparency? Probably not. In fact, being honest about the realities of research might enhance trust and create more space for innovation, with an informed public accepting that risk is the unavoidable and worthwhile cost of groundbreaking and broadly beneficial discoveries.
Another way to ensure that academia continues to innovate in useful and relevant ways is to blur the traditional boundaries among disciplines – the frontiers where invention so often happens. To that end, universities should update their organizational structure, moving away from clearly delineated departments in order to create a kind of continuum across the arts, humanities, and sciences. Students should be encouraged to take courses in multiple disciplines, so that they can weave those lessons and experiences into new patterns of knowledge.
To make this process sustainable, universities should ensure that the courses and curricula they offer help students to develop the skills that a fast-changing labor market demands. This means not just creating new curricula today, but also updating them every few years, in order to account for new trends and discoveries in areas ranging from artificial intelligence and Big Data to alternative energy sources and genome editing.
Professors, for their part, should approach their job as mentors of future leaders in science, technology, the arts, and humanities, rather than attempting to mold students in their own intellectual image. Of course, the latter approach can be useful if the goal is to advance the popularity of one’s own research program and to ensure that one’s own ideas and perspective endure. But that is not the fundamental mission of academia.
The louder the consensus in the echo chambers of academia becomes, the greater the ego boost for those who inhabit those chambers. But history shows that progress is sometimes championed by a soft voice in the background, like that of Albert Einstein during his early career. Truth and consensus are not always the same. Diversity of opinion – which implies diversity of gender, ethnicity, and background – is vital to support creativity, discovery, and progress.
That is why it is so important for prizes and professional associations to be used not to reinforce mainstream perspectives, but rather to encourage independent thought and reward innovation. This does not mean that all opinions should be considered equal, but rather that alternative views should be debated and vetted on merit alone.
We in academia cannot continue to pat ourselves on the back, celebrating our own privileges and failing to look at the world in new and relevant ways. If we are to defend the freedom of our enterprise, we must restore dialogue with the broader public and ensure that the relevance of our work is well understood – including by us.
In Defense of a Liberal Education
New York: W.W. Norton, 2015
208 pp., $16.00 hc
by Marvin Lansverk, PhD
Professor of English Literature
Montana State University Bozeman
“I understand that we need a certain number of philosophers, and I understand that it’s important to have a certain number of people who study history. But we’re not currently creating a lot of jobs in those areas. So we have to look at what curriculums we really need…. People who are getting degrees in philosophy and history, God bless them, it’s wonderful that they’re critical thinkers. But now they’re going back to a college of technology to get a life skill to get a job.” —Brian Schweitzer, Governor of Montana, 2005-2013 (Hechinger Report, 27 June 2012)
Perhaps I should start with a bias warning: I went to a liberal arts university. I teach English literature. I like the liberal arts, whether as a major or as part of a broad-based undergraduate education. And I’m dismayed by the recent rhetorical turn in the media, along with legislative and policy initiatives, away from the liberal arts—as if they were suddenly passé, or something to fear your kid becoming interested in, like drugs, especially when such expressions are accompanied by statements implying that the liberal arts don’t lead to employable skills. As an antidote, I like to read defenses of liberal education, whether John Henry Newman’s nineteenth-century classic The Idea of a University, or articles from current CEOs explaining why they actually prefer to hire liberal arts majors, or statistics showing that the salaries of liberal arts majors stack up favorably against those of other majors, or books like this latest one by Fareed Zakaria, someone with a real job—if being a public intellectual, editor of Foreign Affairs and of Newsweek and Time, a TV host and commentator, a Washington Post columnist, a college professor, and an influential writer count as having a real job. Thus even before I picked it up, I expected I would like Zakaria’s recent In Defense of a Liberal Education, and I do: but not just because it validates my own views. Actually, I disagree with a number of his views and am bothered by some of his analysis, which seems overly glib. But what I especially like about Zakaria’s modest book is that it isn’t simply another jeremiad about the ills of American higher education, nor an uninformed call for radical changes that too often throw the proverbial baby out with the bathwater, nor an ideological rant with more ideology than information.
Instead, it’s a welcome call for balance, written with balance: balancing data, personal stories, social policy, and an understanding of the history of liberal education in America and the multiple purposes of higher education, all accomplished in the context of Zakaria’s deep knowledge of the present social and political global landscape.
The book started as a commencement address defending liberal education to the 2014 graduating class of Sarah Lawrence College—certainly preaching to the choir. Ten months later, the well-received address was expanded into this book, the best audience for which now might be said to be the skeptics, or cold-cruel-world realists who wonder if our students still have time for Chaucer when our global competitiveness is at stake. To them, Zakaria says yes, the liberal arts matter, using his own life story as an important perspective on the material, making the book partly a personal memoir, partly a history of higher education, and partly a call for more informed and data-driven education policies, especially by our leaders who should know better, whether President Obama’s “I promise you, folks can make a lot more potentially, with skilled manufacturing or the trades than they might with an art history degree,” or the governors of Texas, Florida, North Carolina, and Wisconsin with their recent attempts to defund the liberal arts at their state universities, with Florida’s Rick Scott asking: “Is it a vital interest of the state to have more anthropologists? I don’t think so.”
Zakaria’s response is this book. It is actually a collection of six essays (the six chapters of the book) with a fairly broad focus. But what ties the chapters together is Zakaria’s personal story and his ongoing ethical authority on the subject: as someone who draws daily on his liberal education and the life skills it imparted.
Chapter One, “Coming to America,” tells Zakaria’s personal story: of being raised in India in an education system focused on memorization, content, and tests (steering children, boys especially, almost exclusively into science and business), then almost on a lark finding himself applying to and getting into Yale in the 1980s (when liberal arts institutions in the U.S. were barely on the radar of Indians). Zakaria then tells how at Yale he discovered the power of a liberal education and through it also discovered his future path in international politics and economics, majoring in history (and subsequently earning a PhD in Government from Harvard). What makes the story powerful and contemporary is that it’s a version of the classic “American” story, in its Global 2.0 incarnation, of an individual making good through hard work, determination, and exposure to the American system of higher education. The story itself is a necessary reminder to policymakers now, appropriately worried about American global competitiveness and statistics showing us falling behind in the educational attainment of our population. And the moral of the story is that our education system, with all its problems, is still the envy of the world, and still producing remarkable results.
Chapter Two, “A Brief History of Liberal Education,” though brief, covers a two-thousand-year history, starting with the Greeks, dashing through the establishment of medieval universities, with a glance at Britain, to an examination of the American system, with a focus on Harvard’s curricular innovations, the rise of electives, and the emergence of our standard liberal arts curricula—with a core curriculum, a major, and a healthy dose of exploration and free choice. Zakaria’s theme throughout is that societies have always struggled to balance competing needs in their education systems, and that curricula in this country have always been undergoing changes, that they aren’t frozen in the medieval past (contrary to what some critics continue to claim). Nevertheless, Zakaria recognizes that improvements still need to be made, especially in increasing the scientific literacy of all students. Zakaria again offers a personal example of change: Yale’s recent joint venture with the National University of Singapore (Zakaria had become a Yale trustee) to establish a new liberal arts institution in Asia, Yale-NUS College, which opened its doors in fall 2013. Recognizing its own need to develop more of the kinds of creativity, critical thinking, and entrepreneurship characteristic of American higher education—and even more of the self-discovery—Singapore has made a recent bet on more liberal education, not less.
The value of Chapter 2 actually lies in its brevity. It isn’t that the history Zakaria tells here is new, and it is developed in far less detail than in the sources Zakaria draws upon (sources he cites carefully in this, his first book since his own citation scandal of 2012, a lapse of the kind we have seen affect other public intellectuals who write at speed with research staffs and are therefore sometimes not as careful about citations as the standards of academic research require). But overviews have their role as well. And many current skeptics, or other busy people paying only occasional attention to higher education debates, aren’t going to take the time to read the comprehensive histories of the liberal arts (such as Wesleyan president Michael Roth’s erudite 2014 Beyond the University: Why Liberal Education Matters, which Zakaria also cites). So there is value in quickly retelling the story, reminding us of how we got here, and reminding us what the liberal in liberal education means, which seems especially important for those made queasy by any association with a term that also serves as a political label (Zakaria’s own political views have been variously characterized as centrist, moderate, liberal, and/or conservative). In this case, Zakaria reminds readers that the liberal in liberal education has its roots in a two-thousand-year history of liberation and freedom—and not in 21st-century American politics.
Chapter Three, “Learning to Think,” finally gets down to the business of defending liberal education. And the lead-in is the question: but what about jobs? Thus, the arguments Zakaria makes become both philosophical and practical at the same time, matching the balance that characterizes the book. His specific arguments for why liberal education must continue to be valued aren’t new, but the examples and topical asides are. In brief, what liberal education imparts, and what it did for him personally, is three things: 1) it teaches you to write, 2) to think, and 3) to learn. This bald summary isn’t that interesting, but the balance of examples, anecdotes, quotes from CEOs, and data that Zakaria compiles makes for compelling reading. And one of the more interesting threads Zakaria pulls on is the paradox of international test scores—such as those of the Program for International Student Assessment (PISA), on which the U.S. and other nations with educational systems like ours tend to do poorly, revealing an increasing lack of preparation and competence in a variety of subjects among our students, yet whose results don’t track with actual global competitiveness and success. While this is a highly complex issue, one lesson—relevant in an age of increasing testing regimes—is that not everything that matters can be measured. Quoting Singapore’s former minister of education, Tharman Shanmugaratnam, comparing our system to theirs, Zakaria reports: “Yours is a talent meritocracy, ours is an exam meritocracy. There are some parts of the intellect that we are not able to test well—like creativity, curiosity, a sense of adventure, ambition. Most of all, America has a culture of learning that challenges conventional wisdom, even if it means challenging authority. These are areas where Singapore must learn from America.”
Chapter 4, “The Natural Aristocracy,” is an eclectic chapter continuing Zakaria’s theme of meritocracy and capitalism as effective and necessary backdrops for our education system (he takes the term natural aristocracy from Thomas Jefferson, indicating a meritocratic system based on talent rather than birth, wealth, and privilege). And he starts with a meditation on the founding fathers and especially on Ben Franklin as the poster child for the American system. Interestingly, this is also the chapter where Zakaria addresses some of the problems bedeviling higher education, including costs that continue to outpace inflation and the continued cost shifting from public sources to individuals, leading to increased individual debt. Zakaria doesn’t have a single solution to offer, but—experienced in the power of mass media to reach all parts of the globe as he is—he, like many others, is fascinated by the promises of technology and distance delivery of courses, especially MOOCs (still new enough to require an identification of the acronym: Massive Open Online Courses). Still in their infancy, they already are expanding access to information, to great teachers, and to American liberal education. One thing Zakaria finds interesting about MOOCs is that students worldwide aren’t just seeking out engineering and technical courses in this online environment; they are also interested in the liberal arts.
Chapters 5 and 6, “Knowledge and Power” and “In Defense of Today’s Youth,” turn to even broader subjects, though both are short chapters. Chapter 5 addresses the power of knowledge to change the world, and Chapter 6 is Zakaria’s attempt to address the value of a liberal education in developing the individual life of the mind and ourselves as human beings. Though worthy subjects, both read a bit more like newspaper columns than book chapters at this point—and it’s not surprising that the most frequently referenced source in these latter chapters is New York Times columnist David Brooks, with whom Zakaria sees himself in dialogue here.
Ultimately, it is dialogue that Zakaria wants to promote with this book—informed dialogue. And his method of provoking it is to provide a “zoomed out” Google Earth view of American higher education, which—to keep the map metaphor going a bit—functions as a kind of Mercator projection with the importance of liberal education at the center. And as such, it is successful, bearing the strengths and weaknesses of such an intent. It makes effective use of Zakaria’s compelling success story, making his story emblematic of our times; it provides a good overview of issues in higher education; it provides a useful survey of many recent good books on the same subject (from Andrew Delbanco’s College: What It Was, Is, and Should Be (2012), to Academically Adrift: Limited Learning on College Campuses (2010), and Excellent Sheep (2014)—all previously reviewed in Montana Professor, the latter in this issue); it’s written in breezy, quick-reading journalistic prose; and it provides much concrete data to counter the recent public narrative that we’ve outgrown or can no longer afford our childish preoccupation with liberal education. As for its weaknesses: like an unfocused essay, perhaps, the book tries to do too much, thereby having to cover territory too quickly, occasionally relying on too many generalizations along the way. As such, it’s not always possible to tell what the generalizations mean (e.g., “Bill Gates was one of the first larger-than-life private figures in contemporary America”). Also, like many books on higher education, there’s a tendency to focus on and continue our culture’s obsession with our so-called “elite” or “best schools” when much of the information is actually relevant to the whole education infrastructure—including the Montana University System. And sometimes Zakaria wraps up a survey of complex issues with a simple question as a conclusion, such as “Is this so bad?” That method, however, is a good indication of the purpose of the book.
Its focus is on common sense, from someone with an uncommon biography, who is criticizing what is becoming too common: taking for granted the importance of a liberal education in this country that not only can we afford, but that we can’t afford to do without.
Universities pride themselves on producing creative ideas that disrupt the rest of society, yet higher-education teaching techniques continue to evolve at a glacial pace. Given education’s centrality to raising productivity, shouldn’t efforts to reinvigorate today’s sclerotic Western economies focus on how to reinvent higher education?
CAMBRIDGE – In the early 1990s, at the dawn of the Internet era, an explosion in academic productivity seemed to be just around the corner. But that explosion never materialized. Instead, teaching techniques at colleges and universities, which pride themselves on spewing out creative ideas that disrupt the rest of society, have continued to evolve at a glacial pace.
Sure, PowerPoint presentations have displaced chalkboards, enrollments in “massive open online courses” often exceed 100,000 (though the number of engaged students tends to be much smaller), and “flipped classrooms” replace homework with watching taped lectures, while class time is spent discussing homework exercises. But, given education’s centrality to raising productivity, shouldn’t efforts to reinvigorate today’s sclerotic Western economies focus on how to reinvent higher education?
One can understand why change is slow to take root at the primary and secondary school level, where the social and political obstacles are massive. But colleges and universities have far more capacity to experiment; indeed, in many ways, that is their raison d’être.
For example, what sense does it make for each college in the United States to offer its own highly idiosyncratic lectures on core topics like freshman calculus, economics, and US history, often with classes of 500 students or more? Sometimes these giant classes are great, but anyone who has gone to college can tell you that is not the norm.
At least for large-scale introductory courses, why not let students everywhere watch highly produced recordings by the world’s best professors and lecturers, much as we do with music, sports, and entertainment? This does not mean a one-size-fits-all scenario: there could be a competitive market, as there already is for textbooks, with perhaps a dozen people dominating much of it.
And videos could be used in modules, so a school could choose to use, say, one package to teach the first part of a course, and a completely different package to teach the second part. Professors could still mix in live lectures on their favorite topics, but as a treat, not as a boring routine.
A shift to recorded lectures is only one example. The potential for developing specialized software and apps to advance higher education is endless. There is already some experimentation with using software to help understand individual students’ challenges and deficiencies in ways that guide teachers on how to give the most constructive feedback. But so far, such initiatives are very limited.
Perhaps change in tertiary education is so glacial because the learning is deeply interpersonal, making human teachers essential. But wouldn’t it make more sense for the bulk of faculty teaching time to be devoted to helping students engage in active learning through discussion and exercises, rather than to sometimes hundredth-best lecture performances?
Yes, outside of traditional brick-and-mortar universities, there has been some remarkable innovation. The Khan Academy has produced a treasure trove of lectures on a variety of topics, and it is particularly strong in teaching basic mathematics. Although the main target audience is advanced high school students, there is a lot of material that college students (or anyone) would find useful.
Moreover, there are some great websites, including Crash Course and Ted-Ed, that contain short general education videos on a huge variety of subjects, from philosophy to biology to history. But while a small number of innovative professors are using such methods to reinvent their courses, the tremendous resistance they face from other faculty holds down the size of the market and makes it hard to justify the investments needed to produce more rapid change.
Let’s face it: college faculty are no keener to see technology cut into their jobs than any other group. And, unlike most factory workers, university faculty members have enormous power over the administration. Any university president who tries to run roughshod over them will usually lose her job long before any faculty member does.
Of course, change will eventually come, and when it does, the potential effect on economic growth and social welfare will be enormous. It is difficult to suggest an exact monetary figure, because, like many things in the modern tech world, money spent on education does not capture the full social impact. But even the most conservative estimates suggest the vast potential. In the US, tertiary education accounts for over 2.5% of GDP (roughly $500 billion), and yet much of this is spent quite inefficiently. The real cost, though, is not the squandered tax money, but the fact that today’s youth could be learning so much more than they do.
Universities and colleges are pivotal to the future of our societies. But, given impressive and ongoing advances in technology and artificial intelligence, it is hard to see how they can continue playing this role without reinventing themselves over the next two decades. Education innovation will disrupt academic employment, but the benefits to jobs everywhere else could be enormous. If there were more disruption within the ivory tower, economies just might become more resilient to disruption outside it.
Despite the emergence of smaller families and well-educated but working parents, the structure of education has changed little in the past five decades, Michael Heng points out
Schools in Hong Kong and many cities elsewhere in Asia have not undergone significant changes since the 1960s, while family structure, the economy and other elements of society have experienced great transformations. Consider four changes that have a direct bearing on education. First, families have become smaller; many children have one sibling or none. Second, most parents today are pretty well-educated — at the very least they are literate. Third, jobs for university or polytechnic graduates are more difficult to come by. Fourth, there is an ample supply of teachers’ college graduates.
In the 1960s, schools focused mainly on transmitting knowledge to students. In line with this, exam results were the key criteria for measuring school performance. Not many schools had well-trained teachers. Where students felt their teachers fell short of their expectations, they had to turn to kind-hearted and brainy classmates for help. In many cases, their parents were too poorly educated to help, and they could not afford private tuition. In such conditions, other important matters related to the full development of an individual were pushed into the background. One hardly heard of schools being responsible for helping students develop social and communication skills, or for guiding them in coping with personal problems, failures in life, and so on.
Fast-forward to the 2010s, and schools have changed. Though there is now open recognition of the role of schools in the full development of an individual, the main emphasis is still on exam results. Even with a growing army of well-trained teachers and better-educated parents, we see a booming private-tuition industry. Our mindset and practices in educating our young are stuck in the 1960s, despite conditions having changed so much.
With only one sibling or none, a child loses the family environment in which to acquire the habits and skills of coping with older and younger siblings. This inadequacy is often not addressed by schools, which put children of the same age in the same class. As an alternative, primary schools could have just two kinds of classes: one for children aged 6, 7 and 8, and the other for children aged 9, 10 and 11. Children would learn not only from the teachers, but from each other. The younger ones would learn content from the older ones, while the older ones would learn how to teach the younger ones. There is a “risk” that the older ones will fail to teach the content correctly. But there are textbooks, well-trained teachers, and well-educated parents to correct errors. Moreover, children are encouraged from a young age to develop independent thinking and to absorb material through questioning and critical thinking. Such mental habits are immensely useful for the independent pursuit of knowledge. To those familiar with Montessori educational philosophy, the approach will sound familiar.
Schools should also be reorganized in terms of time and space. Since both parents in most young families work, schools can be organized to keep students at school while parents are working. All kinds of interesting activities can be organized to fill these hours. Homework in the traditional sense should be done during this time, with “weaker” students assisted by “stronger” ones, making private tuition redundant. Off-school hours would then be free of homework and could be fruitfully spent on activities such as community work or learning extra languages.
As a very rich city, Hong Kong can afford to have small classes. In a class of 40 students, teachers sometimes have to struggle just to maintain discipline and order; why not a class of 20 to 25 students instead? Anyone with teaching experience can testify to the benefits of small classes. And to offset the negative aspects of living in a concrete jungle, schools should have bushes, flowers, vegetables, plants and trees to cultivate an early respect for the natural environment.
Besides transmitting book knowledge, there are other dimensions of education — cultivating good character, attitude toward work, social-justice awareness, proper human interaction and ability to cope with failures and setbacks in life.
Good character is more than integrity and being upright. It includes the ability to help others, especially the weak and disadvantaged. Here schools should design incentives to encourage such behavior. For example, classes can be assessed on cooperation and mutual assistance among students, as a balance to competitive exams.
The major spiritual traditions attach great value to productive work, whether well-paid or otherwise. Such an attitude is especially important in the current labor market, where well-paid professionals may lose their jobs through no fault of their own. Those who perceive all productive work as respectable will be more flexible in facing such a situation. Of course, social attitudes must also change to make things easier for staff who are made redundant.
The author is a retired professor who had academic appointments in Australia, the Netherlands, and at six universities in Asia. He has been trained as a school teacher and has also taught in secondary schools.
We are living in worrisome economic times. One year ago, I observed that US President Donald Trump’s bullying of companies and individuals who get in his way is reminiscent of Benito Mussolini in the 1920s. Like Mussolini, Trump poses a clear danger to the rule of law.–Edmund S. Phelps
For decades, America has suffered from a long-run productivity slowdown that has sapped the economy of its former dynamism and left median wages stagnant. Will the tax legislation recently enacted by congressional Republicans and the Trump administration finally reverse this trend, or will it make a bad situation worse?
PHILADELPHIA – We are living in worrisome economic times. One year ago, I observed that US President Donald Trump’s bullying of companies and individuals who get in his way is reminiscent of Benito Mussolini in the 1920s. Like Mussolini, Trump poses a clear danger to the rule of law.
My subject here, however, is the tax legislation that Trump signed into law in December, on the promise that reducing the rate at which corporate profits are taxed will help an ailing US economy.
Political Responses to the Malaise
For several decades, the US economy has exhibited various symptoms of economic malaise. Now, we have a political upheaval on our hands. While real (inflation-adjusted) median wages have been nearly stagnant for decades, private saving from profits and enormous capital gains have continued apace. As asset prices – to say nothing of the wealth-wage ratio – have climbed to vertiginous levels, established wealth has grown more powerful, and wealth managers have done well.
Worse still, in industries hit hard by foreign trade or automation, many jobs have been eliminated, and real wages have actually declined. As these new developments continued over the past few decades, they placed increasing pressure on society as a whole. Ultimately, there was an electoral realignment, marked by a shift in voting patterns among key economic constituencies.
Remarkably, neither Democrats nor Republicans seemed to register these economic and social ailments, or the consequences they could have. When Hillary Clinton launched her 2016 presidential campaign with a speech on Roosevelt Island, she focused heavily on achieving social justice for marginalized groups. She did not address the fact that, some six decades ago, the US economy lost the sustained growth it had been generating since the 1820s, despite depressions and inflationary cycles.
While Democrats became increasingly fixated on notions of “fairness” and what academics call the “just economy,” they apparently didn’t notice that the country had been operating for decades without a good economy. Countless people have had little or no chance of feeling fully included in economic life. They have been deprived of jobs that are actually engaging, and of opportunities to feel that they have succeeded at something.
As the renowned Columbia University philosopher David Sidorsky recently pointed out to me, ancient philosophers spoke of “the good and the just” (boni et aequi), not “the just and the good.” Clearly, the Democrats put the cart before the horse. First, we need a good economy. Only then can we devise a just way to reward participants for contributions that the economy empowers them to make.
An Attempt at a Cure
After securing the presidency – in addition to both houses of Congress – in 2016, Republicans tried to run the ball through the opening left by the Democrats. Throughout 2017, they pursued a range of reforms to address weak investment and stagnant wages, and ended the year with the newly enacted tax legislation, which cuts the tax rate on corporate profits from 35% to 21%.
Economists who support the Republicans’ tax legislation have relied on a textbook growth model to claim that it will boost investment activity. According to their model, investment will drive up the capital stock until it reaches the steady-state level where the after-tax rate of return falls to the level of the real interest rate. The real interest rate is exogenous, and reflects investors’ time preferences, world interest rates, and other factors. The point where the rate of return intersects with the real interest rate is shown in Figure 1. (A more classical case, in which the rate of return is pulled down by capital accumulation, is shown in Figure 2.)
Supporters of the tax legislation reason that if the tax cut pushes up the after-tax rate of return, investment activity will increase, and the capital stock will expand, boosting productivity until the capital stock reaches a new steady state, which they calculate will happen in around ten years.
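The steady-state argument can be sketched in a few lines. This is a minimal illustration, not the supporters' actual model: it assumes a Cobb-Douglas technology with an assumed capital share (ALPHA), a normalized productivity level (A), and an assumed exogenous real interest rate (R), and it ignores adjustment costs and transition dynamics.

```python
# Textbook steady-state condition: the after-tax marginal product of
# capital equals the exogenous real interest rate:
#   (1 - tau) * ALPHA * A * K**(ALPHA - 1) = R
# Solving for K gives the steady-state capital stock per worker.

ALPHA = 0.33   # capital share of income (assumed)
A = 1.0        # productivity level (normalized)
R = 0.05       # exogenous real interest rate (assumed)

def steady_state_capital(tau):
    """Capital stock per worker at which (1 - tau) * MPK equals R."""
    return ((1 - tau) * ALPHA * A / R) ** (1 / (1 - ALPHA))

k_old = steady_state_capital(0.35)   # old statutory corporate rate
k_new = steady_state_capital(0.21)   # new statutory corporate rate

capital_gain = k_new / k_old - 1
output_gain = (k_new / k_old) ** ALPHA - 1   # Y = A * K**ALPHA

print(f"capital stock rises by {capital_gain:.1%}")      # roughly a third
print(f"output per worker rises by {output_gain:.1%}")   # roughly 10%
```

Under these assumed parameters the tax cut raises the steady-state capital stock by about a third and output per worker by about ten percent, spread over the transition period; the point of the article is that this one-off level effect says nothing about growth thereafter.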
But, as is always the case with supply-side economics and more radical forms of Keynesianism, this approach is profoundly short-sighted. After ten years, there is no reason to think the faster growth will continue.
Without the same level of indigenous innovation that was achieved during the golden era of high growth rates, from the 1820s to around 1970, the Republican tax law will amount to nothing more than a stop-gap measure. And even over the next decade, it will not deliver truly rapid growth.
The Problem with Models
More fundamentally, we ought to ask whether it is right to expect tax cuts to translate into higher productivity growth. I would argue that, because the tax package will add to the annual fiscal deficit and the public debt, it might actually block investment, and thus derail a productivity pickup.
When I was a young economist working on my 1965 monograph, Fiscal Neutrality Toward Economic Growth, I would have looked at today’s favorable short-term conditions and actually called for a tax increase across the board, in order to stanch the federal government’s fiscal hemorrhaging. A tax hike might push down bond yields, and thus bring about higher share prices and a considerable drop in interest rates over the entire yield curve, provided the US Federal Reserve didn’t offset the move by unwinding its bond holdings.
Thinking back even further, to when I was a young student, I can remember congressional Republicans voicing their opposition to fiscal deficits, and President Harry Truman, a Democrat, enacting a run of fiscal surpluses aimed at mopping up the federal debt. These policies, helped by inflation (which lowered the real value of the debt), did not lead to a depression. There was only the 1949-1950 recession.
Nowadays, a crude form of Keynesianism is so deeply ingrained in voters’ minds that any program aimed at achieving a fiscal surplus, or even balance, has become unthinkable. Yet one wonders if the new tax law will arouse worries about the sustainability of the growing federal debt, which is already high after the presidencies of George W. Bush, a supply-sider, and Barack Obama, a Keynesian. If so, such concerns would push up interest-rate risk premia in anticipation of a depreciating dollar. Yes, the Republican plan does include some provisions to raise revenue or cut spending, but that is not entirely reassuring.
Of course, those who support the law would argue that the supposed increase in investment activity will immediately push up the dollar’s exchange rate, and that the dollar’s real value would then depreciate gradually to where it had been. Otherwise, no one would want to continue holding foreign capital. This points to a paradox in the law. Trump ran on the promise of boosting American exports, but in standard models, an appreciating dollar will depress export demand.
On the other hand, a stronger dollar will prompt domestic firms in import-competing industries to cut their markups so that potential foreign rivals will be less inclined to invade the US market. As a result, wage rates might be pulled up along with the amount of labor supplied. These particular industries, then, would experience an expansion of output and employment.
An Uncertain Prognosis
But for those who do not share the perspective of the law’s supporters, this scenario is hardly a sure thing. After all, who’s to say if the tax package will drive up business investment until the marginal productivity of capital has fallen enough to raise substantially the marginal productivity of labor? That scenario might be possible; but it is in no way assured. As New York University’s Roman Frydman and I argued in a commentary last month, the real-life US economy is not a “mechanical system in which changes in tax parameters and other inputs explain exactly why and how investment occurs and the economy grows.”
Unfortunately, the economics profession has ignored the potential implications of human agency. If far more people were to start conceiving and creating innovations, investment and wage rates might rise well beyond what the textbook model would have predicted. By the same token, if fewer people engage in innovation, investment and wage rates may rise less than expected, or even fall.
In other words, the innovation factor could very well dwarf the effect of the cut in the corporate-tax rate over the next ten years. By that point, we might not have enough evidence to determine if the tax cut was effective, or merely an inconsequential drop in the bucket.
And the uncertainty goes deeper than that. The problem is not just that the traditional model’s disturbance terms may be so large that they overshadow the effects of the tax cut, but also that the coefficients for measuring the tax law’s effect on investment or wages might not even be knowable. The innovators driving (or failing to drive) gains in productivity cannot be certain ahead of time what form their new products or methods will take, or whether they will be adopted at all. How, then, could economists ever foretell precise changes in investment patterns as a result of a tax cut, or what effects new investments will have on the marginal productivity of labor?
As I suggested in November, what we call the “natural” unemployment rate can be affected by insecurity and fear. Similarly, if an unfunded tax cut conjures visions of insolvency, corporate executives might be wary of making new investments. Or they might decide to invest predominantly in labor-saving technologies, which could actually reduce wages and eliminate jobs in some industries. Given that possibility, one cannot be sure whether the tax law will have a positive or negative effect on wages, employment, or productivity.
None of this is to say that we should avoid new departures. Certainly, we must keep trying in the hopes of making progress. Or, as Candide (in the musical) tells Cunégonde after they have both endured many difficulties, “We’ll do the best we know.”
About the Author:
Edmund S. Phelps, the 2006 Nobel laureate in Economics, is Director of the Center on Capitalism and Society at Columbia University and the author of Mass Flourishing.
In 2016, the highest-paid employee of the State of California was Jim Mora, the head coach of U.C.L.A.’s football team. (He has since been fired.) That year, Mora pulled in $3.58 million. Coming in second, with a salary of $2.93 million, was Cuonzo Martin, at the time the head coach of the men’s basketball team at the University of California, Berkeley. Victor Khalil, the chief dentist at the Department of State Hospitals, made six hundred and eighty-six thousand dollars; Anne Neville, the director of the California Research Bureau, earned a hundred and thirty-five thousand dollars; and John Smith, a seasonal clerk at the Franchise Tax Board, earned twelve thousand nine hundred dollars.
I learned all this from a database maintained by the Sacramento Bee. The database, which is open to the public, is searchable by name and by department, and contains precise salary information for the more than three hundred thousand people who work for California. Today, most state employees probably know about the database. But that wasn’t the case when it was first created, in 2008. This made possible an experiment.
The experiment, conducted by four economists, was designed to test rival theories of inequity. According to one theory, the so-called rational-updating model, people assess their salaries in terms of opportunities. If they discover that they are being paid less than their co-workers, they will “update” their projections about future earnings and conclude that their prospects of a raise are good. Conversely, people who learn that they earn more than their co-workers will be discouraged by that news. They’ll update their expectations in the opposite direction.
According to a rival theory, people respond to inequity not rationally but emotionally. If they discover that they’re being paid less than their colleagues, they won’t see this as a signal to expect a raise but as evidence that they are underappreciated. (The researchers refer to this as the “relative income” model.) By this theory, people who learn that their salaries are at the low end will be pissed. Those who discover that they’re at the high end will be gratified.
The economists conducting the study sent an e-mail to thousands of employees at three University of California schools—Santa Cruz, San Diego, and Los Angeles—alerting them to the existence of the Bee’s database. This nudge produced a spike in visits to the Web site as workers, in effect, peeked at one another’s paychecks.
A few days later, the researchers sent a follow-up e-mail, this one with questions. “How satisfied are you with your job?” it asked. “How satisfied are you with your wage/salary on this job?” They also sent the survey to workers who hadn’t been nudged toward the database. Then they compared the results. What they found didn’t conform to either theory, exactly.
As the relative-income model predicted, those who’d learned that they were earning less than their peers were ticked off. Compared with the control group, they reported being less satisfied with their jobs and more interested in finding new ones. But the relative-income model broke down when it came to those at the top. Workers who discovered that they were doing better than their colleagues evinced no pleasure. They were merely indifferent. As the economists put it in a paper that they eventually wrote about the study, access to the database had a “negative effect on workers paid below the median for their unit and occupation” but “no effect on workers paid above median.”
The message the economists took from their research was that employers “have a strong incentive” to keep salaries secret. Assuming that California workers are representative of the broader population, the experiment also suggests a larger, more disturbing conclusion. In a society where economic gains are concentrated at the top—a society, in other words, like our own—there are no real winners and a multitude of losers.
Keith Payne, a psychologist, remembers the exact moment when he learned he was poor. He was in fourth grade, standing in line in the cafeteria of his elementary school, in western Kentucky. Payne didn’t pay for meals—his family’s income was low enough that he qualified for free school lunch—and normally the cashier just waved him through. But on this particular day there was someone new at the register, and she asked Payne for a dollar twenty-five, which he didn’t have. He was mortified. Suddenly, he realized that he was different from the other kids, who were walking around with cash in their pockets.
“That moment changed everything for me,” Payne writes, in “The Broken Ladder: How Inequality Affects the Way We Think, Live, and Die.” Although in strictly economic terms nothing had happened—Payne’s family had just as much (or as little) money as it had the day before—that afternoon in the cafeteria he became aware of which rung on the ladder he occupied. He grew embarrassed about his clothes, his way of talking, even his hair, which was cut at home with a bowl. “Always a shy kid, I became almost completely silent at school,” he recalls.
Payne is now a professor at the University of North Carolina, Chapel Hill. He has come to believe that what’s really damaging about being poor, at least in a country like the United States—where, as he notes, even most people living below the poverty line possess TVs, microwaves, and cell phones—is the subjective experience of feeling poor. This feeling is not limited to those in the bottom quintile; in a world where people measure themselves against their neighbors, it’s possible to earn good money and still feel deprived. “Unlike the rigid columns of numbers that make up a bank ledger, status is always a moving target, because it is defined by ongoing comparisons to others,” Payne writes.
Feeling poor, meanwhile, has consequences that go well beyond feeling. People who see themselves as poor make different decisions, and, generally, worse ones. Consider gambling. Spending two bucks on a Powerball ticket, which has roughly a one-in-three-hundred-million chance of paying out, is never a good bet. It’s especially ill-advised for those struggling to make ends meet. Yet low-income Americans buy a disproportionate share of lottery tickets, so much so that the whole enterprise is sometimes referred to as a “tax on the poor.”
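The arithmetic behind "never a good bet" is straightforward. The sketch below uses the article's rough one-in-three-hundred-million odds; the $100 million jackpot is an assumed figure, and smaller prizes, taxes, and split jackpots are ignored for simplicity.

```python
# Back-of-the-envelope expected value of a $2 jackpot ticket,
# using the article's rough odds of one in three hundred million.

ticket_price = 2.00
p_jackpot = 1 / 300_000_000
jackpot = 100_000_000          # assumed jackpot size, for illustration

expected_winnings = p_jackpot * jackpot          # about 33 cents
net_expected_value = expected_winnings - ticket_price

print(f"expected winnings per ticket: ${expected_winnings:.2f}")
print(f"net expected value per ticket: ${net_expected_value:.2f}")
```

Even with a nine-figure jackpot, the expected return is a fraction of the ticket price, which is why routine play is a steady transfer from players to the lottery.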
One explanation for this is that poor people engage in riskier behavior, which is why they are poor in the first place. By Payne’s account, this way of thinking gets things backward. He cites a study on gambling performed by Canadian psychologists. After asking participants a series of probing questions about their finances, the researchers asked them to rank themselves along something called the Normative Discretionary Income Index. In fact, the scale was fictitious and the scores were manipulated. It didn’t matter what their finances actually looked like: some of the participants were led to believe that they had more discretionary income than their peers and some were led to believe the opposite. Finally, participants were given twenty dollars and the choice to either pocket it or gamble it on a computer card game. Those who believed they ranked low on the scale were much more likely to risk the money on the card game. Or, as Payne puts it, “feeling poor made people more willing to roll the dice.”
In another study, this one conducted by Payne and some colleagues, participants were divided into two groups and asked to make a series of bets. For each bet, they were offered a low-risk / low-reward option (say, a hundred-per-cent chance of winning fifteen cents) and a high-risk / high-reward option (a ten-per-cent chance of winning a dollar-fifty). Before the exercise began, the two groups were told different stories (once again, fictitious) about how previous participants had fared. The first group was informed that the spread in winnings between the most and the least successful players was only a few cents, the second that the gap was a lot wider. Those in the second group went on to place much chancier bets than those in the first. The experiment, Payne contends, “provided the first evidence that inequality itself can cause risky behavior.”
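A detail worth noticing in this design: as described, the two options carry exactly the same expected payoff, so the groups' diverging choices reveal risk appetite rather than any difference in expected reward. A quick check, taking the payoffs at face value:

```python
# The two bets described above: (probability of winning, payoff in dollars).
low_risk = (1.00, 0.15)    # sure thing: fifteen cents
high_risk = (0.10, 1.50)   # ten-per-cent chance of a dollar-fifty

def expected_payoff(bet):
    probability, payoff = bet
    return probability * payoff

# Both options are worth fifteen cents on average, so a preference for
# the risky bet cannot be explained by chasing a better expected return.
assert abs(expected_payoff(low_risk) - expected_payoff(high_risk)) < 1e-9
```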
People’s attitude toward race, too, he argues, is linked to the experience of deprivation. Here Payne cites work done by psychologists at N.Y.U., who offered subjects ten dollars with which to play an online game. Some of the subjects were told that, had they been more fortunate, they would have received a hundred dollars. The subjects, all white, were then shown pairs of faces and asked which looked “most black.” All the images were composites that had been manipulated in various ways. Subjects in the “unfortunate” group, on average, chose images that were darker than those the control group picked. “Feeling disadvantaged magnified their perception of racial differences,” Payne writes.
“The Broken Ladder” is full of studies like this. Some are more convincing than others, and, not infrequently, Payne’s inferences seem to run ahead of the data. But the wealth of evidence that he amasses is compelling. People who are made to feel deprived see themselves as less competent. They are more susceptible to conspiracy theories. And they are more likely to have medical problems. A study of British civil servants showed that where people ranked themselves in terms of status was a better predictor of their health than their education level or their actual income was.
All of which leads Payne to worry about where we’re headed. In terms of per-capita income, the U.S. ranks near the top among nations. But, thanks to the growing gap between the one per cent and everyone else, the subjective effect is of widespread impoverishment. “Inequality so mimics poverty in our minds that the United States of America . . . has a lot of features that better resemble a developing nation than a superpower,” he writes.
Rachel Sherman is a professor of sociology at the New School, and, like Payne, she studies inequality. But Sherman’s focus is much narrower. “Although images of the wealthy proliferate in the media, we know very little about what it is like to be wealthy in the current historical moment,” she writes in the introduction to “Uneasy Street: The Anxieties of Affluence.”
Sherman’s first discovery about the wealthy is that they don’t want to talk to her. Subjects who agree to be interviewed suddenly stop responding to her e-mails. One woman begs off, saying she’s “swamped” with her children; Sherman subsequently learns that the kids are at camp. After a lot of legwork, she manages to sit down with fifty members of the haut monde in and around Manhattan. Most have family incomes of more than five hundred thousand dollars a year, and about half have incomes of more than a million dollars a year or assets of more than eight million dollars, or both. (At least, this is what they tell Sherman; after a while, she comes to believe that they are underreporting their earnings.) Her subjects are so concerned about confidentiality that Sherman omits any details that might make them identifiable to those who have visited their brownstones or their summer places.
“I poked into bathrooms with soaking tubs or steam showers” is as far as she goes. “I conducted interviews in open kitchens, often outfitted with white Carrara marble or handmade tiles.”
A second finding Sherman makes, which perhaps follows from the first, is that the privileged prefer not to think of themselves that way. One woman, who has an apartment overlooking the Hudson, a second home in the Hamptons, and a household income of at least two million dollars a year, tells Sherman that she considers herself middle class. “I feel like, no matter what you have, somebody has about a hundred times that,” she explains. Another woman with a similar household income, mostly earned by her corporate-lawyer husband, describes her family’s situation as “fine.”
“I mean, there are all the bankers that are heads and heels, you know, way above us,” she says. A third woman, with an even higher household income—two and a half million dollars a year—objects to Sherman’s use of the word “affluent.”
“ ‘Affluent’ is relative,” the woman observes. Some friends of hers have recently flown off on vacation on a private plane. “That’s affluence,” she says.
This sort of talk dovetails neatly with Payne’s work. If affluence is in the eye of the beholder, then even the super-rich, when they compare their situation with that of the ultra-rich, can feel sorry for themselves. The woman who takes exception to the word “affluent” makes a point of placing herself at the “very, very bottom” of the one per cent. “The disparity between the bottom of the 1 percent and the top of the 1 percent is huge,” she observes.
Sherman construes things differently. Her subjects, she believes, are reluctant to categorize themselves as affluent because of what the label implies. “These New Yorkers are trying to see themselves as ‘good people,’ ” she writes. “Good people work hard. They live prudently, within their means. . . . They don’t brag or show off.” At another point, she observes that she was “surprised” at how often her subjects expressed conflicted emotions about spending. “Over time, I came to see that these were often moral conflicts about having privilege in general.”
Whatever its source—envy or ethics—the discomfort that Sherman documents matches the results of the University of California study. Inequity is, apparently, asymmetrical. For all the distress it causes those on the bottom, it brings relatively little joy to those at the top.
As any parent knows, children watch carefully when goodies are divvied up. A few years ago, a team of psychologists set out to study how kids too young to wield the word “unfair” would respond to unfairness. They recruited a bunch of preschoolers and grouped them in pairs. The children were offered some blocks to play with and then, after a while, were asked to put them away. As a reward for tidying up, the kids were given stickers. No matter how much each child had contributed to the cleanup effort, one received four stickers and the other two. According to the Centers for Disease Control and Prevention, children shouldn’t be expected to grasp the idea of counting before the age of four. But even three-year-olds seemed to understand when they’d been screwed. Most of the two-sticker recipients looked enviously at the holdings of their partners. Some said they wanted more. A number of the four-sticker recipients also seemed dismayed by the distribution, or perhaps by their partners’ protests, and handed over some of their winnings. “We can . . . be confident that these actions were guided by an understanding of equality, because in all cases they offered one and only one sticker, which made the outcomes equal,” the researchers reported. The results, they concluded, show that “the emotional response to unfairness emerges very early.”
If this emotional response is experienced by toddlers, it suggests that it may be hardwired—a product of evolution rather than of culture. Scientists at the Yerkes National Primate Research Center, outside Atlanta, work with brown capuchin monkeys, which are native to South America. The scientists trained the monkeys to exchange a token for a slice of cucumber. Then they paired the monkeys up, and offered one a better reward—a grape. The monkeys that continued to get cucumbers, which earlier they’d munched on cheerfully, were incensed. Some stopped handing over their tokens. Others refused to take the cucumbers or, in a few cases, threw the slices back at the researchers. Like humans, capuchin monkeys, the researchers wrote, “seem to measure reward in relative terms.”
Preschoolers, brown capuchin monkeys, California state workers, college students recruited for psychological experiments—everyone, it seems, resents inequity. This is true even though what counts as being disadvantaged varies from place to place and from year to year. As Payne points out, Thomas Jefferson, living at Monticello without hot water or overhead lighting, would, by the standards of contemporary America, be considered “poorer than the poor.” No doubt inequity, which, by many accounts, is a precondition for civilization, has been a driving force behind the kinds of innovations that have made indoor plumbing and electricity, not to mention refrigeration, central heating, and Wi-Fi, come, in the intervening centuries, to seem necessities in the U.S.
Still, there are choices to be made. The tax bill recently approved by Congress directs, in ways both big and small, even more gains to the country’s plutocrats. Supporters insist that the measure will generate so much prosperity that the poor and the middle class will also end up benefitting. But even if this proves true—and all evidence suggests that it will not—the measure doesn’t address the real problem. It’s not greater wealth but greater equity that will make us all feel richer. ♦