BOOK REVIEW: The Science of Values: The Moral Landscape by Sam Harris

February 4, 2018

Note: My friend and former diplomat, Dato Hamzah Majeed, introduced me to the refreshing writings of Sam Harris, and I am hooked. Reading Harris has led me to other writers such as Carl Sagan, Stephen Hawking, Christopher Hitchens, Richard Dawkins, Bernard Lewis, Neil deGrasse Tyson, Tenzin Gyatso (His Holiness the Dalai Lama), Ayaan Hirsi Ali and others. At present I am reading Harris’s The End of Faith: Religion, Terror and the Future of Reason. –Din Merican

Reviewed by James W. Diller and Andrew E. Nuzzolilli

In The Moral Landscape, Sam Harris (2010) proposes that science can be used to identify values, which he defines as “facts that can be scientifically understood: regarding positive and negative social emotions, retributive impulses, the effects of specific laws and social institutions on human relationships, the neurophysiology of happiness and suffering, etc.” (pp. 1–2). Harris argues that scientific principles are appropriately applied in this domain because “human well-being entirely depends on events in the world and on states of the human brain. Consequently, there must be scientific truths known about it” (p. 3). Although readers of this journal would have few problems with the assertion that behavior (here, reports of well-being and correlated responses) changes as a function of environmental events, the role of the neurophysiological correlates of these responses has been a point of debate within the conceptual literature of behavior analysis (e.g., Elcoro, 2008; Reese, 1996; Schaal, 2003).

The Moral Landscape represents an important contribution to a scientific discussion of morality. It explicates the determinants of moral behavior for a popular audience, placing causality in the external environment and in the organism’s correlated neurological states. The contemporary science of behavior analysis has contributed, and will continue to contribute, to this discussion, beginning with Skinner’s seminal works Beyond Freedom and Dignity (1971) and Walden Two (1976). Neither book is explicitly a treatise on morality, but both are attempts to introduce behavioral science to a broader audience. The behavior-analytic approach (which is largely compatible with Harris’s efforts in The Moral Landscape) supports the superiority of a scientific approach to life, including questions of morality. Skinner (1976), for example, highlighted the importance of the experimenting culture in identifying practices that were effective (cf. Baum, 2005). Tacit within behavior analysis is the expectation that a scientific worldview can and will improve the quality of life. Consistent with this view, Harris suggests that the currently accepted determinants of morality (e.g., religion, faith) are not what society ought to espouse. Instead, he proposes that scientific inquiry into morality as its own subject would enhance global levels of well-being. From a behavioral perspective, the study of morality is necessarily the study of behavior, including the contexts in which it occurs and the environmental events of which it is a function. Analysis in this framework may allow the successful identification of the variables that control moral behavior and, ultimately, the development of cultural practices to increase its occurrence.

The Moral Landscape is a recent contribution to a collection of books (e.g., Dawkins, 2006; Harris, 2005; Hitchens, 2007; Sagan, 2006) that subject the claims of religion to the same standard of empirical rigor that other epistemologies (e.g., science) must abide by. Dawkins (2006), for example, criticizes the appeal to supernatural gods as explanatory agents and takes issue with the privileged place of religion within societal discourse. Harris echoes and expands on these concerns in The Moral Landscape.

Collectively, these authors take issue with the notion of nonoverlapping magisteria (NOMA; Gould, 1999), the assertion that science and religion are both valid systems of knowledge and that neither discipline can inform the other. Behavior analysts take issue with the notion that scientific behavior and religious behavior are epistemologically equivalent (see Galuska, 2003, for suggestions about successful navigation of NOMA by behavior analysts). Skinner (1987) commented, “Science, not religion, has taught me my most useful values, among them intellectual honesty. It is better to go without answers than to accept those that merely resolve puzzlement” (p. 12). Although religion may be effective at inducing behavioral change among its followers, it continues to have unintended effects that, to borrow Harris’s analogy, reach the depths of the moral landscape. Hitchens (2007) makes the subtitular claim that “religion poisons everything,” supporting his thesis with examples of demonstrably negative outcomes associated with religious practice, in which religion leads to poorer states of human health and impedes social progress. As an alternative, he proposes a rational, scientific view of the world, which Harris applies to the study of morality.

Because they are members of a relatively small discipline, it may be beneficial for behavior analysts to align themselves with and support the authors of these works, sharing in the attention that controversial coverage in popular media outlets regularly brings to writers such as Dawkins and Harris. Perhaps controversial exposure is better than no exposure at all, especially when behavior analysis can enable the development of the hypothetical secular society that Harris, Dawkins, Hitchens, and Sagan call for. Indeed, behavior analysis may be the only discipline that can identify and establish reinforcers to motivate prosocial, so-called moral, human behavior in the absence of organized religion.

It is noteworthy that no psychologist has tackled the problem of secular values alongside these authors, despite the fact that religion presents claims about human nature that contradict, and detract from the value of, our discipline. Indeed, much of the rich “prescientific” vocabulary that inhibits psychology from becoming a natural science is either religious or metaphysical in nature (Schlinger, 2004). It is imperative for the validation of psychology, and of behavior analysis by association, that the field be part of the modern empiricist movement championed by Harris.

Harris’s argument unfolds in an introduction and five subsequent chapters. In the introduction, he defines his title concept of the moral landscape as a hypothetical space representing human well-being, encompassing all human experiences. This space contains the well-being of members of all cultures and groups of individuals on the planet. The peaks of this landscape are the heights of prosperity, and the valleys represent the depths of human suffering. The goal of plotting the cartography of this landscape is to maximize “the well-being of conscious creatures” (i.e., humans), which “must translate at some point into facts about brains and their interactions with the world at large” (p. 11). For Harris, the brain is the locus of interest. We believe that it is possible to recast the argument into one about whole organisms—with correlated neurological states, perhaps—interacting with their environment to determine behavior. This scientific approach to human behavior, with a goal of improving the welfare of living organisms, is consistent with the application of behavior analysis to bring about societal change (e.g., Baer, Wolf, & Risley, 1968; Skinner, 1971, 1978).

In the subsequent chapters of his book, Harris makes the case for applying scientific thinking to determine human values. Chapter 1 outlines the knowable nature of moral truths, suggesting that they are subject to scientific (rather than religious) inquiry. In Chapter 2, Harris tackles the topics of good and evil, suggesting that these terms may be outmoded; instead, the goal of both religion and science should be to determine ways to maximize human well-being. In the third chapter, Harris explores the neurological correlates of belief, tracing the complex sets of behavior back to brain activity. In the fourth chapter, he examines the role of religious faith in contemporary society, suggesting that a scientific approach may lead to an increase in overall well-being. The final chapter outlines a plan for future work, disentangling science and philosophy, and offering an optimistic picture about the use of science to improve the human condition. In sum, Harris presents a cogent argument for the application of scientific principles to identify moral principles and values. In what follows, we describe his arguments and some intersections with the behavioral approach to this topic.

Defining Morality

The crux of Harris’s argument is that the well-being of conscious creatures should be the paramount consideration when determining whether an action is morally correct or incorrect. Harris uses the term conscious creature extensively in formulating his science of morality. Although he does not provide an explicit definition of consciousness, his use seems to be at odds with the behavioral approach to this construct. For Harris, consciousness seems to be a property of the brain, discoverable by explorations in neuroscience. In contrast, Skinner (1945) suggested that, when defining psychological terms, it is useful to identify the conditions under which those terms are used, and the history of the verbal community that produces that usage. Consistent with this analysis, Schlinger (2008) proposed that consciousness is best understood with a focus on the behaviors that are associated with the use of the word consciousness (e.g., self-talk, private behavior), rather than the study of the reified thing itself.

Consciousness, defined as a set of verbal behavior, is a prerequisite for a discussion of morality. Verbal behavior is required for us to evaluate our own subjective well-being in relation to the well-being of others and to identify the relative “goodness” or “badness” of each. Indeed, in other media (cf. The Richard Dawkins Foundation, 2011), Harris has suggested that a universe of rocks could not define a science of morality, because consciousness (i.e., a verbal repertoire about one’s own behavior) is required to discuss subjective experience.

When providing a definition for the well-being that should be promoted, Harris likens this concept to physical health, noting,

Indeed the difference between a healthy person and a dead one is about as clear and consequential a distinction as we ever make in science. The difference between the heights of human fulfillment and the depths of human misery are no less clear even if new frontiers await us in both directions. (p. 12)

With this definition, it is possible to cast a wide net and capture a multitude of human behaviors and conditions. Harris suggests that, much like physical health, well-being eludes concise definition. Although the use of well-being is not precisely operationalized, he does define morality as “the principles of behavior that allow people to flourish” (p. 19). This phrasing is likely the closest to an operational definition of morality that is possible without undertaking the scientific analysis that Harris proposes in which fundamental principles to increase moral behavior could be discovered. The use of flourishing human life as a criterion for morality may be consistent with Skinner’s (1945) approach to evaluating terms as a function of the conditions in which they occur: Morality, for Harris, may be evident only when well-being is enhanced. The next step of the analysis would be to systematically identify the conditions that give rise to that flourishing human life, exploring the antecedents (e.g., having basic needs met, education, leisure time) and the consequences thereof. Behavioral technologies such as functional analysis (e.g., Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) may provide the tools required to successfully carry out this work.

A major premise of Harris’s work is that there is variability in the degree of “goodness” that individuals experience in life, and this variability can be accounted for by brain states and events in the external environment. If one accepts the distinction between “the good life” and “the bad life” and the idea that there are lawful patterns and factors that contribute to each of these outcomes (i.e., a deterministic framework), it allows the development of a scientific view of morality. This scientific view, according to Harris, stands as an alternative to traditional religious perspectives. Harris writes, “There is simply no question that how we speak about human values—and how we study or fail to study the relevant phenomena at the level of the brain—will profoundly influence our collective future” (p. 25). This theme can be found in the writings of Skinner (1971), who suggested that the scientific approach to the world’s practical problems can allow the development of solutions to those problems. Although Harris’s argument is framed in the language of neuroscience instead of Skinner’s behavioral perspective, a similarly pragmatic approach shows through.

The introduction of Harris’s book is wholly devoted to the qualification of values as scientific facts: verifiable statements about organisms and the environment around them. This argument, that utterances reflect or are symbolic of environmental events, should be familiar to readers acquainted with Skinner’s conceptualization of a verbal community. If we are to accept that utterances about “moral behavior,” “morality,” or “ethics” are not importantly different from other verbal behavior, then they too can become a topic for scientific inquiry. Such an analysis could evaluate the conditions under which this verbal behavior is emitted and the consequences thereof. With this understanding of the contingencies of reinforcement that promote and maintain these responses, it would be possible to shape the moral behavior of individuals or groups.

Harris posits that “science can, in principle, help us understand what we should do and should want—and, therefore, what other people should do and should want in order to live the best lives possible” (p. 28). This is congruent with Skinner’s acceptance of the value judgment (i.e., is or ought statements) as a tool to reveal the oftentimes subtle contingencies that control social behavior. Harris describes well-being as the conceptual basis for morality and values, stating, “there must be a science of morality … because the well-being of conscious creatures depends upon how the universe is, altogether” (p. 28). Bringing morality into the natural world makes it amenable to scientific study, and Harris’s book complements the work that behavior analysts have done with respect to questions of morality.

The behavior-analytic approach to values and morals has its origin with Skinner, who suggested that things that individuals call good are reinforcing, and that “any list of values is a list of reinforcers” (1953, p. 35). When describing Skinner’s approach, Ruiz and Roche (2007) commented that “it is important to provide translations of value statements in functional terms in order to reveal the relevant contingencies of reinforcement” (p. 4). Thus, as with the discussion of consciousness above, the conditions under which particular behaviors are morally correct or incorrect must be considered. This functional approach may expand on Harris’s proposed science of values and make it more acceptable to a behavioral audience.

Distinguishing Between Philosophical Positions on Morality

A large portion of Harris’s book differentiates between the religious notions of values and morality and the scientific principles thereof. Harris suggests that religious concerns about morality are related to human well-being. In Chapter 1, he describes an agenda of finding scientific truth about questions of morality. To deal with the relative unpopularity of his approach (Harris reports that more people in contemporary American society believe that morality should stem from religious rather than from scientific inquiry), he asserts that consensus and truth are not the same thing: “One person can be right, and everyone else can be wrong. Consensus is a guide to discovering what is going on in the world, but that is all that it is. Its presence or absence in no way constrains what may or may not be true” (p. 31). Harris reports that 57% of Americans believe that preventing homosexual marriage is a moral imperative (p. 53), a clear example of a widely held belief that impairs the advancement of well-being.

With respect to differences between perspectives of different groups, Harris writes, “those who do not share our scientific goals have no influence on scientific discourse whatsoever; but, for some reason, people who do not share our moral goals render us incapable of even speaking about moral truth” (p. 34). Here, Harris is suggesting that religious beliefs, which may be incorrect according to other epistemological systems (e.g., science), prevent other systems from declaring them to be incorrect. However, religious belief systems do comment on the “truth” of empirical inquiries, a double standard with which Harris takes issue, and a concern that is expressed by other authors such as Dawkins (2006). The ability of science to comment on affairs related to religion and morality has the potential for the further advancement of human well-being via the development of new ideas and technologies. Without a scientific response to these issues, progress seems less likely.

To determine the merits of given philosophical systems, one can adopt a relativistic position. Relativism is the belief that points of view have no absolute truth. This tradition is largely a by-product of scientific skepticism, and can be just as harmful to a science of morality as any religious doctrine. By Harris’s account, moral relativism is endemic throughout the scientific community. This is problematic for the development of the theoretical moral landscape because historically science has “had no opinion” on moral issues, which Harris ascribes to a fear of retribution by religious groups, political agendas, or intellectual laziness; he objects to the continuance of this harmful tradition. Harris suggests that relativism is accepted as an absolute position and is not subject to a contextual analysis (that relativism itself should require). He points out that this absolute acceptance of a relativistic worldview is fundamentally contradictory to the principle of relativism itself. If we are to believe that the practices in question (examples that Harris highlights include female genital mutilation and subjugation of women) are correct in the relevant cultural and historical time period, this belief must also be cast as relative and changeable, which it generally is not. In addition, Harris suggests that relativistic positions may lead to misguided beliefs about how to improve human well-being.

Perhaps at odds with Harris’s analysis, Skinner suggested that there are multiple sets of values that may emerge across cultural settings: “Each culture has its own set of goods, and what is good in one culture may not be good in another” (1971, p. 122). The reinforcers (i.e., values) identified across cultures necessarily vary as a function of the different physical and cultural environments in which the moral systems develop. For Skinner, the criterion by which to evaluate the goodness of a cultural practice is the degree to which it promotes survival of the society. Thus, although there are potentially many different ways for a culture to survive, there may be some that maximize the level of well-being of the individuals and the group. Skinner’s position is pragmatic, but has garnered criticism from within the behavior-analytic community (e.g., Ruiz & Roche, 2007). Critiques of the cultural survivability criterion emphasize the impossibility of determining which cultural practices will, in fact, enhance survivability without definite knowledge of the future. Ruiz and Roche (2007) called “for behavior analysts to consider seriously where we as a community stand on relativism and to discuss openly and thoroughly the criteria we will use in adopting ethical principles” (p. 11). Harris’s position of rejecting moral relativism in favor of universal principles to promote well-being may help to inform the behavior-analytic discourse.


After establishing that our beliefs can, indeed, be incorrect or somehow inconsistent with reality, Harris qualifies his argument. Citing research conducted in his own laboratory on the neuroscience of belief, he posits that there is no difference between what we deem to be “knowledge,” “belief,” and “truth,” and these utterances can be attributed to functionally equivalent neurological correlates. Indeed, the brain’s endogenous reward systems reinforce beliefs and utterances that we deem “true” with positive emotional valence. He writes,

When we believe a proposition to be true, it is as though we have taken it in hand as part of our extended self, we are saying in effect, “This is mine. I can use this. This fits my view of the world.” (p. 121)

This evidence from neuroscience supports the notion that values, knowledge, belief, and truth belong to the same class of verbal behavior, but may not necessarily share discriminative stimuli (Skinner, 1945). Taking this research to its logical conclusion, one can suggest that an individual’s learning history would dictate which beliefs, truths, or bits of knowledge could fit into a person’s worldview. By Harris’s account, we dislike information that contradicts our worldviews as much as we dislike being lied to. With this bias established, it is easier to see precisely how maladaptive or harmful beliefs can be propagated.

Organism-Environment Interactions

In Chapter 2, Harris suggests that an understanding of the human brain and its states will allow an understanding of forces that improve society (e.g., prosocial behavior). He writes,

As we better understand the brain, we will increasingly understand all of the forces … that allow friends and strangers to collaborate successfully on the common projects of civilization. Understanding ourselves in this way and using the knowledge to improve human life, will be among the most important challenges to science in the decades to come. (pp. 55–56)

Cooperation is one of the mechanisms through which values may come about, and Harris contends that “there may be nothing more important than human cooperation” (p. 55). Conceptualizing the failures of cooperation as the everyday grievances of theft, deception, and violence, it is plain to see how failing to cooperate can be an impediment to human well-being and moral development.

Harris emphasizes the role of consequences in the formation of values, suggesting that

all questions of value depend upon the possibility of experiencing such value. Without potential consequences at the level of experience—happiness, suffering, joy, despair etc.—all talk of value is empty … even within religion, therefore, consequences and conscious states remain the foundation of all values. (p. 62)

In this quote, Harris suggests the power of consequences to effect change in behavior. In so doing, Harris takes morality out of his context of neurological events and places it into an environmental framework. Although he does acknowledge the behavior–environment interaction as a cause for moral responding, a behavior-analytic approach would go further, emphasizing the power of consequences to increase or decrease (i.e., reinforce or punish) the likelihood that moral behavior would occur. It is the consequences of behavior that make it more or less likely to occur in a selectionist framework (cf. Glenn & Madden, 1995), and those same consequences seem to work similarly at the neurological level (e.g., Stein, Xue, & Belluzzi, 1994). Thus, it is the interaction between the environment and the organism that leads to the development of any behavior, including moral responses and those associated with varying degrees of well-being.

Taking this environment-based approach, Harris presents contemporary research from neuroscience throughout his book. After describing the neurological precursors and correlates of behavior, he dismisses the notion of free will, citing additional biological data to suggest that it is the brain—and not an agent of free will—that is responsible for behavior. He makes a familiar argument for a deterministic framework, in which the historical and contemporary environments (including neurological states) are responsible for behavior. After dismantling free will, Harris describes ramifications for the justice system. By this account, we can no longer hold people accountable for their actions, because those actions are determined by historical and contemporary events. This view negates a justice system based on punishment or retribution. Consistent with a behavioral position (e.g., Chiesa, 2003), Harris suggests that, with increased knowledge about the brain (for which we may be able to substitute behavior without losing any meaning, because the brain necessarily belongs to a complete organism), reforms of the justice system may be necessary. A reformed justice system would be more compassionate based on its more accurate understanding of the causes of behavior (i.e., the environment and biological states). Harris takes this position to an extreme, proposing that it may even be immoral to fail to consider environmental and biological factors within the context of the justice system. Here, there is a fundamental compatibility between the approach that Harris is advocating and a behavioral worldview.

Indeed, the understanding of proximate and ultimate causes that precede any event are essential to making logically coherent arguments, not just from the perspective of the justice system but in understanding the behavior of all organisms. The ultimate cause of many reprehensible human behaviors lies in the distant evolutionary past. Proximate causes can be shaped over the course of single lifetimes and may covary with environmental stimuli (Mayr, 1961; Skinner, 1981). For the purposes of Harris’s argument, we will agree that the nervous system and the brain are proximate causes of behavior, but these were influenced by both the evolutionary history of the species and the learning environment of the individual (cf. Schlinger & Poling, 1998, pp. 39–41). A more thorough discussion of ultimate cause may be a better locus to develop the science of moral behavior which he calls for.

A Program for Changing Moral Behavior

If Harris’s claims that morality is knowable through scientific processes are true (and, based on his arguments in the book, we, at least, are convinced), behavior analysis ought to be at the forefront of the emerging science of morality. As a discipline, behavior analysis is uniquely positioned to deal with matters that span the continuum of well-being and suffering (i.e., the peaks and valleys of Harris’s moral landscape). Behavior analysis has a history of developing and using demonstrably effective behavior-change procedures. Because Harris’s neurological correlates of well-being are isomorphic with human behavior, the methods of experimental and applied behavior analysis could be used to support the type of work that Harris proposes.

In the first chapter of the book, Harris outlines three primary directions that work in the science of morality can take: (a) Explain why people engage in particular behavior “in the name of morality”; (b) “determine which patterns of thought and behavior we should follow in the name of ‘morality’”; and (c) convince people “who are committed to silly and harmful patterns of thought and behavior in the name of ‘morality’ to break these commitments and to live better lives” (p. 49). Behavior analysts have the conceptual framework and behavior-change techniques to potentially make meaningful contributions to each of these goals.


In The Moral Landscape, Harris begins to develop a science of morality that he believes could be used to maximize the well-being of humans. Although his approach to morality is largely grounded in neuroscience (rather than the study of the whole organism), he does present an environment-based approach to morality to a wide readership, continuing in the tradition of other recent works that have espoused secular worldviews (e.g., Dawkins, 2006; Hitchens, 2007). Harris’s approach to the development of the science of morality is largely consistent with the behavioral approach; for him, as for behavior analysts, morality is behavior, and that behavior is subject to environmental (and biological) manipulation.

As part of his work, Harris describes the need for methods of producing change in moral behavior; the discipline of behavior analysis is ideally suited to contribute to this mission. The application of behavioral techniques to socially significant problems has been a hallmark of behavior analysis ever since Baer et al. (1968) laid the foundations of applied behavior analysis. Behavior change has been demonstrated from the level of the individual to the level of society, and these same principles could be applied to moral behavior, as described by Harris, to promote universal well-being. No other discipline matches behavior analysis in its scientific understanding of behavior or its tools to modify it. If Harris is correct that science should take an active role in determining human values, behavior analysts must be a part of that conversation.


We thank Mirari Elcoro for her thoughtful comments on a previous version of this manuscript.


References

  • Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.
  • Baum, W. M. (2005). Understanding behaviorism: Behavior, culture, and evolution (2nd ed.). Malden, MA: Blackwell.
  • Chiesa, M. (2003). Implications of determinism: Personal responsibility and the value of science. In K. A. Lattal & P. N. Chase (Eds.), Behavior theory and philosophy (pp. 243–258). New York, NY: Kluwer Academic/Plenum.
  • Dawkins, R. (2006). The God delusion. New York, NY: Houghton Mifflin.
  • Elcoro, M. (2008). Including physiological data in a science of behavior: A critical analysis. Brazilian Journal of Behavioral and Cognitive Therapy, 10, 253–261.
  • Galuska, C. M. (2003). Advancing behaviorism in a Judeo-Christian culture. In K. A. Lattal & P. N. Chase (Eds.), Behavior theory and philosophy (pp. 259–274). New York, NY: Kluwer Academic/Plenum.
  • Glenn, S. S., & Madden, G. J. (1995). Units of interaction, evolution, and replication: Organic and behavioral parallels. The Behavior Analyst, 18, 237–251.
  • Gould, S. J. (1999). Rocks of ages: Science and religion in the fullness of life. New York, NY: Ballantine.
  • Harris, S. (2005). The end of faith: Religion, terror, and the future of reason. New York, NY: Norton.
  • Harris, S. (2010). The moral landscape: How science can determine human values. New York, NY: Free Press.
  • Hitchens, C. (2007). God is not great: How religion poisons everything. New York, NY: Twelve Books, Hachette Book Group.
  • Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197–209. (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3–20, 1982)
  • Mayr, E. (1961). Cause and effect in biology. Science, 134, 1501–1506.
  • Reese, H. W. (1996). How is physiology relevant to behavior analysis? The Behavior Analyst, 19, 61–70.
  • The Richard Dawkins Foundation. (2011). Who says science has nothing to say about religion? [DVD].
  • Ruiz, M. R., & Roche, B. (2007). Values and the scientific culture of behavior analysis. The Behavior Analyst, 30, 1–16.
  • Sagan, C. (2006). The varieties of scientific experience: A personal view of the search for God. New York, NY: Penguin.
  • Schaal, D. W. (2003). Explanatory reductionism in behavior analysis. In K. A. Lattal & P. N. Chase (Eds.), Behavior theory and philosophy (pp. 83–102). New York, NY: Kluwer Academic/Plenum.
  • Schlinger, H. D. (2004). Why psychology hasn’t kept its promises. The Journal of Mind and Behavior, 25(2), 123–144.
  • Schlinger, H. D. (2008). Consciousness is nothing but a word. Skeptic, 13, 58–63.
  • Schlinger, H. D., & Poling, A. (1998). Introduction to scientific psychology. New York, NY: Plenum.
  • Skinner, B. F. (1945). The operational analysis of psychological terms. Psychological Review, 52, 268–277.
  • Skinner, B. F. (1953). Science and human behavior. New York, NY: Macmillan.
  • Skinner, B. F. (1971). Beyond freedom and dignity. New York, NY: Knopf.
  • Skinner, B. F. (1976). Walden two. New York, NY: Macmillan.
  • Skinner, B. F. (1978). Reflections on behaviorism and society. New York, NY: Prentice Hall.
  • Skinner, B. F. (1981). Selection by consequences. Science, 213, 501–504.
  • Skinner, B. F. (1987). What religion means to me. Free Inquiry, 7, 12–13.
  • Stein, L., Xue, B. G., & Belluzzi, J. D. (1994). In vitro reinforcement of hippocampal bursting: A search for Skinner’s atoms of behavior. Journal of the Experimental Analysis of Behavior, 61, 155–168.

Foreign Policy: Diplomats can be developed

February 1, 2018


For a long time it was held that a diplomat is born, not made, and that it is impossible to produce one by training. This view rested on a failure to distinguish between the personal characteristics and qualities of the diplomat on the one hand, and the knowledge and skills he needs to do his job on the other. Whereas the first are indeed part of the physical and mental makeup a person is born with, the second can and must be taught. The days when any well-born and well-bred dilettante of great personal charm could handle diplomatic business on the strength of these in-born and in-bred qualities are long past, if they ever truly existed. However, there are some characteristics and qualities a diplomat should possess if he is to perform well in his profession, however vast his acquired knowledge and skills may be. Thus, before turning to the issue of training, we should spend a few moments considering what these qualities and characteristics are, or should be.


America’s Top Diplomats from Henry Kissinger to John Kerry

Diplomacy is not for the sickly, the weak, the neurotic or the introverted. A robust constitution and good health are needed to withstand the physical and mental strain put on diplomats in many situations. Being able to sleep well in almost any circumstances is of great help. A well-balanced personality, good self-control, natural inquisitiveness, and an interest in understanding others and their manner of thinking are also essential. These should be complemented by a friendly and outgoing nature, natural courtesy and good manners, and a capacity to create empathy and develop friendships. A gift for languages is a great asset, because being able to communicate with opposite numbers in their own language is becoming increasingly important, especially in some less traditional forms of diplomacy.

What must a Diplomat know?

For a long time diplomats studied history, languages and law, and this was seen as sufficient. Even today, lawyers are over-represented in foreign ministries. A quick look at the subject matter of present-day international relations should suffice to impress on anyone the importance of multi-disciplinary academic knowledge. To that extent the generalist-specialist controversy does not exist at all. All diplomats must have a basic familiarity with history, law, economics and political science. It is therefore not surprising that the curricula of all respected training institutions include these subjects.

Dr. Condoleezza Rice, an expert on Russia, served as Secretary of State, succeeding General Colin Powell


But diplomats must also be able to acquire specialist knowledge in nearly any subject when needed, whether to assume a certain position within the diplomatic establishment or to handle a temporary task such as a specific mission or negotiation. It is therefore important, when providing initial training or when supplementing training undergone prior to admission to the career, to promote the capacity for assimilating unfamiliar subjects at short notice. A diplomat who has this ability can look forward to a variegated career, whereas one who finds it difficult to assimilate new knowledge is likely to spend his life dealing with matters well within his range of competence, becoming a kind of specialist not considered for assignments handling other matters.

How should a Diplomat be trained?

In many countries with an old diplomatic tradition, candidates for the diplomatic service are expected to come with a sufficient baggage of basic academic knowledge to make initial training in such fields unnecessary. They undergo tests and examinations to make sure that they possess such knowledge. Training after recruitment is restricted mainly to teaching professional skills and to adding to basic academic knowledge specialised subjects of particular importance for diplomatic activities. Language training often occupies a predominant place in such systems. Other countries prefer to recruit candidates to whom basic academic disciplines for diplomacy are taught during a training stage. This kind of basic training is also provided by regional institutions such as the Mediterranean Academy of Diplomatic Studies, as many countries cannot afford to provide basic training to the few diplomats they recruit every year or only from time to time.


For many years the need for continuous training of diplomats has been recognised, but little headway has so far been made toward meeting it. This is understandable, as diplomats once recruited are supposed to spend their time working, not learning. Current budgetary constraints make it even more difficult to release a diplomat for any kind of continuous training. On the other hand, the rapidly changing content of diplomatic interaction and of the methods used makes in-career training an inescapable necessity. Fortunately, as we shall see, new training approaches and facilities make it easier to respond to this necessity without unduly disrupting a diplomat’s activity.

The evolution of training approaches and methods

Institutions training candidates for diplomacy or offering training for beginning diplomats were mostly offspring of universities or strongly influenced by academic teaching methods. When foreign ministries started to set up in-house training establishments, these again mostly relied on university lecturers for the teaching of academic subjects. Thus ex-cathedra lecturing was the dominant approach, sometimes complemented by seminars. Where practising diplomats were used to convey their experience to newly recruited colleagues, the lecture method was invariably used.

Only in the 1960s were simulations of imagined or real diplomatic situations introduced to a meaningful extent. Diplomat trainees were made to simulate pleadings before arbitral or judicial tribunals, negotiations or even complex international crises. Pioneers in this field were the Stabex exercises conducted at the Graduate Institute of International Studies for the trainees of the Carnegie diplomatic training courses. In keeping with the reluctance in those days to upset any existing country or government, most exercises were set between invented Ruritanias, any resemblance to actual countries being “entirely coincidental.” These days simulations take much more concrete approaches. Participants simulate a crisis or negotiation that has already taken place (with the intent of showing that the historical outcome was not the only possible one), simulate upcoming negotiations, or even run a simulation alongside an ongoing negotiation. Such exercises allow diplomat trainees to immerse themselves in the reality of past or ongoing events rather than amuse themselves with imagined “games.” Conducting them requires the expertise of seasoned negotiators who are in the thick of ongoing activities; they are not always easy to get hold of, and not all of them have the ability to convey their expertise to participants.

As in other fields, information technology has introduced new possibilities and methods for the training of diplomats. Computer-assisted and computer-based training allows trainees to take part in shaping their own training. By breaking subjects down into relatively small teaching modules, it has become possible to move from the basics into any degree of detail. As a result, basic and continuous training become interlinked. A diplomat who has to assimilate specialised knowledge in a given field can start by going back to what he already knows or, if he is totally unfamiliar with the subject area, by acquainting himself with the basics. He can then proceed gradually towards what he really needs, and thus find it relatively easy to achieve a considerable degree of mastery.

Information technology also allows training to become delocalised. Trainers and trainees can interact in cyberspace without having to be physically present in the same place. This enormously facilitates continuous training, as a diplomat can do a lot of learning by himself, on his computer, at the time and for the duration of his convenience. Interaction in real time then starts from this base and becomes much more intensive and lively. As indicated, a special branch of continuous learning is the preparation for a given mission or event. In the case of multilateral negotiations, chosen negotiators can do their basic learning together and even simulate their interaction before the real event. This should reduce the duration of actual meetings, a constant preoccupation of cash-strapped international institutions.

Consequences for training institutions

Should such institutions abandon their present methods of teaching, send home their students and proceed to teach them over the Internet? This would certainly be an unwelcome and extreme approach. Every teacher and most students know how important physical interaction is. Sharing not only classroom hours but also work, study and discussion of matters not immediately related to the teaching programme is an essential element of learning. Moreover, the fundamental task of the diplomat is interpersonal contact and interaction. All this can at least to some extent be simulated in cyberspace, but sometimes the real thing is needed, especially in more recent forms of diplomatic interaction, where the diplomat must meet people who are not diplomats, distrust diplomats, want to be physically present with their guns and do not believe in cyber-interaction.

The approach chosen by the Mediterranean Academy of Diplomatic Studies for its distance learning programmes should therefore be highly commended. Trainees and trainers spend an initial period together in Malta or some other location before repairing to their workplaces and resuming interaction from there. Preliminary trial runs have shown that in ten days of intense cohabitation and interaction participants of a programme become a family whose members henceforth feel at ease with each other also in cyberspace.

As we are at the outset of what may well be termed a revolution in teaching approaches and methods, individual training institutions should feel free to find their own approach, for which cultural characteristics of those involved may also play an important role. It will be interesting to meet again some years hence in Malta—and not in cyberspace—to compare notes on experience acquired.

Best-Selling Author Tells GW Students to Reflect and Contemplate

January 30, 2018


Dr. William Deresiewicz said students should learn to think before trying to change the world.



Author William Deresiewicz urged students to reflect and think before committing to improving the world. (Harrison Jones/GW Today)


By B.L. Wilson

At elite private colleges, the social cost of dissent is high and the progressive consensus tight, according to William Deresiewicz, author of the best-selling book “Excellent Sheep,” who compares universities to what sociologists call “total institutions,” such as monasteries, prisons, mental institutions and the military.

This is notwithstanding the desire of students at colleges like George Washington University to have an impact and make the world a better place.

“Your generation is to be commended for this new spirit that is abroad among America’s youth,” Dr. Deresiewicz said, “a zeal for activism and social justice that hasn’t been seen since the 1970s.”

Dr. Deresiewicz told students that “reflection, contemplation, analysis, study – in a word thought” should precede their commitment to making the world a better place.

Mount Vernon, located in Fairfax County, Virginia, was the plantation home of George Washington, the first President of the United States. The property alongside the Potomac River was first acquired in 1674 by Washington’s great-grandfather, who became a successful tobacco planter with the help of slave labor and indentured servants. Young George Washington came into possession of the estate in 1754, when he was about 23 years old, but he did not become the property’s sole owner until 1761. The estate served as the centerpiece of Washington’s military and political life, and the site stands as a powerful symbol of the birth of the American nation.

Introducing the author to students and faculty crowded into Ames Hall on the George Washington University Mount Vernon Campus, Maria Frawley, executive director of the University Honors Program and professor of English, said that part of his book’s subtitle, “The Way to a Meaningful Life,” was what appealed to her. “It is what all of us educators and students care most about,” she said.

Recent tensions on college campuses between freedom and equality and the struggle over restrictions on offensive speech, Dr. Deresiewicz said, prompted him to come up with a response.

He contended that the homogeneity of student populations at elite colleges, often drawn from the liberal upper and middle classes and multiracial but predominantly white, accounts for a progressive orthodoxy of opinion that almost approaches religious dogma.

“Secularism is taken for granted. Environmentalism is a sacred cause. Issues of identity – principally the holy trinity of race, gender and sexuality – occupy the center of the discourse,” Dr. Deresiewicz said. “The assumption, on the left, is that we are already in full possession of the moral truth.

“The central purpose of a real education, as in liberal arts education,” he said, “is to liberate us from what Plato called doxa or opinion by teaching us to recognize it, to question it and to think our way around it.”

A liberal arts education includes not only humanities disciplines such as English, history and philosophy but also the sciences, in which the pursuit of knowledge is conducted for its own sake, he said.

You read King Lear not to master it, he suggested. “You read King Lear for what it does to you, for the way it changes you,” he said, “and hopefully that experience enhances your mind’s capacity for experience, and the ability to learn from it.”

Bringing the talk back to where he began, Dr. Deresiewicz said the humanities lead to reflection on the big questions that are persistent questions because no one has the answers. “The heart of reflection is self-reflection,” he said. “The essence of knowledge is self-knowledge.”

Reflection, he said, can help students achieve wisdom, an application of knowledge often associated with age. “For all the desire to change the world,” he said, “it will likely take a long time to have the real power to do so.”

Asked where he would draw the line in making students uncomfortable, Dr. Deresiewicz said even though right wing groups are often deliberately provocative, he agreed with a University of Chicago dean that colleges should provide no spaces safe from debate and uncomfortable discussions.

Allen Wang, a GW freshman and an international business major, said, “Students come to GW because it is very powerful in specific tracks such as international affairs and public health. But the talk was extremely topical and eye-opening and, more importantly, inspiring because of the spirit Dr. Deresiewicz tried to communicate about academic uncertainty and the truth.”


On Knowledge and statecraft

January 24, 2017


by Muhammad Husni Mohd Amin

The 3 UMNO Goons – Dr. Zahid Hamidi, Hishamuddin Hussein and Najib Razak. They do not qualify as Philosopher-Kings. They are Malaysia’s penyamun tarbus.


In Plato’s Republic, the philosopher-king is a leader who loves and embodies the cardinal virtues of wisdom, temperance, courage and justice. Therefore, the community that produced him would dispense with the mechanisms of democracy meant to curtail misuse of power by corrupt politicians who preyed upon the masses because of their ignorance.

Former British Prime Minister Winston Churchill once said, “It has been said that democracy is the worst form of government, except all those others that have been tried.” This may only refer to the inadequacies of the present set-up in producing leaders who do not require constant oversight.

The leader reflects the people. The Prophet said, “As you are, so shall your leader be.” He also said, “Each of you is a shepherd (ra‘in) and each of you is responsible for his flock (ra‘iyyah)”.

The Arabic word ra‘iyyah, from which the Malay word rakyat originated, has its root in ra‘in, which also means guide, guardian or caretaker. In the worldview of Islam, both the leader and the people form a unity; they are like a single body.

The Prophet also prophesied the emergence of leaders (umara) who “will be corrupt but God may put much right through them”. Therefore, the people are obliged to be thankful when leaders do good and patient when the leaders commit evil.



The Proof of Islam, Imam al-Ghazali, in his Ihya’ ‘Ulum al-Din (Revival of the Religious Sciences), stated that religion is established through the sultan, who is not to be belittled.

We should not justify a wrongdoing when it is proven, but our limited senses may often lead us to believe that no good may come out of the things we perceive as evil because we think evil is the absence of good.

While weed follows the cultivation of rice and there seems to be no good in growing weed, it does not stop us from planting and harvesting the rice.

A well-known Sufi figure, Fudayl ibn ‘Iyad, said, “If I had one supplication that was going to be answered, I would make it for the sultan, for the sultan’s well-being and righteousness means well-being for the land and its people.”

Another Sufi figure, Sahl al-Tustari, was once asked, “Who is the best among men?” He replied that it was the ruler, which surprised his inquirers because it was thought that rulers were the worst.

Sahl continued, “Don’t be hasty! God Most High has two glances every day: one is for the safety of the Muslims’ possessions and another for their bodies. Then, God looks into the Register of Deeds and forgives him all his sins (for his protection of both).”

But the precondition for forgiveness is that the ruler must protect both.

The establishment and statecraft of our centuries-old Malay sultanates mirrored those in Islam’s civilisational epicentre, which in turn were modelled after the Prophet’s Medina.

While colonial rule modernised our country’s administration, it did not abolish the sultanates but merely interrupted them. However, colonisation also displaced the ulama’s traditional role in advising the Rulers.

It also severely impaired the ability to follow the Prophetic practice called shura in consulting scholars and learned men as well as the ability to recognise and acknowledge them properly. This is the reason for today’s greater need for checks and balances.

Even so, we are lucky to be blessed with a unique system that combines constitutional monarchy and parliamentary democracy. This is the time when rulers work closely with the ruled towards the common good.

While our Rulers do not interfere in politics, adherence to royal protocols should not conceal the fact that the Rulers are in the best position to counsel the people so that they would choose the best stewards for the nation.


UMNO is full of learned members – the dedaks led by Big Momma

The counsel of learned people is important in guiding a ruler’s politics because statecraft is like a knife in the kitchen – a housewife could wield the knife as a utensil or a burglar as a weapon.

Muhammad Husni Mohd Amin is senior research officer at Ikim’s Centre for Science and Environment Studies. The views expressed here are entirely the writer’s own.

How Economics Survived the Economic Crisis

January 19, 2018


by Robert (Lord) Skidelsky

Unlike the Great Depression of the 1930s, which produced Keynesian economics, and the stagflation of the 1970s, which gave rise to Milton Friedman’s monetarism, the Great Recession has elicited no such response from the economics profession. Why?

LONDON – The tenth anniversary of the start of the Great Recession was the occasion for an elegant essay by the Nobel laureate economist Paul Krugman, who noted how little the debate about the causes and consequences of the crisis has changed over the last decade. Whereas the Great Depression of the 1930s produced Keynesian economics, and the stagflation of the 1970s produced Milton Friedman’s monetarism, the Great Recession has produced no similar intellectual shift.


The Conscience of a Liberal – Keynesianism, Friedmanite monetarism: macroeconomics still needs to come up with a big new idea.

This is deeply depressing to young students of economics, who hoped for a suitably challenging response from the profession. Why has there been none?

Krugman’s answer is typically ingenious: the old macroeconomics was, as the saying goes, “good enough for government work.” It prevented another Great Depression. So students should lock up their dreams and learn their lessons.

A decade ago, two schools of macroeconomists contended for primacy: the New Classical – or the “freshwater” – School, descended from Milton Friedman and Robert Lucas and headquartered at the University of Chicago, and the New Keynesian, or “saltwater,” School, descended from John Maynard Keynes, and based at MIT and Harvard.

Freshwater types believed that budget deficits were always bad, whereas the saltwater camp believed that deficits were beneficial in a slump. Krugman is a New Keynesian, and his essay was intended to show that the Great Recession vindicated standard New Keynesian models.

But there are serious problems with Krugman’s narrative. For starters, there is his answer to Queen Elizabeth II’s now-famous question: “Why did no one see it coming?” Krugman’s cheerful response is that the New Keynesians were looking the other way. Theirs was a failure not of theory, but of “data collection.” They had “overlooked” crucial institutional changes in the financial system. While this was regrettable, it raised no “deep conceptual issue” – that is, it didn’t demand that they reconsider their theory.

Faced with the crisis itself, the New Keynesians had risen to the challenge. They dusted off their old sticky-price models from the 1950s and 1960s, which told them three things. First, very large budget deficits would not drive up near-zero interest rates. Second, even large increases in the monetary base would not lead to high inflation, or even to corresponding increases in broader monetary aggregates. And, third, there would be a positive national income multiplier, almost surely greater than one, from changes in government spending and taxation.

These propositions made the case for budget deficits in the aftermath of the collapse of 2008. Policies based on them were implemented and worked “remarkably well.” The success of New Keynesian policy had the ironic effect of allowing “the more inflexible members of our profession [the New Classicals from Chicago] to ignore events in a way they couldn’t in past episodes.” So neither school – sect might be the better word – was challenged to re-think first principles.


This clever history of pre- and post-crash economics leaves key questions unanswered. First, if New Keynesian economics was “good enough,” why didn’t New Keynesian economists urge precautions against the collapse of 2007-2008? After all, they did not rule out the possibility of such a collapse a priori.

Krugman admits to a gap in “evidence collection.” But the choice of evidence is theory-driven. In my view, New Keynesian economists turned a blind eye to instabilities building up in the banking system, because their models told them that financial institutions could accurately price risk. So there was a “deep conceptual issue” involved in New Keynesian analysis: its failure to explain how banks might come to “underprice risk worldwide,” as Alan Greenspan put it.

Second, Krugman fails to explain why the Keynesian policies vindicated in 2008-2009 were so rapidly reversed and replaced by fiscal austerity. Why didn’t policymakers stick to their stodgy fixed-price models until they had done their work? Why abandon them in 2009, when Western economies were still 4-5% below their pre-crash levels?

The answer I would give is that when Keynes was briefly exhumed for six months in 2008-2009, it was for political, not intellectual, reasons. Because the New Keynesian models did not offer a sufficient basis for maintaining Keynesian policies once the economic emergency had been overcome, they were quickly abandoned.

Krugman comes close to acknowledging this: New Keynesians, he writes, “start with rational behavior and market equilibrium as a baseline, and try to get economic dysfunction by tweaking that baseline at the edges.” Such tweaks enable New Keynesian models to generate temporary real effects from nominal shocks, and thus justify quite radical intervention in times of emergency. But no tweaks can create a strong enough case to justify sustained interventionist policy.


The problem for New Keynesian macroeconomists is that they fail to acknowledge radical uncertainty in their models, leaving them without any theory of what to do in good times in order to avoid the bad times. Their focus on nominal wage and price rigidities implies that if these factors were absent, equilibrium would readily be achieved. They regard the financial sector as neutral, not as fundamental (capitalism’s “ephor,” as Joseph Schumpeter put it).


Paul Anthony Samuelson (1915-2009)

Without acknowledgement of uncertainty, saltwater economics is bound to collapse into its freshwater counterpart. New Keynesian “tweaking” will create limited political space for intervention, but not nearly enough to do a proper job. So Krugman’s argument, while provocative, is certainly not conclusive. Macroeconomics still needs to come up with a big new idea.

*Lord Skidelsky, Professor Emeritus of Political Economy at Warwick University and a fellow of the British Academy in history and economics, is a member of the British House of Lords. The author of a three-volume biography of John Maynard Keynes, he began his political career in the Labour party, became the Conservative Party’s spokesman for Treasury affairs in the House of Lords, and was eventually forced out of the Conservative Party for his opposition to NATO’s intervention in Kosovo in 1999.


Next to Read:

Remembering Nelson Mandela and India’s Rocket Man Dr. A P J Abdul Kalam

December 6, 2017

Remembering Nelson Mandela and India’s Rocket Man Dr. A P J Abdul Kalam – Both were embodiments of Moral Leadership, which is sadly lacking in the world today. –Din Merican


5 Principles for Moral Leadership

Accomplished leaders are like master craftsmen: their first principles are best practices, the felt wisdom of experience and reflection.

Take Benjamin Franklin. In his Autobiography, he describes 13 precepts for self-improvement he coined as a young man. They include Resolution (“Resolve to perform what you ought; perform without fail what you resolve”), Industry (“Lose no time; be always employ’d in something useful; cut off all unnecessary actions”), and Order (“Let all your things have their places; let each part of your business have its time”).


When the penniless printer from Philadelphia became one of the leading men in America, his admirers understood the enormous benefit his example could provide. “[Y]ou yourself framed a plan by which you became considerable,” observed one, who implored Franklin to share it in hopes of “aiding all happiness, both public and domestic.”

For inspiration, I assign Franklin’s Autobiography to students in my business ethics class at the University of Chicago Booth School of Business. Then I challenge them to derive principles of their own with an eye toward strong moral leadership. With their permission, I wanted to share five favorites from my fall class.

1. Put a Face on It

Unlike Franklin’s experience, many of our work relationships will involve people we will never meet. Dan, an IT professional, makes the obvious but often overlooked point that it is “easy to engage in unethical or immoral behavior when you don’t have to see the person whom you are affecting.” Accordingly, he always tries to put a face with a name, finding photos of the people he interacts with on LinkedIn or other online directories. “Associating a face with the interactions reminds me that my actions affect a real person,” he says, “not just some faceless name in an email address line.”

2. Manage by Listening Rather Than Telling

Unusually precocious, Franklin knew the awkward status of being junior in age but senior in position. Drawing on her own experience working for an industrial supplier, Lindsay observes that a promising associate is often placed in a leadership role “before she may be ready,” with the result that she finds herself “fighting an uphill battle to do well and gain the respect of those around her with more tenure and experience.” Accordingly, Lindsay contends that one must establish a professional dynamic of mutual respect. “I am only successful if the people I manage have my back and respect me,” she says. “I am nothing if I do not respect and support the work that they do day in and day out.”

3. Be Flexible, Not Dogmatic

Franklin’s rejection of a rigid approach to problem solving spoke to Drew, a corporate trust analyst. “Business leaders need to be flexible and not dogmatic about their beliefs and intellectual frameworks,” he says. Reflecting on Alan Greenspan’s leadership in the years before the financial crisis, he faults the former Fed Chair not for failing to anticipate the crisis, but for believing that such an event could never occur. “Greenspan relied too heavily on frameworks,” he says, “and not enough on doing everything in his power to rationally understand what was going on and make adjustments to his policies as needed.” For Drew, strict adherence to dogma not only binds a leader’s hands, it can blind him to problems his framework won’t admit.

4. Follow Published Rules of Conduct

Franklin wrote his precepts in a memorandum book he carried with him wherever he went. The aim was to remind him of the behavior he aspired to — and to shame him whenever he failed to live up to it. An executive at a Fortune 50 company, Megan observes that, while the “Code of Conduct” is a mainstay of the modern office, “many people disregard these published rules.” Such a tendency not only undermines the rules; when managers flout them, it also reinforces a spirit of lawlessness. A fish rots from the head down. If rules are important enough to be written down, they are important enough to be followed — by everyone.

5. Respect the Bottom Line, but Don’t Worship It

“[A]fter getting the first hundred Pound,” Franklin observed in the Autobiography, “it is more easy to get the second.” Yet, as Patrick, a student with experience working in renewable energy, observes, the additional gain can sometimes come at too high a cost. “I don’t believe that the pursuit of profit on its face is immoral,” he says, but “I do believe that a relentless focus on profit often leads to immoral behavior.” The same may be said for any single-minded focus that excludes all other goods. For leaders, a sense of perspective and an ability to step back are essential to balancing moral integrity with corporate mission. At the same time, Patrick notes, the emergence of companies that have double or triple bottom lines of profit, social impact, and sustainability “indicates that certain businesses either share this principle or have very slick PR teams.”

Benjamin Franklin wasn’t above a little “slick PR” — how else to explain a book in which its author presents himself as a paragon of self-improvement? — but he believed that the appearance of integrity would inevitably be undone without the reality to support it. The principles described above are no doubt demanding, but so is any standard of leadership worth the trouble of writing down.

John Paul Rollert teaches business ethics at the University of Chicago Booth School of Business.