Bank Negara’s Independence at Risk

March 20, 2016

by John Berthelsen

The projected April retirement of Zeti Akhtar Aziz, the 68-year-old governor of Malaysia’s central bank, Bank Negara, has kicked off intense speculation about who will follow her into the job. Over recent months, as the investigation into the personal finances of Prime Minister Najib Razak has droned on, Zeti is almost the sole person in authority who has not knuckled under and let the investigation die.

It appears unlikely, given the political situation, that her successor will be Muhammad Ibrahim, the Deputy Governor and ranking professional in the bank. That could be a mistake. Bank Negara since the 1980s has suffered crisis after crisis as politicians, particularly former Prime Minister Mahathir Mohamad, have attempted to direct monetary policy.

In October, Bank Negara, which is charged with regulating the country’s financial institutions, credit system and monetary policy, issued a statement saying it had requested a criminal investigation into the affairs of the scandal-plagued 1Malaysia Development Bhd investment fund. Najib’s hand-picked Attorney General, Mohamad Apandi Ali, to whom the bank had forwarded the case, said there was no reason for prosecution and ultimately turned down the request.

Bank Negara, however, responded with a statement contradicting the Attorney General’s office, saying 1MDB had secured permits for investment abroad based on inaccurate or incomplete disclosure of information, breaching banking regulations. It added that it had revoked three permits granted to 1MDB for investments abroad totaling US$1.83 billion (RM7.53 billion) and ordered the state fund to repatriate the money to Malaysia.

Sources in Kuala Lumpur said Zeti, one of the world’s most respected central bankers, faced the danger of blackmail by forces aligned with Najib over concerns that the government might prosecute her husband, Tawfik Ayman, because of secret overseas accounts. Rosmah Mansor, the Prime Minister’s wife, was reportedly involved in a campaign to drive Zeti from her position.

The Wall Street Journal last week printed a story saying the fawning Irwan Serigar Abdullah, the Secretary General of the Ministry of Finance, was favored for the job. Serigar is considered close to Najib, and his appointment would be tantamount to handing control of the now-independent institution to the prime minister. That report, however, has been denied, and other candidates have been mentioned, including Dr. Awang Adek Hussain, the current Malaysian Ambassador to the United States, and Abdul Wahid Omar, the former Chief Executive Officer of Malayan Banking Bhd. who is now Minister in the Prime Minister’s Department in charge of Economic Planning. Both are considered clean.

The bottom line is that Najib, in the fight of his life over a plethora of international and domestic scandals, will want someone he can control. Thus the job probably won’t go to Muhammad, the bank’s deputy governor, who is assumed to be too independent for Najib’s tastes. He is a career banker who has risen up the ranks and who is not going to compromise his integrity.


In fact, the independence of the central bank has largely been engineered by Zeti, who is widely respected and credited for pushing reforms and sound policies, as well as protecting the bank’s independence. Prior to her appointment, former Prime Minister Mahathir Mohamad used the central bank for his own personal punching bag, causing it to lose billions of dollars.

In 1985, following the so-called Plaza Accord of finance ministers in New York, which pushed the value of the US dollar down sharply, Bank Negara’s dollar reserves plunged when the bank was wrong-footed. The then-governor, Jaffar Hussein, began trading speculatively in an effort to make up the losses, apparently at Mahathir’s behest. The bank became a major player in the foreign exchange market, with the Federal Reserve requesting that the bank rein in its activities. At one time, the central bank’s exposure was rumored to be in the region of RM270 billion (US$66.65 billion at current exchange rates) – three times the country’s gross domestic product and more than five times its foreign reserves at the time.

In 1992 and 1993, Mahathir became convinced he could make billions of ringgit by taking advantage of a British recession, rising unemployment and a decision by the British government to float the pound sterling free of the European Exchange Rate Mechanism.

Mahathir ordered Bank Negara to buy vast amounts of pounds sterling on the theory that the British currency would appreciate once it floated. However, in what has been described as the greatest currency trade ever made, the financier and currency wizard George Soros’s Quantum hedge fund established short positions, borrowing in pounds and investing in Deutschemark-denominated assets as well as using options and futures positions.

In all, Soros’s positions alone accounted for a gargantuan US$10 billion. Many other investors, sensing Quantum was in for the kill, soon followed, putting strenuous downward pressure on the pound. The collapse was inevitable. Quantum walked away with US$1 billion in a single day, earning Mahathir’s eternal enmity and earning Soros the title “the man who broke the Bank of England.”

Mahathir and Bank Negara, on the other hand, walked away with a US$4 billion loss, followed by another US$2.2 billion loss in 1993, the total equivalent of RM15.5 billion. Although the disastrous trades destroyed the entire capital base of Bank Negara, after first denying it had taken place, the then-Finance Minister Anwar Ibrahim repeatedly reassured parliament that the losses were only “paper losses.”

Eventually, the Finance Ministry had to recapitalize the central bank, almost unheard of for any government anywhere. It is reliably estimated that Bank Negara lost as much as US$30 billion in this and other disastrous currency trades, costing the head of the central bank and his currency trader deputy their jobs.

If anything, that ought to be an argument for a professional from the ranks of the central bank to take over the reins when Zeti steps down. But politics, especially the politics of preserving Najib’s job, has taken precedence in Malaysia. It would be foolish to bet against a candidate aligned with the Prime Minister.


The Rise and Decline of Islam

March 19, 2016

by Kassim Ahmad

Revised and expanded on March 19, 2016

The Quran in Surah Ali-‘Imran (3) states that “The only religion approved by God is Islam.” The Arabic word ‘deen’ essentially means ‘way of life’ rather than the restricted ritualistic meaning of the word ‘religion’.

Former political activist Kassim Ahmad speaks to journalists outside the Kuala Lumpur High Court in Kuala Lumpur on January 6, 2015. The Malaysian Insider/Najjua Zulkefli

This religion of strict monotheism was taught by all prophet-messengers, from Adam to its completion and perfection by Muhammad, the last of all prophet-messengers. But, as is the way with human beings, corruption and deterioration set in and completed their work after about 300 years (10 generations), changing the original teachings. Thus, the monotheism of Prophet Moses became polytheism in Judaism, that of Prophet Jesus polytheism in Christianity, and that of Muhammad polytheism in Sunnism. Sunnism is polytheistic in that it has elevated Muhammad to a second god, against his will. [1]

Sunnism is sectarian “Islam”, worshiping two gods. [2] Two gods are one too many. It is polytheism. Fortunately for mankind, the last of God’s scriptures, the Quran, is divinely protected so that all mankind can always refer to it as its guide.

This divine protection lies internally in the scripture, in a mathematically awesome and impossible-to-imitate structure called Code 19. This Code is stated in the Quran in Surah Al-Muddaththir (74), verses 30-31.

The verses go, “Over it is nineteen. We appointed angels to be guardians of Hell, and we assign their number (19) (1) to disturb the disbelievers, (2) to convince the Christians and the Jews (that this is a divine scripture), (3) to strengthen the faith of the faithful, (4) to remove all traces of doubt from the hearts of Christians, Jews, as well as believers, and (5) to expose those who harbor doubt in their hearts. The disbelievers will say, ‘What does God mean by this allegory?’ God thus sends astray whomever He wills, and guides whomever He wills. None knows the soldiers of your Lord except He. This is a reminder for the people.”

The rise of Islam began with the reign of Prophet Muhammad in the Arabian Peninsula in the early seventh century; within a short span of only sixty years, Islam shot up to be the Number One power in the then world, beating the two superpowers of the age, the Byzantine Empire and the Persian Empire.

Historian Philip K. Hitti, in his book History of the Arabs (1970), states: “If someone in the first third of the seventh Christian century had had the audacity to prophesy that within a decade some unheralded, unforeseen power from the hitherto barbarous and little-known land of Arabia was to make its appearance, hurl itself against the only two world powers of the age, fall heir to the one – the Sasanid – and strip the other – the Byzantine – of its fairest provinces, he would undoubtedly have been declared a lunatic. Yet that was exactly what happened.

After the death of the Prophet sterile Arabia seems to have been converted as if by magic into a nursery of heroes the like of whom both in number and quality is hard to find anywhere. The military campaigns of Khalid ibn-al-Walid and ‘Amr ibn-al-‘As which ensued in al-Iraq, Persia, Syria and Egypt remain among the most brilliantly executed in the history of warfare and bear favourable comparison with those of Napoleon, Hannibal or Alexander.” (p. 142)

A Western philosophical historian, Robert Briffault, in his epoch-making book The Making of Humanity (1919), after denouncing a conspiracy of silence by most Western historians on the contributions of Muslim science to modern Europe, summarised the contribution of Muslim science to civilization thus: “The debt of our science to that of the Arabs does not consist in startling discoveries or revolutionary theories; science owes a great deal more to Arab culture, it owes its existence. The ancient world was, as we saw, pre-scientific. The astronomy and mathematics of the Greeks were a foreign importation never thoroughly acclimatised in Greek culture.

The Greeks systematized, generalized and theorized, but the patient ways of investigation, the accumulation of positive knowledge, the minute methods of science, detailed and prolonged observation, experimental inquiry, were altogether alien to the Greek temperament. … What we call science arose in Europe as a result of a new spirit of inquiry, of new methods of investigation, of the method of experiment, observation, measurement, of the development of mathematics in a form unknown to the Greeks. That spirit and those methods were introduced into the European world by the Arabs.” (p. 191)

Muslim civilization lasted eight centuries. In that time, Baghdad became the capital of the world, and Europeans became students at the feet of Baghdad. When the rot set in, Europe took over the banner of civilization, and what is known as the European Renaissance began. Will Western leadership last forever? Only time can tell. But basing ourselves on its truncated epistemology, we can say that it cannot last forever, at most another two or three decades.

One of two things will happen. Either Europe and the United States will adopt the true revolutionary doctrine of Islam, which I characterize as “revolutionary, life-affirming, and death-defying”, or the Muslims themselves will be reborn with that true spirit of the Quran, borne out in the life of Prophet Muhammad and the early republican-democratic Caliphates.

In the meantime, Muslim leaders must answer the question of why the Muslim way of life, guaranteed by God, has collapsed, and how they can rebuild it. To answer this all-important question, they must re-study the Quran with a scientific methodology. I can suggest a few signposts.

First, at a certain point in time, Muslim science froze and deteriorated, due to wrong teaching of certain so-called masters. These were made into masters by a new priesthood class adopted in imitation of medieval Hinduism and Christianity. In Islam there is no priesthood class.

Second, at a certain point in time, an attitude of fatalism developed in Islam due to a new theology preached in accordance with hadith teachings. Hadiths are essentially fabrications falsely ascribed to the great name of Prophet Muhammad.

Third, that new theology also preached salvation in the Afterlife, in a nondescript Theologians’ Nirvana in imitation of Buddhism. This led to Muslim apathy in a life waiting for death. At this point, roughly from the Fourteenth Century onwards, this false Islam died, with the false Muslims.

Fourth and last, to rebuild, Muslims must re-study the Quran (which is their and mankind’s book of guidance) and the examples of their great leaders in the republican-democratic period, to find correct answers to their current plight.

I have summarised the teachings of the Quran as “revolutionary, life-affirming and death-defying”. We must seek salvation in this life by raising our souls to a higher level. It is this raising of our souls that is necessary for the coming Second Muslim Civilization, which must come.

[1] See Quran (16: 51), which states: “Do not worship two gods. There is only one God.” Further, Surah 63, verse 1, invalidates the second syahadah, which is uttered by hypocrites.

[2] God has proclaimed: “Do not worship two gods; there is only one god. Therefore you shall reverence Me.” (Quran, 16: 51).

Cultivate the Art of Serendipity

January 5, 2016

NY Times Sunday Review | Opinion


In 2008, an inventor named Steve Hollinger lobbed a digital camera across his studio toward a pile of pillows. “I wasn’t trying to make an invention,” he said. “I was just playing.” As his camera flew, it recorded what most of us would call a bad photo. But when Mr. Hollinger peered at that blurry image, he saw new possibilities. Soon, he was building a throwable videocamera in the shape of a baseball, equipped with gyroscopes and sensors. The Squito (as he named it) could be rolled into a crawlspace or thrown across a river — providing a record of the world from all kinds of “nonhuman” perspectives. Today, Mr. Hollinger holds six patents related to throwable cameras.

A surprising number of the conveniences of modern life were invented when someone stumbled upon a discovery or capitalized on an accident: the microwave oven, safety glass, smoke detectors, artificial sweeteners, X-ray imaging. Many blockbuster drugs of the 20th century emerged because a lab worker picked up on the “wrong” information.

While researching breakthroughs like these, I began to wonder whether we can train ourselves to become more serendipitous. How do we cultivate the art of finding what we’re not seeking?

For decades, a University of Missouri information scientist named Sanda Erdelez has been asking that question. Growing up in Croatia, she developed a passion for losing herself in piles of books and yellowed manuscripts, hoping to be surprised. Dr. Erdelez told me that Croatian has no word to capture the thrill of the unexpected discovery, so she was delighted when — after moving to the United States on a Fulbright scholarship in the 1980s — she learned the English word “serendipity.”

Today we think of serendipity as something like dumb luck. But its original meaning was very different.

In 1754, a belle-lettrist named Horace Walpole retreated to a desk in his gaudy castle in Twickenham, in southwest London, and penned a letter. Walpole had been entranced by a Persian fairy tale about three princes from the Isle of Serendip who possess superpowers of observation. In his letter, Walpole suggested that this old tale contained a crucial idea about human genius: “As their highnesses travelled, they were always making discoveries, by accident and sagacity, of things which they were not in quest of.” And he proposed a new word — “serendipity” — to describe this princely talent for detective work. At its birth, serendipity meant a skill rather than a random stroke of good fortune.

Dr. Erdelez agrees with that definition. She sees serendipity as something people do. In the mid-1990s, she began a study of about 100 people to find out how they created their own serendipity, or failed to do so.

Her qualitative data — from surveys and interviews — showed that the subjects fell into three distinct groups. Some she called “non-encounterers”; they saw through a tight focus, a kind of chink hole, and they tended to stick to their to-do lists when searching for information rather than wandering off into the margins. Other people were “occasional encounterers,” who stumbled into moments of serendipity now and then. Most interesting were the “super-encounterers,” who reported that happy surprises popped up wherever they looked. The super-encounterers loved to spend an afternoon hunting through, say, a Victorian journal on cattle breeding, in part, because they counted on finding treasures in the oddest places. In fact, they were so addicted to prospecting that they would find information for friends and colleagues.

You become a super-encounterer, according to Dr. Erdelez, in part because you believe that you are one — it helps to assume that you possess special powers of perception, like an invisible set of antennas, that will lead you to clues.

A few months ago, I was having a drink in Cambridge, Mass., with a friend, a talented journalist who was piecing together a portrait of a secretive Wall Street wizard. “But I haven’t found the real story yet; I’m still gathering string,” my friend told me, invoking an old newsroom term to describe the first stage of reporting, when you’re looking for something that you can’t yet name. Later that night, as I walked home from the bar, I realized “gathering string” is just another way of talking about super-encountering. After all, “string” is the stuff that accumulates in a journalist’s pocket. It’s the note you jot down in your car after the interview, the knickknack you notice on someone’s shelf, or the anomaly that jumps out at you in Appendix B of an otherwise boring research study.

As I navigated the brick sidewalk, passing under the pinkish glow of a streetlight, I thought about how string was probably hiding all around me. A major story might lurk behind the Harvard zoology museum ahead or in the plane soaring above. String is everywhere for the taking, if you have the talent to take it.

In the 1960s, Gay Talese, then a young reporter, declared that “New York is a city of things unnoticed” and delegated himself to be the one who noticed. Thus, he transformed the Isle of Manhattan into the Isle of Serendip: He traced the perambulations of feral cats, cataloged shoeshine purveyors, tracked down statistics related to the bathrooms at Yankee Stadium and discovered a colony of ants at the top of the Empire State Building. He published his findings in a little book titled “New York: A Serendipiter’s Journey.”

The term “serendipiter” breathed new life into Walpole’s word, turning serendipity into a protagonist and a practitioner. After all, those ants at the top of the Empire State Building didn’t find themselves; Mr. Talese had to notice them, which was no easy matter. Similarly, Dr. Erdelez came up with the term super-encounterer to give us a way to talk about the people rather than just the discoveries. Without such words, we tend to become dazzled by the happy accident itself, to think of it as something that exists independent of an observer.

We can slip into a twisted logic in which we half-believe the penicillin picked Alexander Fleming to be its emissary, or that the moons of Jupiter wanted to be seen by Galileo. But discoveries are products of the human mind.

As people dredge the unknown, they are engaging in a highly creative act. What an inventor “finds” is always an expression of him- or herself. Martin Chalfie, who won a Nobel Prize for his work connected with green fluorescent protein — the stuff that makes jellyfish glow green — told me that he and several other Nobel Prize winners benefited from a chain of accidents and chance encounters on the way to their revelations. Some scientists even embrace a kind of “free jazz” method, he said, improvising as they go along: “I’ve heard of people getting good results after accidentally dropping their experimental preparations on the floor, picking them up, and working on them nonetheless,” he added.

So how many big ideas emerge from spills, crashes, failed experiments and blind stabs? One survey of patent holders (the PatVal study of European inventors, published in 2005) found that an incredible 50 percent of patents resulted from what could be described as a serendipitous process. Thousands of survey respondents reported that their idea evolved when they were working on an unrelated project — and often when they weren’t even trying to invent anything. This is why we need to know far more about the habits that transform a mistake into a breakthrough.

In the late 1980s, Dr. John Eng, an endocrinologist, became curious about certain animal poisons that damaged the pancreas, so he ordered lizard venom through the mail and began to play around with it. As a result of this curious exercise, he discovered a new compound in the saliva of a Gila monster, and that in turn led to a treatment for diabetes. One of Dr. Eng’s associates (quoted in a 2005 newspaper article) remarked that he was capable of seeing “patterns that others don’t see.”

Is this pattern-finding ability similar to the artistic skill of a painter like Georgia O’Keeffe? Is it related to the string-gathering prowess of Gay Talese? We still know so little about creative observation that it’s impossible to answer such questions.

That’s why we need to develop a new, interdisciplinary field — call it serendipity studies — that can help us create a taxonomy of discoveries in the chemistry lab, the newsroom, the forest, the classroom, the particle accelerator and the hospital. By observing and documenting the many different “species” of super-encounterers, we might begin to understand their minds.

A number of pioneering scholars have already begun this work, but they seem to be doing so in their own silos and without much cross-talk. In a 2005 paper (“Serendipitous Insights Involving Nonhuman Primates”), two experts from the Washington National Primate Research Center in Seattle cataloged the chance encounters that yielded new insights from creatures like the pigtail macaque. Meanwhile, the authors of a paper titled “On the Exploitation of Serendipity in Drug Discovery” puzzled over the reasons the 1950s and ’60s saw a bonanza of breakthroughs in psychiatric medication, and why that run of serendipity ended. And in yet another field of study, a few information scientists are trying to understand the effects of being bombarded on social media sites with countless tantalizing pieces of “string.”

What could these researchers discover if they came together for one big conversation?

Of course, even if we do organize the study of serendipity, it will always be a whimsical undertaking, given that the phenomenon is difficult to define, amazingly variable and hard to capture in data. The clues will no doubt emerge where we least expect them, perhaps in the fungi clinging to the walls of parking garages or the mating habits of bird-watchers. The journey will be maddening, but the potential insights could be profound: One day we might be able to stumble upon new and better ways of getting lost.

A version of this op-ed appears in print on January 3, 2016, on page SR1 of the New York edition with the headline: Cultivating the Art of Serendipity.

‘Brief Candle in the Dark,’ by Richard Dawkins

November 29, 2015

NY Times Sunday Book Review

Some lumbering robot, this Richard Dawkins. “Lumbering robots” was one of the ways in which this scarily brilliant evolutionary biologist described human beings vis-à-vis their genes in “The Selfish Gene,” his first and probably still his most influential book — more than a million copies sold. (His atheist manifesto, “The God Delusion,” has sold more than three million.) We’re essentially a means of physical and, more important, temporal transportation for our genes, he explained. They can live on for eons after we take our own inherited genes and mate with those of that handsome boy behind us in the movie-ticket line who ended up sitting next to us or the ones belonging to that pretty girl whose change we picked up by mistake at the newsstand and with whom we then had an apologetic coffee. And so on down the line. Our lines. Dawkins has also called us “throwaway survival machines” for our genes. But only, I think, to make a biological point.

In all of his work — including this new memoir, “Brief Candle in the Dark: My Life in Science” (a sort of sequel to “An Appetite for Wonder,” about his early life) — Dawkins himself gives the existential lie to the notion that if we are here for any reason, we are here primarily, maybe exclusively, to provide Uber service for our genes and, just a little more altruistically, for the genes of those biologically most closely related to us. Because his genes don’t know anything about him and he knows just about everything about them.

In “Brief Candle in the Dark” — a title that I have to admit made me say, “Oh, please!” — Dawkins gives us a chronologically helter-skelter account of his grown-up research, discoveries, reflections, collaborations and controversies (especially about religion), along with reports on his appearances at various events, debates and conferences. So many events, so many conferences. He has become what Yeats calls himself in “Among School Children,” a “smiling public man.” (Though not always smiling, in Dawkins’s case, especially when it comes to his atheism.)

“Helter-skelter”? The book is “organized” achronologically, with, for example, sections devoted to the author’s academic progress, culminating in his appointment as Oxford’s first Charles Simonyi professor of public understanding of science; a chapter about his publishing history; another about “Debates and Encounters.” “If you don’t like digressive anecdotes,” Dawkins tells us, “you might find you’re reading the wrong book.”

Here is Dawkins describing Jane Brockmann’s experiments with the burrows of the female digger wasp, which he used to demonstrate the principle of evolutionarily stable strategy: “We need ESS theory whenever it happens that the best strategy for an animal depends on which strategy most other animals in the population have adopted.” Here he is three pages later introducing at some admiring length his Oxford University student Alan Grafen, who helped with the math of the digger-wasp-burrow study. A page later, still nominally among the wasp burrows, we find a Monty Python-esque description of the Great Annual Punt Race, in which the Animal Behavior Research Group rows against the Edward Grey Institute of Field Ornithology.

Dawkins’s tributes to teachers, colleagues, students and public figures mingle with fairly extensive reprises on and further thoughts about the scientific research and philosophical positions he has developed in his 12 previous works. (They are all still in print, Dawkins tells us, presumably with a little blush.) There is his tribute to one of his “heroes,” the Nobel Prize-winning biologist Peter Medawar, admired “as much for his writing style as for his science.” And another to David Attenborough, brother of Richard, a “marvelous man.” And to Susan Blackmore, a “briskly intelligent psychologist.” Then there’s Christopher Hitchens, with his “intellect, wit, lightning repartee.” And so on.

These encomiums and credit-givings complement Dawkins’s persistent efforts to leaven his recollections with humor, applying a generally light touch: “An agent was a good thing to have,” and Caroline Dawnay “was a good representative of the genus.” “The snort of a pig-frog . . . may affect another pig-frog as the nightingale affected Keats, or the skylark Shelley.” Together, these mots — bon and otherwise — and Dawkins’s acknowledgments of the talents and the contributions of others to his life and work add up to a kind of self-effacement campaign. The crucial element in “self-effacement” is “self.” Self-effacement is not the same as modesty or humility — it is an effort of will, not a unitary psychological state. Nevertheless, that Dawkins mounts this campaign in “Brief Candle in the Dark” is surprisingly sweet, and admirable. That he loses the battle is in no way shameful. If anyone in modern science deserves to regard his or her own contributions with pride, even with triumph, it is Richard Dawkins.

The sections of “Brief Candle in the Dark” that deal with religion and atheism are middle-aged if not old hat to anyone who knows anything about the public Dawkins, along with Sam Harris, Lawrence Krauss and Christopher Hitchens. But they are still entertaining. The often long passages that involve pure science are sometimes difficult and thus, sadly, require short shrift in a book review. “Natural selection, at each locus independently, favors whichever allele cooperates with the other genes with whom it shares a succession of bodies: And that means it cooperates with the alleles at those other loci, which cooperate in their turn.” But work on them and they become, as you might expect, cogent précis of Dawkins’s life’s work, and vastly illuminating: “Animals are islands in this hyperspace, vastly spaced out from one another as if in some Hyperpolynesia, surrounded by a fringing reef of closely related animals.” “If one identical twin were good at three-dimensional visualization, I would expect that his twin would be too. But I’d be very surprised to find genes for gothic arches, postmodern finials or neoclassical architraves.”

Especially bright is the light thrown in summary on replication and adaptation and connectedness, not only biological but cultural, especially in the concept of the “meme” — a word coined by Dawkins to describe images, phrases, references, pieces of music, that are themselves replicated and then spread virally throughout the world’s cultural consciousness. The meme is at best, I think, a metaphorically baggy analogue to the gene, but it serves the purpose of emphasizing the recursiveness and interrelatedness of our experience of the world.

Sometimes you get the feeling that Dawkins sees — and believes we should see — everything as connected to everything else, everything affecting everything else, everything determining and being determined by everything else. In fact, in “Brief Candle in the Dark,” he recursively recites something pertinent to this point that he wrote in “Unweaving the Rainbow,” about the compatibility of art and science: “The living world can be seen as a network of interlocking fields of replicator power.”

In his marveling at art and music and the accomplishments of his predecessors, in his sense of wonder, unspoiled — in fact amplified — by science, Dawkins proves we’re not in any way reducible to mere lumbering (or any other kinds of) robots for our genes. Even though the price of our ability to learn and marvel is death, and our genes have at least theoretical immortality, they’re really but tiny vehicles for our own wonder.

Daniel Menaker’s most recent book is a memoir, “My Mistake.”

A version of this review appears in print on November 29, 2015, on page BR8 of the Sunday Book Review with the headline: In His Genes.


Remembering an Original Thinker–Physicist Richard P. Feynman

November 26, 2015

Richard Feynman: Life, the universe and everything

Flowers, music, strip clubs…Richard Feynman’s scientific curiosity knew no bounds. Christopher Riley pays tribute to an eccentric genius

by Christopher Riley

In these days of frivolous entertainments and frayed attention spans, the people who become famous are not necessarily the brightest stars. One of the biggest hits on YouTube, after all, is a video of a French bulldog who can’t roll over. But in amongst all the skateboarding cats and laughing babies, a new animated video, featuring the words of a dead theoretical physicist, has gone viral. In the film, created from an original documentary made for the BBC back in the early Eighties, the late Nobel Prize-winning professor, Richard Feynman, can be heard extolling the wonders of science contained within a simple flower.

There is “beauty”, he says, not only in the flower’s appearance but also in an appreciation of its inner workings, and how it has evolved the right colours to attract insects to pollinate it. Those observations, he continues, raise further questions about the insects themselves and their perception of the world. “The science,” he concludes, “only adds to the excitement and mystery and awe of the flower.” This interview was first recorded by the BBC producer Christopher Sykes, back in 1981 for an episode of Horizon called “The Pleasure of Finding Things Out”. When it was broadcast the following year the programme was a surprise hit, with the audience beguiled by the silver-haired professor chatting to them about his life and his philosophy of science.

Now, thanks to the web, Richard Feynman’s unique talents – not just as a brilliant physicist, but as an inspiring communicator – are being rediscovered by a whole new audience. As well as the flower video, which, to date, has been watched nearly a quarter of a million times, YouTube is full of other clips paying homage to Feynman’s ground-breaking theories, pithy quips and eventful personal life.

The work he did in his late twenties at Cornell University, in New York state, put the finishing touches to a theory which remains the most successful law of nature yet discovered. But, as I found while making a new documentary about him for the BBC, his curiosity knew no bounds, and his passion for explaining his scientific view of the world was highly contagious. Getting to glimpse his genius through those who loved him, lived and worked with him, I grew to regret never having met him; to share first-hand what so many others described as their “time with Feynman”.

Richard Phillips Feynman was born in Far Rockaway – a suburb of New York – in May 1918, but his path in life was forged even before this. “If he’s a boy I want him to be a scientist,” said his father, Melville, to his pregnant wife. By the time he was 10, Feynman had his own laboratory at home and, a few years later, he was employing his sister Joan as an assistant at a salary of four cents a week. By 15, he’d taught himself trigonometry, advanced algebra, analytic geometry and calculus, and in his last year of high school won the New York University Math Championship, shocking the judges not only by his score, but by how much higher it was than those of his competitors.

He graduated from the Massachusetts Institute of Technology in 1939 and obtained perfect marks in maths and physics exams for the graduate school at Princeton University — an unprecedented feat. “At 23 there was no physicist on Earth who could match his exuberant command over the native materials of theoretical science,” writes his biographer James Gleick.

Such talents led to him being recruited to the Manhattan Project in the early Forties. Together with some of the greatest minds in physics in the 20th century, Feynman was put to work to help build an atom bomb to use against the Germans before they built one to use against the Allies. Security at the top-secret Los Alamos labs was at the highest level. But for Feynman – a born iconoclast – such control was there to be challenged. When not doing physics calculations he spent his time picking locks and cracking safes to draw attention to shortcomings in the security systems.

“Anything that’s secret I try and undo,” he explained years later. Feynman saw the locks in the same way as he saw physics: just another puzzle to solve. He garnered such a reputation, in fact, that others at the lab would come to him when a colleague was out of town and they needed a document from his safe.

Between the safe cracking and the physics calculations, the pace of life at Los Alamos was relentless. But for Feynman these activities were a welcome distraction from a darker life. His wife, Arline, who was confined to her bed in a sanatorium nearby, was slowly dying of TB.

When she died in the summer of 1945, Feynman was bereft. This misery was compounded, a few weeks later, when the first operational atom bomb was dropped on Japan, killing more than 80,000 people. His original reason for applying his physics to the war effort had been to stop the Germans. But its use on the Japanese left Feynman shocked. For the first time in his life he started to question the value of science and, convinced the world was about to end in a nuclear holocaust, his focus drifted.

He became something of a womaniser, dating undergraduates and hanging out with show girls and prostitutes in Las Vegas. In a celebrated book of anecdotes about his life – Surely You’re Joking, Mr Feynman! – the scientist recounts how he applied an experimental approach to chatting up women. Having assumed, like most men, that you had to start by offering to buy them a drink, he explains how a conversation with a master of ceremonies at a nightclub in Albuquerque one summer prompted him to change tactics. And to his surprise, an aloof persona proved far more successful than behaving like a gentleman.

William Hurt as Richard Feynman in a BBC drama based on his role in the Challenger disaster report

His other method of relaxation in those years was music; his passion for playing the bongos stayed with him for the rest of his life. Physics had slipped down his list of priorities, but he suddenly rediscovered his love for the subject in a most unexpected way. In the canteen at Cornell one lunchtime he became distracted by a student, who had thrown a plate into the air. As it clattered onto the floor Feynman observed that the plate rotated faster than it wobbled. It made him wonder what the relationship was between these two motions.

Playing with the equations which described this movement reminded him of a similar problem concerning the rotational spin of the electron, described by the British physicist Paul Dirac. And this, in turn, led him to Dirac’s theory of Quantum Electrodynamics (QED), a theory which had tried to make sense of the subatomic world but had posed as many questions as it answered. What followed, Feynman recalled years later, was like a cork coming out of a bottle. “Everything just poured out,” he remembered.

“He really liked to work in the context of things that were supposed to be understood and just understand them better than anyone else,” says Sean Carroll, a theoretical physicist who sits today at Feynman’s old desk at Caltech, in Pasadena. “That was very characteristic of Feynman. It required this really amazing physical intuition – an insight into what was really going on.” Applying this deep insight, Feynman invented an entirely new branch of maths to work on QED, which involved drawing little pictures instead of writing equations.

Richard’s sister, Joan, recalls him working on the problem while staying with her one weekend. Her room-mate was still asleep in the room where Richard had been working. “He said to me, ‘Would you go in the room and get my papers, I wanna start working’,” she remembers. “So I went in the room and I looked for them, but there was no mathematics. It was just these silly little diagrams and I came out and said, ‘Richard, I can’t find your papers, it’s just these kind of silly diagrams’. And he said, ‘That is my work!’” Today Feynman’s diagrams are used across the world to model everything from the behaviour of subatomic particles to the motion of planets, the evolution of galaxies and the structure of the cosmos.

Applying them to QED, Feynman came up with a solution which would win him a share of the 1965 Nobel Prize for Physics. Almost half a century later QED remains our best explanation of everything in the universe except gravity. “It’s the most numerically precise physical theory ever invented,” says Carroll.

Discovering a law of nature and winning a Nobel Prize, for most people, would represent the pinnacle of a scientific career. But for Feynman these achievements were mere stepping stones to other interests. He took a sabbatical to travel across the Caltech campus to the biology department, where he worked on viruses. He also unravelled the social behaviour of ants and potential applications of nanotechnology. And he was active beyond the world of science, trading physics coaching for art lessons with renowned Californian artist Jirayr Zorthian. (While at Caltech he also began frequenting a local strip club, where he would quietly work out his theories on napkins; he found it the ideal place in which to clear his head.)

But it was his talent as a communicator of science that made him famous. In the early Sixties, Cornell invited him to give the Messenger Lectures – a series of public talks on physics. Watching them today, Feynman’s charisma and charm are as seductive as they were 50 years ago.

“He loved a big stage,” says Carroll. “He was a performer as well as a scientist. He could explain things in different ways than the professionals thought about them. He could break things down into their constituent pieces and speak a language that you already shared. He was an amazingly good teacher and students loved him unconditionally.”

Recognising this ability, in the early Sixties Caltech asked him to rewrite the undergraduate physics course. The resulting Feynman Lectures on Physics took him three years to create and the accompanying textbooks still represent the last word on the subject. The lectures themselves were brimming with inspiring “showbiz demonstrations”, as his friend Richard Davies describes them. Most memorably, Feynman used to set up a heavy brass ball on a pendulum, send it swinging across the room, and then wait for it to swing back towards him. Students would gasp as it rushed towards his face, but Feynman would stand stock still, knowing it would stop just in front of his nose. Keen to capitalise on these talents for engaging an audience, Christopher Sykes made his film for Horizon. “He took enormous pleasure in exploring life and everything it had to offer,” remembers Sykes. “More than that, he took tremendous pleasure in telling you about it.”

In the late Seventies, Feynman discovered a tumour in his abdomen. “He came home and reported, ‘It’s the size of a football’,” remembers his son Carl. “I was like ‘Wow, so what does that mean?’ And he said, ‘Well, I went to the medical library and I figure there’s about a 30 per cent chance it will kill me’.” Feynman was trying to turn his predicament into something fascinating, but it was still not the kind of thing a son wanted to hear from his father.

A series of operations kept Feynman alive and well enough to work on one final important project. In 1986, he joined the commission set up to investigate the Challenger disaster. The space shuttle had exploded 73 seconds after launch, killing the entire crew of seven astronauts. Feynman fought bureaucratic intransigence and vested interests to uncover the cause of the accident: rubber O-ring seals in the shuttle’s solid rocket boosters that failed to work on the freezing morning of the launch. At a typically flamboyant press conference, Feynman demonstrated his findings by placing a piece of an O-ring in a glass of iced water. But the inquiry had left him exhausted. With failing kidneys and in a great deal of pain he decided not to undergo surgery again and went into hospital for the last time in February 1988.

His friend Danny Hillis remembers walking with Feynman around this time: “I said, ‘I’m sad because I realise you’re about to die’. And he said, ‘That bugs me sometimes, too. But not as much as you’d think. Because you realise you’ve told a lot of stories and those are gonna stay around even after you’re gone.’” Twenty-five years after his death, thanks to the web, Feynman’s prophecy has more truth than he could ever have imagined.

Christopher Riley is a visiting professor at the University of Lincoln. His film ‘The Fantastic Mr Feynman’ is on BBC Two on Sunday.
