Emerald Group Publishing Limited
Ethics lessons for business managers, organizations and researchers
Article Type: Book review From: Competitiveness Review, Volume 24, Issue 2
I am reviewing three recent books on lying, cheating and moral failures among professionals, business managers, researchers and business organizations. Here is the list.
Book 1 - The Honest Truth About Dishonesty: How We Lie to Everyone - Especially Ourselves
Dan Ariely
309 pp. (I reviewed Kindle edition)
File size: 1113KB
Book 2 - Blind Spots: Why We Fail to Do What's Right and What to Do About It
Max H. Bazerman and Ann E. Tenbrunsel
Princeton University Press
208 pp. (I reviewed Kindle edition)
File size: 538KB
Book 3 - Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients
Ben Goldacre
Faber and Faber Inc.
449 pp. (I reviewed Kindle edition)
File size: 1208KB
A recent article reports on a survey showing that the general population's trust in business professionals is as low as it is in politicians. The assessment is only natural given the recent prevalence of headlines about scandals and failures (The Economist, 2013). It is also natural, in the prevailing business environment, for academics to write about the (lack of) ethics and the lies of professionals and organizations. The first two books (one by Ariely, the other by Bazerman and Tenbrunsel) are written against this backdrop. The first (Ariely) discusses the honesty and ethics of professionals and other people at the personal level. While still focusing on the same issues, the second book (Bazerman and Tenbrunsel) extends the discussion to the organizational and even societal levels.
The author of the last book (Goldacre) is clearly disappointed with many of the practices of the medical field, and in particular those of the pharmaceutical industry. There are indeed more than a few instances in which big pharmaceutical firms have paid billions of dollars in fines. For example, one of the leading firms, GlaxoSmithKline (GSK), paid US$3 billion in fines in 2012 on fraud charges. What's more, the list of those paying $1 billion or more in fines reads like a "who's who" of the pharmaceutical business (chapter, "Afterword: better data"). This book is complete in its own right. However, considered together with the other two, it can be taken as providing a series of cases within the medical field, mostly supporting the points made in the first two books.
The author of the first book is Dan Ariely who, according to the Dan Ariely-shopping enabled Wikipedia page on Amazon (2013), is a professor of psychology and behavioral economics at Duke University. In addition to his original PhD in psychology, he earned a second PhD in business at the urging of his mentor, Daniel Kahneman, the eminent behavioral economist and Nobel laureate in economics. Naturally, he takes a behavioral economist's approach to the issue of lying. Also, like his mentor, he is an empiricist: he cites various lab experiments to make his points. When he points out that people may make decisions following their gut feelings before bothering to understand a situation fully, and then later make up stories to rationalize their actions to themselves (and to others if necessary), he reminds us of the approach taken by Kahneman in his 2011 book Thinking, Fast and Slow.
Ariely contrasts his approach with the standard rational one, which he calls the simple model of rational choice (SMORC). According to that model, people cheat to the extent that the potential benefit exceeds the potential cost, where the cost is calculated from the severity of punishment and the likelihood of being caught. This could be true for criminals; however, most of the behavior of professionals and many other honorable people is not captured by this model. For example, unlike criminals, we do not take money out of a colleague's carelessly dropped wallet, even if no one could ever know, because we want to be able to think of ourselves as good and honorable persons.
Yet, we are constantly tempted. So, for example, when students cheat, they like to think they knew how to solve the problem all along; a little support merely helped them earn the points they should have earned anyway. People may take pencils from the office. Pencils are little things and do not matter much. But money is a no-no. People mostly fiddle on the flexible boundary of ethics, a phenomenon Ariely describes as the "fudge factor". Still, at the end of the day, everyone wants to tell themselves that they are a good and honorable person. In a similar vein, Ariely gives the example of a former employee professing to have believed in the official Enron position - that it was a great innovative company working on the cutting edge of technology, and that great things were happening there (chapter "Introduction"). In situations like this, many respectable professionals resort to (what he describes as) "wishful blindness" to justify their associations with tainted companies.
Soon, this "fudge factor" (i.e. tinkering on the edge) may become a slippery slope that takes people to the "what the hell" point, where they no longer require even the pretense of ethics. The most important factor pushing people down the slippery slope is conflict of interest. Doctors investing in diagnostic machines have a conflict of interest, as do doctors who have taken (even legitimate) money from pharmaceutical firms to advertise their products. People tend to believe what comes out of their own mouths. Similarly, when we know everyone is doing it, ethical standards slide downwards: students who know everyone else is cheating tend to cheat more.
He gives many examples throughout the book. Chapter 3 has some especially noteworthy ones, particularly because the third book reviewed here (the one by Goldacre) deals with and expands on these issues.
One example concerns the treatment of data points from his own research. He carried out an experiment with a certain hypothesis in mind. He saw that the result was likely to come out as he expected, except for one participant; if he were to drop that particular data point, he would get what he expected. And there could even be (a sort of) justification for dropping it: that particular participant was drunk. There was a temptation, but dropping the data point would be unethical, because one has to adhere to the rules made when designing the experiment.
His general conclusion in the same chapter is that people cannot be objective when they are also paid consultants. For instance, professors at one of the most prestigious medical schools were acting more like promoters of certain drugs in the classes they taught. He also points out that people tend to oblige social relationships without feeling guilty, and that pharmaceutical representatives were cultivating close, social-like intimacy with doctors to get them to push their drugs.
On the whole, this book reaches what is described as a semi-optimistic conclusion: people are generally good and do not want to behave badly, but they are conflicted; and the standard of ethics can be improved. Where there is good supervision, ethical standards tend to rise. People tend to take their pledges seriously, so periodic reminders of the standards, and making people pledge to abide by the rules, can improve overall moral standards. At the same time, lax standards and enforcement regimes can lead to a downward spiral in ethics.
The second book I am reviewing is also written by business school professors and behaviorists. The first author, Max H. Bazerman, is (according to the Max H. Bazerman-shopping enabled Wikipedia page on Amazon (2013)) a professor at Harvard Business School whose areas are decision making, negotiations and organizations. Taking a behavioral approach to decision making, this book also uses the concept of fast and slow thinking discussed in Kahneman (2011).
Although this book is similar to the previous one in its overall approach and arguments, it has a stronger undertone. The book starts from the observation that people judge themselves to be more ethical than they actually are, and judge others harshly. They also fail to fully appreciate the adverse impacts of their behavior on others.
This book uses terms like "bounded ethicality" and "ethical fading". Bounded ethicality suggests that people are not fully aware of ethical dilemmas when making decisions; there is always a gap between what people think they would do and what they end up doing. Issues can be reframed so that the ethical dimension becomes nonexistent in the minds of the decision makers. When a problem is designated as an administrative decision, the ethics part can easily be forgotten. Similarly, when a car is discussed with the sole purpose of increasing sales, concern for safety becomes secondary at best. People call the sacking of employees a layoff (or even rightsizing), the loss of lives collateral damage, etc., and shrug off the pain they inflict on other people's lives. People, thus handicapped, make decisions that are inconsistent with their own standards, and sleep easily thinking they are ethical people after all. Ethical fading means that people's commitment to abide by ethical rules fades with time; people learn to avoid breaching the letter of the codes while ignoring the spirit. Bounded ethicality and ethical fading together lead people down the slippery slope of ethics violations.
This book asks people to reflect on their own behaviors and amend them; after all, we all know what we should do. Knowledge of ethics itself may not be that helpful, so this book, unlike the previous one, does not see much benefit from ethics training in organizations. Ethicists do not behave more ethically than ordinary people, after all. For organizations, it suggests that bosses should be constantly audited on their decisions. Moreover, it recommends that the dominant culture within the organization be identified and influenced. For instance, in organizations driven by sales, concern for safety could easily be drowned out. This happened when Ford pushed the Pinto: at the time, Ford was more concerned with the market share gained by Volkswagen's small car and wanted to bring out its alternative (i.e. the Pinto) as soon as possible, so safety concerns about its faulty gas tank design were not appropriately addressed (Chapter 4).
The book is especially harsh on lobbying efforts, which are detailed in Chapter 7. Lobbyists may not be doing anything legally wrong, but neither are they acting to a high moral standard. The authors take examples from the tobacco, auditing and oil industry lobbies. The tobacco industry spent much time and money fighting the evidence that tobacco can cause cancer. The oil industry is now engaged in arguments about global warming concerns. The auditing industry was successful in diluting provisions forbidding conflicts of interest. According to this book, all of them rely on three techniques:
1. obfuscation and encouragement of reasonable doubt,
2. the claimed need to search for a smoking gun,
3. shifting views on facts (Chapter 7).
The third book is written by Ben Goldacre, a physician, academic and science writer (according to the Ben Goldacre-shopping enabled Wikipedia page on Amazon (2013)). His frustration with the pharmaceutical industry is palpable in this book, which he opens with the statement: "Medicine is broken". The reason, as he sees it, is the systematic distortion of data. Marketing is another area on which he has much to say. On data, his concern is similar to that shown by Ariely in his experiment discussed above. On marketing, he elaborates on how the pharmaceutical industry hooks doctors into conflicts of interest. Here, the issues of conflict of interest are similar to those much discussed in the preceding two books. The main difference is that this book is focused on one industry (pharmaceuticals), while the others are more conceptual, using examples from various situations.
He seethes at what he sees as missing data and concludes that it remains the main problem. As Dan Ariely stated, selective publication of positive outcomes and restating hypotheses (or massaging data) to make outcomes look favorable are bad research practices and unethical in any discipline. Ben Goldacre elaborates on this in great detail and shows how prevalent the practice is in the pharmaceutical industry.
For example, he cites a study of trials of antidepressants conducted between 1987 and 2004. Researchers found a total of 74 studies, of which 38 had positive results and 36 had negative results. Of the 38 positive results, 37 were published. Of those with negative results, 22 were never published and 11 appeared in the academic literature written up as if the trials were a success (remember Ariely's discussion) (Chapter 1).
There could be many other examples, and this phenomenon is not unique to the pharmaceutical industry. Researchers and academicians (in any field) are evaluated on the basis of the number of their publications, and there is a publication bias: studies yielding positive results are more likely to be published than negative ones, while experiments with negative results often get "lost". So people are constantly tempted to restate the hypothesis after the fact, or to massage the data to make the outcome suitable. It is not good practice, and every researcher is taught not to do it; but the temptation is there. The main difference is that while in many other fields a research paper can be just that - a research paper, written by academicians and read by others (including PhD students who also have to write papers) - for doctors those academic papers appear to be the sole source of information on the basis of which they have to make (often life or death) decisions.
For example, the author, a practicing physician himself, prescribed a particular antidepressant that research published in an academic journal had shown to be an effective treatment for the condition his patient was suffering from. It turned out that a total of seven experiments had been done; the only one published was the one with the positive result. In effect, the totality of the data showed that the drug in question was not even as effective as a placebo. In another case, he cites the example of an antiarrhythmic drug given to heart patients in the hope that it would help them. However, it turned out that there was one unpublished experiment which showed that the drug caused more deaths than it prevented. Because the trial was not published and doctors did not know the consequences, the drug was prescribed for quite a while. He estimates that these prescriptions might have caused 100,000 unnecessary deaths.
The pharmaceutical industry, driven by sales and revenue, (at times) pushes products with questionable benefits while sometimes hiding demonstrable risks. This is not very different from the situation discussed by Bazerman and Tenbrunsel in the second book (with the example of Ford's Pinto debacle). Furthermore, the tactics pharmaceutical companies use to frustrate the quest to make trial data available, as described in this book, also remind us of the methods used by lobbyists in the second book - "obfuscation", asking for a "smoking gun" and finally "shifting views" on facts.
Another important issue Goldacre discusses is marketing. The pharmaceutical industry is known for its innovation, but its marketing expenses are twice its research and development spending (Chapter 6), and most of that goes toward influencing doctors' prescriptions. The practices described here are similar to those described by Ariely in the first book of this review, but in more detail. Goldacre cites many examples and studies to back up the argument.
In essence, this book argues for the need for ethical and fact-based decision making in the medical field, and its author shows frustration that this is not happening. The need for ethical and fact-based decisions exists in other industries as well. For example, people cite the recent debacle over the ratings of mortgage-backed derivatives by rating agencies as an example of the lack of ethics and fact-based decision making in business. The second book of this review (Bazerman and Tenbrunsel) discusses these issues as well (Chapter 5). Other books not related to ethics, such as Silver's (2012) The Signal and the Noise: Why So Many Predictions Fail - But Some Don't, charge the rating agencies with lying (in Chapter 1). The issues of data transparency and the need for ethical and scientific decision making are therefore just as important and applicable in other fields, which makes this book a good read for people outside the medical/pharmaceutical fields as well.
I read all three books and liked each of them individually. Students, researchers and practitioners all need to be aware of the issues related to business ethics and scientific (and ethical) decision making discussed so nicely in each of these three books. Each of the books individually, as well as all of them together, would be a good addition to the shelves of those concerned.
Deepak Subedi, Marshall University, Huntington, West Virginia, USA
Ben Goldacre-shopping enabled Wikipedia page on Amazon (2013), "Ben Goldacre-shopping enabled Wikipedia page on Amazon", available at: www.amazon.com/wiki/Ben_Goldacre/ref=ntt_at_bio_wiki (accessed April 24, 2013).
Dan Ariely-shopping enabled Wikipedia page on Amazon (2013), "Dan Ariely-shopping enabled Wikipedia page on Amazon", available at: www.amazon.com/wiki/Dan_Ariely/ref=ntt_at_bio_wiki (accessed April 24, 2013).
The Economist (2013), "Schumpeter: Companies' moral compass: some ideas for restoring faith in firms", The Economist, May 2, Kindle edition.
Kahneman, D. (2011), Thinking, Fast and Slow, Kindle edition, Farrar, Straus and Giroux, New York, NY.
Max H. Bazerman-shopping enabled Wikipedia page on Amazon (2013), "Max H. Bazerman-shopping enabled Wikipedia page on Amazon", available at: www.amazon.com/wiki/Max_H._Bazerman/ref=ntt_at_bio_wiki (accessed April 24, 2013).
Silver, N. (2012), The Signal and the Noise: Why So Many Predictions Fail - But Some Don't, Kindle edition, Penguin Press HC, New York, NY.