Behavioral strategy in the wild

Wayne Borchardt (Graduate School of Business, University of Cape Town, Cape Town, South Africa and NOVA School of Business and Economics, Carcavelos, Portugal)
Takhaui Kamzabek (The University of Sydney Business School, The University of Sydney, Sydney, Australia)
Dan Lovallo (The University of Sydney Business School, The University of Sydney, Sydney, Australia)

Management Research Review

ISSN: 2040-8269

Article publication date: 5 August 2022

Issue publication date: 12 August 2022

Abstract

Purpose

A decade after Powell et al.’s (2011) seminal article on behavioral strategy, which called for models to solve real-world problems, the authors revisit the field to ask whether behavioral strategy is coming of age. The purpose of this paper is to explain how behavioral strategy can and has been used in real-world settings.

Design/methodology/approach

This study presents a conceptual review with case study examples of the impact of behavioral strategy on real-world problems.

Findings

This study illustrates several examples where behavioral strategy debiasing has been effective. Although no causal claims can be made, the stark contrast between the negative impact of biased strategies and the positive results emerging from debiasing techniques provides evidence of the benefits of a behavioral strategy mindset, which, this study argues, should be the mindset of a responsible strategic leader.

Practical implications

This study presents a demonstration of analytical, debate and organizational debiasing techniques and how they are being used in real-world settings, specifically military intelligence, mergers and acquisitions (M&A) deal-making, resource allocation and capital projects.

Social implications

Behavioral strategy has broad application in private and public sectors. It has proven practical value in various settings, for example, the application of reference class forecasting in large infrastructure projects.

Originality/value

A conceptual review of behavioral strategy in the wild.

Citation

Borchardt, W., Kamzabek, T. and Lovallo, D. (2022), "Behavioral strategy in the wild", Management Research Review, Vol. 45 No. 9, pp. 1185-1204. https://doi.org/10.1108/MRR-12-2021-0876

Publisher

Emerald Publishing Limited

Copyright © 2022, Wayne Borchardt, Takhaui Kamzabek and Dan Lovallo.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

“A behavioral approach to strategy – what’s the alternative?” (Levinthal, 2011, p. 1517).

This provocative title of Levinthal’s (2011) paper quoted above might be a play on Charlie Munger’s famous quote: “How could economics not be behavioral. If it isn’t behavioral, what the hell is it?” (Munger, 1995, p. 1). These quotes point to a conceptual transition from an ideal of rational choice to an acceptance of the boundedness of people: real people do not have the cognitive capabilities to live up to neo-classical economic theory’s requirements of rational choice (Simon, 1955), according to which rational agents have well-defined preferences and unbiased beliefs, make optimal choices based on these beliefs and preferences and act in their own self-interest (Thaler, 2016).

Recognizing our boundedness, fields such as economics, finance, decision theory and others, including strategic management, now consider the impact of judgment and decision-making. In the context of strategic management, Lovallo and Sibony (2010, p. 2) adopt the term “behavioral strategy,” arguing for its use in practice and asserting that “left unchecked, subconscious biases will undermine strategic decision making.” Gavetti (2012), Greve (2013) and Powell et al. (2011) have extended Lovallo and Sibony’s work. Gavetti (2012) argues that behavioral failures of rationality, plasticity and shaping limit firms’ pursuit of cognitively distant opportunities. His key insight is that “superior opportunities tend to be cognitively distant” (p. 269), yet must be attainable to be strategically relevant. From this he suggests that strategic leaders need to acquire the skills that bring cognitively distant opportunities nearer for themselves and other stakeholders (Gavetti, 2012). Greve (2013) discusses four levels of bounded rationality in strategy: first, firms are inclined to repeat past strategies without examining the consequences; second, firms incorrectly respond to feedback because of false assumptions of causality; third, firms infer that successful strategies of others will be good for them; and fourth, the most rational of the strategies, albeit still inclined to suffer from bounded rationality, is where firms develop strategies based on their predictions of the actions of others.

Powell et al. (2011, p. 1371) build on rich theoretical foundations (from behavioral decision theory, political theory, organizational theory, social cognition theory, management perception, sense-making, cognitive schema, language, meaning and enacted environments) to set out their objectives for behavioral strategy:

Behavioral strategy aims to bring realistic assumptions about human cognition, emotions, and social behavior to the strategic management of organizations and, thereby, to enrich strategy theory, empirical research, and real-world practice.

Powell et al. (2011, p. 1382) engage with Levinthal's (2011) question, saying: “we need models that solve the problems faced by thinking and feeling human beings, and this requires a robust and dynamic field of behavioral strategy.”

We respond to Powell et al.’s (2011) call for models that solve real problems faced by real people. We refer to these models as debiasing techniques, which we categorize as analytical, debate and organizational. We discuss business managers’ and political or military leaders’ strategic decisions in four real-world settings where, in the absence of debiasing, we are likely to encounter problematic decision making due to confirmation bias, over-optimism, inertia due to anchoring and the planning fallacy. In each of these four settings, we describe a failure case and discuss the behavioral biases involved, focusing on one major bias and its mechanism. We then introduce a relevant debiasing technique and demonstrate that it has been used successfully in a similar setting.

Military intelligence and confirmation bias

“If you torture the data long enough, it will confess to anything” (Ronald Coase).

The USA’s devastating handling of the Vietnam war has been extensively documented. Here, we focus on lesser-known insights to illustrate how confirmation bias, acting at three levels, influenced US policy and contributed to its Vietnam debacle. At the highest level, we show how the US Secretary of Defense, Robert McNamara, fell victim to poor reasoning. At the intermediate level, we discuss the RAND Corporation (RAND), a think tank, and how its intelligence process failed. And, at the lowest level, we explore how two well-respected senior intelligence analysts formed opposite interpretations of the same data.

Failure case

McNamara was regarded as a brilliant manager in the decades before and after the Vietnam war, but his emphasis on rational analysis based on quantifiable data led to grave errors (Rosenzweig, 2010). Specifically, data that was difficult to quantify, such as qualitative intelligence on intangibles like motivation, hope, resentment or courage, tended to be overlooked (Rosenzweig, 2010), despite the potential for this information to play a critical role in US war strategy. McNamara’s failings offer some insight into how management thinking has progressed (Rosenzweig, 2010). We now know that people are not the rational creatures suggested by neo-classical economics but exhibit systematic biases of judgment (Rosenzweig, 2010). We also know that organizational processes have their own dynamics – such as the escalation of commitment to a losing course of action and the tendency to silence dissenting views – that can lead to flawed decisions.

McNamara’s intelligence briefings came from, among other sources, RAND, which advised the US Government during the Vietnam War, including via its Viet Cong Motivation and Morale Project, established to examine the organisation, operations, motivation and morale of the Viet Cong and North Vietnamese Army (Donnell et al., 1965). Yet RAND as an organisation, which prided itself on objectivity, also succumbed to bias.

Leon Goure and Konrad Kellen were two well-respected RAND senior intelligence analysts. Goure was born in 1922 in Moscow, went into exile in Berlin to escape Lenin’s liquidation of the Mensheviks and later fled to the USA to escape the Nazis (Elliott, 2010). He became a Soviet specialist in RAND’s Social Science Department, gaining fame as an expert on Soviet civil defense (Elliott, 2010). Goure led RAND’s Viet Cong Motivation and Morale Project, which interviewed hundreds of defectors and prisoners-of-war and produced 62,000 pages of interview transcripts (Gladwell, 2016). These transcripts were a key source of intelligence; however, a member of RAND’s Social Science team said that the transcripts could “support anybody’s perspective on anything” (Elliott, 2010, p. 165).

Goure had a profound dislike and distrust of communism. When Goure read the transcripts, his view was that the Viet Cong had lost the “fight for hearts and minds” (Elliott, 2010, p. 164). Based on his earlier work, he was allegedly already an advocate of airpower as a weapon of counterinsurgency (Elliott, 2010). According to Gladwell (2016), Goure’s background led him to believe that “if we just bomb some more, we'll destroy their will.” This view was questioned by “a fair number of analysts […] convinced that Goure was interpreting selectively from the interviews” (Elliott, 2010, p. 125). Goure’s behavior is consistent with what Snyder calls the “ideology of the offensive” (Snyder, 2013).

To strike a balance and provide a broader base from which to draw inferences and discern trends regarding the Viet Cong, RAND brought in Kellen (Elliott, 2010), a Jewish man born in Berlin in 1913, who had escaped the Nazis. Kellen moved to the USA in 1935 and worked in US Army intelligence, where he dealt with prisoner interrogation material in the Second World War and Korea and with defectors from Eastern Europe (Elliott, 2010). Based on his reading of Viet Cong interviews, he told a colleague: “Prisoners and defectors tell you what they think you want to hear. These people, you can’t get them to say anything critical of their regime” (Elliott, 2010, p. 231). Kellen concluded that they “could not be coerced” (Elliott, 2010, p. 231).

Goure and Kellen read the same interview transcripts and arrived at opposite perspectives. Gladwell (2016) sums it up:

That’s how intelligence failures happen. It’s not because someone screws up, or is stupid, or is lazy, it’s because the people that make sense of intelligence are human beings, with their own histories and biases.

As Kellen says, in his account of RAND’s leadership:

I can only say that the people that I knew who talked a lot about scientific talk and scientific this and that were the most unscientific people you can imagine. They just picked somebody and if they agreed with him or he agreed with them, then he was an expert, and if he didn’t agree with them, he was not an expert and they ruled it out (Gladwell, 2016).

In other words, RAND’s leaders were also undermined by confirmation bias.

Of course, intelligence failures are not unique to Vietnam, and their implications can be devastating. Military conflicts exact a substantial cost in lives, livelihoods and dollars: the Afghanistan war cost the USA more than $2tn, and almost 250,000 people died as a direct result (Crawford and Lutz, 2021). Intelligence failures happen in the market economy too. Central bankers’ failure [1] to interpret signs of the impending global financial crisis has been attributed to confirmation bias rooted in their efficient market hypothesis (EMH) worldview (Stiglitz, 2012).

What behavioral biases are at play?

Military intelligence is a key determinant of whether and how conflicts occur. Bar-Joseph and Levy (2009) describe seven sources of intelligence failures: a lack of information; a “noisy” environment (too much information, making it difficult to extract the signal); strategic deception (where one side deliberately deceives the other); individual psychology (including cognitive and motivational biases); small group dynamics (such as group-think); organizational behavior (leading to fragmentation or concentration of information); and politicization of intelligence (intelligence is deliberately aligned to policy preferences, for example, the claim of weapons of mass destruction in Iraq).

How does confirmation bias affect military intelligence?

Bar-Joseph and Levy (2009) recognize that many or all of the above factors might be at play in an intelligence failure, but in the case of Israel’s surprise at the Yom Kippur War they argue that the root cause was individual psychology. The director of military intelligence’s (DMI) conviction that his own assessment was correct led him to conceal information about recent Egyptian and Syrian actions and not to carry out orders to implement critical intelligence procedures (Bar-Joseph and Levy, 2009). The DMI’s assumptions about Egypt’s military strategy caused him to dismiss evidence of an impending Egyptian attack because it was inconsistent with his view (Chen, 2016).

There are many other examples of large-scale surprise attacks, which Dahl (2013) argues result not from an inability to separate the signal from the noise or to connect the dots, but from the need for specific, tactical-level intelligence and for policymakers who are receptive to that intelligence. Dahl refers to Heuer’s (1999) analysis of how many intelligence failures result from cognitive biases and mindsets that are resistant to change. Developing a strong point of view and then not being receptive to disconfirming intelligence is a demonstration of confirmation bias.

Nickerson (1998, p. 175) defines this bias as “the seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand.” Others define confirmation bias as our tendency to discount disconfirming evidence (Kappes et al., 2020). Importantly, confirmation bias relates to “unwitting selectivity in the acquisition and use of evidence” (Nickerson, 1998, p. 175). In the real world, the line between “unwitting selectivity” and the deliberate marshalling of evidence to support one’s case is often not clearly defined (Nickerson, 1998). Hence, it can be challenging to disentangle the cognitive bias (we see what we expect to see) and motivational bias (we see what we want or need to see) factors that lead to a judgment (Bar-Joseph and Levy, 2009; Nickerson, 1998).
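To make this mechanism concrete, the following sketch (our illustration, not drawn from Nickerson or Kappes et al.; the hypothesis, evidence stream and weights are invented) simulates two analysts updating the odds on the same hypothesis from the same reports. The unbiased analyst applies each report’s full evidential force; the biased analyst discounts disconfirming reports:

```python
def update(odds, likelihood_ratio, weight=1.0):
    """Multiply prior odds by the likelihood ratio raised to a weight.

    weight = 1.0 is a standard Bayesian update; weight < 1 shrinks the
    evidential force of a report before it is applied.
    """
    return odds * likelihood_ratio ** weight

# Invented evidence stream: 40 reports favor the hypothesis "the
# adversary's morale is broken" (likelihood ratio 2.0) and 60 disfavor
# it (0.5), so on balance the stream disconfirms the hypothesis.
reports = [2.0] * 40 + [0.5] * 60

unbiased = biased = 4.0  # both analysts start at 4:1 odds in favor
for lr in reports:
    unbiased = update(unbiased, lr)
    # Confirmation bias modeled as discounting of disconfirming evidence
    # (Kappes et al., 2020): disfavoring reports get only 20% weight.
    biased = update(biased, lr, weight=1.0 if lr > 1.0 else 0.2)

print(f"unbiased posterior odds: {unbiased:.3g}")  # ~3.8e-06: belief collapses
print(f"biased posterior odds:   {biased:.3g}")    # ~1.1e+09: belief hardens
```

The same 100 reports drive one analyst’s odds toward zero and the other’s toward near-certainty, which is, in miniature, how Goure and Kellen could read the same 62,000 pages of transcripts and diverge.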

What should we do about it?

Numerous debiasing techniques have been identified for each of the behavioral biases discussed in this article. In each section, we focus on just one technique. Table 1 outlines other debiasing techniques, categorized as organizational, debate and analytical techniques.

Red team/blue team is a debate technique for debiasing confirmation bias. Red teams are empowered to generate alternative perspectives to challenge strategic assumptions and plans (Zhang and Gronvall, 2020). Red teaming has been used by the military for more than a century and has also been used in the public and private sectors to better understand the interests, intentions and capabilities of rivals (Zenko, 2015b), for example, in capital project bidding (Heiligtag et al., 2017), investment decision making (Gatlin et al., 2017), improving R&D productivity (Smietana et al., 2015), terrorism defense (Zhang and Gronvall, 2020), cyber-security (Mirkovic et al., 2008) and even assessing survivability of space systems (Stokes et al., 2006).

Zenko (2015b, p. 17) writes that “An astonishing number of senior leaders are systemically incapable of identifying their organization’s most glaring and dangerous shortcomings.” This occurs for two reasons: cognitive biases, including confirmation bias; and organizational biases, where employees become captured by the institutional culture (Zenko, 2015b). Zenko (2015b) describes three broad categories of red-teaming techniques: simulations, including “war games,” designed to model a diverse range of situations and ultimately spur decision-makers to respond to various scenarios; vulnerability probes, when a red team actively tests defensive systems and procedures to identify key weaknesses; and alternative analyses, where key assumptions or information quality are challenged by promoting unconventional thinking (Zenko, 2015b).

The red team (attackers) is pitted against the blue team (defenders) (Boyens et al., 2012), which typically must defend against real or simulated attacks over a significant period, in a representative operational context, and according to rules established and monitored by a neutral monitoring group (the white team) (Boyens et al., 2012). By applying fresh eyes on a complex situation or intentionally opposing a certain position, red teams can greatly improve the accuracy of forecasts (Zenko, 2015b).

Success case

Red teams can deliver impressive results, such as giving businesses a competitive edge, finding flaws and vulnerabilities in military intelligence and troubleshooting dangerous military missions in advance (Zenko, 2015b). The successful 2011 US Navy SEAL mission that killed Osama bin Laden used red team preparation that included the developing, testing and refining of strategies, thus enabling a response to an unforeseen situation, the crash of one of the two transport helicopters (Zenko, 2015b).

The red team responsible for the raid benefited from a decade of deliberate efforts by the CIA that were triggered by the unprecedented terrorist attacks of 9/11. Following 9/11, senior White House officials believed that there were additional plots against the US. The Director of Central Intelligence, George Tenet, formed a group of contrarian thinkers to challenge conventional wisdom in the intelligence community (Zenko, 2015a). This group, known as the Red Cell, is a semi-independent unit devoted to “alternative analysis,” including techniques like “what ifs,” Team A/Team B exercises and premortem analysis (Klein, 2007), which can identify holes in a plan, model an adversary to understand their weaknesses or consider in advance all the conceivable ways a plan can fail and thereby mitigate these risks (Zenko, 2015a).

By design, the initial Red Cell did not include any terrorism experts and had only one Middle East specialist (Zenko, 2015a). Instead, members were individually selected on the basis of being creative, analytically fearless, excellent writers, deeply knowledgeable about history and world affairs and able to work in a team (Zenko, 2015a). Analysts typically served on the Red Cell for three months at a time, to keep participants fresh and to immerse as many analysts as possible in its techniques (Zenko, 2015a).

A red-team approach has also been shown to create value in a business context, as illustrated by Warren Buffett, who actively seeks contrary viewpoints (Gatlin et al., 2017), including by assigning two independent groups (a red team and a blue team) to argue opposing sides of a potential acquisition, with a bonus paid to the team whose view prevails (De Smet et al., 2019). The marginal cost of developing these two viewpoints can often be justified by the magnitude of the deals involved. Buffett provides a useful reminder of how we might think about confirmation bias: “You don’t ask the barber whether you need a haircut” (Buffett, 1994).

Mergers and acquisitions deal-making and over-optimism

“Thinking rosy futures is as biological as sexual fantasy” (Tiger, 1979, p. 35).

It is not just confirmation bias that should concern us when undertaking acquisitions – numerous behavioral biases contribute to a generally poor track record in mergers and acquisitions (M&A) transactions. Here, we focus on over-optimism.

Failure case

In April 2014, the Australian department store business David Jones (DJs) was bought by South African-based Woolworths Holdings (WHL) for AUD 2.1bn. Recognizing that it was buying a struggling business, WHL’s chief executive officer (CEO) announced that WHL could triple profitability in five years, saying: “we can transform this business” (Australian Associated Press, 2014). But in 2018 and 2019, WHL wrote off a total of AUD 1.2bn on DJs, more than half of the deal value. With these impairments, the strategic rationale was in tatters.

There are three reasons why over-optimism appears to have been at play here. First, the acquisition was made at a 25% premium and was seen by analysts as expensive (Hayward and Hambrick, 1997). Second, earnings were overestimated: the expected earnings margin of at least 10% came in at 4% in actuals; the FY19 expectation was also 10%, yet actuals were a mere 2%; and WHL revised its FY20 guidance down to 7%–9%, but still delivered only 2%. Third, this repeated failure to meet expectations is consistent with overconfident CEOs being less responsive to corrective feedback (Chen et al., 2015).

While M&As are important vehicles for strategic growth (Lubatkin, 1987), they often fail to deliver the intended performance improvements (Garbuio et al., 2010). Vinogradova (2021) shows that capital markets still perceive acquisitions as value destructive, and Rehm et al. (2012) find that large acquisitions have only a 44% chance of delivering returns above the industry average and show a negative median excess total return to shareholders. The acquisitions with the most value-destructive prospects are “large deals,” like the WHL–DJs case (Rehm et al., 2012). Martin (2016) calls M&A a “mug’s game,” reporting that 70%–90% of acquisitions are abysmal failures, yet M&A deals continue to be pervasive (Weber, 2018). The prospects for “programmatic deals” are far better, delivering an average of 4.5 percentage points greater excess total returns to shareholders than “large deals” (Rehm et al., 2012).

What behavioral biases are at play?

Researchers identify rational behaviours, such as agency issues (Eisenhardt, 1989; Jensen, 1986), where managers may pursue their own objectives at the expense of shareholders’ interests, as one reason that M&A deals are value destructive. Stock-based compensation means that CEOs can benefit substantially even from failed acquisitions (Martin, 2016). Other drivers are accounting and regulatory changes following the global financial crisis that made acquisitions more attractive (Martin, 2016), and Yoo and McCardle’s (2020) “valuator’s curse,” which provides a rational explanation for the over-valuation of acquisitions.

Researchers also identify non-rational behaviours, driven by cognitive biases, across the M&A lifecycle (Garbuio et al., 2010). Garbuio et al. (2010) note empire building and the lemming effect during target pursuit, as well as confirmation bias, external advisors’ role-conferred bias, over-optimism and the planning fallacy. They also highlight the role of the availability heuristic during preliminary due diligence, the winner’s curse during the bidding phase and anchoring and adjustment and the sunk-cost fallacy during final due diligence.

How does over-optimism affect M&A deals?

Warren Buffett likened acquisitions to the fairy tale The Frog Prince, where the corporate acquirer is a beautiful princess and the acquisition target the frog, which can be turned into a handsome prince with a kiss – or in this case, over-payment. Buffett’s (1981) view of this over-optimism – “We’ve observed many kisses but very few miracles” – finds robust theoretical and empirical support.

From Roll (1986) we learn that managers are subject to strong pressure to maintain high performance and that, coupled with hubris, this drives risk-seeking behaviours. The predicted consequence for M&A is that bidding managers over-estimate their ability to manage the target firm and hence over-pay. Hayward and Hambrick (1997) offer empirical support for this theory, finding that acquiring firms’ shareholder wealth falls following an acquisition, and that the greater the CEO hubris and acquisition premium, the greater the shareholder losses. Park et al. (2018) provide further empirical support in their analysis of CEO hubris in Korean firms. Billett and Qian (2008) also support Roll’s theory, finding that CEOs become over-confident after a successful acquisition and are therefore more likely to follow it with acquisitions that negatively impact their firm’s stock. Malmendier and Tate (2008) likewise find that over-confident CEOs over-estimate their ability to generate returns, over-paying for target companies and undertaking value-destroying mergers. Over-confident CEOs also tend to complete more deals (Hwang et al., 2020), which amplifies the problem.

What should we do about it?

A rich variety of techniques exists to debias over-optimism, some of which are displayed in Table 1. We choose to focus on the Mediating Assessments Protocol (MAP), which is both recent and promising. MAP is a meta-debiasing technique incorporating reference class forecasting [2] (RCF) and other debiasing practices, such as postponing the use of intuition, using relative scales and benefiting from the wisdom of the crowd (Kahneman et al., 2019). It is based on research on the structured job interview, essentially treating a strategic decision like a job candidate: a structured process identifies key traits, each of which is independently evaluated and serves as an input into the overall decision, which is delayed until all these inputs have been gathered (Kahneman et al., 2019).

Success case

Kahneman et al. (2021) demonstrate the use of MAP in a confidential case where an acquisition is assessed by a private equity firm, following six key steps: first, structure the decision into a set of mediating assessments; second, whenever possible conduct the mediating assessments using the “outside view” (defined as simple statistical analysis of analogous efforts completed earlier (Lovallo and Kahneman, 2003)); third, keep the assessments independent of one another where possible; fourth, review each assessment separately; fifth, have participants make their judgments individually, then explain them to the group, then make a new estimate in response to the estimates and explanations of others [3]; sixth, make the final decision by holistically considering the mediating assessments, allowing intuition only at this final stage. This approach has now been adopted by several private equity firms.
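To illustrate the shape of this protocol, here is a minimal sketch (our construction; the assessment names, scales and scores are hypothetical and not taken from Kahneman et al.’s confidential case):

```python
from statistics import mean

# Step 1: structure the decision into mediating assessments
# (hypothetical names for an acquisition decision).
assessments = ["management quality", "market attractiveness",
               "synergy realism", "price vs. outside view"]

# Steps 3-5: three evaluators score each assessment independently on a
# relative 0-100 scale, then revise after hearing the group's reasoning
# (estimate-talk-estimate). The scores below are invented.
second_round = {
    "management quality":     [65, 60, 65],
    "market attractiveness":  [80, 78, 82],
    "synergy realism":        [45, 50, 40],
    "price vs. outside view": [50, 48, 52],
}

# Step 4 (review each assessment separately): the profile of scores,
# not a single aggregate number, is carried into the final discussion.
for name in assessments:
    print(f"{name:>24}: {mean(second_round[name]):.0f}")

# Step 6 is deliberately not computed here: the final decision weighs
# the whole profile holistically, admitting intuition only at this
# last stage.
```

The point of the structure is sequencing: intuition is postponed until each assessment has been made independently, so no single early impression anchors the rest.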

Resource allocation and anchoring

“Even if you are on the right track, you’ll get run over if you just sit there” (Will Rogers).

M&A transactions, market entry strategies, capital projects and any other strategic decisions require deliberate choices about the allocation of a firm’s scarce resources. Here, we examine behavioral biases, specifically anchoring, that contribute to inertia, which impedes a firm’s resource allocation.

Failure case

Nokia's success in the early 2000s was built on the technology it developed for its Symbian-based handsets. At the end of 2007, the Symbian operating system had a market share of 65%, well above its competitors (Alcacer et al., 2014). However, with the emergence of smartphones such as the iPhone and the Android handsets manufactured by HTC, Motorola, Sony and Samsung, the industry was shifting to software-focused ecosystems. Internal bureaucracy meant that Nokia continued to concentrate on low-end feature phones and its patented Symbian operating system (Vuori and Huy, 2016). By 2010, Apple’s App Store hosted 300,000 apps, Android 130,000 and Nokia just 30,000 (Vuori and Huy, 2016). Symbian’s market share had fallen below 5% by the time it was discontinued in 2012 (Han and Cho, 2016).

Failure to allocate resources effectively in the light of changing industry dynamics is not uncommon. Corporations have a tendency to spread resources across all areas of their business at once (also called peanut-buttering [4]), typically with little or no benefit (Bardolet et al., 2011; Bradley et al., 2018). Ranking firms by performance into tertiles, the most successful firms shifted more than half of their capital across their business units over a 15-year period, earning 30% higher total returns to shareholders than bottom-tertile firms (Hall et al., 2012). A more recent study arrived at a similar conclusion (Lovallo et al., 2020).

What behavioral biases are at play?

Inertia primarily occurs due to cognitive biases, such as the sunk-cost fallacy, status quo bias and anchoring, in combination with corporate politics (Lovallo et al., 2020). Capital allocation failures also result from agency problems arising from information asymmetry and incentive misalignment (Harris and Raviv, 1996). Of these distorting factors, anchoring has been shown to be the most robust bias contributing to inertia (Tversky and Kahneman, 1974), and it is especially evident in executives’ most recent resource allocation decisions (Garbuio et al., 2011; Hall et al., 2012).

How does anchoring affect resource allocation?

Resource allocation decisions are affected by anchoring for four reasons. First, last year’s budget usually serves as a ready and justified reference point (Bardolet et al., 2011; Hall et al., 2012). Second, our initial judgments carry significant weight, and we do not react sufficiently to new information (Tversky and Kahneman, 1974). Third, anchoring is reinforced by loss aversion (Hall et al., 2012). Fourth, anchoring might occur because we value things more when they belong to us (Thaler, 1980), otherwise known as “endowed anchoring” (Garbuio et al., 2011). For example, Collinson and Wilson (2006), in a study of Japanese firms, reveal that established interdivisional and supplier relationships are over-valued because they are treated like an endowment, hampering firms’ strategic flexibility.
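Anchoring of this kind can be measured before it is treated. The inertia benchmarking diagnostic listed in Table 1 (Hall et al., 2012) amounts to correlating each unit’s share of resources in the most recent year with its share in prior years; a minimal sketch with invented allocation data follows (requires Python 3.10+ for statistics.correlation):

```python
from statistics import correlation  # available in Python 3.10+

# Invented capital allocation shares (%) across five business units.
last_year = [30, 25, 20, 15, 10]
this_year = [29, 26, 20, 15, 10]  # nearly identical to last year

r = correlation(last_year, this_year)
print(f"year-over-year allocation correlation: {r:.3f}")
# A correlation near 1.0, as here, suggests this year's budget is
# anchored on last year's rather than re-derived from opportunity.
```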

What should we do about it?

A recommended debiasing technique is the CEO piggybank, where a large contingency fund is set aside to seize opportunities, whether to nurture existing businesses or to acquire new assets. This “changes the environment,” enabling a more rational solution (Sibony et al., 2017; Soll et al., 2015b). The CEO piggybank helps managers become less anxious about under-performing units, so fewer resources are directed to fixing them (Arrfelt et al., 2013). This method has been shown to reduce under-investment in over-performing business units (Arrfelt et al., 2013; Lovallo et al., 2020; Lungeanu et al., 2016). Consequently, the CEO piggybank provides more flexibility to change the composition of firms’ assets (Bradley et al., 2018; Lovallo et al., 2020).

Success case

The CEO piggybank can be implemented by putting a certain percentage of the organizational portfolio up for sale each year, changing the burden of proof such that managers must justify the resources they need, and giving the CEO sole discretion to allocate a certain percentage of the company’s capital. For example, as leader of Exxon Mobil, Lee Raymond required executives to identify 3%–5% of their asset base for disposal, which helped identify non-strategic assets and free up cash (Hall et al., 2012; Lovallo et al., 2020). When the CEO piggybank technique is applied, managers are instructed that a proportion of their assets is to be sold unless they can justify otherwise, which has been shown to minimize political infighting over budgets. In almost all cases, unit leaders at Exxon Mobil could not make a case for retaining their assets, which were then sold. Furthermore, allowing CEOs to allocate capital provides an opportunity to move the organisation more quickly toward what the CEO believes are exciting growth opportunities, without first having to fight for resources with the company’s executive committee (Hall et al., 2012).

A similar approach involves categorizing the portfolio into different groups to determine allocation priority. For example, during Alan Lafley’s leadership of P&G, the business was divided into three categories: “Future Stars,” businesses with potential growth; “Local Jewels,” businesses with strong brands in specific countries; and “Under-performers,” businesses for divestiture (Wells and Danskin, 2014). Lafley presided over the discontinuation or sale of about 15 businesses a year between 2000 and 2009 (Lafley and Martin, 2013).

We cannot know whether the CEO piggybank had a causal impact on firm performance, but during Lafley’s and Raymond’s applications of the CEO piggybank, P&G and Exxon Mobil each doubled sales and more than quadrupled net profit (Lafley and Martin, 2013; The Economist, 2005).

Capital projects and the planning fallacy

“We learn from history that we learn nothing from history” (Shaw, 1903/1948, p. 485).

Capital projects make large demands on a firm or government’s resources. Here, we explore the planning fallacy, which plagues capital projects. We show that techniques such as RCF are gaining traction in practice.

Failure case

In 2008, California voters approved the construction of a 150-minute train route from Los Angeles to San Francisco, at a cost of $40bn and with a construction period of 20 years. However, problems soon arose: the timeline was repeatedly pushed back, while the budget swelled to a staggering $100bn (Flyvbjerg and Gardner, 2021). An even more dramatic example is the construction of the Sydney Opera House, scheduled to finish in 1963 at a cost of $7m but delayed by 10 years and ultimately costing $102m (Buehler et al., 1994).

The planning fallacy is a common experience in business and public projects alike. The Standish Group’s research on IT projects between 2005 and 2020 finds that fewer than one-third were completed on time and within budget; around 43%–46% were delivered late, over budget or without all required features; and 19%–21% were cancelled or never used (Johnson and Mulder, 2021). Flyvbjerg and Sunstein (2016) find that average cost over-runs in public projects ranged from 24% to 96%, and over-runs of 100% or more are not uncommon (Flyvbjerg et al., 2009).

What behavioral biases are at play?

These statistics demonstrate that people often under-estimate task completion times and costs. Forecasting failures are primarily induced by cognitive biases such as the planning fallacy, where planners fail to adjust their predictions even though previous projects have taken longer than planned (Buehler et al., 1994; Flyvbjerg, 2009). Combined with anchoring and optimism (Lovallo and Kahneman, 2003), the planning fallacy triggers overly sanguine forecasts, and the principal–agent dilemma magnifies the actual cost and time (Flyvbjerg et al., 2009).

How does the planning fallacy affect capital projects?

The planning fallacy, like over-optimism, is a manifestation of our “inside-view” behavior. However, it differs in that planners’ optimism persists even in the face of historical evidence to the contrary (Buehler et al., 2010). By neglecting the past, we expect to finish our tasks before we actually do (Buehler et al., 2010), that is, we construct a narrative by focusing on cases that justify our optimism and fail to consider alternative scenarios (Buehler et al., 1994; Kahneman and Tversky, 1979).

What should we do about it?

RCF is an analytical technique that can help mitigate the planning fallacy. RCF gives managers an “outside view,” so they can gather information from previous, similar projects, irrespective of the project’s success. This prevents managers from focusing on similar, easily recalled projects that succeeded and are close in time and space to the decision at hand (Kahneman and Lovallo, 1993). By using realized outcomes of past projects, rather than manipulated estimates of the current project, RCF enables managers to forecast project estimates using more reliable, top-down estimates of the project’s true costs, schedule and benefits (Lovallo et al., 2012).

Lovallo and Kahneman (2003) offer a five-step process for RCF. First, select a set of past projects as the reference class, evaluating similarities and differences to determine which projects are most relevant to planning the current project; ensure there are enough cases to be statistically valid and that the reference projects are comparable to the current project. Second, assess the distribution of outcomes, that is, the probability distribution of actual outcomes in this reference class, which will be used to determine the uplift needed for the new project; the decision-maker should document the results in terms of the relevant variables (e.g. total cost, schedule), showing extreme values, median values and any clusters. Third, estimate the project’s position in the distribution: decision-makers compare their project with the reference class projects to arrive at an intuitive estimate, which is most likely biased, so the next two steps are applied to mitigate those biases. Fourth, assess the degree to which the type of information available in this case allows accurate prediction of outcomes: based on historical precedent, estimate the correlation between past predictions and outcomes in the reference class to assess the reliability of the forecast made in Step 3. Fifth, correct the intuitive estimate, which is likely to be optimistic, by adjusting it toward the reference-class mean in line with the predictability analysis in Step 4; the less reliable the prediction, the more the estimate must be regressed to the mean. To illustrate, assume an intuitive construction cost projection of $4bn for a rail project, an average reference-class rail project cost of $7bn and an estimated correlation coefficient of 0.6. The regression estimate of the construction cost is $7bn + [0.6 × ($4bn − $7bn)] = $5.2bn (Flyvbjerg et al., 2009).
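Step 5 is a standard regression toward the reference-class mean; the following sketch simply reproduces the arithmetic above (the function name is ours):

```python
def rcf_estimate(intuitive, reference_mean, reliability):
    """Regress an intuitive forecast toward the reference-class mean.

    reliability is the estimated correlation between past predictions
    and outcomes in the reference class: 1.0 means trust the intuitive
    estimate fully; 0.0 means use the reference-class mean outright.
    """
    return reference_mean + reliability * (intuitive - reference_mean)

# The rail example from the text: $4bn intuitive estimate, $7bn
# reference-class mean, correlation 0.6.
print(f"${rcf_estimate(4.0, 7.0, 0.6):.1f}bn")  # -> $5.2bn
```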

Success case

RCF has been actively used in large transportation projects. Since 2003, applying RCF has been mandatory for UK infrastructure investments larger than £40m (Park, 2021); it is also a requirement in Denmark, Germany, Norway, Sweden, Switzerland, The Netherlands and the USA (Park, 2021). To be approved, UK transportation projects over the past 20 years have used the Infrastructure and Projects Authority database as a reference class and have been required to show a 50% probability of completion within their original budgets. Before the adoption of RCF, the average cost over-run of large public projects in the UK was 38% (MacDonald, 2002). Park (2021) finds that, of 39 large UK projects planned and delivered after the RCF requirement, including road, rail and building projects, 62% were completed within budget, surpassing the targeted probability by 12 percentage points, while average cost over-runs fell from 38% to 5%.

The validity of RCF analytics has been tested in various other cases and settings, including a medium-sized construction company (Batselier and Vanhoucke, 2016), a retrospective analysis of Hong Kong transportation (Flyvbjerg et al., 2016), the “Stuttgart 21” railway project (Steininger et al., 2020) and Turkish building projects (Bayram and Al-Jibouri, 2018).

RCF is perhaps the best illustration of the success of debiasing “in the wild.” RCF, an analytical technique, is also being used as part of the emerging Mediating Assessments Protocol, an organizational technique discussed earlier. That debiasing techniques are being combined seems appropriate, because biases in the wild also often combine. For example, when central bankers at the World Economic Forum in Davos asked “who could have foreseen this?” in reference to the global financial crisis, Joseph Stiglitz (2012) observed that their EMH training led them to frame the events as consistent with EMH (framing effect and availability heuristic); that, given the variety and complexity of the information, they selected information confirming their views (confirmation bias); and that these biases, along with their overconfidence, led them to disregard information suggesting a bubble was forming, resulting in their inaction.

Conclusion

Q1. Is behavioral strategy coming of age?

Q2. Has it moved from the lab into the wild?

Q3. Do strategists recognize the need to take a behavioral approach to strategy, acknowledging their susceptibilities to behavioral biases?

In this paper, we have considered these questions by discussing debiasing techniques in different real-world settings. These techniques vary in their relative effectiveness: analytical methods significantly outperformed debate methods in sales forecasts (Sanders and Manrodt, 2003); organizational techniques help managers make better decisions by engaging System 1 (automatic and effortless) processes, which require less effort than analytical techniques that rely on System 2 (deliberate and effortful) processes (Kahneman, 2011; Liu et al., 2017); and debate techniques are the most popular among practitioners (Muntwiler, 2021).

Notwithstanding the above, studies exploring behavioral strategy in the wild remain scarce, and we find none that consider more complex real-world settings in which multiple biases interact. Future research should build on these “in-the-wild” studies to offer a more comprehensive assessment of the relative effectiveness of behavioral strategies.

The cases presented in this paper suggest that the answer to whether behavioral strategy is coming of age is mixed. There have been profound strategic decision failures at least partly attributable to cognitive biases that went undebiased, and there have been promising successes when debiasing has been applied, demonstrating that deliberate debiasing can improve the quality of strategic decisions. What is clear from these cases is that strategic managers need to debias their business decisions: our success cases provide evidence that debiasing can work, while our failure cases show that not debiasing can be disastrous. Leaving strategic decisions to chance seems both foolhardy and irresponsible. We urge humility as a foundation for practical, proven debiasing techniques grounded in optimism, but not over-optimism!

Table 1. Summary of debiasing techniques applicable to the four behavioral biases covered in this paper, categorized as organizational, debate and analytical techniques

Confirmation bias

Organizational techniques: Two-level governance – A governance structure to challenge the decision team’s investment decisions; having two layers of decision helps catch flawed judgments that make it past the decision team (Sibony et al., 2017). Mandatory alternatives – A rule whereby every team or individual proposing a project for approval is required to propose not one, but two. As a result, “yes/no” or “whether or not” questions are outlawed and it becomes normal, not exceptional, to see projects being rejected (Sibony, 2020).

Debate techniques: Red team/blue team – Two separate teams develop competing recommendations on a proposal. One group – the blue team – investigates with a view to proposing the project or investment, while the other – the red team – builds a case against it. An independent decision-maker decides based on the opposing cases presented (Heiligtag et al., 2017; Zhang and Gronvall, 2020). Advance checklist “What would need to be true” – In advance of the facts of the proposal being known and discussed, the decision-makers agree on the criteria they would require to be true in order for the decision to be made (McGrath and MacMillan, 2009).

Analytical techniques: Qualitative scenario analysis – Informs decisions by developing a set of qualitative, representative scenarios of alternative futures and identifying the likely consequences of the decision under consideration (Clemons, 1995).

Over-optimism

Organizational techniques: Mediating assessments protocol – Incorporates reference class forecasting and other debiasing practices, such as postponing the use of intuition, using relative scales and benefiting from the wisdom of the crowd (Kahneman et al., 2019).

Debate techniques: Premortem – The leader asks the group to imagine themselves in a future in which the project they are considering has been a total failure and to consider the reasons that it went wrong; this serves to elicit weaknesses and risks (Klein, 2007). Consider the opposite/“What if we’re wrong” – Consideration of alternative hypotheses by asking “What are some reasons that our initial judgment might be wrong?” (Larrick, 2004).

Analytical techniques: Test, learn, adapt – A systematic policy of piloting concepts in advance of full roll-out; instead of trying to predict the future, test a potential solution actively by trying it on a small scale (Reis, 2011; Smit and Lovallo, 2014). Periphery scan – Includes learning from the past (e.g. What have been our past blind spots? What is happening in these areas now?), examining the present (e.g. What are your mavericks and outliers trying to tell you?) and envisioning new futures (e.g. What emerging technologies could change the game?) (Day and Schoemaker, 2005).

Anchoring/inertia

Organizational techniques: CEO piggybank – An approach to budgeting in which a large contingency fund is set aside to seize opportunities, whether to nurture existing businesses with additional capital or to acquire new assets at knockdown prices (Bradley et al., 2018; Lovallo et al., 2020).

Debate techniques: If this was your money – An exercise in which each participant is asked to allocate funds as if they were the participant’s individual portfolio, not corporate funds. Reanchoring – Debate on cases where there is a large discrepancy between history (i.e. this year’s target) and model, allowing a discussion in which large amounts are reallocated; done using an outside set of forecasts (e.g. competitor benchmarking) (Lovallo and Sibony, 2012).

Analytical techniques: Inertia benchmarking – Measures the correlation between the percentage of resources each cell (e.g. division) in a portfolio received in the most recent year and what it received in previous years; this draws attention to whether resource allocation is too stable (Hall et al., 2012).

Planning fallacy

Organizational techniques: Trip-wires – Development of an early warning system that triggers one to act when certain pre-defined conditions are met (Soll et al., 2015a). Incentives/motivation – Establishes financial and non-financial reward policies for accurate project estimates, and punishments for inaccuracies (Flyvbjerg, 2009). Share financial responsibility – Budget, cost over-runs and benefit shortfalls are shared between proposing and approving agencies; this reduces the agency-problem driver of the planning fallacy (Flyvbjerg et al., 2009).

Debate techniques: Additional downside – The rule of thumb for this approach is to add 20%–25% more downside to the most pessimistic scenario and then decide whether the plan is still viable (Belsky and Gilovich, 2010). Unpacking a task – An exercise in which participants break down multifaceted tasks into precise subcomponents; unpacking helps to consider under-counted components and provides a longer and more accurate forecast (Kruger and Evans, 2004).

Analytical techniques: Reference class forecast – A method of forecasting based on a sample of relevant comparable cases; requires explicitly creating a large enough “reference class” (often from the experiences of other companies) (Lovallo and Kahneman, 2003). Similarity-based forecasting – An application of reference class forecasting in which reference classes are not weighted equally but by similarity (Lovallo et al., 2012).

Notes

1. Discussed in more detail at the end of the planning fallacy section.

2. Discussed in detail in the capital projects section of this paper.

3. Also termed the estimate-talk-estimate method or the mini-Delphi method.

4. Managers’ tendency to allocate resources smoothly across the whole enterprise at a general level, despite opportunities in some areas being greater than in others (Bradley et al., 2018; Viguerie et al., 2008).

References

Alcacer, J., Khanna, T. and Snively, C. (2014), “The rise and fall of Nokia”, Harvard Business School Case 714-428.

Arrfelt, M., Wiseman, R.M. and Hult, G.T.M. (2013), “Looking backward instead of forward: aspiration-driven influences on the efficiency of the capital allocation process”, Academy of Management Journal, Vol. 56 No. 4, pp. 1081-1103.

Australian Associated Press (2014), “We will transform David Jones, say South Africans after $2.15bn offer”, The Guardian, 9 April, available at: www.theguardian.com/world/2014/apr/09/south-africans-vow-transform-david-jones-

Bardolet, D., Fox, C.R. and Lovallo, D. (2011), “Corporate capital allocation: a behavioral perspective”, Strategic Management Journal, Vol. 32 No. 13, pp. 1465-1483.

Bar-Joseph, U. and Levy, J.S. (2009), “Conscious action and intelligence failure”, Political Science Quarterly, Vol. 124 No. 3, pp. 461-488.

Batselier, J. and Vanhoucke, M. (2016), “Practical application and empirical evaluation of reference class forecasting for project management”, Project Management Journal, Vol. 47 No. 5, pp. 36-51.

Bayram, S. and Al-Jibouri, S. (2018), “Cost forecasting using RCF: a case study for planning public building projects costs in Turkey”, International Journal of Construction Management, Vol. 18 No. 5, pp. 405-417.

Belsky, G. and Gilovich, T. (2010), Why Smart People Make Big Money Mistakes and How to Correct Them: Lessons from The Life-Changing Science of Behavioral Economics, Simon and Schuster, New York, NY.

Billett, M.T. and Qian, Y. (2008), “Are overconfident CEOs born or made? Evidence of self-attribution bias from frequent acquirers”, Management Science, Vol. 54 No. 6, pp. 1037-1051.

Boyens, J., Paulsen, C., Bartol, N., Moorthy, R. and Shankles, S. (2012), “Notional supply chain risk management practices for federal information systems”, NIST Interagency Report, p. 1.

Bradley, C., Hirt, M. and Smit, S. (2018), Strategy beyond the Hockey Stick: People, Probabilities, and Big Moves to Beat the Odds, John Wiley & Sons, Hoboken, NJ.

Buehler, R., Griffin, D. and Ross, M. (1994), “Exploring the ‘planning fallacy’: why people underestimate their task completion times”, Journal of Personality and Social Psychology, Vol. 67 No. 3, p. 366.

Buehler, R., Griffin, D. and Peetz, J. (2010), “Chapter one – the planning fallacy: cognitive, motivational, and social origins”, in Zanna, M.P. and Olson, J.M. (Eds), Advances in Experimental Social Psychology, Elsevier, Academic Press, pp. 1-62.

Buffett, W. (1981), “Annual report to shareholders”, Berkshire Hathaway Corporation.

Buffett, W.E. (1994), “1994 Berkshire Hathaway annual meeting”, Warren Buffett Archive.

Chen, J. (2016), “Cognitive biases: the root of irrationality in military decision-making”, Journal of the Singapore Armed Forces, Vol. 42, pp. 61-72.

Chen, G., Crossland, C. and Luo, S. (2015), “Making the same mistake all over again: CEO overconfidence and corporate resistance to corrective feedback”, Strategic Management Journal, Vol. 36 No. 10, pp. 1513-1535.

Clemons, E.K. (1995), “Using scenario analysis to manage the strategic risks of reengineering”, MIT Sloan Management Review, Vol. 36 No. 4, p. 61.

Collinson, S. and Wilson, D.C. (2006), “Inertia in Japanese organizations: knowledge management routines and failure to innovate”, Organization Studies, Vol. 27 No. 9, pp. 1359-1387.

Crawford, N.C. and Lutz, C. (2021), “Costs of war”, Watson Institute, available at: https://watson.brown.edu/costsofwar/files/cow/imce/papers/2021/Human%20and%20Budgetary%20Costs%20of%20Afghan%20War%2C%202001-2022.pdf

Dahl, E.J. (2013), Intelligence and Surprise Attack: Failure and Success from Pearl Harbor to 9/11 and Beyond, Georgetown University Press, Washington, DC.

Day, G.S. and Schoemaker, P.J. (2005), “Scanning the periphery”, Harvard Business Review, Vol. 83 No. 11, p. 135.

De Smet, A., Koller, T. and Lovallo, D.P. (2019), “Bias busters: getting both sides of the story”, McKinsey and Company, available at: www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/bias-busters-getting-both-sides-of-the-story

Donnell, J.C., Pauker, G.J. and Zasloff, J.J. (1965), “Viet Cong motivation and morale in 1964: a preliminary report”, Rand Corp., Santa Monica, CA, available at: www.rand.org/pubs/research_memoranda/RM4507z3.html

Eisenhardt, K.M. (1989), “Agency theory: an assessment and review”, The Academy of Management Review, Vol. 14 No. 1, pp. 57-74.

Elliott, M. (2010), RAND in Southeast Asia: A History of the Vietnam War Era, Rand Corporation, Santa Monica, CA.

Flyvbjerg, B. (2009), “Survival of the unfittest: why the worst infrastructure gets built – and what we can do about it”, Oxford Review of Economic Policy, Vol. 25 No. 3, pp. 344-367.

Flyvbjerg, B. and Sunstein, C.R. (2016), “The principle of the malevolent hiding hand; or, the planning fallacy writ large”, Social Research: An International Quarterly, Vol. 83 No. 4, pp. 979-1004.

Flyvbjerg, B. and Gardner, D. (2021), “For infrastructure projects to succeed, think slow and act fast”, available at: www.bostonglobe.com/2021/04/01/opinion/bidens-transportation-projects-succeed-think-slow-act-fast/

Flyvbjerg, B., Garbuio, M. and Lovallo, D. (2009), “Delusion and deception in large infrastructure projects: two models for explaining and preventing executive disaster”, California Management Review, Vol. 51 No. 2, pp. 170-194.

Flyvbjerg, B., Hon, C.K. and Fok, W.H. (2016), “Reference class forecasting for Hong Kong’s major roadworks projects”, Proceedings of the Institution of Civil Engineers-Civil Engineering, Thomas Telford, pp. 17-24.

Garbuio, M., Lovallo, D. and Horn, J. (2010), “Overcoming biases in M&A: a process perspective”, in Cooper, C.L. and Finkelstein, S. (Eds), Advances in Mergers and Acquisitions, Emerald Group Publishing, Bradford, pp. 83-104.

Garbuio, M., King, A.W. and Lovallo, D. (2011), “Looking inside: psychological influences on structuring a firm’s portfolio of resources”, Journal of Management, Vol. 37 No. 5, pp. 1444-1463.

Gatlin, K.P., Hallock, D. and Cooley, L.G. (2017), “Confirmation bias among business students: the impact on decision-making”, Review of Contemporary Business Research, Vol. 6 No. 2, pp. 10-15.

Gavetti, G. (2012), “Perspective – toward a behavioral theory of strategy”, Organization Science, Vol. 23 No. 1, pp. 267-285.

Gladwell, M. (2016), “Saigon, 1965”, available at: www.pushkin.fm/episode/saigon-1965/

Greve, H.R. (2013), “Microfoundations of management: behavioral strategies and levels of rationality in organizational action”, Academy of Management Perspectives, Vol. 27 No. 2, pp. 103-119.

Hall, S., Lovallo, D. and Musters, R. (2012), “How to put your money where your strategy is”, McKinsey Quarterly, Vol. 2, pp. 27-38.

Han, Q. and Cho, D. (2016), “Characterizing the technological evolution of smartphones: insights from performance benchmarks”, Proceedings of the 18th Annual International Conference on Electronic Commerce: e-Commerce in Smart Connected World, pp. 1-8.

Harris, M. and Raviv, A. (1996), “The capital budgeting process: incentives and information”, The Journal of Finance, Vol. 51 No. 4, pp. 1139-1174.

Hayward, M.L. and Hambrick, D.C. (1997), “Explaining the premiums paid for large acquisitions: evidence of CEO hubris”, Administrative Science Quarterly, Vol. 42 No. 1, pp. 103-127.

Heiligtag, S., Webb, A. and Günther, B. (2017), “The Debiasing advantage: how one company is gaining it”, McKinsey on Risk, No. 3, pp. 18-21.

Heuer, R.J. (1999), Psychology of Intelligence Analysis, Center for the Study of Intelligence, Washington, DC.

Hwang, H.D., Kim, H.-D. and Kim, T. (2020), “The blind power: power-led CEO overconfidence and M&A decision making”, The North American Journal of Economics and Finance, Vol. 52, p. 101141.

Jensen, M.C. (1986), “Agency costs of free cash flow, corporate finance, and takeovers”, The American Economic Review, Vol. 76 No. 2, pp. 323-329.

Johnson, J. and Mulder, H. (2021), “Endless modernization: how infinite flow keeps software fresh”, available at: https://content.microfocus.com/app-modernization-dx-tb/endless-modernization

Kahneman, D. (2011), Thinking, Fast and Slow, Macmillan, New York, NY.

Kahneman, D. and Tversky, A. (1979), “Intuitive prediction: biases and corrective procedures”, TIMS Studies in Management Science, Vol. 12, pp. 313-327.

Kahneman, D. and Lovallo, D. (1993), “Timid choices and bold forecasts: a cognitive perspective on risk taking”, Management Science, Vol. 39 No. 1, pp. 17-31.

Kahneman, D., Lovallo, D. and Sibony, O. (2019), “A structured approach to strategic decisions”, MIT Sloan Management Review, Vol. 60 No. 3, pp. 67-73.

Kahneman, D., Sibony, O. and Sunstein, C.R. (2021), Noise: A Flaw in Human Judgment, Little, Brown, New York, NY.

Kappes, A., Harvey, A.H., Lohrenz, T., Montague, P.R. and Sharot, T. (2020), “Confirmation bias in the utilization of others’ opinion strength”, Nature Neuroscience, Vol. 23 No. 1, pp. 130-137.

Klein, G. (2007), “Performing a project premortem”, Harvard Business Review, Vol. 85 No. 9, pp. 18-19.

Kruger, J. and Evans, M. (2004), “If you don’t want to be late, enumerate: unpacking reduces the planning fallacy”, Journal of Experimental Social Psychology, Vol. 40 No. 5, pp. 586-598.

Lafley, A.G. and Martin, R.L. (2013), Playing to Win: How Strategy Really Works, Harvard Business Press, Boston, MA.

Larrick, R.P. (2004), “Debiasing”, in Koehler, D.J. and Harvey, N. (Eds), Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Malden, MA, pp. 316-338.

Levinthal, D.A. (2011), “A behavioral approach to strategy – what’s the alternative?”, Strategic Management Journal, Vol. 32 No. 13, pp. 1517-1523.

Liu, C., Vlaev, I., Fang, C., Denrell, J. and Chater, N. (2017), “Strategizing with biases: making better decisions using the mindspace approach”, California Management Review, Vol. 59 No. 3, pp. 135-161.

Lovallo, D. and Kahneman, D. (2003), “Delusions of success”, Harvard Business Review, Vol. 81 No. 7, pp. 56-63.

Lovallo, D. and Sibony, O. (2010), “The case for behavioral strategy”, McKinsey Quarterly, March, pp. 1-14.

Lovallo, D. and Sibony, O. (2012), “Re-anchor your next budget meeting”, Harvard Business Review, available at: https://hbr.org/2012/03/can-you-re-anchor-your-next-bu

Lovallo, D., Clarke, C. and Camerer, C. (2012), “Robust analogizing and the outside view: two empirical tests of case‐based decision making”, Strategic Management Journal, Vol. 33 No. 5, pp. 496-512.

Lovallo, D., Brown, A.L., Teece, D.J. and Bardolet, D. (2020), “Resource re‐allocation capabilities in internal capital markets: the value of overcoming inertia”, Strategic Management Journal, Vol. 41 No. 8, pp. 1365-1380.

Lubatkin, M. (1987), “Merger strategies and stockholder value”, Strategic Management Journal, Vol. 8 No. 1, pp. 39-53.

Lungeanu, R., Stern, I. and Zajac, E.J. (2016), “When do firms change technology‐sourcing vehicles? The role of poor innovative performance and financial slack”, Strategic Management Journal, Vol. 37 No. 5, pp. 855-869.

MacDonald, M. (2002), Review of Large Public Procurement in the UK, HM Treasury, Croydon, London.

McGrath, R.G. and MacMillan, I.C. (2009), Discovery-Driven Growth: A Breakthrough Process to Reduce Risk and Seize Opportunity, Harvard Business Press, Boston, MA.

Malmendier, U. and Tate, G. (2008), “Who makes acquisitions? CEO overconfidence and the market’s reaction”, Journal of Financial Economics, Vol. 89 No. 1, pp. 20-43.

Martin, R.L. (2016), “M&A: the one thing you need to get right”, Harvard Business Review, Vol. 94 No. 6, p. 12.

Mirkovic, J., Reiher, P., Papadopoulos, C., Hussain, A., Shepard, M., Berg, M. and Jung, R. (2008), “Testing a collaborative DDoS defense in a red team/blue team exercise”, IEEE Transactions on Computers, Vol. 57 No. 8, pp. 1098-1112.

Munger, C.T. (1995), “The psychology of human misjudgement”, Speech at Harvard Law School, available at: www.tilsonfunds.com/mungerpsych.pdf

Muntwiler, C. (2021), “Debiasing management decisions: overcoming the practice/theory gap within the managerial decision process”, TAKE 2021 Theory and Applications in the Knowledge Economy, available at: www.alexandria.unisg.ch/publications/263136

Nickerson, R.S. (1998), “Confirmation bias: a ubiquitous phenomenon in many guises”, Review of General Psychology, Vol. 2 No. 2, pp. 175-220.

Park, J. (2021), “Curbing cost overruns in infrastructure investment: has reference class forecasting delivered its promised success?”, European Journal of Transport and Infrastructure Research, Vol. 21 No. 2, pp. 120-136.

Park, J.-H., Kim, C., Chang, Y.K., Lee, D.-H. and Sung, Y.-D. (2018), “CEO hubris and firm performance: exploring the moderating roles of CEO power and board vigilance”, Journal of Business Ethics, Vol. 147 No. 4, pp. 919-933.

Powell, T.C., Lovallo, D. and Fox, C.R. (2011), “Behavioral strategy”, Strategic Management Journal, Vol. 32 No. 13, pp. 1369-1386.

Rehm, W., Uhlaner, R. and West, A. (2012), “Taking a longer-term look at M&A value creation”, McKinsey Quarterly, pp. 1-7.

Ries, E. (2011), The Lean Startup, Crown Business, New York, NY.

Roll, R. (1986), “The hubris hypothesis of corporate takeovers”, The Journal of Business, Vol. 59 No. 2, pp. 197-216.

Rosenzweig, P. (2010), “Robert S. McNamara and the evolution of modern management”, Harvard Business Review, available at: https://hbr.org/2010/12/robert-s-mcnamara-and-the-evolution-of-modern-management

Sanders, N.R. and Manrodt, K.B. (2003), “The efficacy of using judgmental versus quantitative forecasting methods in practice”, Omega, Vol. 31 No. 6, pp. 511-522.

Shaw, B. (1903/1948), Man and Superman: A Comedy and a Philosophy, The University Press, Cambridge.

Sibony, O. (2020), You’re About to Make a Terrible Mistake: How Biases Distort Decision-Making and What You Can Do to Fight Them, Little, Brown Spark, New York, NY.

Sibony, O., Lovallo, D. and Powell, T.C. (2017), “Behavioral strategy and the strategic decision architecture of the firm”, California Management Review, Vol. 59 No. 3, pp. 5-21.

Simon, H.A. (1955), “A behavioral model of rational choice”, The Quarterly Journal of Economics, Vol. 69 No. 1, pp. 99-118.

Smietana, K., Ekstrom, L., Jeffery, B. and Møller, M. (2015), “Improving R&D productivity”, Nature Reviews Drug Discovery, Vol. 14 No. 7, p. 455.

Smit, H. and Lovallo, D. (2014), “Creating more accurate acquisition valuations”, MIT Sloan Management Review, Vol. 56 No. 1, p. 63.

Snyder, J. (2013), The Ideology of the Offensive, Cornell University Press, Ithaca, NY.

Soll, J.B., Milkman, K.L. and Payne, J.W. (2015a), “Outsmart your own biases”, Harvard Business Review, Vol. 93 No. 5, pp. 64-71.

Soll, J.B., Milkman, K.L. and Payne, J.W. (2015b), “A user’s guide to debiasing”, in Keren, G. and Wu, G. (Eds), Wiley-Blackwell Handbook of Judgment and Decision Making, Blackwell, New York, NY, pp. 924-951.

Steininger, B.I., Groth, M. and Weber, B.L. (2020), “Cost overruns and delays in infrastructure projects: the case of Stuttgart 21”, Journal of Property Investment and Finance, Vol. 39 No. 3, pp. 256-282.

Stiglitz, J. (2012), “Creating a learning society”, available at: www.youtube.com/watch?v=-pdZXBshad8

Stokes, G.H., Schue, H.K., Betz, J., Brock, J., Dougherty, L., Fleischman, J., Hastings, D., Hull, G., Razouk, R. and Whelan, D. (2006), Space Survivability, Scientific Advisory Board (Air Force), Washington, DC.

Thaler, R.H. (1980), “Toward a positive theory of consumer choice”, Journal of Economic Behavior and Organization, Vol. 1 No. 1, pp. 39-60.

Thaler, R.H. (2016), “Behavioral economics: past, present, and future”, American Economic Review, Vol. 106 No. 7, pp. 1577-1600.

The Economist (2005), “Life after Lee”, The Economist, pp. 87-88.

Tiger, L. (1979), Optimism: The Biology of Hope, Simon and Schuster, New York, NY.

Tversky, A. and Kahneman, D. (1974), “Judgment under uncertainty: heuristics and biases”, Science, Vol. 185 No. 4157, pp. 1124-1131.

Viguerie, P., Smit, S. and Baghai, M. (2008), The Granularity of Growth: How to Identify the Sources of Growth and Drive Enduring Company Performance, John Wiley & Sons, Hoboken, NJ.

Vinogradova, V. (2021), “Capital markets and performance of strategic corporate M&A – an investigation of value drivers”, European Journal of Management and Business Economics, Vol. 30 No. 3, pp. 357-385.

Vuori, T.O. and Huy, Q.N. (2016), “Distributed attention and shared emotions in the innovation process: how Nokia lost the smartphone battle”, Administrative Science Quarterly, Vol. 61 No. 1, pp. 9-51.

Weber, Y. (2018), “Managerial biases in mergers and acquisitions”, in Vrontis, D., Weber, Y., Thrassou, A., Riad Shams, S.M. and Tsoukatos, E. (Eds), Innovation and Capacity Building, Palgrave Macmillan, Cham, pp. 255-269.

Wells, J.R. and Danskin, G. (2014), “Procter & Gamble, 2015”, Harvard Business School Case 715-429.

Yoo, O.S. and McCardle, K. (2020), “The valuator’s curse: decision analysis of overvaluation and disappointment in acquisition”, Decision Analysis, Vol. 17 No. 4, pp. 299-313.

Zenko, M. (2015a), “Inside the CIA Red Cell: how an experimental unit transformed the intelligence community”, Foreign Policy, available at: http://foreignpolicy.com/2015/10/30/inside-the-cia-red-cell-micah-zenko-red-team-intelligence/

Zenko, M. (2015b), Red Team: How to Succeed by Thinking Like the Enemy, Basic Books, New York, NY.

Zhang, L. and Gronvall, G.K. (2020), “Red teaming the biological sciences for deliberate threats”, Terrorism and Political Violence, Vol. 32 No. 6, pp. 1225-1244.

Further reading

Invernizzi, A.C., Menozzi, A., Passarani, D.A., Patton, D. and Viglia, G. (2017), “Entrepreneurial overconfidence and its impact upon performance”, International Small Business Journal: Researching Entrepreneurship, Vol. 35 No. 6, pp. 709-728.

Corresponding author

Wayne Borchardt can be contacted at: wgb@tdag.biz
