In “What Leaders Get Wrong About Data-Driven Decisions” in the current issue of MIT Sloan Management Review, Bart de Langhe and Stefano Puntoni posit (provocatively) that leaders and leadership teams need to reshape the paradigm around decision-making and data. “If you were to ask any major CEO about good management practices today,” they say, “data-driven decision-making would invariably come up.” Citing an Accenture white paper, “Closing the Data Value Gap” (2019), they write that “[c]ompanies have more data than ever, but many executives say their data analytics initiatives do not provide actionable insights and produce disappointing results overall.”
Why might that be?
As the authors note, “[i]n practice, making decisions with data often comes down to finding a purpose for the data at hand.” This is backward thinking, they argue, and it is compelling when one sees it reframed. The solution is straightforward, they suggest. “Instead of finding a purpose for data, find data for a purpose. We call this approach decision-driven analytics.”
They point out two fault lines in the ‘data-driven’ approach:
(1) ‘Data-driven’ often means answering the wrong question
The data-driven methodology relies on available data. For instance, a school may use sophisticated predictive algorithms that take into account how long a family has been with the school, how many children have enrolled over a period of time, their residential location, donations to the school, website usage, and behavior on the school’s social media channels in order to quantify the likelihood that a family may withdraw their students from the school. The school, therefore, makes some sort of ‘special offer’ to the family in order to retain them as a paying family. The authors would argue that this approach to retention is flawed. They would have us look at the central question the school is addressing with the aforementioned analysis: how likely is a family to leave? While valuable information, they say, “it does not address the question that is relevant here: ‘what is the effect of a [special offer] on a [family’s] likelihood to leave?'” As they point out, “this question cannot be answered based on data the [school] has already gathered and requires further data collection and analysis.” So, this entire data-driven exercise is anchored on available data, which, in turn, “often leads decision makers to focus on the wrong question.” By contrast, decision-driven data analytics “starts from a proper definition of the decision that needs to be made and the data that is needed to make that decision.”
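To make the ‘available data’ point concrete, here is a minimal sketch of the kind of churn-risk score such a predictive model produces. The feature names and weights are invented for illustration (a real school would fit them from historical withdrawal data). Notice what the model answers: how likely a family is to leave, not what effect a ‘special offer’ would have on that likelihood.

```python
import math

# Hypothetical feature weights for an illustrative churn-risk score.
# These names and values are invented for the sketch; a real model
# would be fit from historical withdrawal data.
WEIGHTS = {
    "years_enrolled": -0.4,       # longer tenure -> lower risk
    "children_enrolled": -0.3,    # more siblings enrolled -> lower risk
    "miles_from_campus": 0.05,    # longer commute -> higher risk
    "donations_per_year": -0.002, # giving history -> lower risk
    "site_visits_per_month": -0.1,
}
BIAS = 1.0

def churn_probability(family: dict) -> float:
    """Logistic score: estimated probability that a family withdraws."""
    z = BIAS + sum(WEIGHTS[k] * family.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

family = {
    "years_enrolled": 6,
    "children_enrolled": 2,
    "miles_from_campus": 3,
    "donations_per_year": 250,
    "site_visits_per_month": 4,
}
print(round(churn_probability(family), 3))  # roughly 0.06 for this family
```

However precise that number looks, it only predicts departure from data already on hand; it says nothing about whether an intervention would change the outcome.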
(2) ‘Data-driven’ often means reinforcing preexisting beliefs
I’ll take great liberties with the authors’ text, and turn it in the direction of education in order to provide meaningful context for our purposes here. I’ll use most of their language; in some ways, the following scenario is a direct quote…almost. Imagine a school; let’s call them Independent School Academy (ISA). ISA wants to know how its Twitter advertising of school programs and open houses is impacting admissions. According to Twitter (“About Measuring Sales Impact”), it offers a three-step attribution process to evaluate the admissions impact of advertising on its platform. First, a data broker shares identifying information from ISA’s existing families (think: browser cookies, email addresses, and phone numbers) with Twitter. Then, Twitter searches for those customers in its records and, if there is a match, adds information about the families’ activities on the platform (e.g., whether the family members have viewed or clicked on ISA’s tweets). Finally, analysts compare enrollment decisions of families who engaged with ISA’s brand on Twitter with enrollment decisions of families who did not.
As the authors point out, “[t]his approach invariably reveals stark differences for a brand: [families] who have seen and engaged with [ISA] on Twitter visit [campus] more often, and they spend more [time] on each visit. Taking the view that this data suggests social media advertising has a large impact on [admissions] supports Twitter’s business model. It’s also consistent with [ISA’s] belief that social media advertising works and that its effectiveness can be easily measured.”
This is flawed data-driven decision-making, the authors say. “Comparing [families] who have seen a brand’s [admissions] content with [families] who have not seen the content is like comparing apples and oranges. These [families] differ in many other ways. [ISA’s] most loyal [families] are more likely to engage with the [ISA] brand on Twitter, and they are also more likely to [enroll at ISA, if they have other children]. They don’t [enroll] because they saw the [ISA] brand appear in their Twitter feed. These loyal [families enroll] because they like the product, and because of this they also follow the brand on social media–not the other way around. [emphasis mine] Twitter’s approach dramatically exaggerates the impact of advertising on [admissions].” As the authors state clearly, “the risk is that decision makers will focus on data that is consistent with their preexisting beliefs, without questioning its validity.”
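The selection-bias critique is easy to demonstrate with a toy simulation. In the sketch below (all numbers invented), a hidden ‘loyalty’ trait drives both Twitter engagement and re-enrollment, and the advertising itself has zero true effect. A naive comparison of engaged versus non-engaged families still shows a large gap.

```python
import random

random.seed(42)

def simulate(n=10_000):
    """Simulate the confound: latent loyalty drives BOTH engagement
    and re-enrollment; the ad itself has zero causal effect."""
    engaged_enroll = engaged_n = other_enroll = other_n = 0
    for _ in range(n):
        loyalty = random.random()             # hidden family trait
        engaged = random.random() < loyalty   # loyal families engage more
        enrolls = random.random() < loyalty   # loyal families re-enroll more
        if engaged:
            engaged_n += 1
            engaged_enroll += enrolls
        else:
            other_n += 1
            other_enroll += enrolls
    return engaged_enroll / engaged_n, other_enroll / other_n

eng, non = simulate()
print(f"re-enrollment | engaged on Twitter: {eng:.2f}")
print(f"re-enrollment | not engaged:        {non:.2f}")
# The sizeable gap is pure selection, not advertising effect.
```

The engaged group re-enrolls at roughly double the rate of the non-engaged group, even though the ‘ads’ in this simulation do nothing at all, which is exactly the apples-and-oranges trap the authors describe.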
What to do?
In many ways, the answer to this question is not rocket science. A school needs to begin by identifying its key decisions, as well as the people who need to make those decisions, then identifying the data needed for the purpose of decision-making. One has to be careful here, though, as it would be easy to identify data that will support a decision that has already been taken (what is called preference-driven data analytics, which is analogous to confirmation bias). The key to decision-driven data analytics is in how leaders structure decision-making. The authors identify three steps, all of which align nicely with Roger Martin’s Playing to Win strategic choices framework.
Step 1: Identify alternative courses of action
The authors call this “thinking wide.” Leaders need to work with their teams to generate a wide set of alternative courses of action before narrowing it down. Let’s return to our fictitious Independent School Academy. If the school wanted to increase the likelihood that existing families would remain enrolled at the school instead of leaving for a competitor, the school could provide the sort of ‘special offer’ mentioned earlier, whether via social media or other marketing and admission channels. However, the school could also improve how it serves current families (offer more courses and/or extracurricular choices, or perhaps a free parent education series), or it might engage in a deep content strategy to demonstrate its expertise in specific domains, and so on. The danger, though, is in having too many alternatives. Start wide, then narrow those choices, using judgment (that wonderful human faculty). “By thinking ‘wide then narrow,’ decision makers increase the likelihood that the final consideration set includes high-quality and feasible courses of action.”
Step 2: Determine what data is needed to rank alternative courses of action
Put simply, decision-makers and those who interpret the data need to develop criteria to differentiate and rank the alternative courses of action identified in step 1. The goal of data insights is to enable the team to rank alternative courses of action with a high degree of objectivity. When we engage in decision-making by starting with the framing of the decision, as opposed to starting with what data we have, we identify the unknowns more quickly and can propose ways to judge the complexities and uncertainties by pursuing the data that will help us. Importantly, the authors underscore that decision-driven data analytics “is not about collecting as much data as possible. It’s crucial to consider the value of data. […] Oftentimes, data collected for the purpose of making a decision has more value than data that’s already available.” If the available data cannot resolve a key unknown, a school might need to run an A/B test; the test will provide valuable data for making the decision.
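What an A/B test delivers can be sketched with a standard two-proportion z-test (the counts below are invented for illustration): randomize families into an ‘offer’ arm and a control arm, then compare retention rates.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Normal-approximation test for a difference in two retention rates.
    Returns (rate difference, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical results: 'special offer' arm vs. control arm
diff, p = two_proportion_ztest(success_a=460, n_a=500, success_b=430, n_b=500)
print(f"retention lift: {diff:.3f}, p-value: {p:.4f}")
```

Because families were randomized, the two arms are comparable by construction, so any lift (here, a hypothetical six percentage points) can be attributed to the offer itself rather than to preexisting differences between families.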
Step 3: Select the best course of action
If the previous two steps have been followed rigorously, this step should be relatively easy. Data analytics should reveal the best course of action, within the larger framework of decision-making. ISA may have learned in an A/B test, for example, that the ‘special offer’ it was considering delivering through its social media channels might work for some families but backfire spectacularly with others, resulting in the school itself *causing* families to leave. The A/B test, then, is a perfect example of risk mitigation that occurs because a school has framed its decision effectively, and is gathering evidence from tests before acting.
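A segment-level read of the same test shows how an offer can help one group while backfiring with another. In the sketch below, the segment names and all counts are invented for illustration:

```python
# Hypothetical segment-level A/B results (all counts invented).
# segment: (offer_retained, offer_n, control_retained, control_n)
results = {
    "new families":    (180, 200, 160, 200),
    "legacy families": (150, 200, 185, 200),
}

for segment, (ro, no, rc, nc) in results.items():
    lift = ro / no - rc / nc          # retention lift vs. control
    verdict = "helps" if lift > 0 else "backfires"
    print(f"{segment}: lift {lift:+.2f} ({verdict})")
```

A pooled average would blur these two effects together; breaking the test out by segment is what lets a school act on the offer where it helps and withhold it where it hurts.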