Reigniting the analytics-to-decisions debate
SUMMARY: Analytics tools get shinier and fancier, but the classic problem remains: are we making better decisions? This is not a new debate. But in today’s supposedly data-driven organizations, with hefty analytics investments in play, the issue takes on a new urgency.
In 2007, Tom Davenport’s book “Competing on Analytics” and the book James Taylor and I wrote, “Smart Enough Systems,” both described the value of analytical decision-making, but from different points of view. Davenport’s description of “data-driven” or “analytics-driven” organizations was a more comprehensive look at how analytics can power an organization through all sorts of issues and decisions.
James and I looked at something not terribly different from today’s RPA (Robotic Process Automation): combining predictive analytics and business rules to handle high-volume, low-risk decisions.
Eleven years ago I wrote in InformationWeek:
The whole field of decision-making and how, or if, we can assist it with analytics is pretty murky. This is a contentious point as promoters of analytics still insist on suggesting they are helping people by “getting the right information to the right person at the right time.” It’s undeniable that getting this timely information to the right person is useful, but there is no evidence that it is even slightly sufficient. People bring all sorts of experiences, prejudices, flawed reasoning, and emotional aspects to their decisions.
I listened to a podcast with Davenport the other day, and he still adheres to the notion that the “analytics-driven” organization starts with senior management, preferably the CEO. He cited the example of the CEO of Harrah’s, who was a “quant” and drove the company in that direction; when he left, the organization began to revert to its old ways of doing things.
Davenport believes that some leaders of organizations may understand some of the technology behind analytics. That’s a pretty squishy assertion, especially when we’re looking to those leaders to set the table for analytics. And it’s getting more complicated every day. I learned in business school (many years ago) that the CEO (or whatever we called them then) was not concerned with analytics and models; that’s what their direct reports did. The CEO dealt with strategy and the non-quantitative problems that couldn’t be solved elsewhere. It would be interesting to see some objective research on this.
In my experience, I see the analytics-to-decisions continuum something like this:
Step 1: Everything the technology industry provides, from data to computing to domain knowledge to models and algorithms, etc.
Step 2: Understanding what all of this actually tells you.
Step 3: How to implement and execute the decisions based on Steps 1 & 2.
Steps 2 & 3 are the ones that matter. Step 1 is table stakes.
This sequential approach is simplistic, of course: decisions are often bundled together, derivative of other decisions, or canonical. The point of the three steps is that technology is just a tool; understanding and execution are soft skills that are typically taken for granted. In the podcast, Tom described four levels of analytics, but the gist was the same: it’s up to the CEO to drive analytics.
It’s not always easy to determine whether a decision was even made or, if it was, by whom and when.
Organizations check their progress with reports. It turns out that a structured reporting system can actually decrease the accuracy and especially the completeness of reports, the opposite of what we expected. Standardization seems simple, but it isn’t. The theory was that structured reporting would make reports intrinsically better: we would all use the same ideas, recorded in the same verbiage. But semantic ambiguity, even between people who work together, adds a layer of difficulty. Standardized terms force report writers to squeeze their own usage and meaning into a rigid framework that doesn’t always align with theirs.
It isn’t fashionable to admit anymore, but if senior executives let their guard down for a minute, many of them would say they don’t trust analytics and, in the end, make their decisions on their gut. That leaves out one category, though: James Taylor and I wrote that many decisions are simply avoided or hidden because people really don’t know what to rely on. But that’s a different topic.
Let’s get back to the subject of the article. It is no coincidence that there are so many phrases that depict thinking guts: gut reaction, gut feeling, gut instinct and, related but even more evocative, butterflies in the stomach. As it turns out, there are good reasons for these terms, because your gut actually can think, in its own fashion. The gut was your original brain. Back when our ancestors slithered across rocks, before there was a brain in the head, the neurotransmitters we know, like dopamine, serotonin and norepinephrine, and hormones like adrenaline and insulin, ran The Project from the midsection.
Your gut is lined with neural cells that even to this day act as a primitive second brain, especially in their role of managing your immune system, which can be considered more or less sentient. So let’s not be so down on that gut thing.
In 1996, I was on a Guru Panel at one of the first TDWI conferences (in fact, it may have been the first). To my right was Bill Inmon, and on my left, Herb Edelstein. Alan Paller was moderating, and a woman in the back asked this question: “Why do we need all of this data? Don’t the men (yes, she said men) who run these companies just use their gut?”
Alan handed the microphone to Herb. Herb gave a heavy sigh, and here is what he said:
Let me tell you how to run a $100 million company. First, start with a billion-dollar company, then run it on intuition.