Want better decision-making? You need role-based analytics — but providers are coming up short
SUMMARY: In part one of our decision-making series, I made the case for distributed BI across the organization. But that’s not enough. Getting better information does not necessarily yield better decisions.
Decision-making is problem-solving. It is not, strictly speaking, a business process; it is primarily a collaborative and iterative effort, one that requires treating the problem as a team phenomenon.
This is especially true where decision-making involves the analysis of data. Numeracy, a facility for working with numbers and with the programs that manipulate them, exists at varying levels in an organization.
Domain expertise similarly exists at multiple levels. Most interesting problems require contributions and input from more than one domain (for more on that, see part one, Better decision-making requires better BI tools — and less fear-mongering about Shadow IT).
Pricing, for example, is a joint exercise of marketing, sales, engineering, production, finance and overall strategy. If there are partners involved, their input is needed as well.
The killers of speed are handoffs, uncertainty and lack of consensus. In today’s world, an assembly line process of incremental analysis and input cannot provide the throughput to be competitive. Team speed requires that organizations break down the barriers between functions and enable information to be repurposed for multiple uses and users. Engineers want to make financially informed technical decisions, and financial analysts want to make technically informed economic decisions.
That requires analytical software and an organizational approach designed for collaboration between people of different backgrounds and abilities.
All participants need to see the answer and the path to the solution in the context of their particular roles. Most analytical tools in the market cannot support this kind of problem-solving. The urgency, complexity, and volume of data needed overwhelm them, but more importantly, they cannot provide the necessary collaborative and iterative environment. Useful, interactive and shareable analytics can, with some management assistance, directly affect decision-making cycle times.
When analysis can be shared, especially through software agents that allow others to view and interact with a stream of analysis instead of a static report or spreadsheet, time-eating meetings and conferences can be shortened or eliminated. Questions and doubts can be resolved without the latency of scheduling meetings. Collaborative software can even eliminate some of the presentation time in meetings. Everyone can satisfy themselves beforehand by evaluating the analysis in context, not just poring over results and summarizations.
Decision-making is iterative. Problems or opportunities that require decisions often aren’t resolved entirely but return, often slightly reframed. Karl Popper taught that in all matters of knowledge, the truth cannot be verified by testing. It can only be falsified. As a result, “science,” which we can broadly interpret to include the subject of organizational decision-making, is an evolutionary process without a distinct endpoint. He uses the simple model below:
PS(1) -> TT(1) -> EE(1) -> PS(2)
Popper’s premise was that ideas pass through a constant cycle of manipulations that yields better-fitting solutions, but not necessarily final ones. The initial problem specification PS(1) yields some tentative theories TT(1); error elimination EE(1) then produces a reformulated problem specification PS(2), and the process repeats. The TT and EE steps are collaborative.
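To make the cycle concrete, here is a minimal, runnable Python sketch of the PS(n) -> TT(n) -> EE(n) -> PS(n+1) loop on a toy problem: finding a threshold that separates failing cases from passing ones. Everything in it, the data, the function names and the stopping rule, is illustrative; it models Popper’s cycle, not any particular analytical tool.

```python
# A toy model of Popper's PS(n) -> TT(n) -> EE(n) -> PS(n+1) cycle.
# The task: find a threshold separating "fail" from "pass" cases.
# All names and data here are illustrative, not from any real system.

cases = [(1, False), (3, False), (5, True), (7, True), (9, True)]

def tentative_theories(spec):
    """TT(n): propose candidate thresholds within the current problem spec."""
    lo, hi = spec
    return [lo + k * (hi - lo) / 4 for k in (1, 2, 3)]

def eliminate_errors(theories):
    """EE(n): try to falsify each theory against the evidence; keep survivors."""
    def refuted(t):
        return any((value >= t) != passed for value, passed in cases)
    return [t for t in theories if not refuted(t)]

spec = (0.0, 10.0)  # PS(1): the threshold lies somewhere in [0, 10]
for n in range(1, 5):
    survivors = eliminate_errors(tentative_theories(spec))
    if survivors:  # PS(n+1): reframe the problem around what survived
        spec = (min(survivors) - 0.5, max(survivors) + 0.5)
    print(f"PS({n + 1}) = {spec}, surviving theories: {survivors}")
```

Each pass narrows the problem specification without ever “verifying” a final answer, which is the point: the loop converges on better-fitting theories, not on settled truth.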
There is a current in computing based on the economics of nearly unlimited computational resources. From this, many are seeing the “end of science,” meaning the truth is in the data, and the scientific method is dead. Previously, a scientist would observe certain phenomena, come up with a theory and test it. Here is a counterexample.
Using algorithms from topology (my wasted youth), investigators can apply TDA (Topological Data Analysis) to investigate the SHAPE of very complex, very high-volume, very high-dimensional data (thousands of variables). Topology tools deform the data in various ways to reveal its true nature and determine what’s going on. Traditional quantitative methods can only sample it or reduce the variables using techniques like Principal Component Analysis (in effect deciding that certain variables don’t seem very important and discarding them).
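To make the contrast concrete, here is a minimal sketch, assuming the numpy, scikit-learn and ripser packages, that runs both approaches on the same synthetic point cloud: PCA projects the data onto a few directions of maximum variance, while persistent homology (a workhorse of TDA) summarizes the data’s shape directly and detects the loop hidden in fifty dimensions.

```python
# A minimal sketch contrasting PCA with topological data analysis (TDA).
# Assumes numpy, scikit-learn and ripser are installed; the data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from ripser import ripser

rng = np.random.default_rng(0)

# A noisy circle embedded in 50 dimensions: high-dimensional data with a shape.
theta = rng.uniform(0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
X = np.hstack([circle, np.zeros((200, 48))])
X += 0.05 * rng.standard_normal(X.shape)

# Variable reduction: keep only the two directions of greatest variance.
X2 = PCA(n_components=2).fit_transform(X)
print("PCA output:", X2.shape)  # 2 coordinates per point; shape not described

# TDA: persistent homology summarizes the shape of the cloud itself.
# dgms[1] lists one-dimensional features (loops); a long-lived bar is a real loop.
dgms = ripser(X)["dgms"]
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]
print("Most persistent loop lifetime:", lifetimes.max())
```

On data like this, one feature in the persistence diagram lives far longer than the noise, which is the algebraic signature of the circle; no variance-based projection states that fact directly.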
An organization did a retrospective analysis of every available trial and study on spinal cord injuries. What they found with TDA was that one and only one variable had a measurable effect on outcomes in patients presenting with Spinal Cord Injury (SCI): maintaining normal blood pressure as soon as they hit the ambulance. No one had seen, or even contemplated, this before.
Karl Popper was one of the most important and controversial philosophers of science of the 20th century. In “All Life is Problem Solving,” Popper claimed that “Science begins with problems. It attempts to solve them through bold, inventive theories. The great majority of theories are false and/or untestable. Valuable, testable theories will search for errors. We try to find errors and eliminate them. This is science. It consists of wild, often irresponsible ideas that it places under the strict control of error correction.”
In other words, hypothesis precedes data. We decide what we want to test and assemble the data to try it. This is the polar opposite of the data science emerging from big data.
So here’s my premise. Is Karl Popper over? Has computing killed the scientific method?
The oversimplified model prevalent in the Business Intelligence industry is that getting better information will yield better decisions. Popper’s simple formulation highlights that this is, basically, a hallucination: every step, from formulating the problem to posing tentative theories to eliminating errors in assumptions to reformulating the problem specification, requires sharing information and ideas, revision and testing. One-way report writers and dashboards cannot provide this needed functionality. Alternatively, building a one-off solution to solve a single problem, typically with spreadsheets, incurs a recurring cost each time the problem comes around.
My take
A confession: a few years ago, I was engaged to solve a supply chain problem for a company that manufactured their products in Asia and shipped them via container to the US. The main warehouse was near Seattle, and there were satellite warehouses across the US. My client was a Senior Vice President of Logistics with a nasty problem. Clients were unhappy because they were frequently out of stock. The existing solution was called (incorrectly) the On the Water Report.
The report was a three-inch-thick green bar report detailing all products ready for shipment at the plants in Thailand and Malaysia, products on ships in containers (hence, on the water) and inventory at the warehouses. My client would get the report once per week, scan the entire thing, combine it in his head with what he knew about orders, and highlight every instance where he felt a problem could occur. This took almost a full day of his time. When he was finished, the report would go to an analyst who would build an “issues” spreadsheet, and from there, various people in the organization were alerted to potential problems.
The keyword here is “alerted.” No solution was devised. The only response was damage control.
In my naïveté, I thought I would design a system to skip the green bar by eliciting his explicit and tacit knowledge and automatically generating the spreadsheet, saving him a day a week and another day of work for an analyst. The project had a positive ROI and excellent corollary benefits, such as integrated data for many other uses.
When I presented the solution to him, he responded, “Neil, you don’t get it, do you?” I have to admit that this wasn’t the first time in my consulting career I heard this. I was about to get a lesson about how things work (For me, this is the best part of consulting, learning from experts how things work and what is important).
He said, “You can save a day a week of my time and a day a week of an analyst’s time, but that isn’t going to mean a damned thing in the greater scheme of things. Here is where you can do some good. Save this company the expense of sending a helicopter to a ship at sea to break open a container to satisfy a major client. Save our sales force the time they spend on the phone apologizing for missed shipments and for putting clients on allocation because we can’t get the products to the right place at the right time, and repurpose that time getting them in front of customers in a good mood and selling them things.”
That’s what we set out to do, to build an optimizing system linking sales forecasts, contract compliance, manufacturing and transportation. In uncharacteristically candid disclosure for a consultant, I regret to say the project wasn’t a success. Senior management got involved in a scandal, business deteriorated, and a very injured company was sold and largely disappeared. But the lesson for me was clear.
The moral of this story is that informing people with analytics isn’t worth a bucket of spit (as they say in Texas) if you can’t take it all the way to presenting a solution. Making things go faster or “saving time” for professional staff is not a very compelling proposition (I’m alluding to naïve RPA projects). Changing a process to provide better customer service, making an entire sales force more productive, or fine-tuning manufacturing forecasts: that is compelling.
This article was originally published on Diginomica.com.