In previous posts I’ve shared resources to help you educate your leaders in data literacy, but beyond knowledge, this also requires thinking differently. We need to get thinking about our thinking.
Helping leaders get their heads around thinking styles can be a challenge. Many of us spend most of our time in unconscious, automatic thinking patterns. We respond to similar problems or stimuli with repetitive thoughts or mental models. Being data-led also requires this aspect of cultural and leadership change.
To take on this topic, I am delighted to welcome back guest blogger Harry Powell. Readers may recall that Powell is director of data and analytics at Jaguar Land Rover. He has shared with us before on how leaders should behave when being presented to, and on the need for fewer predictive models.
This article is in a slightly different style from our normal guest blog posts. Below I have collected what were previously five separate micro-posts from Harry. To help you achieve this aspect of data literacy, I think it's helpful to see the diversity of thinking styles on offer.
As you read through them, I challenge you to consider which thinking style you could practice. Bring to mind a current problem you are working on. Recall how you are approaching thinking through that problem. What have you tried already? Now imagine, for each of the thinking styles that Powell outlines, approaching it that way. Could it work? What fresh insight might it offer?
Over to Harry now, to challenge us with five different thinking approaches. Helpfully, for each he shares a real-life case study of how his team applied that thinking style. Here's Harry...
Thinking style 1: Using existing data sources in unusual ways
Sometimes you can learn a lot from data that aren’t meant to have anything to do with the problem space in which you’re working. In fact, we have made good use of data that are just plain wrong.
This is because while the function that records the data may care about the numbers themselves, you might only need to care about a pattern in the data, or about how the pattern co-varies with some other factor. The fact that it is inaccurate for its original purpose may not matter.
For example, we wanted to show how delaying raising issues during the new-vehicle engineering program was causing quality and production problems. To do this we had to normalize the issues data by how much work was being put into the program. But the labor data were notoriously bad—most people didn’t bother to fill in their time sheets. We were told that there was no point in even looking at it.
It turns out that although the labor data were completely wrong, they were wrong in an independent and unbiased way. So, they were fine to use for our purpose, and we were able to show that engineers were raising issues too late and that this drove quality problems.
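To see why unbiased errors can wash out, here is a toy simulation (invented numbers, not JLR's data). The issue rate per unit of engineering effort drifts upward over time, and even when we normalize by time-sheet data corrupted with heavy, independent, unbiased noise, the trend survives:

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = np.arange(200)

# Invented programme data: the rate of issues raised per unit of
# engineering effort drifts upward over time (issues raised late).
true_rate = 0.2 + 0.002 * weeks
effort = rng.uniform(200, 400, weeks.size)
issues = true_rate * effort

# Time sheets are badly filled in: each week's reported effort carries
# independent, unbiased multiplicative noise.
reported = effort * rng.lognormal(mean=0.0, sigma=0.3, size=weeks.size)

# Normalizing by the "wrong" labor data still reveals the trend.
noisy_rate = issues / reported
trend = np.corrcoef(weeks, noisy_rate)[0, 1]
print(f"correlation of noisy issue rate with time: {trend:.2f}")
```

The key condition is the one Powell names: the errors must be independent and unbiased. Systematic errors (say, one department always under-reporting) would distort the pattern, not just blur it.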
Thinking style 2: Working back to front
Sometimes a problem can only be solved by working back to front, starting at the end and going back to the beginning.
When we wanted to calculate the daily sales in each country, we found that, during the month, each market used different rules to determine when a "sale" was recorded (although they did reconcile at the end of the month). There was no easy way to contact the markets to ask them about their logic. So we had to start with the historic output data and infer the sales logic from those. We were then able to apply that logic to the intra-month vehicle sales information and calculate daily sales easily for the first time.
Even if it's not 100-percent right, this approach gets you to within a couple of percentage points, and that is a lot better than trading blind. It certainly helped us when lockdown hit and we needed to reduce inventory fast.
Thinking style 3: Super-simplification
It’s always tempting to think that complex problems can best be solved by complicated solutions, but you often find that super-simplification gets you even better results. By this, I don’t just mean building parsimonious models by eliminating insignificant variables; you should always do this. I mean radically changing the approach, answering the question in a completely different, perhaps even naive, way.
We needed to build a model to select a set of cars to build and sell. Now there is a very complex logic that determines what cars can be built, and a similarly involved logic of what cars will sell in what markets. People had tried to code it up any number of times and tied themselves in knots. Then one of the team suggested just sampling from the vehicles that we had built during the last two years. It is a shockingly simple approach, and although it doesn’t sample the full distribution, it gets you 99 percent of the way there in 1 percent of the time. And then you can use the time you save overcomplicating something else!
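As a sketch of the idea only, with invented build records and a hypothetical `sample_buildable` helper: every car in the history was actually built, so by construction each one satisfies the build rules, and sampling history sidesteps the rule engine entirely.

```python
import random

# Invented build records (model, engine, trim); in practice this would
# be two years of actual build history.
history = [
    ("suv", "diesel", "base"),
    ("suv", "petrol", "sport"),
    ("saloon", "petrol", "base"),
    ("suv", "diesel", "luxury"),
    ("saloon", "hybrid", "sport"),
]

def sample_buildable(n, seed=0):
    """Sample n car configurations known to be buildable: no constraint
    solver needed, because each record was actually built."""
    rng = random.Random(seed)
    return [rng.choice(history) for _ in range(n)]

plan = sample_buildable(3)
print(plan)
```

The trade-off Powell notes is visible here: you can only ever sample combinations that have been built before, so rare but valid configurations are missed, in exchange for a huge saving in complexity.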
Thinking style 4: Abstract problem solving
You can create a lot of value by thinking about your problem in abstract terms. Computer scientists are often better at this than data scientists. The first question they ask is, “What data structure is this?” or, “Is this process equivalent to an algorithm I already understand?” or, “What design pattern should I use to represent this?” Abstracting the problem often enables you to see common patterns, apply well-understood analytical frameworks, and simplify radically, which gives profound and generally applicable results.
When my team at Barclays were asked to build a general engine to extract insights from customer transactions, we were able to show that there were five common archetypes of insight, that relevance could be thought of as a form of local ranking, and that a couple of mathematical abstractions (called "monoids" and "monads," very different things even if spelled similarly) could be used to simplify and streamline distributed calculations. That left us with a very generalized insight engine, one which ran very fast and could be configured to address lots of different types of problems.
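For readers curious about the monoid point: a monoid is just a set with an associative combine operation and an identity element, and associativity is what lets a distributed engine split work across machines and merge partial results in any grouping. A minimal sketch in Python, with invented transaction amounts (not Barclays' engine):

```python
from functools import reduce

# A monoid for computing an average over transactions: (count, total)
# pairs, with pairwise addition as the combine and (0, 0.0) as identity.
identity = (0, 0.0)

def combine(a, b):
    # Associative: combine(combine(x, y), z) == combine(x, combine(y, z))
    return (a[0] + b[0], a[1] + b[1])

def lift(amount):
    # Embed a single transaction into the monoid.
    return (1, amount)

transactions = [12.5, 3.0, 7.25, 40.0, 9.5]

# Pretend each partition lives on a different machine...
part1 = reduce(combine, map(lift, transactions[:2]), identity)
part2 = reduce(combine, map(lift, transactions[2:]), identity)

# ...then merge the partials; associativity guarantees the same answer
# as one sequential pass over all the data.
count, total = combine(part1, part2)
print(count, total / count)  # 5 14.45
```

Averages are a classic example because a bare "mean" is not associative, but the (count, total) pair is, which is exactly the kind of restructuring that makes a calculation distribute cleanly.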
Abstracting a problem back to its bare essentials is hard. But it is probably the most valuable thing that a problem solver can do.
Thinking style 5: Don’t be constrained by other people’s limitations
Perhaps the most obvious tool in the original thinker’s toolbox is ignoring people who tell you what can and can’t be done. At a university I once asked a math professor for a hint as to how to solve a problem set. He said, “Here’s my hint: It can be done.” Things are much easier when you believe they can be done.
A big win for the team at JLR was when we found a way of working out what parts go into a car. You'd have thought that this would be straightforward, but it isn't. We could always do this at the point the car was actually built (obvious), but doing it in advance wasn't possible, because the configuration is resolved in a proprietary system whose rule set we can't access. Instead, we had to parse a 56 million-line XML file in which the rules are hidden. We were told there was no point trying because it couldn't be done.
Turns out it can (thanks to my brilliant team). We can now simulate millions of cars a day, when the base system could do only a few thousand for use by the factory alone. This allows us to optimize and simulate all sorts of scenarios around how to build cars better.
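As an illustration only (the tag names and structure below are assumptions, not JLR's actual schema), a rules file far too large to load as a tree can still be processed with a streaming parser such as Python's `xml.etree.ElementTree.iterparse`, which yields elements one at a time and lets you discard them as you go:

```python
import xml.etree.ElementTree as ET

def extract_rules(path):
    """Stream a large XML rules file, collecting (part, condition) pairs
    from hypothetical <rule part="..." condition="..."/> elements."""
    rules = []
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "rule":
            rules.append((elem.get("part"), elem.get("condition")))
        elem.clear()  # free memory so the file never loads whole
    return rules
```

The memory footprint stays roughly constant regardless of file size, which is what makes a multi-million-line file tractable at all.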
It is an error to assume that, just because you haven't observed something in your sample, it doesn't exist in the population. There's a risk in pursuing something that "can't be done," but if the prize is big enough, it's worth having a go. Don't expect people to believe the truth just because you prove them wrong. It takes a while for the impossible to sink in, so stick with it.
Hat tip to James Watkinson, Martin Brett, Matt Collins, Stephen Halil, and all the other great people who built the “impossible.”
So, how will you try thinking differently?
What ideas have these posts given you? Which thinking style appealed? Which case study reminded you of a problem that you are facing? I’d love to hear your feedback on that.
Of all the examples of thinking differently that Powell shares, my favorite is super-simplification. Too often I or those I’ve worked with are just too close to a problem to step back and see this. It is too tempting to add more complexity to our thinking or code as we seek a solution. Yet some of the best solutions I’ve seen in practice are simple ones. Those that came from stepping back, generalizing, and accepting good enough.
I hope this article inspires you to think differently and to challenge your leaders to do the same.
First published July 28, 2021, on the Customer Insight Leader blog.