It's becoming harder to predict what's important -- especially over the long term -- and machine learning is underperforming the hype around its ability to help us in this department. So how, then, do we make good decisions, and how do we help our clients do the same?
Can events be accurately described as historic at the time they are happening? Claims of this sort are in effect predictions about the evaluations of future historians; that is, that they will regard the events in question as significant. Here we provide empirical evidence in support of earlier philosophical arguments that such claims are likely to be spurious and that, conversely, many events that will one day be viewed as historic attract little attention at the time. We introduce a conceptual and methodological framework for applying machine learning prediction models to large corpora of digitized historical archives. We find that although such models can correctly identify some historically important documents, they tend to overpredict historical significance while also failing to identify many documents that will later be deemed important, where both types of error increase monotonically with the number of documents under consideration. On balance, we conclude that historical significance is extremely difficult to predict, consistent with other recent work on intrinsic limits to predictability in complex social systems. However, the results also indicate the feasibility of developing ‘artificial archivists’ to identify potentially historic documents in very large digital corpora.
The current state of the art in predicting what will turn out to be important -- even with the assistance of the heavily hyped category of tools known as machine learning -- is pretty unimpressive. I wonder what this implies for the question of how we help clients (and ourselves) make better decisions about where to invest, how exactly to invest, and how to think about this question from a strategic viewpoint.
First, it seems we've always known that predicting the future is difficult. So research proving that it's... difficult to predict the future... isn't all that surprising, except perhaps for the finding that current machine learning (ML) models don't help all that much.
Might this cultural understanding of the difficulty of predicting future events explain the popularity of benchmarking?
A measurement of the quality of an organization's policies, products, programs, strategies, etc., and their comparison with standard measurements, or similar measurements of its peers. The objectives of benchmarking are (1) to determine what and where improvements are called for, (2) to analyze how other organizations achieve their high performance levels, and (3) to use this information to improve performance.
Benchmarking is about figuring out what peers or competitors are doing, abstracting that into a set of measurements, and then trying to change what you/your client does to achieve similar or better results.
You've probably heard the saying: "you are the average of the 5 people you spend the most time with". This is us, as individuals, "benchmarking" the group of 5 people we spend the most time with and using that information to inform our own decisions -- with the result that we become more and more like that group of people.
Of course, you can only take this idea so far, but it does contain at least a kernel of truth. And it points to the limitation of benchmarking, which is more of a "let's make things suck less" than a "let's figure out how to be extraordinary" kind of tool. Benchmarking is focused on the near past rather than on near-future or far-future innovation. What other tools exist for improving business decision making? What tools might help us navigate at least the near future?
This will be an incomplete list. Highly incomplete, I imagine. But over time, it'll get less incomplete. :) Even in its incomplete state, it's useful.
Reduce uncertainty
The first set of decision-making tools you might reach for is designed to reduce uncertainty. "Reduce uncertainty" seems to be a very carefully chosen phrase from Douglas Hubbard.
If it were phrased as "increase certainty", it would capture the wrong idea (complete certainty is impossible). If it were phrased as "gather more data", it would focus on the wrong part of the process (the inputs, rather than the outcomes). So "reduce uncertainty" is exactly what we are doing when we:
Use focused, scrappy research methods to measure or understand things that could affect the decision.
Incorporate more context into our decisions. Context can mean increased breadth (more of the current picture) or increased historical depth.
Replace false certainty (generally stemming from cognitive biases) with evidence-based probabilistic understanding.
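If you like seeing this in code, here's a toy Monte Carlo sketch (all the numbers are hypothetical) of what "reduce uncertainty" means in practice: even a scrappy measurement -- asking ten recent clients whether they'd refer you -- collapses a wide range of uncertainty into a much narrower one, without ever reaching complete certainty.

```python
import random

random.seed(42)

def interval_width(samples, lo=0.05, hi=0.95):
    """Width of the central 90% interval of a list of sampled values."""
    s = sorted(samples)
    n = len(s)
    return s[int(hi * n)] - s[int(lo * n)]

N = 100_000

# Before measuring: we know almost nothing about, say, our referral rate.
# Model that total uncertainty as a uniform prior over 0..1.
prior = [random.random() for _ in range(N)]

# A scrappy measurement: ask 10 recent clients; 7 say they'd refer us.
# Under a uniform prior, the updated belief is a Beta(1+7, 1+3) distribution.
posterior = [random.betavariate(8, 4) for _ in range(N)]

print(f"90% interval width before measuring: {interval_width(prior):.2f}")
print(f"90% interval width after measuring:  {interval_width(posterior):.2f}")
```

Ten data points don't make you certain, but they shrink the 90% interval to roughly half its original width -- uncertainty reduced, not eliminated.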
I can't help but notice that -- seen through the lens of human emotion -- these uncertainty-reducing tools require us to be humble enough to let external facts at least co-exist with, and ideally override, our ego in the decision-making process.
Increase flexibility
The next category of decision-making tools focuses on reducing the scope of the decision. Instead of deciding everything about a large set of variables up front (waterfall-style decision making), you decide just a few things about a small set of variables. This is the familiar and (over?)hyped Lean/Agile approach.
I see Lean/Agile as a way of increasing flexibility, and moving decision-making from a purely abstract context to a more experiential/grounded context. It's a way of acknowledging the imperfection of any decision and committing to continuous improvement, short iterations, and the flexibility that comes with humility.
I don't think Agile/Lean is a complete decision-making toolkit. In fact, it's almost more of a cultural context within which other decision-making tools can be deployed.
Suspend physics
If you could completely control gravity in the area around you for up to 10 seconds at a time, you would pretty much never worry about falling again. In fact, you might look forward to falling, just so you could alter the rules of gravity in the second or two before you land, turning a vicious fall into a soft landing. You would be changing the context in which you fall, making it so that whatever caused you to fall has few or no negative consequences.
Big companies do this as often as they can. They attempt to control more variables in order to reduce risk, or simply to make it so that any decision is a good one. Another way to see this is as externalizing the effects of a bad decision: create a monopoly, and customers bear the cost of the company's mediocre decision making and execution.
We'd all be using Google Plus as our primary social network (at least for a few years while the competitive set re-organizes) if Google had been able to acquire and shut down Twitter and Facebook. G+ probably would have sucked just as bad as it ever did, but we'd have no real alternatives, so Google would have externalized the consequences of their bad social network design decisions onto customers.
Among smaller businesses, we "suspend physics" around our decisions in several ways:
Specializing, so we make the pond smaller rather than making the fish bigger. This gives us more latitude to make the occasional bad decision with fewer negative consequences. Even if they have bad breath or hired a rude front office person, you'll go to see the specialist physician you need help from if they're the only one in town.
Humanizing, so our relationships with our clients are multi-dimensional and therefore less fragile. Human relationships that have a money/business component as just one among several components are able to absorb more imperfect decisions than relationships that only have a money/business component.
Applied portfolio theory
Applying the idea of modern portfolio theory to decision making is another useful tool. Here, we model and implement decisions as relatively small bets within a larger portfolio that has an intentional design. We shift the focus to the portfolio's performance, and manage the items within it as a cohesive whole rather than an unconnected set of projects or decisions. We incorporate an understanding of risk -- and of our own risk profile -- into our decision making, which means we forsake benchmarks and best practices, because those generally don't account for the nuance of varying risk profiles.
I like to apply portfolio thinking in several areas:
My choice of clients: I'll choose to work with some clients based on the 10-years-from-now potential that I see in them, even if the compensation-right-now is not super high for me. I'm investing more in these clients than I might in others, but the decision to do so is made within the context of a larger portfolio of client investments I'm making.
Where to spend time: I see the financial performance of my business as a lagging indicator of the value of the expertise I've invested in creating over the preceding years. So the decision about where I spend time inquiring, learning, researching, and synthesizing what I learn now is the biggest influence2 on how much money I'll make 3, 5, 10 or more years from now. I can treat this as a waterfall-style decision, in which case I'm making one big bet on a large set of variables, or... I can apply Lean/Agile thinking and portfolio thinking and treat this as a group of smaller bets on a smaller set of variables.
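A toy simulation (all numbers hypothetical) shows why the small-bets framing matters: splitting the same total investment across ten independent small bets leaves the expected return unchanged, but makes a total wipeout dramatically less likely than staking everything on one big bet.

```python
import random

random.seed(7)

def simulate(trials, bets, win_prob=0.3, payoff=4.0):
    """Split a capital of 1.0 across `bets` equal wagers. Each wager
    independently returns `payoff` times its stake with probability
    `win_prob`, else zero. Returns (mean return, total-loss rate)."""
    ruined = 0
    total = 0.0
    for _ in range(trials):
        stake = 1.0 / bets
        result = sum(stake * payoff
                     for _ in range(bets)
                     if random.random() < win_prob)
        total += result
        if result == 0.0:
            ruined += 1
    return total / trials, ruined / trials

one_big = simulate(50_000, bets=1)
many_small = simulate(50_000, bets=10)

print(f"one big bet:    mean return {one_big[0]:.2f}, "
      f"total-loss rate {one_big[1]:.0%}")
print(f"ten small bets: mean return {many_small[0]:.2f}, "
      f"total-loss rate {many_small[1]:.0%}")
```

Both strategies have the same expected return (0.3 × 4 = 1.2× your capital), but a single bet loses everything about 70% of the time, while ten small bets almost never do (roughly 0.7^10, or under 3%). That's the whole case for managing decisions as a portfolio rather than as isolated all-or-nothing calls.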
I hope going through this list of decision making tools reassures you that predicting the future is not the only way to make better decisions. In fact, attempting to predict the future is actually a pretty shitty way to make decisions.
It's much better to acknowledge our very limited ability to predict, and instead use the other decision making tools (reduce uncertainty, increase flexibility, suspend physics, and applied portfolio theory) to make better decisions within the context of uncertainty about the future.
I find Marginal Revolution to be one of a few really excellent filters/curators that I follow as part of exploring the "horizontal stroke" in my T-shaped expertise, and that's why you see me frequently referencing Tyler and Alex's work there. The authors probably lean more libertarian than I do, and so it's really refreshing to get a regular, thoughtful deposit of that perspective. ↩
It's not, of course, the only influence. But I do see it as the biggest one. ↩