As consumer demand falters and raw material costs rise, the pressure on business leaders, marketers, finance executives, and operating officers to pinpoint what is driving – or hindering – progress toward their performance objectives ratchets up.
It’s not for lack of information about performance drivers. In most organizations there is simply too much of the stuff. In fact, few functions these days are exempt from the high-stakes search for what matters in the sea of performance indicators they can track.
And the main tool organizations use to focus attention on the right indicators and align work effort against them is buckling under the pressure. Between 47% and 70% of executives in recent polls expressed dissatisfaction with balanced scorecards – the performance dashboards that balance short- and long-term goals. The criticism is that balanced scorecards end up tracking irrelevant factors and growing too long to provide focus.
Progressive firms are experimenting with better ways to determine what really matters to what they’re trying to do. Like blindfolded men probing different parts of an elephant, however, they don’t always see that their experiments reflect different parts of the same beast – or, in this case, different steps in a common process.
Here is the outline of a process that unifies three themes in the efforts of progressive companies making real headway in simplifying their performance management and pinpointing what really matters. The themes are: be bold, be skeptical, and be humble. The steps they reflect constitute a revolution in how we use information by focusing on relevance – and not just on levels of detail.
The first theme, being bold, is central to a group of organizations practicing assumption-based planning. Instead of trying to find strategic patterns in a sea of data, they start with testable strategies in the form of short lists of assumptions that have a big impact on expected results but are highly uncertain. These strategic assumptions provide a framework and a baseline for understanding preliminary results.
Alcoa and the U.S. Army have both made significant use of assumption-based planning. It’s easiest to appreciate the power of this first theme, however, through the history of a firm that for a time planned not so much on the basis of a testable strategy as a list of requirements.
When Herb Henkel first steered Ingersoll Rand away from its traditional base in locks, equipment, and electric vehicles to providing customer solutions, the firm’s balanced scorecard looked like a comprehensive list of requirements for success – necessary ingredients but no clear recipe. For example, it aimed at both customer intimacy – critical for a solutions provider – and growth through innovation – a requirement for a product specialist.
Soon the firm was acquiring specialist parts producers whose output it could always have procured for its clients at arm's length, as any solutions provider would. Analysts quipped that it provided great solutions at prices inflated by M&A premiums.
Lists of requirements don’t add up to testable strategies. You can meet every requirement in a list without hitting your goal. Testable strategies are much more specific. You learn something about them – and about the world – every time you miss a target.
The second theme, being skeptical, is the domain of continuous improvement specialists like Capital One and Toyota. Their focus is on the performance measures that best challenge their assumptions – and, perhaps surprisingly, not on the balance between long- and short-term goals struck by balanced scorecards.
The key question they ask is which assumption is most likely to explain the next thing that goes wrong. When Toyota finds itself performing well in terms of a factor like customer loyalty, for example, it shifts its attention elsewhere, to a factor where it may uncover a problem.
Of course, most of us prefer metrics that show we’re doing just fine, but they can be red herrings. Saatchi & Saatchi focused on a red herring when it proposed an ingenious strategy for achieving the goal of permanently infatuated clients in the 1990s. The idea was to reposition – and not just promote – clients’ products. But it didn’t identify indicators to show that’s what it was really doing.
To be sure, the firm tracked the number of “big, fabulous ideas” its agents generated and its fame for idea leadership. But you can win fame for ideas that savage your clients’ competitors instead of repositioning their products. Saatchi & Saatchi’s balanced scorecard showed success generating ideas and success with clients but it didn’t test the effectiveness of product repositioning per se. And if you can’t be sure why you’re succeeding, you may not reproduce the success.
If there’s a problem with balanced scorecards, it’s that they provide no clear priorities among the potentially unlimited number of factors they can track. And there’s a huge temptation to fill them with red herrings. Assumptions provide a very clear set of priorities, however, and they point to the toughest questions.
The biggest challenge is finding metrics that really test your biggest assumptions. In other words, it's not always easy to tell what's relevant even after you identify a critical assumption. The best solution I've seen is to look for a metric whose negative results could disprove the assumption and whose favorable results would be hard to explain if the assumption were wrong.
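To make that selection rule concrete, here is a minimal sketch in Python. It treats a metric's diagnostic power as a Bayesian question: how much does a favorable reading actually shift your belief in the assumption? All the probabilities below are invented for illustration, not drawn from any company's data.

```python
def updated_belief(prior, p_fav_if_true, p_fav_if_false, favorable):
    """Bayes' rule: revise the probability that a strategic assumption
    is true after seeing one favorable or unfavorable metric reading."""
    if favorable:
        like_true, like_false = p_fav_if_true, p_fav_if_false
    else:
        like_true, like_false = 1 - p_fav_if_true, 1 - p_fav_if_false
    numerator = prior * like_true
    return numerator / (numerator + (1 - prior) * like_false)

# A diagnostic metric: favorable readings are common if the assumption
# holds (80%) but rare if it doesn't (20%). One good reading moves
# belief from 50% to 80%.
diagnostic = updated_belief(0.5, 0.8, 0.2, favorable=True)    # 0.80

# A red-herring metric: favorable readings are likely either way
# (90% vs. 85%), so a good reading barely moves belief at all.
red_herring = updated_belief(0.5, 0.9, 0.85, favorable=True)  # ~0.51
```

The gap between the two outcomes is the point: a metric that looks good regardless of whether the assumption holds teaches you almost nothing, no matter how reassuring the reading.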
The last theme, being humble, is about measuring the impact of both strategy and execution on results. The issue is huge because nothing destroys morale faster than blaming staff for the full difference between goals and actual results when part of the problem could be unjustifiable goals and the strategic assumptions underlying them.
Nestle seems to be attacking this challenge by setting broad cross-product goals for each of its regions and global goals for each of its product groups. Managers have some leeway to shift goals across products or regions until goals for products within regions, and for regions within products, reflect equal tension. That way, flaws in new global strategies show up as conspicuous patterns in the hits and misses across products and regions.
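The pattern-reading step can be sketched in a few lines of Python. This is a hypothetical illustration of the logic, not Nestle's actual method: a miss that recurs across every region for one product implicates the global product strategy, while scattered misses look like local execution noise. All figures and product names are invented.

```python
def diagnose(misses_by_region, threshold=-3.0):
    """Classify a product's goal misses (percentage vs. goal, one entry
    per region). A deep miss shared by every region points to the global
    strategy; anything else is treated as execution noise."""
    avg = sum(misses_by_region.values()) / len(misses_by_region)
    if avg < threshold and all(v < threshold for v in misses_by_region.values()):
        return "strategy"
    return "execution"

# Hypothetical percentage misses vs. goal, by product and region.
misses = {
    "product_a": {"europe": -1.0, "americas": +0.5, "asia": +0.8},
    "product_b": {"europe": +0.3, "americas": -0.4, "asia": +0.6},
    "product_c": {"europe": -6.0, "americas": -5.5, "asia": -7.2},
}

flags = {product: diagnose(row) for product, row in misses.items()}
# product_c misses deeply everywhere -> "strategy"; the others -> "execution"
```

The equal-tension condition matters here: only if goals are comparably stretched across the grid can a conspicuous row or column of misses be read as a strategy signal rather than an artifact of uneven goal-setting.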
Many executives think it’s impossible to determine how much of a difference between actual and expected results is due to execution and how much is due to mistaken goals and assumptions. It’s hard enough to split these performance surprises between the impact of controllable factors and risk factors.
But this is the payoff of being bold and being skeptical – of spelling out expectations in the form of a testable strategy and focusing on metrics that test the underlying assumptions. Testable strategies and relevant metrics reveal strategy gaps.
A strategy gap is the difference between expected and actual results remaining after you take into account the expected effect of execution shortfalls and risk factors. It’s a measure of the impact of mistakes in goals and strategic assumptions. You can measure it even if your optimal strategy is unknown.
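The decomposition is simple arithmetic, sketched below in Python with invented figures; the attribution of dollars to execution and risk is assumed to come from the kind of post-mortem the article describes.

```python
def strategy_gap(expected, actual, execution_shortfall, risk_impact):
    """Strategy gap = total miss minus the portions attributable to
    execution shortfalls and to risk factors outside anyone's control.
    What remains traces back to goals and strategic assumptions."""
    total_miss = expected - actual
    return total_miss - execution_shortfall - risk_impact

# Goal was $100M; the team delivered $88M. Reviews attribute $5M of the
# $12M miss to execution slips and $3M to an unhedged currency swing.
gap = strategy_gap(expected=100, actual=88, execution_shortfall=5, risk_impact=3)
# The remaining $4M is the strategy gap: the cost of flawed goals
# and assumptions, not of the people executing against them.
```

Note that the calculation never requires knowing the optimal strategy; it only requires honest estimates of the execution and risk components.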
Nestle seems to have found a way to reveal strategy gaps using results from widely varying products and regions. It’s a powerful morale boost because it lets managers know they will not be held accountable for flaws in goals and strategic assumptions outside of their control. And it shows humility at the top.
The efforts of some of these experimenters would be worth a look if only to assure your team that your organization scrutinizes its goals as critically as execution. But the testable strategies and relevant metrics that make it possible to understand the impact of planning as well as execution have huge benefits of their own. They can strip down your performance dashboard to a handful of key metrics. And they will relentlessly clarify what matters to results.