great price, and are staying loyal, that’s a good thing. Other metrics like low cost, high productivity of labour or capital, rapid growth, new product innovation rate, staff turnover, right-first-time rates, etc. show only one element of a balanced scorecard, and if you are forced to optimise one you may be able to do so by sacrificing many of the others. My experience with benchmarking “loss of face” factors (e.g. lost time accidents, quality incidents) is that many companies go to extraordinary lengths to manipulate the figures, e.g. by giving manual workers with broken limbs some clerical chores to do, so they won’t be categorised as “off work”, or by inventing a “low spec” product so that defective output no longer counts as a quality incident.
So my recommendation is to benchmark the widest range of metrics – money, time, quality, health / safety / environment (subject to pitfall 2 above) – that are relevant. Praise managers who are willing to lose face. Only include success metrics for which the entity’s management is clearly responsible.
Pitfall 4: wrong choice of peers
The first mistake in many cases is to restrict the comparison set to direct competitors. Obviously for some things – particularly production-related ones – only competitors have comparable processes. But for many things, particularly marketing, R&D, HR, finance, IT, logistics, and even for many production overhead processes, you can look outside your industry for analogous peers facing similar enough challenges in similar enough environments. The PIMS (Profit Impact of Market Strategy) database has proved that cross-industry comparison is valid even at the level of profitability, growth, and business strategy.
Small competitors often try to copy the most successful big player in their industry. The military equivalent of this would be to say “who has the strongest army, what terrain are they best at fighting on … let’s attack them there”.
A small competitor should benchmark against other small competitors in analogous markets and see what the winners do to differentiate against big successful players. Similarly, market leaders should learn from the best “look-alike” leaders.
In general, the best comparison is against peers who are like you in terms of the intrinsic challenge (the drivers of performance outside management control) but are doing a better job than you in terms of the drivers within management control.
Pitfall 5: simplistic league tables
Imagine a plant manager who is told he is in the worst quartile for inventory control because he has 60 working days of raw material stock; best quartile is 10 days. Unfortunately his raw material comes by ship from 3000 miles away, and the cost-efficient thing is to have two ships a year. So each delivery comprises 120 days of raw material (at 5 days/week, 48 weeks/year, i.e. 240 working days a year), and as stock runs down linearly from 120 days to zero, the average stock must be half of 120, i.e. 60 days. He can of course get down to <10 days by subcontracting someone else to receive the two shiploads a year and truck some to him every week. But that just creates an intrinsically longer and less efficient supply chain.
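The stock arithmetic above can be sketched as a short calculation; the helper function is hypothetical, not from the text:

```python
# Average raw-material stock implied by delivery frequency, assuming
# level usage and each shipload arriving just as stock hits zero.
# 5 working days/week x 48 weeks = 240 working days/year; two ships a
# year means each delivery covers 120 working days of material, and a
# linear run-down from 120 to 0 averages half the peak, i.e. 60 days.

WORKING_DAYS_PER_YEAR = 5 * 48  # 240

def average_stock_days(deliveries_per_year: int) -> float:
    """Average stock, in working days of cover (a sawtooth averages
    half its peak)."""
    days_per_delivery = WORKING_DAYS_PER_YEAR / deliveries_per_year
    return days_per_delivery / 2

print(average_stock_days(2))   # two ships a year -> 60.0 days
print(average_stock_days(24))  # roughly fortnightly -> 5.0 days
```

The point of the sketch: with only two deliveries a year, 60 days of average stock is the arithmetic floor, not a management failure.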
If the problem is something else even more outside his control, such as greater complexity, higher service levels or less flexible production equipment, he can only get to first quartile by changing the job he has to do. While this may be a relevant discussion for him to have with the corporate centre, it is counter-productive to tell him he is “not world class”. He may be, he may not be, but either way he is left convinced that benchmarking is a waste of time.
You have to take account of the differences that make a difference, and learn from those who are like you on the intrinsic drivers you cannot change.
Pitfall 6: problems with confidentiality or even legality
Surprisingly often, benchmarking results are presented as a big matrix of numbers, where the rows are the various metrics and the columns are the various observations (albeit not named). You get told you are Column H. In my experience most users of such benchmarking spend the next few hours working out which competitor is Column B, Column C, etc., and are very often right. Not naming the columns has not achieved the desired confidentiality.
Benchmarking among competitors also runs into competition (antitrust) law. Typical rules include:
- Any data related to pricing have to be historic. No current data or forward projections are allowed.
- The format of presentation must not allow for the identification of individual competitors, even by an intelligent insider. In the USA there must be at least 5 participants in a benchmarking circle.
It is clear that the presentation format described above is not only contrary to the interests of users, by destroying confidentiality, but is actually illegal.
So much for the six pitfalls of bad benchmarking; what are the six pillars of good benchmarking?
Pillar 1: correct each benchmark for key intrinsic differences
This is particularly important when you want the best single-metric benchmark to compare against actual performance, e.g. for a bonus calculation. The way to do this is some form of multivariate statistical analysis, e.g. regression. For any success metric, this finds the best mathematical combination of the various drivers and yields a “par” or expected value. It also gives you an analysis of underlying strengths and weaknesses – how much each driver is driving the par away from the overall mean (if a driver is at its mean, then its impact is zero).
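A minimal sketch of this idea, using made-up data and ordinary least squares (the text only says "some form of multivariate statistical analysis, e.g. regression"; the variable names and numbers here are illustrative assumptions):

```python
# Fit a success metric on intrinsic drivers, then report each
# observation's "par" (expected value) and how much each centred
# driver pushes its par away from the overall mean.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intrinsic drivers, e.g. plant scale and complexity.
drivers = rng.normal(size=(40, 2))
true_coefs = np.array([3.0, -2.0])
metric = 10 + drivers @ true_coefs + rng.normal(scale=0.5, size=40)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(drivers)), drivers])
coefs, *_ = np.linalg.lstsq(X, metric, rcond=None)

par = X @ coefs      # expected ("par") value for each observation
gap = metric - par   # over/under-performance vs par

# Driver impact: how far each centred driver moves par off the mean.
# A driver sitting exactly at its mean contributes zero, as the text says.
centred = drivers - drivers.mean(axis=0)
impact = centred * coefs[1:]

print(gap[:3].round(2))
print(impact[0].round(2))
```

The `gap` is what a bonus calculation would fairly reward: performance relative to par, not relative to an uncorrected league table.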
If you have time-series as well as cross-sectional data, there are various “causal modelling” techniques that analyse leads and lags to give an equation with extra weight on factors that are clear lead indicators or causes of success.
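One simple way to spot such lead indicators, sketched on synthetic data (the text does not prescribe a specific technique; this lagged-correlation scan is an illustrative assumption):

```python
# If a driver leads the success metric by k periods, the correlation
# between driver[t] and metric[t+k] peaks at lag k - a reason to give
# that driver extra weight in the benchmarking equation.
import numpy as np

rng = np.random.default_rng(1)
n = 200
driver = rng.normal(size=n)
# The metric responds to the driver two periods later, plus noise.
metric = np.roll(driver, 2) + 0.3 * rng.normal(size=n)
metric[:2] = 0.0  # discard the wrap-around from np.roll

def lagged_corr(x, y, lag):
    """Correlation of x[t] with y[t+lag] (positive lag: x leads y)."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

best_lag = max(range(6), key=lambda k: lagged_corr(driver, metric, k))
print(best_lag)  # the lag at which the driver best predicts the metric
```

Real causal-modelling techniques (e.g. regression with lagged terms) go further, but the core move is the same: weight the factors that demonstrably lead success.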
Pillar 2: use “look-alikes” to pinpoint improvement areas
This is particularly important when you want to get multiple metrics in a consistent pattern that helps you arrive at a prescription for how to improve. You have a success metric and relevant intrinsic drivers as in pillar 1, but you search the database for observations “like you” on the drivers and learn from the ones performing better.
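A toy sketch of the look-alike search, assuming synthetic data and a plain nearest-neighbour distance (the text does not specify the matching method; in practice the drivers should be standardised first):

```python
# Find the observations nearest to "you" in intrinsic-driver space,
# then keep only those whose success metric beats yours - these are
# the look-alikes worth studying.
import numpy as np

rng = np.random.default_rng(2)
drivers = rng.normal(size=(50, 3))  # intrinsic drivers per peer
metric = rng.normal(size=50)        # success metric per peer

you_drivers = np.zeros(3)
you_metric = 0.0

# Euclidean distance in driver space (assumes comparable scales).
dist = np.linalg.norm(drivers - you_drivers, axis=1)
nearest = np.argsort(dist)[:10]  # the ten most similar peers
look_alikes = [i for i in nearest if metric[i] > you_metric]

print(look_alikes)  # peers like you on the drivers, doing better
```

Because the comparison set is matched on the drivers you cannot change, whatever the look-alikes do differently is plausibly within management control.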