Benchmarking in an age of disruption
Benchmarking of IT and business operations is a time-tested approach to gauging pricing and service level against market standards. In recent years, however, enthusiasm for benchmarking has waned. Many CIOs concluded that traditional benchmarks are too slow, too rigid and too expensive to provide much value in an environment of rapidly changing market conditions, innovation and disruptive technology. Put simply, they’ve just not seemed very sexy.
That negative view is changing. Recent developments have significantly enhanced benchmarking capabilities in terms of speed, cost and analytical scope. As a result, benchmarks today are becoming increasingly relevant to operational strategies aimed not only at optimising existing operations, but at assessing new technologies, modelling future scenarios and designing, implementing and managing operational transformation.
One fundamental problem with traditional benchmarks is that they are too slow, cumbersome and expensive. The process of manually collecting and analysing data requires significant time and effort on the part of the client organisation, and often results in engagements stretching out for months. Today, with contracts becoming increasingly shorter in term and smaller in total value, investing in such a benchmarking exercise makes little sense.
Another issue is that benchmarks have been limited to a historical, point-in-time perspective and confined to discrete service towers. They’re designed to analyse the performance of, say, a data centre over the course of a year. By assessing that data centre’s performance in the context of top-performing data centres of similar size and scope, you can identify gaps and gain insight into how to improve efficiency.
That’s all well and good – if the IT environment is relatively static. Today, of course, that’s not the case, and “historical perspective” is becoming irrelevant to a world where market conditions that prevailed six months ago are already obsolescent.
Another challenge is that traditional benchmarks rely exclusively on data from similar peer organisations to provide the comparative reference. This sets an unrealistically high bar of expectations and, because it’s invariably difficult to find enough current and comparable data, raises questions about the validity of the data at the outset of the initiative. Rather than focusing on results, the client and provider squabble over the numbers.
So what’s changed to make benchmarks better?
For one thing, the application of automation to the functions and processes of benchmarking has been a fundamental game-changer. Rather than advisor and client teams spending weeks poring over spreadsheets of cost, productivity and quality metrics, data collection and analysis can now be done largely with automated tools. This has dramatically reduced the cost of the exercise, which in turn has made benchmarking cost-effective for service providers seeking to gauge the competitiveness of their pricing.
Moreover, peer-group comparative models are being replaced by the use of cross-industry market data, which encourages acceptance of the benchmark results and allows clients and providers to go into the exercise with agreement that the numbers are valid.
Finally, benchmarks are increasingly able to project and assess a variety of future scenarios, thanks to the application of sophisticated heuristics and regression analysis models. Coupled with automated data collection tools, these capabilities enable real-time and ongoing analyses of how new technologies are impacting pricing trends.
This is critical, as it allows benchmarks to take a future-oriented view, and to be used even when historical data is sparse. For example, a traditional benchmark might find that storage costs are not aligned with market standards, quantify the gap and provide a target for a pricing adjustment. Today, by contrast, a benchmark can assess alternatives to a conventional storage solution, such as the costs, benefits and transition implications over time of an investment in a cloud-based storage model.
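To give a flavour of the regression-based projection described above, the following is a minimal sketch using ordinary least-squares over historical unit prices. The price figures, quarterly cadence and simple linear model are illustrative assumptions for this example only; real benchmarks draw on far richer market data and models.

```python
# Minimal sketch: project a unit-price trend with ordinary least-squares
# linear regression. All figures are invented for illustration; they are
# not market data.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical quarterly price points for storage, per GB per month
quarters = [0, 1, 2, 3, 4, 5]
price_per_gb = [0.045, 0.042, 0.040, 0.037, 0.035, 0.033]

slope, intercept = fit_line(quarters, price_per_gb)

# Project the trend two quarters ahead
for q in [6, 7]:
    projected = intercept + slope * q
    print(f"Quarter {q}: projected price {projected:.4f} per GB/month")
```

A downward slope here signals that a price agreed against last year’s market is already above trend, which is precisely the kind of gap a forward-looking benchmark surfaces.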
Benchmarks are also helping enterprises navigate the challenges and opportunities of Intelligent Process Automation (IPA) and cognitive computing. Analysts predict that intelligent tools will drive savings of 45 per cent for IT services and up to 60 per cent for managed service solutions, as well as staff reductions of up to 45 per cent in certain areas. Alsbridge has observed a dramatic increase in the volume of IT outsourcing contracts with an IPA component, with providers reducing pricing bids by 25 per cent to 30 per cent due to autonomics.
While few would argue that IPA is transforming IT and business operations, gauging the precise impact on pricing is a challenge, as a number of factors are at play. Enterprise maturity is one consideration. Ironically, the most mature and the least mature organisations are likeliest to see the biggest and fastest benefit from IPA. Advanced companies have laid the groundwork and are well positioned to take the next step, while the least advanced have the most to gain because their uncoordinated and very manual processes can be automated in a bite-the-bullet approach.
Moreover, market prices historically have declined at different rates for different services, as shown below. As such, there’s no standard formula to calculate how IPA will impact IT costs, as the impact will vary by tower.
Finally, assessment of an IPA initiative must consider implementation costs, which include software license fees, system deployment and configuration, knowledge transfer, replacement of existing tools and staff training. In fact, the first year of an ITO contract with an IPA component will typically see higher costs than its traditional ITO counterpart.
Benchmarking can be applied to account for these variables and calculate the cost impact of IPA on a tower-by-tower basis, as well as its cumulative impact over time versus traditional outsourcing, as shown below.
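As a rough illustration of that cumulative comparison, the sketch below contrasts a traditional ITO contract with an IPA-enabled one that carries a one-off implementation cost in year one but lower run costs thereafter. Every figure is a hypothetical assumption invented for the example; an actual benchmark would use measured, tower-level data.

```python
# Illustrative sketch only: cumulative cost of a traditional ITO contract
# versus an IPA-enabled one. All figures are invented assumptions, not
# benchmark data.

def cumulative_costs(annual_run_cost, years, one_off=0.0):
    """Cumulative cost at the end of each year, with an optional one-off
    implementation cost incurred in year one."""
    totals, running = [], 0.0
    for year in range(1, years + 1):
        running += annual_run_cost + (one_off if year == 1 else 0.0)
        totals.append(running)
    return totals

traditional = cumulative_costs(annual_run_cost=10.0, years=5)
# IPA case: run cost ~27% lower (mid-point of the 25-30% range cited
# above), plus an assumed one-off implementation cost in year one.
ipa = cumulative_costs(annual_run_cost=7.3, years=5, one_off=3.5)

for year, (t, i) in enumerate(zip(traditional, ipa), start=1):
    print(f"Year {year}: traditional {t:5.1f}  IPA {i:5.1f}")
```

With these assumed figures the IPA contract costs more in year one, the break-even arrives in year two, and the gap widens each year thereafter, which is the pattern the article describes.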
The capabilities of today’s benchmarks allow for an agile and forward-looking perspective that assesses multiple scenarios and variables, providing CIOs with the critical insight needed to navigate the uncertain waters of a constantly changing marketplace. Benchmarks are also contributing to a more positive client/provider relationship, by shifting the conversation from a contentious debate over pricing tactics to a constructive dialogue on moving the business forward in a way that benefits all parties.
About the Author