
Benchmarking of IT Outsourcing on an operational level

Joerg Stimmer

First insights from a user-driven initiative in Germany

(This article is co-written by regular columnist Joerg Stimmer in collaboration with Andreas Maaser, Head of Technology Development, Common Database & Prepaid of Telefónica O2 Germany)

Recently, Joerg Stimmer wrote on the expected “standardisation in outsourcing” and its benefits to customers as well as its implications for service providers. One major driver of standardisation is to ensure one immediately benefits from the lessons learnt by others and has the option to easily compare one’s own status with theirs.

While benchmarking IT budgets or other parameters at company level (a typical example being “IT cost vs. revenue”), or comparing basic data such as the day rates of an offshore contract, is well established, the authors see challenges when outsourcing users want to understand their – and their service provider’s – performance at project level. The goal is to be able to identify the major drivers for improvement and tackle them. A direct exchange of lessons learnt and experience between different users is explicitly encouraged.

In September this year, several large and mid-sized companies, all experienced users of outsourcing services, set up a “users-only” benchmark working group to jointly agree on a common set of parameters suitable for comparing outsourcing services on an operational level. The main areas of interest are, first, quality and performance; and second, the costs of the outsourced services. The major challenge is to establish a well-defined and commonly accepted method of calculating the benchmarking parameters, based on easily measurable input. Furthermore, this method should be independent of users, processes and company specifics to guarantee a realistic comparison of different projects.

Here we give some examples of the proposals for software development, support and maintenance that are currently being discussed within the benchmark working group.

Quality
One dimension of quality is defect density, which can easily be measured as the number of defects divided by the delivery volume (with function points according to the IFPUG standard as one possible measure of delivery volume). To compare the defect density of different providers, it is important to define which defects are counted and how their severity is weighted. For example, it makes a difference to the number of defects whether the provider is responsible for delivering software that is “only” component tested or whether it is responsible for delivering fully integrated software. Furthermore, it should be defined what “component tested” or “fully integrated” means.
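
To make the idea concrete, the following minimal Python sketch shows how such a weighted defect density could be computed. The severity weights, field names and the 400-function-point example are illustrative assumptions, not the definitions agreed by the working group.

# A minimal sketch of a weighted defect-density calculation.
# Severity weights and data layout are illustrative assumptions,
# not the working group's agreed definitions.

SEVERITY_WEIGHTS = {"critical": 3.0, "major": 2.0, "minor": 1.0}

def weighted_defect_density(defects, delivery_volume_fp):
    """Weighted defects per function point (IFPUG) delivered.

    defects            -- list of dicts, each with a 'severity' key
    delivery_volume_fp -- delivery volume in function points
    """
    if delivery_volume_fp <= 0:
        raise ValueError("delivery volume must be positive")
    weighted = sum(SEVERITY_WEIGHTS.get(d["severity"], 1.0) for d in defects)
    return weighted / delivery_volume_fp

# Example: 2 critical, 4 major and 6 minor defects against a 400-FP delivery
defects = ([{"severity": "critical"}] * 2
           + [{"severity": "major"}] * 4
           + [{"severity": "minor"}] * 6)
print(round(weighted_defect_density(defects, 400), 3))  # 0.05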

Another dimension of quality is the response and resolution time for defects. In most outsourcing contracts these parameters are well defined. But, broken down by defect severity, how often are these timelines exceeded (and why?), also in relation to the delivery volume (or function points)?
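
A simple sketch of how such an exceedance rate could be derived from a defect log is shown below; the SLA limits per severity and the field names are assumptions for illustration only.

# A minimal sketch: share of defects whose resolution time exceeded the
# contractual limit, per severity, plus a figure normalised by delivery
# volume. SLA limits (hours) and field names are assumed for illustration.

from collections import defaultdict

SLA_RESOLUTION_HOURS = {"critical": 8, "major": 24, "minor": 72}

def sla_exceedance(defects, delivery_volume_fp):
    """defects -- list of dicts with 'severity' and 'resolution_hours' keys."""
    exceeded = defaultdict(int)
    total = defaultdict(int)
    for d in defects:
        sev = d["severity"]
        total[sev] += 1
        if d["resolution_hours"] > SLA_RESOLUTION_HOURS[sev]:
            exceeded[sev] += 1
    return {
        sev: {
            "share_exceeded": exceeded[sev] / total[sev],
            "exceeded_per_fp": exceeded[sev] / delivery_volume_fp,
        }
        for sev in total
    }

# Example: one critical defect resolved late against a 400-FP delivery
log = [{"severity": "critical", "resolution_hours": 10},
       {"severity": "minor", "resolution_hours": 20}]
print(sla_exceedance(log, 400))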

Productivity
Whenever productivity is discussed, it often seems there is only one answer: function points. Nobody really likes them, because measuring function points is complex, expensive and doesn’t meet all needs (for example, function points cannot be applied if you are using “Commercial Off-the-Shelf” (COTS) products, where the development effort is “limited” to configuration).

Currently, the benchmarking group is discussing the use of an automatic code quality tool (a standard component of the pliXos platform) which measures the complexity and guideline compliance of the software code. Set against the number of adapted or new lines of code, this may offer an opportunity to automatically calculate a kind of productivity measure that can easily be compared within the benchmark working group.
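
As a rough illustration of what such an automatic calculation might look like, the sketch below combines changed lines of code, a guideline-compliance score and booked effort into a quality-adjusted productivity figure. The weighting scheme and the sample numbers are illustrative assumptions, not the pliXos tool’s actual calculation.

# A minimal sketch of an automatically computable productivity proxy:
# new or adapted lines of code, discounted by a guideline-compliance
# score from a static-analysis tool, per person-day of effort.
# Weighting scheme and sample figures are illustrative assumptions.

def productivity_proxy(changed_loc, guideline_compliance, effort_person_days):
    """changed_loc          -- new or adapted lines of code
    guideline_compliance -- share of checks passed, 0.0 to 1.0
    effort_person_days   -- effort booked on the delivery
    """
    if effort_person_days <= 0:
        raise ValueError("effort must be positive")
    # Discount raw output by how well it complies with the coding guidelines.
    quality_adjusted_loc = changed_loc * guideline_compliance
    return quality_adjusted_loc / effort_person_days

# Example: 9,000 changed lines, 92% guideline compliance, 60 person-days
print(productivity_proxy(9_000, 0.92, 60))  # 138.0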

Cost
The comparison of cost benchmarks, e.g. average cost per man-day, appears to be the easiest part. Nevertheless, these figures are only comparable if productivity, complexity and the maturity level of the provider-customer relationship are the same. For this reason you need additional benchmarks, e.g. the number of internal staff required to manage the provider, or the number of successful projects (however you define “successful”).
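
One simple way to reflect that governance overhead in a cost figure is sketched below: the customer-side effort spent managing the provider is folded into a blended cost per delivered man-day. Blending governance cost into the day rate, and the sample figures, are illustrative assumptions rather than an agreed benchmark definition.

# A minimal sketch of a cost benchmark that also accounts for the internal
# staff needed to manage the provider. The blending approach and the sample
# figures are illustrative assumptions.

def blended_cost_per_day(provider_days, provider_day_rate,
                         governance_days, internal_day_rate):
    """Average cost per delivered provider man-day, including the
    customer-side governance effort spent managing the provider."""
    if provider_days <= 0:
        raise ValueError("provider days must be positive")
    total_cost = (provider_days * provider_day_rate
                  + governance_days * internal_day_rate)
    return total_cost / provider_days

# Example: 1,000 offshore days at 250 plus 120 internal governance days at 600
print(blended_cost_per_day(1_000, 250, 120, 600))  # 322.0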

The benchmarks currently under discussion within the working group are shown in the following overview:

[Figure: overview of the benchmarks under discussion (benchmarking-Stimmer-Maas1)]

An example of how to calculate the aforementioned parameters is given for those highlighted in red:

[Figure: example calculation of the highlighted parameters (benchmarking-Stimmer-Maas2)]

[Figure: example calculation of the highlighted parameters, continued (benchmarking-Stimmer-Maaser-3)]

Agreeing on a common definition of the benchmarking parameters and how to calculate them is just a tiny first step. There is consensus that all members of the “users-only” benchmarking initiative will sign a common NDA and start to share real operational facts and corresponding data. The development of a benchmarking tool allowing one to graphically view trends, averages and upper and lower limits is even being planned:

[Figure: mock-up of the planned benchmarking tool (benchmarking-Stimmer-Maaser-4)]

[Figure: mock-up of the planned benchmarking tool, continued (benchmarking-Stimmer-Maaser-5)]

To secure this critical data, besides the NDA already mentioned, data encryption within the tool is under discussion, including a specific weighting factor known only to authorised users.

A final thought which gets the authors excited: suppose a majority of outsourcing users agreed on such a common set of operational parameters, their calculation were well defined, and the necessary input were measured by tools provided by a neutral third party. How would that impact overall service delivery in outsourcing?

The next workshop of the “users-only” benchmark working group is on November 10 in Munich. Besides finalising agreement on the parameters, the organisers want to initiate the sharing of further valuable operational insights and start work on the benchmarking tool. Should you have any comments or input, the authors would be happy to receive an email!

 


About the Authors

Dr. Joerg Stimmer: see columnist profile

Dr. Andreas Maaser, Head of Technology Development, Common Database & Prepaid at Telefónica O2 Germany, builds on more than 10 years’ experience within telco IT. He has gained vast experience in successfully initiating and managing outsourcing engagements within large-scale projects in recent years and currently heads a development department which successfully leverages outsourcing capabilities in India. Furthermore, he is responsible for the strategic vendor management of Telefónica O2 Germany and is recognised as a key person concerning the governance of offshoring projects (andreas.maaser@o2.com).
