My recent blog, Watertight, not watermelon SLAs, had a fantastic response, with nearly 5,000 reads via LinkedIn. It also drove a number of discussions, and I established some new contacts as a result. This subject clearly ‘hit a nerve’, so this is the follow-up, with more detail around what this means and what service experience management and metrics are all about.
As a recap
There’s a real need to move away from IT-focussed SLAs and associated reporting, as these often do not represent customer / user experience, or show how services meet business demands. It’s not accurate or healthy to have too much focus on individual IT components and an IT-departmental view of what’s important. All stakeholders need to be involved in defining targets and metrics that help to identify whether value is being delivered, or, if not, where this is failing.
Traditional SLAs don’t go far enough and often miss the mark on how or where to improve. Customer feedback on its own can also fail to show business value being achieved or understood. Whilst traditional IT metrics show performance in specific technical areas, the concept of ‘value metrics’ should reflect a wider set of business results, outcomes and areas of customer experience.
In the absence of real intelligence around how these ‘SLA’ metrics are compiled and presented, service providers often fall back on producing volume rather than quality – listings, reports and details that no one wants to see. They can also fail by producing generic ‘industry’ metrics when specific business-related outputs are required. This all adds to the confusion and lack of trust between providers and their customers.
Metrics must reflect the increasing complexity of interconnected systems and services. But they must also do so in a way that separates the ‘wood from the trees’ – i.e. presents a rounded view of ‘value’ and not just a vast forest of unintelligible data.
Before we go further, we should also be clear on the following:
· Operational metrics are useful – for internal quality monitoring and as building blocks for integrated reporting and OXMs (Outcome and Experience based Metrics).
· SLA metrics can be useful – as long as these are seen to be related to specific requirements and agreements with customers.
· Customer satisfaction feedback data is highly useful but should be seen in context – event-based surveys reflect a moment in time; periodic surveys are often also needed for context and perspective.
· Internal employee satisfaction data is useful if seen in relation to other indicators and feedback – some surveys on their own can either show data that the organisation wants to hear, or only negative data. A lot here depends on how the data is captured – i.e. whether it is genuinely confidential, etc. Organisational trust and culture are important here.
So how do we do this…? How do we measure ‘value’?
In simple terms, by using a number of different types, sources and formats of metrics and combining these together. This is done with weighting that reflects relative importance and therefore value. When discussing agreements and targets for these composite metrics, stakeholders can focus mostly on the outputs and relative value of different metrics, without needing to know each individual component in detail. The resultant combined and weighted metrics represent a broad spectrum of measurements of experiences, outcomes and results.
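As a simple illustration of this combining-and-weighting idea – where every metric name, target and weight below is purely hypothetical, not a standard – a composite ‘value’ score might normalise each metric against its agreed target and then sum the weighted results:

```python
# Sketch of a composite 'value' metric: each input is normalised
# against its agreed target, then combined using agreed weightings.
# All metric names, targets and weights here are illustrative only.

metrics = {
    # name: (measured value, target, weight)
    "service_availability_pct": (99.2, 99.5, 0.4),
    "csat_score_pct":           (82.0, 85.0, 0.35),
    "incident_mttr_hours":      (3.5,  4.0,  0.25),  # lower is better
}

def score(value, target, lower_is_better=False):
    """Performance against target as a ratio, capped at 1.0."""
    ratio = target / value if lower_is_better else value / target
    return min(ratio, 1.0)

def composite(metric_set):
    total = 0.0
    for name, (value, target, weight) in metric_set.items():
        lower = "mttr" in name  # illustrative: MTTR improves as it falls
        total += weight * score(value, target, lower)
    return round(total, 3)

print(composite(metrics))  # a single weighted 'value' figure out of 1.0
```

The point of the sketch is that stakeholders only need to negotiate the weights and targets; the individual component metrics stay behind the single combined figure.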
Metrics should also be considered fluid and in relation to changing contexts, so different metrics may measure the same things in different situations – e.g. availability of the same service across different business periods. So, service availability at 9am may not require much priority, whereas at 3pm it may be business critical, if that is when a key business transaction takes place.
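That contextual weighting can be sketched as a simple schedule that returns a different priority for the same availability metric depending on the business period – the time windows and weights below are invented for illustration:

```python
# Illustrative only: the same availability measure is weighted
# differently depending on the business period in which it is taken.

from datetime import time

# (start, end, weight) - e.g. 15:00-17:00 is when the key business
# transaction runs, so availability is critical in that window.
AVAILABILITY_WEIGHT_SCHEDULE = [
    (time(9, 0),  time(12, 0), 0.2),   # morning: low priority
    (time(15, 0), time(17, 0), 1.0),   # key transaction window: critical
]

def availability_weight(at):
    """Return the weighting for service availability at a given time."""
    for start, end, weight in AVAILABILITY_WEIGHT_SCHEDULE:
        if start <= at < end:
            return weight
    return 0.5  # default weighting outside the defined windows

print(availability_weight(time(9, 30)))   # 0.2
print(availability_weight(time(15, 30)))  # 1.0
```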
These ‘compound’ metrics then can be considered watertight and robust views of the value delivered through services.
Analogy – aircraft telemetry
As a quick analogy, consider the number of measurements (telemetry) taken of an aircraft – these may involve the same measure at different parts of a flight: on the ground, in the air, etc. Tyre pressure is of little actual value during a flight, but really important on landing. When we measure, we need to ensure that we are considering the context at any given time. The flight would also include a number of other metrics around customer service (cabin crew), employee job satisfaction, on-time arrival, cost efficiency, etc. – all of these are relevant and need to be considered and viewed in context. All of these then contribute to the overall value and quality delivered during the flight.
Building OXMs – Outcome and Experience based metrics
To build up a useful set of compound metrics, my suggestion is to use four key areas of measurement:
For experience data:
- Customer feedback
- Employee feedback
For business outcome data:
- Process and performance metrics
- Key business metrics
- Customer feedback – these would involve various sources of customer feedback, from surveys, meetings, NPS, complaints etc.
- Employee feedback – these would include employee feedback from internal surveys, regular meetings and updates, sense checks on morale etc.
- Process and performance metrics – these would include a number of traditional metrics produced for SLAs, operational performance, incident response and turnaround times, MTTR, service availability, etc.
- Key business metrics – these would include the business outcomes derived from use of the services. This will vary across different organisations, sectors and levels of maturity, although in all cases they require input from users and customers to identify their nature and importance. (This consultation process is described here)
All of these areas contain a number of individual metrics that can be weighted and measured against target thresholds. The overall outcomes can then also be prioritised and weighted in accordance with user/customer preference – so, e.g., business outcomes may have a higher weighting than individual processes or user satisfaction. These preferences and relative weightings could also change in different situations, e.g. where user satisfaction may be more important than business outcomes.
The overall dashboard view can then reflect user preference on relative weighting and thresholds, showing RAG (Red/Amber/Green traffic light) status as required.
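A minimal sketch of such a dashboard – the area scores and amber/green thresholds below are invented examples, and in practice each would be agreed with the customer:

```python
# Illustrative RAG mapping: each measurement area's weighted score
# (0.0-1.0) is compared against agreed amber/green thresholds.
# Area names, scores and thresholds are examples only.

THRESHOLDS = {"green": 0.9, "amber": 0.75}

def rag(score, green=THRESHOLDS["green"], amber=THRESHOLDS["amber"]):
    """Map a weighted area score to a traffic-light status."""
    if score >= green:
        return "Green"
    if score >= amber:
        return "Amber"
    return "Red"

area_scores = {
    "Customer feedback":    0.71,
    "Employee feedback":    0.88,
    "Process/performance":  0.95,
    "Key business metrics": 0.92,
}

dashboard = {area: rag(s) for area, s in area_scores.items()}
for area, status in dashboard.items():
    print(f"{area}: {status}")
```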
In the above case the business and performance metrics have been met, but not the customer satisfaction targets.
In this example the experience and performance metrics have been met, but not the key business outcome.
In both of the above cases the overall result may or may not be acceptable to the customer – discussions with the customer will determine this. From experience, building up the bundles of metrics in each area is a useful task which also requires some customer input – this helps both parties to fully understand and work through the needs and expectations of service delivery and reporting. In turn this also helps to build a rich and trusting relationship across teams.
In all of the examples above, the metrics, thresholds and weightings are illustrative – these will be different in each organisation. There is no ‘standard’ for this – understanding the requirement is part of the relationship-building and stakeholder value-building process.
OXMs not SLAs?
The approach suggested here refers in particular to metrics – outcome and experience-based ‘metrics’ – not SLAs or XLAs. ‘Agreements’ can be difficult to achieve without first developing this type of approach. My experience has been that it is helpful to develop these metrics as a means of building agreements in future. In many cases ‘formal’ agreements may not be needed, if there is a good working relationship built on the metrics and what they can deliver. It’s vital to understand that the process of building these metrics (i.e. through collaboration) is equally if not more valuable than the outcomes of the work. Formal agreements may not be needed – however, it is always sensible to keep Goodhart’s law in mind (when a measure becomes a target, it ceases to be a good measure) – i.e. always measure, sometimes formalise, and avoid SLAs and targets on their own becoming de facto goals.
A further stage of maturity that can also be developed is to use this type of model to drive forecasting and demand management – i.e. where changes to performance or capability are also modelled in relation to the impact on Customer or employee experience, and vice versa. I am currently looking at developing models and possibly tools in this area – If you are interested in this please contact me to discuss.
All of this can be achieved without the necessity to train and certify your entire department in one methodology or framework or another, although that is of course useful, and I would recommend building awareness and briefing sessions on ITIL and other approaches into transformations.
However, why not try it out? I’ve used this technique in various forms for some time – it works and delivers some great results.
I’ll also be discussing and presenting more on this topic in my forthcoming Brighttalk Webinar – Thursday 24 September, 16:00 (UK).
I will explore further aspects of this subject in subsequent blogs and webinars, in particular the approach to mapping and building views of services, internal value streams and customer ‘journeys’.
I also offer direct services – workshops and consultancy support – for organisations who wish to move towards achieving value through good service management and service experience metrics – SLAs, XLAs and OXM. Contact me at [email protected].