I have recently encountered several examples of the idea that higher process performance target scores are obviously better than lower ones, just because they are … well … higher; that setting a target of, say, 95% is, without doubt, better than a target of 88%, and in striving for improvement we should go 'as high as possible'.
I'm not convinced.
In various discussions in training courses and consulting projects, it has been clear that when setting process performance targets, the natural assumption is that setting a higher, more difficult target is a 'good thing'.
I remain in doubt.
Setting process improvement targets of 100% and 0%, i.e., perfection, has also been suggested because, of course, we want to be the best we possibly can be.
All doubt has vanished — this is just plain wrong.
Process performance improvement needs to be evidence-based and related to mindful changes in objective, measurable business benefits. It is not good enough to define performance improvement objectives only as general aspirations such as: "reduce revenue leakage", "reduce negative customer satisfaction impact", "remove excess time and cost", or even "reap rich dividends" — I've recently heard all of these. These are fine aspirations, but the performance effect must be quantified, else how do we know if we have succeeded? How do we know the performance gap was worth closing? We can reduce process execution time from five hours to four hours and fifty-five minutes, but that's probably not what anyone had in mind when asking us to "reduce execution time".
Each process exists to deliver value of some form to direct and indirect customers. Depending on the level and position of the process in the process architecture, this might be the actual customer or some other internal or external stakeholder. If we want to know how well a process is performing, we must first understand what value it is meant to deliver, and to whom. Then we can assess the performance gap(s).
We might easily identify a problem — indeed, they often identify themselves or our customers point them out to us — but that must just be the start. Yes, that's right, you are well ahead of me — we must also identify the cause, the root cause, of the problem because we want to be dealing with causes, not just effects. In most methodologies, once we know the problem and its cause, we are on the home straight; all we need to do is remove the cause and that is the finishing post flashing by.
But, it's not as simple as that. Who said we need to fix the problem in the first place?
Life is full of problems, and we can't fix them all; we can't remove all the causes. What's missing in the 'problem and root cause' approach is knowing the impact of the problem. What pain will be experienced if the process is not changed; what will be gained if it is?
The simple existence of a problem is never enough reason to invest in fixing it, perhaps not now, perhaps not ever.
We need a measurable, objective way to define the problem, its root cause, AND the business impact of the problem. Given that we can't fix everything, we need to prioritize. Changing a process will have a cost, and we need to understand that cost in the context of the expected benefits of the change.
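The prioritization logic can be sketched in a few lines. This is a minimal illustration, not a prescribed method; the candidate changes, benefit figures, and cost figures below are entirely hypothetical.

```python
# Illustrative sketch: ranking candidate process changes by business case.
# All change names and dollar figures are hypothetical examples.

candidates = [
    # (change, estimated annual benefit, estimated cost of change)
    ("Automate invoice matching", 120_000, 45_000),
    ("Shorten approval chain", 30_000, 5_000),
    ("Rewrite onboarding checklist", 8_000, 12_000),
]

# Keep only changes whose expected benefit exceeds their cost, then
# rank by net benefit, so scarce improvement capacity goes where the
# evidence says the most value can be gained now.
worth_doing = [
    (name, benefit - cost)
    for name, benefit, cost in candidates
    if benefit > cost
]
worth_doing.sort(key=lambda item: item[1], reverse=True)

for name, net in worth_doing:
    print(f"{name}: net benefit {net}")
```

Note what the filter does to the third candidate: the problem is real, but its business case is negative, so it simply doesn't make the list — perhaps not now, perhaps not ever.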
Elsewhere, I have written in detail about the Tregear Circles, a metamodel for continuous process improvement.
The PO circle has three nodes: target, assess, respond. Target–assess–respond is the essential cycle of process-based management. Identify a process and set a performance target, assess actual performance, and respond if intervention is warranted, either to improve performance or change the target. This is the main game; all process management and process improvement come down to this.
At the target node of the PO circle we determine which 'critical few' measures would satisfy a consensus of key stakeholders that the process is working well enough. An important parallel requirement is ensuring an effective and sustainable measurement method, that is, a practical way of gathering the performance data.
The assessment node is quite deliberately not called 'Measure'; it is called 'Assess' because more is involved than just the measurement of performance data. Measured performance data will be important, and we also look for other opportunities for improvement — solutions looking for problems, as well as problems looking for solutions.
Without an appropriate response, assessment is waste. The purpose of assessment and measurement is to correct problems and, importantly, to find ways to avoid their recurrence.
Generally, there are three types of response: do nothing, because performance is acceptable; change the process, to close the performance gap; or change the target, because it is no longer the right one.
We need this systemic approach for process improvement; not random acts of management, but evidence-based, prioritized, targeted interventions where the business case shows that most benefits can be gained now.
Is 95% better than 88%? I don't know, and neither do you until we know the business impact of (a) not making the change, and (b) making the change. Then we can make an informed decision about what, if any, investment to make in the change.