I am happy to add this post from my colleague Mike Read to The Kitbag. Mike and I have worked together on several projects, and his experience has added much to this work. This post started out as a response to my earlier post "Statistics, sampling and other mysteries of the universe", posted on 18 May 2015. Mike can be reached directly through his website: www.mikeread.org
Can we be 100% confident that everything is perfect? Although we might all like to think so, the answer - especially when it comes to certification - is almost certainly not! Can we check every auditor and every system on every occasion? Are auditors going to check every site, every day? And will they look at everything? Of course not. That would make certification so expensive no one would buy it. There is an inevitable trade-off between cost and certainty. But how do we get the best value for money, and what is the minimum level of certainty that is acceptable? And while we’re at it: ‘certainty of what, exactly?’ These questions are vital to the credibility of certification, and what is certified, but the truth is that answers are routinely fudged.
Most systems mix some kind of sampling into their fudge recipes. But before we get to the joys of how to choose a wise sample, there’s another question that is even more rarely asked. Whose opinion matters?
It’s unlikely to be just your certification scheme that bases its reputation on the quality of the assurance you offer. Maybe a big supermarket or chain of cafés insists that your logo appears on products it sells to ensure its brand retains a good image, and to avoid any nasty surprises and exposés of bad practice. Whether you like it or not, you’re in the business of selling them risk management. And perhaps their customers depend on it too for their own sense of well-being, and would switch to a different supplier if something went wrong.
Maybe you would happily put your logo on products coming from a source where one in 20 of your certification criteria is not being met, provided corrective action is in place to remedy any failings. Do you share this tolerance level with your client? Do you know that they find it acceptable? What if they would only accept one in 100? Or maybe you accept that your chain of custody scheme can only ensure that 995 out of every 1,000 labelled products are from properly certified sources. Perhaps your customers expect 999, or even 1,000?
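A little arithmetic shows why these tolerance levels matter so much to sampling budgets. The sketch below (pure Python; the rates and sample sizes are invented for illustration, and it assumes independent checks with a constant underlying failure rate) shows how quickly the required sample grows as partners demand tighter tolerances:

```python
import math

def detection_probability(defect_rate, sample_size):
    """Chance that a random sample catches at least one non-conformity,
    assuming independent checks and a constant underlying failure rate."""
    return 1 - (1 - defect_rate) ** sample_size

def sample_size_needed(defect_rate, confidence):
    """Smallest sample that catches at least one non-conformity
    with the stated probability."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))

# A 1-in-20 failure rate is surprisingly hard to rule out with a small sample:
print(round(detection_probability(0.05, 10), 2))  # ~0.40 with only 10 checks
print(sample_size_needed(0.05, 0.95))             # ~59 checks for 95% confidence
print(sample_size_needed(0.01, 0.95))             # ~299 checks if 1-in-100 is the bar
```

Moving from a 1-in-20 tolerance to 1-in-100 roughly quintuples the checking effort at the same confidence level — which is exactly the cost conversation worth having with your value chain partners.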
Without knowing your value chain partners’ acceptance of risk as well as their acceptance of cost, how can you properly design your assurance system? Surely it’s far better to engage them in conversation before something goes wrong, rather than after. But how many schemes are scared of doing so?
And so, now, to sampling. We may be familiar with the idea of sampling giving a certain confidence in the overall result. But when a statistical test provides 95% or 99% ‘confidence’, beware! You need to be sure you can answer ‘confident of what?’ You might be 95% confident that you have identified 100% of non-conformities, or 100% confident that you have identified 95% of non-conformities. Think about it for a moment: these are different things with different potential implications. More realistically, you might aim to be 95% confident that you have identified somewhere between 90% and 100% of non-conformities. In other words, we need to properly understand the difference between confidence levels and confidence intervals (or limits).
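To make the distinction concrete: an audit finding should come with an interval, not a single figure. Here is a small sketch (pure Python, with invented numbers) using the Wilson score interval, one standard way to put a confidence interval around a proportion:

```python
import math

def wilson_interval(nonconformities, sample_size, z=1.96):
    """Wilson score interval for a true non-conformity rate, given what one
    audit sample actually found. z=1.96 corresponds to a 95% confidence level."""
    p = nonconformities / sample_size
    denom = 1 + z**2 / sample_size
    centre = (p + z**2 / (2 * sample_size)) / denom
    margin = (z * math.sqrt(p * (1 - p) / sample_size
                            + z**2 / (4 * sample_size**2))) / denom
    return centre - margin, centre + margin

# Finding 5 problems in a sample of 100 does NOT mean "the rate is 5%":
low, high = wilson_interval(5, 100)
print(f"95% confident the true rate lies between {low:.1%} and {high:.1%}")
```

On these numbers the interval runs from roughly 2% to 11% — the 95% is the confidence *level*, and the 2–11% spread is the confidence *interval*. Quoting one without the other is where the fudging starts.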
Like any established branch of science, statistics and sampling can be incredibly complicated. Yet almost all of it is based on ‘laboratory conditions’ and ‘probability theory’ and almost none of it works very well in the real world of natural product certification. And in passing, the all-too-familiar ‘square root rule’ has no foundation even in lab conditions, but still hangs around like a bad smell.
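The trouble with the square root rule can be shown in a few lines. This sketch (pure Python; the lot sizes and the 5% defect rate are made up for illustration) computes, exactly, the smallest sample giving a 95% chance of catching at least one defective item. That sample size barely grows with the size of the lot — while √N keeps growing:

```python
import math

def zero_defect_probability(lot_size, defectives, sample_size):
    """Exact (hypergeometric) chance that a random sample misses
    every one of the defective items in the lot."""
    return (math.comb(lot_size - defectives, sample_size)
            / math.comb(lot_size, sample_size))

def required_sample(lot_size, defect_rate=0.05, confidence=0.95):
    """Smallest sample giving the stated chance of catching
    at least one defective item."""
    defectives = round(lot_size * defect_rate)
    n = 1
    while zero_defect_probability(lot_size, defectives, n) > 1 - confidence:
        n += 1
    return n

for lot in (100, 1_000, 10_000):
    print(lot, required_sample(lot), round(math.sqrt(lot)))
```

At a lot of 100 the square root rule would have you check 10 items (far too few for this defect rate); at 10,000 it demands 100 (far more than needed). The sample you actually need is driven by the defect rate and the confidence you want, hardly by lot size at all.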
Statistics and sampling should only ever be used as guidance rather than direction. This is very much a place for informed common sense. And be sure you know exactly what you’re trying to find out, as a carefully phrased question is already well on the way to being answered.
An approach being adopted by a number of certification schemes sees sampling based in part on assessment of where problems are most likely to occur. In other words you check more often or more intensely in regions, or products, or with suppliers where a comprehensive and dispassionate risk analysis tells you problems are more likely to arise. This can keep costs down and really strengthen your assurance processes. You might also want to combine this with cyclical sampling, making sure that over a set period as many as possible – or even all – elements of the system are checked. How to do this well is perhaps the subject for another blog.
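One simple way of combining the two ideas is sketched below (pure Python; the site names, risk scores and audit budget are all hypothetical). Every site gets at least one visit per cycle — the cyclical guarantee — and the remaining budget is shared out in proportion to risk:

```python
import math

def plan_audits(risk_scores, total_audits):
    """Allocate one cycle's audit budget across sites: a floor of one visit
    per site (cyclical coverage), with the remainder split in proportion
    to each site's risk score."""
    if total_audits < len(risk_scores):
        raise ValueError("budget too small to visit every site once per cycle")
    plan = {site: 1 for site in risk_scores}          # everyone gets checked
    spare = total_audits - len(risk_scores)
    total_risk = sum(risk_scores.values()) or 1
    quotas = {s: spare * r / total_risk for s, r in risk_scores.items()}
    for site, q in quotas.items():
        plan[site] += math.floor(q)
    # Hand any visits lost to rounding to the largest fractional quotas.
    leftover = total_audits - sum(plan.values())
    by_fraction = sorted(quotas, key=lambda s: quotas[s] - math.floor(quotas[s]),
                         reverse=True)
    for site in by_fraction[:leftover]:
        plan[site] += 1
    return plan

# Hypothetical scores from a comprehensive and dispassionate risk analysis:
risks = {"north": 5, "south": 1, "east": 3, "west": 1}
print(plan_audits(risks, 14))  # → {'north': 6, 'south': 2, 'east': 4, 'west': 2}
```

The high-risk region absorbs most of the budget, yet no site ever drops out of the checking cycle entirely — which is precisely what guards against the ‘unknown unknowns’ discussed next.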
Look at the patterns in your own data: perhaps the nature, number and location of the problems you find out about. This can tell you a lot about how well your assurance system, and any sampling you might be doing, is working. But beware those ‘unknown unknowns’. Maybe your data reveals no problems because you’re not looking in the right place at the right time with the right eyes! As Carl Sagan neatly reminded us, ‘absence of evidence is not evidence of absence’.
Mike Read Associates would be very happy to help with your risk assessment, risk management and sampling strategies, so you can deliver assurance that you, and your partners’ clients and customers, can rely on — and can afford.