Management System Planner – Time to Check
Posted by Mike Stevens – Client Service Director
The essential difference between a management system in theory and a management system in practice is that the latter can prove itself. Peter Drucker’s famous adage that “you can’t manage what you can’t measure” could be appended to the words of the equally stentorian Lord Kelvin, who considered knowledge that you cannot express in numbers to be “of a meagre and unsatisfactory kind”. What keeps you awake at night is not only the knowledge that you might have an accident or be making people sick. It is also the knowledge that hindsight is 20/20, and that what is reasonably practicable is in the eye of the beholder. Management systems are pointless if they aren’t dealing with the essential worry of running a business: what if something goes wrong?
Compliance is an unsatisfactory way to go about this. Right now one of your employees could be doing something that is technically against the law, inviting litigation, or wasteful and dangerous. The idea that you might declare yourself “100% health and safety compliant” is patent nonsense. The management system works best when it measures what is going wrong and proposes improvements that not only rectify the situation but provide the basis for sustained higher levels of performance. We need the CHECK process to do two things:
- measure progress against the targets and objectives we set when planning; and
- gather proactive performance data from the workplace itself, through inspections, safety tours and surveys.
It is from these that we derive some assurance and a good night’s kip. [We have purposefully left out reactive measures of H&S such as accidents and incidents. They’re a topic for a separate article.]
Hopefully you have set up your targets and objectives using the SMART methodology, which means they are measurable and being measured. We can spend more time on this when we look at the ACT process, so I want to concentrate on the second part of the CHECK process. The biggest chunk of workplace performance data comes from inspections, safety tours and surveys: the world of underlying causes. The maximum value comes when critical faculties can be applied to the workplace. You have to know what to look for – unsafe acts and unsafe conditions – and be able to prioritise what you find. Little wonder that hazard spotting and observation are the key skills honed by practitioners over years.

What is surprising is that we don’t always apply the same root cause techniques to inspection as we do to accidents. Whatever you find that is out of place, defective, damaged, missing, open, grimy or littered didn’t use to be like that, and it is probably not supposed to be either. I always remember the first line of that “Safety Differently” mantra: “People are the solution, not the problem”. Unsafe conditions may arise because of unsafe acts, but unsafe acts are rarely their sole cause. There is either some perceived benefit or an objective other than safety being pursued, or else entropy and time have worked upon the system to render what was once good no longer so.
The most frequent observation of a workplace is “poor housekeeping”. This is a perfect example of how management system failures hide in plain sight. Why is housekeeping poor? Is there nowhere to put waste and materials? Is there no time in the schedule to clean up? Does nobody own the task? Is more being delivered than can be stored? Has the layout changed without the housekeeping arrangements changing with it?
Admittedly that’s a lot of questions. But not getting to the bottom of poor housekeeping means being condemned to a lot of failed inspections and inertia.
You’ve heard that all an engineer needs are duct tape and WD-40? If it should move and it doesn’t, give it a squirt of oil. If it moves and it shouldn’t, stick it down. Our need to fix problems quickly and move on is instinctive. Asking “why are things the way they are?” suggests that maybe they ought to be different.
There is a danger here. H&S practitioners are rarely also experts in the tasks that others are carrying out. A certain ingenuity is called for. I usually ask someone to explain to me how something works, or to tell the history of a particular operation, how long it has been running and what the maintenance arrangements are. A narrative account of the circumstances we find ourselves in avoids blame culture, but still preserves ownership. Note the “we” is a critical language choice. You are getting alongside someone, not standing in judgement or opposition to them. Then I might enquire as to what improvements the operator would make if they could. Anyone working with a system, machine or activity for a while will have had thoughts about how it could be bettered.
The rules of evidence in a court govern the quantum (amount), quality and type of evidence, and what is most surprising is that most faith is placed in witness testimony over physical or documentary evidence. When you ask people, they tend to think other people are unreliable whereas data has integrity. The reality is that we ought to set most store by what people say: that way we gain the quickest access to the most information. After that we can test it against other forms of evidence, in the process called corroboration. So about fifteen years ago I made a conscious decision to get away from checkboxes and spend more time talking to people during inspections. If there is data to collect, set up a procedure for them to collect it as part of their job and then audit the results. When you interact with people, though, doing it as a fellow human being gets much better results.
Inspections can and should sometimes be conducted as a surprise. You’re not necessarily trying to catch anyone out, but you are seeking to capture what every portrait photographer craves: candour. Quantum physicists aren’t the only ones to know that whatever is observed is somehow altered by the fact of being observed. This is particularly true of unsafe behaviours. If I suspect someone is behaving differently because I am there, saying “don’t mind me” doesn’t always help; self-conscious behaviours are unnatural ones. Instead I tend to introduce myself and ask them to show me the process they are carrying out.

Behavioural scientists have distinguished our modes of thought into two systems. System 1 thinking is automatic, quick, intuitive, emotional and reactive. System 2 thinking is conscious, effortful, logical and deliberate. Most people get good enough at their jobs to be in System 1 most of the time; in fact you free up time for planning or daydreaming, especially if the work doesn’t require concentration. And bad habits are sometimes the product of the System 1 brain autopiloting around. When you meet people with an enquiry, it challenges them to produce a System 2 explanation for a System 1 activity. When someone has to explain what they are doing to another person, the disparity between how something should be done and how it is actually being done usually becomes apparent to them. In many cases you don’t need to say anything else: the operator will notice the difference and explain it. Be open to the possibility that the operator has found a potentially better way of doing something and be prepared to hear their viewpoint. But also acknowledge that at some point, for any machine, system or activity, a designer probably wrote down precisely how they thought it should be done. What is before us is the opportunity to explore the difference between “work as done” and “work as planned”.
Exploring this gap is the most fruitful area of inquiry for process improvement and performance data gathering. At the very least the operator will get to know why things are supposed to be done differently from how they have been doing them. At best you will have a data set of “work as planned” versus “work as done” to which you can make measurable improvements over time.
Statutory inspections, such as those of scaffolding or excavations in construction, needn’t just be viewed as business as usual. They can be developed into trend data and even aggregated, so that all the inspections across a site provide a heat map: colour-code the different inspection criteria by condition and/or risk in red, amber and green. See below for an example.
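A minimal sketch of that aggregation, in Python. The areas, criteria and ratings below are invented for illustration – not drawn from any real inspection standard – and the rule applied is simply “the worst rating seen wins”:

```python
# Aggregate inspection records into a red/amber/green heat map per
# (area, criterion). All records here are hypothetical examples.
from collections import defaultdict

# RAG ratings ordered by severity, so the worst condition wins.
SEVERITY = {"green": 0, "amber": 1, "red": 2}
COLOUR = {v: k for k, v in SEVERITY.items()}

# Each record: (area, criterion, rating) from one inspection.
inspections = [
    ("Scaffold A", "guard rails", "green"),
    ("Scaffold A", "toe boards", "amber"),
    ("Scaffold A", "guard rails", "red"),      # later inspection found a defect
    ("Excavation 1", "shoring", "green"),
    ("Excavation 1", "edge protection", "amber"),
]

def heat_map(records):
    """Return the worst rating seen for each (area, criterion) pair."""
    worst = defaultdict(int)
    for area, criterion, rating in records:
        worst[(area, criterion)] = max(worst[(area, criterion)], SEVERITY[rating])
    return {key: COLOUR[sev] for key, sev in worst.items()}

for (area, criterion), colour in sorted(heat_map(inspections).items()):
    print(f"{area:12} | {criterion:15} | {colour}")
```

Fed with a site’s worth of inspection records, the same table can be rendered as a coloured dashboard tile per area.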
The aim is to give senior management simple data to look at. They are usually short of time and rarely read long reports, so visual indicators of performance and dashboards can be helpful. [We’ll look in more depth at these tools and the value of audits in the final part of this series, when we come to the ACT part of PDCA and examine how we review performance data and make plans for the future.]
Surveys are the other tool that operates in the margin of proactive performance data gathering. It is as well to get some information about how your people are doing as well as how their workplace is doing. Attitude surveys can be used to take the temperature of the workplace. Positive health benefits can be emphasised through engagement and opinion surveys and reward schemes (such as discounts on the cost of positive health activities), and they can be a great source of ideas and innovations too. A well-planned survey should be short and extract measurable information. Avoid closed questions; instead present scaled ones, so respondents can choose how they feel on a scale with an odd number of options – perhaps five – to allow for neutral and extreme positions. Entering respondents into a prize draw can improve response rates. And it is imperative that you respond to what you learn from a survey. If the data goes nowhere, you have lost your mandate to ask more questions.
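To show why the odd-numbered scale yields measurable information, here is a small sketch of scoring one survey question. The question, scale labels and responses are invented for illustration:

```python
# Score a five-point attitude-survey question: an odd number of options
# gives a neutral midpoint (3) and two extremes (1 and 5).
from statistics import mean

SCALE = {
    1: "strongly disagree",
    2: "disagree",
    3: "neither agree nor disagree",  # the neutral midpoint
    4: "agree",
    5: "strongly agree",
}

# Hypothetical responses, one number per respondent.
responses = [4, 5, 3, 2, 4, 4, 5, 3]

avg = mean(responses)                               # trendable over time
neutral_share = responses.count(3) / len(responses)  # how many sat on the fence

print(f"mean score: {avg:.2f}")
print(f"neutral responses: {neutral_share:.0%}")
```

A mean score per question, resurveyed periodically, gives exactly the kind of trend line a dashboard can carry; a high neutral share is itself a finding worth responding to.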
The last tool in the toolbox is the audit. When I ask business leaders about this, I am always surprised by how often they conflate safety auditing with workplace inspection. There is a belief that an audit can be a surprise activity. It should never be. Audits check that you are doing what you say you are doing. In other words, we ask questions about the foundational aspects of your management system – plans, policies, compliance arrangements – that expose whether they are real or not.
If your organisation is a limited company, you know how much it’s worth simply by multiplying its share price by the number of shares in circulation. But that value changes daily, whereas the real value is calculated by accounting for all the income and expenditure, credit and debt, assets and liabilities. What makes this account reliable? The fact that it has been audited by a trustworthy external party. Except that this external body rarely, if ever, adds up all of those things itself. It may sample them. Properly speaking, the audit is an assurance that the methodology you used for working those things out is correct.
In the same way, a management system audit audits the system, not the inputs and outputs. It is there for verification: the evidence we want to see in support of the conjecture – the hypothesis – that the organisation is being run well. The famous Deming cycle has at its heart the outrageous proposal that perhaps we will one day perfect our system to such an extent that we do not need to check the outputs; we can be assured that they will be correct. In other words, we will have relentlessly driven defect and error out of the system by perpetual tweaking of its design – the foundational conditions, if you will – until there is no error left. The toothpaste tube has been squeezed flat. That’s the ambition.
Next time we will pull all these threads together and look at management review.
The tool known as root cause analysis isn’t only appropriate for accidents. It can be used for any observation: you simply ask “why is that?” about whatever it is that you record.