Key Performance Indicators (lateral thinkers required!)
Posted By Richard Forster
I recently attended a fairly lively seminar where a senior HSE bod stated 'any fool can say x amount of inspections or y amount of investigations, but what did they actually achieve!' Quite right too...
If you are involved at all in setting performance indicators and are willing to swap views, please contact me. Lateral thinkers are welcome; pedantic so-and-sos are not!
Posted By Ian Waldram
A good challenge!
Neil Budworth presented a good paper in this area at the Harrogate Conference a couple of years ago - but I'll let him speak for himself!
Here are some of my experiences:
1) In June 1997 the IADC organised a good seminar in Aberdeen on performance indicators. All present agreed that measuring only Lost Time Injuries was hopeless, but there was little agreement about what to do instead, though I presented some ideas and suggested a 'high-level' indicator of "total incidents + injuries investigated / total medical-treatment injuries" - which is positive, encourages 'near-hit' reporting, uses less severe injuries and shouldn't require extra effort to collect (a rough sketch of the arithmetic follows at the end of this post). When the offshore industry launched the Step Change initiative the following year, they spent some effort trying to agree the right indicator to measure a 50% improvement against, but in the end decided on Lost Time Injuries!
2) The next edition of the IOSH Journal (due in a month or so) includes a paper of mine on lessons learned from Safety Case regimes - among them a lesson about how a leading indicator of averaged audit scores seemed to correlate well with process hazards, but NOT with injuries.
3) There is a good 'trick' in the Government's 10-year improvement targets set in 'Revitalising Health & Safety', which some may have missed. The 'lost time' reduction is NOT in the number of injuries, but in total days lost, i.e. it includes severity, which I think is clever. Will those organisations that use LTA data follow suit, I wonder?
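A minimal sketch of the arithmetic behind points 1) and 3), with hypothetical annual figures (nothing here comes from the seminar or the targets themselves):

# 1) The suggested high-level indicator:
#    total incidents + injuries investigated / total medical-treatment injuries
investigated = 42          # incidents and injuries actually investigated this year
medical_treatment = 30     # total medical-treatment injuries this year

# Higher is better: it rewards investigating more events (including near-hits)
# relative to the count of less severe injuries.
investigation_ratio = investigated / medical_treatment
print(f"Investigation ratio: {investigation_ratio:.2f}")

# 3) A severity-weighted 'days lost' measure: each injury contributes its
#    number of lost days, not just a count of 1.
lost_days_per_injury = [3, 1, 14, 60, 2]   # one entry per lost-time injury
total_days_lost = sum(lost_days_per_injury)
print(f"Injuries: {len(lost_days_per_injury)}, total days lost: {total_days_lost}")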
Posted By Jerry Hill
I agree with the previous respondent. Neil Budworth allowed me to use his presentation (notes and all) when I had to present on this very subject. His presentation is totally objective, highlighting the arguments for and against all the proactive and reactive performance indicators. (Now this is the bit where it gets a bit like a Superman movie) "NEIL... IF YOU'RE OUT THERE, WE NEED YOUR HELP!"
Posted By Bruce Sutherland
Interesting challenges!! It needs sorting, but... usually, on an organisation basis, populations are too small for an LTI/RIDDOR measure to be particularly effective, especially when one takes all the other 'soft' factors into account, i.e. poor morale, remuneration, piece work, higher pay for industrial absence rather than being sick. Maybe what we need to be developing is a culture benchmark - but then you have the inter-industry problem, e.g. agriculture v nuclear, i.e. is it good performance within the sector or versus other sectors? No doubt someone will publish a learned paper on it all... Talking of KPIs, the construction ones are interesting, as they appear to have chosen RIDDOR figures - uncorrected for the Labour Force Survey - as the KPI, rather than splitting by sector.
Keep up the thinking
Bruce Sutherland
Posted By Stuart Nagle
I am sure that there is a lot of mileage in what has been discussed above; however, I do not believe that injuries, near misses and lost time are all there is in respect of key performance indicators!
Just look at that title: 'KEY PERFORMANCE INDICATORS'
Whilst there is a place for injuries, the reporting of near misses and of course lost time in the workplace, there is certainly much to consider in other areas.
These may be:
Evidence to support that training has been targeted correctly, has been undertaken and has been effective (reductions can be seen in all of the above areas);
Safe systems of work have been formulated and written, personnel have received training, and the systems are employed (evidence again required) and reviewed (evidence);
The workplace itself has been investigated, and operations and safety requirements observed, reported, actioned and reviewed (evidence again).
Without wishing to go on and bore everyone rigid, it can plainly be seen that there are many areas that need to be assessed and should quite rightly be included when talking about KEY PERFORMANCE INDICATORS.
Following on from this comes the establishment of 'OWNERSHIP' by personnel of their own 'KEY PERFORMANCE INDICATORS' through establishing 'SERVICE LEVEL AGREEMENTS'. This can be achieved not only between employer, contractor and supplier, but also between employer and personnel, and department and personnel.
This is a much wider area of interest than simply looking at reporting accidents, lost time and near misses. One should try and look at the BIG picture and encompass all areas.
Regards...
Stuart Nagle
Posted By Neil Budworth
Firstly, if anyone would like a copy of my paper, presentation etc. they are very welcome; just e-mail me at budworth-n@nsk.com.
The weakness of the paper is in relation to key performance indicators.
I agree with pretty much all that Stuart has said. Ultimately, what you choose as your key performance indicators depends on the state of development of your safety system; undoubtedly you will need a range of indicators.
At a site or group-wide level you will need the standard things like accident numbers, as this allows easy benchmarking. You can get more sophisticated if your data collection system allows it, but I feel it is wise to progress slowly at this level, otherwise the system can get confused and fall into disrepute.
At a site or departmental level, what you are really after are indicators that show that the line managers are embracing safety and have taken action to improve things.
So you may use things like training undertaken or, better still, the percentage of the target group trained on a certain topic; equally you could use the number of inspections undertaken or, better still, the scores of the inspections. Each step down gives you richer data. Focusing on the scores of inspections (against a standard checklist, which you change when the scores get too good!) means you can see a) whether the managers are doing their inspections and b) whether they are actually actioning any of the non-conformities afterwards.
Better still, identify your key risk areas, determine what action is needed by the local management to achieve an improvement, and set key indicators on that.
For example, in the engineering industry dermatitis is an issue. The key elements in controlling it are: staff awareness, availability of protection, knowledge and control of cutting fluid strength, and early detection. So the key performance indicators at a departmental or site level may be the percentage of systems fully assessed, the percentage of at-risk staff trained, the frequency of cutting fluid checks, the number of cases of dermatitis reported and the number of skin examinations undertaken.
At a group level these would not be appropriate indicators, but at a local level they will quickly allow you to see if a key risk is being well managed.
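A minimal sketch of how such a departmental set might be recorded (the counts and field names are illustrative assumptions, not figures from Neil's paper):

# Hypothetical departmental dermatitis KPIs for one reporting period.
dermatitis_kpis = {
    "systems_fully_assessed_pct":    100 * 18 / 20,  # 18 of 20 systems assessed
    "at_risk_staff_trained_pct":     100 * 45 / 60,  # 45 of 60 at-risk staff trained
    "cutting_fluid_checks_per_week": 3,              # frequency of fluid-strength checks
    "dermatitis_cases_reported":     2,
    "skin_examinations_done":        55,
}

# Report all five side by side: no single number, a small balanced set.
for name, value in dermatitis_kpis.items():
    print(f"{name}: {value}")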
If you set up this kind of system for your main risks, you will get meaningful data which will allow you to target the areas that need support and improvement. The key issue is getting the site management to agree and buy into the targets, so that they are reported back and discussed at site management meetings in the same way as any other business indicator.
The next level up may require the site to report progress against its own targets, based on the KPIs it has identified, plus accidents etc. This shows a) that key risks have been identified and there is a plan to monitor them, and b) whether this is having an effect on actual performance in terms of accidents and ill health.
Sorry, I seem to have gone on a bit. Any comments?
Best Regards
Neil
Posted By Garry Nabbe
The inspectorate in Australia were faced with the same dilemma of measuring outcomes when conducting inspections, investigations, complaints etc. The result was that all performance indicators were reviewed and a client focus was developed that had outcomes identified in time. The result was something like:
1) Number of investigations completed within timeframes;
2) Number of complainants advised of outcomes within 14 days of lodgement;
3) Number of investigations leading to prosecutions within 12 months.
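A minimal sketch of how those three counts might be computed from case records (the dates, the 90-day timeframe and the records themselves are all invented for illustration):

from datetime import date

TIMEFRAME_DAYS = 90   # assumed service standard for completing an investigation

# (lodged, completed, complainant advised, led to prosecution within 12 months)
investigations = [
    (date(2001, 1, 10), date(2001, 2, 1),  date(2001, 1, 20), False),
    (date(2001, 3, 5),  date(2001, 7, 30), date(2001, 3, 30), True),
]

completed_in_time = sum(1 for lodged, done, _, _ in investigations
                        if (done - lodged).days <= TIMEFRAME_DAYS)
advised_in_14_days = sum(1 for lodged, _, advised, _ in investigations
                         if (advised - lodged).days <= 14)
led_to_prosecution = sum(1 for *_, prosecuted in investigations if prosecuted)

print(completed_in_time, advised_in_14_days, led_to_prosecution)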
That was in my past life, and I can now say that the indicators used were closely linked to what the politicians wanted to hear, or what would help the bureaucrats achieve their organisational outcome performances, rather than what would accurately reflect the state of health and safety.
Now, thinking laterally, I believe that the only way forward is to have strong ties to the cost of an injury, illness, incident or mishap. If each injury were reflected as a real cost - using a "table of maims" coupled with a lost-time cost formula, productivity losses, equipment damage and environmental damage costs, offset with bonus incentives for rapid return-to-work programs - then there would be incentives to create a total work environment that considered prevention, rehabilitation and return to work, analysis of events, minimisation of risks, and risk assessment.
E.g. the cost of an injury is $X (table of maims) + $Y (lost production) + $Z ($A times number of days off work) + $M (cost of restoring productivity). To compare data from year to year, you apply the consumer price index or inflation measures to achieve a like comparison.
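A minimal sketch of that formula, assuming a flat daily rate for time off work (all dollar figures and CPI values below are hypothetical):

def injury_cost(maims_amount, lost_production, daily_rate, days_off, restoration):
    # cost = $X (table of maims) + $Y (lost production)
    #        + $Z (daily rate x days off work) + $M (restoring productivity)
    return maims_amount + lost_production + daily_rate * days_off + restoration

cost_1999 = injury_cost(5000, 12000, 180, 15, 3000)

# To compare year on year, deflate or inflate by a price index.
cpi = {1999: 122.3, 2001: 130.7}
cost_in_2001_terms = cost_1999 * cpi[2001] / cpi[1999]
print(round(cost_1999), round(cost_in_2001_terms))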
Even the maintenance department can now set goals to contribute to limiting the impact of an event, by rapid restoration of productivity.
It is important, with all collected data, that if you are comparing one set with another the performance indicators are alike: you cannot compare apples with oranges. That is what makes the development of performance indicators such a simple task (ha ha!)
Posted By Philip McAleenan
Richard, I posted the following last year, which may be of interest now:
BS 8800, a guide to occupational health and safety management systems, simplifies the process. It talks about setting targets, which "are the detailed performance requirements that should be achieved by designated persons or teams in order to implement the [OH&S] plan. The plan should specify who is to do what, by when and with what result".
Thus a key objective may be that all confined space entry workers are trained, assessed and certified fit and competent to use RPE within 6 months.
The results or outcomes expected will be documentation of who has been trained, when the training took place and the assessment results, including certificates.
A plan will be drawn up to achieve this objective and within that plan will be targets such as:
prepare a list of all those to be trained,
obtain evidence of medical and physical fitness to use RPE,
schedule training dates,
appoint trainer/assessor,
notify all listed employees of training dates,
provide training venue, equipment etc.,
carry out training and assessment,
obtain assessment documentation and certificates,
review the outcomes and prepare any necessary remedial actions (e.g. where one or more failed any element of the fitness or assessment procedures).
Thus we can see that by setting a clear objective or standard, namely that all confined space workers are fit and competent to use RPE, the performance requirements to achieve that objective follow. And by monitoring each performance requirement in turn we can effectively measure progress towards achieving the key objective.
And of course, once the objective has been achieved, a new objective can be agreed - for example, maintaining the level of competence and fitness to use RPE amongst all confined space workers - and corresponding targets to achieve this drawn up.
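A minimal sketch of monitoring each performance requirement in turn, as described above (the target names are abbreviated from the list and the completion flags are invented):

# Hypothetical status of the targets in the RPE training plan.
targets = {
    "list of trainees prepared":        True,
    "fitness evidence obtained":        True,
    "training dates scheduled":         True,
    "trainer/assessor appointed":       True,
    "employees notified":               False,
    "venue and equipment provided":     False,
    "training and assessment done":     False,
    "documentation/certificates held":  False,
    "outcomes reviewed":                False,
}

# Progress towards the key objective = targets completed so far.
done = sum(targets.values())
print(f"Progress towards objective: {done}/{len(targets)} targets "
      f"({100 * done / len(targets):.0f}%)")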
Regards, Philip
Posted By David Rosenfeld
Some more lateral thoughts on key safety indicators: (1) it's not what you choose, it's how you use them; (2) why correlation with injury is not so important; (3) indicators are driven by your plans for change.
(Working definition of key performance indicators: relatively frequent, low-cost gauges of programme performance - distinct from fundamental measures of success.)
(1) it's not just what you choose, it's how you use them.
Indicators are typically used by others in and out of the outfit. If the aim is positive safety, one concern is the potential of indicators to reinforce a controlling approach to managers that may be negative and hindering, rather than encouraging self-examination and stimulating managers to improve beyond what's strictly measured.
The full management of safety is a big undertaking, with hundreds of focal activities - objectives, systems, standards etc. for each of dozens of main risks. A manageable shortlist of indicators - say 2-20 - will cover only a fraction of the underlying activity. If indicators are used negatively - to "name and shame" underperformers, to dock performance pay, or to coerce change through controlling appraisals - managers will reluctantly comply and play the numbers game: maximise just the indicators with a minimum of effort, and tick-box "wangles" if necessary.
(2) Pick indicators that correlate with Performance rather than Injury.
Correlation of high-level indicators with injury is currently impractical (see Stuart Nagle's posting above). Some indicators correlate well with harm (e.g. near misses, behavioural violations in Neil Budworth's paper). I'd say the best predictors of harm are usually "downstream", close to the immediate causes of injury.
The problem is that safety wisdom seeks to remedy "upstream" organisational factors - management framework, funded advice. These may correlate poorly with injury if succeeding stages fail ("middle stream" factors, e.g. management training, risk assessments, self-inspections, managers' time; or "downstream" factors, e.g. failure to apply safety training) but are still worth reporting progress on.
One answer is selective tracking of all stages - "upstream", "middle stream" and "downstream" - to allow some diagnosis of where any underperformance may lie. Correlation with performance is still important: a good upstream indicator should still rise and fall with improved performance. E.g. "number of courses run" is ambiguous - it could go up if a 3-day course is split into 3 separate day courses. "Person-days of safety training" correlates better with this facet of performance.
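A minimal sketch of that ambiguity, using invented course records: splitting one 3-day course into three day courses triples "courses run", while "person-days of safety training" stays put:

one_course = [{"days": 3, "attendees": 10}]       # a single 3-day course
split_into_days = [
    {"days": 1, "attendees": 10},                 # the same training delivered
    {"days": 1, "attendees": 10},                 # as three separate
    {"days": 1, "attendees": 10},                 # day courses
]

def person_days(records):
    # Person-days = course length x attendees, summed over all courses.
    return sum(c["days"] * c["attendees"] for c in records)

print(len(one_course), person_days(one_course))            # 1 course, 30 person-days
print(len(split_into_days), person_days(split_into_days))  # 3 courses, still 30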
(3) Indicators are affected by your plans as well as your safety systems.
If you can't implement a full system in one shot, the indicators you use should relate to the priorities for the current round of change: is it focusing ownership? Implementing a new policy? Making interim improvements in high-profile risks pending a system upgrade? Establishing new system elements, e.g. audit, participation, behavioural intervention?
It does make a difference. In our hospital's first set of proactive indicators (we'd had accident stats for years) we dropped "local policies in place" pending a new HSG65-style policy due the following year. As a second example, we dropped "safety appraisals done" while concerns about the positive value of the current system were explored.
Any Comments?
When I first looked at indicators they seemed like a python on a string: you don't get it all in one pull, and you get more than you bargained for. Is anyone interested in me posting more lateral thoughts - (4) when to add apples and oranges, (5) not "anything goes", etc.? Do say; I don't wish to bore.
Posted By Barry Wilkes
I may have misunderstood, but was the HSE bod looking at the effectiveness of their interventions with companies? I agree that the number of inspections is simply what it says: "the numbers game".
I agree with what a number of other people have said: looking at more "management issue" subjects is a more useful thing to do. I also agree with Philip about using an annual action or development plan to monitor the company.
I am looking at intervention strategies at the moment, particularly for firms that do not accurately record accidents or sickness absence. The only real things you can measure are the actions taken by the company after the intervention, together with some form of measurement of improvement in culture/motivation.
I cannot, however, see the government moving away in a hurry from the numbers game to a more in-depth but possibly hands-off approach.
Regards
Barry
Posted By Tim
Richard,
I've just discovered this interesting discussion. Glad to see there are some efforts being made to nurture decent debates. There is something else you and your fellow contributors might want to think about when discussing this issue. I have found on many occasions that there is an obsession with measuring things, no matter what those things are. Surely, if you want to measure, then you need to measure that which is critical to your success. Therefore, to begin, you need to:
1. define what you would consider to be a success (e.g. zero fatalities), then you need to
2. have the necessary systems and resources in place to steer you down that path. After that it is a simple matter of
3. monitoring compliance and evaluating the effectiveness of your actions.
Try to forget the management speak and concentrate on what is important. You might be interested in this short article written by one of the McAleenans (I don't know which one). You can find the whole article on their website. It details an OSHA initiative to concentrate on the critical hazards in an effort to reduce fatalities and major injuries on US construction sites.
“In 1994 OSHA introduced their focused inspection initiative as a way of recognizing responsible contractors and the efforts they had made towards building a safer workplace. This they have done through the development and implementation of effective safety and health programs. The benefits of the initiative are obvious for both OSHA inspectors and the participating contractors. The focus of the inspections is on the leading hazards that cause 90% of injuries and deaths in this industry (below). For OSHA this means that their measure of success has moved away from measuring the number of completed construction inspections. Their new measure of success is the level of improvement in construction safety and health. The contractor benefits in that OSHA inspectors on focused inspections are not required to conduct an inspection of the entire project. On a cautionary note to contractors; do not think that violations outside of the leading hazards will be ignored. Citations will be issued for any serious violations discovered during the walk-around inspections. 'Other than serious' violations not immediately abated will also be cited. For anyone thinking that this is the easy option please note that if conditions are such that the inspector determines the safety program is ineffective the focused inspection will be terminated and a comprehensive inspection conducted. The leading hazards are:
* Falls,
* Struck by,
* Caught in/ between, and
* Electrical.
To qualify for focused inspections you will need a project safety and health program that meets the requirements of 29 CFR 1926 Subpart C General Safety and Health Provisions, and have a designated competent person responsible for and capable of implementing the program.”
Food for thought!
Tim
Posted By Garry Nabbe
This is a truly interesting forum, and although I have put my tuppence in once already, I thought that a site I am aware of may be of interest.
When you get to the site, click on search and punch in "performance indicators", and you get a list of about 200 papers written on the subject, with a major focus on getting away from negative indicators such as "Lost Time Injuries".
The site is:- http://www.nohsc.gov.au/
PS. It would be nice if you could resist the temptation to measure, but the bean counters would have you for breakfast the minute you need some cash to do something.
Have fun!!
Posted By Adrian Watson
As we do not fully understand what makes safety, we cannot measure safety perfectly. However, with each attempt to measure it we discover more, and therefore gain a greater understanding of the subject. Not measuring safety performance is therefore not really an option.
Good discussions about the pros and cons of different measures are found in MORT Safety Systems (Johnson, 1980), Human Risk and Safety Measurement (Glendon and McKenna, 1995), Analysing Safety System Effectiveness (Petersen, 1996) and Prevention of Accidents Through Experience Feedback (Kjellen, 2000).
All the authors generally agree that rare events and loss-based measures are not very good for SMEs. They also concur that a broad range of measures is needed. I believe that the best measures will be those based on an analysis of the needs of your own organisation.
Posted By Richard Forster
Thanks for the responses. Unfortunately, I have been taken off KPIs to work on something called a 'Business Plan' - time for a new thread, maybe! So I started the wheel in motion but did not help to make it turn...
My colleagues are now devising KPIs, mostly around predicted areas (how many, to what effect, who, why etc.). We are covering health and safety aspects and team-related aspects (e.g. turnaround times for specific areas - Woolf, F2508s). So that's it - pick and mix! Anything concrete will be passed on. Thank you for playing, and Merry Christmas!