Mark Bloxidge  
#1 Posted : 01 April 2016 14:35:43(UTC)
Rank: New forum user
Mark Bloxidge

Hi. Does anybody have an example of a safety inspection with a percentage scoring system that they would be willing to share? Many thanks
peterL  
#2 Posted : 01 April 2016 14:44:49(UTC)
Rank: Forum user
peterL

Hi Mark. The percentage scoring systems I have come across are generally flawed unless they are heavily weighted, because they can give a false impression where a mandatory requirement such as a fire risk assessment has not been completed. If the FRA only attracts, say, 10% of the overall final percentage, a site that has not done one at all can still score 90%. So a word of warning with this method, I believe. Pete
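To make the flaw concrete, here is a minimal Python sketch with made-up items and weights (not peterL's actual system): a mandatory item carrying only 10% of the weight can be failed outright and the site still reports 90%.

# Illustrative only - item names and weights are invented for this example.
items = {
    "fire_risk_assessment": {"weight": 0.10, "passed": False},  # mandatory item, failed
    "housekeeping":         {"weight": 0.30, "passed": True},
    "welfare":              {"weight": 0.30, "passed": True},
    "signage":              {"weight": 0.30, "passed": True},
}

score = 100 * sum(i["weight"] for i in items.values() if i["passed"])
print(f"Overall score: {score:.0f}%")   # prints 90% despite the missing FRA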
JayPownall  
#3 Posted : 01 April 2016 15:03:54(UTC)
Rank: Super forum user
JayPownall

peterL wrote:
Hi Mark. The percentage scoring systems I have come across are generally flawed unless they are heavily weighted, because they can give a false impression where a mandatory requirement such as a fire risk assessment has not been completed. If the FRA only attracts, say, 10% of the overall final percentage, a site that has not done one at all can still score 90%. So a word of warning with this method, I believe. Pete
Have to agree - it's the weighting that's key. If you can programme your scoring system to account for 'critical' areas you might have something, but it's probably best to go for a more qualitative approach. (PeterL... I think I know why we agree...! Hope you're keeping well.)
neil88  
#4 Posted : 04 April 2016 09:56:14(UTC)
Rank: Forum user
neil88

If it's for a generally low-risk site (e.g. an office) I wouldn't bother. For higher-risk sites you could consider traffic-light colour coding for the inspection findings, where the determination of the colours is linked to your risk assessment matrix. Then at the end of your report you can show summaries of the numbers of green/yellow/red findings (you can add more colours if required). You can then please management by showing them quarterly statistics of how the number of "reds" in their organisation is decreasing. As usual, such a system requires good checklists and competent inspectors.
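A minimal Python sketch of the end-of-report summary described above, using invented findings - it does nothing more than count each colour.

# Findings are made up; in practice each colour would come from the risk matrix.
from collections import Counter

findings = [
    {"item": "scaffold tagging",   "colour": "red"},
    {"item": "housekeeping",       "colour": "yellow"},
    {"item": "welfare facilities", "colour": "green"},
    {"item": "PPE compliance",     "colour": "green"},
]

summary = Counter(f["colour"] for f in findings)
for colour in ("red", "yellow", "green"):
    print(f"{colour}: {summary.get(colour, 0)}")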
imwaldra  
#5 Posted : 04 April 2016 12:48:10(UTC)
Rank: Super forum user
imwaldra

Another approach is to score the 'gaps' rather than the successes (i.e. the compliances with the standard(s) you are inspecting to). The resulting actions are then about how to improve and get down to zero, rather than concentrating on what percentage you do well. After all, if the standard(s) you are inspecting against are reasonably practicable, there should be zero non-compliances.

It's fairly simple to create generic checklists of statements that cover the full scope of expectations for each relevant subject, with a column for N/A (not applicable in this particular case). Ticking Y (yes) means full compliance with that item; for anything less than full compliance, tick N and record the deficiency. It's then a simple sum to calculate the percentage: Ns divided by Y+N (i.e. all items that apply). The result tends to over-estimate non-compliance because you will have recorded some Ns where there is partial compliance - but that's where the inspectors' comments are vital, as they help to show how much effort will be needed to move each N to a Y.

I've used this approach with success in a variety of situations, most recently for checking potable water systems on offshore installations, where it's important that ALL parts of the system remain uncontaminated, not just a high percentage of it!
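A short sketch of the 'score the gaps' sum described above, with an invented checklist; the item names and figures are illustrative only.

# % non-compliance = Ns / (Y + N); N/A items are excluded entirely.
checklist = [
    {"item": "storage tank inspected in date", "answer": "Y"},
    {"item": "chlorine residuals logged",      "answer": "N"},    # comment: log missing for March
    {"item": "UV unit lamp hours recorded",    "answer": "N/A"},
    {"item": "sample points flushed weekly",   "answer": "Y"},
]

applicable = [c for c in checklist if c["answer"] in ("Y", "N")]
ns = sum(1 for c in applicable if c["answer"] == "N")
print(f"Non-compliance: {100 * ns / len(applicable):.0f}% ({ns} of {len(applicable)} applicable items)")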
RayRapp  
#6 Posted : 04 April 2016 13:07:44(UTC)
Rank: Super forum user
RayRapp

I used one for a company based in construction/utilities some years back, and later plagiarised it for a company in the railway industry where I worked. I thought it was good, albeit a bit complex and probably not suitable for low-risk environments. The scoring provided an overall percentage which could be used to benchmark each site or to track long-term performance. The scoring was 1-10 on a spreadsheet with various fields such as welfare, fire and emergency, excavations, W@H, equipment, etc. Each score came with a comment to help a site manager understand the severity, along with traffic-light colour coding. If I recall correctly, 7 was industry standard (minimum), 8 company standard, 9 exceeding, 10 above and beyond; 6 or below was treated as a near miss, with 4 being Improvement Notice, 3 Prohibition Notice, 2 Prosecution, 1 Fatality. Each cluster had a recommendation for action, i.e. rectify within 7 days, rectify immediately, or stop work. I believe it did focus the minds of the targeted individuals.
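A rough Python sketch of how that 1-10 banding could be coded up; the pairing of score clusters to recommended actions below is a guess at the layout, not RayRapp's original spreadsheet.

# Bands and actions are assumptions based on the description above.
BANDS = [
    (9, "Exceeding / above and beyond",              "No action"),
    (8, "Company standard",                          "Maintain"),
    (7, "Industry standard (minimum)",               "Monitor"),
    (5, "Near miss",                                 "Rectify within 7 days"),
    (4, "Improvement Notice level",                  "Rectify immediately"),
    (1, "Prohibition / Prosecution / Fatality level", "Stop work"),
]

def interpret(score):
    for threshold, comment, action in BANDS:
        if score >= threshold:
            return comment, action
    raise ValueError("score must be 1-10")

print(interpret(6))   # ('Near miss', 'Rectify within 7 days')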
peterL  
#7 Posted : 05 April 2016 16:09:24(UTC)
Rank: Forum user
peterL

Hi Jay (post #3), hope you are well also. Pete
gramsay  
#8 Posted : 05 April 2016 16:50:16(UTC)
Rank: Super forum user
gramsay

I have a current system I inherited which results in a percentage. I really dislike it and am in the middle of changing it - the new model uses exactly the methods suggested by Neil at #4 and Ian at #5. I've worked with site teams to create something which they understand better and which gives better tracking of issues. (None of it is new, by the way - these are all elements other people are using; I'm not claiming to have produced anything original!)

In a system where good results add up to give a final score (weighted or otherwise) it can be quite hard to spot where the real problems are - which is what you're actually interested in. If you assign positive scores to everything that's good, and lower scores where there is an issue to resolve, it can be hard to realise that a site scoring 88% actually has twice as much needing to be fixed as one scoring 94% - they both look pretty good. The system I inherited also paid too little attention to the reasons for any scores less than 100%; the reports did detail these reasons, but they weren't separated out as issues to track and follow up.

What we'll do in future is assign each issue an A, B or C (which come with pre-agreed deadlines for close-out) and track these via a database which already holds all of 2015's inspections and the 331 issues raised in them. The tracking and close-out worked well last year, as well as giving much better information for analysis and reporting.

To compare overall inspection results we rejected the idea of assigning number values to A, B and C (e.g. a site with 1 A might score better than one with 4 Bs once you've added them up). Instead we realised that since As will only be given for fairly "Stop this now, someone might die" situations, a system like an Olympic medal table is better - one A is a more serious result than any number of Bs or Cs.

The above is all being trialled at the moment, but seems to make sense to people. We'll review in six months and decide whether to make it permanent.
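A quick Python sketch of the 'Olympic medal table' comparison, with made-up sites: ranking is by count of As first, then Bs, then Cs, so one A always outweighs any number of Bs or Cs.

# Site names and counts are invented for illustration.
sites = {
    "Site 1": {"A": 0, "B": 4, "C": 2},
    "Site 2": {"A": 1, "B": 0, "C": 0},
    "Site 3": {"A": 0, "B": 1, "C": 5},
}

# Tuple comparison is lexicographic, so a single A outranks any number of Bs or Cs.
worst_first = sorted(sites, key=lambda s: (sites[s]["A"], sites[s]["B"], sites[s]["C"]), reverse=True)
print(worst_first)   # ['Site 2', 'Site 1', 'Site 3']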
firesafety101  
#9 Posted : 05 April 2016 20:11:49(UTC)
Rank: Super forum user
firesafety101

I developed one many years ago for inspections at shopfitting sites. I had a check sheet of items I looked at, and everything was worth one point if OK, nil if not. I only looked at the work actually being carried out, so for instance if there was no work at height there were no scores for that - it was simply crossed out. At the end of the inspection I added up all the points scored and all the possible points, then calculated the scored points as a percentage of the possibles. Hope that all makes sense. I know it isn't the be-all and end-all, but it worked to generate enthusiasm among the different contractors working for the Client. The first contractor to reach 100% was awarded a case of beer by the Client.
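A small Python sketch of that calculation with an invented check sheet: crossed-out items are excluded from both the scored and the possible totals.

# Check sheet entries are made up; None marks an item crossed out as not applicable.
checklist = [
    {"item": "housekeeping",     "ok": True},
    {"item": "fire exits clear", "ok": True},
    {"item": "work at height",   "ok": None},    # crossed out - no W@H on this visit
    {"item": "electrical leads", "ok": False},
]

applicable = [c for c in checklist if c["ok"] is not None]
scored = sum(1 for c in applicable if c["ok"])
print(f"Score: {100 * scored / len(applicable):.0f}%")   # 67%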
allanwood  
#10 Posted : 06 April 2016 12:33:32(UTC)
Rank: Forum user
allanwood

We currently use iAuditor and have developed a pro-forma based on our requirements, which includes a percentage scoring system that can be weighted where necessary. It works well and is very well received by our site teams. The key is to trial the inspection format first and iron out any glitches it may have; it's never going to be a finished article, so be prepared to amend the document over time to suit your needs.