Rank: New forum user
Greetings, I hope you are all doing well today. I am consulting health and safety experts to identify the critical success factors of health and safety performance benchmarking, for research purposes. Can you please tell me what you believe is the critical success factor of health and safety performance benchmarking, and why? Regards,
Rank: Super forum user
I think we first need to define “critical success factor of health and safety performance benchmarking”. I understand what the individual words mean, but not the phrase as a whole.
Rank: New forum user
Originally Posted by: A Kurdziel I think we first need to define “critical success factor of health and safety performance benchmarking”. I understand what the individual words mean, but not the phrase as a whole.
Thanks for your response. To explain the phrase further: what do you think is important in health and safety performance benchmarking? For example, comparing incident data from different organisations in order to improve performance?
Rank: Forum user
Everyone working their stats out the same way, so it's actually meaningful: using the same formula and including all the same data. Unless it's a level playing field, another company's data is pretty much useless to benchmark against.
1 user thanked thunderchild for this useful post.
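Thunderchild's point about everyone using the same formula is usually handled by normalising incident counts to a common exposure base. One widely used convention (not stated anywhere in this thread, so treat the base as an assumption) is incidents per 100,000 hours worked. A minimal sketch:

```python
def frequency_rate(incidents: int, hours_worked: float,
                   per_hours: float = 100_000) -> float:
    """Incident frequency rate normalised to a common exposure base.

    Comparing raw incident counts across companies of different sizes is
    meaningless; dividing by hours worked puts everyone on the same scale.
    The 100,000-hour base is one common convention among several.
    """
    if hours_worked <= 0:
        raise ValueError("hours_worked must be positive")
    return incidents * per_hours / hours_worked

# Two firms with the same raw count but very different exposure:
small_firm = frequency_rate(incidents=4, hours_worked=200_000)    # 2.0
large_firm = frequency_rate(incidents=4, hours_worked=2_000_000)  # 0.2
```

The choice of base (100,000 hours, 200,000 hours, per 100 employees) varies between schemes and jurisdictions, which is exactly why a benchmarking group has to agree one formula up front.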
Rank: New forum user
Originally Posted by: thunderchild Everyone working their stats out the same way, so it's actually meaningful: using the same formula and including all the same data. Unless it's a level playing field, another company's data is pretty much useless to benchmark against.
Thanks. What do you recommend for effective performance benchmarking?
Rank: Super forum user
Ali, thunderchild has pointed to the problem. It is very difficult to compare like with like EVEN when different organisations are playing by roughly the same rules. You cannot reasonably compare the accident performance of a business that makes bread in vast quantities with an artisan baker: one has almost everything done by automated machinery, where the workers press buttons remotely; the other has the workers at the sharp end.

So, you need to find organisations of comparable size and activity to benchmark against, and that is not nearly as easy as it sounds. Suppose you have two major construction contractors with similar numbers of staff and turnover, both doing new buildings. One might subcontract just about all the more dangerous tasks and in effect operate as a project management company. The other might want to do those tasks in house, partly to have a better handle on what is happening. All else being equal, the second of these contractors SHOULD have higher incidence numbers and rates. Higher does not equal worse!
Rank: Super forum user
Well, as thunderchild says, you have to agree definitions among the group that is benchmarking against each other, in order to ensure consistency.
Rank: Super forum user
OK. When I was involved in benchmarking (involving half a dozen government agencies engaged in scientific work), we first all agreed a set of performance indicators, which were not necessarily the ones that we used internally. We also made sure that the numbers took into account the differences in agency size, so we tended to use figures that were per FTE (full-time equivalent) employee. We also had to take into account that some agencies had a lot of interaction with the public, while others did most of their activities behind 20-foot electrified fences.

Establishing leading indicators was difficult: for example, some agencies wanted to measure how many risk assessments they had done or reviewed, while others regarded that as an opportunity to create more activity for the sake of the stats, without contributing to overall improved H&S.

Finally, the benchmarking was not an end in itself: we did not create a league table where we could say that Agency A was much better than Agency B because of a higher score. Instead, we used this information as a starting point for a discussion as to why some agencies had “better” scores, and how we could share each other's best practice.
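The per-FTE normalisation described above can be sketched in code. The agency names and figures here are invented for illustration; they are not taken from the actual exercise:

```python
# Hypothetical figures: raw incident counts mean little when agency
# sizes differ, so normalise to a rate per 100 FTE before comparing.
agencies = {
    "Agency A": {"incidents": 12, "fte": 300.0},   # small agency
    "Agency B": {"incidents": 12, "fte": 1200.0},  # four times the size
}

def per_100_fte(incidents: int, fte: float) -> float:
    """Incidents normalised per 100 full-time-equivalent employees."""
    if fte <= 0:
        raise ValueError("fte must be positive")
    return incidents * 100 / fte

rates = {name: per_100_fte(d["incidents"], d["fte"])
         for name, d in agencies.items()}
# Same raw count (12), very different normalised rates:
# Agency A -> 4.0 per 100 FTE, Agency B -> 1.0 per 100 FTE
```

As the post says, the normalised rate is only a starting point for discussion about why the scores differ, not a league table.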
Rank: New forum user
Originally Posted by: peter gotch Ali, thunderchild has pointed to the problem. It is very difficult to compare like with like EVEN when different organisations are playing by roughly the same rules. You cannot reasonably compare the accident performance of a business that makes bread in vast quantities with an artisan baker: one has almost everything done by automated machinery, where the workers press buttons remotely; the other has the workers at the sharp end. So, you need to find organisations of comparable size and activity to benchmark against, and that is not nearly as easy as it sounds. Suppose you have two major construction contractors with similar numbers of staff and turnover, both doing new buildings. One might subcontract just about all the more dangerous tasks and in effect operate as a project management company. The other might want to do those tasks in house, partly to have a better handle on what is happening. All else being equal, the second of these contractors SHOULD have higher incidence numbers and rates. Higher does not equal worse!
Thanks for the useful info. I will apply the same principle of comparing “apples with apples” in health and safety performance benchmarking.
Rank: New forum user
Originally Posted by: A Kurdziel OK. When I was involved in benchmarking (involving half a dozen government agencies engaged in scientific work), we first all agreed a set of performance indicators, which were not necessarily the ones that we used internally. We also made sure that the numbers took into account the differences in agency size, so we tended to use figures that were per FTE (full-time equivalent) employee. We also had to take into account that some agencies had a lot of interaction with the public, while others did most of their activities behind 20-foot electrified fences. Establishing leading indicators was difficult: for example, some agencies wanted to measure how many risk assessments they had done or reviewed, while others regarded that as an opportunity to create more activity for the sake of the stats, without contributing to overall improved H&S. Finally, the benchmarking was not an end in itself: we did not create a league table where we could say that Agency A was much better than Agency B because of a higher score. Instead, we used this information as a starting point for a discussion as to why some agencies had “better” scores, and how we could share each other's best practice.
Thanks for the useful info. Is it possible to share the benchmarking report you prepared?
Rank: Super forum user
The way to do meaningful benchmarking as described above is to decide the criteria among the group that is participating, so they are useful to that group. Another group's benchmarking processes and results won't have any relevance, especially if they are operating in a different sector that has different H&S issues.
1 user thanked Kate for this useful post.
Rank: Super forum user
In a previous company we used to benchmark against others in a similar industry (20 of us). The problem was that we worked out from the anonymised info that we were below average. We asked the organisation that collated the info whether they would approach the one with the best stats, to see if they would meet us and tell us what they were doing that we were not.

The answer turned out to be nothing: we were in fact doing more than them. The reason they had better stats was that they didn't pay sick pay, and that they had killed someone a few years before, which was still in everyone's mind.

So benchmarking in itself does not give you anything that useful. You are either going to be better or worse than the average. If you are better, are you just going to pat yourself on the back? And if you are worse, how will you find out what the relevant issues are?

Chris
3 users thanked chris42 for this useful post.
Rank: Super forum user
Yes, that reminds me of a time when I calculated an organisation's carbon footprint and instead of asking "How can we reduce it?" someone asked "How does it compare with other organisations?"
Benchmarking is only any use if you use it to find out the scope for improvement.
So the "critical success factor of health and safety performance benchmarking" is actually being able to use the results to learn and achieve something. That would be accomplished in just the way Chris described: by inquiring into the sources of the differences (although in that case there was no useful end result).
Rank: New forum user
Originally Posted by: chris42 In a previous company we used to benchmark against others in a similar industry (20 of us). The problem was that we worked out from the anonymised info that we were below average. We asked the organisation that collated the info whether they would approach the one with the best stats, to see if they would meet us and tell us what they were doing that we were not. The answer turned out to be nothing: we were in fact doing more than them. The reason they had better stats was that they didn't pay sick pay, and that they had killed someone a few years before, which was still in everyone's mind. So benchmarking in itself does not give you anything that useful. You are either going to be better or worse than the average. If you are better, are you just going to pat yourself on the back? And if you are worse, how will you find out what the relevant issues are? Chris
Thanks, your point is noted.