Lishka0  
#1 Posted : 19 April 2016 08:42:26(UTC)
Rank: Forum user
Lishka0

Good morning all, I wondered whether I could have your views on something that has been proposed to us for our risk assessments. We have been finding that some of the historic risk assessments are not up to standard, and to make them more meaningful to the trades on site it has been suggested that we remove the scoring from risk assessments altogether. The reasoning is that some of the trades find the scoring confusing: lots of inconsistent scoring, different matrices used, different people scoring risk differently in their own opinion, and so on. So we have had some in-depth discussions questioning why we need it in there at all, as the guys at the 'coal face' really just need to know what the hazards are and what they need to do to carry out the work safely. There are also differences of opinion on this, as some specialists have argued: how can you show you have reduced the risk if you don't score, and how do you know what the highest risks are so you can deal with them first? I would welcome people's views on this, as I have never thought about doing it this way. The feedback we have had from the trades is that they find this method easier to work with on site, as it basically shows them a quick and easy way of working safely (almost their method statement, really). Pros and cons (and meeting legal requirements), please discuss!
Xavier123  
#2 Posted : 19 April 2016 09:08:27(UTC)
Rank: Super forum user
Xavier123

There is no legal requirement to have a scoring system. End of. I agree that to a layman they can be confusing and unnecessary. From a strategy point of view they may assist in the prioritisation of your risks, but most of the time a competent H&S professional won't need them to decide whether something more should or shouldn't be done in a given situation. Personally I don't like 'em, don't like to see them and don't use them. I stick with high, medium and low and apply professional judgement. If I needed to spend time comparing risks then I'd make it a specific piece of work. Each to their own though; they have their advantages, but I think it's person and situation specific as to what those advantages are.
simon73  
#3 Posted : 19 April 2016 09:23:59(UTC)
Rank: Forum user
simon73

Xavier123 wrote:
There is no legal requirement to have a scoring system. End of. I agree that to a layman they can be confusing and unnecessary. From a strategy point of view they may assist in the prioritisation of your risks, but most of the time a competent H&S professional won't need them to decide whether something more should or shouldn't be done in a given situation. Personally I don't like 'em, don't like to see them and don't use them. I stick with high, medium and low and apply professional judgement. If I needed to spend time comparing risks then I'd make it a specific piece of work. Each to their own though; they have their advantages, but I think it's person and situation specific as to what those advantages are.
Agreed. I have always found that non-H&S people find a traffic light system - red, amber, green (high, medium, low) - far more user-friendly.
WatsonD  
#4 Posted : 19 April 2016 09:44:33(UTC)
Rank: Super forum user
WatsonD

I agree too. A traffic light system, or simple terms such as high, medium or low, are much easier. I think that numbers are put in a lot of the time to make it look like a much more detailed analysis has been made when it hasn't, and all it serves to do is make the RA less user friendly. The trouble with using numbers is that you then have to provide a key to explain their relevance.
Ron Hunter  
#5 Posted : 19 April 2016 12:06:22(UTC)
Rank: Super forum user
Ron Hunter

No legal requirement for scoring matrices, and no legal requirement to issue risk assessments to the workforce. R/A is a management tool - not a work instruction. Risks reduced SFARP, suitable and sufficient, frequent review etc. 'Evidence' that process of assessment has been applied should be by authorised signature of accountability - this is a legally discoverable document! We've had similar discussion here on several threads. Beware of "paper safety."
Invictus  
#6 Posted : 19 April 2016 12:35:25(UTC)
Rank: Super forum user
Invictus

I agree, too much paper is produced for the smallest risk and often not enough when it is required. I use the matrix system, then I simply put High, Medium or Low next to the number.
gramsay  
#7 Posted : 19 April 2016 13:32:53(UTC)
Rank: Super forum user
gramsay

Ron Hunter wrote:
No legal requirement for scoring matrices, and no legal requirement to issue risk assessments to the workforce. R/A is a management tool - not a work instruction. Risks reduced SFARP, suitable and sufficient, frequent review etc. 'Evidence' that process of assessment has been applied should be by authorised signature of accountability - this is a legally discoverable document! We've had similar discussion here on several threads. Beware of "paper safety."
Great reply, completely agree.
JulieBrown  
#8 Posted : 19 April 2016 14:08:27(UTC)
Rank: New forum user
JulieBrown

Hi. Whilst I agree that it is not always needed and is often a big stumbling block to getting the risk assessments done, especially for the layman, the IOSH Managing Safely 4-day course is based very much on that method, with no thought of changing it in sight, so perhaps some of us H&S professionals should talk to the course writers? I personally do like the scoring system, because you have a value against each risk and I believe you can then justify whether adding extra risk controls actually brings the risk rating down. It is much more of a stab in the dark without it, and in my experience when using red/amber/green the 'layman' goes for the middle, 'amber'. The risk controls and safe systems of work, though, are of course the important tool to get out to the workforce, so I can see both sides of the argument!
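As a minimal sketch of the 'before and after' scoring described above: the 5x5 scale, the band thresholds and the example scores here are all hypothetical illustrations, not taken from IOSH or any particular scheme.

def rate(likelihood, severity):
    """Multiply two 1-5 scores and band the result (hypothetical bands)."""
    score = likelihood * severity
    if score >= 15:
        return score, "High"
    if score >= 6:
        return score, "Medium"
    return score, "Low"

# Illustrative scores only: before controls the risk is judged likely and serious.
print("Before controls:", rate(likelihood=4, severity=4))   # (16, 'High')

# After extra controls (e.g. training, guarding) the likelihood is judged lower,
# which is how the "bring the rating down" argument is usually presented.
print("After controls: ", rate(likelihood=2, severity=4))   # (8, 'Medium')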
piobaire  
#9 Posted : 19 April 2016 15:37:42(UTC)
Rank: Forum user
piobaire

As per the previous post, there is no legal requirement to categorise risk. Having worked on both sides of the equation here, I can say that I do not find ranking the hazards particularly useful. What goes into the risk assessment is what the person doing the job needs to know to do the job safely, and as such all the controls in the risk assessment have to be implemented to do the job safely. In my experience, categorising the risk at the point of execution is not of benefit. Are the controls treated differently depending on the categorisation of risk? If a control identified in the risk assessment is not in place, the job isn't done, so the categorisation makes no odds. Hope this is useful.
jwk  
#10 Posted : 19 April 2016 15:43:07(UTC)
Rank: Super forum user
jwk

Agree, no need for numbers, or for sharing RAs with workers. The requirement is to share 'relevant information', which is usually a safe system of work. You can even get rid of the risk level altogether; replace it with the question 'Is it safe?' and take it from there using the same sort of language. John
imponderabilius  
#11 Posted : 19 April 2016 16:18:44(UTC)
Rank: Forum user
imponderabilius

RA can be either quantitative (matrix) or qualitative (who, what, by when, etc.). Both can be useful but have different applications. I think the biggest mistake companies (including mine) make with regard to managing RAs is going either this way or that, while both kinds of RA can be used successfully for different processes.
jontyjohnston  
#12 Posted : 20 April 2016 14:16:22(UTC)
Rank: Super forum user
jontyjohnston

"General" risk assessment does not require scoring. I had this very discussion with a Principal Inspector a week ago, and they directed me to this link: http://www.hse.gov.uk/fo...complaints/riskmodel.pdf What's a 3 to me is a 4 to someone else, etc. I don't like scoring matrices, but I use them at work because our corporate risk management framework has them - I have no choice :( More specialised forms of assessment are quantitative in nature, such as HAZOP/FMEA etc., but they still require some qualitative element in interpreting the results, which is why they are normally done in a team. J.
chris42  
#13 Posted : 20 April 2016 15:06:21(UTC)
Rank: Super forum user
chris42

I agree with other posters that there is no need for a numbered matrix. In the last place I worked we did use a matrix, but in my current company I don't.

I was, however, discussing these two ways of recording an assessment with a manager. I explained both and showed examples, saying that he would be getting some training soon on how to do risk assessments. He responded that he could see that the workers, if they had to look at them, would be slightly better off with just High, Med and Low, but if he had to do them he liked the matrix, as he felt it helped take him through the process and made it easier to decide whether something was High, Med or Low. He seemed able to follow what was going on in the assessment logically, without prompting from me. He then said that, as he knew I intend to completely rewrite all our assessments, he would want mine to have a matrix, so he could see whether it was the likelihood or the severity that I considered high or low. He also said he could easily see, where it is scored again after the controls, whether the control altered the severity or the likelihood (he could see the effect and importance).

I guess what I got out of the conversation was that for me to do the assessment it was easy enough to go straight to High, Med or Low. However, others, especially those who may only do them infrequently (as I will do most of ours), may prefer what seemed to be considered a methodical approach. It would also allow me to consider what they were thinking when they created one (are they considering death when that was possible, but not the likely outcome?). As I don't mind which way I record it, I thought I might give all the managers a choice (majority wins).

The only other comment I have is that where we used to use the matrix, when we had a significant number of Medium risks it did help prioritise, i.e. which were closest to high and which were closest to low, when considering projects to improve/reduce to ALARP. I will admit, though, that when this has come up on this forum in the past, the scoring method seems to have been looked down upon as not being for professionals. At the end of the day it is a tool, so do what works for you and your company.

Chris
neil88  
#14 Posted : 20 April 2016 15:57:53(UTC)
Rank: Forum user
neil88

I am in favour of using scoring matrices - but at the appropriate level of risk assessment. Additionally, there are instances where you are required to demonstrate ALARP (clients, regulator), and this can be done using specific scores on a RAM. With proper instructions, training and the correct people in the room, it is possible to use scoring matrices effectively in an organisation for classifying significant risks. This RAM is used by many in the O&G industry: http://www.eimicrosites..../userfiles/image/ram.gif If using something like a 6x5 matrix you need to very clearly define the criteria for both your consequences and likelihoods. Using the above image as an example, each of the severity boxes will be explicitly described in an accompanying procedure. Where I find things get difficult is quantifying the likelihoods: it's not that helpful to define a likelihood of "10^-3" or "1 in 1000 years". Applying this rigour to trivial activities is counter-productive and probably beyond the supervisor's ability. So for task-specific assessments, e.g. a JSA, we don't use this matrix, simply the notion of L, M, H.
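As an illustration of how a RAM of the kind described above is read, here is a sketch of a 6x5 lookup: severity rows 0-5 against likelihood columns A-E. The grid contents, band labels and class boundaries are invented for the example; a real matrix would carry the explicit descriptors from the accompanying procedure.

# Hypothetical 6x5 risk assessment matrix: rows = severity 0-5,
# columns = likelihood bands A (never heard of in industry) to E (happens often).
LIKELIHOOD_BANDS = ["A", "B", "C", "D", "E"]

RAM = {
    0: ["Low", "Low", "Low", "Low", "Low"],
    1: ["Low", "Low", "Low", "Medium", "Medium"],
    2: ["Low", "Low", "Medium", "Medium", "High"],
    3: ["Low", "Medium", "Medium", "High", "High"],
    4: ["Medium", "Medium", "High", "High", "High"],
    5: ["Medium", "High", "High", "High", "High"],
}

def classify(severity, likelihood_band):
    """Look up the risk class for a severity row and likelihood column."""
    return RAM[severity][LIKELIHOOD_BANDS.index(likelihood_band)]

print(classify(severity=4, likelihood_band="C"))   # High
print(classify(severity=1, likelihood_band="B"))   # Low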
achrn  
#15 Posted : 20 April 2016 16:12:31(UTC)
Rank: Super forum user
achrn

I think you've already got the answer (and I agree with the consensus in general - I dislike the numbered scoring systems), but on this specific comment:
Lishka0 wrote:
some specialists have argued that how can you show you have reduced the risk if you don't score and how do you know what are the highest risks are so you can deal with these?
Really? If you gave them the choice of going into a cage with a hungry tiger or one containing a sleepy dormouse, would they need to fill in numbers on a bit of paper to establish which had the higher risk? You know which is the highest risk because you've assessed the risk holistically, without arbitrarily reducing it to two trivialised numbers. Almost invariably the numbers in an A x B = C type assessment have been massaged anyway, because the person filling it in knew what C they wanted and adjusted A and B to get there without contradicting the other rows in the table. (There are exceptions - I've done some potentially 'risky' stuff with O&G, and they knew such matters as the energy potential of the discharge spark from a thingummy docking with a wotsit and the ignition potential of a cloud of whatever-it-was, and had hard quantitative figures to go into a table, but in most other industries it seems to me someone says "oooh, let's say it's a, umm, two".)
RayRapp  
#16 Posted : 20 April 2016 18:55:30(UTC)
Rank: Super forum user
RayRapp

I dislike the risk matrix, it is a distraction in my opinion. The issue I have is some corporate clients insist the RA has a 5x5 matrix. Clients all too often meddle in things they should not.
jay  
#17 Posted : 21 April 2016 09:39:29(UTC)
Rank: Super forum user
jay

It depends - risk matrices have a place in the assessment process, but probably not for "simple" risks. The key to using the matrices is that there is CLARITY in the description of both the probability and the consequences, and that this is properly understood by the risk assessment "team", as most risk assessments include those who do the work, other experts and so on. Ultimately, for complex risks, there needs to be a means of justifying how one arrived at the judgement of whether the risk is low, medium or high - obviously, in some cases one may not even need a risk assessment to make that judgement!
Invictus  
#18 Posted : 21 April 2016 10:14:45(UTC)
Rank: Super forum user
Invictus

It depends on how you score. Some use a likelihood scale of 1-10 but define it as 1 - remote, 3 - unlikely, 5 - likely, 8 - very likely, 10 - certain, so there is no room in the assessment to score anything in between; the same goes for consequence. I have used ones where the scale is 1-5 but the likelihood is 1 - remote, 2 - likely, 3 - may happen, 4 - very likely, 5 - certain; well, on the basis that anything 'might' happen, you would always score a 3. I think they have a place, but you need to analyse what you are putting into the score to ensure a good outcome.
jay  
#19 Posted : 21 April 2016 11:02:22(UTC)
Rank: Super forum user
jay

That is why the headings for probability need to be supplemented by descriptors. In our case we have (the figures are probabilities of occurring per year of exposure):

Very Likely (possibility of repeated incidents) - similar event may occur at Site every 0-10 yrs; has happened several times at Site or many times in Company. 0.1 to 1, nominally 0.3.

Somewhat Likely (possibility of isolated incidents) - similar event may occur at Site every 10-40 yrs; has happened once before at Site or several times in Company. 0.01 to 0.1, nominally 0.03.

Unlikely (possibility of occurring sometime) - similar event may occur every 10-40 yrs at one of 10 Company Sites; has not happened before at Site, or has happened a few times in Company. 0.001 to 0.01, nominally 0.003.

Very Unlikely (not likely to occur) - similar event may occur every 10-40 yrs at one of 100 Company Sites; there have been isolated occurrences in Company, or it has happened several times in industry. 0.0001 to 0.001, nominally 0.0003.

Practically Impossible - has happened once or not at all in Company; has happened a few times or not at all in industry. <0.0001.
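As a sketch of how descriptors like these can be applied consistently: the probability bands below are the ones listed in the post above, and the function simply returns the descriptor whose range contains an estimated annual probability. The function name and the worked values are illustrative only.

# Probability-per-year-of-exposure bands as listed in the post above.
BANDS = [
    (0.1, "Very Likely"),           # 0.1 to 1, nominally 0.3
    (0.01, "Somewhat Likely"),      # 0.01 to 0.1, nominally 0.03
    (0.001, "Unlikely"),            # 0.001 to 0.01, nominally 0.003
    (0.0001, "Very Unlikely"),      # 0.0001 to 0.001, nominally 0.0003
]

def descriptor(annual_probability):
    """Map an estimated annual probability onto its likelihood descriptor."""
    for lower_bound, label in BANDS:
        if annual_probability >= lower_bound:
            return label
    return "Practically Impossible"  # < 0.0001

print(descriptor(0.03))      # Somewhat Likely (the nominal value for that band)
print(descriptor(0.00005))   # Practically Impossible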
Invictus  
#20 Posted : 21 April 2016 11:09:13(UTC)
Rank: Super forum user
Invictus

jay wrote:
That is why the headings for probability need to be supplemented by descriptors. In our case we have (the figures are probabilities of occurring per year of exposure):

Very Likely (possibility of repeated incidents) - similar event may occur at Site every 0-10 yrs; has happened several times at Site or many times in Company. 0.1 to 1, nominally 0.3.

Somewhat Likely (possibility of isolated incidents) - similar event may occur at Site every 10-40 yrs; has happened once before at Site or several times in Company. 0.01 to 0.1, nominally 0.03.

Unlikely (possibility of occurring sometime) - similar event may occur every 10-40 yrs at one of 10 Company Sites; has not happened before at Site, or has happened a few times in Company. 0.001 to 0.01, nominally 0.003.

Very Unlikely (not likely to occur) - similar event may occur every 10-40 yrs at one of 100 Company Sites; there have been isolated occurrences in Company, or it has happened several times in industry. 0.0001 to 0.001, nominally 0.0003.

Practically Impossible - has happened once or not at all in Company; has happened a few times or not at all in industry. <0.0001.
Working off your descriptors, I would argue that if it could only happen every 10-40 years then you don't need a risk assessment, unless of course it was a fatality you were meaning. There has to be some likelihood of it happening, and if there isn't then no one is going to get hurt.
jay  
#21 Posted : 21 April 2016 12:41:40(UTC)
Rank: Super forum user
jay

Let us agree to disagree. In higher-hazard industry, not necessarily a COMAH site, when undertaking scenario-based and similar risk assessments, such descriptors do indeed help - and the aim ultimately is that nobody gets hurt at all; the aim is not only to prevent fatalities! Properly designed and properly used (by trained/expert teams), risk matrices also support justifications of whether risk is potentially above, at or below SFAIRP/ALARP levels, and help prioritise actions. The key to robust judgement of probability is reliable data - and that remains, for most of us, a guess, but not for all of us. But my posting also stated that risk matrices are not essential for simple risks.
Jim Tassell  
#22 Posted : 21 April 2016 14:25:50(UTC)
Rank: Forum user
Jim Tassell

I'm with the general view above that a matrix is of little value other than in very tightly defined and controlled circumstances. What matters most is the text description of the risk and actions. If you get them right then anyone reading the assessment should be able to view the circumstances through the eyes of the assessor and follow their train of thought. Otherwise, when you get to court you may be faced with the question: "Mr X.. why did you give that a 2 rather than a 3?" ... where be dragons!
boblewis  
#23 Posted : 22 April 2016 09:37:00(UTC)
Rank: Super forum user
boblewis

We all, I think, like certainty, and this is probably where matrices developed. The problem is that, except in an engineering sense for plant and equipment failure rates, likelihoods cannot really be assigned a number. A large number of units have to be manufactured before such rates can be identified, and a great deal of statistical analysis and confidence calculation lies behind all engineering/manufacturing failure rates. Put people into it and the uncertainty climbs astronomically. Even claims of High, Medium and Low rely on some background idea of a numerical figure in the assessor's head. All that said, people can be trained to use matrices far better than "simple high/medium/low based on my experience" type systems. Through training and subsequent mentoring one is able to produce an in-company system that tends to produce the same results regardless of the individual assessor. As long as the particular system is understood, the results will tend to agree. After all, we should be working towards local management and operatives formulating the RAs rather than us "experts". We all have our own background experience, and what I would regard as high might not be regarded so by another professional. But then I have survived a major explosion, witnessed the aftermath of some nasty fatalities, witnessed Occ Health deaths at close hand and talked to steeplejacks who think they are invincible. Not forgetting the sundry and many operatives who believe they can climb ladders without a handhold.
andrewcl  
#24 Posted : 22 April 2016 09:43:04(UTC)
Rank: Forum user
andrewcl

Why not look at this from the other end...? What do you want? A workplace that is sufficiently free from risk of harm that the job goes off without anyone being hurt/made ill. Therefore, if you have the right controls in place, why does it make any difference how you arrived at that result? I've often said "You can prove that someone has done some form of risk assessment because when you look at the workplace and the method statement, you'll find control measures." The way you do the risk assessment (risk matrix, guesswork, reading the tea leaves etc.) and (as Ron said earlier) whether you hand it to the ops, is up to you. I've seen work packs like War and Peace - sometimes less is more, and as has been alluded to, everything an op needs should be in the method statement.
Invictus  
#25 Posted : 22 April 2016 10:11:32(UTC)
Rank: Super forum user
Invictus

Jim Tassell wrote:
I'm with the general view above that a matrix is of little value other than in very tightly defined and controlled circumstances. What matters most is the text description of the risk and actions. If you get them right then anyone reading the assessment should be able to view the circumstances through the eyes of the assessor and follow their train of thought. Otherwise, when you get to court you may be faced with the question: "Mr X.. why did you give that a 2 rather than a 3?" ... where be dragons!
Or why did you make it low instead of medium risk! Same argument.
Invictus  
#26 Posted : 22 April 2016 10:21:14(UTC)
Rank: Super forum user
Invictus

How far do you go with an assessment, matrix or not? An example: a few years ago I worked at a prison, and I had a report that a prisoner had received an electric shock from a large food trolley, so I went to speak to the chef, and his answer stunned me. Chef: 'He did get an electric shock off the trolley, because I plugged it in and I got one.' And he had a witness, another chef, who told me: 'Yes, he did get a shock, because I stood by the switch so that I could turn it off quickly if he did.' Common sense! The R/A had stated that it was to be taped off with warning tape (which they had done) and property services called, who would then PAT test and repair if required.
sadlass  
#27 Posted : 23 April 2016 09:49:04(UTC)
Rank: Forum user
sadlass

Firstly - making RAs more 'understandable for the trades' is to misunderstand the purpose of RA, which is a management decision-making process (an employer duty). So it needs to be 'manager-friendly'. The information derived from the RA process is best delivered via specifically targeted information for users. It can be a notice, a leaflet, an instruction, a TBT or a suite of training, or all of the above. Handing out RAs is pointless, especially if they are full of matrices and pseudo-scientific mumbo-jumbo.

If you follow the HSE RA model (not a matrix in sight - there never has been), their example RAs could be used for informing anyone and everyone, although I am still a fan of splitting the outcomes into 'management to do' and 'employees to know/follow' and recomposing this information into manager action plans and worker instructions.

Excepting high-hazard process and engineering functions, where probabilities CAN be given a number, most other workplaces/organisations just do not need - or benefit from - such a worthless method. Who is teaching this stuff? I have suffered these in client organisations over many years. I have yet to be convinced that multiplying one guess (how likely is this?) by another guess (how bad will/could it be?) makes for any kind of actual fact - coming up with a single 'risk' number lends the process instant but fake credibility. And then - so what? This method takes energy and attention from the real task: solving and managing actual safety problems.

I know that managers manipulate these numbers to get the answer they want. At a recent company I queried why a 10-page risk assessment form was needed when the actual work instruction (also 10 pages) started with a (better!) summary of all the hazards and control measures anyway. The manager laughed and said he wrote the instructions when the plant was set up, but then 'had to fill in a form', so he just made the RA form fit the existing instruction. Of course.

The 'before' and 'after' numbers game is the worst example of silliness dressed up as science. No wonder H&S (and RA) makes people groan and raise eyebrows. Junk the scoring/numbers/matrices NOW and start doing proper decision-making and communication!
boblewis  
#28 Posted : 23 April 2016 18:54:14(UTC)
Rank: Super forum user
boblewis

Sadlass, but that is also swapping to another subjective assessment that depends on the assessor's experience and perceptual sets!!
Ian Bell2  
#29 Posted : 23 April 2016 19:24:03(UTC)
Rank: Super forum user
Ian Bell2

Ultimately all risk assessments are subjective and a crystal-ball-gazing exercise; it is just that some are more subjective and open to opinion than others. At best, any form of 'scoring' gives a guide to what's important and hence what should be tackled as part of risk reduction/mitigation. Some engineering QRAs are a little less subjective, as failure frequency databases are available to give failure rates for items of equipment, e.g. pressure equipment, level switches etc.
sadlass  
#30 Posted : 25 April 2016 10:07:41(UTC)
Rank: Forum user
sadlass

boblewis wrote:
Sadlass, but that is also swapping to another subjective assessment that depends on the assessor's experience and perceptual sets!!
So what do you suggest to take the human (with all their subjective attitudes, beliefs and perceptual sets) out of any decision-making process? And many are implying that 'the assessor' is in the singular. A key tactic is surely to consult and confer on determining a) the range of safety problems and b) what can / shall we do to control or manage the ensuing risk. Although not to discuss whether something rates a 2 or a 3 because that is not important.
Corfield35303  
#31 Posted : 25 April 2016 10:55:22(UTC)
Rank: Forum user
Corfield35303

sadlass wrote:
boblewis wrote:
Sadlass, but that is also swapping to another subjective assessment that depends on the assessor's experience and perceptual sets!!
So what do you suggest to take the human (with all their subjective attitudes, beliefs and perceptual sets) out of any decision-making process? And many are implying that 'the assessor' is in the singular. A key tactic is surely to consult and confer on determining a) the range of safety problems and b) what can / shall we do to control or manage the ensuing risk. Although not to discuss whether something rates a 2 or a 3 because that is not important.
I couldn't agree more - the matrix is a flawed concept. A more serious outcome to an incident is possible but always less likely; likelihood and severity tend to be inversely related. A simple trip at work is quite likely, but in nearly all cases results in little or no injury (L = high, S = low). Alternatively, a trip might lead to a serious injury, but this is quite rare (L = low, S = high). What a waste of time when we need to simply manage risks properly. It might be handy for individuals/teams doing some prioritisation, but the focus on scores is otherwise distracting.
aud  
#32 Posted : 25 April 2016 11:46:52(UTC)
Rank: Super forum user
aud

Who put "the scoring system" INTO everyday risk assessment?

Not the HSE... although sadly some of their inspectors now seem to expect it, despite their own advice.

Not the regulators... "risk assessment for the purpose of identifying the measures to take to comply with... relevant statutory provisions".

Not the Courts... although, to be fair, they will pick holes in any system if decisions were flawed or even absent.
chris42  
#33 Posted : 25 April 2016 12:16:18(UTC)
Rank: Super forum user
chris42

It's the same difference, isn't it? If I think about a hazard and all the alternative outcomes, with their individual severities and likelihoods, in my mind, then decide the overall risk level is, say, Medium, I just write Medium on the assessment. Or I do the above, assign a number to severity and likelihood and the overall level, write that down on the assessment as I go, and the number 9 happens to equal Medium. So what? Is there any significant difference? It is just as biased either way by the assessor's training, knowledge, experience, general attitude, whether you're a glass-half-full or glass-half-empty sort of person, etc. It's a bit like showing your working out in a maths test at school - except you don't get half marks for showing the correct process if you get the answer wrong. Perhaps some do not like showing their working out :o)

The whole thing is flawed if you want to look at it that way. A risk assessment is a tool, and a flawed one at that, but it is the best we have. At best it is a structured way of looking at the issues surrounding an activity, but different people following the same process will end up with different answers, whether in numbers or words - just as non-scientific, but at least methodical. Whether someone ends up with a Medium or a High, or a 9 or a 16, we still look to see if things can be improved - so what difference does it make, really?

The only possible advantages to scoring I can see are potentially the ability to rank risks more easily, and, when you are assessing someone else's assessment (someone who is not a H&S professional), to check they are not considering that a simple trip in an office could result in death. Conversely, without the numbers it is less cluttered and possibly less daunting to those who have never seen an assessment. I would not say easier to understand: if you can't get your head around the simple concept, then you're going to struggle with the assessment anyway. What does Medium mean? Medium for whom? One person's medium is another's high or low; it is all subjective.
Invictus  
#34 Posted : 25 April 2016 12:19:31(UTC)
Rank: Super forum user
Invictus

Chris, agree - I put the same at #25, just didn't use so many words.
jay  
#35 Posted : 25 April 2016 12:29:36(UTC)
Rank: Super forum user
jay

As I said in my previous post:
jay wrote:
It depends - risk matrices have a place in the assessment process, but probably not for "simple" risks. The key to using the matrices is that there is CLARITY in the description of both the probability and the consequences, and that this is properly understood by the risk assessment "team", as most risk assessments include those who do the work, other experts and so on. Ultimately, for complex risks, there needs to be a means of justifying how one arrived at the judgement of whether the risk is low, medium or high - obviously, in some cases one may not even need a risk assessment to make that judgement!
For example, when undertaking a risk assessment that requires judgements regarding ALARP etc., even if it is not under one of the permissioning regimes, risk matrices do have a role, and to state that one can do without them does not make for a "suitable and sufficient" risk assessment. While the HSE is not explicit in its guidance on the use of risk matrices, I cannot see how, in specific cases, one arrives at an ALARP judgement without one. The HSE references implicitly support the use of risk matrices for some aspects: http://www.hse.gov.uk/risk/theory/alarpcheck.htm
RayRapp  
#36 Posted : 25 April 2016 12:45:39(UTC)
Rank: Super forum user
RayRapp

The irony of all this nonsense is that if it were not a legal requirement to conduct an RA, none of us would bother, except possibly for QRAs. All we would be interested in is the controls for the known hazards inherent within the task.
jay  
#37 Posted : 25 April 2016 14:19:51(UTC)
Rank: Super forum user
jay

To use the term "nonsense", even if without intent, is neither constructive nor respectful of differing opinions. The HSE itself, in its FAQs on risk management (http://www.hse.gov.uk/risk/faq.htm#q27), answers "What are risk matrices?" as follows: "Most businesses will not need to use risk matrices. However, they can be used to help you work out the level of risk associated with a particular issue. They do this by categorising the likelihood of harm and the potential severity of the harm. This is then plotted in a matrix (please see below for an example). The risk level determines which risks should be tackled first. Using a matrix can be helpful for prioritising your actions to control a risk. It is suitable for many assessments but in particular to more complex situations. However, it does require expertise and experience to judge the likelihood of harm accurately. Getting this wrong could result in applying unnecessary control measures or failing to take important ones."
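To illustrate the prioritisation point in the HSE text above, a short sketch: hazards scored on a simple likelihood x severity basis and sorted so the highest risk is tackled first. The hazard list and the scores are made up purely for the example.

# Made-up hazards scored (likelihood, severity) on a 1-5 scale, sorted so that
# the highest-scoring risk is tackled first, as the FAQ text above describes.
hazards = [
    ("Slips on wet warehouse floor", 4, 2),
    ("Fall from stepladder", 2, 4),
    ("Contact with moving forklift", 2, 5),
    ("Paper cut when filing", 3, 1),
]

for name, likelihood, severity in sorted(hazards, key=lambda h: h[1] * h[2], reverse=True):
    print(f"{likelihood * severity:2d}  {name}")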
Bazzer  
#38 Posted : 25 April 2016 15:39:42(UTC)
Rank: Forum user
Bazzer

Personally I prefer a number matrix system, as it demonstrates the raw risk without controls and the residual risk with controls in place. I then provide High, Medium, Low against the residual risk. I have seen both systems, and it is really down to personal preference; I also find it easier for when I train risk assessors, as it is something more tangible, but other safety professionals may think differently. At the end of the day, provided all hazards and significant risks have been identified and controlled, that's the end game.
aud  
#39 Posted : 25 April 2016 16:59:49(UTC)
Rank: Super forum user
aud

Bazzer, the end game maybe, but at what cost? In terms of effort, energy and bureaucracy, why go to all that trouble to come up with the answers (risk control measures) you would probably have thought of anyway if you had all sat in a room and asked "what can cause harm here and what can we do about it?" But it is the pseudo-scientific impression of 'fact' which numerical techniques imply, especially 'before and after', that really bothers me. Well said, Ray. Nobody is starting from scratch these days - we have been at this "risk assessment" malarkey for 25 years, and the HSWA for even longer. There are pretty standard rules and procedures for most common processes and activities, so the test could just as easily be "is it reasonably practicable to do this, this and this?". Risk assessment doesn't produce new solutions - it just recycles the old ones (nothing wrong with that either). Not sure what is meant by 'both systems': I have counted at least four variations in this thread - with/without scoring matrices, matrices for both pre-controls and post-controls, matrices ranging from 3x3 to 10x10, or words, or colours, instead. I go back to my question: how did the scoring system arise anyway? And if it's flawed, why won't it go away?
chris42  
#40 Posted : 25 April 2016 17:20:41(UTC)
Rank: Super forum user
chris42

He he he, I found this on the Internet (which does not make it accurate) for you, Aud :0)

The power of numbers arrived in the West in the early 13th century when a book entitled Liber Abaci appeared in Italy. This was 15 hand-written volumes by Leonardo Pisano, commonly known as Fibonacci, who is best known for a series of numbers that answered the problem of how many rabbits will be born throughout the course of one year from one pair. He identified the power of numbers for the first time, but using them to assess risk remained centuries distant.

In the UK, it was only when the Management of Health and Safety at Work Regulations were implemented in 1993 that their use in workplaces started to become more common. In 1991 the UK's Health and Safety Executive published the guidance document Successful Health and Safety Management, where risk was identified as the product of two independent variables, 'likelihood' and 'severity'. Afterwards many health and safety managers started to use a matrix-type approach to assessing risk, using numbers to measure those two parameters. In the early 1990s, the Institution of Occupational Safety and Health developed its Managing Safely training course to include a quantitative risk measurement using a 5x5 matrix (see figure 1). A low risk was identified by a likelihood rating of one and a severity rating of one, while a high risk was identified with ratings of five. The higher the number, the higher the risk, and this led to prioritised action plans.

So it's all IOSH's fault :0)

Chris
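For reference, the 5x5 scoring the quoted history describes is just the product of two 1-5 ratings. A small sketch that prints the resulting grid (1 = lowest risk, 25 = highest); the layout is illustrative, not any official IOSH presentation.

# Print a 5x5 likelihood x severity grid: the higher the product, the higher the risk.
for likelihood in range(1, 6):
    row = [likelihood * severity for severity in range(1, 6)]
    print(f"L={likelihood}: " + "  ".join(f"{score:2d}" for score in row))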