Ardee  
#1 Posted : 01 January 2025 17:59:47(UTC)
Rank: New forum user
Ardee

Hello, I'm coming to the end of my time in the Army and thinking about trying to build something of my own.

I used to spend a lot of time generating and reviewing Safe System of Work documents. It was all done manually, and I found the process quite slow.

I've been considering the idea of using AI to help with the RAMs writing and review processes. I've seen some posts here about AI, but they seem more focused on simply inputting a prompt into ChatGPT. 

I'm thinking more along the lines of something that is linked to current regs and builds a skeleton that allows you to add site-specific details or something that performs a first pass at the review, highlighting high-risk areas for human review.
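
To make that concrete, here is roughly the first-pass reviewer I am picturing - only a sketch, assuming the OpenAI Python SDK; the model name, prompt wording and input file are illustrative rather than a working product:

    # First-pass RAMS review: the model does triage only; a competent human reviews.
    # Sketch -- model name, prompt wording and file name are all illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TRIAGE_PROMPT = """You are assisting a competent health and safety professional.
    Read the method statement below and list, as bullet points:
    1. activities that look high-risk (work at height, confined spaces, hot work...);
    2. controls that are mentioned but are not site-specific;
    3. obvious omissions a reviewer should check.
    Do NOT approve or reject anything - flag items for human review only."""

    def first_pass_review(method_statement: str) -> str:
        """Return a triage list of points for a competent human to check."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": TRIAGE_PROMPT},
                {"role": "user", "content": method_statement},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        with open("method_statement.txt") as f:
            print(first_pass_review(f.read()))

The point being that the output is a checklist for a human reviewer, never a sign-off.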

Has anyone had any experiences or thoughts about using AI in this type of work? My experiences are quite biased by the army way of doing things. 

Roundtuit  
#2 Posted : 01 January 2025 19:26:21(UTC)
Rank: Super forum user
Roundtuit

AI has no way to determine what is, and is not, copyright material that it scrapes from the net.

Even linking to regulations you will find the AI struggles with interpretation.

Things done well and accurately are done slowly as short cuts always generate omissions.

If you are pursuing this route make sure you have a very large indemnity cover for when those omissions affect peoples lives and livelihoods.


peter gotch  
#4 Posted : 02 January 2025 11:59:54(UTC)
Rank: Super forum user
peter gotch

Hi Ardee

Your first post here, so welcome to the Forums.

In addition to important issues (including those of copyright) already pointed to by Roundtuit, trying to get AI to do analysis based on e.g. legislative requirements is difficult even if the AI focuses entirely on legislation AND is capable of working out what the latest version of any specific piece of legislation is at any given time.

Starting with the premise that you are going to focus on the UK, as it gets MUCH more complicated if you venture beyond...

Let's assume that the AI is reasonably clever: it realises that it needs to access the legal requirements via legislation.gov.uk, can work out that Law A applies in England and Wales but not in Scotland and Northern Ireland, and DOES look at the latest variant. It might still not have the most up-to-date version, as there might be some recent amendment that the website has yet to capture. Perhaps the on-the-ball OSH professional is keeping tabs on what is on the horizon and hence realises that SI 2024 No 5799 has been made and comes into force on 5 May 2025, amending e.g. the PUWER Regulations.

Which already leaves a dilemma - do you base your risk assessment on what the law says now, or what it will say on 5 May?
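
If you did want the AI to face that question squarely, the comparison itself is at least mechanical - a minimal sketch, assuming legislation.gov.uk's date-based point-in-time URLs and XML downloads; PUWER (SI 1998/2306) is just an example, and the May date echoes the hypothetical SI above:

    # Compare "the law as it stands today" with "the law on a future commencement
    # date" via legislation.gov.uk. The URL scheme is an assumption based on the
    # site's point-in-time convention; the date mirrors the hypothetical SI above.
    import requests

    PUWER = "https://www.legislation.gov.uk/uksi/1998/2306"

    def fetch_version(date=None):
        """Fetch the regulations as XML, current or as in force on a given date."""
        url = f"{PUWER}/{date}/data.xml" if date else f"{PUWER}/data.xml"
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        return resp.text

    current = fetch_version()
    future = fetch_version("2025-05-05")  # hypothetical commencement date
    print("Versions differ" if current != future else "No amendment captured yet")

Even then, all this tells you is that the text changed - deciding which version your risk assessment should rest on remains a human judgement.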

But occupational health and safety law in the UK is much more nuanced than e.g. what it says in OSHA Regulations in the US, where almost all requirements are very clear. "DO this." "DON'T DO that."

Instead there is a mix: a very few requirements which set "strict liability" duties, others qualified by "so far as practicable" or "so far as reasonably practicable" [or its twin, ALARP], all intermixed with a few mentions of "reasonable", "suitable" and other words that dilute the demand.

How is your AI going to decide which legal judgment on the interpretation of e.g. "reasonable practicability" to default to and decide whether or not it needs to at all?

You can, of course, programme your AI to capture everything that HSE and other UK enforcing authorities currently have in publication, but will AI be able to find all the stuff that is out of print but might still be relevant?

AI might also capture all the HSE press releases about successful prosecutions and assume that everything HSE asserts in those press releases is completely accurate. It might not recognise that, where a defendant has pleaded guilty [not necessarily because the defendant actually thinks they are guilty at all, or thinks that they were THAT guilty], what HSE takes from that might be far from a true picture of the circumstances. Sometimes you have to have been in the Court to know what was actually said, and to recognise when the HSE press release is not only miles away from but contradictory to the facts AGREED in Court.

Hence, if you place too much reliance on AI to do things like risk assessment, as Roundtuit indicates, you need to make sure that your insurance cover is VERY robust.

 

thanks 1 user thanked peter gotch for this useful post.
stevedm on 04/01/2025(UTC)
Acorns  
#5 Posted : 02 January 2025 12:37:05(UTC)
Rank: Super forum user
Acorns

Remember the early days of computers - GIGO: garbage in = garbage out. That could be AI.
Having said that, put in good info with the right prompts and it could do much of the hard work, but IMHO you'd still need that human input to check it's appropriate. Whilst it's getting better, many of the AI-generated reports etc. I have seen are recognisable as being AI, and some are even signed off without human peer review.
Good luck
stevedm  
#6 Posted : 04 January 2025 15:49:35(UTC)
Rank: Super forum user
stevedm

best quote I have seen on AI...we were smart enough to invent AI, dumb enough to need it and now too stupid to know what to do with it..

there are already AI legal models out there, and some are being used in trials in courts to reduce the time taken, but there are still significant errors...however the human (a competent human) still needs to validate the output...I would say for this application you still have a long way to go to get as bad as some of the RAMS I have seen over the years...  sounds like you are looking for a quick buck - there isn't one, as it takes thousands of hours to train a model, and although there are 'free' models out there to start with, they predominantly encode US-based behaviours...yes, behaviours.  I have one trained on my process safety work over the last 40 years (personal use) and it still comes out with crap sometimes...but it is helpful to remind me what I did when I was young and stupid...

  

thanks 2 users thanked stevedm for this useful post.
Kate on 04/01/2025(UTC), peter gotch on 04/01/2025(UTC)
Holliday42333  
#7 Posted : 06 January 2025 09:23:13(UTC)
Rank: Super forum user
Holliday42333

Hi Ardee,

You may also need to consider that the use of AI may not be welcome in some sectors due to cyber security issues (either real or perceived).

I am aware of a national infrastructure project where even the use of AI to take meeting minutes/notes caused significant hand-wringing.  Not sure of the details as I wasn't directly involved, but there was a clear edict not to use any AI-based tools.

A Kurdziel  
#8 Posted : 06 January 2025 10:14:50(UTC)
Rank: Super forum user
A Kurdziel

AI systems like ChatGPT are only as good as the information that they have access to. So if you ask one about, let's say, the legal issues surrounding risk assessment, it will be able to find loads of information on the internet and put together a NEBOSH-style answer about the topic. But if you ask it to create a RAMS for a particular piece of work (a monkey with a typewriter can write generic RAMS - and, in effect, they are) it will struggle to create anything useful.  "So far as is reasonably practicable" not only includes the official factors such as cost and effort but also an organisation's appetite for risk. How is AI meant to judge that?

 

Holliday42333  
#9 Posted : 06 January 2025 10:31:08(UTC)
Rank: Super forum user
Holliday42333

Having had a bit more of a think about the practical application of this, it did occur to me that, like most 'automated' risk assessment processes, this approach (even if made to work) could fall down the first time it is tested at Law.

I'd imagine it would be easy for a prosecution barrister to ask "So [insert Duty Holder] could you please explain to the court how your organisation determined the Safe System of Work?"

"Well actually we let an AI program make those decisions"

Considering how a process would be challenged at the very pointy end of a barrister's argument isn't sexy when trying to create innovative solutions, but unfortunately it could become critical.

thanks 1 user thanked Holliday42333 for this useful post.
A Kurdziel on 08/01/2025(UTC)
Kate  
#10 Posted : 06 January 2025 11:16:12(UTC)
Rank: Super forum user
Kate

Is that really so much worse though than "Well, we save down a copy of a RAMS for something else and then change the title, and if we have time we do some search-and-replace of some of the words to make it fit the job"?

thanks 2 users thanked Kate for this useful post.
A Kurdziel on 08/01/2025(UTC), andrewhopwood on 08/01/2025(UTC)
Holliday42333  
#11 Posted : 06 January 2025 11:35:57(UTC)
Rank: Super forum user
Holliday42333

Originally Posted by: Kate Go to Quoted Post

Is that really so much worse though than "Well, we save down a copy of a RAMS for something else and then change the title, and if we have time we do some search-and-replace of some of the words to make it fit the job"?


Exactly Kate.  Or the old "We have a generic template or suite of templates that we use".  I worked in a business that got torn to shreds by a smiling-assassin barrister over that one; it didn't end well.

In the past I did a keynote presentation on risk assessment at a local conference where I asked the delegates to raise their hands if they had a working risk assessment process.  Most hands went up, with lots of nodding.  I then asked them to keep their hands raised if their process had ever been tested in court or challenged by enforcement.  Most hands went down.  After me came an HSE Inspector, then a barrister.  After each of their presentations the delegates were asked to raise their hands if they still believed their risk assessment process would meet the challenges each presenter raised.  I've got to say that a LOT fewer hands were raised, and there was a lot less nodding going on, to say the least.

I have waxed lyrical on this before, but I think that the current application of risk assessment has become flawed and corrupted to the point of being useless.  In my current organisation we do have a risk assessment process that follows current thinking (as our clients wouldn't accept anything else, however flawed I think it is), but when I am training it out I stress that the fundamental question is 'Have we identified the safest way to do this?', and we make the answer fit our process, not the other way around.

Sorry Ardee for going off the strict topic.

thanks 1 user thanked Holliday42333 for this useful post.
peter gotch on 06/01/2025(UTC)
andybz  
#12 Posted : 06 January 2025 11:53:03(UTC)
Rank: Super forum user
andybz

I am sure that apps are already available that use AI to support risk management. Even ChatGPT does a remarkably good job of summarising hazards and effective controls, considering it is a general AI tool. I can only imagine a specialised app being even more useful at helping people to manage risks, noting that this is not the same as automating the whole process.

AI does get things wrong, but one of its strengths (illustrated by ChatGPT) is that it gives comprehensive but clear and coherent answers to problems. This makes it easy for people to identify errors. Compare that to human-generated text on any complex issue, where it can often be very difficult to understand what the issues are and hence whether a reasonable approach has been followed.
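
For example, forcing the answer into a fixed shape makes gaps and errors jump out at a reviewer - a sketch only, assuming the OpenAI Python SDK; the JSON shape, model name and task are illustrative:

    # Ask a general LLM for hazards and controls in a fixed JSON shape so a human
    # reviewer can scan it quickly. Strict checks make incomplete answers fail
    # loudly instead of slipping through. Model name and schema are illustrative.
    import json
    from openai import OpenAI

    client = OpenAI()

    SCHEMA_PROMPT = ('Reply only with JSON of the form {"hazards": [{"hazard": str, '
                     '"who_is_at_risk": str, "controls": [str]}]}.')

    def summarise_hazards(task: str) -> list:
        response = client.chat.completions.create(
            model="gpt-4o",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": SCHEMA_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        data = json.loads(response.choices[0].message.content)
        for item in data["hazards"]:        # fail loudly on missing or empty fields
            assert item["hazard"] and item["controls"]
        return data["hazards"]

    for h in summarise_hazards("Replacing roof sheets on a single-storey warehouse"):
        print(f'- {h["hazard"]} (at risk: {h["who_is_at_risk"]})')
        for c in h["controls"]:
            print(f"    control: {c}")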

The human will always be responsible. I don't understand why using AI would make any difference in a law court to someone's level of negligence.

Holliday42333  
#13 Posted : 06 January 2025 12:00:04(UTC)
Rank: Super forum user
Holliday42333

Originally Posted by: andybz Go to Quoted Post

The human will always be responsible. I don't understand why someone using AI will make any difference in a law court to their level of negligence.


It shouldn't but, in my experience, humans are notorious for trying to delegate their responsibility to a 'process'.

Xavier123  
#14 Posted : 07 January 2025 11:46:01(UTC)
Rank: Super forum user
Xavier123

Producing a risk assessment (the noun) could be done by AI with the right rules. It's ultimately just text.

Undertaking a risk assessment (the verb) is more difficult for all the reasons already laid out and why competent human judgement and nuance is necessary. 

In real terms, the act of assessing is more critical than the recording of that assessment. Starting with the end document and working backwards is the wrong way around. Although it is already practised by companies the world over via existing RA model templates, so who would notice? ;)

thanks 1 user thanked Xavier123 for this useful post.
A Kurdziel on 08/01/2025(UTC)
firesafety101  
#15 Posted : 08 January 2025 11:43:09(UTC)
Rank: Super forum user
firesafety101

Just thinking about this (and I have no opinion on AI, as I know too little about it), but surely the human has to read the end product and critique it where necessary.

That human needs to be fully competent and experienced in the subject, and fully awake when reading it.

One example is fire risk assessments carried out by software designed to produce a written FRA report.

The danger there is becoming accustomed to the order of the FRA, and possibly overlooking something that experience tells us should be inside the assessment.

Roundtuit  
#16 Posted : 08 January 2025 13:55:38(UTC)
Rank: Super forum user
Roundtuit

Then of course you also need to consider that the EU regards AI software as being included under its revised Product Liability Directive (EU 2024/2853). The internet having no respect for geographical borders, anything made available "on the market" falls into scope.


andybz  
#18 Posted : 08 January 2025 16:53:41(UTC)
Rank: Super forum user
andybz

Harvard Business Review: "AI Won’t Replace Humans — But Humans With AI Will Replace Humans Without AI." Seems like a reasonable way to view most AI applications.

https://hbr.org/2023/08/ai-wont-replace-humans-but-humans-with-ai-will-replace-humans-without-ai

stevedm  
#19 Posted : 08 January 2025 18:07:38(UTC)
Rank: Super forum user
stevedm

The upcoming AI Act purports to sort out all of these bits and, ironically (for this debate), it relies on risk assessment to determine the effectiveness of controls..

  • Compliance Oversight

    • National authorities in EU member states will oversee enforcement, with support from a newly created European Artificial Intelligence Board (EAIB).
  • Penalties

    • Non-compliance can result in fines of up to €35 million or 7% of annual global turnover, whichever is higher (a quick worked example follows below).
    • These penalties are similar in structure to those under the GDPR and aim to ensure serious accountability.
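
As a quick worked example of the "whichever is higher" cap (the turnover figure is invented):

    # Top-tier fine cap: the greater of EUR 35m or 7% of annual global turnover.
    def max_fine(annual_global_turnover_eur):
        return max(35_000_000, 0.07 * annual_global_turnover_eur)

    # A hypothetical firm turning over EUR 2bn: 7% (EUR 140m) exceeds EUR 35m.
    print(f"Cap: EUR {max_fine(2_000_000_000):,.0f}")  # Cap: EUR 140,000,000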

Not sure that having or not having AI is a prerequisite for mankind...a good memory and experience still make the human...speedy access to information hasn't improved human behaviour so far :)

Mirin  
#20 Posted : 08 January 2025 20:33:52(UTC)
Rank: New forum user
Mirin

Not sure if this helps, but this seems to be the HSE view on AI and health and safety:

https://www.hse.gov.uk/news/hse-ai.htm

chris42  
#21 Posted : 09 January 2025 09:15:19(UTC)
Rank: Super forum user
chris42

The HSE link seems to be more about the risks potentially caused by AI.

I can’t help feeling it would be fun to get AI to do the NEBOSH Diploma, in addition to doing 30 hours of CPD with reflections!

An H&S AI walks into a bar (someone’s place of work). One employee says it is too hot in there and another says it is too cold - what does the AI do? Towards the end of the conversation, both employees ask what they should do with the bird that flew in through the door and died on the counter.

My point is, as H&S practitioners our job is not only to know the legislation and guidance but to interpret them as well, in a way that will satisfy both employees and employers (for the most part). I think it is the interpretation and proportionality that AI will struggle with. I don’t think AI is there yet, and if you have to involve a competent human then they may just as well do it in the first place. It is just like CPD: it is the reflection, not just the learned facts, that is important.

thanks 1 user thanked chris42 for this useful post.
peter gotch on 09/01/2025(UTC)
A Kurdziel  
#22 Posted : 09 January 2025 09:38:42(UTC)
Rank: Super forum user
A Kurdziel

Exactly Chris

“Doing” H&S is not just about remembering rules and regulations or regurgitating policies; it is also about applying your life experiences. If you ever go on a training course you want to share people’s experiences, especially those that drive their approach to H&S: the bad accidents, the cock-ups, the funny stories. AI, no matter how powerful, has not had a life; its “experience” is based on what it has been told, usually picked up from the Internet, and we all know how balanced a world view that can lead to.

thanks 1 user thanked A Kurdziel for this useful post.
peter gotch on 09/01/2025(UTC)