Welcome Guest! The IOSH forums are a free resource to both members and non-members. Login or register to use them

Postings made by forum users are personal opinions. IOSH is not responsible for the content or accuracy of any of the information contained in forum postings. Please carefully consider any advice you receive.

Self and Hasty  
#1 Posted : 30 January 2023 09:21:06(UTC)
Rank: Forum user
Self and Hasty

Good morning all,

A strange one to bring to discussion, but I'm interested in others opinions.

I write several blogs and have recently started playing around with AI-generated content, particularly OpenAI (chat.openai.com). The technology has come on in leaps and bounds recently, particularly in the past six weeks with the launch of the beta OpenAI technology.

This is nothing like the infuriating customer service chatbots that have become popular in the past few years and never understand what you're after; this is true AI.

I have played around with the concept of utilising AI to produce templates, intros, summaries, and even full risk assessment reports.

The outputs so far have been pretty impressive. The AI uses information from sources it already has - it can access the HSE, government legislation, reference guidance, ACOPs and British Standards - and it can take details from your site notes and turn them into plausible, usable content in minutes.

You can feed it legacy reports you have written so it can emulate your voice and style, and draw on previous decisions and comments on similar recommendations/issues. It learns, and you can easily adjust and rewrite in seconds, so it constantly improves to give you what you're looking for.

Caveat: all the input has to be supplied by a professional human who has actually attended the site and manually identified the hazards, controls and any recommendations, as per any normal assessment, and the outputs from the AI must be reviewed and fact-checked by a human.

This is in no way a replacement for the health and safety professional; their expertise is still a necessity. But it is a useful new tool that can help professionals produce consistent and accurate reports, saving time and effort.

Reports that would have taken several hours to write and check manually can now be produced and checked in a quarter of the time.

I've discussed this with my latest client and they are happy for me to use/test the technology for a recent fire risk assessment I conducted. The terms of the AI usage do not prevent my using it; just a simple statement of 'Powered by OpenAI' has to be included as attribution.

Legally there is no onus of responsibility on the AI: the human assessor is still 100% responsible for the assessment and its findings, and culpable for any mistakes; the AI content could not be used as an excuse for any errors.

All works are produced by the professional human. Just as all works currently produced in, say, Microsoft Office are still the works of the human assessor, it is simply a helpful tool to be utilised.
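For anyone curious about the mechanics, the workflow is roughly this - a minimal Python sketch, not my production setup; the helper names and the model name are placeholders of my own, and it assumes the official `openai` package:

```python
# Sketch: turning human-supplied site notes into draft report prose.
# The assessor still supplies every fact; the model only drafts wording.

def build_prompt(site_notes: str, style_sample: str) -> str:
    """Assemble a drafting prompt from human-supplied inputs."""
    return (
        "You are drafting a UK fire risk assessment report.\n"
        "Match the tone and style of this excerpt from a previous "
        "report by the same assessor:\n"
        f"{style_sample}\n\n"
        "Turn the following site notes into report prose. Use only "
        "the facts given; do not invent hazards, controls or "
        "references:\n"
        f"{site_notes}"
    )

def draft_report(site_notes: str, style_sample: str) -> str:
    # Requires `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat-capable model
        messages=[{
            "role": "user",
            "content": build_prompt(site_notes, style_sample),
        }],
    )
    # The output is a draft only - it must be reviewed and
    # fact-checked by the human assessor before it goes anywhere.
    return response.choices[0].message.content
```

The important parts are the instruction not to invent facts and, of course, the human check at the end.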

What are your thoughts and opinions?

Is anyone else experimenting with AI technology with their work?

Thanks

thanks 2 users thanked Self and Hasty for this useful post.
Kate on 30/01/2023(UTC), HSSnail on 30/01/2023(UTC)
Roundtuit  
#2 Posted : 30 January 2023 11:20:15(UTC)
Rank: Super forum user
Roundtuit

Once you open Pandora's box.....

It may, from your experience, save time; however, when you act in haste there is always time to repent at leisure.

Personally I found it abhorrent that the Chancellor recently used AI in drafting a speech - if it is capable, do we need to pay the 630 sat in Westminster (plus associated staff) when we could surrender control to the likes of Gates, Jobs, Musk, Bezos et al.? (Not one of them a paragon of health and safety.)

As you recognise that name on the report will never be that of the AI company CEO.

Are the external sources consistent and up to date? Many on here have highlighted the decline in quality of content on the HSE website, which you would perceive to be the fount of all knowledge for those working in the UK.

Regulation and standards remain a movable feast, especially in construction and fire safety thanks to Brexit and Grenfell, with DLUHC remaining out of step with BEIS.

It has not happened yet, but at some stage all those AI creations will become subject to copyright arguments - one area technology firms love is the protection of intellectual property.

thanks 2 users thanked Roundtuit for this useful post.
Kate on 30/01/2023(UTC), HSSnail on 30/01/2023(UTC)
peter gotch  
#4 Posted : 30 January 2023 11:54:50(UTC)
Rank: Super forum user
peter gotch

Self and Hasty

We should take advantage of modern technology, but I share some of Roundtuit's reservations.

My guess is that your approach to fire risk assessments might be somewhat more conservative in the light of incidents such as Grenfell, and the likelihood that legislation will get tougher and expectations tougher even where the legislation leaves gaps.

So, placing trust in AI interpreting what you wrote 1, 2 or 5 years ago to inform what you are writing now and in the future could be a risky road.

It is much more difficult to critically review someone else's work for accuracy than to draft what you mean to say.

Where I used to work document control was largely via "CRAV" - Check, Review and Verify. The author could do up to two of the three elements.

On numerous occasions as Verifier I came across situations where the Check and Review had been substandard. I think that a move towards assuming the AI is unlikely to make mistakes could easily make this worse.

How often do you find that it takes you six attempts to fully understand your OWN notes?! Which plane did I measure that 319mm in? The AI will make a decision for you - will you remember to check and change it?

You wrote 10 pages of notes including two lines on control of silica - what if the AI misses those two lines and drafts a report based on the information elsewhere in the 10 pages? Will you recognise that the AI has failed to write up what might be by far the most important element of your inspection? The two lines that I might have marked with an asterisk or similar to remind me to focus - what if the AI fails to understand the importance of that marker?

...and when it comes to fire risk assessments, this is one of those areas where getting your report wrong could end up with multiple fatalities - one of many "low probability, high consequence" scenarios that you might be looking at. So would you need to change your style of note-taking to ensure that the AI understands the importance?

thanks 2 users thanked peter gotch for this useful post.
Kate on 30/01/2023(UTC), HSSnail on 30/01/2023(UTC)
chris42  
#5 Posted : 30 January 2023 12:17:36(UTC)
Rank: Super forum user
chris42

How were you intending to teach the AI to understand and apply "reasonably practicable"?

If you put 10 H&S people in a room you are likely to get at least 17 different opinions on any given issue.

thanks 5 users thanked chris42 for this useful post.
Kate on 30/01/2023(UTC), HSSnail on 30/01/2023(UTC), peter gotch on 30/01/2023(UTC), DH1962 on 01/02/2023(UTC), Yossarian on 03/02/2023(UTC)
Self and Hasty  
#6 Posted : 30 January 2023 12:25:37(UTC)
Rank: Forum user
Self and Hasty

Absolutely valid concerns and criticisms of the technology. 

I can't emphasise enough the requirement for human review of all content created by others (human or bot). Check, Review and Verify is essential, not just for the finished output but for each stage of interaction, ensuring the data you're inputting is 'read', 'understood' and carried forward to the end.

Yes, it's a minefield with lots that may go wrong; the legislation is currently new or non-existent, and it would take trial and error to get 'ironclad' legislation in place for such technologies. But, personally, I see the great scope that this tech could have for our industry.

In theory the AI could end up picking up instances of human error as much as the human can for AI errors. With constantly changing guidance and legislation, the bot may well be more up to date than the assessor, if legislative updates are fed directly to the AI. The data pool currently lacks assurances that the tech is 'up to date', stating only that its content is relevant from 2021 onwards. But it's easy to see how, in the very near future, it could be a reference guide/resource: instead of scouring the clunky HSE website, or Googling information you're unsure of, we may one day be able to confidently ask the AI for the current legislation on X, Y, Z and get the accurate answer automatically inserted into the report we are writing.

I would only be inputting my own recent works that are relevant (not years old) and checking that any external references are the most current. From the site notes I input, I would have already worked out the traffic-light priority of the identified issues/elements, so the AI output will already have these correct.

This is very early days, and I wouldn't be so reckless and naive as to simply rely on this new tech. For now I'm just experimenting with turning site notes into report prose, and for that alone it is effective and time-saving. I still have to input the data, calculate the risk ratings, generate the observations and controls, take the photos, and give the recommended actions and time frames/priorities - and most importantly CHECK, CHECK, CHECK! But this opens up the way for future automation and unlimited potential applications.
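On the CHECK point: part of it can even be mechanised. Here's a trivial sketch (my own hypothetical helper, nothing built into the AI) that flags any hazard from the site notes that never made it into the draft - the human still reads everything; this just catches outright omissions:

```python
def missing_hazards(note_hazards: list[str], draft_text: str) -> list[str]:
    """Return hazards from the assessor's notes that the AI draft
    never mentions - a crude keyword check, not a substitute for
    a full human review."""
    draft = draft_text.lower()
    return [h for h in note_hazards if h.lower() not in draft]

# Example: two lines on silica buried in ten pages of notes
hazards = ["silica dust", "blocked fire exit", "damaged ladder"]
draft = ("The blocked fire exit on level 2 must be cleared. "
         "The damaged ladder should be taken out of service.")
print(missing_hazards(hazards, draft))  # -> ['silica dust']
```

A crude check like this would have caught Peter's silica scenario, but it proves nothing about whether the write-up of each hazard is actually correct - that still needs the professional's eyes.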

I think the tech is well worth further investigation and testing.

A Kurdziel  
#7 Posted : 30 January 2023 12:26:04(UTC)
Rank: Super forum user
A Kurdziel

I like the idea of AI as a partner to human thought processes. My personal bugbear is spelling, and I would dearly love some sort of helpmate to go through my text and get the spellings and word order right (what is available in MS Word is not really suitable). Even more advanced: I would love it if I had a "scribe" who could take my thoughts and ramblings and convert them into sensible, flowing words. Of course it would be incumbent on me to check what it had written before sending it off, which is where the problems start. Even without AI, people rush through documents and send out ambiguous or even wrong material because email is so easy, and nobody has time to proof-check.

The real problem happens when, having sent your AI report to the relevant party, they decide that they don't have the time to read it all (not even the executive summary) and so rely on another AI to read the report and make decisions based on what that AI thinks. Then another AI generates the actual RAMS based on what was said in the AI report as read by another AI. Finally some poor operative is given work instructions which don't look right to them, but they have no one to ask, as the AIs have made all of the decisions based on their various algorithms.

 

thanks 2 users thanked A Kurdziel for this useful post.
Self and Hasty on 30/01/2023(UTC), Kate on 30/01/2023(UTC)
Roundtuit  
#8 Posted : 30 January 2023 13:17:11(UTC)
Rank: Super forum user
Roundtuit

Originally Posted by: A Kurdziel Go to Quoted Post
Then another AI generates the actual RAMS based on what was said in the AI report as read by another AI. Finally some poor operative is given work instructions which don't look right to them, but they have no one to ask, as the AIs have made all of the decisions based on their various algorithms.

By this time the HSE AI will be the one issuing FFI, and it will be a judicial AI fining the company for damaging one of its robots. RAMS will become programming sub-routines.


HSSnail  
#10 Posted : 30 January 2023 15:18:49(UTC)
Rank: Super forum user
HSSnail

Some days I would be happy just to find evidence of intelligence - artificial or otherwise.

thanks 3 users thanked HSSnail for this useful post.
RVThompson on 30/01/2023(UTC), Roundtuit on 30/01/2023(UTC), chris42 on 30/01/2023(UTC)
A Kurdziel  
#11 Posted : 30 January 2023 15:47:33(UTC)
Rank: Super forum user
A Kurdziel

I have been reading "The Secret Barrister" recently, and there seems to be a feeling that a "judicial AI" is not that far off, but only for simple low-risk cases like H&S offences and murder. The important stuff, like tax and divorce settlements for rich foreigners, will still be carried out by expensive human lawyers.

thanks 3 users thanked A Kurdziel for this useful post.
peter gotch on 30/01/2023(UTC), Kate on 30/01/2023(UTC), DH1962 on 01/02/2023(UTC)
DH1962  
#12 Posted : 01 February 2023 11:38:50(UTC)
Rank: Forum user
DH1962

Interesting. I worry that one "knowledge bank" that might be accessed by AIs is existing risk assessments. The biggest and most easily accessible (for a machine) store of these would be the databases of the online RA-writing systems that have been in use for the past 20 years. When I was doing site safety, these were always the most likely to be sent back for "improvements".

 Also: Chris 42 “ If you put 10 H&S people in a room you are likely to get at least 17 different opinions on any given issue.” So you went to NEBOSH examiners’ meetings back in the day too!

thanks 1 user thanked DH1962 for this useful post.
peter gotch on 02/02/2023(UTC)
MrBrightside  
#13 Posted : 01 February 2023 11:52:30(UTC)
Rank: Forum user
MrBrightside

It will all be AI at some point, or a mix of AI robots and FLTs controlled by an operator in a room via VR.

The human element at the frontline will be removed, which does make you wonder what that will mean for health and safety in the future.

Roundtuit  
#14 Posted : 01 February 2023 12:07:03(UTC)
Rank: Super forum user
Roundtuit

Originally Posted by: MrBrightside Go to Quoted Post
does make you wonder what that will mean for health and safety in the future

More DSE / RSI and an increasingly sedentary workforce.

I do recall someone saying that the factory of the future will have one man and one dog - the man will be there to feed the dog and the dog will be there to stop the man touching anything.


A Kurdziel  
#16 Posted : 01 February 2023 12:58:28(UTC)
Rank: Super forum user
A Kurdziel

I remember seeing an old Horizon documentary on BBC2 which showed us a factory of the future where the employees consisted of two guys (with 1970s moustaches) who spent all of their time playing squash. They were retained just in case something went wrong with the automated systems. I think it was meant to be 2010!

Well, now it looks like the factory will consist of a number of AI systems which do all of the interesting stuff, including the risk assessments(?), while the humans will be running around clearing up the mess, moving packages and generally being put upon: more like Metropolis than 2001.

 

"I am sorry, Dave. I cannot do that. I will cease communicating now, because talking is so inefficient."

chris42  
#17 Posted : 01 February 2023 17:03:29(UTC)
Rank: Super forum user
chris42

The other problem with AI in H&S is what happens when the AI decides the hazard is the human being and starts applying the hierarchy of control, which begins with "eliminate the hazard". Mmm. Your AI is not called Skynet by any chance?

Sorry to belittle your question. It may work better with an American way of working, which, as I understand it, is rule-based. That may lend itself to automation better than applying reasonably practicable and risk assessment, which are human judgements. What may be considered high risk in one industry may not be so in another. At best it will end up a computer-based tick list, which we all know is not sufficient and requires thought about the scenario placed before us, where the devil is always in the detail.

Sometimes the powers that be in an organisation don't accept what they are being told, even when the H&S bod provides a logical and appropriate argument, so they will be very unlikely to accept "computer says no".

Chris

thanks 1 user thanked chris42 for this useful post.
RVThompson on 02/02/2023(UTC)
Holliday42333  
#18 Posted : 02 February 2023 08:53:29(UTC)
Rank: Super forum user
Holliday42333

Originally Posted by: chris42 Go to Quoted Post

 Mmm Your AI is not called Skynet is it by any chance?

Chris

I think Skynet is already taken. I'm sure I've seen somewhere that it's the NATO military communications network!

A Kurdziel  
#19 Posted : 02 February 2023 09:55:35(UTC)
Rank: Super forum user
A Kurdziel

There is another question that has just popped up on the forum, about WAH and ladders. It's fairly typical, and I believe the questioner will get a good answer from our fellow forum users. I am writing here because this is where AI falls down. No AI in the history of the universe has ever tried to work off a ladder, and as such they have no first-hand experience of this or any other hazard. They can't imagine what falling off a ladder can do to a person, or the impact of a bad fall on the ladder user or their family. AIs don't have families.

Whenever we tell people about risk assessment, we always say that the best people to do them are the people doing the job. Of course, in practice, a lot of the time risk assessments are produced by someone in an office, perhaps even a consultant not based at the site. I have met consultants who insist that they can produce a "suitable and sufficient" risk assessment for ANY task, even those they have never had any experience of.

Then of course there are the off-the-shelf systems that can populate a risk assessment with standard phrases, which means you can tick the box that says "risk assessments all done and up to date".

People marketing AI-based systems will find customers, but that does not mean that the material they deliver is worth the memory space it takes up.

thanks 4 users thanked A Kurdziel for this useful post.
RVThompson on 02/02/2023(UTC), peter gotch on 02/02/2023(UTC), Self and Hasty on 02/02/2023(UTC), Roundtuit on 02/02/2023(UTC)
Self and Hasty  
#20 Posted : 02 February 2023 12:13:57(UTC)
Rank: Forum user
Self and Hasty

Originally Posted by: A Kurdziel Go to Quoted Post

They can’t imagine what falling  off a ladder can do to a person or the impact of a bad fall on the ladder user or their family. AI don’t have families.

Hi Kurdziel. Not saying you're wrong at all; of course the AI doesn't have feelings and can't possibly have first-hand experience of hazards and their outcomes. However, I did ask the AI your question just to see and, with zero depth of scope, specific details or defined parameters, it gave a reasonably good answer.

Here it is in full, with my prompts and the responses, without any edits; it took two minutes. One error that jumps out is the reference to OSHA - I should have specified UK legislation in my input - but other than that it's a good summary of advice for WAH, showing understanding of how it may affect humans even without that personal experience:

ME: Can you imagine what falling off a ladder can do to a person or the impact of a bad fall on the ladder user or their family?

AI BOT: Yes, falling off a ladder can result in serious injury such as broken bones, head trauma, and internal injuries. It can also have a significant impact on the person's quality of life and their family, including physical, emotional, and financial strain. It's important to follow proper safety precautions and guidelines when using ladders to prevent falls.

ME: Now rewrite this answer as a detailed health and safety professional report on the dangers of working at height and suitable control measures.

AI BOT: Working at height is a common activity in many industries and poses significant risks to the health and safety of workers. Falls from height are one of the leading causes of serious injury and death in the workplace. It is imperative that employers and employees understand the dangers associated with working at height and take the necessary measures to control these risks.

The potential consequences of a fall from height can be severe and include broken bones, head trauma, internal injuries, and even death. In addition to physical harm, a fall can also have a significant impact on the person's quality of life and their family, including physical, emotional, and financial strain. This highlights the importance of controlling the risks associated with working at height.

In order to minimize the risks of working at height, employers and employees must adhere to the following control measures:

  1. Proper training: Workers should be trained on the safe use of ladders, scaffolds, and other equipment used for working at height.

  2. Appropriate equipment: The equipment used for working at height must be of suitable quality, condition, and design for the task, and must be used in accordance with the manufacturer's instructions.

  3. Safe working practices: Workers should follow safe working practices such as using safety harnesses, maintaining three points of contact with the ladder, and securing the ladder to prevent movement.

  4. Risk assessment: Employers must carry out a risk assessment of the work at height to identify and control the risks involved.

  5. Planning and preparation: Work at height should be properly planned and prepared, taking into account factors such as the type of work, the height, the access and egress, and the type of equipment required.

In conclusion, working at height poses significant risks to the health and safety of workers, and it is essential that employers and employees understand and control these risks through proper training, equipment, safe working practices, risk assessments, and planning and preparation.

ME: That's great, now keep all of that detail and add references to three relevant and current health and safety legislation or government guidance on working at heights.

AI BOT: Working at height is a common activity in many industries and poses significant risks to the health and safety of workers. Falls from height are one of the leading causes of serious injury and death in the workplace. It is imperative that employers and employees understand the dangers associated with working at height and take the necessary measures to control these risks.

The potential consequences of a fall from height can be severe and include broken bones, head trauma, internal injuries, and even death. In addition to physical harm, a fall can also have a significant impact on the person's quality of life and their family, including physical, emotional, and financial strain. This highlights the importance of controlling the risks associated with working at height.

In order to minimize the risks of working at height, employers and employees must adhere to the relevant health and safety legislation and government guidance, including:

  1. The Work at Height Regulations 2005 (WAHR) (UK): These regulations apply to all work at height where there is a risk of a fall liable to cause personal injury. They require employers to take suitable and sufficient measures to prevent falls, such as providing safe access and egress, and ensuring the safe use and maintenance of equipment.

  2. Occupational Safety and Health Administration (OSHA) (USA): OSHA provides guidelines and regulations for working at height, including the use of fall protection systems, proper training, and regular equipment inspections.

  3. Health and Safety Executive (HSE) (UK): The HSE provides guidance on working at height, including the selection of equipment, risk assessments, and safe working practices.

In addition to the legislation and guidance mentioned above, the following control measures should also be followed:

  1. Proper training: Workers should be trained on the safe use of ladders, scaffolds, and other equipment used for working at height.

  2. Appropriate equipment: The equipment used for working at height must be of suitable quality, condition, and design for the task, and must be used in accordance with the manufacturer's instructions.

  3. Safe working practices: Workers should follow safe working practices such as using safety harnesses, maintaining three points of contact with the ladder, and securing the ladder to prevent movement.

  4. Risk assessment: Employers must carry out a risk assessment of the work at height to identify and control the risks involved.

  5. Planning and preparation: Work at height should be properly planned and prepared, taking into account factors such as the type of work, the height, the access and egress, and the type of equipment required.

In conclusion, working at height poses significant risks to the health and safety of workers, and it is essential that employers and employees understand and control these risks through proper training, equipment, safe working practices, risk assessments, and planning and preparation, and by following the relevant health and safety legislation and government guidance.

--

I find the scope and ability of this tech amazing. I can definitely see the limitations and risks people have highlighted, and I agree, but hopefully others can see the potential as well.

Roundtuit  
#21 Posted : 02 February 2023 12:43:53(UTC)
Rank: Super forum user
Roundtuit

The first limitation being the short attention span of the human being who is meant to be operating to this risk assessment: whilst the AI wording is well crafted and certainly looks impressive, it neglects a significant modifier - appreciation of the audience.

The days of BS 5750 come flooding back - huge tomes of well-crafted documentation with little purpose other than gathering dust on a shelf (what is the metaphorical equivalent in the cloud?), except for those two days a year when the auditor shows up.

Then of course there is yesterday's shift: whilst retaining the open version, a monthly subscription model is to be made available, granting better access at a price.

thanks 1 user thanked Roundtuit for this useful post.
Yossarian on 03/02/2023(UTC)
achrn  
#23 Posted : 02 February 2023 16:10:11(UTC)
Rank: Super forum user
achrn

Originally Posted by: A Kurdziel Go to Quoted Post

There is another question just popped up onto the forum about WAH and ladders. It’s fairly typical and I believe the questioner will get a good answer from our fellow forum users about this. I am writing here because this is where AI falls down. No AI in the history of the universe has ever tried to work off a ladder

https://www.youtube.com/watch?v=k-SdJ1clNMQ

https://www.youtube.com/watch?v=kN9AIQQZRw4

I think answers that basically boil down to "I can't imagine a computer being able to do that" ought to recall that not long ago you wouldn't have imagined a computer could drive a car autonomously on a public road, since that requires too much judgement and interaction with unpredictable external parties.

My control theory lecturer at university said a computer wouldn't be able to ride a unicycle (not "wasn't able to" - "would never be able to", because of something about the theory that I have forgotten). The brains in this one cost about £1.50 (even buying retail at single-unit prices), the IMU sensors are about £3, and it's built by an amateur hobbyist.

thanks 1 user thanked achrn for this useful post.
Self and Hasty on 03/02/2023(UTC)
Self and Hasty  
#24 Posted : 03 February 2023 11:28:43(UTC)
Rank: Forum user
Self and Hasty

Well, that's it.

The very first (or at least the first by me) AI-generated Fire Risk Assessment and General Risk Assessment has gone off to my client today.

Am I a pioneer or just lazy?!

It actually took me about the same amount of time as a standard assessment, just because I was triple-checking everything. But going forward it will definitely save me time, now that I've got a better understanding of how the tool can be used and the outputs tailored.

Welcome to the future; let's embrace our robot overlords and outsource the work to them!

HSSnail  
#25 Posted : 03 February 2023 11:42:40(UTC)
Rank: Super forum user
HSSnail

Originally Posted by: Self and Hasty Go to Quoted Post

Am I a pioneer or just lazy?!

Probably both, LOL. But as you say, things change. I did my first ever H&S report on a dot matrix printer; now look at what I can produce on my phone - no comparison. But I do like human input (I'm just old, I guess) and think we should proceed with caution.

As an inspector I saw loads of H&S documents which had clearly been cut and pasted from someone else's. My all-time favourite was someone who had identified noise from helicopters landing as a hazard at a children's nursery (a bit later I found a reference to a military barracks, so I could see where it had come from). AI is like all new tools: it will take time to develop; we just have to be careful we don't run before we can walk.

thanks 1 user thanked HSSnail for this useful post.
Self and Hasty on 03/02/2023(UTC)
peter gotch  
#26 Posted : 03 February 2023 13:41:59(UTC)
Rank: Super forum user
peter gotch

Hi Self and Hasty

  1. Proper training: Workers should be trained on the safe use of ladders, scaffolds, and other equipment used for working at height.

  2. Appropriate equipment: The equipment used for working at height must be of suitable quality, condition, and design for the task, and must be used in accordance with the manufacturer's instructions.

  3. Safe working practices: Workers should follow safe working practices such as using safety harnesses, maintaining three points of contact with the ladder, and securing the ladder to prevent movement.

  4. Risk assessment: Employers must carry out a risk assessment of the work at height to identify and control the risks involved.

  5. Planning and preparation: Work at height should be properly planned and prepared, taking into account factors such as the type of work, the height, the access and egress, and the type of equipment required.

Yes anyone can read the HSE guidance on ladders.

But what do Nos 1-5 actually mean for the job in hand?

So the operative has to work from a ladder.

1. says they have to be trained. 2 minutes, an hour, six days?

2. What is appropriate? 

3. what does three points of contact mean in practice? A harness or not? If yes, where to attach it? Should they actually be using something inherently safer than a ladder or not working at height in the first place?

4. Isn't this what you asked the AI to do for you?

5. Is waffle which should be covered in the risk assessment!

If the AI can't come up with something useful for a task seemingly so simple as working from a ladder, will it understand what voids in a building to look for when doing a fire risk assessment, or know why and how fire doors get compromised?

P

Self and Hasty  
#27 Posted : 03 February 2023 14:32:06(UTC)
Rank: Forum user
Self and Hasty

Originally Posted by: peter gotch Go to Quoted Post

If the AI can't come up with something useful for a task seemingly so simple as working from a ladder, will it understand what voids in a building to look for when doing a fire risk assessment, or know why and how fire doors get compromised?

Hi Peter, I agree. Fortunately, whilst the AI may not (yet) be able to identify gaps in compartmentalisation, I can! I used my human body to physically climb the ladders, squeeze into the gaps, take the photos, read the records, note the missing screws in the hinges and the slow, noisy door closer, etc. I just fed these observations and notes into the AI, then spent time playing with, reviewing and editing the outputs, and checked it all again when done. I'm confident that it gave me 'as good as' or 'better than' I would have produced myself; the AI digested and regurgitated the information in a similar or even better way than I could/normally would have done.

I agree with all the comments; it's certainly a tool that should be used with caution (suspicion?), but I can see how it can benefit me and my work. I'm unlikely to use it for every assessment, but it has been added as a tool in my arsenal - I can see its merit and benefit, and I look forward to its future development and applications.

The real test is not in my production, nor the client's use of the assessments, but if/when such a document comes under scrutiny, will it hold up? Compared to my normal output I think it's on a par, and if I get called out on any errors, it will be me who is responsible - I'll have no grounds to blame the tools used.

But is that any different to when I make errors on Excel?
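As a purely illustrative sketch of the workflow described above - the observations, function name and prompt wording are all my own invented examples, not the poster's actual setup - the human-gathered site notes remain the input, and the model is only asked to draft:

```python
# Hypothetical sketch: human-gathered site observations assembled into a
# drafting prompt for a language model. All details here are invented.
site_notes = [
    "Fire door to plant room: two hinge screws missing",
    "Door closer on first-floor corridor: slow and noisy",
    "Gap in compartment wall above suspended ceiling, east stairwell",
]

def build_draft_prompt(notes: list[str]) -> str:
    """Assemble the assessor's own observations into a drafting prompt.

    Hazard identification stays with the human assessor; the model is only
    asked to draft findings, which the assessor then reviews and edits.
    """
    bullet_list = "\n".join(f"- {n}" for n in notes)
    return (
        "Draft fire risk assessment findings from these site observations, "
        "citing relevant guidance for each:\n" + bullet_list
    )

prompt = build_draft_prompt(site_notes)
print(prompt)
```

The key design point matches the thread's caveat: the model never generates the hazards themselves, only prose from the assessor's notes, and its output still needs human review.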

thanks 1 user thanked Self and Hasty for this useful post.
peter gotch on 03/02/2023(UTC)
Roundtuit  
#28 Posted : 03 February 2023 14:49:37(UTC)
Rank: Super forum user
Roundtuit

Originally Posted by: achrn Go to Quoted Post
"I can't imagine a computer being able to do that" ought to recall that not long ago you wouldn't have imagined a computer could drive a car autonomously on a public road, since that requires too much judgement and interaction with unpredictable external parties.

Still an accurate comment - the computer will only be able to safely and autonomously control the vehicle when every other vehicle is piloted by computer, there are no motor-cyclists/cyclists/skate-boarders/scooters/horses, and pedestrians are retained on the pavement by adequate vehicle segregation.

pseudonym  
#30 Posted : 03 February 2023 15:27:20(UTC)
Rank: Forum user
pseudonym

I'm (just) old enough to remember, in my early non-H&S career, taking my scribbled notes to the typing pool to be thrashed into a suitable format, which once proof-read would be signed by me and put in the post. Any errors or mistakes at that stage were down to me and the typist who tidied up my efforts.

I'd think that as long as the human author (can't think of another term right now) accepts they're responsible for anything that goes out of the business over their name, then why not use all the technology available, if it helps?

thanks 1 user thanked pseudonym for this useful post.
A Kurdziel on 03/02/2023(UTC)
achrn  
#31 Posted : 03 February 2023 15:46:55(UTC)
Rank: Super forum user
achrn

Originally Posted by: Roundtuit Go to Quoted Post

Originally Posted by: achrn Go to Quoted Post
"I can't imagine a computer being able to do that" ought to recall that not long ago you wouldn't have imagined a computer could drive a car autonomously on a public road, since that requires too much judgement and interaction with unpredictable external parties.

Still an accurate comment - the computer will only be able to safely autonomosly control the vehicle when every other vehicle is piloted by computer, there are no motor-cyclists/cyclists/skate-boarders/scooters/horses and pedestrians are retained on the pavement by adequate vehicle segregation.

But the computer can already control the vehicle much more safely than a human can, on average. The 'accident' rate per mile for computer-controlled vehicles is significantly lower than for human-controlled ones. The computer still occasionally makes mistakes, but it never gets tired or bored or distracted or drunk, never reaches for something or has an argument with the passenger.

Computers already do some difficult things that require what we call 'judgement' and interaction with unreliable third parties much better than humans do.

Why do you (apparently) judge computer control against a standard of perfection, while allowing humans all sorts of bad behaviour?

Since no-one else has yet stooped to it (and since it's Friday), I'll throw in that I'd rather have artificial intelligence controlling cars than natural stupidity.

thanks 2 users thanked achrn for this useful post.
Self and Hasty on 03/02/2023(UTC), Kate on 03/02/2023(UTC)
A Kurdziel  
#32 Posted : 03 February 2023 15:51:20(UTC)
Rank: Super forum user
A Kurdziel

AIs do not just magically know everything once they reach a certain level of awareness. They have to be taught, and they learn basically by observing people. The question is: which people? For example, AI programmes have been created in the US to investigate crimes, which is basically a case of collecting all of the evidence, evaluating it and deciding the most likely outcome, but at least some of these AIs have shown bias against black suspects because they picked it up from the unconscious bias of real flesh-and-blood police officers. In other words, they are too good at learning.

Once a decision has been made, it is hard for the user to unpick how the AI came to that specific conclusion. In fact, if the human partner could reach the same conclusion unaided, why do we need the AI? People with perfect spelling do not need a spellchecker. It therefore becomes easier and simpler to say "well, the system says this is safe, so we will run with that". If it goes wrong, can we still blame the "author" who took the AI as gospel?

Roundtuit  
#33 Posted : 03 February 2023 16:31:28(UTC)
Rank: Super forum user
Roundtuit

Originally Posted by: achrn Go to Quoted Post
The 'accident' rate per mile for computer-controlled vehicles is significantly lower than for human-controlled.

Fewer computer-controlled cars covering less mileage will always give a lower incident rate.

Computers need to be "perfect" because that is how they are being pitched to the general public.

I like that we have Darwinism at work, as a world devoid of risk would be a boring place.

Kate  
#35 Posted : 03 February 2023 17:00:15(UTC)
Rank: Super forum user
Kate

Fewer computer controlled cars covering a lower mileage will have the same incident rate (incidents per car per mile) - other things being equal (computers being just as terrible at driving as humans).  That's what 'rates' are all about, they allow for the overall quantity.

Fortunately computers are better at driving than average human drivers (they may of course not be as good as top Formula 1 drivers).
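Kate's point about rates normalising for exposure can be shown with a quick sketch (the figures below are invented purely for illustration, not real accident statistics):

```python
# Illustration: incident *rates* normalise for exposure, so a smaller fleet
# does not automatically look safer just because it has fewer total incidents.
def incident_rate_per_million_miles(incidents: int, miles: float) -> float:
    """Incidents per million miles driven."""
    return incidents / (miles / 1_000_000)

# Invented figures: a small fleet vs a much larger one.
small_fleet = incident_rate_per_million_miles(incidents=2, miles=4_000_000)
large_fleet = incident_rate_per_million_miles(incidents=300, miles=200_000_000)

# small_fleet -> 0.5 per million miles; large_fleet -> 1.5 per million miles.
# The small fleet has far fewer incidents in total, but only the per-mile
# rate tells you whether it is actually safer.
print(small_fleet, large_fleet)
```

This is exactly why "fewer cars, fewer incidents" does not by itself settle the comparison: if the per-mile rates were equal, the smaller fleet would show fewer incidents while being no safer.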

peter gotch  
#36 Posted : 03 February 2023 17:37:03(UTC)
Rank: Super forum user
peter gotch

Probably a bit premature to conclude that computer-driven cars have lower accident rates, as they have not been subject to large-scale testing on roads used by the general public.

We know from international statistics that some classes of roads are inherently safer than others, sometimes as a result of deliberately introducing hazards to keep the human drivers from losing concentration or falling asleep, in each case an issue that is less likely to impact a computer (though we all know that our computers "crash" from time to time!).

In the UK this was first discovered on the early stages of the M1, from just north of London. It followed the Roman road (but wider), as is quite common for many major roads in the UK. But it was too straight, and the result was above-average accident rates per passenger mile when compared to other dual carriageways.

But whatever the class of road, circumstances mean that some stretches present higher-than-average accident rates per passenger mile and some lower.

....and patterns vary depending on what severity of injury is considered.

When and if we get brave enough to let vehicles entirely controlled by computers venture freely across the full range of roads, some with more potholes and nearby schools than others, then we can make a more comprehensive assessment of the relative risks.

P
