Hi Toe
The problem is that, to some extent, some initiatives can be self-fulfilling for various reasons, such that the Cause-Effect relationship might be INDIRECT rather than DIRECT, a mix of both, or mostly coincidental.
Here we have an initiative that sought to influence driver behaviour by getting passengers to comment on their perception of the driving, either by speaking to the driver or by ringing a hotline (there are potential advantages and disadvantages to both options).
The first thing to consider is the Hawthorne Effect. The simple fact that the drivers know that someone is taking an initiative is likely to result in change, but whether that change will last is far from guaranteed.
Then there are all the other variables, some within the control of the minibus operators, some not, that the research does not appear to have considered.
So, as examples within the control of the minibus operators: safer vehicles and safer route planning. Suppose a bus company simultaneously introduced these "stickers" AND replaced buses AND improved maintenance regimes AND made route planning more realistic (i.e. allowing more time for the foreseeable and unforeseen delays faced by drivers) AND removed any financial and other incentives to hit deadlines AND so on and so on........
Maybe the bus company doesn't really want to say that their buses were UNSAFE before, and that the management was INFLUENCING unsafe driver behaviour, so ALL the apparent reduction in accidents APPEARS to be a result of these "stickers".
When did you last work for an employer who openly admitted that their H&S performance had been far from satisfactory?!?!
There is also the possibility that an initiative designed with a stated objective of reducing the number of reported incidents simply results in fewer of the incidents that do occur being reported.
Then there are all the things that are outside the bus company's control. The authorities take action to make the roads inherently safer - you get fewer accidents, BUT that reduction for the bus company could be attributed to the "stickers" rather than the changing CONDITIONS - and that implicitly suits those advocating the sticker initiative.
To assess the real impact of any initiative you have to look at the longer term AND attempt to isolate the impacts of whatever else is changing, and that is very hard to do.
Looking at what happened over 6 months is far too short a timescale. Suppose you were, in effect, comparing what happened in last year's rainy season with what happened after the stickers were introduced, when it wasn't raining so often?
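To make that concrete, here is a very rough sketch (Python, with entirely invented numbers - none of them come from the actual study) of how a naive before/after comparison can be confounded by season: the apparent improvement after the stickers may simply reflect fewer wet-weather journeys in the "after" window.

```python
# Hypothetical illustration only - all figures are invented for the example.
# Incidents per 1,000 trips, split by weather, before and after the stickers.

rate_wet = 4.0   # assumed incident rate on wet-season trips (per 1,000 trips)
rate_dry = 2.0   # assumed incident rate on dry-season trips (per 1,000 trips)

# "Before" window falls mostly in the rainy season; "after" window mostly dry.
before_trips = {"wet": 8_000, "dry": 2_000}
after_trips = {"wet": 2_000, "dry": 8_000}

def expected_incidents(trips):
    """Expected incidents if the stickers change NOTHING about driver behaviour."""
    return (trips["wet"] * rate_wet + trips["dry"] * rate_dry) / 1_000

before = expected_incidents(before_trips)   # 36 incidents
after = expected_incidents(after_trips)     # 24 incidents

# A naive before/after comparison "shows" a 33% reduction that is entirely
# down to the weather mix, not the stickers.
print(f"Before: {before:.0f}, After: {after:.0f}, "
      f"apparent reduction: {(before - after) / before:.0%}")
```

The same arithmetic works for any confounder - new buses, better roads, different routes - not just the weather.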
Arguably the main reason why the UK (amongst other countries) has reduced the number of Fatal and Serious Injuries over several decades is more about safer ROADS and safer VEHICLES than about driver behaviour.
Even to the extent that road designers sometimes deliberately introduce apparently unnecessary hazards, with a view to reducing the risk of drivers falling asleep at the wheel (something you mention happening in Africa).
Driver fatigue from monotony was perhaps first noticed in the UK in the first years after the construction of the initial section of the M1 motorway, from just North of London to Rugby. The road was too straight and flat, people lost concentration, and accidents occurred.
Exactly the same phenomenon has been researched in other countries such as the US, Australia and India.
....and over the decades vehicles have become much safer for the occupants but less so for those outside - hence the Department for Transport keeps data for "vulnerable road users" - including, inter alia, pedestrians and cyclists.
LOTS of supposedly brilliant initiatives proving someone's pet theory turn out to be less brilliant when subjected to scrutiny.
Just yesterday I read someone's thesis. They had looked at safety (not health) performance in the last two months of a project. A project that would have taken at least 6 months on site, probably more. MOST of the higher risk activities would have been completed before the last two months.
There were lots of statistics, despite fewer than 250,000 hours worked. Lots of variables were reported at four levels of performance - in simple terms Very Good, Good, Poor and Very Poor - and colour coded from Green to Red.
However, as the data set was so small, the Standard Deviation (SD) was such that in some categories the ACTUAL performance could have moved from Green to Red, or vice versa, within a single Standard Deviation. So the colour coding was in effect almost meaningless.
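As a rough illustration of why (Python again, with hypothetical counts and invented band thresholds - not the thesis data), with very few recorded incidents the sampling uncertainty is wide enough that the same category could plausibly sit in almost any colour band:

```python
import math

# Hypothetical illustration only - not the figures from the thesis.
# Invented banding for one category, in incidents per 100,000 hours worked:
#   Green < 2, Light green 2-4, Amber 4-6, Red > 6

hours_worked = 50_000        # small exposure for this one category
observed_incidents = 2       # tiny count, as you get on a short project

rate = observed_incidents / hours_worked * 100_000            # 4.0 per 100k hrs
# For a small count, a Poisson standard deviation is roughly sqrt(count).
sd_rate = math.sqrt(observed_incidents) / hours_worked * 100_000   # ~2.8

low, high = rate - sd_rate, rate + sd_rate
print(f"Rate {rate:.1f} per 100k hours; +/- 1 SD gives {low:.1f} to {high:.1f}")
# One SD either way spans roughly 1.2 to 6.8 per 100k hours, i.e. anywhere
# from the Green band to the Red band - so the colour assigned to this
# category is little more than noise.
```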
Not saying that the bus stickers didn't have some impact. Just questioning whether all the other variables were adequately considered.