Engage EHS was recently invited to attend a round table discussion between software providers, health and safety managers from leading organisations and other experts, considering how the use of technology to manage workplace health and safety is changing. Current trends were highlighted, such as how the ubiquity of mobile phones is changing the way we interact with desktop systems. GPS positioning, cameras and QR code scanners are available on devices we’re already carrying around with us; combined with mobile apps and safety management systems, they make audits, incident reporting and inspections quicker, more accurate and more informative. The application of social media techniques in the workplace was discussed as a means of involving workers through improved communication and the sharing of ideas. Increased use of wearables to collect data that protects worker health is another growth area. Data analytics has improved the reporting available from the mass of data collected, with automatically tailored reports available for managers at all levels.
There was also discussion about technology that exists but is not yet widely adopted – augmented reality for providing “heads-up” information to field staff, virtual reality for training, and various forms of artificial intelligence such as voice and image recognition, natural language processing and machine learning.
Amidst the enthusiasm for new and changing technologies, a note of caution was sounded. With all the increases in functionality (and with smaller and smaller devices to access it on) there is a danger that usability falls behind, and whilst a younger generation might be happy to learn how to swipe, gesture and talk to inanimate objects, some people might get left behind. The software developers all confirmed that usability was at the heart of everything they provided.
Where technology not only provides information about your processes but is involved in running a process, the critical nature of ease of use becomes even more apparent. In their own investigation of the circumstances at Alton Towers that led to leg amputations for two young women, as well as other serious injuries, the ride’s operator, Merlin, had initially concluded that the cause of the accident was “human error.” This statement was later withdrawn. The HSE investigation identified that the whole system “from training through to fixing faults, was not strong enough to stop a series of errors by staff when working with people on the ride.” The final straw, however, was a misunderstanding of the technology by one of the engineers called in to deal with a problem on the ride. The computer showed that there was a ride car on the track, and this was preventing other cars from running. The engineer believed this to be a mis-read by the technology, and so over-rode the “zone stop” function without first making a visual check of the location indicated by the computer.
Such “human errors” as a result of overly complex and poorly designed systems are not new.
In 1979 the nuclear facility at Three Mile Island in Pennsylvania, USA, suffered a meltdown. Initially, systems had responded as they should – automatic shutdown processes had started, and a pressure release valve (PRV) opened. However, the PRV was supposed to close again afterwards. Operators misunderstood the information they had, partly due to a lack of training, and partly due to the presence of 750 alarms, some of which would have helped the operators had they not been displayed on a remote screen facing away from the operators’ position. As coolant escaped through the open valve, pressure readings increased, so operators shut down the water pumps to relieve the pressure. The uranium core was exposed, and the meltdown occurred. Whilst the plant was lost, with the financial consequences this implied, there were fortunately no injuries or adverse health effects from the incident.
A few years later, in 1986, a much worse nuclear disaster occurred at Chernobyl, killing 31 people in the short term, and with consequences still causing harm to thousands of people, to animals and to the environment thirty years later. As before, there were a large number of organisational factors – in this case, the organisation was the Soviet Union. The choice of nuclear reactor design, the culture within which people worked and lived, and inadequate training were all factors in the accident. But again, the final straw appears to have been a misunderstanding of how the technology worked. A button was pressed that shouldn’t have been pressed, which caused extra control rods to drop into the reactor. Normally this might be expected to slow the reaction, but at Chernobyl it displaced coolant, increasing the heat and leading to an explosion. Poor design and a lack of resilience in the physical structure meant that a minor “human error” had consequences which reached through Scandinavia as far as the Arctic, and across the UK to Scotland, Northern Ireland, Cumbria and North Wales.
In all these cases, disaster occurred because of a combination of organisational and technical failures. As the round table discussion illustrated, software developers and the organisations using their software to manage or control processes must work and talk together – the developers to improve usability, and the operational organisations to make sure their processes make the best use of technology to prevent future disasters.
For more related articles and infographics, please check out our Ultimate Guide To Health and Safety Software.