We must protect workers from any ‘sorcerer’ scenarios

New technologies and artificial intelligence are already proving to be life-changing. But is this at any cost to workers’ safety and health? IOSH policy specialist Dr Ivan Williams Jimenez considers the issues

It’s an all-time Disney classic. Mickey Mouse plays the young apprentice, told by the sorcerer to fill a cauldron with water while he’s away. The overconfident apprentice uses one of his master’s best spells to enlist the help of a broomstick in completing the task, with the least amount of energy expended on his part.

After a productive start, the apprentice falls asleep only to wake to find the cauldron overflowing, the room flooded with water and the broomstick continuing to fetch and carry more water. The apprentice didn’t trouble himself to think how the magic could be turned off and the broomstick brought to heel. He panics and attacks the broom, only to create a whole new family of brooms that multiply the chaos.

Thankfully, an angry sorcerer returns to restore order, but the apprentice’s credibility lies in tatters.

I was reminded of this iconic scene from ‘Fantasia’ when I read how TUC research had shown that most workers have experienced intrusive surveillance technology and artificial intelligence (AI), saying they believe it risks “spiralling out of control” without stronger regulation to protect them.

The TUC report warned that modern technologies could lead to widespread discrimination, work intensification and unfair treatment if left unchecked. The pandemic had encouraged a significant increase in workplace surveillance technology, it was claimed, as employers moved to more remote forms of work. AI-powered technologies were even being used to analyse facial expressions, tone of voice and accents to assess candidates’ suitability for roles, it was alleged.

This led to the call for a statutory duty to consult trade unions before an employer introduces AI and automated decision-making systems, as well as legislation enshrining workers’ right to disconnect, plus digital rights to improve transparency around the use of surveillance technology.

AI is a strategic technology that offers many benefits for us all and for the economy. This includes bringing improvements to working conditions. Robotics (including AI and drones) can make work safer by replacing dangerous activities and taking on many of the physical jobs that cause higher levels of injury, or by operating in hazardous environments – mines, mills, farms, laboratories, factories and energy plants, for example. AI can also detect if a worker is not wearing suitable PPE and send an alert to prevent access to a hazardous area.

In short, AI technology has the potential to change our lives – for the better.

Yet it is not a given that AI will work for the benefit of people and be a force for good in society. We have to work at it. So, all of us – OSH professionals, HR specialists, designers, employers and workers – must come together to give AI technologies a human-centred and ethical focus. It’s important that all parties contribute to the implementation stage of AI-driven technology, helping to identify, mitigate and prevent any unintended consequences.

Workers’ fundamental rights must be respected, with human dignity, non-discrimination and the protection of privacy and personal data remaining paramount. If we’re concerned about potential physical and mental harm at work, then we must advocate risk assessment and the ongoing review of new technologies and digital transformation (including AI adoption and human-to-machine communications). We must guard against any inappropriate algorithmic decisions or tracking.

While it’s critical we maximise the many potential occupational safety and health benefits of AI applications, we must also take care to minimise any potential threats. These threats might include, for example:

  • The potential misuse of AI-enabled workplace sensors leading to the tracking of all aspects of worker activity
  • Insufficient collision control where AI-enabled robotic devices and workers share a physical space
  • Any lack of clarity on responsibility for AI-enabled decision-support systems.

What we must have is:

  • Regulatory oversight and checks of AI applications at work (including by OSH departments and authorities)
  • Sufficient training in OSH assistive and collaborative AI
  • Proper consultation with workers whenever new technologies are integrated at work, applying a worker-centred, ‘human-in-command’ approach.

The adoption of AI technologies must be socially responsible. Before any AI-enabled devices or systems are introduced into the workplace, their risks and benefits must be subject to thorough and transparent review. All parties must consider how AI applications will impact the workforce, local community, supply chain and anyone else affected by an organisation’s activities. This will reduce workers’ exposure to serious workplace hazards and ensure risk prevention is designed in from the outset.

If only the sorcerer (and Mickey) had been as socially responsible.

Dr Ivan Williams Jimenez
IOSH Policy Development Manager
