AI technology is damaging the mental health of workers, conclude MPs
Despite widespread industry optimism, concerns have been raised about the impact of AI technology on the mental health of employees.
The All-Party Parliamentary Group on the Future of Work (APPG), a cross-party grouping of MPs, peers, and industry leaders focused on workplace technological innovation, has concluded that the use of algorithms to monitor workers and set performance targets needs urgent regulation.
In a report, the informal parliamentary group said: ‘AI offers invaluable opportunities to create new work and improve the quality of work if it is designed and deployed with this as an objective. However, we find that this potential is not currently being materialised. Instead, a growing body of evidence points to significant negative impacts on the conditions and quality of work across the country.’
Specifically, the report found that the pressures of monitoring and target-setting technologies, through automated assessment and real-time micromanagement, are eroding the mental health and physical wellbeing of workers.
Is the growing use of AI making employment more precarious and anxiety-inducing?
Earlier this year, the Trades Union Congress (TUC) came to a similar conclusion, saying that while AI has the potential to improve working lives, new unregulated technologies also pose a considerable risk to the boundaries between home and work and are starting to enable damaging practices, such as workers being ‘hired and fired by algorithm’.
In March, Frances O'Grady, TUC General Secretary, said: “AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work – like who gets hired and fired.”
“Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy.”
A TUC report found that employers increasingly deployed new monitoring technologies throughout the Covid-19 pandemic; many workers considered these intrusive, and in many instances they failed to track absences and performance accurately, causing unnecessary stress and anxiety.
New legislation is necessary to make AI human-centred
The APPG report recommends that the Government bring forward new legislation to update the regulation of AI technologies. The recommendations include:
- Legislating for an Accountability for Algorithms Act (AAA): creating a legal requirement for corporations to assess how the use of AI could affect their workforce.
- Updating digital protection: establishing workers' right to a full explanation of the use, purpose, and metrics of the algorithmic systems deployed in the workplace.
- Enabling a partnership approach: creating a new right for unions to be consulted when risky AI tools are introduced into the workplace.
- Enforcing regulation in practice: expanding the Digital Regulation Cooperation Forum with new regulatory powers.
- Supporting human-centred AI: focusing automation and the use of AI on how they can benefit the workforce.
David Davis, Conservative MP and chair of the APPG, referencing the report's recommendations, wrote on Twitter: ‘AI technologies have spread beyond the gig economy to control what, who and how work is done. An Accountability for Algorithms Act is urgently needed to ensure fairness and transparency in the workplace.’
Clive Lewis, Labour MP and vice-chair of the APPG, commented: “The gaps in regulation are damaging people and communities across the country, and the Government must urgently bring forward robust proposals for AI regulation.”