AI at Work: Tackling the work health and safety ethics of artificial intelligence


By Genevieve Knight

While we are all pivoting, innovating and e-volving during COVID-19, perhaps you have noticed some of the news stories highlighting how Artificial Intelligence (AI) can trip up businesses and government bodies, and worried about how you can keep up with the innovation game without similarly falling over.

Even firms viewed as conservative, such as Barclays, are adopting AI, though not always without encountering problems; such problems suggest that AI adoption needs to be integrated into governance and risk systems.

In fact, AI is present in Australia and has been for some time, in use within a range of industries and government bodies for a variety of purposes. NSW Police have said that they have used biometric technology since 2004 and currently use PhotoTrac, a tool that applies AI facial recognition technology to search images taken by NSW Police. Artificial intelligence and machine learning, particularly in the fields of business process innovation, IT systems architecture, change management, human resources, and increasingly also government service design, attract both a high level of public interest and a high media profile.

Algorithms and, more broadly, AI technologies adopted in business that were eventually found to have been poorly designed, inadequately understood or underdeveloped have contributed to some widely reported incidents.

Three relatively well-known examples are the sudden descent in 2008 of Qantas flight QF72, an Airbus A330-303 flying from Singapore to Perth, Australia, which injured 119 passengers and crew (described by the Australian Transport Safety Bureau in its 2008 investigative report); the recent death of a pedestrian during the trial of an autonomous vehicle by the ride-sharing company Uber; and the recently admitted hardship caused by Australia’s social security payment recovery program, nicknamed the Robodebt scandal (a term coined by Paul Henman in his 2017 paper The computer says “DEBT”: towards a critical sociology of algorithms and algorithmic governance, presented at the conference Data for Policy 2017: Government by Algorithm? held in London, UK).

Avoidable organisational malfunctions have often been found to accompany technological or programming flaws, as in the case of the two fatal Boeing 737 MAX aircraft accidents in 2018 and 2019. Here the aircraft manufacturer Boeing was found to have contributed to the crashes by prioritising a quick roll-out into production and cost reductions over aircraft safety, including by withholding critical information about potential safety issues from aircraft pilots (considered by the US House Committee on Transportation and Infrastructure in its 2020 report of preliminary investigative findings, The Boeing 737 MAX Aircraft: Costs, Consequences, and Lessons from its Design, Development, and Certification).

Pressures on production times and costs, paired with poor communications and hubristic management, had already been found to have contributed to the catastrophic accidents affecting NASA’s two Space Shuttle craft, Challenger in 1986 and Columbia in 2003 (these factors were pinpointed in the Report to the President on the Space Shuttle Challenger Accident of 6 June 1986, and in Chapter 7, The Accident’s Organizational Cause, of the 2003 report of the Columbia Accident Investigation Board, Government Printing Office, Washington, D.C.).

Whilst these are examples of major, often catastrophic accidents that remain rare exceptions rather than routine occurrences, they all concerned workplaces and their workers. They also demonstrate the far-reaching effects that poor design can have on others more (aircraft passengers) or less (pedestrians) closely connected to the responsible organisation.

The desire to innovate products or production processes is a prerequisite to the application of novel technologies, such as AI, in the workplace. The introduction of AI in a workplace itself typically signals a major process or product innovation. The challenge that AI-based innovations face is that their impact on users and producers may be hard, if not impossible, to foresee. This is particularly the case during the early implementation stages of an innovation, when modifications and adjustments intended to reduce undesirable risks, or indeed hazards, could still be applied fairly easily and at comparatively low cost. Instead, these risks may only become apparent once it has become too late or too costly to change direction. This is typically referred to as the Collingridge dilemma, after David Collingridge, who described it in his 1980 book The Social Control of Technology.

Join and inform our discussions

The SA Centre for Economic Studies, the Australian Institute for Machine Learning and the Australian Industrial Transformation Institute, in association with the NSW Centre for Work Health and Safety, are currently undertaking a project that aims to develop an understanding of the key ethical considerations for businesses adopting AI technologies, with a particular focus on work health and safety (WHS).

Our project is seeking organisations that are working with artificial intelligence and are willing to take part in further case study discussions in the next few months; all information will be anonymised. If you and your firm are keen to be involved, please contact us at the SA Centre for Economic Studies by 22 September 2020.

 
