Artificial Intelligence, Automation and Technological Unemployment
This short reflection is based on a Guardian article titled “Buy now, pay later: AI and the ‘red-light risk’ for millions of Australian jobs”. The article focuses on the ‘savings’ aspect of retail and how this could change the job market: the drive to cut costs, and the ever-tempting idea of reducing human labour to improve margins. It also touches on human rights and the technology behind this development. One news tidbit in particular caught my attention:
“Last month, supermarket giant Coles announced it was working with Microsoft to use the data from its 21 million weekly transactions to “optimise” and simplify its business. The aim is to save $1bn by 2023.”
It struck me that when an algorithm is created, its creator or team may not think through the consequences for possible job losses (although this is much discussed). This brings us to technological unemployment.
Technological unemployment is the loss of jobs caused by technological change. It is a key type of structural unemployment. Technological change typically includes the introduction of labour-saving “mechanical-muscle” machines or more efficient “mechanical-mind” processes (automation). Automation is the technology by which a process or procedure is performed with minimal human assistance.
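To make the ‘mechanical-mind’ idea concrete, here is a purely illustrative sketch in Python. It is not anything Coles or Microsoft has described; the products, numbers, and the reorder rule are all made up. It simply shows how a small rule over aggregated transaction data could take over a task that a person might otherwise do by hand, such as deciding what to reorder:

```python
# Hypothetical illustration of "mechanical-mind" automation in retail:
# a simple rule over (fictional) transaction data decides reorders that
# a person might otherwise have decided manually.

# Units sold per product this week, aggregated from fictional transactions.
weekly_sales = {"milk": 1200, "bread": 950, "olives": 40}

# Current stock on hand per product (also fictional).
stock_on_hand = {"milk": 300, "bread": 800, "olives": 500}

def reorder_plan(sales, stock, cover_weeks=1.5):
    """Reorder enough of each product to cover the next `cover_weeks` of sales."""
    plan = {}
    for product, sold in sales.items():
        target = int(sold * cover_weeks)      # stock we want on hand
        shortfall = target - stock.get(product, 0)
        if shortfall > 0:                     # only reorder what is missing
            plan[product] = shortfall
    return plan

print(reorder_plan(weekly_sales, stock_on_hand))
# {'milk': 1500, 'bread': 625}
```

The point is not the code itself, but that once such rules are in place, a task that used to require a person is performed with ‘minimal human assistance’.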
My question here is not likely to be novel: do developers have a responsibility? The easy answer is of course yes; however, this does not play out in straightforward ways.
When Google was about to sign a large defence deal, there was a big internal protest to stop it, and around 20,000 Google employees participated in the 2018 walkouts. As a result, Google stopped the deal and published its principles for AI. It now seems that some of the organisers were harassed afterwards and have left the company.
Google does have the clear yet ambiguous mantra of ‘don’t be evil’, so it seems employees interpreted Google as moving in an evil direction. Similarly, Amazon employees have protested against the company’s work with ICE and Palantir: the US Immigration and Customs Enforcement agency (ICE) uses Amazon servers and Palantir software for deportations.
We can question whether Microsoft workers will protest the automation of parts of retail; it just seems more efficient, right? And if you work in AI safety, ‘safe for whom?’ is a pervasive question. How far does AI safety extend? Let us look at the mission of the Stanford Center for AI Safety:
The mission of the Stanford Center for AI Safety is to develop rigorous techniques for building safe and trustworthy AI systems and establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society.
The attached whitepaper does say that they strive for transparency and for explainable, accountable, and fair AI. Yet their overarching mission is aimed more at adoption than at fairness, and fairness is the last point on the list in the whitepaper. As such we may question the order of priority. Then again, there seems to be a culture of ‘speed’ or ‘agility’ perhaps even in AI safety: should safety stand in the way of innovation?
Since I am Norwegian, I should note that we have an acknowledged tradition of social democracy, although there has been disagreement over what this democratic model looks like. Across most of Scandinavia, this tradition stems from labour movements. Many seem surprised to see such big strikes in tech, yet Uber, Amazon, and Facebook are all experiencing large protests over recent actions. Will we see social democratic technology in AI safety?
Does AI safety mean wanting to safeguard jobs? There is talk of retraining or reskilling, but is this considered before implementation, or only in a reactive manner?
These are questions that arose today.
Credit: BecomingHuman. By Alex Moltzau.