‘The deployment of artificial intelligence could deal a heavy blow to the worldwide labour market and result in massive layoffs at global companies, the World Economic Forum (WEF) Future of Jobs Report warns.
The study, which surveyed hundreds of large businesses worldwide, found that 41% of companies plan to slash their workforce by 2030 in response to the increasing capabilities of AI. Further, 77% of companies are preparing to reskill and upskill their existing staff from 2025 to 2030 for better human-machine collaboration.
The report predicted that 170 million new jobs will be created by the end of the decade, while 92 million existing jobs will be displaced, a net gain of 78 million. The WEF noted that skills in AI, big data, and cybersecurity are expected to be in high demand.
“Trends such as generative AI and rapid technological shifts are upending industries and labour markets, creating both unprecedented opportunities and profound risks,” Till Leopold, the head of Work, Wages and Job Creation at the WEF, said.
The WEF said that advances in AI, robotics, and energy systems, particularly in renewable energy and environmental engineering, are expected to boost demand for specialist roles in these fields.
The report also identified job categories that will face the largest decline in numbers due to AI and other technological trends. They include service clerks, executive secretaries, payroll clerks, and graphic designers.
“The presence of both graphic designers and legal secretaries just outside the top 10 fastest-declining job roles, a first-time prediction not seen in previous editions of the Future of Jobs Report, may illustrate GenAI’s increasing capacity to perform knowledge work,” the report said.
The report stressed that the impact of AI extends beyond job displacement, highlighting the potential of the technology to augment human output, rather than replace it outright.
The WEF concluded that “human-centered skills” such as creative thinking, resilience, flexibility, and agility will continue to be critical.
Meanwhile, high-profile figures and scientists have raised concerns in recent years over the potential dangers posed by AI. Last year, computer scientist and author Paul Graham warned that the use of AI for writing would result in the majority of people losing the skill within a few decades.
The labour market will change significantly because of the adoption of advanced technology, according to Daniil Gavrilov, the head of the Artificial Intelligence Research Laboratory at T-Bank AI Research. Everything a human is capable of doing can also be done by AI, and done well, he said in an interview with RIA Novosti last year.
Gavrilov noted that in the short and medium term, employees will have to master AI skills in order to remain competitive.’
The following is from the Socialist Standard, June 2022:
‘In this issue we spotlight the rise and rise of Artificial Intelligence, a hot topic that raises fundamental questions about how it should be used, and what happens if it develops in ways we don’t expect and don’t want.
Currently AI is strictly horses for courses, confined within rule-based parameters and master of just one thing at a time, rather than becoming a super-jack of all trades. So, like numerical engines before the era of programmable general-purpose computing, it has been of limited use. But artificial general intelligence (AGI) is without doubt the ultimate goal, and the race is on to achieve it.
With this in mind, and with a chequered history of AI winters behind them, developers are concentrating on the ‘can we do it?’ question rather than the bigger ‘should we do it?’ question. Even less ethically distracted are investors, whose only question is ‘can we make money out of it?’ This is not encouraging, given capitalism’s track record.
One problem with AI is that the more advanced it gets, the less we understand it. AI is increasingly a ‘black-box’ phenomenon, whose inner workings are a mystery to us and whose results are often inexplicable and unverifiable by other means. We can’t just treat it like a Delphic oracle, because it’s already clocked up embarrassing gaffes such as building racism and sexism into its staff-hiring rationales, or factoring in income instead of health conditions when estimating medical outcomes. And there have been several public relations disasters, with AIs answering enquiries with profanities after reading the online Urban Dictionary, Facebook chatbots creepily inventing their own language that no human can understand, and Amazon’s Alexa laughing demonically at its own joke: ‘Why did the chicken cross the road? Answer – because humans are a fragile species who have no idea what’s coming next’ (bit.ly/3wd4vh6).
Then there is the lack of internationally agreed definitions, paradigms and developmental standards, in the absence of which each developer is left to make up their own rules. Can we expect global agreement when we can’t get states to agree on climate change? In the absence of such a framework, it’s no wonder that people fear the worst.
Frankenstein-anxiety is nothing new in the history of technology, of course, and if we banned every advance that might go wrong we would never have stopped wearing animal skins and woad. It’s uncontroversial to say that the possible advantages to capitalism are huge, and indeed we’re already seeing AI in everything from YouTube preference algorithms to self-driving tractors and military drone swarms. And that’s small potatoes next to the quest for the holy grail of AGI. But while all this promises big profits for capitalists, what are the pros and cons in human terms? What is the long-term effect of the automation of work, for example? Tech pundits including Tesla boss Elon Musk take it for granted that most of us will have no jobs and that the only solution is a Universal Basic Income, a solution we argue is unworkable.
That’s not the worst of it. In 1950 Alan Turing wrote, ‘[T]he machine thinking method […] would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control’. IJ Good, Turing’s colleague at Bletchley Park, helpfully added, ‘The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control’ (bit.ly/3FNCekb). The last thing we ever need, or the last thing we ever do, this side of a Singularity that wipes humans from the Earth?
It’s not so much a question of a Terminator-style Armageddon with machines bent on our annihilation. Even in capitalism it’s hard to imagine anyone investing in developing such a capability, at least not on purpose. But the fear is that it could happen by accident, as in the proposed ‘paperclip apocalypse’, in which a poorly considered instruction to make as many paperclips as possible results in the AI dutifully embarking on the destruction of the entire globe in order to turn everything into paperclips. Musk has similarly argued that AI does not have to be evil to destroy humanity: ‘It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill’ (cnb.cx/3yJ7pMl).
Stuart Russell, in his excellent 2021 Reith lectures on AI (see our summary), makes a telling observation about capitalist corporations like the fossil fuel industry, arguing that they operate as uncontrolled superintelligent AIs with fixed objectives which ignore externalities. But why only certain industries? We would go one further and argue that capitalism as a whole works like this. It doesn’t hate humans or the planet, but is currently destroying both in the blind and disinterested quest to build ever greater profits, so goodbye world, to paraphrase its richest beneficiary, one Elon Musk.
Musk is right about one thing, saying ‘the least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world’. It’s rather ironic that, once again, Musk sees himself as part of the solution, not part of the problem.
To democratise AI you would first need to democratise social production, because in capitalism science and tech are sequestered behind barriers of ownership by private investors anxious to avoid any uncontrolled release of potentially profitable knowledge into the environment. AI needs to belong to all humanity, just like all other forms of wealth, which is why socialists advocate post-capitalist common ownership. In such circumstances, a global standardisation of AI development rules becomes genuinely feasible, and as Russell argues, it wouldn’t be that difficult to program AIs not to kill us all in the quest for more paperclips: you simply build in an uncertainty principle, so that the AI understands that the solution it has devised may not be the one humans really want or need. It’s a sensible approach. If only humans used a bit of natural intelligence and adopted it, they’d get rid of capitalism tomorrow.’
Paddy Shannon
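As a rough illustration of the uncertainty idea Russell describes above, here is a minimal sketch in Python. The objective names and payoff numbers are invented for the example; the structural point is what matters: instead of maximising one fixed objective, the agent holds several candidate readings of its instruction and defers to a human whenever those readings disagree about its preferred plan.

# Illustrative sketch only: the candidate objectives and payoff numbers
# below are invented; the structure, not the values, is the point.

def expected_value(action, beliefs):
    """Average an action's payoff over candidate objectives, weighted
    by how likely each objective is to be what humans actually want."""
    return sum(prob * rewards[action] for rewards, prob in beliefs)

def choose(actions, beliefs, deferral_threshold=0.5):
    """Pick the action with the best expected payoff, but defer to a
    human whenever the candidate objectives disagree strongly about it."""
    best = max(actions, key=lambda a: expected_value(a, beliefs))
    payoffs = [rewards[best] for rewards, _ in beliefs]
    if max(payoffs) - min(payoffs) > deferral_threshold:
        return "ask_human"  # objectives conflict: check before acting
    return best

# Two candidate readings of 'make as many paperclips as possible':
# one counts only paperclips; the other also penalises collateral harm.
paperclips_only = {"convert_everything": 1.0, "run_factory": 0.2}
paperclips_safely = {"convert_everything": -0.5, "run_factory": 0.2}
beliefs = [(paperclips_only, 0.5), (paperclips_safely, 0.5)]

print(choose(["convert_everything", "run_factory"], beliefs))
# Prints "ask_human": converting everything scores highest on average,
# but the two readings disagree sharply about it, so the agent checks
# with a human instead of acting. A fixed-objective maximiser never would.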