LONDON: Since generative artificial intelligence burst on to the scene last November, the forecast for white-collar workers has been gloomy.
OpenAI, the company behind ChatGPT, estimates that the jobs most at risk from the new wave of AI are those with the highest wages, and that someone in an occupation that pays a six-figure salary is about three times as exposed as someone making US$30,000. McKinsey warns of the models’ ability to automate the application of expertise.
I understand the temptation to wave away these warnings as mere projections. Thousands of years of history have lulled many of us into the false sense of security that automation is something that happens to other people’s jobs, never our own.
But for some, the fear that AI may one day take white-collar jobs is already a reality. In an ingenious study published this summer, US researchers showed that within a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings.
This suggested not only that generative AI was taking their work, but also that it was devaluing the work they did still carry out.
Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings.
LESS HIGHLY SKILLED WORKERS ENJOY BIGGEST PERFORMANCE GAINS
But the online freelancing market covers a very particular form of white-collar work and of labour market. What about looking higher up the ranks of the knowledge worker class?
For that, we can turn to a recent, fascinating Harvard Business School study, which monitored the impact of giving GPT-4, OpenAI’s latest and most advanced offering, to employees at Boston Consulting Group.
Staff randomly assigned to use GPT-4 when carrying out a set of consulting tasks were far more productive than their colleagues who could not access the tool. Not only did AI-assisted consultants carry out tasks 25 per cent faster and complete 12 per cent more tasks overall, but their work was also assessed to be 40 per cent higher in quality than that of their unassisted peers.
Employees right across the skills distribution benefited, but in a pattern now common in generative AI studies, the biggest performance gains came among the less highly skilled members of the workforce.
This makes intuitive sense: Large language models are best understood as excellent regurgitators and summarisers of existing, public-domain human knowledge. The closer one’s own knowledge already is to that limit, the smaller the benefit from using them.
There was one catch: On a more nuanced task, which involved analysing quantitative evidence only after a careful reading of qualitative materials, AI-assisted consultants fared worse. GPT missed the subtleties.
But two groups of participants bucked that trend. The first – termed “cyborgs” by the authors – intertwined with the AI, constantly moulding, checking and refining its responses, while the second – “centaurs” – divided labour, handing off more AI-suited subtasks while focusing on their own areas of expertise.
Taken together, the studies tell us three things. First, regulation will be key. Online freelancing is about as unregulated a labour market as you will find. Without protections, even knowledge workers are in trouble.
Second, the more multi-faceted the role, the less risk of complete automation. The gig-worker model of performing one task for multiple clients – copywriting or logo design, for example – is especially exposed.
And third, getting the most out of these tools, while avoiding their pitfalls, requires treating them as an extension of ourselves, checking their outputs as we would our own. They are not separate, infallible assistants to whom we can defer or hand over responsibility.
In millennial-speak, generative AI is the white-collar worker’s frenemy. It’s wise to be wary, but this could become a flourishing relationship.