Well, that generative AI thing got real pretty quickly, didn’t it?
In this Normal Deviance column, Hugh takes a slightly less serious retrospective look at a rapidly evolving space.
It’s now been about six months since OpenAI unveiled ChatGPT. This wasn’t the first text-generation model, but it feels like the moment the world stopped and took notice of generative AI.
A technology that produces a passable limerick on quantum physics probably deserves the attention. So too does image generation that can put puffer jackets on Popes. Thousands of articles have already been written about generative AI and its implications; no doubt a fair few of them were co-written by ChatGPT and its competitors*. Now that we’re (hopefully?) nearing the end of the initial hype cycle, we can reflect on some implications that are now much clearer.
First, there was initial concern that large AI models, including ChatGPT-style Large Language Models (LLMs), would become the domain of a small number of large technology companies. Happily, this seems not to be the case – several nimbler startups have emerged, and the speed with which other companies have released products suggests that the barriers to entry for the technology are not too high.
Second, there was a lot of concern about the impact of hallucinations and misinformation flowing from these models. Here I mean the type of misinformation where a model directly gives poor or wrong advice that turns out to be harmful (as opposed to cases where people deliberately use the technology for their own nefarious purposes). By and large, I think the technology has passed this test. While some unhinged behaviour has emerged, it does not seem to be a rampant issue, and users have seemed well-informed enough to understand (and appreciate!) the imperfections. Vigilance and education are still required, but the technology hasn’t led to planes falling out of the sky (April Fools articles notwithstanding).
Third, the technology is being recognised as a big deal by the people who should know about these things – most of tech’s elder statesmen seem to think so. OpenAI and Microsoft’s progress was enough to send Google into a panic, attempting a gaggle of AI innovations in rapid succession; hopefully with more success than the company’s haphazard social media efforts, themselves a similar panicked response to Facebook’s rise.
Fourth, the implications for education specifically seem profound. There are clear positives – the technology can be used to create a private tutor that dynamically offers advice on work and shows much greater patience than regular human tutors (or regular human parents, for that matter). On the negative side, we have an education system that relies heavily on assessments, and many of these are no longer fit for purpose when a tailored essay is a few keystrokes away.
Questions remain. Perhaps the biggest is that we still lack clarity on what it all means for jobs and services going forward. For LLMs, the fundamental question is which contexts can benefit from a very intelligent chatbot whose accuracy cannot be guaranteed. Examples already exist, such as improved productivity for online chat and incorporating AI-suggested text into workflows. Computer coding similarly seems to be a space where a dynamic virtual assistant can increase speed. General white-collar work may see productivity gains (or maybe just more polite emails), whereas my plumber will not see a massive impact. And it’s unclear how big the market is for self-driving, AI-powered prams. Will AI create jobs, morph them or destroy them? Time will tell, but there doesn’t seem to be a need for immediate pessimism.
One change we are already seeing is an internet with even lower barriers to content creation – whether articles, images, videos, ads or other forms of content. A small team can now build a large website with hundreds of articles very quickly. These can be geared towards specific searches, crowding out existing content. Whether this becomes an unmanageable deluge, with a declining ability to search for quality content, remains to be seen. Perhaps improved search AI will counterbalance the growing challenge of separating out quality content.
The other big question is around transparency and regulation. If hundreds of new models are produced, many of them private, how do we know when AI is driving decisions, and how can we be confident about the basis of those decisions when it happens? Relatedly, what are the implications of private, web-scraped data collections being used to create the models? This is a tricky space, where regulation is likely to move rapidly, and at different speeds around the world. As ever, Europe appears to be on the front foot in nailing down what a world with AI should look like.
My approach? I think taking a natural interest in how things are evolving is important – some things will change rapidly, and it helps to be forewarned. But for now, I’ll assume that the world will continue to need human-led, data-based analysis and advice, and I’ll make sure I’m using the best tools and technologies to provide them.
* I did not use AI to write this article. But I did ask ChatGPT to rate it out of 10 – and got an 8. And then 7.5. And then 8.5.