This week’s update is written by Adam Smith in our Peterborough office.
Artificial Intelligence has regularly been making headlines in recent weeks. For a film buff like me, AI conjures thoughts of science-fiction works such as The Terminator, The Matrix, Avengers: Age of Ultron and RoboCop. AI systems have increasingly left the laboratory, and the everyday person can now interact with them on a daily basis. Whilst we seem to be a long way from the apocalyptic and dystopian visions presented in those films, every innovation brings a renewed warning or call for limitations on the use of AI.
Last week, Sam Altman, CEO of OpenAI (creators of ChatGPT), faced questions in the US Senate for over three hours. He agreed with the Senators present that the industry requires regulation as AI becomes ‘increasingly powerful’. AI is being used to create all manner of media, from completing homework for shortcut-taking schoolchildren to superimposing new singers onto existing songs, and has even been used to successfully appeal a car parking fine. The possibilities are endless. In many cases, AI is limited only by the imagination of the human giving commands within the parameters of the system being used. This, of course, generates headaches for regulators, such as intellectual property rights and the consistency of regulation across the world.
In addition to the US Senate hearing, both UK Prime Minister Rishi Sunak and European Commission President Ursula von der Leyen acknowledged the potential benefits and pitfalls of AI when calling for ‘guardrails’ at last week’s G7 summit in Hiroshima.
These fears come with the hope that, as Sam Altman himself said, AI has the potential to solve “humanity’s biggest challenges, like climate change and curing cancer.” Perhaps our uncertainty is merely akin to the widespread hesitancy regarding driverless cars: we don’t want to take our ‘hands off the wheel’. However, when we find that a computer can do something faster, more accurately and more consistently than humans, without taking breaks, do we really want to risk rendering ourselves obsolete? What should we limit AI to, in order to avoid unintended consequences? These moral conundrums and considerations are likely to gather pace worldwide and feature regularly in headlines in the years ahead. AI systems are becoming easier to access, cheaper and increasingly embedded in everyday activities. Whilst society establishes the limits for such systems, what does the short- to medium-term outlook for AI look like?
Perhaps the biggest AI-related news this week is that BT is to reduce its current workforce by up to 55,000 jobs¹. Up to a fifth of these cuts would fall within customer services, where technologies including AI can take the place of the present workforce. AI is being touted as able to make services faster, better and more seamless, with customers not feeling ‘like they are dealing with robots’.
On the investment front, Ravenscroft has exposure to the AI space through various holdings and we continue to keep a close eye on any interesting developments within the industry. This could, for example, be through investing in companies harnessing AI, or those using AI as a stock selection tool. This is something we have covered in other weekly updates and articles.
It looks like the short- to medium-term outlook is one of global economies increasing productivity with AI systems as a core tool in realising their objectives. This is a strategy that needs careful monitoring to ensure that the systems mirror the values and desired outcomes of the societies implementing them.
We wish you a good week. Please note that our next update will be on Tuesday 30th May.
¹ BBC