
A regulatory conundrum - Balancing the risks and opportunities of AI

This year, artificial intelligence (AI) has dominated the headlines as commercial use of chatbots like ChatGPT has become more widespread.
 
Along with the many positive and productive applications of AI, there has also been much discussion of its potentially negative effects. Concerns range from its role in privacy, copyright, surveillance and discrimination all the way to extremes like AI turning against humanity.
 
We don’t need to look far to see some controversial uses of this technology. Deepfakes rose in popularity this year, from images placing the Pope in odd situations to AI-generated music that brought deceased artists back to life. There is good money in media that can convincingly impersonate a person or mimic their voice, but deepfakes also raise several legal issues, such as defamation, data protection and intellectual property. Deep synthesis technology that produces artificial images and video comes with risks that should be mitigated through regulation. While I personally quite enjoyed listening to the new 2Pac and DMX tracks while browsing AI-generated images of Pope Francis in a puffer jacket, it does raise concerns over how quickly AI systems can create plausible text, images and voice from existing sources to produce harmful disinformation. Like all forms of technology, in the wrong hands it could be used to commit fraudulent acts with destructive potential.

Many have voiced their concerns, including several AI insiders. Sundar Pichai, CEO of Google, admitted he had lost sleep over the negative potential of AI, and Elon Musk said he had fallen out with Google co-founder Larry Page because Page was not taking AI seriously enough. Musk warned that digital superintelligence could soon be possible and that systems like Artificial General Intelligence (AGI) are a step towards what some refer to as ‘God-like AI’. Many fear that a computer system capable of generating new scientific knowledge and performing any task a human can will render us obsolete.

So how are countries approaching AI regulation?
 
Despite being home to some of the world’s largest technology businesses at the forefront of AI and having a large amount of regulatory infrastructure in place, the US has no specific legislation under serious consideration.
 
Meanwhile, the EU has taken a more proactive approach by drafting AI law that would provide guardrails and transparency requirements for businesses using the technology. The EU’s proposal would require companies to analyse the potential risks their services entail in areas such as health systems or national security.

While the US and the EU are still pondering how to handle the giants they created, China has moved ahead of many other regions and is, unsurprisingly, taking a more hands-on approach. A recently announced raft of legislation is miles ahead of anything considered elsewhere. It includes regulatory oversight of the data collected to power algorithms and labels for synthetically produced content. Its rules around generative AI are particularly interesting: any company producing an AI model will need to use legitimate data to train it and disclose both the data and the model to regulators.

While Chinese technology companies trail US leaders like OpenAI, they are trying to catch up. Businesses like Baidu and Alibaba have released AI models this year and submitted them to Chinese regulators for approval. The question is whether this regulation will stifle innovation and hamper China’s attempt to keep pace with the US, especially if their American peers remain unregulated.

One of the key reasons China was quick to confirm its regulation is that any AI ecosystem will be utilised to serve China’s goal of balancing innovation with social control and censorship. AI will likely be used to further limit the information available on the internet. There are reports that large language models (LLMs) built for two-way word filtering are being used to ensure other LLMs are scrubbed of any controversial topics. Rumours have circulated that the government is even conducting spot-checks on how AI services label their data. If LLMs are now being developed as censors, that could be a critical point for AI in China. Should the country continue to flex its regulatory muscles, it is unlikely that AI there will reach its full potential by evolving and innovating beyond human controls.

The repercussions for violations are very real. Companies and their leaders know they will be reprimanded if they do not follow the rules. In the US, it is a different playing field altogether, where huge legal teams and strategic lobbyists dominate the conversations around regulatory action and shift the power to the technology behemoths. It has proved difficult to regulate the Silicon Valley giants in more established areas like social media, and I imagine introducing AI regulation would be even trickier.

It is a regulatory conundrum, but not just for China. How do we enjoy the many benefits of AI and encourage innovation while also managing the risks that, if left unchecked, AI could pose to society? How do regulators and lawmakers begin to define a constantly evolving area of technology? A sweeping, one-size-fits-all approach to regulating this area is unlikely to work, but as 2023 comes to a close and another election cycle heats up in the US, we may see governments and lawmakers having to make tough choices about what is currently feasible on the AI policy front.