From ChatGPT to the AI Safety Summit: The year in AI


Artificial intelligence has become one of the biggest issues in tech in 2023, driven by the rise of generative AI and apps such as ChatGPT.

Since OpenAI rolled out ChatGPT to the public in late 2022, awareness of the technology and its potential has exploded – from being discussed in parliaments around the world to being used to write TV news segments.

The public interest in generative AI models has also pushed many of the world’s largest tech companies to introduce their own chatbots, or to speak more openly about how they plan to use AI in the future. Meanwhile, regulators have stepped up debate over how countries can and should approach the opportunities and potential risks of AI.

In 12 months, conversations around AI have gone from concerns over how it could be exploited by schoolchildren to do their homework for them, to Prime Minister Rishi Sunak hosting the first AI safety summit of nations and technology companies to discuss how to prevent AI from surpassing humanity or even posing an existential threat.

In short, 2023 has been the year of AI.

Much like the technology itself, product launches around AI moved quickly over the last 12 months, with Google, Microsoft and Amazon all following OpenAI in announcing generative AI products in the wake of ChatGPT’s success.

Google unveiled Bard, an app it said would have the edge over any of its rivals in the new AI chatbot space because it was powered by data from Google’s industry-leading search engine and by its established Google Assistant virtual helper, found in its smartphones and smart speakers.

On a similar note, Amazon used its big product launch of the year to talk about how it was using AI to make its virtual assistant Alexa sound and respond in a more human fashion – able to understand context and react to follow-up questions more seamlessly.

And Microsoft began the rollout of its new Copilot, its take on combining generative AI with a virtual assistant on Windows, allowing users to ask for help with any task they were doing, from writing a report to organising the open windows on their screen.

Elsewhere, Elon Musk announced the creation of xAI, a new start-up focused on work in the artificial intelligence space.

The first product from that start-up has already appeared in the form of Grok, a conversational AI available to paying subscribers to Musk-owned X, formerly known as Twitter.

Such large-scale developments in the sector could not be ignored by governments and regulators, and debate around regulation of the AI sector has also intensified during the year.

In March, the Government published its White Paper on AI, which proposed using existing regulators in different sectors to carry out AI governance, rather than give responsibility to a new single regulator.

But an AI Bill is yet to be brought forward, a delay that has been criticised by some experts, who have warned that it risks allowing the technology to go unchecked just as the use of AI tools is exploding.

The Government has said it does not want to rush to legislate while the world is still getting to grips with the potential of AI, and says its approach is more agile and allows for innovation.

In contrast, earlier this month the EU agreed its own set of rules on AI oversight, which will give regulators the power to scrutinise AI models and to be provided with details on how they are trained, although the rules are unlikely to become law before 2025.

But Mr Sunak’s desire for the UK to be a key player in AI regulation was highlighted in November as he hosted world leaders and industry figures at Bletchley Park for the world’s first AI Safety Summit.

Mr Sunak and Technology Secretary Michelle Donelan used the two-day summit to discuss the threats of so-called “frontier AI” – cutting-edge aspects of the technology which, in the wrong hands, could be put to nefarious use.

The summit saw all the international attendees, including the US and China, sign the Bletchley Declaration, which acknowledged the risks of AI and pledged to develop safe and responsible models.

And the Prime Minister announced the launch of the UK’s AI Safety Institute, alongside a voluntary agreement with leading firms including OpenAI and Google DeepMind, to allow the institute to test new AI models before they are released.

Although not a binding agreement, it has laid the groundwork for AI safety to become an increasingly prominent part of the debate moving forwards.

Elsewhere, the AI industry ended the year with a boardroom soap opera, as ChatGPT maker OpenAI sensationally ousted chief executive Sam Altman in late November.

But the move sparked a backlash among staff, nearly all of whom signed a letter pledging to leave the company and join Altman on a proposed new AI research team at Microsoft if he was not reinstated.

Within days Altman was back at the helm of OpenAI and the board had been reconfigured, with the reasoning behind the saga still unclear.

Since then, the UK’s Competition and Markets Authority (CMA) has asked for views from within the industry on Microsoft’s partnership with OpenAI, which has seen the tech giant invest billions into the AI firm and have an observer on its board.

The CMA said it was minded to look into the partnership in part because of the Altman saga.

It is another sign that scrutiny of the AI sector is likely to continue intensifying in the coming year.