
Reflections on AI in 2023 – Three Things that Stood Out

2023 was the year that AI went mainstream. People were certainly talking about AI at the end of 2022, but in 2023 everyone was talking about it – whether about the good (the useful things AI can achieve) or the concerns (robots taking over the world ☺).

In this blog, we take a look at three things that stood out this year.

The year of the large language model

ChatGPT prompted the large tech companies to announce their own models, and there were many releases throughout the year: Meta’s Llama 2, Google’s Bard chatbot and Gemini, and Baidu’s Ernie Bot, among others.

Google’s Gemini – announced in December – provides a true multimodal contender to ChatGPT, combining different data types such as text, images, audio, and video. It also comes in different ‘flavours’ for different devices and tasks: Nano, Pro, and Ultra variants.

Not to be outdone, Microsoft announced Bing Chat in February, which has since been rebranded as Copilot (along with other Microsoft AI offerings, including Microsoft 365 Copilot). Copilot uses Microsoft’s own model, built on top of OpenAI’s GPT-4. Microsoft is also bringing AI to PCs: at its Build 2023 conference, it announced plans to integrate Copilot into Windows 11, allowing users to access it directly from the taskbar.

Competition and a choice of large language models (LLMs) can only be a good thing, helping to propel the industry forward.

A focus on AI’s energy consumption

2023 was also the year that concerns about the energy consumption of AI came to the fore. This isn’t surprising: the datacentres that provide the compute power for AI consume an enormous amount of energy. In fact, the IEA estimated that in 2022 datacentres already accounted for around 1.3% of global electricity demand, and some predict this will reach 4% by 2030.

Researchers at Hugging Face and Carnegie Mellon University estimated that generating a single image with a text-to-image model uses as much energy as fully charging a smartphone (https://arxiv.org/pdf/2311.16863.pdf). They also looked at the carbon footprint, estimating that generating 1,000 images with a sophisticated AI model produces as much carbon as driving 4.1 miles in an average petrol car.

The same research estimates that, at its peak, ChatGPT had upward of 10 million users per day. With so many users, the energy cost of inference soon exceeds that of training: “Even assuming a single query per user, which is rarely the case, the energy costs of deploying it would surpass its training costs after a few weeks or months of deployment.”
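
To see why, a rough back-of-envelope calculation helps. The sketch below follows the logic of the quote; the training and per-query energy figures are illustrative assumptions, not numbers from the paper.

```python
# Back-of-envelope: days until cumulative inference energy exceeds training energy.
# All figures are illustrative assumptions for a large LLM, not measured values.

TRAINING_ENERGY_MWH = 1_300      # assumed one-off training energy cost, in MWh
ENERGY_PER_QUERY_WH = 3.0        # assumed energy per inference query, in Wh
USERS_PER_DAY = 10_000_000       # ~10 million daily users (figure cited above)
QUERIES_PER_USER = 1             # conservative: a single query per user per day

# Convert daily inference energy from Wh to MWh (1 MWh = 1e6 Wh).
daily_inference_mwh = USERS_PER_DAY * QUERIES_PER_USER * ENERGY_PER_QUERY_WH / 1e6
days_to_surpass = TRAINING_ENERGY_MWH / daily_inference_mwh

print(f"Inference energy per day: {daily_inference_mwh:.0f} MWh")       # 30 MWh
print(f"Days until inference exceeds training: {days_to_surpass:.0f}")  # ~43 days
```

With these assumptions, inference overtakes training in roughly six weeks – consistent with the paper’s “few weeks or months”.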

This year, the Lumai team attended SC23, the high-performance computing conference, in Denver. The message was clear: AI devices have to become more energy efficient. With AI accelerators already consuming more than 700W each, and with potentially tens of thousands of them in a datacentre, power and cooling have become limiting factors. Not only are these devices expensive to operate (given the cost of energy), but they also impact datacentre operators’ environmental credentials. The widespread view at SC23 was that if the industry doesn’t fix this, some of the bigger brands will suffer reputational damage.
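
Some simple arithmetic shows the scale of the problem. The sketch below uses the ~700W accelerator figure quoted above; the accelerator count and cooling overhead (PUE) are illustrative assumptions.

```python
# Rough datacentre power draw from AI accelerators alone.
# The per-accelerator power is the ~700W figure quoted above; the count
# and PUE (power usage effectiveness: total facility power / IT power)
# are illustrative assumptions.

ACCELERATOR_POWER_W = 700
NUM_ACCELERATORS = 10_000        # "tens of thousands" -> assume 10k here
PUE = 1.4                        # assumed cooling/overhead multiplier
HOURS_PER_YEAR = 8_760

it_power_mw = ACCELERATOR_POWER_W * NUM_ACCELERATORS / 1e6
facility_power_mw = it_power_mw * PUE
annual_energy_gwh = facility_power_mw * HOURS_PER_YEAR / 1e3

print(f"Accelerator power draw: {it_power_mw:.1f} MW")            # 7.0 MW
print(f"With cooling and overheads: {facility_power_mw:.1f} MW")  # 9.8 MW
print(f"Annual energy consumption: {annual_energy_gwh:.0f} GWh")  # ~86 GWh
```

At around 10 MW of continuous draw, the accelerators in a single such facility would consume on the order of 86 GWh a year – before counting the CPUs, networking, and storage around them.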

The overwhelming feedback from SC23 was that AI providers will continue to look beyond current suppliers to new technologies to solve the performance and power challenges. With its excellent energy efficiency, Lumai’s 3D optics-based AI accelerator is the ideal solution to this problem.

The start of AI regulation

If 2023 was the year that AI went mainstream, it was also the year that AI regulation hit the headlines.

The US published an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” to provide guardrails for the use of AI. There was a significant focus on safety and security in key areas – biosecurity, cybersecurity, national security, and critical infrastructure – with the aim of reusing as much as possible of the existing processes that already address risk in those areas. Other areas addressed in the Executive Order include support for employees affected by AI, minimising AI bias, consumer protection, and privacy – setting out aims at this stage rather than immediate solutions.

The recent EU AI Act agreement took a risk-based approach to specific applications of AI. It identified unacceptable-risk applications (where the use of AI is prohibited), high-risk applications (permitted, but which must comply with multiple requirements and undergo a conformity assessment), and limited- and minimal-risk applications (where transparency is required).

The Bletchley Declaration, signed at the UK’s AI Safety Summit, also set out broad aims for international cooperation, but aligned global regulation seems unlikely for now.

So what next?

Given the pace of change in AI in 2023, it is a safe bet that 2024 will be just as transformational and exciting. Keep an eye out for announcements from Lumai as we share more about how we are driving innovation in AI with our 3D optics AI inference accelerator.

