
Speed, Sustainability, Scale: How Optical Matrix Multiplication Will Transform AI

The world of AI is power-hungry and compute-limited. Transistor-based computing is approaching its physical limits and is already struggling to keep up with growing computational demands.

So how can technology meet this exponentially growing demand for compute?

Our Head of Research, Xianxin Guo, recently authored an article for RT Insights titled “Speed, Sustainability, Scale: How Optical Matrix Multiplication Will Transform AI”, discussing how optical matrix multiplication will revolutionise AI computing.

Through the basic operations of multiplication and addition, matrix multiplication supports the different functional blocks of AI. And it's not just language models: this basic linear algebra operation is fundamental to almost every kind of neural network, whether connecting massive numbers of neurons, performing convolution for image classification and object detection, or processing sequential data.
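As a minimal illustration (a sketch in NumPy, with made-up layer sizes rather than anything from the article), a fully connected layer is essentially one matrix multiplication followed by a cheap elementwise non-linearity:

```python
import numpy as np

# Toy fully connected layer: every output neuron connects to every input
# neuron through a weight, so the whole layer is a single matrix multiply.
rng = np.random.default_rng(0)

inputs = rng.standard_normal(512)          # activations from the previous layer
weights = rng.standard_normal((256, 512))  # 256 output neurons x 512 input neurons
bias = rng.standard_normal(256)

# Matrix-vector multiplication: 256 x 512 = 131,072 multiply-accumulate operations
pre_activation = weights @ inputs + bias

# Elementwise non-linearity (ReLU) - negligible cost next to the matrix multiply
outputs = np.maximum(pre_activation, 0.0)

print(outputs.shape)  # (256,)
```

Almost all of the arithmetic in a layer like this sits in the multiply-accumulate operations of the matrix product, which is why accelerating matrix multiplication accelerates the network as a whole.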

As AI models grow larger, more of these matrix operations must be performed, demanding ever more compute power. By utilising the inherent advantages of optical computing, Lumai is building a processor that can perform matrix multiplication 1000x faster than traditional electronics while consuming only 1/100th of the energy.

You can read Xianxin’s full article here, or follow us on LinkedIn to keep up with the latest news on all things Lumai.

