Our advancements unleash the next wave of AI possibilities.

Through game-changing innovation, we’re significantly improving the speed and effectiveness of AI inference, enabling applications in biotech, scientific research, finance and consumer internet to be taken to a far higher level.

  • We are creating a world-leading AI processor using 3D optical technology

    Targeted at AI inference in datacentres, our technology will increase the AI performance available within a datacentre, both by increasing performance-per-device and by increasing processing density (by decreasing the energy required per operation). Using 3D optics is critical to overcoming the scalability limits of today’s computing.

    It isn’t just about hardware – we are also developing software and tools to make developing on our technology fast and easy. By building on standard frameworks, we are ensuring that AI developers can migrate existing models with just a few lines of code.

  • We are delivering computation speeds faster than anything else on the planet

    We are developing an optical matrix-vector multiplier (MVM) using a technology that has a near-term speed limit of up to 10¹⁷ operations per second – 100x faster than the human brain and 1000x faster than traditional electronics.

    Matrix-vector multiplication is one of the most essential and computationally heavy operations required by the neural networks behind modern AI, and the massive, ever-growing computational power driving the AI revolution has so far been provided largely by digital electronics. However, digital electronics is struggling to keep pace, fundamentally limited by the heating of electronic components and by the difficulty of shrinking transistors further as they approach their physical limits.
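    The central role of matrix-vector multiplication can be seen in a single fully connected neural-network layer, sketched below (NumPy is used purely for illustration; the layer size is arbitrary):

```python
import numpy as np

# One fully connected layer: y = activation(W @ x + b).
# The matrix-vector product W @ x dominates the cost: an N x N weight
# matrix requires N*N multiply-accumulate operations per input.

rng = np.random.default_rng(0)
N = 1024
W = rng.standard_normal((N, N))   # weight matrix
x = rng.standard_normal(N)        # input activations
b = rng.standard_normal(N)        # bias

y = np.maximum(W @ x + b, 0.0)    # ReLU(W @ x + b)

print(y.shape)   # one output activation per row of W: (1024,)
print(N * N)     # multiply-accumulates in the MVM alone
```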

    Our computing advantage is rooted in the inherent parallelism of our MVM architecture – up to millions of optical beams propagate and interfere simultaneously inside the MVM to complete the computation. As we scale up the MVM, our computing speed increases quadratically, leading to ultra-fast optical computing.
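    The quadratic scaling above can be made concrete with a toy operation count (pure Python, illustrative only – not a model of the optical hardware):

```python
def macs_per_mvm(n: int) -> int:
    """Multiply-accumulate operations in one n x n matrix-vector product."""
    return n * n

# Doubling the matrix dimension quadruples the work completed per pass,
# which is why scaling up the MVM increases computing speed quadratically.
for n in (256, 512, 1024):
    print(n, macs_per_mvm(n))
```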

  • We are solving datacentres’ environmental challenges

    The rapid increase in the use and complexity of AI is driving ever-increasing AI processing requirements. This in turn is driving an incredible increase in power consumption in datacentres, at a time when both datacentres and the companies using them want to meet environmental targets.

    It isn’t just direct AI processing power that is causing problems: the corresponding heat generated in the silicon must be dissipated, resulting in significant additional energy being used for cooling. The thermal design power (TDP) of some devices has reached the kilowatt level, and hence a handful of such devices can quickly exceed the maximum rating of a server rack.
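    A back-of-envelope calculation illustrates the rack-level constraint (all figures below are assumptions for illustration, not vendor specifications):

```python
# Illustrative rack power budget (all numbers are assumptions).
rack_budget_kw = 20.0    # assumed maximum rack power rating
device_tdp_kw = 1.0      # kW-class accelerator TDP, as in the text
cooling_overhead = 0.4   # assume 40% extra power drawn for cooling

power_per_device = device_tdp_kw * (1 + cooling_overhead)  # 1.4 kW each
max_devices = int(rack_budget_kw // power_per_device)
print(max_devices)  # 14 devices fill the assumed 20 kW rack budget
```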

    By computing with photons rather than electrons (where every arithmetic calculation involves a large number of transistor operations), 3D optical computing consumes far less energy than GPU solutions and avoids the inherent limitations of photonics-based solutions. The performance-per-watt achievable means that more compute can be delivered within a datacentre, providing a sustainable solution for next-generation AI.

Want to find out more?

Contact Lumai