OPPO has reached a milestone by implementing the Mixture of Experts (MoE) architecture on-device, enhancing AI processing efficiency and opening new possibilities for more advanced and flexible on-device AI, while laying the groundwork for future innovations in smartphone AI.
Large AI models require substantial computational power, which can impact performance, especially on devices with limited hardware resources. However, OPPO is changing this through a new collaboration with chip manufacturers to bring the MoE architecture on-device.
The MoE architecture dynamically activates specialized sub-models (“experts”) to handle specific tasks, significantly improving processing efficiency while reducing compute and data-transfer overhead. Lab tests show that the on-device MoE architecture accelerates AI tasks by approximately 40%, lowering resource demands and improving energy efficiency. This means faster AI responses, longer battery life and enhanced privacy, as more tasks are handled locally on the device.
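To illustrate the idea, the sketch below shows a minimal Mixture of Experts layer with simple top-k gating, written in PyTorch. It is purely illustrative: the class, parameter names and sizes are assumptions for demonstration, not OPPO's actual on-device implementation. The key point is that only a few experts run for any given input, which is what reduces compute and memory traffic.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Illustrative Mixture of Experts layer with top-k gating.

    Only the top_k highest-scoring experts run for each token, so most
    expert weights stay idle on any given input -- the property that
    reduces on-device compute and data movement.
    """
    def __init__(self, dim, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward sub-network.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
             for _ in range(num_experts)]
        )
        # The router scores every expert for every token.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):                        # x: (tokens, dim)
        scores = self.router(x)                  # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e         # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

# Example: 16 tokens of width 256; only 2 of 8 experts run per token.
layer = MoELayer(dim=256)
tokens = torch.randn(16, 256)
print(layer(tokens).shape)   # torch.Size([16, 256])
```

Because the router selects just two of eight experts per token in this sketch, roughly three quarters of the expert parameters are never touched for a given input, which is the general mechanism behind the efficiency gains described above.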
OPPO’s implementation of the MoE architecture on-device is a breakthrough that highlights its advancement in AI innovation. By lowering AI’s computational costs, MoE allows a wider range of devices, from flagship to affordable models, to perform complex AI tasks, accelerating AI’s adoption across the industry. As a result, the on-device MoE architecture opens new opportunities for the industry to make advanced AI capabilities more accessible to a wider audience.
Looking ahead, OPPO remains committed to advancing AI technology and making it available to more users. With over 5,860 patent applications in the AI field, OPPO continues to invest heavily in AI R&D. The establishment of OPPO’s AI Center in 2024 serves as a key step in consolidating its AI research efforts, furthering the company’s mission to provide high-quality AI experiences to users worldwide. Through continued research into technologies such as MoE and the global rollout of AI-powered features across its smartphone line-up, OPPO aims to make high-quality AI experiences accessible to users across its device categories.