To speed up AI processing on smartphones, technology companies are trying a variety of approaches. Microsoft and ARM are designing chips better suited to running neural networks, while Facebook and Google are working to shrink the AI models themselves. But for chip maker Qualcomm, the current plan is simpler: adapt its existing chip products.
Qualcomm has released a software development kit (SDK) called the Neural Processing Engine to help developers optimize their applications to run AI tasks on Snapdragon 600- and 800-series processors. That means if you are developing an application that uses AI (such as image recognition), you can integrate Qualcomm's SDK so it runs faster on a compatible processor.
Qualcomm announced the Neural Processing Engine a year ago as part of its Zeroth platform (a brand it has since retired). Since last September, Qualcomm has been developing the SDK with a number of partners, and now it is finally available for everyone to use.
"Any developer who has already invested in deep learning - that is, anyone with access to data and trained AI models - is our target user, regardless of size," said Gary Brotman, who leads Qualcomm's AI and machine learning division. "It's very easy to use, and we've handled all the basics. You don't have to take on the heavy lifting yourself."
Qualcomm says Facebook will be one of the first companies to integrate the SDK; Facebook is currently using it to accelerate the augmented reality filters in its mobile apps. With the Neural Processing Engine, Facebook's filters load five times faster.
How developers use the SDK will vary with the work at hand, but its basic job is to assign different tasks to different parts of the Snapdragon chip. For example, depending on whether they want to optimize for battery life or processing speed, developers can draw computing resources from different parts of the chip, such as the CPU, GPU or DSP.
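The idea of picking a compute unit to match an optimization goal can be sketched roughly as follows. This is an illustrative model only, assuming a simple goal-to-hardware mapping; the class and function names here are invented and do not reflect Qualcomm's actual Neural Processing Engine API.

```python
# Hypothetical sketch: not the real Neural Processing Engine API.
from enum import Enum


class Runtime(Enum):
    CPU = "cpu"   # universal fallback, always available
    GPU = "gpu"   # high throughput for large models
    DSP = "dsp"   # best performance-per-watt for sustained inference


def choose_runtime(optimize_for: str, available: set) -> Runtime:
    """Pick a compute unit based on the developer's optimization goal."""
    if optimize_for == "battery" and Runtime.DSP in available:
        return Runtime.DSP   # DSP draws the least power
    if optimize_for == "speed" and Runtime.GPU in available:
        return Runtime.GPU   # GPU maximizes raw throughput
    return Runtime.CPU       # CPU works everywhere


# Example: a device exposing all three compute units
all_units = {Runtime.CPU, Runtime.GPU, Runtime.DSP}
print(choose_runtime("battery", all_units).value)        # dsp
print(choose_runtime("speed", {Runtime.CPU}).value)      # cpu
```

The fallback ordering mirrors the article's point: the SDK lets an app prefer the most suitable hardware block when it is present, while still running on the CPU on any compatible chip.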
The SDK also supports the most popular AI development frameworks, including Caffe, Caffe2 and Google's TensorFlow. Beyond optimizing AI on mobile devices, Qualcomm says it is also suited to cars, drones, VR headsets and smart home products.
But adapting AI frameworks to existing processors is just the beginning. "AI workloads will increase the need for computing performance," Brotman said. To meet that demand, technology companies are designing new chip architectures optimized for AI. Microsoft, for example, is developing a custom machine learning processor for HoloLens 2, and UK chip maker Graphcore recently raised $30 million to develop its own "intelligence processing unit" for mobile devices.
Qualcomm will certainly adopt that strategy eventually, but not yet. "For us, adding new content to the chip is a bet, and we're not going to make it lightly," Brotman said. "If we can optimize the existing product line, we've done a great job. In the long run, will we need dedicated neural computation? The answer is yes; the question is when we should place that bet."