With one of the strongest processor ecosystems in the industry, Arm’s role in accelerating the adoption of machine learning (ML) at the edge can’t be overstated, and its effort to enhance CPU performance and software support for ML workloads is now in full swing. Jem Davies, Arm fellow, VP and GM of the Machine Learning Group, laid out the market outlook, the challenges, and Arm’s overall strategy to address this huge opportunity.
ML in edge devices is just beginning
“We see ML as one of the most exciting advancements in computers and processors in modern times,” Davies said. “As machine learning explodes across edge devices, we’re now at the point where we are seeing huge amounts of activity and some really interesting use cases across all the markets Arm’s technology serves.”
From his point of view, some of the most interesting use cases and active communities for ML are coming out of the IoT sector using traditionally very small processors like the Arm Cortex-M microcontroller family.
The use cases span the embedded and IoT spaces: life-improving medical devices such as smart asthma inhalers, industrial sorting and robotics, voice assistants, more intelligent home security, and even digital TVs, where there is a lot of activity in super scaling, scene recognition, picture-quality enhancement and gesture recognition.
Then there are the more widely covered areas, such as autonomous vehicles and driver assistance. In smartphones, a huge range of applications is implementing ML improvements: smarter game engines, richer social media applications, and even utility applications built directly into the OS, such as predictive text and voice assistants.
“From an Arm perspective, the thirst for ML in edge devices is just beginning and we expect it to continue growing substantially for several years yet,” he said. “The use cases are still growing rapidly and we expect an explosion of creativity over the next couple of years as the algorithms become better understood and smaller, and the developer community really engages with what ML can bring.”
But, the challenges are…
Opportunities, however, always come with challenges. The challenge for Arm is to ensure people have the improved CPUs and other processors, along with the associated software and tools, to support their needs today, while also working on tomorrow’s products with even more capability, such as dedicated ML processors.
On the customers’ side, however, Davies sees a lot of confusion, as ML is still too new and too complicated for many of them to adopt easily. “A lot of what we’re working on now is just trying to help demystify and clarify things in the technology space, as there’s a lot of confusion and misinformation out there.”
“Two years ago, or even a year ago, it wasn’t uncommon for people to think that if you wanted to do any ML on a device, you needed to have a dedicated ML processor – a view fuelled by people with dedicated processors to sell – and so we would often get asked which processor was best for ML. The answer is: it depends on what is important to you. So we’ve spent a lot of time explaining when a small CPU, large CPU, multi-processor CPU, GPU or ML processor would best meet people’s needs.”
On the other hand, one of the biggest challenges for software developers is the same one they always have: which hardware platform/processor should I target to give my software the widest compatibility with devices?
Aside from that, models introduce a new and critical component into the technology stack when you are doing ML. A lot of work has gone on across the industry in the last few years to make models better understood, friendlier and more accessible.
ML is affecting nearly all Arm’s products
According to Davies, ML is driving a change in software, and Arm’s processors and products are all about running software. As such, his view is that ML is affecting nearly all Arm’s products.
“You can see this in the CPUs we have been releasing over the last couple of years, which have had major performance improvements specifically targeted at ML workloads – often 4x or even 10x generation-on-generation improvements. This also extends to our latest GPUs and even our efficient Cortex-M family, where we recently released our Helium extensions, delivering up to 15x improved ML performance in microcontrollers.”
“So Arm is focused on offering a huge range of processors that can give customers a vast range of price, performance and power trade-offs. We will be extending that further through 2019 with a range of complementary ML processors for all markets, giving the market options where heavy ML workloads are needed or ML power efficiency is critical.”
He explained that Arm’s strategy for some years has been to add capabilities to its CPU and GPU architectures to support ML. The company has also launched a range of scalable and extensible NPUs (neural processing units) to provide more efficient ML processing across market segments, from the high performance requirements of automotive right down to the tiniest low-power embedded microcontrollers.
As he mentioned, there is no one-size-fits-all approach when choosing processors for ML. “We predict that ML will continue to be run across a range of processors, not just NPUs,” he said.
In addition, Arm has been investing heavily in ensuring the software is there to get the best performance out of current and future Arm hardware, as well as ensuring software portability.
Arm’s advantages in the new ML era
It is fair to say that Arm provides the architectures at the heart of modern computing, and Arm-based devices are all around us. This advantage means that when disruptions like ML come along, developers innovate on Arm’s architectures first, Davies indicated.
“We recently launched a survey to measure where ML processing is being performed, and the most popular answer was the Arm Cortex CPU architecture, followed by the Arm Mali GPU architecture,” he said. “That’s a huge advantage for us – it gives us great connections to developers, who can then tell us what they need us to develop in terms of software libraries, compilers, development tools and so on, and what they will want to do themselves.”
“This major shift in the industry is something we started some years ago and we could see it spreading across various areas of the company. That’s when Arm’s Project Trillium was brought to life to help ensure we were taking a broad, holistic and comprehensive view of ML in the business.”
He explained that Project Trillium, which covers all of Arm’s ML activities, encompasses Arm and third-party hardware IP and software for running ML workloads and applications. Within it are the CPU and GPU improvements that started in research many years ago, a broad range of complementary current and future ML processors, software to enable ML on Arm, tools, the ecosystem and a growing amount of educational material.
As ML is fundamentally a software problem, Arm is also investing heavily in supporting ML developers on Arm. Arm NN provides a framework that allows ML workloads to be run easily across a variety of processors: CPUs, GPUs, NPUs and other IP blocks.
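To make that concrete, the typical Arm NN flow can be sketched as pseudocode. Names follow Arm NN’s publicly documented API in spirit, but the model filename and the backend order are illustrative assumptions, not a definitive implementation:

```text
# Pseudocode sketch of an Arm NN workflow (illustrative, not exact API).
parser   = TfLiteParser.Create()
network  = parser.CreateNetworkFromBinaryFile("model.tflite")   # trained model
runtime  = Runtime.Create()

# Backend preference list: Arm NN assigns each layer to the first
# backend that supports it, so the same model can run on a GPU
# ("GpuAcc"), an accelerated CPU path ("CpuAcc"), or a portable
# reference CPU fallback ("CpuRef").
optimized = Optimize(network, ["GpuAcc", "CpuAcc", "CpuRef"], runtime.GetDeviceSpec())

netId = runtime.LoadNetwork(optimized)
runtime.EnqueueWorkload(netId, inputTensors, outputTensors)     # run inference
```

The backend preference list is what delivers the portability described above: the same application binary can target devices with or without a GPU or NPU, falling back gracefully to the CPU.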
“We developed Arm NN, investing over 120 engineering years of effort before donating it to Linaro, and now, with Open Source and Open Governance, our partners are contributing their own efforts to Arm NN, in the confidence that it will become an open standard across multiple industries, not controlled by any one company.”
With these big investments in tooling, ecosystem support and development, Arm aims to provide the broadest and most comprehensive range of ML solutions for all classes of edge devices, and hopes they will bear fruit in the years to come.
Jem Davies, Arm fellow, VP and GM, Machine Learning Group
Arm is enabling ML on edge devices