Credit: Google News
Google has unveiled a lightweight cross-platform solution for developers deploying machine learning models on mobile and Internet of Things (IoT) devices.
TensorFlow is an open-source machine learning platform developed by the Google Brain team and released in 2015. TensorFlow Lite, its specialized version for mobile devices, first appeared as a developer preview in November 2017 and has now reached version 1.0.
Back then, the TensorFlow team said adoption of machine learning models had grown exponentially, and so had the need to deploy them on mobile and embedded devices.
Raziel Alvarez, an engineer on the TensorFlow Lite team, explained at the TensorFlow Dev Summit 2019, held last week in Sunnyvale, California, why the team built the lightweight version.
“With machine learning, you typically think of running it on the server, but more and more it is moving to edge devices, cars, wearables. So there is more machine learning moving to mobile devices,” he said.
And there is good reason for that, he said: “One, you have more access to a lot of data because you have access to the audio, the camera, and you don’t have to stream it all the way to the server. So you can do a lot of stuff and that means you can build faster.”
However, he said machine learning on device also poses many challenges compared to server-side execution, such as reduced compute power, limited on-device memory, and battery constraints.
Alvarez said that at launch TensorFlow Lite already supports many tasks, including text (classification and prediction), speech (speech-to-text and text-to-speech), image (object detection and localization, gesture recognition, facial modelling, compression, and more), audio (translation and voice synthesis) and content (video, text and audio generation).
The TensorFlow Lite engineer said the solution is now deployed on more than two billion mobile devices in production, including Google’s own products such as Assistant and Photos, and frameworks such as AutoML and ML Kit.
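To illustrate the kind of on-device deployment described above, the following is a minimal Python sketch of the standard TensorFlow Lite workflow (assuming TensorFlow 2.x is installed): a small stand-in Keras model is converted to the compact .tflite format, then run through the TFLite interpreter the way a mobile app would. The tiny dense model here is purely illustrative, not one of the speech or vision models the article mentions.

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in model; real mobile models would be speech or vision nets.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to the compact TFLite flatbuffer format used on mobile and IoT devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference via the TFLite interpreter, as a mobile app would.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.random.rand(1, 4).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

The same converted .tflite file can be shipped inside an Android or iOS app and executed with the platform's TFLite runtime, which is what keeps the binary small relative to full TensorFlow.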
TFLite for Google Assistant
Alex Gruenstein, Principal Software Engineer on the Speech Team at Google Assistant, said that because Google has put Assistant into a wide range of devices, including phones, speakers, smart displays, cars, TVs, laptops and wearables, the neural networks the team builds must be able to run anywhere.
Devices also come in a variety of forms: high-end and low-end, ARM and x86, battery-powered and plugged in, all running a wide range of operating systems.
“Over the last year, we’ve migrated all of the computation we do on CPU for our models to TFLite,” Gruenstein said. “And we think TFLite will help us accelerate our models on things like GPUs and Edge TPUs, all the new kinds of accelerators that are coming.”
On-device translation at NetEase Youdao
Chinese unicorn NetEase Youdao, which specializes in online-education applications and has over 800 million users, has also built many functions and applications on TFLite, including its highly popular Youdao Dictionary and Youdao Translator in China and U-Dictionary, its learning application in India.
Huijie Lin, a Machine Learning Engineer at NetEase Youdao, said at the summit that the company has built word-scanning and look-up functions into its dictionary applications, as well as photo-translation and AR-translation features, using TFLite.
“Beyond applications, we have also created hardware, such as the Youdao Translator King, which is a translation machine that offers offline and online speech translation,” Lin said.