Mobile applications based on machine learning are reshaping many aspects of our lives. Implementing machine learning on mobile devices faces various challenges, including limited computational power, energy, and memory, latency constraints, and privacy risks. In this article, we investigate the current state of implementing machine learning for mobile applications, providing an overview of five architectures commonly used for this purpose and the ways in which they address these challenges. We also discuss their pros and cons, providing recommendations for each architecture. Additionally, we review recent studies, popular toolkits, cloud services, and platforms supporting machine learning as a service. This survey will, therefore, bring mobile developers up to speed on the latest trends in implementing machine learning for mobile applications.
Five Architectures for Implementing Machine Learning in Mobile Apps:
Training and inference are the two essential phases of implementing ML applications. Training is the process of deriving an ML model from data. Inference then uses this model to make predictions about new data. Both training and inference can be conducted either on the device itself or in the cloud. In this section, we review five architectures commonly used to implement ML for mobile applications, paying particular attention to how training and inference are implemented, and we discuss each architecture's pros and cons. The appropriate architecture depends on the details of the scenario, such as the specific requirements of the mobile application, the complexity of the model, and the amount of data.
A. Cloud inference without training
The mobile application sends a request, together with the new data, to the cloud through an application programming interface (API), and the service returns a prediction. The app itself (i.e., its developer) never needs to perform training.
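As a concrete illustration, the request/response cycle of this architecture can be sketched as two pure functions: one that packages the new data into an API payload, and one that extracts the prediction from the service's reply. The endpoint URL, header names, and JSON schema below are hypothetical placeholders, not any specific vendor's API.

```python
import base64
import json

# Hypothetical cloud inference endpoint; a real service defines its own URL and schema.
API_URL = "https://api.example.com/v1/vision/classify"

def build_request(image_bytes, api_key):
    """Package the new data (an image) as a JSON payload for the cloud service."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # Binary data is base64-encoded so it can travel inside JSON.
        "body": json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")}),
    }

def parse_prediction(response_body):
    """Extract the highest-confidence label from the service's JSON reply."""
    predictions = json.loads(response_body)["predictions"]
    return max(predictions, key=lambda p: p["confidence"])["label"]
```

The app's only ML-related responsibility is serializing inputs and deserializing outputs; the model itself is invisible behind the API.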
B. Both inference and training in the cloud
This architecture is similar to Architecture A; the only difference is that the service provider also gives mobile developers the ability to train on their own data and build their own unique models through the cloud service.
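The extra step over Architecture A is submitting a training job against data the developer has uploaded to the provider's storage. The sketch below only builds such a job description; the endpoint, field names, and storage URI are hypothetical, standing in for whatever a real cloud ML service defines.

```python
import json

# Hypothetical training endpoint; real services define their own job APIs.
TRAIN_URL = "https://ml.example.com/v1/models:train"

def build_training_job(dataset_uri, model_name, epochs=5):
    """Describe a cloud training job. The service trains the model remotely
    and then hosts it for inference calls, exactly as in Architecture A."""
    return {
        "url": TRAIN_URL,
        "body": json.dumps({
            "model": model_name,
            "dataset": dataset_uri,  # data previously uploaded to cloud storage
            "hyperparameters": {"epochs": epochs},
        }),
    }
```

Once the job finishes, the app calls the resulting custom model through the same kind of inference API as before.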
C. On-device inference with pre-trained models
On-device inference is essential for mobile applications where latency on the order of milliseconds is mission-critical; response time is the major reason for performing inference directly on a device, since it avoids the network round trip to a remote server. On-device inference is also appropriate for applications where privacy is a concern, such as medical diagnostics, because the user's data never has to leave the device.
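Stripped of any particular framework, this architecture amounts to bundling frozen model parameters with the app and running the forward pass locally. The tiny logistic-regression sketch below is an illustrative assumption, not a production model; real apps would typically load an optimized model file through an on-device runtime instead.

```python
import math

# Hypothetical frozen parameters, as if exported from cloud training
# and bundled with the app at build time.
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = 0.1

def predict(features):
    """Run inference entirely on-device: no network round trip,
    and the input features never leave the phone."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability of the positive class
```

Because the parameters are fixed at build time, updating the model means shipping a new model file (or app release), which is the main operational cost of this architecture.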
D. Both inference and training on device
The main advantage of this architecture is that the application can continuously learn from the user's data and behavior, and thus continuously update its models and improve performance for that user. Data never needs to leave the device, which protects the user's privacy. The main drawback is that training is far more demanding than inference, straining the device's limited computational power, memory, and battery.
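Continuous on-device learning can be sketched as incremental gradient updates applied one user interaction at a time. The logistic-regression update below is a minimal illustrative assumption (log-loss gradient, a single step of stochastic gradient descent), chosen because it is cheap enough to run on a phone.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def on_device_update(weights, bias, features, label, lr=0.1):
    """One stochastic-gradient step on a single user interaction.
    The raw features and label stay on the device; only the model changes."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    err = sigmoid(z) - label  # gradient of the log loss w.r.t. z
    new_weights = [w - lr * err * x for w, x in zip(weights, features)]
    new_bias = bias - lr * err
    return new_weights, new_bias
```

Each update nudges the model toward the user's own behavior, which is exactly the personalization benefit this architecture offers.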
E. Hybrid Architecture
In this architecture, training takes place both on the mobile device and in the cloud: typically, a general model is trained in the cloud on large datasets and then refined or personalized on the device using the user's own data, combining the strengths of the previous architectures.
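One well-known instance of this hybrid split is federated-style averaging: each device improves its local copy of the model, and the cloud merges the resulting weights without ever seeing the raw data. The function below is a minimal sketch of the server-side merge step under that assumption.

```python
def federated_average(client_weights):
    """Cloud-side step of a hybrid scheme: average model weights that were
    trained on individual devices. Only weights travel to the cloud;
    each user's raw data stays on their own device."""
    n = len(client_weights)
    return [sum(column) / n for column in zip(*client_weights)]
```

The averaged model is then pushed back to the devices, where the next round of local training begins.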
Download the original pdf from IEEE.