Are you coming to beautiful Barcelona this February? You should definitely add a few things to your agenda:
- Experience Gaudí’s amazing works first hand. I highly recommend taking a selfie with Casa Batlló in the background.
- Cool off along the city’s magnificent beachfront boardwalk while trying a selection of local tapas.
- And treat yourself to a taste of the latest technology by joining the MWC Barcelona event.
This year, byteLAKE, together with Lenovo, will be showcasing a technology that finally makes sense of today’s IoT ecosystems. Here’s a brief story about how we are going to do that.
As I described in one of my previous posts, Artificial Intelligence on the Edge (that is, executing directly on small, embedded devices) is booming. Machine and Deep Learning algorithms are being developed to enable both training and inference directly on devices like cameras, drones, local edge servers, routers, etc. The major benefits of doing so are that:
- We no longer have to rely on connectivity and can process the data where it is created, or very close to it
- We can significantly reduce latencies between events (i.e. signals detected in the data) and the related actions (i.e. the device’s response).
Think, for instance, about Industry 4.0 scenarios where, in many cases, it makes much more sense to analyze the data on the spot rather than send it across the network. But processing data locally (on the devices) using machine learning algorithms leads to many local models. The devices become more responsive, but looking at IoT deployments holistically, I always wondered whether it would be possible to somehow leverage all of these local (distributed) models. Would it be possible to have all the IoT devices share their knowledge and findings? The first idea was to go back to basics: collect the raw data from all of the devices and process it in a central place. But that comes with a huge cost in bandwidth and latency, not to mention privacy if that comes into play. After all, sharing raw data might be tricky. So the real question remains:
Can we leverage all of the local (distributed) AI models?
To answer that question, we started research at byteLAKE, partially inspired by Google’s announcements in the space. We built a federated learning framework, which we showcased live during the AI Summit in San Francisco in 2018. That led us to a conclusion:
the next big thing for IoT devices, after enhancing them with locally executed machine learning algorithms, is enabling them to learn from each other through federated learning.
Federated Learning is a machine learning setting that allows us to create a model from a number of other models (not raw data), usually distributed across a network of independent clients. Such clients are often reachable only through unreliable or low-throughput networks, which in many cases makes it impossible to download their raw data. In other cases, access to a client’s raw data might be limited by various regulations, policies, etc. Hence, we can process the raw data directly on such clients, producing a number of distributed machine learning models, and then aggregate these models with the help of federated learning. The aggregated model can then be distributed back to the clients in the form of an update. The whole process works in a loop:
- clients: run machine learning to process raw data and produce local models
- server: collects the local models, aggregates them, and sends an update back to the clients
- clients: receive the update and include it in another round of training, producing new local models.
In essence, the key point is that we do not have to share the data. Instead, we share models and aggregate them.
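To make the loop above concrete, here is a minimal sketch of federated averaging in Python. This is an illustration of the general technique, not byteLAKE’s actual framework: the linear model, the single gradient step per round, and all names are my own assumptions. Each simulated client trains only on its private data, and the server aggregates weights rather than collecting any raw data.

```python
import numpy as np

def local_training(weights, data):
    # Stand-in for a client's local training: one gradient-descent step
    # on a simple linear regression model (illustrative only).
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - 0.1 * grad

def server_aggregate(client_models, client_sizes):
    # Federated averaging: weight each client's model by its data size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_models, client_sizes))

# Three simulated clients, each holding private data that never leaves them.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(100):
    # clients: train locally, starting from the current global model
    local = [local_training(global_w, d) for d in clients]
    # server: aggregate local models into an updated global model
    global_w = server_aggregate(local, [len(d[1]) for d in clients])

print(np.round(global_w, 2))
```

Note that only model weights cross the client/server boundary; the arrays `X` and `y` stay on their owning client throughout, which is exactly why this setting sidesteps the bandwidth and privacy costs of centralizing raw data.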
Come visit us at #MWC19 to see a live demo of federated learning in action.
We will be using machine learning to predict changes in air pressure as air flows through a gradually clogging filter. We will show how painful and slow the process is when we train a model using just one filter, and how the training speeds up and the accuracy goes through the roof when we leverage models aggregated from filters across the whole warehouse. When you are in Hall 3, you cannot miss booth 3N30.
We will also be hosting a series of presentations at the booth on Wednesday morning (February 27th, 2019); the federated learning talk is scheduled for 11:45am. Definitely add it to your calendar and let’s talk afterwards. Our clients and partners sometimes surprise us with ideas for where our solution can be deployed, so we are very eager to chat and brainstorm about possible implementations.
If you cannot make it to the show, PM me and we will schedule a dedicated demo session. You can also follow us on social media for full coverage of the event:
- Twitter: @Lenovodc
- Facebook: @LenovoDataCenterEMEA