Day 1 of 30 Days of Federated Learning Code.
OpenMined is currently running an event for curious learners who want to study Federated Learning, from 20 November to 20 December, and I am joining as a participant to get into the learning journey as well.
Google's comic about Federated Learning gives a really insightful introduction to the terminology and to why Federated Learning is necessary in machine learning systems where data privacy is a concern.
Data privacy defines how a particular piece of information or data should be handled, and who has authorized access to it, based on its relative importance. With the introduction of AI (machine learning and deep learning), a lot of personal information can be extracted from trained models, which can cause irreparable damage to the people whose personal data has been exposed. So here comes the need to preserve this data while still training various machine learning models.
Federated Learning, also known as collaborative learning, is a machine learning technique where training takes place across multiple decentralized edge devices (clients) or servers, each using its own local data, without sharing that data with other clients, thus keeping it private. It aims to train a machine learning algorithm, say a deep neural network, on multiple devices (clients) holding local datasets without explicitly exchanging the data samples.
Federated learning makes it possible for AI algorithms to gain experience from a vast range of data located at different sites. This approach enables several organizations to collaborate on the development of models but without having to share sensitive data with each other.
There are two types of Federated Learning:
Centralized federated learning: In this setting, a central server orchestrates the different steps of the algorithm and coordinates all participating nodes during the learning process. The server is responsible for selecting nodes at the beginning of training and for aggregating the model updates (weights) it receives. Because every update passes through it, the server can become the bottleneck of the system.
Decentralized federated learning: In this type, nodes coordinate among themselves to obtain the global model. This setting avoids a single point of failure, as model updates are exchanged only between interconnected nodes.
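To make the centralized setting concrete, here is a minimal sketch of one training loop in plain NumPy: each client runs gradient descent on its own private data, and the server only receives weights, which it combines with a size-weighted average (the FedAvg idea). The function names `local_update` and `fed_avg` and the toy linear-regression task are my own illustration, not from any particular framework.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its private data
    (linear regression with mean-squared-error loss)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(updates, sizes):
    """Server-side aggregation: average client weights, weighted by each
    client's local dataset size (FedAvg-style averaging)."""
    total = sum(sizes)
    return sum((n / total) * w for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client keeps its own data; only model weights ever leave a client.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):  # communication rounds orchestrated by the server
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    w_global = fed_avg(updates, sizes)

print(np.round(w_global, 2))  # should land close to true_w
```

Note that the raw `(X, y)` pairs are never pooled: the server only ever sees the per-client weight vectors, which is exactly the privacy property described above.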
As the learning commitment goes, I will post daily blogs on this profile, and I hope anyone reading finds these learning logs informative and useful.