Federated learning is a paradigm in which a distributed set of devices collaborates to train a shared model. In the traditional setup, a central server holds the model weights, and each device contributes to training by periodically sending its locally updated weights back to the server. When those weights arrive, every participating device is given equal say in updating the main (server) model through a process called Federated Averaging (FedAvg). In simple terms, FedAvg can be thought of as averaging a set of values: the clients' weights are averaged layer by layer to produce the new server model.
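
To make that averaging step concrete, here is a minimal NumPy sketch of a FedAvg-style update (the function and argument names are ours, not from the original work; the optional size weighting reflects the common variant that weights clients by local dataset size):

```python
import numpy as np

def fed_avg(client_weights, client_sizes=None):
    """Combine per-client model weights into one set of server weights.

    client_weights: list with one entry per client; each entry is a list
        of np.ndarray holding that client's layer weights.
    client_sizes: optional list of local dataset sizes. If omitted, every
        client contributes equally; if given, clients are weighted by how
        much data they trained on (the common FedAvg variant).
    """
    if client_sizes is None:
        client_sizes = [1] * len(client_weights)
    total = float(sum(client_sizes))
    averaged = []
    # For each layer, take the (weighted) mean across all clients.
    for layer_idx in range(len(client_weights[0])):
        layer = sum(
            weights[layer_idx] * (size / total)
            for weights, size in zip(client_weights, client_sizes)
        )
        averaged.append(layer)
    return averaged
```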

An essential part of our proposed model is the clustering of the devices. Clustering can be thought of as grouping similar devices, which gives each device an added layer of collaboration with peers that have similar learning traits. For example, suppose the EMNIST dataset is being used for training, and two devices both have a great deal of experience identifying the digit-5 class label. By sharing their weights, they can ideally help each other learn faster. Clustering occurs during the second phase of our model and plays a large role in the training conducted in both the second and third phases. In our study, we tested two methods for clustering; one possible approach is sketched below.
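
The section above does not spell out the two clustering methods, so the following is only an illustrative sketch of one plausible approach: flattening each device's weights into a feature vector and running k-means, so that devices whose models have learned similar things land in the same cluster. The function name and the choice of k-means are assumptions, not the article's method:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_devices(client_weights, n_clusters=3, seed=0):
    """Group devices whose local models look similar in weight space.

    client_weights: list with one entry per device; each entry is a list
        of np.ndarray holding that device's layer weights.
    Returns an array of cluster labels, one per device.
    """
    # Flatten each device's weights into a single vector so that devices
    # with similar parameters end up close together in feature space.
    features = np.stack(
        [np.concatenate([w.ravel() for w in weights])
         for weights in client_weights]
    )
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(features)
```

Devices in the same cluster could then run FedAvg among themselves before (or instead of) contributing to the global model, which is one way the "added layer of collaboration" described above could be realized.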
