FedACQ: adaptive clustering quantization of model parameters in federated learning

Tingting Tian (College of Artificial Intelligence and Computer, Jiangnan University, Wuxi, China and Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi, China)
Hongjian Shi (School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China)
Ruhui Ma (School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China and Shanghai Key Laboratory of Scalable Computing and Systems, Shanghai Jiao Tong University, Shanghai, China)
Yuan Liu (College of Artificial Intelligence and Computer, Jiangnan University, Wuxi, China and Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi, China)

International Journal of Web Information Systems

ISSN: 1744-0084

Article publication date: 28 November 2023

Issue publication date: 5 February 2024

Abstract

Purpose

For privacy protection, federated learning keeps data separated: machine learning models are trained on remote devices or in isolated data silos rather than on centrally pooled data. However, because local devices have limited resources such as bandwidth and power, communication in federated learning can be much slower than local computation. This study aims to improve communication efficiency by reducing both the number of communication rounds and the amount of information transmitted in each round.

Design/methodology/approach

Each user node performs multiple rounds of local training and then uploads its local model parameters to a central server, which updates the global model by computing a weighted average of the uploaded parameters. On top of this aggregation scheme, each user node first clusters the parameters to be uploaded and then replaces each value with the mean of its cluster. To account for the asymmetry of the federated learning framework (uplink bandwidth is typically scarcer than downlink), the method adaptively selects the optimal number of clusters used to compress the model information.
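
For illustration, the following is a minimal sketch of how such clustering quantization and weighted aggregation could fit together. It assumes 1-D k-means clustering over the flattened parameters and a simple relative-error threshold as the adaptive cluster-count rule; the function names (cluster_quantize, adaptive_quantize, weighted_average) and the selection criterion are hypothetical stand-ins, as the abstract does not specify the paper's exact procedure.

import numpy as np
from sklearn.cluster import KMeans

def cluster_quantize(weights: np.ndarray, k: int) -> np.ndarray:
    # Replace each parameter with the mean (centroid) of its k-means cluster.
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    centroids = km.cluster_centers_.ravel()
    return centroids[km.labels_].reshape(weights.shape)

def adaptive_quantize(weights: np.ndarray, max_k: int = 32, tol: float = 0.05):
    # Hypothetical adaptive rule: pick the smallest k whose relative
    # quantization error falls below the threshold `tol`.
    scale = np.linalg.norm(weights) + 1e-12
    for k in range(2, max_k + 1):
        q = cluster_quantize(weights, k)
        if np.linalg.norm(weights - q) / scale < tol:
            return q, k
    return cluster_quantize(weights, max_k), max_k

def weighted_average(client_params, client_sizes):
    # Server-side FedAvg-style aggregation: weight each client's
    # uploaded (quantized) parameters by its local dataset size.
    w = np.asarray(client_sizes, dtype=np.float64)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

# Example: two clients quantize one layer's weights before uploading.
rng = np.random.default_rng(0)
uploads, sizes = [], [600, 400]
for _ in sizes:
    layer = rng.normal(size=(64, 32)).astype(np.float32)
    q, k = adaptive_quantize(layer)
    uploads.append(q)
global_layer = weighted_average(uploads, sizes)
print(global_layer.shape)

With k clusters, each client needs to transmit only the k centroid values plus a per-parameter cluster index (about log2(k) bits per parameter instead of 32), which is the source of the uplink savings.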

Findings

The proposed method maintains a loss convergence rate similar to that of federated averaging, and the test accuracy does not decrease significantly.

Originality/value

By compressing uplink traffic, the proposed method improves communication efficiency on dynamic networks with limited resources.

Acknowledgements

This research was funded by the National Natural Science Foundation of China under grant number 61972182.

Since submission of this article, the following author has updated his affiliation: Hongjian Shi is at the Shanghai Key Laboratory of Scalable Computing and Systems, Shanghai Jiao Tong University, Shanghai, China.

Citation

Tian, T., Shi, H., Ma, R. and Liu, Y. (2024), "FedACQ: adaptive clustering quantization of model parameters in federated learning", International Journal of Web Information Systems, Vol. 20 No. 1, pp. 88-110. https://doi.org/10.1108/IJWIS-08-2023-0128

Publisher

Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited