Gradient Aggregation

Definition

Gradient aggregation is the process in machine learning of combining locally computed model updates into a single update for a shared global model. The technique is central to federated learning, where multiple decentralized clients train on their local datasets without sharing raw data. Instead, each client computes gradients from its local data and transmits them to a central server. The server combines the received gradients, applies the aggregated update to the global model, and distributes the refined model back to the clients. This approach preserves data privacy while still improving the performance of the shared model.
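
The following is a minimal sketch of what the server-side combination step might look like, assuming the common choice of a weighted average of client gradients (weights proportional to local dataset size, as in federated averaging). The function name aggregate_gradients and the parameter client_weights are illustrative, not part of any particular framework.

import numpy as np

def aggregate_gradients(client_gradients, client_weights=None):
    # client_gradients: one list of np.ndarray per client, one array per parameter tensor.
    # client_weights: optional per-client weights (e.g., local dataset sizes);
    # defaults to an unweighted mean.
    num_clients = len(client_gradients)
    if client_weights is None:
        client_weights = [1.0] * num_clients
    total = float(sum(client_weights))

    aggregated = []
    # Average each parameter tensor across clients, weighted by client_weights.
    for tensors in zip(*client_gradients):
        weighted_sum = sum(w * g for w, g in zip(client_weights, tensors))
        aggregated.append(weighted_sum / total)
    return aggregated

# Example: two clients, a toy model with one weight matrix and one bias vector.
client_a = [np.array([[0.1, -0.2], [0.3, 0.0]]), np.array([0.05, -0.01])]
client_b = [np.array([[0.2,  0.0], [0.1, 0.4]]), np.array([0.00,  0.02])]

global_grad = aggregate_gradients([client_a, client_b], client_weights=[100, 300])
# The server would then apply global_grad to the global model,
# e.g., params[i] -= learning_rate * global_grad[i],
# and send the updated model back to the clients.

Other aggregation rules (for example, median-based or trimmed-mean schemes used for robustness against faulty or malicious clients) follow the same pattern: only the combination step inside the loop changes.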