

Poster

Device-Wise Federated Network Pruning

Shangqian Gao · Junyi Li · Zeyu Zhang · Yanfu Zhang · Weidong Cai · Heng Huang

Arch 4A-E Poster #263
[ Paper PDF ] [ Slides ] [ Poster ]
Thu 20 Jun 10:30 a.m. PDT — noon PDT

Abstract:

Neural network pruning, particularly channel pruning, is a widely used technique for compressing deep learning models so that they can be deployed on edge devices with limited resources. Typically, redundant weights or structures are removed to meet a target resource budget. Although data-driven pruning approaches have proven more effective, they cannot be applied directly to federated learning (FL), which has emerged as a popular technique in edge computing, because the datasets are distributed and confidential. In response to this challenge, we design a new network pruning method for FL. We propose device-wise sub-networks, one for each device, under the assumption that the data distribution within each device is similar. These sub-networks are generated through sub-network embeddings and a hypernetwork. To further reduce memory usage and communication costs, we permanently prune the full model to remove weights that are not useful for all devices. During the FL process, we simultaneously train the device-wise sub-networks and the base sub-network to facilitate the pruning process. We then fine-tune the pruned model with the device-wise sub-networks to regain performance. In addition, we provide a theoretical guarantee of convergence for our method. Our method achieves a better performance-resource trade-off than other well-established network pruning baselines, as demonstrated through extensive experiments on CIFAR-10, CIFAR-100, and TinyImageNet.
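To make the idea of generating device-wise sub-networks from embeddings and a hypernetwork concrete, the sketch below shows one plausible way such a mechanism could look in PyTorch. It is only an illustration under stated assumptions, not the authors' implementation: the module name MaskHyperNetwork, the per-device embedding table, the small per-layer MLP heads, the sigmoid/0.5 thresholding, and the rule for forming the base sub-network (keeping channels selected by at least one device) are all hypothetical choices made for this example.

    import torch
    import torch.nn as nn

    class MaskHyperNetwork(nn.Module):
        """Illustrative hypernetwork: maps a per-device embedding to soft channel masks."""
        def __init__(self, num_devices, embed_dim, channels_per_layer):
            super().__init__()
            # One learnable embedding per device (data within a device is assumed similar).
            self.device_embeddings = nn.Embedding(num_devices, embed_dim)
            # A small MLP head per pruned layer produces that layer's channel-mask logits.
            self.heads = nn.ModuleList(
                nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, c))
                for c in channels_per_layer
            )

        def forward(self, device_id):
            z = self.device_embeddings(device_id)  # sub-network embedding for this device
            # Sigmoid yields soft masks in (0, 1); thresholding (or a straight-through
            # estimator during training) would binarize them for actual channel removal.
            return [torch.sigmoid(head(z)) for head in self.heads]

    # Example: channel masks for a 3-layer backbone on device 2 of 10.
    hyper = MaskHyperNetwork(num_devices=10, embed_dim=32, channels_per_layer=[64, 128, 256])
    device_masks = hyper(torch.tensor(2))

    # Hypothetical base sub-network: permanently drop channels no device keeps
    # (one possible reading of removing weights that are not useful across devices).
    all_masks = [hyper(torch.tensor(d)) for d in range(10)]
    keep = [torch.stack([m[l] > 0.5 for m in all_masks]).any(dim=0) for l in range(3)]

In an FL round, each client would apply its own mask to the shared backbone, so the embeddings, the hypernetwork, and the base sub-network can be trained jointly without exchanging raw data.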
