Efficient Federated Fine-Tuning via Zeroth-Order Optimization for Resource-Constrained Edge Devices
Amish Ranjan, Sundaram, B. C. Sahana
Published 2025 in International Conference on Computational Intelligence and Communication Networks

ABSTRACT
Federated fine-tuning of large language models (LLMs) is critical for enabling privacy-preserving learning on edge devices. Traditional backpropagation, however, imposes high memory and computational demands, making it impractical for lightweight hardware such as Neural Processing Units (NPUs). This paper presents a simulation-based analysis comparing three optimization strategies: standard backpropagation-based Federated Averaging (FedAvg), zeroth-order (ZO) full-rank, and ZO low-rank estimators. A compact transformer classifier is trained on a synthetic AG News-like dataset to evaluate convergence behavior, rank sensitivity, and scalability across federated clients. Experimental results demonstrate that the ZO low-rank method achieves smooth and stable convergence, delivering accuracy comparable to backpropagation while significantly reducing memory overhead. These findings highlight forward-only, low-rank ZO optimization as an effective, backpropagation-free alternative for federated fine-tuning of LLMs in NPU-constrained environments.
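As a concrete illustration of the forward-only approach the abstract describes, the sketch below shows a two-point (SPSA-style) zeroth-order gradient estimate restricted to a random low-rank perturbation, plus plain FedAvg weight averaging. This is a minimal sketch under stated assumptions, not the paper's implementation: the function names (zo_lowrank_grad, fedavg_aggregate, client_loss), the default rank of 4, and the step size mu are illustrative choices not taken from the source.

import numpy as np

def zo_lowrank_grad(loss_fn, w, rank=4, mu=1e-3, rng=None):
    # Two-point zeroth-order gradient estimate in a random low-rank subspace.
    # loss_fn: callable mapping a weight matrix to a scalar loss (forward pass only).
    # w: (d_out, d_in) weight matrix being perturbed.
    # rank and mu are illustrative hyperparameters, not values from the paper.
    rng = np.random.default_rng() if rng is None else rng
    # Low-rank perturbation Z = U V^T: only (d_out + d_in) * rank random numbers
    # are drawn per step instead of a full d_out x d_in matrix.
    U = rng.standard_normal((w.shape[0], rank)) / np.sqrt(rank)
    V = rng.standard_normal((w.shape[1], rank))
    Z = U @ V.T
    # Two forward evaluations replace backpropagation entirely.
    loss_plus = loss_fn(w + mu * Z)
    loss_minus = loss_fn(w - mu * Z)
    scale = (loss_plus - loss_minus) / (2.0 * mu)
    return scale * Z  # directional-derivative estimate of the gradient

def fedavg_aggregate(client_weights, client_sizes):
    # Standard FedAvg: weight each client's update by its local dataset size.
    total = float(sum(client_sizes))
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Hypothetical local step on one client, then server-side averaging:
#   g_hat = zo_lowrank_grad(lambda w: client_loss(w, batch), w_local)
#   w_local = w_local - lr * g_hat
#   w_global = fedavg_aggregate([w_1, w_2, ...], [n_1, n_2, ...])

Because only forward passes and a small random seed per perturbation are needed, a scheme of this shape avoids storing activations for backpropagation, which is the memory saving the abstract attributes to the ZO low-rank method.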
PUBLICATION RECORD
- Publication year: 2025
- Venue: International Conference on Computational Intelligence and Communication Networks
- Publication date: 2025-12-20
- Source metadata: Semantic Scholar