Gradient-free distributed optimization with exact convergence
Published 2020 in at - Automatisierungstechnik
ABSTRACT
In this paper, a gradient-free distributed algorithm is introduced to solve a set-constrained optimization problem over a directed communication network. Specifically, at each time step the agents locally compute a so-called pseudo-gradient to guide the updates of their decision variables, which makes the method applicable in settings where gradient information is unknown, unavailable, or non-existent. In contrast to most distributed optimization methods, the proposed algorithm does not require the weighting matrix to be doubly stochastic, so it can be implemented on graphs for which no doubly stochastic weighting matrix exists. Furthermore, unlike most gradient-free algorithms, which only converge approximately to a sub-optimal solution, the proposed algorithm achieves asymptotic convergence to the exact optimal solution. Moreover, to establish exact convergence, existing optimization methods usually assume the step-size to be non-summable but square-summable. Our algorithm instead adopts an optimal averaging scheme that only requires the step-size to be positive, non-increasing, and non-summable, which widens the range of admissible step-sizes. Finally, the effectiveness of the proposed algorithm is verified through numerical simulation.
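The abstract does not state the update equations, so the sketch below only illustrates the ingredients it names: a gradient-free two-point pseudo-gradient oracle, a projected consensus update with a row-stochastic (not doubly stochastic) weight matrix on a directed graph, and a step-size-weighted averaging scheme with a positive, non-increasing, non-summable step-size. All function names, the example problem, and the weight matrix are hypothetical; note that this plain row-stochastic scheme only consenses to a weighted optimum, whereas the paper's algorithm additionally corrects that bias to reach the exact optimum.

```python
import numpy as np

def pseudo_gradient(f, x, delta):
    """Gradient-free oracle: two-point randomized estimate of f'(x)."""
    u = np.random.randn()  # random smoothing direction
    return (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

def run(num_iters=2000, seed=0):
    np.random.seed(seed)
    # Toy problem (hypothetical): three agents jointly minimize
    # sum_i (x - c_i)^2 over the box constraint [-1, 1].
    c = [0.2, 0.5, 0.8]
    fs = [lambda x, ci=ci: (x - ci) ** 2 for ci in c]
    # Row-stochastic (NOT doubly stochastic) weights for a directed graph:
    # row sums are 1 but column sums are not.
    W = np.array([[0.6, 0.0, 0.4],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.3, 0.7]])
    x = np.zeros(3)      # each agent's scalar decision variable
    avg = np.zeros(3)    # step-size-weighted running average (the output)
    alpha_sum = 0.0
    for k in range(num_iters):
        alpha = 1.0 / np.sqrt(k + 1.0)  # positive, non-increasing, non-summable
        delta = 1.0 / (k + 1.0)         # vanishing smoothing parameter
        mix = W @ x                     # consensus step over the digraph
        g = np.array([pseudo_gradient(fs[i], mix[i], delta) for i in range(3)])
        x = np.clip(mix - alpha * g, -1.0, 1.0)  # projected pseudo-gradient step
        # averaging scheme: weight each iterate by its step-size only
        avg = (alpha_sum * avg + alpha * x) / (alpha_sum + alpha)
        alpha_sum += alpha
    return avg
```

With the Gaussian direction `u`, the two-point estimate is an unbiased estimator of the gradient of a smoothed surrogate of `f`, which is why no explicit gradient is ever evaluated; the weighted averaging is what allows the step-size to be merely non-summable rather than square-summable.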
PUBLICATION RECORD
- Publication year
2020
- Venue
at - Automatisierungstechnik
- Publication date
2020-04-13
- Fields of study
Mathematics, Computer Science
- Source metadata
Semantic Scholar