| Form of presentation | Articles in international journals and collections |
| Year of publication | 2025 |
| Language | English |
| Authors | Gabidullina Zulfiya Ravilevna; Aghasi Alireza; Doostmohammadian Mohammadreza; Ghods Amir Ahmad; Rabiee Hamid R. |
| Bibliographic description in the original language | Mohammadreza Doostmohammadian, Amir Ahmad Ghods, Alireza Aghasi, Zulfiya R. Gabidullina, Hamid R. Rabiee. Using Non-Lipschitz Signum-Based Functions for Distributed Optimization and Machine Learning: Trade-Off Between Convergence Rate and Optimality Gap. Mathematical and Computational Applications. 2025, 30(5), 108; https://doi.org/10.3390/mca30050108 |
| Annotation | In recent years, the prevalence of large-scale datasets and the demand for sophisticated learning models have necessitated the development of efficient distributed machine learning (ML) solutions. Convergence speed is a critical factor influencing the practicality and effectiveness of these distributed frameworks. Recently, non-Lipschitz continuous optimization algorithms have been proposed to improve the slow convergence rate of existing linear solutions. The use of signum-based functions was previously considered in the consensus and control literature to reach fast convergence within a prescribed time and to provide algorithms that are robust to noisy/outlier data. However, as shown in this work, these algorithms lead to an optimality gap and a steady-state residual of the objective function in the discrete-time setup. This motivates us to investigate distributed optimization and ML algorithms in terms of the trade-off between convergence rate and optimality gap. In this direction, we specifically consider the distributed regression problem and examine its convergence rate under both linear and non-Lipschitz signum-based functions. We evaluate our distributed regression approach through extensive simulations. Our results show that although adopting signum-based functions may yield faster convergence, it results in large optimality gaps. The findings presented in this paper may contribute to and advance the ongoing discourse on similar distributed algorithms, e.g., for distributed constrained optimization and distributed estimation. |
| Keywords | linear regression, distributed optimization, network and graph theory, Lipschitz continuity |
| The name of the journal | Mathematical and Computational Applications |
| Please use this ID to quote from or refer to the card | https://repository.kpfu.ru/eng/?p_id=320950&p_lang=2 |
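
The annotation contrasts linear updates with non-Lipschitz signum-based updates without stating the update rules. The sketch below is a minimal, hypothetical illustration of that trade-off, not the paper's algorithm: each agent on an assumed ring network combines a consensus term with a local least-squares gradient, optionally passing both through an entrywise signum. The data, network, step size, and iteration count are illustrative assumptions.

```python
# Minimal sketch (assumed update rules, not the paper's algorithm):
# distributed least-squares regression over a ring network, comparing a
# linear consensus+gradient update with a signum-based variant.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 8, 3

# Local data: agent i holds (A_i, b_i); global objective sum_i ||A_i x - b_i||^2.
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true + 0.01 * rng.standard_normal(10) for Ai in A]

def grad(i, x):
    # Gradient of the local objective ||A_i x - b_i||^2.
    return 2 * A[i].T @ (A[i] @ x - b[i])

# Assumed ring topology: each agent talks to its two neighbors.
neighbors = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

def run(signum, alpha=0.005, iters=2000):
    x = np.zeros((n_agents, dim))
    for _ in range(iters):
        x_new = x.copy()
        for i in range(n_agents):
            cons = sum(x[i] - x[j] for j in neighbors[i])  # consensus term
            g = grad(i, x[i])                              # local gradient term
            if signum:
                # Non-Lipschitz variant: entrywise signum of both terms.
                cons, g = np.sign(cons), np.sign(g)
            x_new[i] = x[i] - alpha * (cons + g)
        x = x_new
    return x

for signum in (False, True):
    x = run(signum)
    err = np.mean(np.linalg.norm(x - x_true, axis=1))
    print(f"signum={signum}: mean distance to x_true = {err:.4f}")
```

Because the signum step has constant magnitude regardless of the distance to the optimum, the iterates typically stall in a neighborhood of the solution whose size scales with the step size; this is one way the steady-state residual and optimality gap described in the annotation can arise, while the linear update moves proportionally to the error and shrinks it geometrically near consensus.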
Full metadata record

| Field DC | Value | Language |
| dc.contributor.author | Gabidullina Zulfiya Ravilevna | ru_RU |
| dc.contributor.author | Aghasi Alireza | ru_RU |
| dc.contributor.author | Doostmohammadian Mohammadreza | ru_RU |
| dc.contributor.author | Ghods Amir Ahmad | ru_RU |
| dc.contributor.author | Rabiee Hamid R. | ru_RU |
| dc.date.accessioned | 2025-01-01T00:00:00Z | ru_RU |
| dc.date.available | 2025-01-01T00:00:00Z | ru_RU |
| dc.date.issued | 2025 | ru_RU |
| dc.identifier.citation | Mohammadreza Doostmohammadian, Amir Ahmad Ghods, Alireza Aghasi, Zulfiya R. Gabidullina, Hamid R. Rabiee. Using Non-Lipschitz Signum-Based Functions for Distributed Optimization and Machine Learning: Trade-Off Between Convergence Rate and Optimality Gap. Mathematical and Computational Applications. 2025, 30(5), 108; https://doi.org/10.3390/mca30050108 | ru_RU |
| dc.identifier.uri | https://repository.kpfu.ru/eng/?p_id=320950&p_lang=2 | ru_RU |
| dc.description.abstract | MATHEMATICAL AND COMPUTATIONAL APPLICATIONS | ru_RU |
| dc.description.abstract | In recent years, the prevalence of large-scale datasets and the demand for sophisticated learning models have necessitated the development of efficient distributed machine learning (ML) solutions. Convergence speed is a critical factor influencing the practicality and effectiveness of these distributed frameworks. Recently, non-Lipschitz continuous optimization algorithms have been proposed to improve the slow convergence rate of existing linear solutions. The use of signum-based functions was previously considered in the consensus and control literature to reach fast convergence within a prescribed time and to provide algorithms that are robust to noisy/outlier data. However, as shown in this work, these algorithms lead to an optimality gap and a steady-state residual of the objective function in the discrete-time setup. This motivates us to investigate distributed optimization and ML algorithms in terms of the trade-off between convergence rate and optimality gap. In this direction, we specifically consider the distributed regression problem and examine its convergence rate under both linear and non-Lipschitz signum-based functions. We evaluate our distributed regression approach through extensive simulations. Our results show that although adopting signum-based functions may yield faster convergence, it results in large optimality gaps. The findings presented in this paper may contribute to and advance the ongoing discourse on similar distributed algorithms, e.g., for distributed constrained optimization and distributed estimation. | ru_RU |
| dc.language.iso | en | ru_RU |
| dc.subject | linear regression | ru_RU |
| dc.subject | distributed optimization | ru_RU |
| dc.subject | network and graph theory | ru_RU |
| dc.subject | Lipschitz continuity | ru_RU |
| dc.title | Using Non-Lipschitz Signum-Based Functions for Distributed Optimization and Machine Learning: Trade-Off Between Convergence Rate and Optimality Gap | ru_RU |
| dc.type | Articles in international journals and collections | ru_RU |