On the Preconditioned Delayed Weighted Gradient Method
DOI: https://doi.org/10.5540/tcam.2023.024.03.00437

Keywords: Gradient methods, Convex quadratic optimization, Krylov subspace methods, Preconditioning

Abstract
In this article, a preconditioned version of the Delayed Weighted Gradient Method (DWGM) is presented and analyzed. In addition to convergence, some attractive properties are demonstrated, such as the A-orthogonality of the current transformed gradient to all previous gradient vectors, as well as finite convergence. Numerical experiments are also reported, illustrating the benefits of preconditioning.
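To make the iteration concrete, the sketch below implements the plain DWGM recurrences as published by Oviedo-Leon (2019) and further analyzed by Andreani and Raydan (2021), together with one simple, hypothetical way to precondition it: applying DWGM to the split-preconditioned system L^{-1} A L^{-T} u = L^{-1} b with the Jacobi preconditioner M = diag(A) = L L^T. The function names and the Jacobi choice are illustrative assumptions; the preconditioned method analyzed in this article may be formulated differently (for example, through M-inner products), so this should be read as a sketch rather than the article's algorithm.

import numpy as np

def dwgm(A, b, x0, tol=1e-10, max_iter=1000):
    """Delayed Weighted Gradient Method for min 0.5 x'Ax - b'x with A SPD,
    following the recurrences in Oviedo-Leon (2019)."""
    x_prev = x = x0.astype(float)
    g_prev = g = A @ x - b
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        w = A @ g
        alpha = (g @ w) / (w @ w)        # minimal-gradient steplength
        y = x - alpha * g                # auxiliary gradient step
        r = g - alpha * w                # gradient at y
        d = g_prev - r
        beta = (g_prev @ d) / (d @ d)    # weight minimizing the new gradient norm
        x, x_prev = x_prev - beta * (x_prev - y), x   # delayed update
        g, g_prev = g_prev - beta * d, g
    return x

def preconditioned_dwgm_jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    """Hypothetical preconditioned variant: run DWGM on the split-preconditioned
    system L^{-1} A L^{-T} u = L^{-1} b with M = diag(A) = L L^T (Jacobi),
    then map the solution back via x = L^{-T} u."""
    s = 1.0 / np.sqrt(np.diag(A))        # diagonal entries of L^{-1}
    A_hat = (s[:, None] * A) * s[None, :]
    b_hat = s * b
    u = dwgm(A_hat, b_hat, x0 / s, tol=tol, max_iter=max_iter)
    return s * u

# Sanity check on a random SPD system.
rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)              # SPD with a well-scaled diagonal
b = rng.standard_normal(n)
x = preconditioned_dwgm_jacobi(A, b, np.zeros(n))
print(np.linalg.norm(A @ x - b))         # residual should be near tol

Note that at the first iteration beta = 1, so the method starts with a minimal-gradient step; afterwards each beta blends the delayed iterate with the auxiliary point so as to minimize the norm of the new gradient, which is the mechanism behind the finite-convergence behavior mentioned in the abstract.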
References

E. Birgin, I. Chambouleyron, and J. M. Martínez, Estimation of the optical constants and the thickness of thin films using unconstrained optimization, Journal of Computational Physics, vol. 151, pp. 862-880, 1999.
T. Serafini, G. Zanghirati, and L. Zanni, Gradient projection methods for large quadratic programs and applications in training support vector machines, Optimization Methods and Software, vol. 20, pp. 353-378, 2005.
M. A. Figueiredo, R. Nowak, and S. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems, IEEE J. Sel. Top. Signal Process., vol. 1, pp. 586-597, 2007.
A. Cauchy, Méthode générale pour la résolution des systèmes d'équations simultanées, Comptes Rendus Sci. Paris, vol. 25, pp. 536-538, 1847.
H. Akaike, On a successive transformation of probability distribution and its application to the analysis of the optimum gradient method, Annals of the Institute of Statistical Mathematics, vol. 11, pp. 1-16, 1959.
R. De Asmundis, D. di Serafino, W. W. Hager, G. Toraldo, and H. Zhang, An efficient gradient method using the Yuan steplength, Computational Optimization and Applications, vol. 59, pp. 541-563, 2014.
J. Barzilai and J. M. Borwein, Two-point step size gradient methods, IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141-148, 1988.
D. di Serafino, V. Ruggiero, G. Toraldo, and L. Zanni, On the steplength selection in gradient methods for unconstrained optimization, Applied Mathematics and Computation, vol. 318, pp. 176-195, 2018.
Y.-H. Dai and L.-Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA Journal of Numerical Analysis, vol. 22, no. 1, pp. 1-10, 2002.
A. Friedlander, J. M. Martínez, B. Molina, and M. Raydan, Gradient method with retards and generalizations, SIAM Journal on Numerical Analysis, vol. 36, no. 1, pp. 275-289, 1999.
Y. X. Yuan, A new stepsize for the steepest descent method, Journal of Computational Mathematics, vol. 24, no. 2, pp. 149-156, 2006.
Y.-H. Dai, Y. Huang, and X.-W. Liu, A family of spectral gradient methods for optimization, Computational Optimization and Applications, vol. 74, pp. 43-65, 2019.
H. F. Oviedo-Leon, A delayed weighted gradient method for strictly convex quadratic minimization, Computational Optimization and Applications, vol. 74, pp. 729-746, 2019.
R. Andreani and M. Raydan, Properties of the delayed weighted gradient method, Computational Optimization and Applications, vol. 78, pp. 167-180, 2021.
G. E. Forsythe and T. S. Motzkin, Asymptotic properties of the optimum gradient method, preliminary report, Bull. Amer. Math. Soc., vol. 57, pp. 304-305, 1951.
B. V. Shah, R. J. Buehler, and O. Kempthorne, Some algorithms for minimizing a function of several variables, Journal of the Society for Industrial and Applied Mathematics, vol. 12, no. 1, pp. 74-92, 1964.
H. Sorenson, Comparison of some conjugate direction procedures for function minimization, Journal of the Franklin Institute, vol. 288, no. 6, pp. 421-441, 1969.
J.-L. Lamotte, B. Molina, and M. Raydan, Smooth and adaptive gradient method with retards, Mathematical and Computer Modelling, vol. 36, pp. 1161-1168, December 2002.
D. Bertaccini and F. Durastante, Iterative Methods and Preconditioning for Large and Sparse Linear Systems with Applications. New York: Chapman and Hall/CRC, 2018.
E. G. Birgin, J. M. Martínez, and M. Raydan, Spectral projected gradient methods: review and perspectives, Journal of Statistical Software, vol. 60, no. 3, pp. 1-21, 2014.
R. De Asmundis, D. di Serafino, F. Riccio, and G. Toraldo, On spectral properties of steepest descent methods, IMA Journal of Numerical Analysis, vol. 33, no. 4, pp. 1416-1435, 2013.
Y. Huang, Y.-H. Dai, X.-W. Liu, and H. Zhang, Gradient methods exploiting spectral properties, Optimization Methods and Software, vol. 35, no. 4, pp. 681-705, 2020.
G. H. Golub and C. F. Van Loan, Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, 4th ed., 2012.
D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas. Princeton University Press, second ed., 2011.
C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics, 1995.
W. Ford, Numerical Linear Algebra with Applications: Using MATLAB. New York: Elsevier Inc, 2015.
License
Copyright
Authors of articles published in the journal Trends in Computational and Applied Mathematics retain the copyright of their work. Published articles are licensed under the Creative Commons Attribution (CC-BY) license. The authors grant the TCAM journal the right of first publication.
Intellectual Property and Terms of Use
The content of the articles is the exclusive responsibility of the authors. The CC-BY license allows published articles to be reused without permission for any purpose, as long as the original work is correctly cited.
The journal encourages authors to self-archive their accepted manuscripts, publishing them on personal blogs, institutional repositories, and social media, as long as a full citation to the version published on the journal's website is included.