New conjugate gradient method for unconstrained optimization
RAIRO - Operations Research - Recherche Opérationnelle, Special issue - Advanced Optimization Approaches and Modern OR-Applications, Tome 50 (2016) no. 4-5, pp. 1013-1026.

In this paper, a new conjugate gradient method is proposed for large-scale unconstrained optimization. The method includes three existing practical nonlinear conjugate gradient methods as special cases, produces a descent search direction at every iteration, and converges globally provided that the line search satisfies the Wolfe conditions. Numerical experiments confirm the efficiency and promising potential of the new method.
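The abstract's two ingredients — a search direction of the form d_{k+1} = -g_{k+1} + β_k d_k and a line search satisfying the Wolfe conditions — can be sketched generically as below. This is an illustrative sketch only, not the authors' new method: it uses the well-known Polak–Ribière+ choice of β with a restart safeguard, and all function names and parameters are illustrative.

```python
# Generic nonlinear conjugate gradient sketch (NOT the paper's method):
#   d_{k+1} = -g_{k+1} + beta_k * d_k,  with the Polak-Ribiere+ beta
#   and a bisection search for a step satisfying the Wolfe conditions.
import numpy as np

def f(x):  # Rosenbrock test function, a standard unconstrained test problem
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
        200 * (x[1] - x[0]**2),
    ])

def wolfe_line_search(x, d, g, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Find a step satisfying the Wolfe conditions:
    sufficient decrease: f(x + a d) <= f(x) + c1 * a * g'd
    curvature:           grad(x + a d)'d >= c2 * g'd."""
    lo, hi = 0.0, np.inf
    gd = g @ d
    for _ in range(max_iter):
        if f(x + alpha * d) > f(x) + c1 * alpha * gd:
            hi = alpha                          # step too long: shrink
        elif grad(x + alpha * d) @ d < c2 * gd:
            lo = alpha                          # step too short: grow
        else:
            return alpha                        # both Wolfe conditions hold
        alpha = (lo + hi) / 2 if hi < np.inf else 2 * lo
    return alpha

def cg_minimize(x0, tol=1e-6, max_iter=2000):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        a = wolfe_line_search(x, d, g)
        x_new = x + a * d
        g_new = grad(x_new)
        # Polak-Ribiere+ beta (one of many classical choices)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0:                      # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

print(cg_minimize(np.array([-1.2, 1.0])))
```

The restart safeguard enforces the descent property at every iteration, which the paper's method is shown to achieve by construction.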

DOI : 10.1051/ro/2015064
Classification: 65K05, 90C25, 90C26, 90C27, 90C30
Keywords: Unconstrained optimization, conjugate gradient method, line search, global convergence
Sellami, Badreddine 1 ; Chaib, Yacine 1

1 Department of mathematics and informatics, Mohamed Cherif Messaadia University, Souk-Ahras, Algeria.
@article{RO_2016__50_4-5_1013_0,
     author = {Sellami, Badreddine and Chaib, Yacine},
     title = {New conjugate gradient method for unconstrained optimization},
     journal = {RAIRO - Operations Research - Recherche Op\'erationnelle},
     pages = {1013--1026},
     publisher = {EDP-Sciences},
     volume = {50},
     number = {4-5},
     year = {2016},
     doi = {10.1051/ro/2015064},
     zbl = {1357.65076},
     mrnumber = {3570546},
     language = {en},
     url = {http://archive.numdam.org/articles/10.1051/ro/2015064/}
}
TY  - JOUR
AU  - Sellami, Badreddine
AU  - Chaib, Yacine
TI  - New conjugate gradient method for unconstrained optimization
JO  - RAIRO - Operations Research - Recherche Opérationnelle
PY  - 2016
SP  - 1013
EP  - 1026
VL  - 50
IS  - 4-5
PB  - EDP-Sciences
UR  - http://archive.numdam.org/articles/10.1051/ro/2015064/
DO  - 10.1051/ro/2015064
LA  - en
ID  - RO_2016__50_4-5_1013_0
ER  - 
%0 Journal Article
%A Sellami, Badreddine
%A Chaib, Yacine
%T New conjugate gradient method for unconstrained optimization
%J RAIRO - Operations Research - Recherche Opérationnelle
%D 2016
%P 1013-1026
%V 50
%N 4-5
%I EDP-Sciences
%U http://archive.numdam.org/articles/10.1051/ro/2015064/
%R 10.1051/ro/2015064
%G en
%F RO_2016__50_4-5_1013_0

M. Al-Baali, Descent property and global convergence of the Fletcher-Reeves method with inexact line search. IMA J. Numer. Anal. 5 (1985) 121–124. | DOI | MR | Zbl

N. Andrei, An unconstrained optimization test functions collection. Adv. Model. Optim 10 (2008) 147–161. | MR | Zbl

I. Bongartz, A.R. Conn, N. Gould and P.L. Toint, CUTE: Constrained and unconstrained testing environment. ACM Trans. Math. Software (TOMS) 21 (1995) 123–160. | DOI | Zbl

Y. Dai and Y. Yuan, Some properties of a new conjugate gradient method. In Advances in Nonlinear Programming. Springer (1998) 251–262. | MR | Zbl

Y. Dai and Y. Yuan, A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10 (1999) 177–182. | DOI | MR | Zbl

Y. Dai and Y. Yuan, A three-parameter family of nonlinear conjugate gradient methods. Math. Comput. 70 (2001) 1155–1167. | DOI | MR | Zbl

Y. Dai and Y. Yuan, A class of globally convergent conjugate gradient methods. Sci. China Ser. A: Math. 46 (2003) 251–261. | DOI | MR | Zbl

Y. Dai, J. Han, G. Liu, D. Sun, H. Yin and Y. Yuan, Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10 (2000) 345–358. | DOI | MR | Zbl

R. Fletcher and C.M. Reeves, Function minimization by conjugate gradients. Comput. J. 7 (1964) 149–154. | DOI | MR | Zbl

J.C. Gilbert and J. Nocedal, Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2 (1992) 21–42. | DOI | MR | Zbl

W.W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods. Pacific J. Optim. 2 (2006) 35–58. | MR | Zbl

M.R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49 (1952) 409–436. | DOI | MR | Zbl

Y. Hu and C. Storey, Global convergence result for conjugate gradient methods. J. Optim. Theory Appl. 71 (1991) 399–405. | DOI | MR | Zbl

E.D. Dolan and J.J. Moré, Benchmarking optimization software with performance profiles. Math. Program. 91 (2002) 201–213. | DOI | MR | Zbl

E. Polak and G. Ribière, Note on the convergence of conjugate direction methods [Note sur la convergence de méthodes de directions conjuguées]. Revue française d’informatique et de recherche opérationnelle, série rouge 3 (1969) 35–43. | Numdam | MR | Zbl

B.T. Polyak, The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys. 9 (1969) 94–112. | DOI | Zbl

M.J.D. Powell, Nonconvex minimization calculations and the conjugate gradient method. In Numerical Analysis, Lect. Notes Math. 1066. Springer (1984) 122–141. | MR | Zbl

B. Sellami, Y. Laskri and R. Benzine, A new two-parameter family of nonlinear conjugate gradient methods. Optimization 64 (2015) 993–1009. | DOI | MR | Zbl

D.F. Shanno, Conjugate gradient methods with inexact searches. Math. Oper. Res. 3 (1978) 244–256. | DOI | MR | Zbl

P. Wolfe, Convergence conditions for ascent methods. SIAM Rev. 11 (1969) 226–235. | DOI | MR | Zbl

P. Wolfe, Convergence conditions for ascent methods. ii: Some corrections. SIAM Rev. 13 (1971) 185–188. | DOI | MR | Zbl

G. Zoutendijk, Nonlinear programming, computational methods. In Integer and Nonlinear Programming, edited by J. Abadie. North-Holland, Amsterdam (1970) 37–86. | MR | Zbl
