Conjugate gradient (CG) techniques are valued for their simplicity and minimal memory requirements. However, in the vector optimization setting, the Polak-Ribière-Polyak (PRP), Liu-Storey (LS), and Hestenes-Stiefel (HS) CG techniques are known to fail the sufficient descent property under Wolfe line searches. In this work, we propose variants of the PRP, LS, and HS CG techniques, termed YPR, YLS, and YHS, respectively. The YPR and YLS techniques satisfy the sufficient descent property independently of any line search, while YHS satisfies it under the Wolfe line search. Under standard assumptions and employing the strong Wolfe conditions, we investigate the global convergence properties of the proposed techniques; the analysis does not require convexity of the objective functions. Additionally, we present numerical experiments and comparisons to demonstrate the implementation, efficiency, and robustness of the proposed techniques.
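For context, the classical scalar-case formulas behind the abbreviations used in the abstract are sketched below. These are the standard textbook definitions of the CG iteration, the PRP, LS, and HS parameters, the sufficient descent condition, and the strong Wolfe conditions; they are not the paper's proposed YPR, YLS, or YHS parameters, and the symbols $f$, $g_k$, $d_k$, $\alpha_k$, $c$, $\rho$, $\sigma$ are introduced here only for illustration.

% Classical (scalar-valued) CG background; standard definitions, not the paper's variants.
\begin{align*}
  & x_{k+1} = x_k + \alpha_k d_k, \qquad
    d_k =
    \begin{cases}
      -g_k, & k = 0,\\
      -g_k + \beta_k d_{k-1}, & k \ge 1,
    \end{cases}
    \qquad g_k := \nabla f(x_k), \quad y_{k-1} := g_k - g_{k-1},\\[4pt]
  & \beta_k^{\mathrm{PRP}} = \frac{g_k^{\top} y_{k-1}}{\|g_{k-1}\|^2}, \qquad
    \beta_k^{\mathrm{LS}} = -\frac{g_k^{\top} y_{k-1}}{g_{k-1}^{\top} d_{k-1}}, \qquad
    \beta_k^{\mathrm{HS}} = \frac{g_k^{\top} y_{k-1}}{d_{k-1}^{\top} y_{k-1}},\\[4pt]
  & \text{sufficient descent:}\quad g_k^{\top} d_k \le -c\,\|g_k\|^2
    \ \text{ for some } c > 0 \text{ and all } k,\\[4pt]
  & \text{strong Wolfe:}\quad
    f(x_k + \alpha_k d_k) \le f(x_k) + \rho\,\alpha_k\, g_k^{\top} d_k, \qquad
    \bigl|\nabla f(x_k + \alpha_k d_k)^{\top} d_k\bigr| \le \sigma\, \bigl|g_k^{\top} d_k\bigr|,
\end{align*}
with $0 < \rho < \sigma < 1$. In the vector optimization setting studied in the paper, these notions are defined relative to a partial order induced by a closed convex cone; the scalar case above only conveys the structure.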


Additional Information

Author(s)

Kumam, Poom; Abubakar, Jamilu; Yahaya, Jamilu

DOI

https://doi.org/10.37193/CJM.2024.02.18