editor@tjprc.org www.tjprc.org
A NEW SPECTRAL CONJUGATE GRADIENT ALGORITHM FOR UNCONSTRAINED OPTIMIZATION
KHALIL K. ABBO 1, YOKSAL A. LAYLANI 2 & HISHAM M. KHUDHUR 3
1, 3 Department of Mathematics, College of Computer Sciences and Mathematics, University of Mosul, Mosul, Iraq
2 Department of Mathematics, College of Sciences, University of Kirkuk, Kirkuk, Iraq
ABSTRACT
In this research, we develop a new search direction for conjugate gradient algorithms by using a convex combination. The developed algorithm is shown to converge under some hypotheses. The numerical results show the efficiency of the developed method for solving unconstrained nonlinear optimization test problems.
KEYWORDS: Search Direction, Convergence, Optimization Problems
Received: Oct 24, 2017; Accepted: Nov 14, 2017; Published: Dec 29, 2017; Paper Id.: IJMCARFEB20181
1. INTRODUCTION
Conjugate gradient methods represent an important class of unconstrained optimization algorithms with strong local and global convergence properties and modest memory requirements. An excellent survey of the development of different versions of nonlinear conjugate-gradient methods, with special attention to global convergence properties, is presented by Hager and Zhang [8]. This family of algorithms includes many variants, well known in the literature, with important convergence properties and numerical efficiency. The purpose of this paper is to present these algorithms as well as their performance in solving a large variety of large-scale unconstrained optimization problems. Consider the nonlinear unconstrained optimization problem
min { f(x) : x ∈ R^n },   (1)

where f : R^n → R is a continuously differentiable function bounded below. Starting from an initial guess x_1 ∈ R^n, a nonlinear conjugate-gradient method generates a sequence {x_k} as:
x_{k+1} = x_k + α_k d_k,   (2)

where α_k > 0 is obtained by line search, and the directions d_k are generated as:
d_{k+1} = -g_{k+1} + β_k s_k,   d_1 = -g_1.   (3)

In (3), β_k is known as the conjugate gradient parameter, s_k = x_{k+1} - x_k and g_k = ∇f(x_k). Let ||·|| denote the Euclidean norm and define y_k = g_{k+1} - g_k. The line search in the conjugate-gradient algorithms is often
Original Article
International Journal of Mathematics and Computer Applications Research (IJMCAR), ISSN (P): 2249-6955; ISSN (E): 2249-8060, Vol. 8, Issue 1, Feb 2018, pp. 1-10. © TJPRC Pvt. Ltd.
Impact Factor (JCC): 5.9876; NAAS Rating: 3.76
based on the standard Wolfe conditions:
f(x_k + α_k d_k) - f(x_k) ≤ ρ α_k g_k^T d_k,   (4)

g_{k+1}^T d_k ≥ σ g_k^T d_k,   (5)

where d_k is a descent direction and 0 < ρ ≤ σ < 1.
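As a small illustration (not part of the paper), conditions (4) and (5) can be checked numerically in the scalar case. The test function, point, and the values ρ = 1e-4 and σ = 0.9 below are assumptions chosen for the example, not values fixed by the authors.

```python
def satisfies_wolfe(f, grad, x, d, alpha, rho=1e-4, sigma=0.9):
    """Check the standard Wolfe conditions (4) and (5) for a step alpha
    along direction d from point x (scalar case, for illustration)."""
    g0 = grad(x)
    # (4) sufficient decrease: f(x + alpha*d) - f(x) <= rho * alpha * g0 * d
    armijo = f(x + alpha * d) - f(x) <= rho * alpha * g0 * d
    # (5) curvature: grad(x + alpha*d) * d >= sigma * g0 * d
    curvature = grad(x + alpha * d) * d >= sigma * g0 * d
    return armijo and curvature

# Example on f(x) = x^2 from x = 1 along the descent direction d = -f'(1) = -2:
f = lambda x: x * x
grad = lambda x: 2.0 * x
print(satisfies_wolfe(f, grad, 1.0, -2.0, 0.4))
```

A very small step such as alpha = 1e-6 satisfies (4) but fails the curvature condition (5), which is exactly why (5) is needed to rule out negligible steps.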
For some conjugate-gradient algorithms, stronger versions of the Wolfe conditions are needed to ensure convergence and to enhance stability. According to the formula used to compute β_k, the conjugate-gradient algorithms can be classified as classical, hybrid, scaled, modified and parametric. In the following, we present these algorithms together with their Dolan and Moré performance profiles for solving large-scale unconstrained optimization problems. The history of the conjugate-gradient method begins with the seminal paper of Hestenes and Stiefel [9], who presented an algorithm for solving symmetric, positive-definite linear algebraic systems. In 1964, Fletcher and Reeves [6] extended the domain of application of the conjugate-gradient method to nonlinear problems, thus starting the nonlinear conjugate-gradient research direction. The main advantages of the conjugate-gradient method are its low memory requirements and its convergence speed. A large variety of nonlinear conjugate gradient algorithms are known, and for each of them convergence results have been proved under mild conditions involving Lipschitz and boundedness assumptions. To prove the global convergence of nonlinear conjugate-gradient methods, the Zoutendijk condition is often used, combined with analysis showing that the sufficient descent condition g_k^T d_k ≤ -c ||g_k||^2 holds and that there exists a constant δ such that ||d_k||^2 ≤ δ k. Often, the convergence analysis of conjugate gradient algorithms for general nonlinear functions follows insights developed by Gilbert and Nocedal [7]. The idea is to bound the change u_{k+1} - u_k in the normalized direction u_k = d_k / ||d_k||, which is used to conclude, by contradiction, that the gradients cannot be bounded away from zero.
2. CLASSICAL CONJUGATE GRADIENT ALGORITHMS
These algorithms are defined by (2) and (3), where the parameter β_k is computed as in Table 1. Observe that they can be classified as algorithms with ||g_{k+1}||^2 in the numerator of β_k and algorithms with g_{k+1}^T y_k in the numerator of β_k. The FR, CD and DY methods (see the tables for the definitions of the acronyms used for the algorithms throughout the text), with ||g_{k+1}||^2 in the numerator of β_k, have strong convergence theory, but they tend to fare worse in the performance profiles of conjugate gradient algorithms for unconstrained optimization.
Table 1: Classical Conjugate Gradient Algorithms [1]

No.  Author                           Formula
1.   Hestenes-Stiefel (HS), 1952      β_k^{HS} = g_{k+1}^T y_k / (d_k^T y_k)
2.   Fletcher-Reeves (FR), 1964       β_k^{FR} = g_{k+1}^T g_{k+1} / (g_k^T g_k)
3.   Polak-Ribière (PR), 1969         β_k^{PR} = g_{k+1}^T y_k / (g_k^T g_k)
4.   Dixon (DX), 1975                 β_k^{DX} = - g_{k+1}^T g_{k+1} / (d_k^T g_k)
5.   Fletcher (CD), 1987              β_k^{CD} = - g_{k+1}^T g_{k+1} / (d_k^T g_k)
6.   Liu-Storey (LS), 1991            β_k^{LS} = - g_{k+1}^T y_k / (d_k^T g_k)
7.   Dai-Yuan (DY), 1999              β_k^{DY} = g_{k+1}^T g_{k+1} / (d_k^T y_k)
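For concreteness, a few of the formulas in Table 1 can be written directly in code. This is an illustrative sketch in plain Python; the helper and function names are ours, not the authors' implementation.

```python
def dot(u, v):
    """Inner product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def beta_hs(g_new, y, d):
    # Hestenes-Stiefel (Table 1, no. 1): g_{k+1}^T y_k / (d_k^T y_k)
    return dot(g_new, y) / dot(d, y)

def beta_fr(g_new, g_old):
    # Fletcher-Reeves (Table 1, no. 2): g_{k+1}^T g_{k+1} / (g_k^T g_k)
    return dot(g_new, g_new) / dot(g_old, g_old)

def beta_pr(g_new, g_old):
    # Polak-Ribiere (Table 1, no. 3): g_{k+1}^T y_k / (g_k^T g_k)
    y = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, y) / dot(g_old, g_old)

def beta_dy(g_new, y, d):
    # Dai-Yuan (Table 1, no. 7): g_{k+1}^T g_{k+1} / (d_k^T y_k)
    return dot(g_new, g_new) / dot(d, y)
```

Each function takes only the vectors appearing in the corresponding formula, which makes the numerator classification discussed above easy to see in code.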
These methods are susceptible to jamming: they begin to take small steps without making any significant progress toward the minimum. On the other hand, the HS, PR and LS methods, with g_{k+1}^T y_k in the numerator of the parameter β_k, have a built-in restart feature that addresses the jamming phenomenon. When the step s_k is small, the factor y_k = g_{k+1} - g_k in the numerator of β_k tends to zero. Therefore, β_k becomes small and the new direction d_{k+1} in (3) is essentially the steepest descent direction -g_{k+1}. In other words, the HS, PR and LS methods automatically adjust β_k to avoid jamming, and their performance is better than that of the methods with ||g_{k+1}||^2 in the numerator of β_k.

This paper is organized as follows. Section 3 presents the new CG algorithm for solving unconstrained optimization problems. Section 4 shows that our algorithm satisfies the descent condition at every iteration. Section 5 shows that our algorithm satisfies the global convergence condition. Section 6 presents numerical experiments and comparisons.
3. NEW ALGORITHM FOR CG TO SOLVE UNCONSTRAINED OPTIMIZATION PROBLEMS
We define the new search direction by the following:

d_{k+1} = -θ g_{k+1} + (1 - θ) β_{k+1} d_k,   (6)

θ = g_{k+1}^T d_k / (y_k^T d_k),   (7)

where 0 < θ < 1.
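Written out, equations (6) and (7) amount to the following computation. This is a sketch; the helper names are ours, and the parameter beta is supplied externally by whichever classical CG formula is chosen, which the excerpt does not fix.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def spectral_direction(g_new, d, y, beta):
    """New direction (6): d_{k+1} = -theta*g_{k+1} + (1 - theta)*beta*d_k,
    with theta from (7); beta comes from any classical CG formula."""
    theta = dot(g_new, d) / dot(y, d)   # equation (7)
    return [-theta * gi + (1.0 - theta) * beta * di
            for gi, di in zip(g_new, d)]
```

The direction is thus a convex combination of the steepest descent direction -g_{k+1} and the classical CG direction β d_k, weighted by θ and 1 - θ.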
New Algorithm
Step 1: Initialization. Select x_1 ∈ R^n and the parameter θ > 0. Compute f(x_1) and g_1. Set d_1 = -g_1 and the initial step size α_1 = 1/||g_1||.
Step 2: Test for continuation of iterations. If ||g_{k+1}|| ≤ 10^{-6}, then stop.
Step 3: Line search. Compute α_k > 0 satisfying the Wolfe line search conditions (4) and (5) and update the variables: x_{k+1} = x_k + α_k d_k.
Step 4: Direction computation. Compute θ = g_{k+1}^T d_k / (y_k^T d_k) and d = -θ g_{k+1} + (1 - θ) β_{k+1} d_k. If the restart criterion |g_{k+1}^T g_k| ≥ 0.2 ||g_{k+1}||^2 is satisfied, then set d_{k+1} = -g_{k+1}; otherwise define d_{k+1} = d. Compute the initial step size α_k = α_{k-1} ||d_{k-1}|| / ||d_k||, set k = k + 1 and continue with Step 2.
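Steps 1-4 can be sketched in code as follows. Two simplifications are assumed and are not the authors' exact implementation: the Wolfe line search is replaced by simple backtracking on the sufficient-decrease condition (4), and β_{k+1} is taken as the Dai-Yuan choice from Table 1; a steepest-descent safeguard is also added for when the computed direction fails to descend.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def new_cg(f, grad, x, tol=1e-6, max_iter=500):
    """Sketch of the proposed algorithm (Steps 1-4), with an assumed
    backtracking line search and an assumed Dai-Yuan beta."""
    g = grad(x)
    d = [-gi for gi in g]                        # Step 1: d_1 = -g_1
    for _ in range(max_iter):
        if norm(g) <= tol:                       # Step 2: stopping test
            break
        # Step 3: backtracking on condition (4) with rho = 1e-4 (assumed)
        alpha = 1.0 / max(norm(g), 1.0)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * alpha * dot(g, d):
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        y = [a - b for a, b in zip(g_new, g)]
        # Step 4: restart test |g_{k+1}^T g_k| >= 0.2 ||g_{k+1}||^2
        if abs(dot(g_new, g)) >= 0.2 * dot(g_new, g_new) or dot(y, d) == 0.0:
            d = [-gi for gi in g_new]            # Powell-type restart
        else:
            theta = dot(g_new, d) / dot(y, d)            # equation (7)
            beta = dot(g_new, g_new) / dot(d, y)         # Dai-Yuan beta (assumed)
            d = [-theta * gi + (1.0 - theta) * beta * di
                 for gi, di in zip(g_new, d)]
            if dot(g_new, d) >= 0.0:             # descent safeguard (assumed)
                d = [-gi for gi in g_new]
        g = g_new
    return x

# Minimize f(x) = (x1 - 1)^2 + 4*(x2 + 2)^2, a strongly convex quadratic:
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 8.0 * (x[1] + 2.0)]
sol = new_cg(f, grad, [0.0, 0.0])
```

On this test function the iterates approach the minimizer (1, -2).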
4. THE DESCENT PROPERTY OF THE NEW METHOD
Below we show the descent property of our proposed new scaled conjugate gradient algorithm, whose search direction is

d_{k+1} = -θ g_{k+1} + (1 - θ) β_{k+1} d_k.   (8)

This is established in the following theorem.

Theorem (1)
The search direction defined by

d_{k+1} = -θ g_{k+1} + (1 - θ) β_{k+1} d_k   (9)

satisfies the descent property for all k ≥ 1.

Proof
The proof is by induction. If k = 1, then g_1^T d_1 = -||g_1||^2 < 0. Assume that g_k^T d_k < 0. Note that from the Wolfe conditions we have d_k^T y_k > 0.
We prove that the relation holds for k + 1. Multiplying equation (9) by g_{k+1}, we obtain

d_{k+1}^T g_{k+1} = -θ ||g_{k+1}||^2 + (1 - θ) β_{k+1} g_{k+1}^T d_k.   (10)

Substituting θ from (7),

d_{k+1}^T g_{k+1} = - (g_{k+1}^T d_k / y_k^T d_k) ||g_{k+1}||^2 + (1 - g_{k+1}^T d_k / y_k^T d_k) β_{k+1} g_{k+1}^T d_k,   (11)

d_{k+1}^T g_{k+1} = - (g_{k+1}^T d_k / y_k^T d_k) ||g_{k+1}||^2 + β_{k+1} g_{k+1}^T d_k - β_{k+1} (g_{k+1}^T d_k)^2 / (y_k^T d_k).   (12)

Provided that

(g_{k+1}^T d_k / y_k^T d_k) ||g_{k+1}||^2 + β_{k+1} (g_{k+1}^T d_k)^2 / (y_k^T d_k) > β_{k+1} g_{k+1}^T d_k,

it follows that

d_{k+1}^T g_{k+1} < 0.   (13)
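The conclusion of Theorem (1) can be sanity-checked numerically. The vectors below are illustrative assumptions chosen so that the induction hypothesis g_k^T d_k < 0, the Wolfe consequence d_k^T y_k > 0, and 0 < θ < 1 all hold; since the excerpt does not pin down β_{k+1}, the Dai-Yuan value is assumed.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Sample data: g_k = [2, 0], d_k = [-2, 0] gives g_k^T d_k = -4 < 0.
g_old, d = [2.0, 0.0], [-2.0, 0.0]
g_new = [-0.5, 1.0]                              # gives g_{k+1}^T d_k = 1
y = [a - b for a, b in zip(g_new, g_old)]        # y_k = g_{k+1} - g_k

theta = dot(g_new, d) / dot(y, d)                # equation (7)
beta = dot(g_new, g_new) / dot(d, y)             # Dai-Yuan beta (assumed)
d_new = [-theta * gi + (1.0 - theta) * beta * di # equation (9)
         for gi, di in zip(g_new, d)]

print(theta, dot(d_new, g_new))                  # theta in (0,1), product negative
```

The inner product d_{k+1}^T g_{k+1} comes out negative, as inequality (13) asserts.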
5. GLOBAL CONVERGENCE ANALYSIS
Next we show that the CG method with the parameter β_{k+1} converges globally. We need the following assumptions for the convergence of the proposed new algorithm.

Assumption (1) [4]
1. f is bounded below in the level set S = { x ∈ R^n : f(x) ≤ f(x_0) }, where x_0 is the initial point.
2. f is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists L > 0 such that

||g(x) - g(y)|| ≤ L ||x - y||   for all x, y ∈ N.   (14)

3. f is a uniformly convex function; then there exists a constant μ > 0 such that

(∇f(x) - ∇f(y))^T (x - y) ≥ μ ||x - y||^2   for any x, y ∈ S,   (15)

or equivalently

y_k^T s_k ≥ μ ||s_k||^2   and   μ ||s_k||^2 ≤ y_k^T s_k ≤ L ||s_k||^2.   (16)

On the other hand, under Assumption (1), it is clear that there exist positive constants B and γ such that

||x|| ≤ B,   ∀ x ∈ S,   (17)

||∇f(x)|| ≤ γ,   ∀ x ∈ S.   (18)

Lemma (1) [4, 5, 10]
Suppose that Assumption (1) and equation (17) hold. Consider any conjugate gradient method of the form (2) and
