Lecture 2: Linear Variational Problems (Part II)
Conjugate Gradient Algorithms for Linear Variational Problems in Hilbert Spaces

1. Introduction. Synopsis. Conjugate gradient algorithms are among the most popular methods of Scientific Computing. Introduced initially for the solution of linear finite-dimensional problems, they have found applications in the solution of nonlinear and/or infinite-dimensional problems and, combined with least squares, they can be applied to critical-point problems that are not minimization problems. We will begin our discussion with the solution of linear variational problems (LVP) in Hilbert spaces when the bilinear functional a(·,·) is symmetric.
2. Formulation of the basic problem. The basic problem to be considered was discussed in Lecture 1; it reads as follows (with a(·,·) symmetric here): Find u ∈ V such that
(LVP) a(u, v) = L(v), ∀v ∈ V,
with {V, a, L} as in Lecture 1.
3. Description of the Conjugate Gradient Algorithm. The algorithm reads as follows:
Step 0: Initialization
(1) u⁰ is given in V;
Solve: g⁰ ∈ V,
(2) (g⁰, v) = a(u⁰, v) − L(v), ∀v ∈ V.
Set (if g⁰ ≠ 0)
(3) w⁰ = g⁰.
For n ≥ 0, assuming that uⁿ, gⁿ and wⁿ are known, the last two different from 0, we update them as follows:
Step 1: Descent
(4) ρₙ = ||gⁿ||² / a(wⁿ, wⁿ),
(5) uⁿ⁺¹ = uⁿ − ρₙwⁿ.
Step 2: Testing convergence & updating wⁿ
Solve: gⁿ⁺¹ ∈ V,
(6) (gⁿ⁺¹, v) = (gⁿ, v) − ρₙ a(wⁿ, v), ∀v ∈ V.
If ||gⁿ⁺¹|| / ||g⁰|| ≤ tol, take u = uⁿ⁺¹; else
(7) γₙ = ||gⁿ⁺¹||² / ||gⁿ||²,
(8) wⁿ⁺¹ = gⁿ⁺¹ + γₙwⁿ.
Set n = n + 1 and return to (4).
We observe that the CG algorithm requires the solution of one linear variational problem per iteration, implying that the choice of the inner product is critical (in finite dimension, this is related to the choice of the preconditioning matrix).
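For V = Rᵈ equipped with the Euclidean inner product, the abstract steps (1)-(8) reduce to the classical matrix CG iteration: the "Solve gⁿ⁺¹ ∈ V" steps become explicit residual updates. A minimal NumPy sketch (function and variable names are illustrative, not from the slides):

```python
import numpy as np

def cg(A, b, u0, tol=1e-10, max_iter=None):
    """Conjugate gradient for Au = b, A SPD, following steps (1)-(8)."""
    u = u0.copy()
    g = A @ u - b                  # (2): g0 satisfies (g0, v) = a(u0, v) - L(v)
    w = g.copy()                   # (3): first descent direction
    g_norm2 = g @ g
    g0_norm = np.sqrt(g_norm2)
    if g0_norm == 0.0:
        return u                   # u0 already solves the system
    max_iter = max_iter or len(b)  # finite termination: at most d iterations
    n = 0
    while n < max_iter:
        Aw = A @ w
        rho = g_norm2 / (w @ Aw)           # (4): optimal step along w
        u = u - rho * w                    # (5): descent
        g = g - rho * Aw                   # (6): residual update
        g_new_norm2 = g @ g
        if np.sqrt(g_new_norm2) / g0_norm <= tol:  # convergence test
            break
        gamma = g_new_norm2 / g_norm2      # (7): conjugacy coefficient
        w = g + gamma * w                  # (8): new descent direction
        g_norm2 = g_new_norm2
        n += 1
    return u
```

Note the sign convention: g = Au − b is the gradient of the functional ½a(v, v) − L(v), so the descent step subtracts ρₙwⁿ, as in (5).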
4. Convergence of the CG algorithm
Theorem: Suppose that tol = 0 in algorithm (1)-(8); then, ∀u⁰ ∈ V, we have lim_{n→+∞} uⁿ = u, with u the solution of (LVP). Moreover, if dim V = d < +∞, there exists N ≤ d such that uᴺ = u (finite termination property).
Proof: See, e.g., RG, HNA, Vol. IX, Chapter 3 (2003).
From a practical point of view, the most important result is the Meinardus–Daniel estimate
||uⁿ − u|| ≤ C ||u⁰ − u|| [(√νₐ − 1)/(√νₐ + 1)]ⁿ,
with νₐ = sup_{v∈S} a(v, v) / inf_{v∈S} a(v, v), S being the unit sphere of V (S = {v ∈ V, ||v|| = 1}).
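To get a quantitative feel for this estimate, the contraction factor (√νₐ − 1)/(√νₐ + 1) and the iteration count it predicts for a given error reduction can be tabulated; a small illustrative script:

```python
import math

def cg_rate(nu):
    """Contraction factor (sqrt(nu) - 1)/(sqrt(nu) + 1) from the estimate."""
    s = math.sqrt(nu)
    return (s - 1.0) / (s + 1.0)

def iters_for(nu, reduction):
    """Iterations until the error-bound factor drops below 'reduction'."""
    return math.ceil(math.log(reduction) / math.log(cg_rate(nu)))

for nu in (10, 100, 10_000):
    print(nu, cg_rate(nu), iters_for(nu, 1e-6))
```

For instance, νₐ = 100 gives a factor 9/11 per iteration, so a well-chosen inner product (small νₐ) pays off directly in the iteration count.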
The Finite-Dimensional Case (1)
Consider
(LE) Ax = b,
with A an SPD d × d real matrix and b ∈ Rᵈ. Since (LE) is equivalent to: x ∈ Rᵈ,
(LEV) Ax·y = b·y, ∀y ∈ Rᵈ,
we can apply CG algorithms to the solution of (LE).
The Finite-Dimensional Case (2)
Suppose now that the inner product of Rᵈ is defined by (y, z) = Sy·z, with S another SPD matrix; then νₐ = ν(S⁻¹A), the condition number of S⁻¹A, a well-known result (!)
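A quick numerical sanity check of this claim (the SPD matrices below are arbitrary illustrative examples): νₐ is the ratio of the extreme values of the Rayleigh quotient Av·v / Sv·v, computable from the symmetric reduction L⁻¹AL⁻ᵀ with S = LLᵀ, and it coincides with the spread of the spectrum of S⁻¹A.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 40
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)        # an SPD matrix a(y, z) = Ay.z
P = rng.standard_normal((d, d))
S = P @ P.T + d * np.eye(d)        # an SPD matrix defining (y, z) = Sy.z

# nu_a = sup/inf of (Av.v)/(Sv.v): reduce to a symmetric eigenproblem
# via the Cholesky factor S = L L^T, B = L^-1 A L^-T.
L = np.linalg.cholesky(S)
B = np.linalg.solve(L, np.linalg.solve(L, A).T).T
lam = np.linalg.eigvalsh(B)
nu_a = lam.max() / lam.min()

# Spectrum of S^-1 A (nonsymmetric, but similar to B): same ratio.
mu = np.linalg.eigvals(np.linalg.solve(S, A)).real
nu_spec = mu.max() / mu.min()
print(nu_a, nu_spec)
```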
5. An Application to the Control of Elliptic PDEs
Let us consider the following control problem for an elliptic PDE: u ∈ L²(ω),
(ECP) J(u) ≤ J(v), ∀v ∈ L²(ω),
with
J(v) = ½ ∫_ω |v|² dx + ½ k ∫_O |y − y_d|² dx
and
(SE) −Δy = f + v χ_ω in Ω, y = 0 on ∂Ω.
Above, ω ⊂ Ω, O ⊂ Ω and k > 0.
The above control problem has a unique solution, characterized by
(OC) DJ(u) = 0,
where, ∀v ∈ L²(ω), DJ(v) = v + p|_ω, p being the unique solution of the following adjoint equation:
(ASE) −Δp = k(y − y_d) χ_O in Ω, p = 0 on ∂Ω.
To solve (OC), we advocate the following conjugate gradient algorithm:
(1) u⁰ is given in L²(ω).
Solve
(2) −Δy⁰ = f + u⁰χ_ω in Ω, y⁰ = 0 on ∂Ω,
and
(3) −Δp⁰ = k(y⁰ − y_d)χ_O in Ω, p⁰ = 0 on ∂Ω.
Set
(4) g⁰ = u⁰ + p⁰|_ω,
(5) w⁰ = g⁰.
For n ≥ 0, uⁿ, gⁿ and wⁿ known, the last two ≠ 0, we compute uⁿ⁺¹, gⁿ⁺¹ and, if necessary, wⁿ⁺¹ as follows:
Solve
(6) −Δȳⁿ = wⁿχ_ω in Ω, ȳⁿ = 0 on ∂Ω,
and
(7) −Δp̄ⁿ = k ȳⁿχ_O in Ω, p̄ⁿ = 0 on ∂Ω.
Set
(8) ḡⁿ = wⁿ + p̄ⁿ|_ω,
and compute
(9) ρₙ = ∫_ω |gⁿ|² dx / ∫_ω ḡⁿwⁿ dx,
(10) uⁿ⁺¹ = uⁿ − ρₙwⁿ,
(11) gⁿ⁺¹ = gⁿ − ρₙḡⁿ.
If ∫_ω |gⁿ⁺¹|² dx / ∫_ω |g⁰|² dx ≤ tol, take u = uⁿ⁺¹; otherwise, compute
(12) γₙ = ∫_ω |gⁿ⁺¹|² dx / ∫_ω |gⁿ|² dx,
(13) wⁿ⁺¹ = gⁿ⁺¹ + γₙwⁿ.
Set n = n + 1 and return to (6).
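Under purely illustrative assumptions (a 1-D domain Ω = (0, 1), ω the left half, O the right half, specific k, f, y_d, and a centered finite-difference discretization, none of which come from the slides), algorithm (1)-(13) might be realized as the following NumPy sketch; yb, pb, gb denote the auxiliary state, adjoint and gradient of steps (6)-(8):

```python
import numpy as np

# 1-D analogue of (SE)/(ASE): -y'' = f + v*chi_omega on (0,1), y(0)=y(1)=0,
# centered finite differences on m interior nodes.
m = 99
h = 1.0 / (m + 1)
x = np.linspace(h, 1.0 - h, m)
A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
omega = x < 0.5                  # control region (illustrative choice)
obs = x >= 0.5                   # observation region O (illustrative choice)
k = 100.0
f = np.ones(m)
y_d = np.sin(np.pi * x)          # target state (illustrative choice)

def ip(a_, b_):                  # L2(omega) inner product, h * sum quadrature
    return h * np.sum(a_[omega] * b_[omega])

# Steps (1)-(5): initial control, state, adjoint, gradient, direction.
u = np.zeros(m)                                  # (1), supported in omega
y = np.linalg.solve(A, f + u * omega)            # (2)
p = np.linalg.solve(A, k * (y - y_d) * obs)      # (3)
g = (u + p) * omega                              # (4)
w = g.copy()                                     # (5)
g0n2 = gn2 = ip(g, g)

tol = 1e-7                       # tolerance on the squared-norm ratio
for n in range(500):
    yb = np.linalg.solve(A, w * omega)           # (6)
    pb = np.linalg.solve(A, k * yb * obs)        # (7)
    gb = (w + pb) * omega                        # (8)
    rho = gn2 / ip(gb, w)                        # (9)
    u -= rho * w                                 # (10)
    g -= rho * gb                                # (11)
    gn2_new = ip(g, g)
    if gn2_new / g0n2 <= tol:                    # convergence test
        break
    gamma = gn2_new / gn2                        # (12)
    w = g + gamma * w                            # (13)
    gn2 = gn2_new
```

Each iteration costs two elliptic solves, exactly as steps (6)-(7) predict; in a serious implementation the dense `np.linalg.solve` calls would be replaced by a sparse or factored solver.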
It can be shown that, generically speaking, the number of iterations necessary to obtain convergence varies like k^½ log(1/tol). For more information and examples on the application of conjugate gradient algorithms to the solution of control problems, see, e.g., R. Glowinski, J.L. Lions, J. He, Exact and Approximate Controllability for Distributed Parameter Systems: A Numerical Approach, Cambridge University Press, 2008.