General Optimality Systems for PDE-Constrained Optimization
Overview

The previous lectures developed a complete theory for linear-quadratic elliptic optimal control:

- reduced formulation through the control-to-state map;
- adjoint-based gradient representation;
- variational inequality for control constraints;
- discrete KKT system and its numerical solution.

This lecture places these results in an abstract framework that covers nonlinear state equations and general objective functionals.

The logical path of the lecture:

- fix the functional analytic setting: spaces, duality pairings, and adjoint operators;
- define Fréchet differentiability of maps between Banach spaces;
- derive the differentiability of the control-to-state map via the Implicit Function Theorem;
- compute the Fréchet derivative of the reduced functional via the chain rule;
- introduce the adjoint equation and rewrite the gradient without nested PDE solves;
- formulate the Lagrangian, derive the full KKT system, and state the second-order conditions;
- define the normal cone precisely and recover the variational inequality and projection formula;
- compare the reduced and all-at-once approaches in terms of structure and computational cost;
- verify the framework on a semilinear elliptic example.
Functional Analytic Setting

Let $X$, $Y$, $Z$, $U$ be real Banach spaces.
A Banach space is a complete normed vector space.
A Hilbert space is a Banach space whose norm is induced by an inner product: $\|v\|^2 = (v,v)$.
Hilbert spaces are reflexive, meaning $(Y')' \cong Y$ isometrically.

Dual spaces and duality pairings.
The dual space $X'$ consists of all bounded linear functionals $\ell: X \to \mathbb{R}$.
For $\ell \in X'$ and $x \in X$ we write the duality pairing
$$\langle \ell, x \rangle_{X', X} := \ell(x).$$
The norm of $X'$ is $\|\ell\|_{X'} := \sup_{\|x\|_X \le 1} |\langle \ell, x \rangle|$.
Spaces used in this lecture.

- $Y$: the state space.
  For elliptic problems: $Y = H_0^1(\Omega)$ or $Y = H^1(\Omega)$.
  In the abstract framework: a Hilbert space.
- $U$: the control space.
  Typically: $U = L^2(\Omega)$ or $U = L^2(\Gamma)$.
  In the abstract framework: a Hilbert space.
- $Z'$: the constraint residual space, i.e., the codomain of the state equation.
  For elliptic problems: $Z' = H^{-1}(\Omega)$, the dual of $H_0^1(\Omega)$.
  In the abstract framework: the dual of some Banach space $Z$.
- $p \in Z$: the adjoint state or Lagrange multiplier for the state equation.
Adjoint operators.
Let $A: X \to Z'$ be a bounded linear operator.
Its adjoint $A^*: Z \to X'$ is defined by
$$\langle Ax, z \rangle_{Z', Z} = \langle x, A^* z \rangle_{X, X'} \qquad \forall x \in X, \; z \in Z.$$
If $X$ and $Z$ are Hilbert spaces and we identify $X \cong X'$ and $Z \cong Z'$ via the Riesz isomorphisms, this becomes $(Ax, z)_Z = (x, A^* z)_X$.
Riesz isomorphism.
In a Hilbert space $X$, the Riesz map $\mathcal{R}_X: X \to X'$ is defined by
$$\langle \mathcal{R}_X u, v \rangle_{X', X} = (u, v)_X \qquad \forall v \in X.$$
It is an isometric isomorphism.
Its inverse $\mathcal{R}_X^{-1}: X' \to X$ identifies the gradient:
given $\ell \in X'$, the unique element $g := \mathcal{R}_X^{-1} \ell \in X$ satisfies
$$\langle \ell, v \rangle_{X', X} = (g, v)_X \qquad \forall v \in X,$$
and we write $g = \nabla f(u)$ when $\ell = f'(u)$.
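In a discretized setting the Riesz map is represented by a Gram (mass) matrix: the derivative $f'(u)$ comes as a vector of duality values, and the gradient is recovered by a mass-matrix solve. A minimal numerical sketch, assuming a lumped mass matrix on a uniform 1D mesh (all sizes and data below are illustrative, not taken from the lecture):

```python
import numpy as np

# Hypothetical piecewise-linear FE setting on a uniform mesh of (0, 1).
n = 5                      # number of interior nodes (illustrative)
h = 1.0 / (n + 1)

# Lumped mass matrix M approximates the L^2 inner product: (u, v)_U ≈ u^T M v.
M = h * np.eye(n)

# Suppose dfdu holds the duality values <f'(u), phi_i> for the nodal basis phi_i.
dfdu = np.random.rand(n)   # placeholder derivative data

# Riesz representative (the "gradient"): solve M g = dfdu, i.e. g = R_U^{-1} f'(u).
g = np.linalg.solve(M, dfdu)

# Check the defining relation <f'(u), v> = (g, v)_U for an arbitrary test vector v.
v = np.random.rand(n)
assert abs(dfdu @ v - g @ (M @ v)) < 1e-10
```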
Fréchet Differentiability

Let $X$ and $W$ be Banach spaces, and let $F: X \to W$.
$F$ is Fréchet differentiable at $x \in X$ if there exists a bounded linear operator $F'(x): X \to W$ such that
$$\lim_{\|h\|_X \to 0} \frac{\|F(x+h) - F(x) - F'(x)h\|_W}{\|h\|_X} = 0.$$
The operator $F'(x)$ is called the Fréchet derivative of $F$ at $x$.
For real-valued functionals $F: X \to \mathbb{R}$, the Fréchet derivative is an element of $X'$: $F'(x) \in X'$.

Partial derivatives.
For a map $F: X_1 \times X_2 \to W$, the partial Fréchet derivative with respect to $X_1$ at $(x_1, x_2)$ in direction $h_1 \in X_1$ is
$$F_{x_1}(x_1, x_2) h_1 := \lim_{t \to 0} \frac{F(x_1 + t h_1, x_2) - F(x_1, x_2)}{t}.$$
For fixed $(x_1, x_2)$, the map $h_1 \mapsto F_{x_1}(x_1, x_2) h_1$ is a bounded linear operator from $X_1$ to $W$.
Chain rule.
If $G: X \to W$ and $H: W \to V$ are Fréchet differentiable, then $(H \circ G): X \to V$ is Fréchet differentiable and
$$(H \circ G)'(x) = H'(G(x)) \circ G'(x).$$
For our purposes: given $S: U \to Y$ and $J: Y \times U \to \mathbb{R}$, with $f(u) := J(S(u), u)$,
$$f'(u)h = \langle J_y(S(u), u), S'(u) h \rangle_{Y', Y} + \langle J_u(S(u), u), h \rangle_{U', U} \qquad \forall h \in U.$$

Abstract Problem

We now state the abstract optimal control problem and fix notation for all derivatives.
Objective functional.
$J: Y \times U \to \mathbb{R}$ is assumed Fréchet differentiable, with partial derivatives
$$J_y(y, u) \in Y', \qquad J_u(y, u) \in U'.$$

Constraint mapping.
$e: Y \times U \to Z'$ is assumed Fréchet differentiable, with partial derivatives
$$e_y(y, u): Y \to Z', \qquad e_u(y, u): U \to Z'.$$
These are bounded linear operators for each fixed $(y, u)$.

Admissible controls.
$U_{\mathrm{ad}} \subset U$ is a nonempty, closed, and convex subset.

Optimal control problem.
Find $(\bar y, \bar u) \in Y \times U_{\mathrm{ad}}$ such that
$$J(\bar y, \bar u) = \inf\bigl\{ J(y, u):\; e(y, u) = 0 \text{ in } Z',\; u \in U_{\mathrm{ad}} \bigr\}.$$

Standing assumptions.

1. For every $u \in U_{\mathrm{ad}}$, the equation $e(y, u) = 0$ in $Z'$ admits a unique solution $y \in Y$.
2. $J$ and $e$ are of class $C^1$ (all Fréchet derivatives exist and are continuous).
3. At every feasible pair $(y, u)$, the operator $e_y(y, u): Y \to Z'$ is an isomorphism.

Assumption 3 is the regularity (constraint qualification) condition.
For the Poisson state equation, it holds because $-\Delta: H_0^1(\Omega) \to H^{-1}(\Omega)$ is a homeomorphism.
Control-to-State Map

Under Assumption 1, define
$$S: U_{\mathrm{ad}} \to Y, \qquad u \mapsto y = S(u),$$
where $S(u)$ is the unique solution of $e(y, u) = 0$ in $Z'$.
The map $S$ is the control-to-state operator (or solution operator) of the state equation.

Differentiability via the Implicit Function Theorem.
The Implicit Function Theorem in Banach spaces states:
if $e: Y \times U \to Z'$ is $C^1$ and $e_y(y_0, u_0): Y \to Z'$ is an isomorphism at some feasible point $(y_0, u_0)$, then locally around $u_0$ the map $u \mapsto S(u)$ is $C^1$ and
$$S'(u)h = -e_y(y, u)^{-1} e_u(y, u) h \qquad \forall h \in U,$$
where $y = S(u)$.

Derivation.
Differentiate the identity $e(S(u), u) = 0$ with respect to $u$ in direction $h$:
$$e_y(y, u)\bigl[S'(u) h\bigr] + e_u(y, u)\bigl[h\bigr] = 0 \qquad \text{in } Z'.$$
Solving for $S'(u)h$ using the invertibility of $e_y$:
$$S'(u) h = -e_y(y, u)^{-1} e_u(y, u) h.$$
This formula shows that:

- the state variation $S'(u)h$ due to a control perturbation $h$ is obtained by one linearized state solve;
- the linearized state equation is $e_y(y,u)[v] = -e_u(y,u)[h]$, driven by the perturbation;
- the map $h \mapsto S'(u)h$ is a bounded linear operator from $U$ to $Y$.

Example (linear Poisson model).
For $e(y, u) = -\Delta y - u$:
$$e_y(y, u) = -\Delta : H_0^1(\Omega) \to H^{-1}(\Omega), \qquad e_u(y, u) = -I : L^2(\Omega) \to H^{-1}(\Omega).$$
Hence $S'(u)h = (-\Delta)^{-1}h = S(h)$, consistent with the linearity of $S$ in that case.
Reduced Functional and Chain Rule

Define the reduced functional
$$f: U_{\mathrm{ad}} \to \mathbb{R}, \qquad f(u) := J(S(u), u).$$
The PDE-constrained problem becomes an optimization problem in $u$ alone:
$$\min_{u \in U_{\mathrm{ad}}} f(u).$$

Fréchet derivative of $f$.
By the chain rule,
$$f'(u) h = \langle J_y(y, u),\, S'(u) h \rangle_{Y', Y} + \langle J_u(y, u),\, h \rangle_{U', U} \qquad \forall h \in U,$$
where $y = S(u)$.
Substituting the formula for $S'(u)h$:
$$f'(u) h = -\langle J_y(y, u),\, e_y(y, u)^{-1} e_u(y, u) h \rangle_{Y', Y} + \langle J_u(y, u),\, h \rangle_{U', U}.$$
This expression is not yet computationally convenient: evaluating the first term requires one linearized state solve for every direction $h$ tested.
The adjoint equation removes this bottleneck by moving the operator to the other argument of the duality pairing.
Adjoint Equation

Setup.
Let $(y, u)$ be a feasible pair.
We want to rewrite the directional derivative $f'(u)h$ so that $h$ appears only in one bounded linear functional, without intermediate state solves.

Definition of the adjoint state.
Introduce $p \in Z$ as the solution of the adjoint equation
$$e_y(y, u)^* p = J_y(y, u) \qquad \text{in } Y'.$$
The adjoint state $p$ is uniquely determined because $e_y(y, u)^*: Z \to Y'$ is an isomorphism (the dual of the isomorphism $e_y(y,u): Y \to Z'$).
Note: $p$ lives in $Z$, the predual of $Z'$.
For the Poisson model, $Z = H_0^1(\Omega)$ and $Z' = H^{-1}(\Omega)$, so the adjoint state $p$ is itself an $H_0^1(\Omega)$ function.
Gradient formula via the adjoint.
With $p$ as above, use the duality identity: for any invertible $A: Y \to Z'$,
$$\langle q, A^{-1} w \rangle_{Y', Y} = \langle (A^*)^{-1} q, w \rangle_{Z, Z'} \qquad \forall q \in Y', \; w \in Z'.$$
Apply this with $A = e_y(y,u)$, $q = J_y(y, u)$, $w = e_u(y, u) h$, and $p = (e_y^*)^{-1} J_y$:
$$\langle J_y(y, u),\, e_y(y, u)^{-1} e_u(y, u) h \rangle_{Y', Y} = \langle p,\, e_u(y, u) h \rangle_{Z, Z'} = \langle e_u(y, u)^* p,\, h \rangle_{U', U}.$$
Hence
$$f'(u) h = \langle J_u(y, u) - e_u(y, u)^* p,\; h \rangle_{U', U} \qquad \forall h \in U.$$

Gradient of the reduced functional.
Identifying $f'(u) \in U'$ with its Riesz representative $\nabla f(u) \in U$:
$$\nabla f(u) = \mathcal{R}_U^{-1}\bigl(J_u(y, u) - e_u(y, u)^* p\bigr) \in U.$$

Cost count. Regardless of how many directions $h$ must be tested:

- one state solve to get $y = S(u)$;
- one adjoint solve to get $p$;
- one evaluation of $J_u - e_u^* p$.

Example (Poisson model).
$J_y = y - y_d \in L^2(\Omega)$ and $e_y^* = -\Delta$, so the adjoint equation is $-\Delta p = y - y_d$ with $p = 0$ on $\partial\Omega$.
$J_u = \alpha u \in L^2(\Omega)$ and $e_u^* = -I$, so $\nabla f(u) = p + \alpha u$, exactly as derived in Lecture 4.
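As a concrete instance of this cost count, here is a minimal sketch of one adjoint-based gradient evaluation for the 1D Poisson model, using a finite-difference Laplacian; the mesh size, the target $y_d$, and the finite-difference check at the end are illustrative assumptions, not part of the lecture:

```python
import numpy as np

# Uniform mesh on (0, 1) with n interior points; -Δ ≈ tridiagonal K.
n, alpha = 99, 1e-2
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)                # illustrative target state

def f_and_grad(u):
    y = np.linalg.solve(K, u)          # one state solve:   -Δy = u
    p = np.linalg.solve(K, y - y_d)    # one adjoint solve:  -Δp = y - y_d
    f = 0.5 * h * np.sum((y - y_d)**2) + 0.5 * alpha * h * np.sum(u**2)
    return f, p + alpha * u            # ∇f(u) = p + αu  (L² gradient)

# Finite-difference check of the directional derivative f'(u)h = (∇f(u), h)_{L²}.
u, hdir, eps = np.random.rand(n), np.random.rand(n), 1e-6
f0, g0 = f_and_grad(u)
f1, _ = f_and_grad(u + eps * hdir)
print((f1 - f0) / eps, h * np.dot(g0, hdir))   # the two numbers should agree
```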
The Lagrangian naturally encodes both the objective and the constraint.

Definition.
$$\mathcal{L}: Y \times U \times Z \to \mathbb{R}, \qquad \mathcal{L}(y, u, p) := J(y, u) - \langle e(y, u), p \rangle_{Z', Z}.$$
Here $p \in Z$ is the Lagrange multiplier for the equality constraint $e(y, u) = 0$.
The duality pairing is well defined because $e(y, u) \in Z'$ and $p \in Z$.

Partial derivatives of $\mathcal{L}$.
With respect to $y$ in direction $v \in Y$:
$$\mathcal{L}_y(y, u, p)[v] = \langle J_y(y, u),\, v \rangle_{Y', Y} - \langle e_y(y, u) v,\, p \rangle_{Z', Z} = \langle J_y(y, u) - e_y(y, u)^* p,\; v \rangle_{Y', Y}.$$
With respect to $u$ in direction $h \in U$:
$$\mathcal{L}_u(y, u, p)[h] = \langle J_u(y, u),\, h \rangle_{U', U} - \langle e_u(y, u) h,\, p \rangle_{Z', Z} = \langle J_u(y, u) - e_u(y, u)^* p,\; h \rangle_{U', U}.$$
With respect to $p$ in direction $q \in Z$:
$$\mathcal{L}_p(y, u, p)[q] = -\langle e(y, u),\, q \rangle_{Z', Z}.$$

Interpretation of the conditions $\mathcal{L}_p = 0$, $\mathcal{L}_y = 0$, $\mathcal{L}_u = 0$.

- $\mathcal{L}_p(y, u, p) = 0$ means $e(y, u) = 0$ in $Z'$: the state equation.
- $\mathcal{L}_y(y, u, p) = 0$ means $e_y(y, u)^* p = J_y(y, u)$ in $Y'$: the adjoint equation.
- $\mathcal{L}_u(y, u, p) = 0$ (unconstrained case) means $J_u - e_u^* p = 0$ in $U'$: optimality.
First-Order Optimality System

We now state the full KKT conditions.

Unconstrained case ($U_{\mathrm{ad}} = U$).
At a local minimizer $\bar u$ with corresponding state $\bar y = S(\bar u)$, there exists a unique adjoint state $\bar p \in Z$ such that:
$$\begin{cases}
e(\bar y, \bar u) = 0 & \text{in } Z', \\[4pt]
e_y(\bar y, \bar u)^* \bar p = J_y(\bar y, \bar u) & \text{in } Y', \\[4pt]
J_u(\bar y, \bar u) - e_u(\bar y, \bar u)^* \bar p = 0 & \text{in } U'.
\end{cases}$$

Constrained case ($U_{\mathrm{ad}} \subsetneq U$).
At a local minimizer $\bar u \in U_{\mathrm{ad}}$, the optimality condition for $u$ becomes the variational inequality
$$f'(\bar u)(u - \bar u) \ge 0 \qquad \forall u \in U_{\mathrm{ad}},$$
which in terms of the Lagrangian is
$$0 \in \mathcal{L}_u(\bar y, \bar u, \bar p) + N_{U_{\mathrm{ad}}}(\bar u) \qquad \text{in } U'.$$
The full constrained KKT system is therefore:
$$\begin{cases}
e(\bar y, \bar u) = 0 & \text{in } Z', \\[4pt]
e_y(\bar y, \bar u)^* \bar p = J_y(\bar y, \bar u) & \text{in } Y', \\[4pt]
0 \in J_u(\bar y, \bar u) - e_u(\bar y, \bar u)^* \bar p + N_{U_{\mathrm{ad}}}(\bar u) & \text{in } U'.
\end{cases}$$
The three relations live, respectively, in $Z'$, $Y'$, and $U'$.
The three unknowns are $\bar y \in Y$, $\bar p \in Z$, and $\bar u \in U_{\mathrm{ad}}$.
Normal Cone and Variational Inequality

Definition.
Let $K \subset U$ be a nonempty closed convex set and $u \in K$.
The normal cone to $K$ at $u$ is the closed convex cone
$$N_K(u) := \bigl\{\xi \in U' :\; \langle \xi, v - u \rangle_{U', U} \le 0 \quad \forall v \in K \bigr\}.$$
Geometrically: $\xi \in N_K(u)$ if and only if $u$ maximizes the linear functional $v \mapsto \langle \xi, v \rangle$ over $K$ (i.e., $\xi$ points outward from $K$ at $u$).

Equivalence with the variational inequality.
The normal-cone inclusion $-f'(\bar u) \in N_{U_{\mathrm{ad}}}(\bar u)$ is equivalent to
$$f'(\bar u)(u - \bar u) \ge 0 \qquad \forall u \in U_{\mathrm{ad}}.$$

Projection.
When $U$ is a Hilbert space and the Riesz identification $U \cong U'$ is used, for any $w \in U$ the projection $\Pi_K(w)$ is the unique element of $K$ satisfying
$$\Pi_K(w) \in K, \qquad (w - \Pi_K(w),\, v - \Pi_K(w))_U \le 0 \qquad \forall v \in K.$$
The normal-cone inclusion reads: $w - u \in N_K(u)$ if and only if $u = \Pi_K(w)$.

Projection formula for the control.
The KKT condition $0 \in \nabla f(\bar u) + N_{U_{\mathrm{ad}}}(\bar u)$ is equivalent to
$$\bar u = \Pi_{U_{\mathrm{ad}}}(\bar u - \rho\, \nabla f(\bar u)) \qquad \text{for any } \rho > 0.$$
This is both a characterization of optimality and a natural fixed-point iteration (projected gradient step).
Box constraints in $L^2(\Omega)$.
For
$$U_{\mathrm{ad}} = \{u \in U :\; u_{\min}(x) \le u(x) \le u_{\max}(x) \text{ a.e.}\}$$
with $u_{\min}, u_{\max} \in L^\infty(\Omega)$, the normal cone at $\bar u \in U_{\mathrm{ad}}$ is characterized pointwise: $\xi \in N_{U_{\mathrm{ad}}}(\bar u)$ if and only if for a.e. $x \in \Omega$,
$$\begin{cases}
\xi(x) \le 0 & \text{if } \bar u(x) = u_{\min}(x), \\
\xi(x) = 0 & \text{if } u_{\min}(x) < \bar u(x) < u_{\max}(x), \\
\xi(x) \ge 0 & \text{if } \bar u(x) = u_{\max}(x),
\end{cases}$$
so that $\langle \xi, v - \bar u \rangle \le 0$ for every admissible $v$.
For the linear-quadratic case, the KKT condition $0 \in \alpha\bar u + \bar p + N_{U_{\mathrm{ad}}}(\bar u)$ is equivalent to the pointwise projection
$$\bar u(x) = \Pi_{[u_{\min}(x),\, u_{\max}(x)]}\!\left(-\tfrac{1}{\alpha} \bar p(x)\right) \quad \text{a.e. in } \Omega.$$

Semismooth Newton for Control Constraints

The projection characterization of the KKT condition suggests a nonsmooth root equation.
Fix $\rho > 0$ and define
$$F(u) := u - \Pi_{U_{\mathrm{ad}}}\bigl(u - \rho\, \nabla f(u)\bigr) \in U.$$
Then
$$F(\bar u)=0 \iff \bar u = \Pi_{U_{\mathrm{ad}}}(\bar u - \rho\, \nabla f(\bar u)) \iff 0 \in \nabla f(\bar u) + N_{U_{\mathrm{ad}}}(\bar u).$$
Hence solving the constrained first-order condition is equivalent to solving the nonsmooth equation $F(u)=0$ in $U$.
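For box constraints, $F$ can be evaluated componentwise with a clamp. A minimal sketch, assuming a gradient routine `grad_f` is available (e.g. from an adjoint computation as above); the name and signature are hypothetical:

```python
import numpy as np

def residual_F(u, grad_f, u_min, u_max, rho=1.0):
    """Nonsmooth KKT residual F(u) = u - Π_[u_min, u_max](u - ρ ∇f(u))."""
    return u - np.clip(u - rho * grad_f(u), u_min, u_max)

# u solves the constrained first-order condition iff residual_F(u, ...) vanishes.
```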
Proof of the three equivalences for $F(\bar u)=0$

We prove the equivalence chain in detail.
Fix $K := U_{\mathrm{ad}}$ and define
$$w := \bar u - \rho\, \nabla f(\bar u).$$

First equivalence.
$$F(\bar u)=0 \iff \bar u - \Pi_K(w)=0 \iff \bar u = \Pi_K(w).$$
This follows directly from the definition $F(u)=u-\Pi_K(u-\rho\nabla f(u))$.

Second equivalence.
By the projection/normal-cone characterization in a Hilbert space, for any $u \in K$ and $w \in U$:
$$u = \Pi_K(w) \iff w-u \in N_K(u).$$
Applying this with $u=\bar u$ and $w=\bar u-\rho\nabla f(\bar u)$ gives
$$\bar u = \Pi_K(\bar u-\rho\nabla f(\bar u)) \iff -\rho\nabla f(\bar u) \in N_K(\bar u).$$
Since $N_K(\bar u)$ is a cone, $\eta \in N_K(\bar u)$ implies $(1/\rho)\eta \in N_K(\bar u)$ for every $\rho>0$, hence
$$-\rho\nabla f(\bar u) \in N_K(\bar u) \iff -\nabla f(\bar u) \in N_K(\bar u) \iff 0 \in \nabla f(\bar u)+N_K(\bar u).$$

Combining the two equivalences yields
$$F(\bar u)=0 \iff \bar u = \Pi_{U_{\mathrm{ad}}}(\bar u - \rho\nabla f(\bar u)) \iff 0 \in \nabla f(\bar u)+N_{U_{\mathrm{ad}}}(\bar u).$$

Semismoothness and generalized derivatives
In finite dimensions, a locally Lipschitz map $G: \mathbb{R}^n \to \mathbb{R}^m$ is semismooth at $x$ if for every sequence $h_k \to 0$ and every choice $V_k \in \partial_C G(x+h_k)$ (Clarke generalized Jacobian),
$$\|G(x+h_k)-G(x)-V_k h_k\| = o(\|h_k\|).$$
It is strongly semismooth if the remainder is $O(\|h_k\|^2)$.
In Hilbert spaces, the same idea is used with generalized derivatives of nonsmooth operators (Clarke/Bouligand derivatives or Newton derivatives).
The key property is that each linearization captures the first-order behavior with a small remainder, so a Newton correction can be defined.
For box constraints, the projection is piecewise affine pointwise:
$$\Pi_{[a,b]}(s)=\min\{\max\{s,a\},b\}.$$
Therefore $\Pi_{[a,b]}$ is globally Lipschitz and strongly semismooth in finite dimensions, and so is the induced superposition operator in $L^q(\Omega)$ spaces ($1<q<\infty$).
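A small sketch of the pointwise projection and one admissible generalized (Newton) derivative for it; the value chosen where a bound is exactly active is an arbitrary but admissible selection from the generalized Jacobian:

```python
import numpy as np

def project_box(s, a, b):
    """Pointwise projection Π_[a,b](s) = min(max(s, a), b)."""
    return np.minimum(np.maximum(s, a), b)

def newton_derivative_box(s, a, b):
    """A generalized derivative of s ↦ Π_[a,b](s): 1 on the inactive set
    {a < s < b}, 0 where a bound is active (boundary choice is arbitrary)."""
    return ((s > a) & (s < b)).astype(float)
```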
Semismooth Newton step

Given an iterate $u^k$, choose a generalized derivative
$$M_k \in \partial F(u^k),$$
compute $\delta u^k \in U$ from
$$M_k\, \delta u^k = -F(u^k),$$
and update
$$u^{k+1} = u^k + \delta u^k.$$
For constrained PDE control, each Newton step requires coupled evaluation of state and adjoint variables because $\nabla f(u)$ depends on $(y(u), p(u))$.
In practice, one linearizes the reduced mapping $u \mapsto F(u)$ and solves the resulting linear system with PDE blocks, using the same solver technology as in the all-at-once KKT setting.
Primal-dual construction with an additional multiplier for $F(u)=0$

The semismooth Newton step can be obtained from a KKT system with an explicit multiplier for the equation constraint $F(u)=0$.
At iteration $k$, consider the proximal feasibility problem
$$\min_{u\in U}\; \frac12\|u-u^k\|_U^2 \qquad \text{subject to } F(u)=0.$$
Its Lagrangian is
$$\mathscr L_k(u,\lambda) = \frac12\|u-u^k\|_U^2 + \langle \lambda, F(u)\rangle_{U,U},$$
where $\lambda \in U$ is an additional Lagrange multiplier.
The first-order system is
$$\begin{cases}
u-u^k + F'(u)^*\lambda = 0,\\
F(u)=0.
\end{cases}$$
Replace $F'(u^k)$ with a generalized derivative $M_k\in\partial F(u^k)$ and linearize at $(u^k,\lambda^k)$:
$$\begin{pmatrix} I & M_k^* \\ M_k & 0 \end{pmatrix}
\begin{pmatrix} \delta u^k \\ \delta\lambda^k \end{pmatrix}
= -\begin{pmatrix} u^k-u^k+M_k^*\lambda^k \\ F(u^k) \end{pmatrix}
= -\begin{pmatrix} M_k^*\lambda^k \\ F(u^k) \end{pmatrix}.$$
If we initialize and keep $\lambda^k=0$, the first block row gives $\delta u^k = -M_k^*\delta\lambda^k$, and substituting into the second row yields the Schur-complement equation
$$M_k M_k^*\,\delta\lambda^k = F(u^k), \qquad \delta u^k = -M_k^*\delta\lambda^k.$$
This primal-dual step is equivalent to solving a regularized Newton equation for $F(u)=0$ and leads to the same local model as the direct semismooth Newton correction when $M_k$ is nonsingular.
Hence semismooth Newton may be viewed in two equivalent ways:

- reduced form: solve $M_k\,\delta u^k=-F(u^k)$;
- primal-dual form: solve a saddle-point system in $(\delta u^k,\delta\lambda^k)$ with the extra multiplier enforcing the linearized equality constraint.

The primal-dual perspective is useful for block preconditioning and for embedding the step in all-at-once PDE-constrained solvers.
For
$$J(y,u)=\tfrac12\|y-y_d\|_{L^2}^2 + \tfrac\alpha2\|u\|_{L^2}^2, \qquad U_{\mathrm{ad}} = \{u: u_{\min}\le u \le u_{\max}\},$$
we have
$$\nabla f(u)=\alpha u + p(u),$$
and the root equation is
$$F(u)=u-\Pi_{[u_{\min},u_{\max}]}\bigl(u-\rho(\alpha u + p(u))\bigr)=0.$$
Define the active and inactive sets at iteration $k$ from $w^k := u^k - \rho(\alpha u^k + p^k)$:
$$\mathcal A_-^k := \{x:\, w^k(x) \le u_{\min}(x)\}, \qquad \mathcal A_+^k := \{x:\, w^k(x) \ge u_{\max}(x)\}, \qquad \mathcal I^k := \Omega \setminus (\mathcal A_-^k \cup \mathcal A_+^k).$$
Then the generalized derivative of the projection is multiplication by the characteristic function $\chi_{\mathcal I^k}$, and the Newton step is equivalent to:

- enforce the control bounds directly on the active sets,
  $$u^{k+1}=u_{\min} \text{ on } \mathcal A_-^k, \qquad u^{k+1}=u_{\max} \text{ on } \mathcal A_+^k;$$
- solve the unconstrained optimality equation on $\mathcal I^k$.

This is exactly the primal-dual active-set (PDAS) method.
Thus, for box constraints, PDAS can be interpreted as a semismooth Newton method applied to the projection equation.
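A minimal sketch of this PDAS sweep for the 1D finite-difference Poisson model used in the gradient example; the data, the bounds, the choice $\rho = 1/\alpha$ in the active-set test, and the stopping rule (repeated active sets) are illustrative assumptions:

```python
import numpy as np

# Same 1D finite-difference Poisson setup as in the gradient sketch (illustrative data).
n, alpha = 99, 1e-2
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)
u_min, u_max = -5.0 * np.ones(n), 5.0 * np.ones(n)
Id, Zn = np.eye(n), np.zeros((n, n))

u, p = np.zeros(n), np.zeros(n)
prev_sets = None
for k in range(30):
    # Active/inactive sets from w^k = u^k - rho*(alpha*u^k + p^k) with rho = 1/alpha,
    # i.e. w^k = -p^k / alpha (a common PDAS choice of rho).
    w = -p / alpha
    lo, hi = w <= u_min, w >= u_max
    inact = ~(lo | hi)
    sets = (lo.tobytes(), hi.tobytes())
    if sets == prev_sets:
        break                          # active sets repeat: PDAS has converged
    prev_sets = sets

    # With the sets frozen, solve the linear system in (y, p, u):
    #   K y - u = 0,   K p - y = -y_d,
    #   u = u_min on A_-,  u = u_max on A_+,  alpha*u + p = 0 on I.
    C, D, r = Zn.copy(), alpha * Id, np.zeros(n)
    C[inact, inact] = 1.0
    r[lo] = alpha * u_min[lo]
    r[hi] = alpha * u_max[hi]
    A = np.block([[K,   Zn, -Id],
                  [-Id, K,   Zn],
                  [Zn,  C,   D]])
    b = np.concatenate([np.zeros(n), -y_d, r])
    y, p, u = np.split(np.linalg.solve(A, b), 3)
```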
Local convergence statement

Under standard assumptions:

- $\nabla f$ is (locally) Lipschitz differentiable in a neighborhood of $\bar u$;
- $F$ is semismooth at $\bar u$;
- every generalized derivative $M \in \partial F(\bar u)$ is uniformly invertible, i.e., $\|M^{-1}\| \le c$;

semismooth Newton is locally superlinearly convergent:
$$\|u^{k+1}-\bar u\|_U = o(\|u^k-\bar u\|_U).$$
If $F$ is strongly semismooth and the derivative approximation is exact, one obtains local quadratic convergence:
$$\|u^{k+1}-\bar u\|_U = O(\|u^k-\bar u\|_U^2).$$
These rates explain why active-set/Newton-type methods are often much faster than projected gradient iterations once the active set is close to the final one.
Reduced vs All-at-Once

The first-order optimality conditions can be approached in two complementary ways.

Reduced approach.
The control $\bar u \in U_{\mathrm{ad}}$ is the only optimization unknown.
The state $\bar y = S(\bar u)$ and adjoint $\bar p$ are functions of $\bar u$.
The problem reduces to
$$\min_{u \in U_{\mathrm{ad}}} f(u),$$
and algorithms iterate on $u$ only, solving state and adjoint equations at each step.
At each optimization iterate $u^k$:

- state solve: find $y^k \in Y$ such that $e(y^k, u^k) = 0$;
- adjoint solve: find $p^k \in Z$ such that $e_y(y^k, u^k)^* p^k = J_y(y^k, u^k)$;
- gradient: $g^k = \mathcal{R}_U^{-1}(J_u(y^k, u^k) - e_u(y^k, u^k)^* p^k) \in U$;
- control update: $u^{k+1} = \Pi_{U_{\mathrm{ad}}}(u^k - \tau_k g^k)$.
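A minimal sketch of this projected-gradient loop for the 1D box-constrained Poisson model; the step size $\tau$, the bounds, the data, and the stopping tolerance are illustrative assumptions:

```python
import numpy as np

# Projected gradient for the 1D box-constrained Poisson model (illustrative data).
n, alpha, tau = 99, 1e-2, 50.0
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)
u_min, u_max = -5.0, 5.0

u = np.zeros(n)
for k in range(200):
    y = np.linalg.solve(K, u)                    # state solve
    p = np.linalg.solve(K, y - y_d)              # adjoint solve
    g = p + alpha * u                            # reduced gradient
    u_new = np.clip(u - tau * g, u_min, u_max)   # projected gradient step
    if np.linalg.norm(u_new - u) < 1e-8:         # fixed-point residual ≈ ||F(u)||
        break
    u = u_new
```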
Advantages:

- the optimization variable lives only in $U$;
- off-the-shelf iterative solvers can be reused for the state and adjoint equations;
- the memory footprint is one control vector.

Disadvantages:

- the state and adjoint equations must be solved to sufficient accuracy at each step;
- second-order information (Hessian-vector products with $f''(u)$) requires an additional linearized state solve and a second-order adjoint solve per direction;
- first-order updates may need many outer iterations, and each one pays for full state and adjoint solves.
All-at-once approach.
The triple $(\bar y, \bar u, \bar p)$ is the unknown.
One applies Newton's method directly to the full KKT system.
The Newton step $(\delta y, \delta u, \delta p)$ solves the block system
$$\begin{pmatrix}
\mathcal{L}_{yy} & \mathcal{L}_{yu} & e_y^* \\[4pt]
\mathcal{L}_{uy} & \mathcal{L}_{uu} & e_u^* \\[4pt]
e_y & e_u & 0
\end{pmatrix}
\begin{pmatrix}
\delta y \\[4pt] \delta u \\[4pt] \delta p
\end{pmatrix}
= -
\begin{pmatrix}
\mathcal{L}_y \\[4pt] \mathcal{L}_u \\[4pt] e
\end{pmatrix},$$
where $\mathcal{L}_{yy}$, $\mathcal{L}_{yu}$, $\mathcal{L}_{uu}$ are second partial derivatives of the Lagrangian, all blocks are evaluated at the current iterate, and the normal-cone inclusion for the control constraint is incorporated into the second row by an active-set method or a semismooth Newton regularization.
For the linear-quadratic case, the block system is the saddle-point system seen in Lecture 7, which is linear and can be solved directly.
For nonlinear problems, Newton linearization is needed and the block structure is the same, but with operators that depend on the current iterate.
Advantages:

- quadratic convergence near the solution (Newton's method);
- one global solve per Newton iteration rather than many inner iterations;
- second-order information is built in.

Disadvantages:

- the system lives in $Y \times U \times Z$, which is much larger than $U$ alone;
- saddle-point preconditioning is needed for efficiency.
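For the unconstrained linear-quadratic 1D model, the all-at-once approach amounts to a single symmetric saddle-point solve. A minimal sketch with lumped mass matrices (all data illustrative; the multiplier block here is a discrete dual vector, so the continuous adjoint is recovered separately in the check at the end):

```python
import numpy as np

# All-at-once KKT solve for:  min ½|y - y_d|² + ½α|u|²  s.t.  K y = u.
n, alpha = 99, 1e-2
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)

M = h * np.eye(n)            # lumped mass matrix ≈ L² inner product
Zn, Id = np.zeros((n, n)), np.eye(n)

# Symmetric saddle-point system in the unknowns (y, u, multiplier):
#   [ M      0      Kᵀ ] [y]   [ M y_d ]
#   [ 0      αM    -Id ] [u] = [   0   ]
#   [ K     -Id      0 ] [m]   [   0   ]
A = np.block([[M,  Zn,         K.T],
              [Zn, alpha * M, -Id],
              [K,  -Id,        Zn]])
b = np.concatenate([M @ y_d, np.zeros(n), np.zeros(n)])
y, u, m = np.split(np.linalg.solve(A, b), 3)

# Sanity check against the reduced formulation: α u + p_adj = 0 with K p_adj = y - y_d.
p_adj = np.linalg.solve(K, y - y_d)
print(np.max(np.abs(alpha * u + p_adj)))   # should be ≈ 0
```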
A Worked Example: Semilinear Elliptic Control

We verify that the abstract framework applies to a nonlinear state equation.

Model problem.
Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz domain and $d: \mathbb{R} \to \mathbb{R}$ a smooth function with $d(0) = 0$ (the semilinearity).
Consider
$$\min_{u \in U_{\mathrm{ad}}} J(y, u) := \frac{1}{2}\|y - y_d\|_{L^2(\Omega)}^2 + \frac{\alpha}{2}\|u\|_{L^2(\Omega)}^2$$
subject to
$$\begin{cases}
-\Delta y + d(y) = u & \text{in } \Omega, \\[4pt]
y = 0 & \text{on } \partial\Omega.
\end{cases}$$

Spaces.
$$Y = H_0^1(\Omega), \quad U = L^2(\Omega), \quad Z' = H^{-1}(\Omega), \quad Z = H_0^1(\Omega).$$

Constraint mapping.
$$e: H_0^1(\Omega) \times L^2(\Omega) \to H^{-1}(\Omega),$$
defined by
$$\langle e(y, u), v \rangle_{H^{-1}, H_0^1} := \int_\Omega \nabla y \cdot \nabla v \, dx + \int_\Omega d(y) v \, dx - \int_\Omega u v \, dx \qquad \forall v \in H_0^1(\Omega).$$

Partial derivatives of $e$.
The linear operator $e_y(y, u): H_0^1(\Omega) \to H^{-1}(\Omega)$ is
$$\langle e_y(y, u) w, v \rangle_{H^{-1}, H_0^1} = \int_\Omega \nabla w \cdot \nabla v \, dx + \int_\Omega d'(y)\, w\, v \, dx \qquad \forall w, v \in H_0^1(\Omega).$$
Under the assumption $d'(y) \ge -\sigma_0$ for some $\sigma_0 < \lambda_1(\Omega)$ (the first Dirichlet eigenvalue of $-\Delta$), coercivity of the bilinear form is preserved, so $e_y(y, u)$ is an isomorphism (Assumption 3 is satisfied).
The linear operator $e_u(y, u): L^2(\Omega) \to H^{-1}(\Omega)$ is
$$\langle e_u(y, u) h, v \rangle_{H^{-1}, H_0^1} = -\int_\Omega h\, v \, dx = -(h, v)_{L^2(\Omega)} \qquad \forall h \in L^2(\Omega), \; v \in H_0^1(\Omega).$$
Thus $e_u(y, u) h = -h$ (the embedding of $L^2$ into $H^{-1}$ via the $L^2$ inner product).

Partial derivatives of $J$.
$$J_y(y, u) = y - y_d \in L^2(\Omega) \subset H^{-1}(\Omega) = Y', \qquad J_u(y, u) = \alpha u \in L^2(\Omega) = U'.$$

Adjoint operators.
$e_y(y, u)^*: H_0^1(\Omega) \to H^{-1}(\Omega)$ (identifying $Z \cong H_0^1$ via Riesz):
$$\langle e_y(y, u)^* p, v \rangle_{H^{-1}, H_0^1} = \int_\Omega \nabla v \cdot \nabla p \, dx + \int_\Omega d'(y)\, v\, p \, dx \qquad \forall p, v \in H_0^1(\Omega).$$
(The symmetry of the bilinear form and of multiplication by $d'(y)$ mean $e_y^* = e_y$ in this case; this would differ for non-symmetric operators such as transport terms.)
$e_u(y, u)^*: H_0^1(\Omega) \to L^2(\Omega) = U'$:
$$\langle e_u(y, u)^* p, h \rangle_{U', U} = \langle p, e_u(y, u) h \rangle_{Z, Z'} = -(p, h)_{L^2(\Omega)} \qquad \forall h \in L^2(\Omega),$$
so $e_u(y, u)^* p = -p \in L^2(\Omega)$ (the restriction of the $H_0^1$ function $p$ to $L^2$).
Adjoint equation.
Find $\bar p \in H_0^1(\Omega)$ such that
$$e_y(\bar y, \bar u)^* \bar p = J_y(\bar y, \bar u) = \bar y - y_d \quad \text{in } H^{-1}(\Omega),$$
i.e., in weak form:
$$\int_\Omega \nabla v \cdot \nabla \bar p \, dx + \int_\Omega d'(\bar y)\, v\, \bar p \, dx = \int_\Omega (\bar y - y_d) v \, dx \qquad \forall v \in H_0^1(\Omega).$$
The corresponding strong form is
$$\begin{cases}
-\Delta \bar p + d'(\bar y)\, \bar p = \bar y - y_d & \text{in } \Omega, \\[4pt]
\bar p = 0 & \text{on } \partial\Omega.
\end{cases}$$
The adjoint equation is always linear in the adjoint state $\bar p$, even when the state equation is nonlinear in $y$.
This is because the adjoint equation involves only the linearization $e_y$, not the full nonlinear mapping $e$.

Gradient of the reduced functional.
$$\nabla f(\bar u) = J_u(\bar y, \bar u) - e_u(\bar y, \bar u)^* \bar p = \alpha \bar u - (-\bar p) = \alpha \bar u + \bar p \in L^2(\Omega).$$

Full optimality system for box constraints.
$$\begin{cases}
-\Delta \bar y + d(\bar y) = \bar u & \text{in } \Omega, \\[4pt]
-\Delta \bar p + d'(\bar y)\, \bar p = \bar y - y_d & \text{in } \Omega, \\[4pt]
\bar u = \Pi_{[u_{\min},\, u_{\max}]}\!\left(-\tfrac{1}{\alpha} \bar p\right) & \text{a.e. in } \Omega, \\[4pt]
\bar y = \bar p = 0 & \text{on } \partial\Omega.
\end{cases}$$
The structure is the same as in the linear-quadratic case of Lectures 4 and 5.
The nonlinearity only affects the state equation (via $d(\bar y)$) and the adjoint equation (via $d'(\bar y)\bar p$).
The projection formula for the control is unchanged.
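A minimal 1D sketch of one gradient evaluation for this semilinear problem, with the illustrative choices $d(y) = y^3$ (so $d'(y) = 3y^2$), a finite-difference Laplacian, and a plain Newton iteration for the state solve; none of these choices come from the lecture:

```python
import numpy as np

n, alpha = 99, 1e-2
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
y_d = np.sin(np.pi * x)

d  = lambda y: y**3          # illustrative semilinearity, d(0) = 0, d' >= 0
dp = lambda y: 3.0 * y**2

def solve_state(u, tol=1e-10):
    """Newton's method for the semilinear state equation -y'' + d(y) = u."""
    y = np.zeros_like(u)
    for _ in range(50):
        res = K @ y + d(y) - u
        if np.linalg.norm(res) < tol:
            break
        jac = K + np.diag(dp(y))          # e_y(y, u): linearized state operator
        y -= np.linalg.solve(jac, res)
    return y

def gradient(u):
    y = solve_state(u)                                  # nonlinear state solve
    p = np.linalg.solve(K + np.diag(dp(y)), y - y_d)    # linear adjoint solve
    return alpha * u + p                                # ∇f(u) = αu + p

# Example: one projected-gradient step with illustrative bounds.
u = np.zeros(n)
u_new = np.clip(u - 1.0 * gradient(u), -5.0, 5.0)
```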
Existence for the General Problem

For completeness, we state when a minimizer exists in the abstract setting.

Theorem.
Suppose:

1. $U_{\mathrm{ad}}$ is nonempty, bounded, closed, and convex in the reflexive Banach space $U$;
2. the reduced functional $f: U_{\mathrm{ad}} \to \mathbb{R}$ is weakly lower semicontinuous;
3. $f$ is bounded below on $U_{\mathrm{ad}}$.

Then there exists at least one global minimizer $\bar u \in U_{\mathrm{ad}}$.

Remarks.

- If $U_{\mathrm{ad}}$ is not bounded, coercivity of $f$ (bounded level sets) replaces the boundedness assumption in Condition 1.
- Condition 2 holds if $f$ is convex and continuous (e.g., in the linear-quadratic case), or more generally if $J$ is convex and the state map $S$ is weakly continuous.
- Strict convexity of $f$ implies uniqueness of the minimizer.
Second-Order Conditions

First-order conditions are necessary but not sufficient for a strict local minimizer.

Lagrangian Hessian.
At a KKT point $(\bar y, \bar u, \bar p)$, the second-order derivative of $\mathcal{L}$ with respect to $(y, u)$ in the direction $(v, h) \in Y \times U$ is
$$\mathcal{L}''(\bar y, \bar u, \bar p)[(v, h), (v, h)]
= \langle J_{yy}(\bar y, \bar u) v, v \rangle_{Y', Y}
+ 2\langle J_{yu}(\bar y, \bar u) h, v \rangle_{Y', Y}
+ \langle J_{uu}(\bar y, \bar u) h, h \rangle_{U', U}
- \langle e_{yy}(\bar y, \bar u)[v, v],\, \bar p \rangle_{Z', Z}.$$
The term involving $e_{yy}$ vanishes for linear state equations, so for the Poisson model the Hessian of the Lagrangian coincides with the Hessian of $J$.

Critical cone.
$$C(\bar u) := \bigl\{h \in U :\; h \text{ satisfies the first-order admissibility conditions at } \bar u,\; f'(\bar u) h = 0 \bigr\}.$$

Second-order necessary condition.
If $\bar u$ is a local minimizer, then for every $h \in C(\bar u)$ and $v = S'(\bar u) h$,
$$\mathcal{L}''(\bar y, \bar u, \bar p)[(v, h), (v, h)] \ge 0.$$

Second-order sufficient condition.
If $(\bar y, \bar u, \bar p)$ is a KKT point and
$$\mathcal{L}''(\bar y, \bar u, \bar p)[(v, h), (v, h)] > 0 \qquad \forall (v, h) \in Y \times C(\bar u),\; (v, h) \ne (0, 0),$$
then $\bar u$ is a strict local minimizer.
For the linear-quadratic case with $\alpha > 0$ this condition holds globally:
$$\mathcal{L}''[(v,h),(v,h)] = \|v\|_{L^2}^2 + \alpha\|h\|_{L^2}^2 > 0,$$
confirming that the linear-quadratic problem has a unique global minimizer.
Summary

The abstract framework identifies three function spaces and two adjoint operators that are the backbone of every PDE-constrained optimality system.

- State: $\bar y \in Y$ satisfies $e(\bar y, \bar u) = 0$ in $Z'$.
- Adjoint: $\bar p \in Z$ satisfies $e_y(\bar y, \bar u)^* \bar p = J_y(\bar y, \bar u)$ in $Y'$.
- Control: $\bar u \in U_{\mathrm{ad}}$ satisfies $0 \in J_u(\bar y, \bar u) - e_u(\bar y, \bar u)^* \bar p + N_{U_{\mathrm{ad}}}(\bar u)$ in $U'$.

The reduced gradient is
$$\nabla f(\bar u) = \mathcal{R}_U^{-1}\bigl(J_u(\bar y, \bar u) - e_u(\bar y, \bar u)^* \bar p\bigr) \in U.$$
For the linear-quadratic elliptic model of Lectures 4 and 5, all operators are linear, the Lagrangian Hessian is positive definite, and the system reduces exactly to the adjoint-based formulation derived there.
For the semilinear model of this lecture, the structure is identical, but the state and adjoint equations carry additional nonlinear terms.
References

- F. Tröltzsch, Optimal Control of Partial Differential Equations, AMS, 2010. Chapters 4 and 5 (semilinear elliptic control, second-order conditions).
- A. Manzoni, A. Quarteroni, S. Salsa, Optimal Control of Partial Differential Equations, Springer, 2021. Chapters 3 and 4.
- J. C. De los Reyes, Numerical PDE-Constrained Optimization, Springer, 2015. Chapters 2 and 3.
- M. Hinze, R. Pinnau, M. Ulbrich, S. Ulbrich, Optimization with PDE Constraints, Springer, 2009. Chapter 1 (abstract framework).