Min-Max Theorem

In linear algebra and functional analysis, the min-max theorem, or variational theorem, or Courant–Fischer–Weyl min-max principle, is a result that gives a variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces. It can be viewed as the starting point of many results of similar nature.

This article first discusses the finite dimensional case and its applications before considering compact operators on infinite dimensional Hilbert spaces. We will see that for compact operators, the proof of the main theorem uses essentially the same idea as in the finite dimensional argument.

The min-max theorem can be extended to self-adjoint operators that are bounded below.

Contents

  • 1 Matrices
    • 1.1 Min-max Theorem
    • 1.2 Proof
  • 2 Applications
    • 2.1 Min-max principle for singular values
    • 2.2 Cauchy interlacing theorem
  • 3 Compact operators
  • 4 See also

Matrices

Let A be an n × n Hermitian matrix. As with many other variational results on eigenvalues, one considers the Rayleigh–Ritz quotient R_A : C^n \ {0} → R defined by

R_A(x) = \frac{(Ax, x)}{(x,x)}

where (·, ·) denotes the Euclidean inner product on C^n. Clearly, the Rayleigh quotient of an eigenvector is its associated eigenvalue. Equivalently, the Rayleigh–Ritz quotient can be replaced by

f(x) = (Ax, x), \; \|x\| = 1.

For Hermitian matrices, the range of the continuous function R_A(x), or f(x), is a compact subset [a, b] of the real line. The maximum b and the minimum a are the largest and smallest eigenvalues of A, respectively. The min-max theorem is a refinement of this fact.
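
Before stating the theorem, here is a minimal numerical sketch of this fact, assuming NumPy; the random Hermitian matrix A and the helper rayleigh() below are illustrative choices, not part of the article. It checks that every Rayleigh quotient lies in [λ_n, λ_1] and that each eigenvector attains its eigenvalue.

import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                      # a random Hermitian matrix (illustrative)

def rayleigh(A, x):
    """Rayleigh-Ritz quotient R_A(x) = (Ax, x) / (x, x)."""
    return np.real(np.vdot(x, A @ x) / np.vdot(x, x))

evals, evecs = np.linalg.eigh(A)              # eigenvalues in increasing order
lam = evals[::-1]                             # relabel as lambda_1 >= ... >= lambda_n
vecs = evecs[:, ::-1]                         # eigenvectors ordered to match lam

# Every Rayleigh quotient lies in [lambda_n, lambda_1] ...
for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert lam[-1] - 1e-10 <= rayleigh(A, x) <= lam[0] + 1e-10

# ... and the eigenvector u_k attains lambda_k exactly.
for k in range(n):
    assert np.isclose(rayleigh(A, vecs[:, k]), lam[k])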

Min-max Theorem

Let A be an n × n Hermitian matrix with eigenvalues λ_1 ≥ ... ≥ λ_k ≥ ... ≥ λ_n. Then

  \lambda_k = \max \{ \min \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \}

and

  \lambda_k = \min \{ \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=n-k+1 \}

in particular,

  \lambda_n \leq R_A(x) \leq \lambda_1 \quad\forall x \in \mathbb{C}^n \setminus \{0\}

and these bounds are attained when x is an eigenvector of the corresponding eigenvalue.

Proof

Since the matrix A is Hermitian it is diagonalizable, and we can choose an orthonormal basis of eigenvectors {u_1, ..., u_n}, that is, u_i is an eigenvector for the eigenvalue λ_i, with (u_i, u_i) = 1 and (u_i, u_j) = 0 for all i ≠ j.

If U is a subspace of dimension k then its intersection with the subspace

  \text{span}\{ u_k, \ldots, u_n \}

is nonzero (since dim U + dim span{u_k, ..., u_n} = k + (n − k + 1) = n + 1 > n), and hence there exists a nonzero vector v in this intersection that we can write as

  v = \sum_{i=k}^n \alpha_i u_i

and whose Rayleigh quotient is

  R_A(v) = \frac{\sum_{i=k}^n \lambda_i |\alpha_i|^2}{\sum_{i=k}^n |\alpha_i|^2} \leq \lambda_k

(the inequality holds because λ_i ≤ λ_k for every i ≥ k), and hence

  \min \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \leq \lambda_k

And we can conclude that

  \max \{ \min \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=k \} \leq \lambda_k

And since this bound is attained for

  U = \text{span}\{u_1,\ldots,u_k\}

(the minimum of R_A over this U equals λ_k, attained at x = u_k), we can conclude the equality.

In the case where U is a subspace of dimension n − k + 1, we proceed in a similar fashion: consider the k-dimensional subspace

  \text{span}\{ u_1, \ldots, u_k \}

Its intersection with the subspace U is nonzero (since the dimensions sum to k + (n − k + 1) = n + 1 > n), and hence there exists a nonzero vector v in this intersection that we can write as

  v = \sum_{i=1}^k \alpha_i u_i

and whose Rayleigh quotient is

  R_A(v) = \frac{\sum_{i=1}^k \lambda_i |\alpha_i|^2}{\sum_{i=1}^k |\alpha_i|^2} \geq \lambda_k

(the inequality holds because λ_i ≥ λ_k for every i ≤ k), and hence

  \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \geq \lambda_k

And we can conclude that

  \min \{ \max \{ R_A(x) \mid x \in U \text{ and } x \neq 0 \} \mid \dim(U)=n-k+1 \} \geq \lambda_k

And since this bound is attained for

  U = \text{span}\{u_k,\ldots,u_n\}

(the maximum of R_A over this U equals λ_k, attained at x = u_k), we can conclude the equality.
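
The two formulas can also be checked numerically. For a k-dimensional subspace U with an orthonormal basis given by the columns of a matrix Q, the minimum of R_A over U is the smallest eigenvalue of the compression Q*AQ. The sketch below, assuming NumPy (the random matrix and the helper min_rayleigh() are illustrative), verifies that this minimum never exceeds λ_k and that U = span{u_1, ..., u_k} attains it, which is the first equality of the theorem.

import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                      # a random Hermitian matrix (illustrative)
lam = np.linalg.eigvalsh(A)[::-1]             # lambda_1 >= ... >= lambda_n
vecs = np.linalg.eigh(A)[1][:, ::-1]          # eigenvectors ordered to match lam

def min_rayleigh(A, Q):
    """Minimum of R_A over the subspace spanned by the orthonormal columns of Q."""
    return np.linalg.eigvalsh(Q.conj().T @ A @ Q)[0]

# For any k-dimensional subspace U the inner minimum never exceeds lambda_k ...
for _ in range(500):
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k)))
    assert min_rayleigh(A, Q) <= lam[k - 1] + 1e-10

# ... and U = span{u_1, ..., u_k} attains it, so the max-min equals lambda_k.
assert np.isclose(min_rayleigh(A, vecs[:, :k]), lam[k - 1])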

Applications

Min-max principle for singular values

The singular values {σ_k} of a square matrix M are the square roots of the eigenvalues of M*M (equivalently MM*). An immediate consequence of the min-max theorem is

\sigma_k ^{\uparrow} = \min_{S_k} \max_{x \in S_k, \|x\| = 1} (M^* Mx, x)^{\frac{1}{2}}=  \min_{S_k} \max_{x \in S_k, \|x\| = 1} \| Mx \|.

where S_k denotes a subspace of dimension k, and σ_k^↑ and σ_k^↓ denote the k-th singular value in increasing and decreasing order, respectively. Similarly,

\sigma_k ^{\downarrow} = \max_{S_k} \min_{x \in S_k, \|x\| = 1} \| Mx \|.
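
A minimal numerical sketch of these identities, assuming NumPy (the random matrix M and the choice of k are illustrative): it checks that the singular values of M are the square roots of the eigenvalues of M*M, and that the span of the top k right-singular vectors attains the max-min value σ_k^↓.

import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # illustrative matrix

sigma = np.linalg.svd(M, compute_uv=False)            # sigma_1 >= ... >= sigma_n
mu = np.linalg.eigvalsh(M.conj().T @ M)[::-1]         # eigenvalues of M*M, decreasing
assert np.allclose(sigma, np.sqrt(np.maximum(mu, 0.0)))

# Max-min: over S_k = span of the top k right-singular vectors, min ||Mx|| = sigma_k.
V = np.linalg.svd(M)[2].conj().T                      # right-singular vectors as columns
Q = V[:, :k]                                          # orthonormal basis of S_k
min_norm = np.sqrt(np.linalg.eigvalsh(Q.conj().T @ M.conj().T @ M @ Q)[0])
assert np.isclose(min_norm, sigma[k - 1])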

Cauchy interlacing theorem

Let A be an n × n Hermitian matrix. The m × m matrix B, where m ≤ n, is called a compression of A if there exists an orthogonal projection P onto a subspace of dimension m such that PAP = B. The Cauchy interlacing theorem states:

Theorem. If the eigenvalues of A are α_1 ≤ ... ≤ α_n, and those of B are β_1 ≤ ... ≤ β_j ≤ ... ≤ β_m, then for all j ≤ m,

\alpha_j \leq \beta_j \leq \alpha_{n-m+j}.

This can be proven using the min-max principle. Let b_i be an eigenvector corresponding to β_i, and let S_j be the j-dimensional subspace S_j = span{b_1, ..., b_j}; then

\beta_j = \max_{x \in S_j, \|x\| = 1} (Bx, x) = \max_{x \in S_j, \|x\| = 1} (PAPx, x)= \max_{x \in S_j, \|x\| = 1} (Ax, x).

By the min-max principle,

\alpha_j \leq \beta_j.

On the other hand, if we define S_{m−j+1} = span{b_j, ..., b_m}, then

\beta_j = \min_{x \in S_{m-j+1}, \|x\| = 1} (Bx, x) = \min_{x \in S_{m-j+1}, \|x\| = 1} (PAPx, x)= \min_{x \in S_{m-j+1}, \|x\| = 1} (Ax, x) \leq \alpha_{n-m+j},

where the last inequality again follows from the min-max principle.

Notice that, when n − m = 1, we have

\alpha_j \leq \beta_j \leq \alpha_{j+1}.

Hence the name interlacing theorem.
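
A minimal numerical sketch of the interlacing inequalities, assuming NumPy: compressing a random Hermitian matrix A onto the span of the first m coordinate vectors amounts to taking its leading m × m principal submatrix, and the resulting eigenvalues interlace as in the theorem. The random matrix is an illustrative choice.

import numpy as np

rng = np.random.default_rng(3)
n, m = 7, 4
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (C + C.conj().T) / 2                       # a random Hermitian matrix (illustrative)
B = A[:m, :m]                                  # a compression of A

alpha = np.linalg.eigvalsh(A)                  # alpha_1 <= ... <= alpha_n
beta = np.linalg.eigvalsh(B)                   # beta_1  <= ... <= beta_m

for j in range(1, m + 1):                      # 1-based indices, as in the theorem
    assert alpha[j - 1] <= beta[j - 1] + 1e-10
    assert beta[j - 1] <= alpha[n - m + j - 1] + 1e-10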

Compact operators

Let A be a compact, Hermitian operator on a Hilbert space H. Recall that the spectrum of such an operator forms a sequence of real numbers whose only possible cluster point is zero. Every nonzero number in the spectrum is an eigenvalue. It no longer makes sense here to list the positive eigenvalues in increasing order, since they can accumulate only at zero. Let the positive eigenvalues of A be

\cdots \le \lambda_k \le \cdots \le \lambda_1,

where multiplicity is taken into account as in the matrix case, and let u_1, u_2, ... be corresponding orthonormal eigenvectors. When H is infinite dimensional, this sequence of eigenvalues may be infinite; we assume that A has at least k positive eigenvalues, so that λ_k is defined. We now apply the same reasoning as in the matrix case. Let S_k ⊂ H be a k-dimensional subspace, and let S' be the closure of the linear span S' = span{u_k, u_{k+1}, ...}. The subspace S' has codimension k − 1. By the same dimension count argument as in the matrix case, S' ∩ S_k contains a nonzero vector. So there exists x ∈ S' ∩ S_k with ||x|| = 1. Since it is an element of S', such an x necessarily satisfies

(Ax, x) \le \lambda_k.

Therefore, for all S_k,

\inf_{x \in S_k, \|x\| = 1}(Ax,x) \le \lambda_k

But A is compact, and therefore the function f(x) = (Ax, x) is weakly continuous. Furthermore, the closed unit ball of H is weakly compact. This lets us replace the infimum by a minimum:

\min_{x \in S_k, \|x\| = 1}(Ax,x) \le \lambda_k.

So

\sup_{S_k} \min_{x \in S_k, \|x\| = 1}(Ax,x) \le \lambda_k.

Because equality is achieved when S_k = span{u_1, ..., u_k},

\max_{S_k} \min_{x \in S_k, \|x\| = 1}(Ax,x) = \lambda_k.

This is the first part of the min-max theorem for compact self-adjoint operators.

Analogously, consider now a (k − 1)-dimensional subspace S_{k−1}, whose orthogonal complement is denoted by S_{k−1}^⊥. If S' = span{u_1, ..., u_k},

  S' \cap S_{k-1}^{\perp} \ne \{0\}.

So

  \exists x \in S_{k-1}^{\perp}, \; \|x\| = 1, \; (Ax, x) \ge \lambda_k.

This implies

\max_{x \in S_{k-1}^{\perp}, \|x\| = 1} (Ax, x) \ge \lambda_k

where the compactness of A ensures that the supremum is attained. Taking the infimum over the collection of (k − 1)-dimensional subspaces gives

\inf_{S_{k-1}} \max_{x \in S_{k-1}^{\perp}, \|x\|=1} (Ax, x) \ge \lambda_k.

Picking S_{k−1} = span{u_1, ..., u_{k−1}}, we deduce

\min_{S_{k-1}} \max_{x \in S_{k-1}^{\perp}, \|x\|=1} (Ax, x) = \lambda_k.

In summary,

Theorem (Min-Max) Let A be a compact, self-adjoint operator on a Hilbert space H, whose positive eigenvalues are listed in decreasing order:

\cdots \le \lambda_k \le \cdots \le \lambda_1.

Then

\max_{S_k} \min_{x \in S_k, \|x\| = 1}(Ax,x) = \lambda_k ^{\downarrow},
\min_{S_{k-1}} \max_{x \in S_{k-1}^{\perp}, \|x\|=1} (Ax, x) = \lambda_k^{\downarrow}.

A similar pair of equalities holds for the negative eigenvalues.
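
As an illustration of the compact case (a sketch only, assuming NumPy), one can model a compact self-adjoint operator on ℓ^2 by a large diagonal truncation with eigenvalues 1, 1/2, 1/3, ...; the truncation is merely a finite-dimensional stand-in for the operator, but it shows the max-min identity for λ_k^↓ at work.

import numpy as np

rng = np.random.default_rng(4)
N, k = 200, 5
eigenvalues = 1.0 / np.arange(1, N + 1)        # lambda_1 > lambda_2 > ... -> 0
A = np.diag(eigenvalues)                       # truncated stand-in for the compact operator

def min_quadratic_form(A, Q):
    """Minimum of (Ax, x) over unit vectors x in the span of the orthonormal columns of Q."""
    return np.linalg.eigvalsh(Q.T @ A @ Q)[0]

# No k-dimensional subspace does better than lambda_k ...
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((N, k)))
    assert min_quadratic_form(A, Q) <= eigenvalues[k - 1] + 1e-12

# ... and span{u_1, ..., u_k} (here, the first k coordinate vectors) attains it,
# in line with max over S_k of min over unit x in S_k of (Ax, x) = lambda_k.
E = np.eye(N)[:, :k]
assert np.isclose(min_quadratic_form(A, E), eigenvalues[k - 1])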

See also

  • Courant minimax principle

