\( \require{amstext} \require{amsmath} \require{amssymb} \require{amsfonts} \newcommand{\norm}[1]{\lVert #1 \rVert} \)
Linear Algebra
Hermitian
- Conjugate Transpose: \( M^H = \overline{M^T} = \overline{M}^T \)
- Hermitian: if $M^H = M$
- $(AB)^H = B^H A^H$
- $A^H A$ is always Hermitian (and positive semi-definite)
- Unitary matrix: if $M^H = M^{-1}$
- Hermitian positive definite matrix: \( x^H M x > 0 \;\; \forall x \in \mathbb{C}^n \backslash \{0\} \)
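The identities above can be checked numerically. A minimal sketch with NumPy, using a small hypothetical Hermitian matrix (not from the notes):

```python
import numpy as np

# Hypothetical 2x2 complex Hermitian matrix for illustration.
M = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# Conjugate transpose: M^H = conj(M)^T = (conj(M))^T
MH = M.conj().T

# Hermitian check: M^H == M
assert np.allclose(MH, M)

# (AB)^H = B^H A^H
A = np.array([[1.0, 2j], [0.0, 1.0]])
B = np.array([[1j, 0.0], [2.0, 1.0]])
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)

# A^H A is always Hermitian
G = A.conj().T @ A
assert np.allclose(G.conj().T, G)

# Hermitian positive definite <=> all eigenvalues are real and > 0
eigvals = np.linalg.eigvalsh(M)
assert np.all(eigvals > 0)
```

Checking the eigenvalues of a Hermitian matrix (`eigvalsh` returns them as real numbers) is equivalent to checking $x^H M x > 0$ for all nonzero $x$.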
Vector/Matrix Norms
Vector Norm
- Vector $l_p$ norms: \( \norm{x}_p = (\sum_{i} |x_i|^p )^{1/p} \) for $p \ge 1$
- Infinity norm: take the maximum-magnitude entry of the vector, \( \norm{x}_{\infty} = \max_i |x_i| \), also known as the maximum norm.
- Norm induced by a Hermitian positive definite matrix $A$ (the energy norm): \( \norm{x}^2_{A} = x^H A x \)
- Properties:
- Linearity: \( \norm{kx} = |k| \norm{x} \)
- Triangle inequality: \( \norm{x+y} \le \norm{x} + \norm{y} \)
- Cauchy–Schwarz inequality (for the $l_2$ norm): \( |x^H y| \le \norm{x}\norm{y} \)
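A quick NumPy sketch of the vector norms and inequalities above, using hypothetical example vectors:

```python
import numpy as np

# Hypothetical vector to illustrate the l_p norms.
x = np.array([3.0, -4.0, 0.0])

l1 = np.sum(np.abs(x))               # l_1 norm: 3 + 4 + 0 = 7
l2 = np.sqrt(np.sum(np.abs(x)**2))   # l_2 norm: sqrt(9 + 16) = 5
linf = np.max(np.abs(x))             # l_inf (maximum) norm: 4

# Triangle inequality and Cauchy-Schwarz with a second vector
y = np.array([1.0, 2.0, 2.0])
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)
assert abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y)
```

`np.linalg.norm(x, p)` computes the same $l_p$ norms directly (default `p=2`).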
Matrix Norm
- Operator norm: \( \norm{A} = \max_{x \in \mathbb{C}^n \backslash \{0\}} \frac{\norm{Ax}}{\norm{x}} \)
- This norm measures the maximum factor by which the matrix $A$ can re-scale a vector $x$
- 1-norm: \( \norm{A}_1 = \max_j \sum_{i=1}^n |a_{ij}| \), the column of $A$ with maximum $l_1$ norm
- $\infty$-norm: \( \norm{A}_{\infty} = \max_i \sum_{j=1}^n |a_{ij}| \), the row of $A$ with maximum $l_1$ norm
- $l_2$ norm: \( \norm{A}_2 = \sqrt{\lambda_{\max}(A^H A)} \)
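The three induced-norm formulas can be verified against NumPy's built-in `np.linalg.norm`; a sketch with a hypothetical matrix:

```python
import numpy as np

# Hypothetical 2x2 matrix to check the induced-norm formulas.
A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# 1-norm: maximum column sum of |a_ij| -> max(1+3, 2+4) = 6
norm1 = np.abs(A).sum(axis=0).max()
# inf-norm: maximum row sum of |a_ij| -> max(1+2, 3+4) = 7
norm_inf = np.abs(A).sum(axis=1).max()
# 2-norm: sqrt of the largest eigenvalue of A^H A
norm2 = np.sqrt(np.linalg.eigvalsh(A.conj().T @ A).max())

assert np.isclose(norm1, np.linalg.norm(A, 1))
assert np.isclose(norm_inf, np.linalg.norm(A, np.inf))
assert np.isclose(norm2, np.linalg.norm(A, 2))
```

The 2-norm equals the largest singular value, so `np.linalg.svd(A, compute_uv=False)[0]` gives the same value.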
Condition number
- $\kappa(A) = \norm{A}\norm{A^{-1}}$
- For the 2-norm:
\(
\kappa_2(A) = \frac{\sqrt{\lambda_{\max}(A^H A)}}{\sqrt{\lambda_{\min}(A^H A)}}
\)
- which is the maximum singular value over the minimum singular value
- Since the eigenvalues of $A^{-1}$ are the reciprocals of those of $A$, if $A$ is Hermitian then \( \kappa_2(A) = \frac{\max_i |\lambda_i(A)|}{\min_i |\lambda_i(A)|} \)
- A matrix with a large condition number is ill-conditioned, which leads to unstable computation $\rightarrow$ small input errors lead to large errors in the result
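A sketch of the 2-norm condition number computed from singular values, using a hypothetical nearly-singular matrix to show ill-conditioning:

```python
import numpy as np

# Hypothetical nearly-singular matrix (rows are almost parallel).
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
kappa2 = s[0] / s[-1]                   # kappa_2 = sigma_max / sigma_min

# Matches NumPy's built-in condition number
assert np.isclose(kappa2, np.linalg.cond(A, 2))
# kappa2 is on the order of 1e4 here: tiny perturbations of b can
# shift the solution of Ax = b by a factor of ~kappa2.
```

For comparison, a well-conditioned matrix such as the identity has $\kappa_2 = 1$, the smallest possible value.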
Iterative Methods for linear systems
Optimisation
- Linear Programming can be solved using the Simplex Algorithm
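A minimal sketch of solving a linear program with SciPy, assuming a hypothetical LP (not from the notes); SciPy's `linprog` uses the HiGHS solvers, which include a simplex-type method:

```python
from scipy.optimize import linprog

# Hypothetical LP: maximise 3x + 2y
# subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimises, so negate the objective coefficients.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 0.0]]
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
# Optimum at the vertex (x, y) = (2, 2) with objective value 10.
```

As the simplex method predicts, the optimum lies at a vertex of the feasible polytope.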