Consider the system $A\mathbf{x}=\mathbf{b}$, where $A$ is an $n\times n$ matrix. The Gauss-Seidel method is based on the decomposition of $A$ as
$$ A=L+D+U $$where
$$ A= \left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array}\right) $$and then
$$ L=\left(\begin{array}{cccc} 0 & 0 & \cdots & 0\\ a_{21} & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & 0 \end{array}\right)\;D=\left(\begin{array}{cccc} a_{11} & 0 & \cdots & 0\\ 0 & a_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & a_{nn} \end{array}\right)\;U=\left(\begin{array}{cccc} 0 & a_{12} & \cdots & a_{1n}\\ 0 & 0 & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0 \end{array}\right) $$We have
$$ \begin{array}{l} A\mathbf{x}=\mathbf{b} \quad \Rightarrow \quad \left(L+D+U\right)\mathbf{x}=\mathbf{b} \quad \Rightarrow \quad \\[0.5cm] (L+D)\,\mathbf{x}=-U\mathbf{x}+\mathbf{b} \quad \Rightarrow \quad \mathbf{x}=-\left(L+D\right)^{-1}U\mathbf{x}+\left(L+D\right)^{-1}\mathbf{b} \end{array} $$And if we call
$$ B_{G-S}=-\left(L+D\right)^{-1}U \quad \mathbf{c}_{G-S}=\left(L+D\right)^{-1}\mathbf{b} $$we can write
$$ \mathbf{x}=B_{G-S}\mathbf{x}+\mathbf{c}_{G-S} $$And we can perform iterations using
$$ \mathbf{x}^{(k+1)}=B_{G-S}\mathbf{x}^{(k)}+\mathbf{c}_{G-S} $$starting with the initial guess $\mathbf{x}^{(0)}$
$$ \mathbf{x}^{(1)}=B_{G-S}\mathbf{x}^{(0)}+\mathbf{c}_{G-S}, \quad \mathbf{x}^{(2)}=B_{G-S}\mathbf{x}^{(1)}+\mathbf{c}_{G-S}, \quad \mathbf{x}^{(3)}=B_{G-S}\mathbf{x}^{(2)}+\mathbf{c}_{G-S}, \quad \ldots $$A matrix $A$ with $n$ rows and $n$ columns is diagonally dominant by rows if
$$ \sum_{\substack{j=1\\ j\ne i}}^{n}\left|a_{ij}\right|\lt \left|a_{ii}\right|\quad i=1,\ldots,n $$For the matrix $A$
$$A=\left(\begin{array}{crr} \fbox{$4$} & 1 & 0\\ 1 & \fbox{$4$} & 3\\ 0 & 3 & \fbox{$4$} \end{array}\right)\begin{array}{r} \left|1\right|+\left|0\right|\lt\left|4\right|\\ \left|1\right|+\left|3\right|\nless\left|4\right|\\ \left|0\right|+\left|3\right|\lt\left|4\right| \\ \end{array}$$It is not diagonally dominant, since the inequality fails in the second row: $\left|1\right|+\left|3\right|=\left|4\right|$.
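The row-by-row check can be automated; a minimal NumPy sketch (the helper `is_diagonally_dominant` is our own name, not a library function):

```python
import numpy as np

def is_diagonally_dominant(A):
    """Strict diagonal dominance by rows: |a_ii| > sum_{j != i} |a_ij| for every i."""
    A = np.asarray(A, dtype=float)
    off_diag_sums = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.abs(np.diag(A)) > off_diag_sums))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])
print(is_diagonally_dominant(A))  # False: row 2 gives |1| + |3| = |4|
```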
The iteration matrix is $B_{G-S}=-(L+D)^{-1}U$. If
$$ A=\left(\begin{array}{ccc} 4 & 1 & 0\\ 1 & 4 & 3\\ 0 & 3 & 4 \end{array}\right) $$then
$$ L=\left(\begin{array}{ccc} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 3 & 0 \end{array}\right) \quad D=\left(\begin{array}{ccc} 4 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 4 \end{array}\right) \quad U=\left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 3\\ 0 & 0 & 0 \end{array}\right) $$Let us compute the inverse by Gauss-Jordan
We write the matrix $\left[(L+D)|I\right]$. As the pivot is $\mathbf{{\color{red}4}}$ we divide the row by $4$ to get a $1$ in the pivot position
$$ \begin{array}{c} r_{1}\\ r_{2}\\ r_{3} \end{array}\left(\begin{array}{rrr|rrr} {\mathbf{{\color{red}4}}} & 0 &0 & 1 & 0 & 0\\ \mathbf{{\color{blue}1}} & 4 & 0 & 0 & 1 & 0\\ \mathbf{{\color{ForestGreen}0}} & 3 & 4 & 0 & 0 & 1 \end{array}\right) \begin{array}{rrl} r_{1} & = & r_{1}/(\mathbf{\color{red}{4}})\\ & &\\ & & \end{array} $$We make zeros below the pivot (in the third row there is already a zero)
$$ \begin{array}{c} r_{1}\\ r_{2}\\ r_{3} \end{array}\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 1/4 & 0 & 0\\ \mathbf{{\color{blue}1}} & 4 & 0 & 0 & 1 & 0\\ \mathbf{{\color{ForestGreen}0}} & 3 & 4 & 0 & 0 & 1 \end{array}\right) \begin{array}{rrl} & &\\ r_{2} & = & r_{2}-(\mathbf{\color{blue}1})\,r_{1}\\ & & \end{array} $$As the next pivot is $\mathbf{{\color{red}4}}$ we divide its row by $4$ to convert it into a $1$
$$ \begin{array}{c} r_{1}\\ r_{2}\\ r_{3} \end{array}\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 1/4 & 0 & 0\\ 0 & {\mathbf{{\color{red}4}}} & 0 & -1/4 & 1 & 0\\ 0 & 3 & 4 & 0 & 0 & 1 \end{array}\right) \begin{array}{rrl} & &\\ r_{2} & = & r_{2}/(\mathbf{\color{red}{4}})\\ & & \end{array} $$We make zeros above and below the pivot (above there is already a zero and nothing needs to be done)
$$ \begin{array}{c} r_{1}\\ r_{2}\\ r_{3} \end{array}\left(\begin{array}{rrr|rrr} 1 & \mathbf{{\color{blue}0}} & 0 & 1/4 & 0 & 0\\ 0 & 1 & 0 & -1/16 & 1/4 & 0\\ 0 & \mathbf{{\color{ForestGreen}3}} & 4 & 0 & 0 & 1 \end{array}\right) \begin{array}{rrl} & &\\ & &\\ r_{3} & = & r_{3}-(\mathbf{\color{ForestGreen}3})\,r_{2} \end{array} $$
$$ \begin{array}{c} r_{1}\\ r_{2}\\ r_{3} \end{array}\left(\begin{array}{rrr|rrr} 1 & 0 & \mathbf{{\color{blue}0}} & 1/4 & 0 & 0\\ 0 & 1 & \mathbf{{\color{ForestGreen}0}} & -1/16 & 1/4 & 0\\ 0 & 0 & {\mathbf{{\color{red}4}}} & 3/16 & -3/4 & 1 \end{array}\right) \begin{array}{rrl} & &\\ & &\\ r_{3} & = & r_{3}/(\mathbf{\color{red}{4}}) \end{array} $$We divide the row of the pivot by the pivot, and since above it there are only zeros, we have finished.
$$ \begin{array}{c} r_{1}\\ r_{2}\\ r_{3} \end{array}\left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 1/4 & 0 & 0\\ 0 & 1 & 0 & -1/16 & 1/4 & 0\\ 0 & 0 & 1 & 3/64 & -3/16 & 1/4 \end{array}\right) $$Since we already have the matrix $I$ on the left, the matrix on the right will be $(L+D)^{-1}$.
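The Gauss-Jordan computation can be checked numerically; a minimal NumPy sketch (variable names are ours):

```python
import numpy as np

# Lower-triangular part of A plus its diagonal (L + D in the notation above)
LD = np.array([[4.0, 0.0, 0.0],
               [1.0, 4.0, 0.0],
               [0.0, 3.0, 4.0]])

inv_LD = np.linalg.inv(LD)
print(inv_LD)
# Rows should match [1/4, 0, 0], [-1/16, 1/4, 0], [3/64, -3/16, 1/4]
```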
$$ (L+D)^{-1} = \left(\begin{array}{rrr} 1/4 & 0 & 0\\ -1/16 & 1/4 & 0\\ 3/64 & -3/16 & 1/4 \end{array}\right) $$We multiply it by $U$ and change the sign to obtain the iteration matrix
$$ B_{G-S} = -(L+D)^{-1}\,U= -\left(\begin{array}{rrr} 1/4 & 0 & 0\\ -1/16 & 1/4 & 0\\ 3/64 & -3/16 & 1/4 \end{array}\right) \left(\begin{array}{ccc} 0 & 1 & 0\\ 0 & 0 & 3\\ 0 & 0 & 0 \end{array}\right)= \left(\begin{array}{rrr} 0 & -1/4 & 0\\ 0 & 1/16 & -3/4\\ 0 & -3/64 & 9/16 \end{array}\right) $$Infinity norm. If $A$ is an $m\times n$ matrix, its infinity norm is the maximum absolute row sum,
$$ \left|\left|A\right|\right|_{\infty}=\max_{1\le i\le m}\sum_{j=1}^{n}\left|a_{ij}\right| $$
We move to floating point to compare the entries more easily:
$$ B_{G-S}\approx\left(\begin{array}{rrr} 0 & -0.25 & 0\\ 0 & 0.06 & -0.75\\ 0 & -0.05 & 0.56 \end{array}\right) $$Then
$$ \left|\left|B_{G-S}\right|\right|_{\infty}=\textrm{Max} \left(0.25,0.81,0.61\right)=0.81 $$1-norm. If $A$ is an $m\times n$ matrix, its $1$-norm is the maximum absolute column sum,
$$ \left|\left|A\right|\right|_{1}=\max_{1\le j\le n}\sum_{i=1}^{m}\left|a_{ij}\right| $$
Then
$$ \left|\left|B_{G-S}\right|\right|_{1}=\textrm{Max} \left(0,0.36,1.31\right)=1.31 $$Eigenvalues. We calculate the eigenvalues of $B_{G-S}$
$$ \left(\begin{array}{rrr} 0 & -1/4 & 0\\ 0 & 1/16 & -3/4\\ 0 & -3/64 & 9/16 \end{array}\right) $$Computing the roots of $|B_{G-S}-\lambda I|=0$
$$ \left|B_{G-S}-\lambda I\right|=\left|\begin{array}{ccc} -\lambda & -1/4 & 0\\ 0 & (1/16)-\lambda & -3/4\\ 0 & -3/64 & (9/16)-\lambda \end{array}\right| =-\lambda\left|\begin{array}{cc} (1/16)-\lambda & -3/4\\ -3/64 & (9/16)-\lambda \end{array}\right|= $$ $$ =-\lambda \left[ \left(\frac{1}{16}-\lambda\right)\left(\frac{9}{16}-\lambda\right)-\left(-\frac{3}{4}\right)\left(-\frac{3}{64}\right)\right]= $$ $$ =-\lambda\left[\left(\frac{1}{16}-\lambda\right)\left(\frac{9}{16}-\lambda\right)-\frac{9}{4\times 64} \right]= $$ $$ =-\lambda\left[\frac{1}{16}\times\frac{9}{16}-\frac{1}{16}\lambda-\frac{9}{16}\lambda+\lambda^2-\frac{9}{2^2\times 2^6} \right]= $$
$$ =-\lambda\left[\frac{9}{2^8}-\frac{10}{16}\lambda+\lambda^2-\frac{9}{2^2\times 2^6} \right]=-\lambda^2\left(-\frac{10}{16}+\lambda\right)=-\lambda^2\left(\lambda-\frac{5}{8}\right) $$And the eigenvalues are
$$ \lambda_{1,2}=0,\quad\lambda_{3}=\frac{5}{8}=0.625 $$It is a sufficient condition for the convergence of the Gauss-Seidel method that the coefficient matrix $A$ is diagonally dominant by rows.
It is a sufficient condition for the convergence of the Gauss-Seidel method that some norm of the iteration matrix $B_{G-S}$ is less than $1.$
It is a necessary and sufficient condition for the convergence of the Gauss-Seidel method that all the eigenvalues of the iteration matrix $B_{G-S}$ are less than $1$ in absolute value, that is, that the spectral radius satisfies $\rho(B_{G-S})\lt 1$. Here $\rho(B_{G-S})=0.625\lt 1$, so the iteration converges.
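The two norms and the spectral radius are easy to verify numerically; a minimal NumPy sketch (variable names are ours):

```python
import numpy as np

# Iteration matrix B_GS = -(L+D)^{-1} U for the example system
B = np.array([[0.0, -1/4,   0.0],
              [0.0,  1/16, -3/4],
              [0.0, -3/64,  9/16]])

inf_norm = np.linalg.norm(B, np.inf)          # maximum absolute row sum
one_norm = np.linalg.norm(B, 1)               # maximum absolute column sum
spectral_radius = max(abs(np.linalg.eigvals(B)))

print(inf_norm)          # 0.8125 (< 1, so convergence is guaranteed)
print(one_norm)          # 1.3125 (> 1, so this norm is inconclusive)
print(spectral_radius)   # approximately 0.625 (< 1: necessary and sufficient)
```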
The system is $A\mathbf{x}=\mathbf{b}$ with the matrix $A$ above and $\mathbf{b}=(3,3,5)^{T}$. If we write out the equations
$$ \begin{array}{rrrrrr} 4x & +&y & & & = & 3\\ x & +&4y & +&3z & = & 3\\ & &3y & +&4z & = & 5 \end{array} $$In the first equation we solve for the first unknown, $x$, in the second, the second unknown, $y$, and finally, $z.$
We perform each iteration starting from the values of the previous one, but as soon as a new value is computed, we use it in the equations that follow
$$ \begin{array}{ccl} x^{\color{red}{(1)}} & = & \dfrac{3-y^{(0)}}{4}\\ y^{\color{red}{(1)}} & = & \dfrac{3-x^{\color{red}{(1)}}-3z^{(0)}}{4}\\ z^{\color{red}{(1)}} & = & \dfrac{5-3y^{\color{red}{(1)}}}{4} \end{array} $$Iteration 1
We perform the first iteration, taking the null vector as the initial guess
$$ \begin{array}{ccl} x^{\color{red}{(1)}} & = & (3-y^{(0)})/4=(3-0)/4=3/4= 0.75\\ y^{\color{red}{(1)}} & = & (3-x^{\color{red}{(1)}}-3z^{(0)})/4=(3-0.75-0)/4=0.56\\ z^{\color{red}{(1)}} & = & (5-3y^{\color{red}{(1)}})/4= (5-3(0.56))/4= 0.83 \end{array} $$Iteration 2
$$ \begin{array}{ccl} x^{\color{red}{(2)}} & = & (3-y^{(1)})/4=(3-0.56)/4=0.61\\ y^{\color{red}{(2)}} & = & (3-x^{\color{red}{(2)}}-3z^{(1)})/4=(3-0.61-3(0.83))/4=-0.02\\ z^{\color{red}{(2)}} & = & (5-3y^{\color{red}{(2)}})/4= (5-3(-0.02))/4= 1.27 \end{array} $$Iteration 3
$$ \begin{array}{ccl} x^{\color{red}{(3)}} & = & (3-y^{(2)})/4=(3-(-0.02))/4=0.76\\ y^{\color{red}{(3)}} & = & (3-x^{\color{red}{(3)}}-3z^{(2)})/4=(3-0.76-3(1.27))/4=-0.39\\ z^{\color{red}{(3)}} & = & (5-3y^{\color{red}{(3)}})/4= (5-3(-0.39))/4= 1.54 \end{array} $$The exact solution is
$$x = 1\quad y = -1 \quad z = 2$$
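The element-by-element updates above translate directly into code; a minimal NumPy sketch of the Gauss-Seidel sweep (the helper `gauss_seidel` and the fixed iteration count are our own choices; a practical version would stop on a tolerance):

```python
import numpy as np

# Example system: 4x + y = 3, x + 4y + 3z = 3, 3y + 4z = 5
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])
b = np.array([3.0, 3.0, 5.0])

def gauss_seidel(A, b, x0=None, iterations=50):
    """Gauss-Seidel sweeps: each new component is used immediately."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(iterations):
        for i in range(n):
            # Sum over the other unknowns, using already-updated values for j < i
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

print(gauss_seidel(A, b, iterations=1))  # [0.75, 0.5625, 0.828125], as in iteration 1
print(gauss_seidel(A, b))                # approaches the exact solution [1, -1, 2]
```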