Given an initial guess $\mathbf{x}^{(0)}$, an iterative method generates a sequence of approximations
$$\mathbf{x}^{(0)}, \mathbf{x}^{(1)}, \mathbf{x}^{(2)} ,\ldots,\mathbf{x}^{(k)},\ldots \rightarrow \mathbf{x}$$that converges to the solution.
We iterate until a stopping condition is met, for example when a maximum number of iterations is reached or when two consecutive approximations differ by less than a prescribed tolerance.
Classical (linear) iterative methods are based on rewriting the problem as
$$ \mathbf{x}=B\mathbf{x}+\mathbf{c} $$where $B$ is an $n\times n$ matrix and $\mathbf{c}$ is a column vector of $n$ elements.
The matrix $B$ is called the iteration matrix.
The Jacobi method is based on the decomposition of $A$ as
$$ A=L+D+U $$where
$$ A= \left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n}\\ a_{21} & a_{22} & \ldots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array}\right) $$and then
$$ L=\left(\begin{array}{cccc} 0 & 0 & \cdots & 0\\ a_{21} & 0 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ a_{n1} & a_{n2} & \cdots & 0 \end{array}\right)\;D=\left(\begin{array}{cccc} a_{11} & 0 & \cdots & 0\\ 0 & a_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & a_{nn} \end{array}\right)\;U=\left(\begin{array}{cccc} 0 & a_{12} & \cdots & a_{1n}\\ 0 & 0 & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 0 \end{array}\right) $$Beware! Even though the names of the matrices are the same, this decomposition has nothing to do with the $LU$ decomposition.
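As a quick check of this splitting, here is a minimal sketch in Python (the use of numpy is our own choice; the notes themselves do not prescribe any software), applied to the $3\times 3$ matrix used in the example later in this section:

```python
import numpy as np

# The 3x3 matrix used in the example below.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])

L = np.tril(A, k=-1)       # strictly lower triangular part of A
D = np.diag(np.diag(A))    # diagonal part of A
U = np.triu(A, k=1)        # strictly upper triangular part of A

# The three pieces add up to the original matrix.
assert np.allclose(L + D + U, A)
```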
We have
$$ \begin{array}{l} A\mathbf{x}=\mathbf{b} \quad \Rightarrow \quad \left(L+D+U\right)\mathbf{x}=\mathbf{b} \quad \Rightarrow \quad \\[0.5cm] D\mathbf{x}=-\left(L+U\right)\mathbf{x}+\mathbf{b} \quad \Rightarrow \quad \mathbf{x}=-D^{-1}\left(L+U\right)\mathbf{x}+D^{-1}\mathbf{b} \end{array} $$And if we call
$$ B_J=-D^{-1}\left(L+U\right) \quad \mathbf{c}_J=D^{-1}\mathbf{b} $$we can write
$$ \mathbf{x}=B_J\mathbf{x}+\mathbf{c}_J $$And we can perform iterations using
$$ \mathbf{x}^{(k+1)}=B_J\mathbf{x}^{(k)}+\mathbf{c}_J $$starting with the initial guess $\mathbf{x}^{(0)}$
$$ \mathbf{x}^{(1)}=B_J\mathbf{x}^{(0)}+\mathbf{c}_J, \quad \mathbf{x}^{(2)}=B_J\mathbf{x}^{(1)}+\mathbf{c}_J, \quad \mathbf{x}^{(3)}=B_J\mathbf{x}^{(2)}+\mathbf{c}_J, \quad \ldots $$Consider the system $A\mathbf{x}=\mathbf{b}$ with
$$ A=\left(\begin{array}{ccc} 4 & 1 & 0\\ 1 & 4 & 3\\ 0 & 3 & 4 \end{array}\right) \quad \mathbf{b}=\left(\begin{array}{c} 3\\ 3\\ 5 \end{array}\right) $$
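Before analysing this system by hand, here is a minimal numpy sketch (again, the choice of numpy is ours) that builds $B_J$ and $\mathbf{c}_J$ and runs the iteration in matrix form; the same computations are carried out step by step in the rest of the section:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])
b = np.array([3.0, 3.0, 5.0])

L = np.tril(A, k=-1)
D = np.diag(np.diag(A))
U = np.triu(A, k=1)

B_J = -np.linalg.inv(D) @ (L + U)   # Jacobi iteration matrix
c_J = np.linalg.inv(D) @ b

x = np.zeros(3)        # initial guess x^(0) = 0
for k in range(50):    # a fixed number of iterations as a simple stopping rule
    x = B_J @ x + c_J  # x^(k+1) = B_J x^(k) + c_J
print(x)               # close to the exact solution (1, -1, 2)
```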
A matrix $A$ of $n$ rows and $n$ columns is diagonally dominant (by rows) if
$$ \sum_{\substack{j=1\\ j\ne i}}^{n}|a_{ij}|\lt |a_{ii}|\quad i=1,\ldots,n $$For the matrix $A$
$$A=\left(\begin{array}{crr} \fbox{$4$} & 1 & 0\\ 1 & \fbox{$4$} & 3\\ 0 & 3 & \fbox{$4$} \end{array}\right)\begin{array}{r} \left|1\right|+\left|0\right|\lt\left|4\right|\\[0.1cm] \left|1\right|+\left|3\right|\nless\left|4\right|\\[0.1cm] \left|0\right|+\left|3\right|\lt\left|4\right| \\ \end{array}$$It is not diagonally dominant, since in the second row the inequality does not hold: $\left|1\right|+\left|3\right|=\left|4\right|$.
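The row-by-row check above is easy to automate. A small sketch (the helper name `is_diagonally_dominant` is ours, not something defined in the notes):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if sum_{j != i} |a_ij| < |a_ii| holds for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(off_diag < diag))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])
print(is_diagonally_dominant(A))   # False: in the second row |1| + |3| = |4|
```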
The Jacobi iteration matrix is $B_{J}=-D^{-1}(L+U)$. We can also calculate it directly from $A$: divide each row of $A$ by its diagonal entry, change the sign of the off-diagonal entries and set the diagonal entries to zero, that is, $(B_J)_{ii}=0$ and $(B_J)_{ij}=-a_{ij}/a_{ii}$ for $j\ne i$.
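This element-wise rule is easy to verify numerically; a short sketch (numpy assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])

# Divide each row by its diagonal entry and change the sign...
B_J = -A / np.diag(A)[:, None]
np.fill_diagonal(B_J, 0.0)   # ...then zero out the diagonal
print(B_J)
# [[ 0.   -0.25  0.  ]
#  [-0.25  0.   -0.75]
#  [ 0.   -0.75  0.  ]]
```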
Infinity norm. If $A$ is an $m\times n$ matrix the infinity norm is given by
$$ \left|\left|A\right|\right|_{\infty}=\max_{1\le i\le m}\sum_{j=1}^{n}|a_{ij}| $$that is, the maximum of the sums of the absolute values of the elements of each row.
The infinity norm for $B_J$ is
$$ B_{J}=\left(\begin{array}{ccc} 0 & -1/4 & 0\\ -1/4 & 0 & -3/4\\ 0 & -3/4 & 0 \end{array}\right)\begin{array}{l} 0+1/4+0=1/4\\ 1/4+0+3/4=1\\ 0+3/4+0=3/4 \end{array} $$And
$$ \left|\left|B_{J}\right|\right|_{\infty}=\textrm{Max} \left(1/4,1,3/4\right)=1 $$1-norm. If $A$ is an $m\times n$ matrix its $1$-norm is
$$ \left|\left|A\right|\right|_{1}=\max_{1\le j\le n}\sum_{i=1}^{m}|a_{ij}| $$that is, the maximum of the sums of the absolute values of the elements of each column.
Since $B_J$ is symmetric, the sums by columns coincide with the sums by rows, and then
$$ \left|\left|B_{J}\right|\right|_{1}=\textrm{Max} \left(1/4,1,3/4\right)=1 $$Eigenvalues. We calculate the eigenvalues of $B_J$ by computing the roots of $|B_J-\lambda I|=0$
$$ \left|\begin{array}{ccc} 0-\lambda & -1/4 & 0\\ -1/4 & 0-\lambda & -3/4\\ 0 & -3/4 & 0-\lambda \end{array}\right|= -\lambda^3 -\left( -\frac{1}{16}\lambda-\frac{9}{16}\lambda\right) = -\lambda^3+\frac{10}{16}\lambda=-\lambda\left(\lambda^2-\frac{5}{8}\right) $$And the eigenvalues are
$$ \lambda_1=0 \quad \lambda_2=\sqrt{\frac{5}{8}}\approx 0.79 \quad \lambda_3=-\sqrt{\frac{5}{8}}\approx -0.79 $$
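The two norms and the eigenvalues can be confirmed with numpy (again an assumption of this sketch, not something the notes require):

```python
import numpy as np

B_J = np.array([[ 0.0 , -0.25,  0.0 ],
                [-0.25,  0.0 , -0.75],
                [ 0.0 , -0.75,  0.0 ]])

print(np.linalg.norm(B_J, np.inf))   # 1.0, the maximum absolute row sum
print(np.linalg.norm(B_J, 1))        # 1.0, the maximum absolute column sum

eigenvalues = np.linalg.eigvals(B_J)
print(eigenvalues)                   # 0, +0.79 and -0.79, in some order
print(np.max(np.abs(eigenvalues)))   # spectral radius, about 0.79 < 1
```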
It is a sufficient condition for the convergence of the Jacobi method that the coefficient matrix $A$ is diagonally dominant by rows.
It is a sufficient condition for the convergence of the Jacobi method that some norm of the iteration matrix $B_J$ is less than $1.$
It is a necessary and sufficient condition for the convergence of the Jacobi method that all the eigenvalues of the iteration matrix $B_J$ are less than $1$ in absolute value. In our example the eigenvalues of $B_J$ are $0$ and $\pm\sqrt{5/8}\approx\pm 0.79$, so the method converges even though $A$ is not diagonally dominant and both computed norms of $B_J$ are equal to $1$.
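The decisive test, the spectral radius of $B_J$, can be wrapped in a small helper; a sketch (the function name `jacobi_converges` is our own):

```python
import numpy as np

def jacobi_converges(A):
    """Necessary and sufficient test: spectral radius of B_J strictly below 1."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    B_J = -np.linalg.inv(D) @ (A - D)   # A - D = L + U
    return bool(np.max(np.abs(np.linalg.eigvals(B_J))) < 1)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])
print(jacobi_converges(A))   # True, even though A is not diagonally dominant
```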
The system is $A\mathbf{x}=\mathbf{b}$ with
$$ A=\left(\begin{array}{ccc} 4 & 1 & 0\\ 1 & 4 & 3\\ 0 & 3 & 4 \end{array}\right) \quad \mathbf{b}=\left(\begin{array}{c} 3\\ 3\\ 5 \end{array}\right) $$If we write the equations
$$ \begin{array}{rcrcrcl} 4x & +&y & & & = & 3\\ x & +&4y & +&3z & = & 3\\ & &3y & +&4z & = & 5 \end{array} $$We solve the first equation for the first unknown, $x$, the second for the second unknown, $y$, and the third for the third unknown, $z$:
$$ \begin{array}{ccl} x & = & \dfrac{3-y}{4}\\ y & = & \dfrac{3-x-3z}{4}\\ z & = & \dfrac{5-3y}{4} \end{array} $$
We perform the iterations assuming that all the values of the unknowns on the right side are values from the previous iteration
$$ \begin{array}{ccl} x^{(1)} & = & \dfrac{3-y^{(0)}}{4}\\ y^{(1)} & = & \dfrac{3-x^{(0)}-3z^{(0)}}{4}\\ z^{(1)} & = & \dfrac{5-3y^{(0)}}{4} \end{array} $$We perform the first iteration, taking the null vector as the initial guess, $\mathbf{x}^{(0)}=(0,0,0)$
$$ \begin{array}{ccl} x^{(1)} & = & (3-y^{(0)})/4=(3-0)/4=0.75\\ y^{(1)} & = & (3-x^{(0)}-3z^{(0)})/4=(3-0-3(0))/4=0.75\\ z^{(1)} & = & (5-3y^{(0)})/4=(5-3(0))/4=1.25 \end{array} $$
Second iteration
$$ \begin{array}{ccl} x^{(2)} & = & (3-y^{(1)})/4=(3-0.75)/4=0.56\\ y^{(2)} & = & (3-x^{(1)}-3z^{(1)})/4=(3-0.75-3(1.25))/4=-0.38\\ z^{(2)} & = & (5-3y^{(1)})/4=(5-3(0.75))/4=0.69 \end{array} $$Third iteration
$$ \begin{array}{ccl} x^{(3)} & = & (3-y^{(2)})/4=(3-(-0.38))/4=0.84\\ y^{(3)} & = & (3-x^{(2)}-3z^{(2)})/4=(3-0.56-3(0.69))/4=0.09\\ z^{(3)} & = & (5-3y^{(2)})/4=(5-3(-0.38))/4=1.53 \end{array} $$For comparison purposes, the exact solution is
$$x = 1\quad y = -1 \quad z = 2$$The algorithm we have applied is formally expressed as
$$ x_i^{(k+1)}=\frac{1}{a_{ii}}\left(b_i-\sum_{\substack{j=1\\ j\ne i}}^{n}a_{ij}x_j^{(k)}\right)\quad i=1,\ldots,n $$
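A componentwise implementation of this formula, as a sketch (the function name, tolerance and iteration limit below are our own choices, not part of the notes); it stops either after `max_iter` iterations or when two consecutive approximations differ by less than `tol`:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=100):
    """Solve A x = b with the Jacobi iteration described above."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]   # x_i^(k+1) = (b_i - sum) / a_ii
        if np.max(np.abs(x_new - x)) < tol:   # stop when the update is small
            return x_new, k
        x = x_new
    return x, max_iter

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 3.0],
              [0.0, 3.0, 4.0]])
b = np.array([3.0, 3.0, 5.0])
x, k = jacobi(A, b)
print(x, k)   # approaches the exact solution (1, -1, 2)
```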