
Jacobi method

Consider the system $A\mathbf{x}=\mathbf{b}$ with

$$ A=\left(\begin{array}{cccc} 20 & 1 & 0 & 1\\ 1 & 20 & 3 & 1\\ 0 & 3 & 20 & 1\\ 1 & 0 & 1 & 20 \end{array}\right)\quad\mathbf{b}=\left(\begin{array}{c} 10\\ 7\\ 4\\ 6 \end{array}\right) $$

Solve it using the Jacobi method.

Use the stopping condition $\left\Vert \mathbf{x}^{(k)}-\mathbf{x}^{(k-1)}\right\Vert _{\infty}<0.01$.


Let us use Python.

In [1]:
import numpy as np
np.set_printoptions(precision=3)

We enter the data: the coefficient matrix $A$ and the right-hand-side vector $\mathbf{b}$

In [2]:
A = np.array([[20., 1, 0, 1],[1, 20, 3, 1],[0, 3, 20, 1],[1, 0, 1, 20]])
b = np.array([10., 7, 4, 6])
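Although the exercise does not ask for it, note that $A$ is strictly diagonally dominant: in every row the diagonal entry ($20$) is larger in absolute value than the sum of the absolute values of the other entries. This guarantees that the Jacobi method converges for this system. A quick check along these lines (an aside, not part of the original computation):

diag = np.abs(np.diag(A))              # |a_ii| for each row
off  = np.abs(A).sum(axis=1) - diag    # sum of |a_ij| over j != i
print(np.all(diag > off))              # True: A is strictly diagonally dominant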

We initialize the two vectors where we will store the last two approximate solutions

In [3]:
x0 = np.zeros(4)
x1 = np.zeros(4)

If we have a system of $4$ equations and $4$ unknowns

$$ \begin{array}{cll} a_{11}x_{1}+a_{12}x_{2}+a_{13}x_{3}+a_{14}x_{4} & = & b_{1}\\ a_{21}x_{1}+a_{22}x_{2}+a_{23}x_{3}+a_{24}x_{4} & = & b_{2}\\ a_{31}x_{1}+a_{32}x_{2}+a_{33}x_{3}+a_{34}x_{4} & = & b_{3}\\ a_{41}x_{1}+a_{42}x_{2}+a_{43}x_{3}+a_{44}x_{4} & = & b_{4} \end{array} $$

and we solve the first equation for the first unknown, the second equation for the second unknown, and so on, we obtain

$$ \begin{array}{cll} x_{1}& = &\dfrac{b_{1}-a_{12}x_{2}-a_{13}x_{3}-a_{14}x_{4}}{a_{11}} \\ x_{2} & = & \dfrac{b_{2}-a_{21}x_{1}-a_{23}x_{3}-a_{24}x_{4}}{a_{22}}\\ x_{3} & = & \dfrac{b_{3}-a_{31}x_{1}-a_{32}x_{2}-a_{34}x_{4}}{a_{33}}\\ x_{4} & = & \dfrac{b_{4}-a_{41}x_{1}-a_{42}x_{2}-a_{43}x_{3}}{a_{44}} \end{array} $$

And we get the values of the first iteration from the initial values as follows

$$ \begin{array}{cll} x_{1}^{{\color{red}{(1)}}} & = & \dfrac{b_{1}-a_{12}x_{2}^{(0)}-a_{13}x_{3}^{(0)}-a_{14}x_{4}^{(0)}}{a_{11}} \\ x_{2}^{{\color{red}{(1)}}} & = & \dfrac{b_{2}-a_{21}x_{1}^{(0)}-a_{23}x_{3}^{(0)}-a_{24}x_{4}^{(0)}}{a_{22}}\\ x_{3}^{{\color{red}{(1)}}} & = & \dfrac{b_{3}-a_{31}x_{1}^{(0)}-a_{32}x_{2}^{(0)}-a_{34}x_{4}^{(0)}}{a_{33}}\\ x_{4}^{{\color{red}{(1)}}}& = & \dfrac{b_{4}-a_{41}x_{1}^{(0)}-a_{42}x_{2}^{(0)}-a_{43}x_{3}^{(0)}}{a_{44}} \end{array} $$
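As an aside, if we split $A=D+R$, where $D$ contains the diagonal of $A$ and $R$ the off-diagonal entries, one Jacobi step can be written compactly as $\mathbf{x}^{(k)}=D^{-1}\left(\mathbf{b}-R\,\mathbf{x}^{(k-1)}\right)$. In NumPy this becomes a one-liner; the sketch below (with our own variable name x1_vec) computes the same $\mathbf{x}^{(1)}$ as the componentwise code used in the rest of the notebook:

D = np.diag(A)             # vector of diagonal entries a_ii
R = A - np.diag(D)         # off-diagonal part of A
x1_vec = (b - R @ x0) / D  # one Jacobi step for all components at once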

Algorithm

  • Given an initial guess $\mathbf{x}^{(0)}$
  • For $k=1,2,\ldots,$MaxIter
    • For $i=1,2,\ldots,n$, compute $$x_{i}^{{\color{red}{(k)}}}=\dfrac{1}{a_{ii}}\left(b_{i}-\sum_{j=1}^{i-1}a_{ij}x_{j}^{(k-1)}-\sum_{j=i+1}^{n}a_{ij}x_{j}^{(k-1)}\right)$$
    • If the stopping condition is met, take $\mathbf{x}^{(k)}$ as the approximation of the solution and stop (a loop implementing this algorithm is sketched below).
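The rest of this notebook carries out the iterations one at a time, but the algorithm above can also be wrapped in a function. The following is a minimal sketch (the name jacobi and its arguments are our own choice, not part of the exercise):

def jacobi(A, b, x0, tol=0.01, max_iter=100):
    # Jacobi iteration: repeat the componentwise update above until the
    # infinity norm of the difference between consecutive iterates is < tol.
    n = len(b)
    x_old = np.copy(x0)
    x_new = np.zeros(n)
    for k in range(1, max_iter + 1):
        for i in range(n):
            s = A[i, :i] @ x_old[:i] + A[i, i+1:] @ x_old[i+1:]
            x_new[i] = (b[i] - s) / A[i, i]
        if max(abs(x_new - x_old)) < tol:
            return x_new, k
        x_old = np.copy(x_new)
    return x_new, max_iter

Calling jacobi(A, b, np.zeros(4)) performs exactly the componentwise updates done by hand below and should stop after the fourth iteration.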

Iteration 1

From $\mathbf{x}^{(0)}$ we compute $\mathbf{x}^{(1)}$

In [4]:
k = 1
x1[0] = (b[0] - A[0,1] * x0[1] - A[0,2] * x0[2] - A[0,3] * x0[3]) / A[0,0]
x1[1] = (b[1] - A[1,0] * x0[0] - A[1,2] * x0[2] - A[1,3] * x0[3]) / A[1,1]
x1[2] = (b[2] - A[2,0] * x0[0] - A[2,1] * x0[1] - A[2,3] * x0[3]) / A[2,2]
x1[3] = (b[3] - A[3,0] * x0[0] - A[3,1] * x0[1] - A[3,2] * x0[2]) / A[3,3]

We print the new iterate, the difference vector, and the infinity norm of that difference, which is used to control the error

In [5]:
print ('x1              = ', x1)
print ('x1-x0           = ', x1-x0)
print ('max(abs(x1-x0)) = ', max(abs(x1-x0)))
x1              =  [0.5  0.35 0.2  0.3 ]
x1-x0           =  [0.5  0.35 0.2  0.3 ]
max(abs(x1-x0)) =  0.5
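Note that max(abs(x1-x0)) is exactly the infinity norm $\left\Vert \mathbf{x}^{(1)}-\mathbf{x}^{(0)}\right\Vert _{\infty}$ from the stopping condition; the same value can also be obtained with NumPy's norm function:

print(np.linalg.norm(x1 - x0, np.inf))   # 0.5, same as max(abs(x1-x0))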

As the norm is greater than $0.01$, we go on with the iterations

Iteration 2

From $\mathbf{x}^{(1)}$ we compute $\mathbf{x}^{(2)}$ (in the code they are stored in x0 and x1, respectively)

In [6]:
k = 2
x0 = np.copy(x1)
x1[0] = (b[0] - A[0,1] * x0[1] - A[0,2] * x0[2] - A[0,3] * x0[3]) / A[0,0]
x1[1] = (b[1] - A[1,0] * x0[0] - A[1,2] * x0[2] - A[1,3] * x0[3]) / A[1,1]
x1[2] = (b[2] - A[2,0] * x0[0] - A[2,1] * x0[1] - A[2,3] * x0[3]) / A[2,2]
x1[3] = (b[3] - A[3,0] * x0[0] - A[3,1] * x0[1] - A[3,2] * x0[2]) / A[3,3]

We print the new iterate, the difference vector, and the infinity norm of that difference, which is used to control the error

In [7]:
print ('x1              = ', x1)
print ('x1-x0           = ', x1-x0)
print ('max(abs(x1-x0)) = ', max(abs(x1-x0)))
x1              =  [0.467 0.28  0.133 0.265]
x1-x0           =  [-0.033 -0.07  -0.068 -0.035]
max(abs(x1-x0)) =  0.06999999999999995

As the norm is greater than $0.01$, we go on with the iterations

Iteration 3

From $\mathbf{x}^{(2)}$ we compute $\mathbf{x}^{(3)}$ (again stored in x0 and x1, respectively)

In [8]:
k = 3
x0 = np.copy(x1)
x1[0] = (b[0] - A[0,1] * x0[1] - A[0,2] * x0[2] - A[0,3] * x0[3]) / A[0,0]
x1[1] = (b[1] - A[1,0] * x0[0] - A[1,2] * x0[2] - A[1,3] * x0[3]) / A[1,1]
x1[2] = (b[2] - A[2,0] * x0[0] - A[2,1] * x0[1] - A[2,3] * x0[3]) / A[2,2]
x1[3] = (b[3] - A[3,0] * x0[0] - A[3,1] * x0[1] - A[3,2] * x0[2]) / A[3,3]

We print the new iterate, the difference vector, and the infinity norm of that difference

In [9]:
print ('x1              = ', x1)
print ('x1-x0           = ', x1-x0)
print ('max(abs(x1-x0)) = ', max(abs(x1-x0)))
x1              =  [0.473 0.293 0.145 0.27 ]
x1-x0           =  [0.005 0.013 0.012 0.005]
max(abs(x1-x0)) =  0.013499999999999956

As the norm is greater than $0.01$, we go on with the iterations

Iteration 4

From $\mathbf{x}^{(3)}$ we compute $\mathbf{x}^{(4)}$ (again stored in x0 and x1, respectively)

In [10]:
k = 4
x0 = np.copy(x1)
x1[0] = (b[0] - A[0,1] * x0[1] - A[0,2] * x0[2] - A[0,3] * x0[3]) / A[0,0]
x1[1] = (b[1] - A[1,0] * x0[0] - A[1,2] * x0[2] - A[1,3] * x0[3]) / A[1,1]
x1[2] = (b[2] - A[2,0] * x0[0] - A[2,1] * x0[1] - A[2,3] * x0[3]) / A[2,2]
x1[3] = (b[3] - A[3,0] * x0[0] - A[3,1] * x0[1] - A[3,2] * x0[2]) / A[3,3]

We print the result

In [11]:
print ('x1              = ', x1)
print ('x1-x0           = ', x1-x0)
print ('max(abs(x1-x0)) = ', max(abs(x1-x0)))
x1              =  [0.472 0.291 0.142 0.269]
x1-x0           =  [-0.001 -0.002 -0.002 -0.001]
max(abs(x1-x0)) =  0.0023499999999999077

Now the norm of the difference vector is less than $0.01$, so we stop. Our approximate solution is

In [12]:
print ('x1 = ', x1)
x1 =  [0.472 0.291 0.142 0.269]

For comparison's sake, the exact solution is

In [13]:
x = np.linalg.solve(A, b)
print(x)
[0.472 0.292 0.143 0.269]

And the error is

In [14]:
print ('error = ', max(abs(x1-x)))
error =  0.0003674776538470925