A real scalar equation is an expression of the form
$$f(x)=0,$$
where $f:\Omega\subset\mathbb{R}\rightarrow\mathbb{R}$ is a function.
Any number $\alpha$ satisfying $f(\alpha)=0$ is said to be a solution of the equation or a zero of $f$. When $f$ is a polynomial, its zeros are also called roots.
Example 1. Compute graphically an approximation to the solutions of
$$x^5-3x^2+1.6=0$$
We import the modules matplotlib.pyplot and numpy.
import numpy as np
import matplotlib.pyplot as plt
f = lambda x : x**5 - 3 * x**2 + 1.6 # define the function
x = np.linspace(-1,1.5) # define the mesh in (-1,1.5)
OX = 0*x # define X axis
plt.figure()
plt.plot(x,f(x)) # plot the function
plt.plot(x,OX,'k-') # plot the X axis
plt.show()
We see that there are three solutions (intersections with the $X$ axis). They are, approximately, $x\simeq -0.7$, $x\simeq 0.8$, and $x\simeq 1.3$.
Let us plot these approximations over the function. As they are intersections with the $X$ axis, their $y$ coordinate is $0$. Thus, the $x$ and $y$ coordinates of these points will be
xp = np.array([-0.7, 0.8, 1.3])
yp = np.array([ 0., 0., 0. ])
And if we plot these points over the function line
plt.figure()
plt.plot(x,f(x))
plt.plot(x,OX,'k-')
plt.plot(xp,yp,'ro') # approximation of the zeros as red dots
plt.show()
From the plot above, we see that we may separate the zeros of $f$ into intervals satisfying the conditions of Bolzano's theorem:
Let $f:[a,b]\rightarrow\mathbb{R}$ be a continuous function with $f(a)\,f(b) \lt 0$. Then, there exists at least one zero of $f$ in $[a,b]$.
So we look for intervals where $f$ changes its sign.
In addition, if the function is strictly increasing or decreasing on such an interval, the zero it contains is unique.
In our example, we choose the following three intervals satisfying these conditions:
$$ [-1,-0.1]\quad [0.5,1] \quad [1.2,1.5] $$
In general, the best way to locate the zeros is to graph the function.
Another method to find and bracket the zeros is the incremental search. It can also be used to compute the zeros, but it is not efficient compared with the other methods we will see in this lab and the next one.
It is based on Bolzano's theorem: given a continuous function, we search for subintervals where the function takes different signs at the endpoints. If a subinterval is small, it probably contains a single zero.
Thus, we divide an interval $[a,b]$ in small intervals of $dx$ length, using the points $a = x_0, x_1, x_2,\ldots, x_{n-1}, x_n=b$, where for any interval $[x_{k-1},x_{k}]$ we have $dx = x_{k}-x_{k-1}$. Beginning at the left, we check if the sign in the boundary changes for any of these intervals.
This method has some drawbacks: a zero of even multiplicity produces no sign change and is missed; several zeros lying in the same subinterval are bracketed as if they were one; and a sign change across a discontinuity (such as a pole) is bracketed as if it were a zero.
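As an illustration, here is a minimal sketch of the incremental search. It anticipates the interface asked for in Exercise 1 below, so treat it as one possible layout rather than the required solution:

```python
import numpy as np

def incrementalSearch(f, a, b, n):
    """Bracket the sign changes of f in [a, b] using n subintervals."""
    x = np.linspace(a, b, n + 1)          # subinterval boundaries, dx = (b-a)/n
    intervals = np.zeros((n, 2))          # room for at most n brackets
    c = 0                                 # counter of brackets found so far
    for k in range(n):
        if f(x[k]) * f(x[k + 1]) < 0:     # Bolzano: sign change in [x_k, x_{k+1}]
            intervals[c] = [x[k], x[k + 1]]
            c += 1
    return intervals[:c, :]               # keep only the rows actually filled

f = lambda x: x**5 - 3*x**2 + 1.6
brackets = incrementalSearch(f, -1, 1.5, 25)
```

With $25$ subintervals of length $0.1$ this returns three brackets, one around each of the zeros seen graphically above.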
Write a function incrementalSearch(f,a,b,n) that has, as input arguments, a lambda function f, the interval boundaries a and b, and the number of subintervals n we will use to separate the zeros into subintervals of length $dx = \dfrac{b-a}{n}$ where Bolzano's conditions are met. The output argument will be a matrix that contains the boundaries of these intervals.
Use this function with
$$ f_1(x) = x^5 - 3x^2 + 1.6 $$
searching in the interval $[-1,1.5]$ with $25$ subintervals.
Use it also with
$$ f_2(x) = (x+2)\cos(2x) $$
searching in the interval $[0,10]$ with $100$ subintervals.
Hint: initialize the output matrix with intervals = np.zeros((n,2)) and use a counter c of the sign-change intervals found so far. Once the whole interval [a,b] has been searched, chop the matrix with intervals = intervals[:c,:]. The output argument will be this matrix.
%run Exercise1
The Bisection method works for a continuous function $f$ defined on a bounded interval, $[a,b]$, where Bolzano's theorem conditions are satisfied.
The separation condition is not necessary, but in case that several zeros are located in $[a,b]$, the Bisection method will approximate only one.
The Bisection method generates a sequence of nested intervals, each satisfying Bolzano's conditions (and thus containing a zero of $f$), whose lengths are halved at every step.
Under these conditions, the algorithm is:
- Let $a_1=a$, $b_1=b$.
- For $k=1,2,\ldots,\mathrm{MaxIter}$:
- Compute the middle point $x_k=\dfrac{a_k+b_k}{2}$.
- if $f(a_k)\,f(x_k)<0$ then:
$ a_{k+1}=a_{k},\quad b_{k+1}=x_k,$
- else if $f(x_k)\,f(b_k)<0$ then:
$ a_{k+1}=x_{k},\quad b_{k+1}=b_k,$
- else
Stop (we have found the zero).
- If $x_k$ satisfies the stopping criterion, stop.
Since the sequence $\{x_k\}$ with $x_k\rightarrow \alpha$, $f(\alpha)=0$, is, in general, infinite, we must introduce a stopping criterion to stop the algorithm when the approximation is good enough.
Stopping criteria are based on the maximum number of iterations and/or an error estimate: for instance, the estimated absolute error $|x_k-x_{k-1}|$ or the residual $|f(x_k)|$ dropping below a given tolerance.
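The loop above can be sketched as follows. The interface anticipates Exercise 2, and the stopping test chosen here is the half-length of the current interval, one of the criteria just mentioned; take it as one possible layout:

```python
def bisection(f, a, b, tol=1e-6, maxiter=100):
    """Approximate a zero of f in [a, b] by repeated halving of the interval."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for n in range(1, maxiter + 1):
        x = (a + b) / 2                   # middle point
        if f(a) * f(x) < 0:               # zero in the left half
            b = x
        elif f(x) * f(b) < 0:             # zero in the right half
            a = x
        else:                             # f(x) == 0: exact zero found
            return x, n
        if (b - a) / 2 < tol:             # current interval is small enough
            return (a + b) / 2, n
    return (a + b) / 2, maxiter

f = lambda x: x**5 - 3*x**2 + 1.6
sol, n = bisection(f, 0.5, 1.0)           # one of the brackets from Exercise 1
```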
Write a function bisection(f,a,b,tol=1e-6,maxiter=100) whose input arguments are the function f, the interval boundaries a and b, the absolute error bound tol, and the maximum number of iterations maxiter; and, as output arguments, the approximate solution sol and the number of iterations performed n.
(a) Use it to approximate the three zeros of $f(x)= x^5-3x^2+1.6$, with tol = 1e-6 and maxiter = 100. Use as initial guesses for the intervals the ones found in Exercise 1. Print the three zeros and their corresponding numbers of iterations.
(b) Approximate the three zeros of the function $$f(x) = \frac{x^3+1}{x^2+1}\cos(x)- 0.2$$ in the interval $[-3,3]$ with $\mathrm{tol}=10^{-6}$ and $\mathrm{MaxIter}=100.$ Plot the function in the interval $[-3,3]$ and estimate, over the graph, for each zero, an interval $[a,b]$ that contains the zero and meets Bolzano's conditions.
(c) For both cases, plot the function graph and the zero approximations as red dots.
Hints:
Use a numpy array for the zeros. If the number of zeros is m, it may be initialized with r = np.zeros(m).
The zeros can be plotted with plt.plot(r,r*0,'ro').
For the second function, take into account that results may vary depending on the initial guess. To be able to compare results, print the vector r rounded to 5 digits using np.set_printoptions(precision = 5).
%run Exercise2
We use this method when the exact derivative can be computed.
In each step, the function $f$ is replaced by its tangent line, $t$, at the point $x_k$, and the corresponding zero is computed as an approximation to the true solution.
- Let $x_0$ be an initial point.
- For $k=1,2,\ldots,MaxIter$:
- Compute $x_k=x_{k-1}-\dfrac{f(x_{k-1})}{f'(x_{k-1})}$
- If $x_k$ satisfies the stopping criterion, stop.
- Otherwise, perform a new iteration.
In the plot, we see the sequence of tangent lines (in red) for computing the first zero. The initial guess is $x_0=-1$.
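A sketch of the iteration, anticipating the interface of Exercise 3; the stopping test used here is the absolute difference between consecutive iterates:

```python
def newton(f, df, x0, tol=1e-6, maxiter=100):
    """Newton-Raphson iteration for f starting from x0."""
    xk = x0
    for n in range(1, maxiter + 1):
        x_new = xk - f(xk) / df(xk)       # zero of the tangent line at xk
        if abs(x_new - xk) < tol:         # stopping criterion
            return x_new, n
        xk = x_new
    return xk, maxiter

f  = lambda x: x**5 - 3*x**2 + 1.6
df = lambda x: 5*x**4 - 6*x
sol, n = newton(f, df, -1.0)              # x0 = -1, as in the plot
```

Note how few iterations are needed compared with bisection on the same zero.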
Write a function newton(f,df,x0,tol=1e-6,maxiter=100) whose input arguments are the function f, its derivative df, the initial guess x0, the absolute error bound tol, and the maximum number of iterations maxiter; and, as output arguments, the approximate solution sol and the number of iterations performed n.
(a) Use it to approximate the three zeros of $f(x)= x^5 - 3x^2 + 1.6$, with tol = 1e-6 and maxiter = 100. Use as initial guesses the left borders of the intervals found in Exercise 1. Print the three zeros and their corresponding numbers of iterations.
(b) Approximate the three zeros of the function $$f(x) = \frac{x^3+1}{x^2+1}\cos(x) - 0.2$$ in the interval $[-3,3]$ with $\mathrm{tol}=10^{-6}$ and $\mathrm{MaxIter}=100.$ Plot the function in the interval $[-3,3]$ and estimate, over the graph, for each zero, an initial guess $x_0.$
(c) For both cases, plot the function graph and the zero approximations as red dots.
%run Exercise3
Exercise 4
Using the functions from Exercises 1 and 2, write a function zeros_bisec(f,a,b) that, given an interval $[a,b]$ and a continuous function $f$, computes all the real zeros of $f$ contained in $[a,b]$.
The absolute error bound will be tol = 1.e-6, the maximum number of iterations maxiter = 100, and the number of subintervals n = 100.
Use it to compute all the real zeros of
$$ f_1(x) = \cos^2(x)+x/10 $$
Plot the function with its zeros as red dots. The interval $[a,b]$ will be found by trial and error.
Find also all the real zeros of
$$ f_2(x) = x^5-3x^4+x+1.1 $$
Hints:
Use a numpy array for the zeros. If the number of zeros is m = len(intervals), it may be initialized with r = np.zeros(m).
The zeros can be plotted with plt.plot(r,r*0,'ro').
Take into account that results may vary depending on the initial guess. We will use np.set_printoptions(precision = 5), which rounds the printed results so they can be compared.
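One possible layout for zeros_bisec, sketched here with compact inline versions of the incremental search (Exercise 1) and the bisection loop (Exercise 2); the keyword defaults tol, maxiter and n are added for convenience and are not part of the required signature:

```python
import numpy as np

def zeros_bisec(f, a, b, tol=1e-6, maxiter=100, n=100):
    """All real zeros of f in [a, b]: bracket sign changes, refine by bisection."""
    x = np.linspace(a, b, n + 1)
    # incremental search: subintervals with a sign change (Exercise 1)
    brackets = [(x[k], x[k + 1]) for k in range(n) if f(x[k]) * f(x[k + 1]) < 0]
    r = np.zeros(len(brackets))
    for i, (ak, bk) in enumerate(brackets):
        # bisection on each bracket (Exercise 2)
        for _ in range(maxiter):
            m = (ak + bk) / 2
            if f(ak) * f(m) <= 0:         # zero in the left half (or at m)
                bk = m
            else:                         # zero in the right half
                ak = m
            if bk - ak < tol:             # interval small enough
                break
        r[i] = (ak + bk) / 2
    return r

np.set_printoptions(precision=5)
f1 = lambda x: np.cos(x)**2 + x/10
r1 = zeros_bisec(f1, -10, 2)              # interval found by trial and error
print(r1)
```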
%run Exercise4
Exercise 5
Using the functions from Exercises 1 and 3, write a function zeros_newton(f,df,a,b) that, given an interval $[a,b]$ and a differentiable function $f$, computes all the real zeros of $f$ contained in $[a,b]$.
The absolute error bound will be tol = 1.e-6, the maximum number of iterations maxiter = 100, and the number of subintervals n = 100.
Use np.set_printoptions(precision = 5).
Use it to compute all the real zeros of
$$ P_4(x) = x^4+2x^3-7x^2+3 \qquad P_6(x)=x^6 - 0.1x^5 - 17x^4 + x^3 +73 x^2 - 4x - 68$$
%run Exercise5
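One possible layout for zeros_newton, sketched with inline versions of the incremental search (Exercise 1) and the Newton iteration (Exercise 3); as in Exercise 3, the left border of each bracket is used as the initial guess, and the keyword defaults tol, maxiter and n are added for convenience:

```python
import numpy as np

def zeros_newton(f, df, a, b, tol=1e-6, maxiter=100, n=100):
    """All real zeros of f in [a, b]: bracket sign changes, refine with Newton."""
    x = np.linspace(a, b, n + 1)
    # left borders of the subintervals where f changes sign (Exercise 1)
    guesses = [x[k] for k in range(n) if f(x[k]) * f(x[k + 1]) < 0]
    r = np.zeros(len(guesses))
    for i, xk in enumerate(guesses):
        for _ in range(maxiter):          # Newton iteration (Exercise 3)
            x_new = xk - f(xk) / df(xk)
            if abs(x_new - xk) < tol:
                xk = x_new
                break
            xk = x_new
        r[i] = xk
    return r

np.set_printoptions(precision=5)
P4  = lambda x: x**4 + 2*x**3 - 7*x**2 + 3
dP4 = lambda x: 4*x**3 + 6*x**2 - 14*x
r4 = zeros_newton(P4, dP4, -6, 3)
print(r4)
```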
We can calculate the exact derivatives of a function using the sympy module (symbolic Python).
import sympy as sym
We create a symbolic variable
x = sym.Symbol('x', real=True)
Then we define a symbolic function and we can calculate its derivatives
f_sim = sym.cos(x)*(x**3+1)/(x**2+1)
df_sim = sym.diff(f_sim,x)
d2f_sim = sym.diff(df_sim,x)
We can print these functions
print(df_sim)
We can lambdify them to obtain the corresponding numerical functions.
f = sym.lambdify([x], f_sim,'numpy')
df = sym.lambdify([x], df_sim,'numpy')
d2f = sym.lambdify([x], d2f_sim,'numpy')
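As a quick sanity check (an optional aside, not part of the lab), we can compare the lambdified derivative with a central finite difference at some point:

```python
import sympy as sym

x = sym.Symbol('x', real=True)
f_sim = sym.cos(x)*(x**3 + 1)/(x**2 + 1)
f  = sym.lambdify([x], f_sim, 'numpy')
df = sym.lambdify([x], sym.diff(f_sim, x), 'numpy')

h = 1e-6
fd = (f(1.0 + h) - f(1.0 - h)) / (2*h)    # central difference approximation
print(abs(df(1.0) - fd))                  # the two values agree to many digits
```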
For instance, let us draw them with the numerical command plot.
x = np.linspace(-3,3,100)
plt.figure()
plt.plot(x,f(x),x,df(x),x,d2f(x))
plt.legend(['f','df','d2f'],loc='best')
plt.show()