We import the modules matplotlib.pyplot and numpy.
import numpy as np
import matplotlib.pyplot as plt
The secant method is the most important variant of the Newton-Raphson method: the tangent line is replaced by a secant line.
The main drawback of the Newton-Raphson method is that we need to compute $f'(x)$ in advance.
In the Secant method we approximate $f'(x)$ by finite differences: $$f'(x_{k-1})\approx\frac{f(x_{k-1})-f(x_{k-2})}{x_{k-1}-x_{k-2}}.$$
In return, we need two initial guesses, $x_0$ and $x_1$, for the first iteration: $$ x_k= x_{k-1} -f( x_{k-1})\frac{x_{k-1}-x_{k-2}}{f(x_{k-1})-f(x_{k-2})}.$$
In the plot, we see the sequence of secants (in red) used to compute the first zero of $f$, taking the initial guesses $x_0=0$ and $x_1=-1$.
[Figure: secant_a]
- Let $x_0$ and $x_1$ be the initial guesses.
- For $k=2,3,\ldots$, up to $\mathrm{MaxIter}$ iterations:
- Compute $$ x_k= x_{k-1} -f( x_{k-1})\frac{x_{k-1}-x_{k-2}}{f(x_{k-1})-f(x_{k-2})}.$$
- If $x_k$ satisfies the stopping criterion, stop.
- Otherwise, perform a new iteration (see the sketch below).
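A minimal sketch of this loop (illustrative only; the exercise below asks you to write your own secant function) could be:
def secant_sketch(f, x0, x1, tol=1e-6, maxiter=100):
    # Illustrative secant iteration: returns an approximate zero and
    # the number of iterations performed.
    for n in range(1, maxiter + 1):
        fx0, fx1 = f(x0), f(x1)
        # secant update: the derivative is replaced by a finite difference
        x2 = x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        if abs(x2 - x1) < tol:   # absolute error estimate
            return x2, n
        x0, x1 = x1, x2
    return x1, maxiter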
Write a function secant(f,x0,x1,tol=1e-6,maxiter=100) whose input arguments are the function f, the two initial guesses x0 and x1, the absolute error bound tol and the maximum number of iterations maxiter; and, as output arguments, the approximate solution sol and the number of iterations performed n.
Use it to approximate the three zeros of $f(x)= x^5 - 3x^2 + 1.6$, with tol = 1e-6 and maxiter = 100.
Use the intervals [a,b] obtained in Exercise 1 (incremental search) from the previous lab, with x0 = a and x1 = b.
Print the three zeros and their corresponding number of iterations.
Plot the curve and the zeros as red dots.
Hints:
- Use a numpy array to store the zeros. It may be initialized with r = np.zeros(3).
- The zeros can be plotted with plt.plot(r,r*0,'ro').
%run Exercise1
SciPy is an open-source library that contains mathematical tools and algorithms for Python. SciPy has modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers and more, for scientific and engineering tasks.
Functions that solve equations are included in the optimization module (scipy.optimize).
If we only want to use this module, we can import it
import scipy.optimize as op
Let us see an example computing the positive solutions of $\sin x-0.1x-0.1=0$
We need the derivative for Newton's method
f = lambda x: np.sin(x) - 0.1*x - 0.1
df = lambda x: np.cos(x) - 0.1
We plot the function
x = np.linspace(-1,20,1000)
plt.figure(figsize=(15,4))
plt.plot(x,f(x))
plt.plot(x,0*x,'k-')
plt.show()
We obtain the zeros, estimating the initial guesses from this plot
x0 = np.array([0., 2., 7., 8.])
raiz = op.newton(f,x0,fprime=df,tol=1.e-6,maxiter=100)
print(raiz)
If we do not pass the derivative as an input argument, the function uses the Secant method
x0 = np.array([0., 2., 7., 8.])
raiz = op.newton(f,x0,tol=1.e-6,maxiter=100)
print(raiz)
For Bisection, we need to estimate the initial interval
x0 = np.array([0., 2., 7., 8.])
x1 = x0 + 1
raiz = np.zeros_like(x0)
for i in range(len(raiz)):
    raiz[i] = op.bisect(f,x0[i],x1[i],xtol=1.e-6,maxiter=100)
    print(raiz[i])
Let us check the solutions graphically
x = np.linspace(-1,9)
plt.figure()
plt.plot(x,f(x),x,x*0,'k',raiz,raiz*0,'ro')
plt.show()
If $f$ is differentiable at a point $x_0$ of its domain and $x_0$ is a relative extremum, then
$$ f'(x_0)=0. $$
That is, extrema are zeros of $f'$.
If $f$ is twice differentiable at a point $x_0$ of its domain and $x_0$ is an inflection point, then
$$ f''(x_0)=0. $$
That is, inflection points are zeros of $f''$.
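For instance (with a simpler toy function, not the $f$ of the exercise below), the two extrema of $h(x)=x^3-3x$ can be obtained as the zeros of $h'(x)=3x^2-3$:
import numpy as np
import scipy.optimize as op

dh = lambda x: 3*x**2 - 3                        # derivative of h(x) = x**3 - 3*x
extrema = op.newton(dh, np.array([-0.5, 0.5]), tol=1.e-6, maxiter=100)
print(extrema)                                   # approximately [-1.  1.]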
Given the function: $$f(x)=x^2+\ln (2x+7) \cos (3 x)+0.1$$
Calculate the relative extrema (maxima and minima) of $f$, that is, the zeros of $f'$:
- Obtain $f'$ with the sympy module (see Note at the bottom).
- With tol = 1e-6, calculate the zeros of the function $f'$ using newton from scipy.optimize.

Calculate the inflection points of $f$ in $[-1,4]$, that is, the zeros of $f''$.
Hint:
NumPy Elementary Mathematical Functions.
SymPy Elementary Mathematical Functions.
Usually, in sympy the functions have the same name but, for instance, instead of np.cos(x) we write sym.cos(x) (taking into account how we have imported numpy and sympy in this lab). This is not a general rule: for instance, np.arctan(x) corresponds to sym.atan(x).
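A quick comparison, using the import aliases of this lab:
import numpy as np
import sympy as sym

x = sym.Symbol('x', real=True)
print(sym.cos(x) + sym.atan(x))       # symbolic expression: atan(x) + cos(x)
print(np.cos(1.0) + np.arctan(1.0))   # numerical value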
%run Exercise2
In the Fixed point method, we replace the equation $f(x)=0$ with an equivalent equation $g(x)=x$, chosen so that a fixed point $\alpha$ of $g$, that is, a value with $g(\alpha)=\alpha$, is also a zero of $f$: $f(\alpha)=0$.
A function $g:[a,b]\to\mathbb{R}$ has a fixed point $\alpha\in [a,b]$ if $g(\alpha)=\alpha$. The Fixed point method is based on the sequence defined by
$$ x_{k+1}=g(x_k),\quad k\geq 0, $$where $x_0$ is the initial guess.
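A minimal sketch of this iteration (illustrative only; an exercise below asks you to write a complete fixedPoint function) could be:
def fixed_point_sketch(g, x0, tol=1e-6, maxiter=200):
    # Iterate x_{k+1} = g(x_k) until two successive iterates are within tol.
    x = x0
    for n in range(1, maxiter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, maxiter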
The choice of $g$ is not unique. Let us see an example.
Suppose
$$f(x)=x^3 - 2 x^2 + 1$$
and we want to find the zeros of $f$, that is, the values of $x$ that satisfy $x^3 - 2 x^2 + 1 = 0$. We can rearrange the equation as $x^3 + 1 = 2 x^2$ and, taking the negative square root,
$$x^3 - 2 x^2 + 1 = 0\quad\implies \quad x^3 + 1 = 2 x^2\quad\implies \quad x = -\sqrt{\frac{x^3+1}{2}}.$$
The solution of this last equation is called a fixed point of the function
$$ g(x) = -\sqrt{\frac{x^3+1}{2}}$$and it will also be a zero of $f$.
Let us check it graphically.
f = lambda x: x**3 - 2*x**2 +1
g = lambda x: - np.sqrt((x**3+1)/2)
a = -1.; b = 0;
x = np.linspace(a, b, 200) # vector with 200 equally spaced points
plt.figure()
plt.plot(x, f(x),'g-',label='f')
plt.plot(x, g(x),'r-',label='g')
plt.plot(x, 0*x,'k-') # OX axis
plt.plot(x, x,'b-',label='y = x')
r = -0.61803
plt.plot(r,0,'ro',label='zero')
plt.plot(r,r,'bo',label='fixed point')
plt.legend(loc='best')
plt.show()
And the $x$ where $f(x)=0$ is also the $x$ where $x=g(x)$.
Thus, the fixed point of
$$ g(x) = -\sqrt{\frac{x^3+1}{2}}\quad (1)$$is a zero of $f$.
Contraction mapping theorem: let $g$ be differentiable and defined on an interval $[a,b] \subset \mathbb{R}$ and let $x_0 \in [a,b]$ be a point of the interval. If
- $g([a,b]) \subseteq [a,b]$, and
- there exists $L<1$ such that $|g'(x)|\leq L$ for all $x\in[a,b]$,

then $g$ has a unique fixed point $\alpha \in [a,b]$, and the sequence $x_n$ defined by $x_{n+1}=g(x_n)$ with initial guess $x_0$ converges to $\alpha$ with order at least linear.
a = -0.8; b = -0.3;
x = np.linspace(a, b)
plt.figure()
plt.plot(x, g(x),'r-', label='g')
plt.plot(x, x, 'b-',label='y = x')
plt.plot([a,b],[b,a],'k-') # Draw the other diagonal
pf = -0.61803
plt.plot(pf,pf,'bo',label='fixed point')
plt.axis([a, b, a, b]) # Graph in [a,b]x[a,b]
plt.legend(loc='best')
plt.show()
In this graph we can see that the conditions of the contraction mapping theorem are satisfied by function $g$ in the interval $[-0.8, -0.3]$. If we take as initial guess a point inside this interval, the sequence generated by the fixed point algorithm converges.
[Figures: fpa, fpb, fpc, fpd]
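As a quick numerical illustration (a rough sampling check and a few iterations, not required by the lab), we can verify the contraction conditions for this $g$ on $[-0.8,-0.3]$ and watch the iterates approach the fixed point $\alpha \approx -0.61803$:
import numpy as np

g = lambda x: -np.sqrt((x**3 + 1)/2)           # the iteration function used above

a, b = -0.8, -0.3
xs = np.linspace(a, b, 1000)
print(a <= g(xs).min() and g(xs).max() <= b)   # g maps [a,b] into itself
print(np.abs(np.gradient(g(xs), xs)).max())    # estimate of max |g'| on [a,b], below 1

x = -0.5                                       # initial guess inside [-0.8, -0.3]
for k in range(8):
    x = g(x)
    print(k + 1, x)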
Write a script that contains a function fixedPoint(g,x0,tol=1e-6,maxiter=200) whose input arguments are the iteration function g, the initial guess x0, the absolute error bound tol and the maximum number of iterations maxiter; and, as output arguments, the approximate solution sol and the number of iterations performed n. Use the default tol and maxiter.
Use it to approximate the zero of
$$f(x)= e^{-x}-x$$
with iteration function
$$g(x)=e^{-x},$$
with $\mathrm{tol}=10^{-6}$ and $\mathrm{maxiter}=200$.
Follow the steps:
- With incrementalSearch in the interval $[0,1]$, find a 0.1 length interval that contains a zero of the function f.
- Compute the fixed point of the iteration function g with fixedPoint. Remember that the fixed point method sequence is defined by $x_{k+1} = g(x_k)$.
- Plot g in red, the straight line y = x in blue and the fixed point in blue.

Repeat the process to find the zeros of
$$f(x)= x - \cos(x)$$
using as iteration functions
$$g_1(x)=\cos(x)\quad g_2(x)=2x-\cos(x) \quad g_3(x)=x-\dfrac{x-\cos(x)}{1+\sin(x)}\quad g_4(x)=\dfrac{9x+\cos(x)}{10}.$$
Decide which sequences converge. Graph the iteration functions, the line y = x in blue and the fixed point in blue.
Hint:
%run Exercise3
Using the secant method and the function newton from the module scipy.optimize, calculate the solution nearest to zero of the equation $\cosh(x)\cos(x)-1=0$. Begin by drawing the function to estimate the initial guesses.

Use as input parameters tol=1e-7 and maxiter=100. Check that it is a solution by drawing it as a red circle in the graph. Print the solution using print('%.6f'%r), where r is the zero.
%run Exercise4
Using the function bisect from the module scipy.optimize, compute $\sqrt[3]{75}$. Start by drawing the function to estimate the initial interval.

Use as input parameters xtol=1e-6 and maxiter=100. Check that it is a solution by drawing it as a red circle in the graph. Print the solution using print('%.6f'%r), where r is the zero.
%run Exercise5
%run Exercise6
We can calculate the exact derivatives of a function using the sympy module (symbolic Python).
import sympy as sym
We create a symbolic variable
x = sym.Symbol('x', real=True)
Then we define a symbolic function and we can calculate its derivatives
f_sim = sym.cos(x)*(x**3+1)/(x**2+1)
df_sim = sym.diff(f_sim,x)
d2f_sim = sym.diff(df_sim,x)
We can print these symbolic derivatives
print(df_sim)
We may lambdify them to use their corresponding numerical functions.
f = sym.lambdify([x], f_sim,'numpy')
df = sym.lambdify([x], df_sim,'numpy')
d2f = sym.lambdify([x], d2f_sim,'numpy')
For instance, let us draw them with the numerical command plot.
x = np.linspace(-3,3,100)
plt.plot(x,f(x),x,df(x),x,d2f(x),x,0*x,'k')
plt.legend(['f','df','d2f'],loc='best')
plt.show()