Numerical differentiation


First derivative of a one-variable function

We seek to approximate \(f'\) at a point \(\hat x \in [a,b]\subset \mathbb{R}\) by using the values of \(f\) at a small number of points close to \(\hat x\).

The two main reasons for doing this are that \(f\) may be known only through its values at a discrete set of points (for instance, experimental data), and that an analytic expression for \(f'\) may be unavailable or too costly to evaluate.

From the definition of derivative \[ f'(\hat x)=\lim_{h\to0}\frac{f(\hat x+h)-f(\hat x)}{h} \]

we obtain three main approximation finite differences schemes: \[ \textbf{Forward: } (\delta^+ f)( x_{i})=\frac{f(x_{i+1})-f( x_{i})}{h}, \qquad\textbf{Backward: } (\delta^- f)( x_{i})=\frac{f(x_{i})-f(x_{i-1})}{h}, \qquad\textbf{Centered: }(\delta f)(x_{i})=\frac{1}{2}\big((\delta^+ f)(x_{i})+(\delta^- f)(x_{i})\big)= \frac{f(x_{i+1})-f(x_{i-1})}{2h} \]

Observe that with these formulas we cannot compute the derivative at, at least, one of the endpoints of the interval. For instance, $(\delta^+ f)(b)$ needs the value $f(b+h)$, which, in general, is unknown.

Matlab commands. Matlab is more efficient when computations are vectorized. For the exercises below, you may find useful the command diff(X), which computes $(X_2-X_1, X_3-X_2,\ldots,X_n-X_{n-1})$, and the command norm(X), which computes the Euclidean norm of the vector $X$, i.e. \[ \text{norm}(X)=\sqrt{\sum_{i=1}^n X_i^2}.\]
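For reference, the same two operations look like this in NumPy (an illustrative sketch; np.diff and np.linalg.norm play the roles of Matlab's diff and norm):

```python
import numpy as np

X = np.array([1.0, 4.0, 9.0, 16.0])

# np.diff: successive differences X_{i+1} - X_i, like Matlab's diff(X)
D = np.diff(X)                    # -> [3. 5. 7.]

# np.linalg.norm: the Euclidean norm, like Matlab's norm(X)
n = np.linalg.norm([3.0, 4.0])    # -> 5.0
```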

Exercise 7.1. For the function \(f(x)=e^x\) in \([0,1]\), and for \(h=0.01\) and \(h=0.001\), compute the global relative errors of the forward, backward and centered approximations of \(f'\).

The solution is:

step size and global relative errors (h,f,b,c) = 	0.010	 5.016708e-03	 4.983375e-03	 1.666675e-05
step size and global relative errors (h,f,b,c) = 	0.001	 5.001667e-04	 4.998334e-04	 1.666667e-07

We see that the centered scheme gives better results: dividing \(h\) by 10 divides its error by 100, while the forward and backward errors decrease only by a factor of 10. This is because the forward and backward schemes are of order 1, while the centered scheme is of order 2.
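The computation behind these numbers can be sketched in NumPy (a hypothetical Python translation, assuming the global relative error is norm(approx − exact)/norm(exact) over the nodes where each scheme applies):

```python
import numpy as np

# Reproduce Exercise 7.1 for h = 0.01 (f = exp, so f' = f and the exact values are easy)
f = np.exp
h = 0.01
x = np.arange(0.0, 1.0 + h/2, h)
y = f(x)

fwd = np.diff(y) / h              # forward differences: f'(x_i), i = 1..n-1
bwd = np.diff(y) / h              # same numbers read as backward: f'(x_i), i = 2..n
ctr = (y[2:] - y[:-2]) / (2*h)    # centered differences: f'(x_i), i = 2..n-1

# global relative error: norm(approx - exact) / norm(exact) on the usable nodes
err = lambda approx, exact: np.linalg.norm(approx - exact) / np.linalg.norm(exact)
e_f = err(fwd, f(x[:-1]))
e_b = err(bwd, f(x[1:]))
e_c = err(ctr, f(x[1:-1]))
print(h, e_f, e_b, e_c)   # ≈ 0.01  5.0167e-03  4.9834e-03  1.6667e-05
```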

As mentioned above, at the endpoints of the interval the schemes need to evaluate $f$ outside $[a,b]$. To overcome this difficulty we shall use the second-order Lagrange interpolating polynomial at \(x_1\), \(x_2\), \(x_3\), with \(y_j=f(x_j)\). We have

\[ p(x)=[y_1]+[y_1,y_2](x-x_1)+[y_1,y_2,y_3](x-x_1)(x-x_2),\]

with \[[y_1]=y_1,\quad [y_1,y_2]=\frac{y_1-y_2}{x_1-x_2}=\frac{y_2-y_1}{h}, \quad [y_1,y_2,y_3]=\frac{[y_1,y_2]-[y_2,y_3]}{x_1-x_3}=\frac{1}{2h}\Big(\frac{y_1-y_2}{h} -\frac{y_2-y_3}{h}\Big)= \frac{1}{2h^2}(y_1-2y_2+y_3),\]

and therefore, we deduce \[ f'(x_1)\approx p'(x_1)=\frac{1}{2h}(-3f(x_1)+4f(x_2)-f(x_3)), \qquad f'(x_3)\approx p'(x_3)=\frac{1}{2h}(f(x_1)-4f(x_2)+3f(x_3)) \qquad (1)\]
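Since formulas (1) come from differentiating a degree-2 interpolant, they are exact on quadratic polynomials; a quick NumPy check (an illustrative sketch, with a test function g chosen arbitrarily):

```python
import numpy as np

def d_left(f, a, h):
    # f'(x1) ≈ (-3 f(x1) + 4 f(x2) - f(x3)) / (2h), first formula in (1)
    return (-3*f(a) + 4*f(a + h) - f(a + 2*h)) / (2*h)

def d_right(f, b, h):
    # f'(x3) ≈ (f(x1) - 4 f(x2) + 3 f(x3)) / (2h), second formula in (1)
    return (f(b - 2*h) - 4*f(b - h) + 3*f(b)) / (2*h)

# exact on quadratics: for g(x) = 3x^2 - x + 2, g'(1) = 5
g = lambda x: 3*x**2 - x + 2
print(d_left(g, 1.0, 0.1))    # 5.0 (up to rounding)
print(d_right(g, 1.0, 0.1))   # 5.0 (up to rounding)
```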

Exercise 7.2. For \(f(x)=\frac{1}{x}\) in \([0.2,1.2]\), and $h=0.01$, compute the global relative errors of an order 1 and an order 2 approximation of \(f'\) on the whole interval, using formulas (1) at the endpoints for the order 2 scheme.

The solution is:

step size and global relative errors (h,order 1,order 2) = 	0.01	 1.786397e-02	 2.171714e-03

Second derivative of a one-variable function

From the interpolating polynomial \[ p(x)=[y_1]+[y_1,y_2](x-x_1)+[y_1,y_2,y_3](x-x_1)(x-x_2) \] we obtain \[p''(x_2)=2[y_1,y_2,y_3]= \frac{1}{h^2}(y_1-2y_2+y_3)\qquad (2)\]

Exercise 7.3. For \(f(x)=\sin(2\pi x)\) in \([0,1]\), and $h=0.01$, approximate \(f''\) using formula (2) and compute the global relative error.

The solution is:

step size and global relative error (h, RE) = 	0.01	 3.289435e-04
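The computation is analogous to the first-derivative case; a NumPy sketch that reproduces this error (assuming the error is measured over the interior nodes, where formula (2) applies):

```python
import numpy as np

h = 0.01
x = np.arange(0.0, 1.0 + h/2, h)
y = np.sin(2*np.pi*x)

# formula (2): f''(x_i) ≈ (y_{i-1} - 2 y_i + y_{i+1}) / h^2, interior nodes only
d2 = (y[:-2] - 2*y[1:-1] + y[2:]) / h**2
exact = -(2*np.pi)**2 * np.sin(2*np.pi*x[1:-1])

err = np.linalg.norm(d2 - exact) / np.linalg.norm(exact)
print(h, err)   # ≈ 0.01  3.2894e-04
```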

Differentials of two-variable functions

Let \(\Omega = [a,b]\times[c,d]\subset \mathbb{R}^2\) and consider uniform partitions \begin{eqnarray*} a=x_1<x_2<\ldots<x_{M-1}<x_M=b \\ c=y_1<y_2<\ldots<y_{N-1}<y_N=d \end{eqnarray*} Denote a point of $\Omega$ by \((x_m,y_n)\). Then, its neighboring points are \((x_{m-1},y_n)\), \((x_{m+1},y_n)\), \((x_m,y_{n-1})\) and \((x_m,y_{n+1})\).

We may use the same ideas as those used for one-variable functions. For instance, \[ \partial_x f(x_m,y_n)\approx \frac{f(x_{m+1},y_n)-f(x_{m-1},y_n)}{2h_x}, \qquad \partial_y f(x_m,y_n)\approx \frac{f(x_m,y_{n+1})-f(x_m,y_{n-1})}{2h_y}. \]

Before getting into the matter, let us recall the basics of plotting surfaces and vector functions with Matlab.

If we want to plot the scalar function \(f(x,y)=x^2+y^2\) for \(x \in [0,5]\) with step \(h_x=1\) and \(y \in [-2,2]\) with step \(h_y=1\), we use

[x,y] = meshgrid(0:1:5,-2:1:2)

x =
     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5
     0     1     2     3     4     5
y =
    -2    -2    -2    -2    -2    -2
    -1    -1    -1    -1    -1    -1
     0     0     0     0     0     0
     1     1     1     1     1     1
     2     2     2     2     2     2

In this way we have generated two matrices with the coordinates of the points of the given domain. Now, we may compute the values of \(z=f(x,y)=x^2+y^2\) for each pair \((x_{ij},y_{ij})\), \(i=1,\ldots,m,\,j=1,\ldots,n\), where \(m\) and \(n\) are the number of rows and columns of $x$ and $y$ (their size, in Matlab notation).

f=@(x,y) x.^2+y.^2;
z=f(x,y)
z =
     4     5     8    13    20    29
     1     2     5    10    17    26
     0     1     4     9    16    25
     1     2     5    10    17    26
     4     5     8    13    20    29

With these matrices we have all we need to plot \(f\). The functions contour and surf are the most commonly used:

subplot(2,2,1), contourf(x,y,z)
subplot(2,2,2), contour(x,y,z)
subplot(2,2,3), surf(x,y,z)
subplot(2,2,4), mesh(x,y,z)

For plotting a vector function \((u_1,u_2)\) we use meshgrid and quiver, where the matrices u1 and u2 contain the values of the components at the grid points:

figure, quiver(x,y,u1,u2)

Exercise 7.4. In this exercise we compute the gradient of a function using the Matlab command gradient: [fx,fy] = gradient(f,hx,hy) returns the numerical gradient of the matrix f, using the spacing specified by hx and hy.

relative error = 	1.225674e-02
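NumPy's np.gradient plays a similar role (an illustrative sketch, not the Matlab solution; note that numpy returns the derivative along axis 0, the y direction for meshgrid-style arrays, first). For a quadratic such as \(f(x,y)=x^2+y^2\) the interior centered differences are exact:

```python
import numpy as np

hx, hy = 0.1, 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0 + hx/2, hx),
                   np.arange(0.0, 1.0 + hy/2, hy))   # Matlab-style 'xy' indexing
z = x**2 + y**2

# np.gradient returns one array per axis: axis 0 is rows (y), axis 1 is columns (x)
fy, fx = np.gradient(z, hy, hx)

# exact gradient is (2x, 2y); centered differences reproduce it on interior nodes
print(np.max(np.abs(fx[:, 1:-1] - 2*x[:, 1:-1])))   # ~0 (interior columns)
print(np.max(np.abs(fy[1:-1, :] - 2*y[1:-1, :])))   # ~0 (interior rows)
```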

Exercise 7.5. In this exercise we compute the divergence of a vector function using the Matlab command divergence: div = divergence(x,y,f1,f2) computes the divergence of a 2-D vector field $\mathbf{f}=(f_1(x,y),f_2(x,y))$. The arrays $x,y$ define the coordinates for $(f_1,f_2)$ and may be produced by meshgrid.

relative error = 	9.333167e-03 2.698803e-16
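NumPy has no single counterpart of divergence, but it can be assembled from np.gradient (an illustrative sketch, on a vector field chosen here only for checking, \(\mathbf{f}=(x^2,y^2)\), whose exact divergence is \(2x+2y\)):

```python
import numpy as np

hx = hy = 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0 + hx/2, hx),
                   np.arange(0.0, 1.0 + hy/2, hy))

# vector field f = (f1, f2) = (x^2, y^2); exact divergence = 2x + 2y
f1, f2 = x**2, y**2

# divergence = d(f1)/dx + d(f2)/dy  (axis 1 is x, axis 0 is y for meshgrid arrays)
div = np.gradient(f1, hx, axis=1) + np.gradient(f2, hy, axis=0)

# quadratic components, so the centered differences are exact on interior nodes
interior = div[1:-1, 1:-1] - (2*x + 2*y)[1:-1, 1:-1]
print(np.max(np.abs(interior)))   # ~0
```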

Exercise 7.6. In this exercise we compute the Laplacian of a function using the Matlab command del2: $L = del2(f,h_x,h_y)$, when $f$ is a matrix, is a discrete approximation of $\frac{1}{4} (\partial_{xx} f(x,y)+\partial_{yy} f(x,y))$. The matrix $L$ is the same size as $f$, with each element equal to the difference between an element of $f$ and the average of its four neighbors.

relative error = 	5.914553e-03
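The five-point stencil behind del2 can also be written directly (a NumPy sketch; the stencil below gives the full Laplacian on interior nodes, while Matlab's del2 returns this divided by 4, as stated above). For \(u(x,y)=x^2+y^2\) the exact Laplacian is 4 everywhere, and the stencil is exact on quadratics:

```python
import numpy as np

h = 0.1
x, y = np.meshgrid(np.arange(0.0, 1.0 + h/2, h),
                   np.arange(0.0, 1.0 + h/2, h))
u = x**2 + y**2   # exact Laplacian: 4 everywhere

# five-point stencil on interior nodes: (u_W + u_E + u_S + u_N - 4 u_C) / h^2
lap = (u[1:-1, :-2] + u[1:-1, 2:] + u[:-2, 1:-1] + u[2:, 1:-1]
       - 4*u[1:-1, 1:-1]) / h**2

print(np.max(np.abs(lap - 4.0)))   # ~0: exact on quadratics, up to rounding
```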