How do I compute derivative using Numpy?

Question:

How do I calculate the derivative of a function, for example

y = x² + 1

using numpy?

Let’s say I want the value of the derivative at x = 5…

Asked By: DrStrangeLove


Answers:

NumPy does not provide general functionality to compute derivatives. It can handle the simple special case of polynomials, however:

>>> import numpy
>>> p = numpy.poly1d([1, 0, 1])
>>> print(p)
   2
1 x + 1
>>> q = p.deriv()
>>> print(q)
2 x
>>> q(5)
10
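If you prefer to stay with plain coefficient arrays instead of poly1d objects, numpy.polyder does the same job; for instance (an illustrative addition, not from the original answer):

>>> numpy.polyder([1, 0, 1])
array([2, 0])
>>> numpy.polyval(numpy.polyder([1, 0, 1]), 5)
10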

If you want to compute the derivative numerically, you can get away with using central difference quotients for the vast majority of applications. For the derivative at a single point, the formula would be something like

x = 5.0
eps = numpy.sqrt(numpy.finfo(float).eps) * (1.0 + x)  # step size scaled to x
print((p(x + eps) - p(x - eps)) / (2.0 * eps))

If you have an array x of abscissae with a corresponding array y of function values, you can compute approximations of derivatives with

numpy.diff(y) / numpy.diff(x)
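For example, a minimal sketch of this on the question’s function (the grid and names here are illustrative assumptions):

import numpy as np

# Sample y = x**2 + 1 on a grid
x = np.linspace(0.0, 10.0, 101)
y = x**2 + 1

# Difference quotients: one fewer entry than x, best read as derivative
# estimates at the midpoints between samples
dydx = np.diff(y) / np.diff(x)
xmid = 0.5 * (x[:-1] + x[1:])

# Estimate near x = 5 (the exact derivative is 10)
print(np.interp(5.0, xmid, dydx))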
Answered By: Sven Marnach

Depending on the level of precision you require, you can work it out yourself using the definition of the derivative as a difference quotient:

>>> (((5 + 0.1) ** 2 + 1) - ((5) ** 2 + 1)) / 0.1
10.09999999999998
>>> (((5 + 0.01) ** 2 + 1) - ((5) ** 2 + 1)) / 0.01
10.009999999999764
>>> (((5 + 0.0000000001) ** 2 + 1) - ((5) ** 2 + 1)) / 0.0000000001
10.00000082740371

We can’t actually take the limit of the difference quotient, but it’s kinda fun.
You gotta watch out, though, because

>>> (((5+0.0000000000000001)**2+1)-((5)**2+1))/0.0000000000000001
0.0
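A small sketch of the same experiment, looping over step sizes to show the error shrinking and then blowing up from floating-point cancellation (added here for illustration):

f = lambda x: x**2 + 1

for h in (1e-1, 1e-2, 1e-6, 1e-10, 1e-16):
    print(h, (f(5 + h) - f(5)) / h)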
Answered By: fraxel

You have four options:

  1. Finite Differences
  2. Automatic Derivatives
  3. Symbolic Differentiation
  4. Compute derivatives by hand.

Finite differences require no external tools but are prone to numerical error and, if you’re in a multivariate situation, can take a while.
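For the multivariate case, a naive central-difference gradient looks something like this (a sketch with made-up names, just to show that it costs two function evaluations per coordinate):

import numpy as np

def numerical_gradient(f, x, h=1e-6):
    # Central differences, one coordinate at a time
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        g[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return g

f = lambda v: v[0]**2 + 3.0*v[1]
print(numerical_gradient(f, [5.0, 2.0]))  # approximately [10.  3.]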

Symbolic differentiation is ideal if your problem is simple enough. Symbolic methods are getting quite robust these days. SymPy is an excellent project for this that integrates well with NumPy. Look at the autowrap or lambdify functions or check out Jensen’s blogpost about a similar question.

Automatic derivatives are very cool and aren’t prone to numeric errors, but do require some additional libraries (google for this, there are a few good options). This is the most robust choice, but also the most sophisticated and difficult to set up. If you’re fine restricting yourself to numpy syntax then Theano might be a good choice.

Here is an example using SymPy

In [1]: from sympy import *
In [2]: import numpy as np
In [3]: x = Symbol('x')
In [4]: y = x**2 + 1
In [5]: yprime = y.diff(x)
In [6]: yprime
Out[6]: 2⋅x

In [7]: f = lambdify(x, yprime, 'numpy')
In [8]: f(np.ones(5))
Out[8]: array([2., 2., 2., 2., 2.])
Answered By: MRocklin

The most straightforward way I can think of is using numpy’s gradient function:

import numpy

x = numpy.linspace(0, 10, 1000)
dx = x[1] - x[0]
y = x**2 + 1
dydx = numpy.gradient(y, dx)

This way, dydx will be computed using central differences and will have the same length as y, unlike numpy.diff, which uses forward differences and returns a vector of size n-1.
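For instance, a quick check of the difference (assuming the arrays from the snippet above):

print(numpy.gradient(y, dx).shape)  # (1000,), same length as y
print((numpy.diff(y) / dx).shape)   # (999,), one element shorter
print(numpy.gradient(y, dx)[500])   # close to 2 * x[500]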

Answered By: Sparkler

I’ll throw another method on the pile…

scipy.interpolate's many interpolating splines are capable of providing derivatives. So, using a linear spline (k=1), the derivative of the spline (using the derivative() method) should be equivalent to a forward difference. I'm not entirely sure, but I believe a cubic spline derivative would be similar to a centered-difference derivative, since it uses values from before and after to construct the cubic spline.

from scipy.interpolate import InterpolatedUnivariateSpline

# Get a function that evaluates the linear spline at any x
f = InterpolatedUnivariateSpline(x, y, k=1)

# Get a function that evaluates the derivative of the linear spline at any x
dfdx = f.derivative()

# Evaluate the derivative dydx at each x location...
dydx = dfdx(x)
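As an illustration (the sample arrays below are assumptions, not part of the original answer), a cubic spline (k=3) fit to the question's function recovers the derivative at x = 5 essentially exactly:

import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Sample y = x**2 + 1 on a coarse grid
x = np.linspace(0, 10, 50)
y = x**2 + 1

spline = InterpolatedUnivariateSpline(x, y, k=3)
dspline = spline.derivative()

print(dspline(5.0))  # very close to 10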
Answered By: flutefreak7

Assuming you want to use numpy, you can numerically compute the derivative of a function at any point using the rigorous definition (a difference quotient with a small h):

def d_fun(x):
    h = 1e-5  # in theory, h is an infinitesimal
    return (fun(x+h)-fun(x))/h

You can also use the Symmetric derivative for better results:

def d_fun(x):
    h = 1e-5
    return (fun(x+h)-fun(x-h))/(2*h)

Using your example, the full code should look something like:

def fun(x):
    return x**2 + 1

def d_fun(x):
    h = 1e-5
    return (fun(x+h)-fun(x-h))/(2*h)

Now, you can numerically find the derivative at x=5:

In [1]: d_fun(5)
Out[1]: 9.999999999621423
Answered By: fabda01

To calculate gradients, the machine learning community uses Autograd:

Efficiently computes derivatives of numpy code.

To install:

pip install autograd

Here is an example:

import autograd.numpy as np
from autograd import grad

def fct(x):
    y = x**2+1
    return y

grad_fct = grad(fct)
print(grad_fct(1.0))

It can also compute gradients of complex functions, e.g. multivariate functions.
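For example, a minimal multivariate sketch (the function here is made up for illustration):

import autograd.numpy as np
from autograd import grad

def f(v):
    return v[0]**2 + 3.0*v[1]

grad_f = grad(f)  # gradient with respect to the whole array argument
print(grad_f(np.array([5.0, 2.0])))  # [10.  3.]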

Answered By: Gordon Schücker

You can use scipy, which is pretty straightforward:

scipy.misc.derivative(func, x0, dx=1.0, n=1, args=(), order=3)

Find the nth derivative of a function at a point.

In your case:

from scipy.misc import derivative

def f(x):
    return x**2 + 1

derivative(f, 5, dx=1e-6)
# 10.00000000139778
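The n parameter gives higher derivatives as well; for example (an illustrative addition, reusing f from above):

# Second derivative of f at x = 5; the exact value is 2
derivative(f, 5, dx=1e-3, n=2)
# approximately 2.0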
Answered By: johnson

To compute the derivative of a sampled (numerical) function, you can use this second-order finite-difference scheme, as seen in:
https://youtu.be/5QnToSn_oxk?t=1804

import numpy as np

dx = 0.01
x = np.arange(-4, 4 + dx, dx)
y = np.sin(x)
n = np.size(x)

yp = np.zeros(n)
# Second-order one-sided differences at the boundaries
yp[0] = (-3*y[0] + 4*y[1] - y[2]) / (2*dx)
yp[n-1] = (3*y[n-1] - 4*y[n-2] + y[n-3]) / (2*dx)
# Central differences in the interior
for j in range(1, n-1):
    yp[j] = (y[j+1] - y[j-1]) / (2*dx)

Or, if you want higher-order accuracy, see:
https://youtu.be/5QnToSn_oxk?t=1374
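For the interior points, a fourth-order central-difference version would look something like this (a sketch added for illustration, reusing the arrays from the snippet above):

# Fourth-order central difference:
# f'(x_j) ≈ (-y[j+2] + 8*y[j+1] - 8*y[j-1] + y[j-2]) / (12*dx)
for j in range(2, n-2):
    yp[j] = (-y[j+2] + 8*y[j+1] - 8*y[j-1] + y[j-2]) / (12*dx)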

All of this comes from Nathan Kutz's lectures for the course "Beginning Scientific Computing".

Answered By: mclzc