How to integrate large trigonometric expressions with Python?
Question:
In my program I have to recreate the Galerkin method, for which I have to do some integrations, which I currently do with sympy.integrate.
My problem is that in cases where the equation contains multiple trigonometric functions, or also an added exponential function (it should be able to handle anything), SymPy calculates forever. I'm also working with sympy.symbols, because with those integrations I'm building a system of equations that has to be solved, so I need an antiderivative and not just a value at one point.
Is there a numeric integration method, or something else, that gives back an accurate value?
I tried sympy.Integral(equation).evalf(1), but there the error is way too high, or the returned decimal numbers are so long that the runtime becomes far too high again.
The functions can look like this:
(-4*x**3*sin(2*x) + 2*cos(x)*cos(2*x))*(cos(x) + 1)
and I have to integrate like 20 of these.
Answers:
If you don’t need the integration to be symbolic, and you know the integration bounds, then you could integrate numerically. A pragmatic starting point is the trapezoidal rule: https://en.wikipedia.org/wiki/Trapezoidal_rule. Its accuracy can be increased arbitrarily (within limits) by using finer steps.
Increasing the accuracy with a higher-order rule is a bit more elaborate to program, but numerically more efficient; such an implementation usually only starts to make sense once the computation time exceeds the programming time.
import math

sin = math.sin
cos = math.cos

def f(x):
    return (-4*x**3*sin(2*x) + 2*cos(x)*cos(2*x))*(cos(x) + 1)

x0 = 0      # lower bound
x1 = 2.8    # upper bound (not exactly resolved in this example)
dx = 0.01   # integration step size

F_int = 0
x = x0
while x < x1:
    f_n = f(x)
    f_n_plusdx = f(x + dx)
    dF = 0.5*(f_n + f_n_plusdx)*dx  # area of one trapezoid
    F_int += dF                     # running sum of the integral
    x += dx                         # step to the next interval

print(F_int)
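For the higher-order variant mentioned above, here is a minimal sketch (composite Simpson's rule, standard library only; the helper name simpson is mine, not from any library):

```python
import math

def simpson(f, a, b, n=100):
    """Composite Simpson's rule on [a, b] with n subintervals (n must be even)."""
    if n % 2:
        n += 1  # Simpson's rule needs an even number of subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # interior points alternate weights 4, 2, 4, 2, ...
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Sanity check on a known integral: the integral of sin over [0, pi] is exactly 2.
print(simpson(math.sin, 0, math.pi))  # close to 2, error well below 1e-6
```

The error of Simpson's rule shrinks like h**4 rather than the trapezoid's h**2, so far fewer evaluations are needed for the same accuracy.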
SymPy’s integrate function tries a number of different integration methods, one of which is the Risch algorithm, which can be very slow in some cases. There is also the “manual” integration method, which is not as complete as Risch but suffers less from occasional extreme slowness. There is some description of this here:
https://docs.sympy.org/latest/modules/integrals/integrals.html#internals
The problem in the example that you have given is that it gets stuck in heurisch. So let’s try “manual” instead:
In [1]: expr = (-4*x**3*sin(2*x) + 2*cos(x)*cos(2*x))*(cos(x) + 1)
In [2]: expr
Out[2]:
⎛ 3 ⎞
⎝- 4⋅x ⋅sin(2⋅x) + 2⋅cos(x)⋅cos(2⋅x)⎠⋅(cos(x) + 1)
In [3]: %time anti1 = expr.integrate(x, manual=True)
CPU times: user 39.7 s, sys: 232 ms, total: 39.9 s
Wall time: 43.1 s
In [4]: anti1
Out[4]:
3 3 3 ⌠
8⋅x ⋅cos (x) 3 2 x 4⋅sin (x) sin(4⋅x) ⎮ 2 3
──────────── + 2⋅x ⋅cos(2⋅x) - 3⋅x ⋅sin(2⋅x) - 3⋅x⋅cos(2⋅x) + ─ - ───────── + 2⋅sin(x) + 2⋅sin(2⋅x) + ──────── - 8⋅⎮ x ⋅cos (x) dx
3 2 3 8 ⌡
So that took 40 seconds, but the result is not completely integrated: manualintegrate has left an unevaluated integral in there. We can finish that off using normal integrate by calling doit:
In [5]: %time anti1.doit()
CPU times: user 4.46 s, sys: 142 ms, total: 4.61 s
Wall time: 4.81 s
Out[5]:
8*x**3*cos(x)**3/3 + 2*x**3*cos(2*x) - 16*x**2*sin(x)**3/3 - 8*x**2*sin(x)*cos(x)**2 - 3*x**2*sin(2*x)
 - 32*x*sin(x)**2*cos(x)/3 - 112*x*cos(x)**3/9 - 3*x*cos(2*x) + x/2 + 284*sin(x)**3/27
 + 112*sin(x)*cos(x)**2/9 + 2*sin(x) + 2*sin(2*x) + sin(4*x)/8
So it took another few seconds to get that result. This is now a complete antiderivative as we can verify:
In [6]: simplify(expr - _.diff(x))
Out[6]: 0
That means we can do this particular integral in around 50 seconds with expr.integrate(x, manual=True).doit().
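As a self-contained version of that recipe (shown on a much cheaper integrand so it runs in well under a second; the same two-step call applies to the expression above):

```python
import sympy as sp

x = sp.symbols('x')
# A cheap integrand standing in for the 50-second example:
expr = x**2 * sp.cos(x)

anti = sp.integrate(expr, x, manual=True)  # "manual" route only
anti = anti.doit()                         # finish any unevaluated Integral left behind
print(anti)

# Differentiating the result must reproduce the integrand:
assert sp.simplify(expr - anti.diff(x)) == 0
```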
Actually this particular example can be done in more like 5 seconds if it is rewritten from sin/cos to exp:
In [1]: expr = (-4*x**3*sin(2*x) + 2*cos(x)*cos(2*x))*(cos(x) + 1)
In [2]: %time expr.rewrite(exp).expand().integrate(x).expand().rewrite(sin).simplify()
CPU times: user 5.3 s, sys: 21.2 ms, total: 5.32 s
Wall time: 5.33 s
Out[2]:
2*x**3*cos(x) + 2*x**3*cos(2*x) + 2*x**3*cos(3*x)/3 - 6*x**2*sin(x) - 3*x**2*sin(2*x)
 - 2*x**2*sin(3*x)/3 - 12*x*cos(x) - 3*x*cos(2*x) - 4*x*cos(3*x)/9 + x/2 + 13*sin(x)
 + 2*sin(2*x) + 13*sin(3*x)/27 + sin(4*x)/8
In [3]: simplify(expr - _.diff(x))
Out[3]: 0
Although this answer looks different from the previous one, there are infinitely many ways of rewriting trig expressions using trig identities; the two results should be equivalent up to an additive constant (as expected for antiderivatives).
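To make that last point concrete, here is a tiny illustration on a much cheaper integrand (sin(x)*cos(x), chosen purely for the demo): two antiderivatives reached via different trig forms differ only by a constant.

```python
import sympy as sp

x = sp.symbols('x')
# Two antiderivatives of sin(x)*cos(x), reached via different trig identities:
a1 = sp.sin(x)**2 / 2
a2 = -sp.cos(x)**2 / 2

# Both differentiate back to the integrand...
assert sp.simplify(a1.diff(x) - sp.sin(x)*sp.cos(x)) == 0
assert sp.simplify(a2.diff(x) - sp.sin(x)*sp.cos(x)) == 0

# ...and they differ only by a constant (1/2, via sin(x)**2 + cos(x)**2 = 1):
print(sp.simplify(a1 - a2))
```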