Optimize a function in scipy without explicitly defining the gradient

Question:

I’m currently trying to optimize a function using SciPy. I have some constraints on the variables, and from this link: http://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/optimize.html, it looks like SLSQP is exactly what I want. In their example, they have a well-defined explicit formula for the result in terms of the input, from which they find the gradient. My function, however, is extremely, disgustingly computationally intensive: it calculates how electromagnetic fields bounce off metal walls, and it cannot by any means be expressed in closed form (I’m using the MEEP FDTD Python simulation, if you’re interested). Is there an equivalent function built into SciPy that finds the gradient of a function for you and then optimizes? Or, equivalently, is there a function built into SciPy (any basic Python library would be fine) which would find the gradient of a function for me, which I could then pass into this optimization routine? Any suggestions would be greatly appreciated.

Asked By: QuantumFool


Answers:

Since you cannot easily compute the gradient, it might pay off to use a gradient-free optimization algorithm. Here’s an overview of some available in SciPy:

http://scipy-lectures.github.io/advanced/mathematical_optimization/#gradient-less-methods

There’s also the basin hopping algorithm, which is similar to simulated annealing and not mentioned on that page:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.basinhopping.html
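As a minimal sketch of both approaches (the quadratic objective here is just a cheap stand-in for your expensive simulation):

```python
import scipy.optimize

# Toy stand-in for an expensive black-box objective; in your case
# this function would run the MEEP simulation and return a scalar cost.
def objective(x):
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

# Gradient-free local search: Nelder-Mead needs only function values.
res = scipy.optimize.minimize(objective, [0.0, 0.0], method='Nelder-Mead')
print(res.x)  # close to [1, -2]

# Basin hopping wraps a local minimizer with random perturbations,
# which helps when the landscape has many local minima.
res_bh = scipy.optimize.basinhopping(objective, [0.0, 0.0], niter=10)
print(res_bh.x)
```

Note that each gradient-free iteration still calls your objective, so an expensive simulation stays expensive; these methods just avoid needing derivatives.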

Answered By: cfh

Looks like scipy.optimize.minimize‘s SLSQP doesn’t necessarily require a gradient. See the open-source project arch, around lines 738–748 of the code: they use SLSQP but don’t provide a gradient. Like this:

opt = minimize(
    func,
    sv,
    args=args,
    method="SLSQP",
    bounds=bounds,
    constraints=ineq_constraints,
    tol=tol,
    callback=_callback,
    options=options,
)

Also, on version 1.8.0, the following snippet works:

import scipy.optimize
res = scipy.optimize.minimize(
        lambda x: x[0]**2, [3.],
        method='SLSQP', bounds=[(1., None)])
print(res.success)  # True

which also demonstrates that no gradient is needed.

I haven’t looked into SciPy’s code, but I guess it computes a numerical gradient (by finite differences) if you don’t provide an analytical one.
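You can also compute such a finite-difference gradient yourself with scipy.optimize.approx_fprime and pass it in explicitly via the jac argument; a minimal sketch on a toy function:

```python
import numpy as np
import scipy.optimize

def f(x):
    return x[0]**2 + 3.0 * x[1]**2

# Forward-difference gradient estimate; the last argument is the step size.
grad = scipy.optimize.approx_fprime(np.array([1.0, 2.0]), f, 1e-8)
print(grad)  # approximately [2., 12.]

# Equivalently, supply it as the jac callable to minimize:
res = scipy.optimize.minimize(
    f, [1.0, 2.0], method='SLSQP',
    jac=lambda x: scipy.optimize.approx_fprime(x, f, 1e-8))
print(res.x)  # near [0., 0.]
```

Keep in mind that each finite-difference gradient costs one extra function evaluation per variable, which matters when the objective is an expensive simulation.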

Answered By: Kevin