Scipy.optimize: how to restrict argument values
Question:
I’m trying to use scipy.optimize functions to find the global minimum of a complicated function of several arguments. scipy.optimize.minimize seems to do the job best, specifically with the ‘Nelder-Mead’ method. However, it tends to wander outside the arguments’ domain (assigning negative values to arguments that can only be positive) and then raises an error. Is there a way to restrict the arguments’ bounds within the scipy.optimize.minimize function itself, or perhaps within other scipy.optimize functions?
I’ve found the following advice:
When the parameters fall outside the admissible range, return a wildly huge number (far from the data to be fitted). This will (hopefully) penalize that choice of parameters so much that curve_fit will settle on some other admissible set of parameters as optimal.
given in a previous answer, but that procedure would add a lot of computation time in my case.
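For reference, that penalty-style workaround can be sketched as follows. The objective f and its positivity requirement are hypothetical placeholders, not from the question:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # hypothetical objective, only defined for positive arguments
    # (minimum at x = [1, 1], where log(x) = 0)
    return np.sum(np.log(x) ** 2)

def penalized_f(x):
    # return a huge value outside the admissible region
    # instead of letting f raise an error
    if np.any(x <= 0):
        return 1e12
    return f(x)

res = minimize(penalized_f, x0=[2.0, 3.0], method='Nelder-Mead')
print(res.x)  # close to [1, 1]
```

As the question notes, the discontinuous penalty can slow convergence; the bounds-aware solvers below avoid it.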
Answers:
The minimize function has a bounds parameter which can be used to set bounds on each variable when using the L-BFGS-B, TNC, COBYLA or SLSQP methods.
For example,
import scipy.optimize as optimize
fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
res = optimize.minimize(fun, (2, 0), method='TNC', tol=1e-10)
print(res.x)
# [ 1. 2.49999999]
bnds = ((0.25, 0.75), (0, 2.0))
res = optimize.minimize(fun, (2, 0), method='TNC', bounds=bnds, tol=1e-10)
print(res.x)
# [ 0.75 2. ]
The Nelder-Mead solver doesn’t support constrained optimization, but several other methods do. TNC and L-BFGS-B support only bound constraints (e.g. x[0] >= 0), which should be fine for your case. COBYLA and SLSQP are more flexible, supporting any combination of bounds, equality constraints, and inequality constraints.
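As a side note, newer SciPy releases (1.7 and later) do accept a bounds argument for Nelder-Mead as well, clipping the simplex to the box. A minimal sketch, reusing the objective from the example above:

```python
import scipy.optimize as optimize

fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
bnds = ((0.25, 0.75), (0, 2.0))
# bounds with Nelder-Mead requires SciPy >= 1.7;
# older versions warn and ignore them
res = optimize.minimize(fun, (0.5, 1.0), method='Nelder-Mead', bounds=bnds)
print(res.x)  # close to [0.75, 2.0]
```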
You can find more detailed information about the solvers in the docs for the corresponding standalone functions, e.g. scipy.optimize.fmin_slsqp for method='SLSQP'.
You can see my previous answer here for an example of constrained optimization using SLSQP.
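As an illustration, a minimal sketch of SLSQP combining bounds with an inequality constraint (the quadratic objective is the same illustrative one used above; note that 'ineq' constraint functions are interpreted as fun(x) >= 0):

```python
import scipy.optimize as optimize

fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
# inequality constraint: x[0] + x[1] >= 4
cons = ({'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 4},)
# both variables must be non-negative
bnds = ((0, None), (0, None))
res = optimize.minimize(fun, (2, 0), method='SLSQP',
                        bounds=bnds, constraints=cons)
print(res.x)  # ~ [1.25, 2.75]
```

The unconstrained minimum (1, 2.5) violates the constraint, so the solver lands on the nearest feasible point, (1.25, 2.75).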
I know this is late in the game, but maybe have a look at mystic. You can apply arbitrary Python functions as penalty functions, or apply bounds constraints, and more, on any optimizer (including the algorithm from scipy.optimize.fmin).
The argument you are looking for is constraints, which is one of the arguments passed to scipy.optimize.minimize. Roll your own constraint function that receives the parameters, like this:
import numpy as np
import scipy.optimize as spo

# A function to define the space where scipy.optimize.minimize
# should confine its search:
def apply_sum_constraint(inputs):
    # For an 'eq' constraint, the return value must be 0 for the
    # candidate to be accepted; anything other than 0 is rejected
    # as not a valid answer.
    total = 50.0 - np.sum(inputs)
    return total

my_constraints = ({'type': 'eq', 'fun': apply_sum_constraint})
result = spo.minimize(f,
                      guess,
                      method='SLSQP',
                      args=(a, b, c),
                      bounds=((-1.0, 1.0), (-1.0, 1.0)),
                      options={'disp': True},
                      constraints=my_constraints)
The above example requires that every candidate the solver considers sum to 50. Adapt the constraint function to define your permissible search space, and scipy.optimize.minimize will waste no effort considering answers outside it. (Note that the constraint must be consistent with the bounds: two variables each bounded by (-1.0, 1.0) can never sum to 50, so in practice choose a target that is feasible within the bounds.)
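Filling in a concrete objective, the snippet above runs end to end like this. The quadratic f, its coefficients, the starting guess, and the sum target (lowered to 1.0 so the equality constraint is feasible inside the ±1 bounds) are all hypothetical placeholders:

```python
import numpy as np
import scipy.optimize as spo

def f(x, a, b, c):
    # hypothetical objective: a weighted quadratic
    return a * x[0]**2 + b * x[1]**2 + c

def apply_sum_constraint(inputs):
    # equality constraint: admissible points must sum to 1.0
    return 1.0 - np.sum(inputs)

my_constraints = ({'type': 'eq', 'fun': apply_sum_constraint})
a, b, c = 1.0, 2.0, 0.0
guess = np.array([0.5, 0.5])

result = spo.minimize(f,
                      guess,
                      method='SLSQP',
                      args=(a, b, c),
                      bounds=((-1.0, 1.0), (-1.0, 1.0)),
                      constraints=my_constraints)
print(result.x)  # ~ [0.667, 0.333]
```

With a = 1 and b = 2, the analytic minimum on the line x0 + x1 = 1 is (2/3, 1/3), which SLSQP recovers.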