Porting an objective function from scipy.optimize to the nlopt package
Question:
I have a black-box objective function. It has been used successfully with scipy.optimize
as status = op.basinhopping(obj, sp, ...); however, when I pass the same obj to the NLOPT package, I get
TypeError: <lambda>() takes exactly 1 argument (2 given).
I suppose obj for scipy.optimize
takes two arguments, one being the function itself and the other the derivative in each dimension, while obj used in NLOPT methods requires only the function itself. If I am right about this, how should I modify obj so that it can be used in NLOPT?
My code using NLOPT:
sys.path.insert(0, os.path.join(os.getcwd(), "build/R_ulp"))
import foo as foo_square
reload(foo_square)
sp = np.zeros(foo_square.dim) + args.startPoint
obj = lambda X: foo_square.R(*X)
opt = nlopt.opt(nlopt.GN_CRS2_LM, foo_square.dim)
opt.set_min_objective(obj)
opt.set_lower_bounds(-1e9)
opt.set_upper_bounds(1e9)
opt.set_stopval(0)
opt.set_xtol_rel(1e-9)
opt.set_initial_step(1)
opt.set_population(0)
opt.set_maxeval(100000)
status = opt.optimize([0.111111111]*foo_square.dim)
Answers:
The SciPy optimizers and NLopt have different conventions for the signature of the objective function. The NLopt documentation says the objective function f should be of the form:

def f(x, grad):
    if grad.size > 0:
        ...

So you'll need to create an objective function that accepts two arguments, x and grad. If grad.size > 0, the function must fill in the array with the gradient.
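A minimal sketch of such a wrapper, using a hypothetical sum-of-squares function R as a stand-in for foo_square.R (gradient-free algorithms such as GN_CRS2_LM pass an empty grad, which can be ignored):

```python
import numpy as np

# Hypothetical stand-in for foo_square.R: a simple sum of squares.
def R(*coords):
    return sum(c * c for c in coords)

# NLopt calls the objective as f(x, grad). For gradient-free
# algorithms, grad is an empty array and the branch is skipped.
def obj(x, grad):
    if grad.size > 0:
        # A gradient-based algorithm requires filling grad in place;
        # for this sum-of-squares example the gradient is 2*x.
        grad[:] = 2 * np.asarray(x)
    return R(*x)

# Called the way a gradient-free NLopt algorithm would call it:
value = obj(np.array([1.0, 2.0]), np.array([]))
print(value)  # 5.0
```

The same obj can then be passed to opt.set_min_objective, replacing the one-argument lambda from the question.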
You might also consider functools.partial, if your objective needs additional parameters. Instead of

obj = lambda x, grad, y1: foo_square.R(x, y=y1)

you might use:

from functools import partial
[...]
obj = partial(foo_square.R, y=y1)

obj will also be accepted by NLopt, as the function signature now matches again.
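To make this concrete, here is a small self-contained sketch; R and its keyword parameter y are hypothetical stand-ins for foo_square.R and its extra parameter:

```python
from functools import partial

# Hypothetical objective with an extra parameter y; the first two
# positional parameters match NLopt's expected (x, grad) signature.
def R(x, grad, y=0.0):
    return sum((xi - y) ** 2 for xi in x)

# Bind y ahead of time so the remaining signature is exactly (x, grad).
obj = partial(R, y=3.0)

print(obj([3.0, 4.0], None))  # 1.0
```

Binding the extra parameter up front avoids writing a wrapper lambda and keeps the two-argument signature NLopt expects.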