Implementing the backtracking line search algorithm for an unconstrained optimization problem

Question:

I cannot wrap my head around how to implement the backtracking line search algorithm in Python. The algorithm itself is:
[image: backtracking line search pseudocode]

Another form of the algorithm is:
[image: backtracking line search pseudocode, alternative form]

In theory, they are the exact same.
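In words (this is the standard backtracking/Armijo rule, e.g. from Boyd & Vandenberghe, which matches the parameters I use below): pick a descent direction Δx, start with t = 1, and while

    f(x + t·Δx) > f(x) + α·t·∇f(x)ᵀ·Δx

shrink the step with t ← β·t. For gradient descent the direction is Δx = −∇f(x), so the condition becomes f(x − t·∇f(x)) > f(x) − α·t·‖∇f(x)‖².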

I am trying to implement this in Python to solve an unconstrained optimization problem with a given starting point. This is my attempt so far:

import numpy as np

def func(x):
    return  # my function with inputs x1, x2

def grad_func(x):
    df1 = ...  # derivative with respect to x1
    df2 = ...  # derivative with respect to x2
    return np.array([df1, df2])

def backtrack(x, gradient, t, a, b):
    '''
    x: the initial values given
    gradient: the initial gradient direction for the given initial value
    t: t is initialized at t=1
    a: alpha value in (0, .5). I set it to .3
    b: beta value in (0, 1). I set it to .8
    '''
    return t

# Define the initial point, step size, and alpha/beta constants
x0, t0, alpha, beta = [x1, x2], 1, .3, .8

# Find the gradient of the initial value to determine the initial slope
direction = grad_func(x0)

t = backtrack(x0, direction, t0, alpha, beta)

Can anyone provide any guidance on how best to implement the backtracking algorithm? I feel that I have all the information I need, but I just do not understand how to implement it in code.

Asked By: RocketSocks22


Answers:

import numpy as np

alpha = 0.3
beta = 0.8

f = lambda x: x[0]**2 + 3*x[1]*x[0] + 12
dfx1 = lambda x: 2*x[0] + 3*x[1]   # partial derivative with respect to x1
dfx2 = lambda x: 3*x[0]            # partial derivative with respect to x2

t = 1
count = 1
x0 = np.array([2, 3])
dx0 = np.array([0.1, 0.05])        # not used below


def backtrack(x0, dfx1, dfx2, t, alpha, beta, count):
    # Gradient at the (fixed) starting point x0
    grad = np.array([dfx1(x0), dfx2(x0)])
    # Shrink t by beta while the sufficient-decrease (Armijo) condition is violated,
    # i.e. while f(x0 - t*grad) > f(x0) - alpha*t*||grad||^2
    while f(x0) - (f(x0 - t*grad) + alpha * t * np.dot(grad, grad)) < 0:
        t *= beta
        print("""

########################
###   iteration {}   ###
########################
""".format(count))
        print("Inequality: ", f(x0) - (f(x0 - t*grad) + alpha * t * np.dot(grad, grad)))
        count += 1
    return t


t = backtrack(x0, dfx1, dfx2, t, alpha, beta, count)

print("\nfinal step size :", t)

Output:

########################
###   iteration 1   ###
########################

Inequality:  -143.12


########################
###   iteration 2   ###
########################

Inequality:  -73.22880000000006


########################
###   iteration 3   ###
########################

Inequality:  -32.172032000000044


########################
###   iteration 4   ###
########################

Inequality:  -8.834580480000021


########################
###   iteration 5   ###
########################

Inequality:  3.7502844927999845

final step size : 0.32768000000000014
[Finished in 0.257s]
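Note that the code above only selects a step size at the fixed starting point x0. A rough sketch of how backtrack could be wrapped in a full gradient-descent loop (reusing f, dfx1, dfx2, alpha, beta from above; the iteration cap and tolerance are arbitrary illustration values, and since this particular f is an indefinite quadratic with no minimizer, the loop will simply run until the cap, whereas with a convex f it would converge):

x = np.array([2.0, 3.0])
for i in range(100):                          # arbitrary iteration cap
    grad = np.array([dfx1(x), dfx2(x)])
    if np.linalg.norm(grad) < 1e-6:           # stop once the gradient is (nearly) zero
        break
    # fresh t = 1 each outer iteration; backtrack prints its shrink iterations
    step = backtrack(x, dfx1, dfx2, 1, alpha, beta, 1)
    x = x - step * grad                       # damped gradient step
print("final iterate:", x)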
Answered By: Hadi Farah

I did the same thing in MATLAB; here's the code:

syms params
f = @(params) % your function ;

gradient_f = [diff(f, param1); diff(f, param2); diff(f, param3); ...];
x0 = % first value ;
norm_gradient_zero = % norm of gradient_f(x0) ;

ov = % value to optimize ;
a = % alpha ;
b = % beta ;

while f(ov, 0) - (f(x0) - ov*b*norm_gradient_zero^2) > 0
    ov = a*ov;
end

disp(ov)
Answered By: Tomthecat