Implementing an inexact (approximate) Newton algorithm in Python using conjugate gradient

Question:

I want to implement in Python an inexact (approximate) Newton algorithm given by the following pseudocode:

Data: a starting point x_0 in R^n, a function f: R^n -> R with positive definite Hessian, and parameters 0 < c_1 < 1, 0 < rho < 1

Result: an estimate x_{k+1} of the global minimizer of f

Steps:

for k = 0, 1, 2, … until the stopping condition is met do:

eta_k = 1/2 * min{1/2, sqrt(∥∇f(x_k)∥)}

epsilon_k = eta_k * ∥∇f(x_k)∥

p_k = conjugate_gradient(∇²f(x_k), ∇f(x_k), epsilon_k)

alpha_k = backtrack(x_k, p_k, 1.0, c_1, rho)

x_{k+1} = x_k + alpha_k * p_k

If the algorithm is difficult to read here, I have also posted it in my previous question: Implementation of an Inexact Newton Algorithm in Python.

I have already implemented the conjugate_gradient algorithm as:

import numpy as np

def conjugate_grad(A, b, tol=1e-10, maxiter=100000):
    """Solve A x = b with the conjugate gradient method."""
    n = A.shape[0]
    x = np.zeros(n)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    r_old = np.inner(r, r)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = r_old / np.inner(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        r_new = np.inner(r, r)
        if np.sqrt(r_new) < tol:       # stop once the residual norm is small enough
            break
        beta = r_new / r_old
        p = r + beta * p
        r_old = r_new
    return x
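
To show where I am stuck, here is my rough attempt at the outer loop, translated directly from the pseudocode (the name inexact_newton is mine, and it assumes a backtrack(x, p, alpha0, c1, rho) helper that I have not written yet):

def inexact_newton(grad_f, hess_f, backtrack, x0, c1=1e-4, rho=0.5, tol=1e-6, max_iter=100):
    # Rough, untested translation of the pseudocode above.
    x = x0.copy()
    for k in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:                        # stopping condition
            return x
        eta = 0.5 * min(0.5, np.sqrt(np.linalg.norm(g)))   # eta_k
        eps = eta * np.linalg.norm(g)                      # epsilon_k = eta_k * ||grad||
        # Newton system: hess_f(x) p = -grad_f(x), solved only approximately by CG
        p = conjugate_grad(hess_f(x), -g, tol=eps)
        alpha = backtrack(x, p, 1.0, c1, rho)              # the part I am unsure about
        x = x + alpha * p
    return x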

But how do I implement the inexact (approximate) Newton algorithm in Python, how do I deal with the min function when computing eta_k, and how do I build it up correctly with the backtrack? Can anyone help me?

Asked By: Lifeni


Answers:

Here’s an implementation of an inexact (approximate) Newton algorithm in Python, using conjugate gradient together with a backtracking line search:

import numpy as np
from scipy.sparse.linalg import cg

def backtracking_line_search(f, grad_f, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """
    Backtracking line search to find a step size that satisfies the Armijo condition.
    f: the function to be minimized
    grad_f: the gradient of f
    x: the current iterate
    d: the search direction
    alpha: the initial step size
    rho: the backtracking parameter (0 < rho < 1)
    c: the Armijo condition parameter (0 < c < 1)
    """
    while f(x + alpha*d) > f(x) + c*alpha*np.dot(grad_f(x), d):
        alpha *= rho
    return alpha

def newton_cg(f, x0, grad_f, hess_f, tol=1e-6, max_iter=100):
    """
    Inexact Newton method with conjugate gradient for minimizing f.
    f: the function to be solved
    x0: the initial guess
    grad_f: the gradient of f
    hess_f: the Hessian matrix of f
    tol: the tolerance for convergence
    max_iter: the maximum number of iterations
    """
    x = x0
    for i in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            print(f"Converged in {i} iterations.")
            return x
        H = hess_f(x)
        # Compute the (inexact) Newton direction by solving H d = -g with conjugate gradient
        d, _ = cg(H, -g)   # scipy's CG returns (solution, info flag)
        # Perform a line search using backtracking
        alpha = backtracking_line_search(f, grad_f, x, d)
        # Update x
        x = x + alpha*d
    print("Failed to converge.")
    return x

The backtracking_line_search function takes as input the function to be minimized f, the gradient of f grad_f, the current iterate x, the search direction d, and optional parameters alpha, rho, and c for the initial step size, backtracking parameter, and Armijo condition parameter, respectively. It uses backtracking to find a step size that satisfies the Armijo condition.
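
As a quick sanity check (just an illustration, not part of the algorithm itself), the line search can be called on a simple quadratic with the steepest-descent direction:

# Small illustration of backtracking_line_search on f(x) = ||x||^2
q = lambda x: np.dot(x, x)
grad_q = lambda x: 2 * x
x = np.array([1.0, -2.0])
d = -grad_q(x)                                   # steepest-descent direction
step = backtracking_line_search(q, grad_q, x, d)
print(step, q(x + step * d))                     # the returned step satisfies the Armijo condition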

The newton_cg function computes an (inexact) Newton direction with conjugate gradient and then uses backtracking_line_search, rather than a fixed step size, to choose the step length. This should improve the convergence of the algorithm.
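
If you also want the inexact tolerance epsilon_k = eta_k * ∥∇f(x_k)∥ from the pseudocode in the question, rather than a fixed CG tolerance, the direction step inside the loop could be adapted roughly like this (a sketch, using the conjugate_grad function from the question with its tol argument):

# Sketch of the forcing-term version of the direction computation (untested)
g = grad_f(x)
eta = 0.5 * min(0.5, np.sqrt(np.linalg.norm(g)))   # eta_k from the pseudocode
eps = eta * np.linalg.norm(g)                      # epsilon_k
d = conjugate_grad(hess_f(x), -g, tol=eps)         # solve H d = -g only approximately
alpha = backtracking_line_search(f, grad_f, x, d)
x = x + alpha * d

Using a looser CG tolerance in the early iterations saves work while the iterate is still far from the minimizer, and the tolerance tightens automatically as the gradient shrinks.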

Here’s an example of how to use the function to minimize f(x) = x^3 - x^2 - 1, which has a local minimizer at x = 2/3:

def f(x):
    # scalar objective; x is a length-1 array
    return x[0]**3 - x[0]**2 - 1

def grad_f(x):
    return np.array([3*x[0]**2 - 2*x[0]])

def hess_f(x):
    return np.array([[6*x[0] - 2]])

x0 = np.array([1.0])
x = newton_cg(f, x0, grad_f, hess_f)
print(x)  # approximately [0.6667]
Answered By: stitchesguy90