My code gives a different result from the one my Machine Learning assignment expects for the same code and input?

Question:

MY CODE

import numpy as np
# sigmoid() is defined in an earlier part of the assignment

def lrCostFunction(theta, X, y, lambda_):
    m = y.size

    if y.dtype == bool:
        y = y.astype(int)

    tempt = theta
    tempt[0] = 0

    J = 0
    grad = np.zeros(theta.shape)
    hx = X.dot(theta.T)
    h = sigmoid(hx)

    J = (1/m) * np.sum(-y.dot(np.log(h)) - (1-y).dot(np.log(1-h)))
    J = J + (lambda_/(2*m)) * np.sum(np.square(tempt))

    grad = ((1/m) * (h - y).dot(X)) + (lambda_/m) * tempt

    return J, grad

# Leftover from the data-visualization part of the assignment:
# rand_indices = np.random.choice(m, 100, replace=False)
# sel = X[rand_indices, :]


theta_t = np.array([-2, -1, 1, 2], dtype=float)
X_t = np.concatenate([np.ones((5, 1)), np.arange(1, 16).reshape(5, 3, order='F')/10.0], axis=1)
y_t = np.array([1, 0, 1, 0, 1])
lambda_t = 3

cost, gradient = lrCostFunction(theta_t, X_t, y_t, lambda_t)
print("J= ", cost, "nGrad= ", gradient)

OUTPUT:

J=  3.0857279966152817 
Grad=  [ 0.35537648 -0.49170896  0.88597928  1.66366752]

whereas the assignment expects these results for the same input:

print('Cost         : {:.6f}'.format(J))
print('Expected cost: 2.534819')
print('-----------------------')
print('Gradients:')
print(' [{:.6f}, {:.6f}, {:.6f}, {:.6f}]'.format(*grad))
print('Expected gradients:')
print(' [0.146561, -0.548558, 0.724722, 1.398003]');

I even searched the internet for answers; everyone had the same code as mine and stated that their result matched the expected output. I went as far as copying their code into my PyCharm IDE, but I got the same answer again.
The inputs are the same too. If you want to read the original exercise, it is "Vectorizing regularized logistic regression".

Link: PYTHON ASSIGNMENT OF ANDREW NG ML COURSE

Link to someone else's solution that has the same code and the right answer:

LINK: A SOLUTION CLAIMING TO PRODUCE THE EXPECTED RESULT FROM THE SAME CODE AS MINE

This happened to me on one part of the last assignment as well; it's really frustrating, so I am reaching out for help.

Asked By: MBAQ


Answers:

Your code is correct. The problem is that tempt = theta does not copy the array: in NumPy, plain assignment just binds a second name to the same array, so tempt[0] = 0 also sets theta[0] to zero. Since this happens before the hypothesis h = sigmoid(X.dot(theta)) is computed, the bias term is silently dropped and both the cost and the gradient come out wrong. Making a copy of theta ensures that the original vector is not changed.
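Here is a minimal sketch of the aliasing (the names a and b are purely illustrative):

import numpy as np

a = np.array([-2.0, -1.0, 1.0, 2.0])
b = a            # no copy: b is just another name for the same array
b[0] = 0
print(a)         # [ 0. -1.  1.  2.]  -- a was modified too

a = np.array([-2.0, -1.0, 1.0, 2.0])
b = np.copy(a)   # independent copy
b[0] = 0
print(a)         # [-2. -1.  1.  2.]  -- a is untouched

With that in mind, here is the corrected function: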

def lrCostFunction(theta, X, y, lambda_):
    m = y.size

    if y.dtype == bool:
        y = y.astype(float)
    
    J = 0
    grad = np.zeros(theta.shape)
    hx = X.dot(theta.T)
    h = sigmoid(hx)

    tempt = np.copy(theta)    # Copy of theta
    tempt[0] = 0
    
    J = (1/m) * np.sum(-y.dot(np.log(h)) - (1-y).dot(np.log(1-h)))
    J = J + (lambda_/(2*m)) * np.sum(np.square(tempt))
    
    grad = ((1/m) * (h - y).dot(X)) + (lambda_/m) * tempt
    
    print(theta, tempt)    # theta is unchanged; only the copy tempt was modified
    return J, grad

cost, gradient = lrCostFunction(theta_t, X_t, y_t, lambda_t)
print("J= ", cost, "nGrad= ", gradient)

# Output:
# [-2. -1.  1.  2.] [ 0. -1.  1.  2.]
# J=  2.534819396109744 
# Grad=  [ 0.14656137 -0.54855841  0.72472227  1.39800296]
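
Both versions assume the sigmoid helper defined earlier in the assignment; if you want to run the snippet standalone, a minimal definition would be:

def sigmoid(z):
    # logistic function: 1 / (1 + exp(-z))
    return 1 / (1 + np.exp(-z))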
Answered By: Luigi Favaro