Linear regression (gradient descent) with a single feature

Question:

import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
class gradientdescent:
    def fit(self, X, Y):
        lr = 0.01
        m = 5
        b = 0
        m_gradient = 0
        for _ in range(1000):
            m_gradient = lr * np.sum((m * X + b - Y) * X) / np.size(X)
            b_gradient = lr * np.sum(m * X + b - Y) / np.size(X)
            self.m = m - m_gradient  # this part is giving me conflicting results
            self.b = b - b_gradient  # and this part

    def predict(self, X):
        return self.m * X + self.b

X = np.array([1, 2, 3, 4, 5, 6, 7, 8])
Y = np.array([1, 2, 4, 4, 5, 7, 8, 8])
clf = gradientdescent()
clf.fit(X, Y)
plt.scatter(X, Y, color='black')
plt.plot(X, clf.predict(X))
plt.show()

I have been trying to implement my own linear regression model, but I get an incorrect plot. The plot comes out correct when I replace:

self.m = m - m_gradient
self.b = b - b_gradient

with:

m = m - m_gradient
b = b - b_gradient

self.b = b
self.m = m

Can anyone tell me the difference between the two and why I’m getting different plots?

Asked By: Srivaths Gondi


Answers:

self.m is not the same variable as m. m is a local variable declared inside the fit() method; self.m is an attribute of the class instance that happens to share the name m.
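To see the distinction in isolation, here is a minimal sketch (the class and method names are just illustrative, not from your code):

class Example:
    def set_value(self):
        m = 5         # local variable: exists only while set_value() runs
        self.m = 10   # instance attribute: persists on the object afterwards

e = Example()
e.set_value()
print(e.m)   # prints 10; the local m disappeared when the method returned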

You initialize m once. Then, on every iteration, you calculate a gradient based on m and write the update into self.m, but m itself has NOT changed, so the next iteration computes exactly the same gradient. In effect you take a single gradient step 1000 times. If you print the value of m_gradient inside the loop in your snippet, you should see the same value printed on every iteration.
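For reference, here is a sketch of the corrected loop along the lines of your own working replacement: update the local m and b inside the loop so each iteration sees the new values, and assign them to the instance attributes once at the end.

import numpy as np

class gradientdescent:
    def fit(self, X, Y):
        lr = 0.01
        m, b = 5, 0
        for _ in range(1000):
            # Same update you compute: the error gradient (up to a
            # constant factor) scaled by the learning rate lr
            m_gradient = lr * np.sum((m * X + b - Y) * X) / np.size(X)
            b_gradient = lr * np.sum(m * X + b - Y) / np.size(X)
            # Update the LOCAL variables so the next iteration uses them
            m = m - m_gradient
            b = b - b_gradient
        # Store the final parameters on the instance after the loop
        self.m = m
        self.b = b

    def predict(self, X):
        return self.m * X + self.b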

Answered By: LLSv2.0