`machine_learning/linear_regression.py` doesn't give optimal coefficients
Feature description
Related to issue #8844
TL;DR: the current implementation doesn't give optimal solutions, it calculates the SSE incorrectly, and we should add an implementation of a numerical-methods algorithm that actually gives optimal solutions.
In `machine_learning/linear_regression.py`, add the following code at the bottom of the `main()` function:
```python
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt

# `data` and `theta` come from the existing main(): `data` is the dataset
# and `theta` is the coefficient vector found by gradient descent.
data = np.asarray(data.astype(float))
X = data[:, 0].reshape(-1, 1)
y = data[:, 1]

# Fit OLS with sklearn as the reference point.
reg = LinearRegression().fit(X, y)
print(f"Sklearn coefficients: {reg.intercept_}, {reg.coef_}")

# Error metrics for the sklearn fit.
sse = np.sum(np.square(reg.predict(X) - y.T))
print(f"{sse = }")
print(f"mse = {sse / len(y)}")
print(f"half mse = {sse / (2 * len(y))}")

# Plot both regression lines over the data.
plt.scatter(X, y, color="lightgray")
plt.axline(xy1=(0, theta[0, 0]), slope=theta[0, 1], color="red", label="Gradient descent")
plt.axline(xy1=(0, reg.intercept_), slope=reg.coef_, color="blue", label="Sklearn")
plt.legend()
plt.show()
```
This code performs ordinary least squares (OLS) linear regression using `sklearn` as a point of reference to compare the current implementation against. It then calculates the sum of squared errors (SSE), the mean squared error (MSE), and half of the MSE. To compare the outputs visually, the code uses `matplotlib` to plot the `sklearn` regression line and the regression line produced by the current implementation.
The code produces the following command line output:
```
...
At Iteration 100000 - Error is 128.03882
Resultant Feature vector:
-9.34325
1.53067
Sklearn coefficients: -15.547901662158367, [1.6076036]
sse = 253795.17406773588
mse = 253.79517406773587
half mse = 126.89758703386794
```
As we can see, what the implementation reports as the SSE (128.03882) is actually half of the MSE, meaning that the `sum_of_square_error` function is incorrect and needs to be fixed. Why the implementation uses half of the MSE, I have no clue.
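(For context, dividing by `2 * len_data` is the conventional gradient-descent cost, half-MSE, chosen so the factor of 2 cancels when differentiating; but a function named `sum_of_square_error` should return the plain SSE.) A minimal sketch of a corrected version; the signature is assumed from the current file, and the shapes noted below are illustrative:

```python
import numpy as np


def sum_of_square_error(data_x, data_y, len_data, theta):
    """Return the plain SSE: sum of squared residuals, with no averaging or halving.

    Assumed shapes (hypothetical, for illustration): data_x is
    (n_samples, n_features), data_y is (n_samples,), theta is (n_features,).
    `len_data` is kept only for signature compatibility; SSE doesn't use it.
    """
    residuals = data_x @ theta - data_y
    return float(np.sum(residuals**2))
```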
Furthermore, we can see that both the regression coefficients and the errors are slightly off. This is because the current implementation works via gradient descent, meaning that it can only approximate the OLS regression coefficients. Meanwhile, libraries like `numpy` and `sklearn` calculate the mathematically optimal coefficients using numerical methods.
Although using gradient descent to perform linear regression does work, it's still suboptimal and (AFAIK) it's not how linear regression is actually performed in practice. We can still include this implementation, but we should definitely also include an implementation of an optimal numerical method.
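As a sketch of what such a numerical method could look like (the function name and signature here are illustrative, not taken from the repo), OLS can be solved directly with `np.linalg.lstsq`, which minimizes the squared error via an SVD-based solver:

```python
import numpy as np


def ols_fit(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit y ≈ theta[0] + x @ theta[1:] by ordinary least squares.

    x: (n_samples, n_features), y: (n_samples,).
    Returns theta of shape (n_features + 1,), intercept first.
    """
    # Prepend a column of ones so theta[0] plays the role of the intercept.
    design = np.c_[np.ones(len(x)), x]
    # lstsq solves min ||design @ theta - y||^2 exactly (up to float error).
    theta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return theta
```

On the dataset above, this should reproduce sklearn's `intercept_` and `coef_` essentially to machine precision, since both solve the same least-squares problem.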
Assign this task to me :)
@Madhav-2808 We don't assign tasks in this repo. If you want to work on it, then just make a PR.
@tianyizheng02 I tried to run your code to get your graph, but I get this error:
```
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (2, 2) + inhomogeneous part.
```
@liakdimi1 I'm not able to recreate your error. Can you show your code and/or the error traceback?
@tianyizheng02 I changed `slope=reg.coef_` to `slope=reg.coef_[0]` in the `plt.axline` call and it works now; `reg.coef_` is a length-1 array, and `axline` expects a scalar slope.
If the problem is just focusing on the univariate case (as plotted), wouldn't it be worthwhile ditching the iterative method for the closed-form solution, i.e. a simple "sum of rectangular area over sum of square area" method? Especially considering that you would have to manually edit the code to increase the number of features used, apply any data transformation, or alter the design matrix.
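For reference, that "sum of rectangles over sum of squares" estimator is the textbook closed-form solution for simple linear regression: slope = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)². A minimal sketch (the function name is hypothetical):

```python
import numpy as np


def simple_linear_regression(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    """Closed-form OLS for a single feature; returns (intercept, slope)."""
    x_mean, y_mean = x.mean(), y.mean()
    # Numerator: summed "rectangle" areas (x-deviation times y-deviation);
    # denominator: summed "square" areas (x-deviation squared).
    slope = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    intercept = y_mean - slope * x_mean
    return float(intercept), float(slope)
```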
Hey, I have created a PR #9114 which addresses this issue. Please review it. Thanks!
@tianyizheng02 Hello, I can fix this issue in the best possible way. Kindly assign this issue to me.
@mahi01agarwal @ssom01-github Please read the contributing guidelines. As it says, we do not assign issues, so do not ask for permission to work on an issue. If you want to contribute, just open a PR. However, at the moment, please wait until October 1 in your time zone to open a PR because we're currently not accepting new PRs ahead of Hacktoberfest.
Can you assign me this task, please?
Wondering whether we should split this out into
- `univariate_linear_regression.py` => numerical method
- `multivariate_linear_regression.py` => gradient descent approach

and deprecate the `linear_regression.py` file we have. That way individuals can choose.
I can't see any benefit to using this model for a univariate case, and I'm guessing that if the individual importing this file never bothers to check it, they won't know they could be wasting resources.
Before that's done, we should definitely examine why this gradient descent isn't optimal. I'll try and check it out over the weekend :D
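For anyone digging into that, a self-contained experiment on synthetic data (all values here are illustrative) suggests looking at the learning rate and iteration count first: gradient descent on the half-MSE cost does converge to the OLS coefficients when both are adequate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
design = np.c_[np.ones(n), x]  # bias column + single feature
y = design @ np.array([-15.5, 1.6]) + rng.normal(0.0, 2.0, n)

# Gradient descent on the half-MSE cost J(theta) = ||X @ theta - y||^2 / (2n).
theta = np.zeros(2)
learning_rate = 0.01
for _ in range(200_000):
    grad = design.T @ (design @ theta - y) / n
    theta -= learning_rate * grad

# Closed-form OLS solution for comparison.
theta_ols, *_ = np.linalg.lstsq(design, y, rcond=None)
print(theta)      # gradient descent estimate
print(theta_ols)  # should agree with the line above to many decimal places
```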
Hello! My PR #11311 attempts to rectify this issue by implementing OLS instead of gradient descent. Can someone please review my code?