
LogisticRegression cannot train from Dask DataFrame

Open julioasotodv opened this issue 8 years ago • 33 comments

A simple example:

from dask import dataframe as dd
from dask_glm.datasets import make_classification
from dask_ml.linear_model import LogisticRegression

X, y = make_classification(n_samples=10000, n_features=2)

X = dd.from_dask_array(X, columns=["a","b"])
y = dd.from_array(y)

lr = LogisticRegression()
lr.fit(X, y)

Returns KeyError: (<class 'dask.dataframe.core.DataFrame'>,)

I have not had time to check whether other models are affected as well.

julioasotodv avatar Nov 04 '17 02:11 julioasotodv

Thanks. At the moment the dask_glm-based estimators only work with dask arrays, not dataframes. You can use .values to get the underlying array.
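
Applied to the example above, that's a one-line change (a minimal sketch):

lr = LogisticRegression()
lr.fit(X.values, y.values)  # .values returns the dask array backing a dask dataframe/series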

I'm hoping to put in some helpers for handling all the extra DataFrame metadata sometime soon, so this will be more consistent across estimators.

TomAugspurger avatar Nov 06 '17 16:11 TomAugspurger

Thank you so much for the quick response!

The problem is that when fitting a GLM with an intercept (which is usually the case), the dask array containing the features needs a defined chunk size, which I believe is not possible when the array comes from a dataframe.

Anyway, I will reach out on the main dask issue tracker and ask there.

Thank you!

julioasotodv avatar Nov 06 '17 19:11 julioasotodv

@julioasotodv, yes I forgot about that case. Let me put something together quick.

TomAugspurger avatar Nov 06 '17 19:11 TomAugspurger

Do you think there is a way to achieve this without making changes to dask's engine itself?

julioasotodv avatar Nov 06 '17 23:11 julioasotodv

What do you mean by "dask's engine"?

See https://github.com/dask/dask-glm/issues/63 for a discussion on the relationship between dask-ml and dask-glm, and https://github.com/dask/dask-glm/compare/master...TomAugspurger:add-intercept-dd for what the fix will look like.
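
For reference, a plausible sketch of what a DataFrame-aware add_intercept could look like, under the assumption that the fix dispatches on input type the way dask_glm.utils.add_intercept does (the column name "intercept" is an illustrative choice, not the actual patch):

from multipledispatch import dispatch
import dask.dataframe as dd

@dispatch(dd.DataFrame)
def add_intercept(X):
    # A dask DataFrame tracks columns rather than chunk shapes, so a
    # constant column can be appended without known chunk sizes.
    return X.assign(intercept=1.0)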


TomAugspurger avatar Nov 07 '17 01:11 TomAugspurger

I see. Would it work with that fix, even if chunksize is not defined for the underlying dask array?

julioasotodv avatar Nov 07 '17 15:11 julioasotodv

Yes, that should work. The solvers only require that the shape along the second axis is known:

from dask import dataframe as dd
from dask_ml.linear_model import LinearRegression
from dask_ml.datasets import make_regression

X, y = make_regression(chunks=50)

df = dd.from_dask_array(X)
X2 = df.values  # dask.array with unknown chunks along the first dimension

lm = LinearRegression(fit_intercept=False)
lm.fit(X2, y)
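
To see what "unknown chunks" means here, inspecting X2 shows nan sizes along the first axis only (illustrative output; the exact numbers depend on make_regression's defaults):

X2.chunks  # ((nan, nan), (100,)) -- row chunk sizes lost in the dataframe round-trip
X2.shape   # (nan, 100)           -- second axis known, which is all the solvers need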

Note that fit_intercept does not currently work with unknown chunks. But when https://github.com/dask/dask-glm/compare/master...TomAugspurger:add-intercept-dd is merged, you'd just do

lm = LinearRegression()  # fit_intercept=True is the default
lm.fit(df, y)

And the intercept is added during the fit.

TomAugspurger avatar Nov 07 '17 15:11 TomAugspurger

That's awesome!

But let me be just a little picky with that change (https://github.com/dask/dask-glm/compare/master...TomAugspurger:add-intercept-dd):

In theory, when using either L1 or L2 regularization (or Elastic Net), the penalty term should not affect the intercept (that is, the column of ones that acts as the intercept should not enter the penalty term that performs the actual regularization).

However, it would still be better than not having an intercept at all. What do you think?

julioasotodv avatar Nov 12 '17 22:11 julioasotodv

Thanks, I'll take a look at how other packages handle regularization of the intercept, but I think you're correct. cc @moody-marlin: thoughts on that?

TomAugspurger avatar Nov 13 '17 14:11 TomAugspurger

Yeah, I agree that the intercept should not be included in the regularization; I believe this is recommended best practice. Not regularizing the intercept also ensures that all regularizers still produce estimates whose residuals have mean 0, which preserves the standard interpretation of things like R^2.
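
In symbols (a standard formulation, not code from dask-glm): with an unpenalized intercept the problem is

\min_{\beta_0,\,\beta} \; \frac{1}{n} \sum_{i=1}^{n} \ell\left(y_i,\; \beta_0 + x_i^\top \beta\right) + \lambda\, P(\beta)

and since P(\beta) does not involve \beta_0, the stationarity condition in \beta_0 for squared loss gives \sum_i \left(y_i - \beta_0 - x_i^\top \beta\right) = 0, i.e. mean-zero residuals.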

cicdw avatar Nov 13 '17 17:11 cicdw

Opened https://github.com/dask/dask-glm/issues/65 to track that.

I'll deprecate the estimators in dask_glm and move them over here later today.

TomAugspurger avatar Nov 14 '17 16:11 TomAugspurger

I see there is a PR ( https://github.com/dask/dask-glm/pull/66 ) to deprecate the dask-glm estimators and a PR ( https://github.com/dask/dask-ml/pull/94 ) that seems to have migrated the bulk of that content to dask-ml. Is this still the plan?

jakirkham avatar Jun 06 '18 02:06 jakirkham

Yes, in my mind dask-glm has the optimizers, and dask-ml has the estimators built on top of those.


TomAugspurger avatar Jun 06 '18 12:06 TomAugspurger

I'm facing the same issue.

Traceback (most recent call last):
  File "diya_libs/alog_main.py", line 20, in <module>
    clf.fit(X, y)
  File "/Users/asifali/workspace/pythonProjects/ML-engine-DataX/pre-processing/diya_libs/lib/algorithms/diya_logit.py", line 67, in fit
    self.estimator.fit(X, y)
  File "/anaconda3/lib/python3.6/site-packages/dask_ml/linear_model/glm.py", line 153, in fit
    X = self._check_array(X)
  File "/anaconda3/lib/python3.6/site-packages/dask_ml/linear_model/glm.py", line 167, in _check_array
    X = add_intercept(X)
  File "/anaconda3/lib/python3.6/site-packages/multipledispatch/dispatcher.py", line 164, in __call__
    return func(*args, **kwargs)
  File "/anaconda3/lib/python3.6/site-packages/dask_glm/utils.py", line 147, in add_intercept
    raise NotImplementedError("Can not add intercept to array with "
NotImplementedError: Can not add intercept to array with unknown chunk shape

Initially I tried with a Dask DataFrame, then changed to a Dask Array using X = X.values, which resulted in nan chunks and causes the above error. What am I supposed to do now? How do I install the fix mentioned above? It is not present in the version available on pip.

asifali22 avatar Sep 05 '18 08:09 asifali22

@asifali22 that looks strange. Can you provide a full example? Does the following work for you?

from dask import dataframe as dd
from dask_glm.datasets import make_classification
from dask_ml.linear_model import LogisticRegression

X, y = make_classification(n_samples=10000, n_features=2)

X = dd.from_dask_array(X, columns=["a","b"])
y = dd.from_array(y)

lr = LogisticRegression()
lr.fit(X.values, y.values)
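
If .values still gives nan chunks (because the dataframe's partition lengths are unknown), one workaround in newer dask releases is to have dask count the rows per partition up front, at the cost of an extra pass over the data. A sketch:

# lengths=True computes each partition's length, so the resulting
# dask array has fully known chunks
lr = LogisticRegression()
lr.fit(X.to_dask_array(lengths=True), y.to_dask_array(lengths=True))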

TomAugspurger avatar Sep 05 '18 10:09 TomAugspurger

Having a similar issue with a dask array. @TomAugspurger, see my SO question. Any ideas?

thebeancounter avatar Jun 13 '19 15:06 thebeancounter

@thebeancounter do you have a minimal example? http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports

TomAugspurger avatar Jun 13 '19 20:06 TomAugspurger

@TomAugspurger Hi. The code is in the SO question; do you mean I should copy it here?

thebeancounter avatar Jun 14 '19 05:06 thebeancounter

It looks like data isn’t defined.

Also the error says you have multiple columns with no variance. You probably don’t want that.



TomAugspurger avatar Jun 14 '19 12:06 TomAugspurger

@TomAugspurger

Data is defined. It's regular CIFAR-10 data, passed through a pre-trained ResNet-50 for feature extraction. It trains fine with sklearn. I can't guarantee that there are no zero-variance columns, but those should not prevent learning anyway; they would only waste some processing time.

Here is the data, zipped (I read it from a folder with a generator just to keep memory from exploding):

# Assumed imports for this snippet (Keras with the ResNet50 application):
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.optimizers import Adam
from keras.losses import categorical_crossentropy

# renamed from `i` so the loop variable below doesn't shadow the generator;
# test_dir is the image folder path, defined elsewhere
gen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_flow = gen.flow_from_directory(directory=test_dir, target_size=(224, 224), class_mode="sparse", batch_size=1024, shuffle=True)

pre_model = ResNet50(weights="imagenet", include_top=False)
pre_model.compile(optimizer=Adam(), loss=categorical_crossentropy)

labels = []
data = []
for i in range(len(train_flow)):
    imgs, l = next(train_flow)
    data.append(pre_model.predict(imgs))  # extract ResNet-50 features per batch
    labels.append(l)

labels = np.concatenate(labels)
data = np.concatenate(data, axis=0)
data = data.reshape(-1, np.prod(data.shape[1:]))  # flatten to (n_samples, n_features)

Data is under github.com/thebeancounter/data

thebeancounter avatar Jun 16 '19 10:06 thebeancounter

http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports may be helpful for writing an example.

Does the error show up if you have a dummy dataset where two columns have no variance?



TomAugspurger avatar Jun 16 '19 11:06 TomAugspurger

@TomAugspurger

Hi, I posted the code and the data. It's a solid example :-)

Anyhow, can you maybe post a working example of using a numpy array for logistic regression in dask?

thebeancounter avatar Jun 16 '19 15:06 thebeancounter

I’m guessing it’s not minimal. Simplifying it may reveal the issue.

Why do you want to use dask-ml’s LR on a numpy array?


TomAugspurger avatar Jun 16 '19 16:06 TomAugspurger

@TomAugspurger my data originally comes from a numpy array; I need to convert it into some form that dask can train on. I can't find any example of that in the tutorial (maybe that's the issue). Can you point me to something of that kind?

thebeancounter avatar Jun 17 '19 09:06 thebeancounter

https://docs.dask.org/en/latest/array-creation.html documents creating dask arrays, including from array-like things like NumPy arrays.

Though my (vague) question was a bit deeper. Why do you want to use dask's LR rather than scikit-learn's or SciPy's? If you're coming from a NumPy array, does your data fit in memory? If so, you should just use one of those.
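
If the data doesn't fit in memory, or you want the cores put to work via dask, the conversion is one call. A minimal sketch (the shapes and the 1000-row chunk size are arbitrary choices for illustration):

import numpy as np
import dask.array as da
from dask_ml.linear_model import LogisticRegression

# stand-ins for the extracted features and binary labels
data = np.random.rand(10000, 2048)
labels = np.random.randint(0, 2, size=10000)

# chunk row-wise; each chunk becomes a unit of parallel work
X = da.from_array(data, chunks=(1000, data.shape[1]))
y = da.from_array(labels, chunks=1000)

lr = LogisticRegression()
lr.fit(X, y)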


TomAugspurger avatar Jun 17 '19 13:06 TomAugspurger

@TomAugspurger

  • Unknown chunk size

I saw this case above:

X2 = df.values  # dask.array with unknown chunks along first dim

For me, if I use .values, I will not know the chunk size for this array:

x = df_train.values
dask.array<values, shape=(nan, 11), dtype=float64, chunksize=(nan, 11)>

Will this influence the distributed computation, e.g. memory management and speed?

  • fit_intercept: the same question as for the block above:

m_dkl.fit(df_train.values, df["target"])

NotImplementedError: Can not add intercept to array with unknown chunk shape

Will I need to use fit_intercept=False? Will the performance be the same as scikit-learn's?

  • The difference between dask-ml GLM and scikit-learn GLM:
import dask_ml.linear_model as dkl
import sklearn.linear_model as skl

m_skl = skl.LogisticRegression(C=0.01, penalty='l1', n_jobs=-1, random_state=0)
m_dkl = dkl.LogisticRegression(C=0.01, penalty='l1', n_jobs=-1, random_state=0)

m_skl.fit(df_train, df["target"])
m_dkl.fit(df_train.values, df["target"])

In my case, I find that the scikit-learn estimator accepts the dask data formats (array, dataframe), so what is the big difference between the two? Is dask-glm just a better fit for the "big data" case with a specific chunk size? If we don't know the chunk size, will dask-ml's GLM behave like scikit-learn, or will it choose a chunk size automatically for distribution?

xiaozhongtian avatar Jun 19 '19 10:06 xiaozhongtian

@TomAugspurger

Scikit-learn will not utilize the machine's cores and takes way too long to run... I'm looking for a multithreaded solution.

thebeancounter avatar Jun 19 '19 10:06 thebeancounter

@xiaozhongtian can you please clarify? Are you asking a question? I'm not sure I see the connection to this thread.

thebeancounter avatar Jun 19 '19 10:06 thebeancounter

@TomAugspurger I'm asking a question; I have the same confusion as described above.

xiaozhongtian avatar Jun 19 '19 11:06 xiaozhongtian

@thebeancounter

Scikit-learn will not utilize the machine's cores and takes way too long to run...

With n_jobs=-1 in scikit-learn, it uses multiple processes to fit, no?

But here, I want to understand how scikit-learn and dask-ml manage memory. If we don't use chunks to divide the dataset, there will be no difference from scikit-learn, in my opinion.

xiaozhongtian avatar Jun 19 '19 11:06 xiaozhongtian