
GridSearch - how to access internal parameters

Open hshteingart opened this issue 8 years ago • 13 comments

How can I access the Mapper's internal parameters e.g. number of bits for a FeatureHasher?
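For context, plain scikit-learn pipelines already support this kind of nested parameter access via double-underscore names; the following toy sketch (pure scikit-learn, not DataFrameMapper code, with made-up data) illustrates what is being asked for — tuning a FeatureHasher's number of output features inside a grid search:

```python
# Pure scikit-learn illustration: nested parameters are addressed
# with the pipeline step name plus "__".
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("hasher", FeatureHasher(input_type="string")),
    ("clf", LogisticRegression()),
])
# Tune the hasher's number of output features (its "number of bits").
grid = GridSearchCV(pipe, {"hasher__n_features": [2 ** 8, 2 ** 10]}, cv=2)
X = [["cat"], ["dog"], ["cat"], ["dog"]]  # each sample: iterable of raw strings
y = [0, 1, 0, 1]
grid.fit(X, y)
```

The question is how to get the same `step__param` addressing to reach transformers nested inside a DataFrameMapper.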

hshteingart avatar Jul 31 '16 20:07 hshteingart

First of all, sorry for not getting back to you sooner.

What you ask for is not possible right now, but would be a very interesting addition. I will review your related PR to see if we can introduce this implementation change without breaking existing pipelines.

dukebody avatar Aug 03 '16 13:08 dukebody

While changing the whole DataFrameMapper to use FeatureUnion allows setting the transformers' internal parameters and has the benefit of being able to run in parallel, I wonder if we could achieve the same by:

  • Implementing set_params.
  • Using joblib's Parallel in our own loops.
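A rough sketch of those two ideas, using a hypothetical `TinyMapper` class as a stand-in (this is not the actual DataFrameMapper code):

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.preprocessing import MinMaxScaler, StandardScaler

class TinyMapper:
    """Toy stand-in for DataFrameMapper: (name, transformer) pairs with
    sklearn-style nested set_params and a joblib-driven fit loop."""

    def __init__(self, features):
        self.features = features  # list of (column name, transformer)

    def set_params(self, **params):
        # "colname__param" addresses a parameter of that column's transformer
        lookup = dict(self.features)
        for key, value in params.items():
            name, _, subparam = key.partition("__")
            lookup[name].set_params(**{subparam: value})
        return self

    def fit(self, X):
        # Fit each column's transformer through joblib, as FeatureUnion does.
        # n_jobs=1 here for brevity; raise it to actually parallelize.
        self.fitted_ = Parallel(n_jobs=1)(
            delayed(t.fit)(X[:, [i]]) for i, (_, t) in enumerate(self.features)
        )
        return self

mapper = TinyMapper([("a", StandardScaler()), ("b", MinMaxScaler())])
mapper.set_params(b__feature_range=(0, 2))  # nested parameter access
mapper.fit(np.array([[1.0, 10.0], [2.0, 20.0]]))
```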

This is because I feel we might be losing a great deal of control if we delegate all the work to FeatureUnion; for example, implementing #13 would probably be much harder. But OTOH I don't know how important this is with respect to code simplicity and speed.

Thoughts? cc @calpaterson

dukebody avatar Aug 04 '16 16:08 dukebody

See https://github.com/paulgb/sklearn-pandas/tree/feature_union_pipe

dukebody avatar Aug 04 '16 16:08 dukebody

Hi,

This is my first contribution to a GitHub public project, so I am excited :-)

I find FeatureUnion superior because everything is expressed in terms of Scikit-Learn native objects, which reduces friction. Adding the "output feature naming" is possible with an additional PostNameTransformer which would just name the columns according to the original column name. If you feel this breaks the DataFrameMapper entirely, maybe my solution fits the Scikit-Learn package better.
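PostNameTransformer is only a name floated here; a hypothetical sketch of what such a relabelling step could look like:

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class PostNameTransformer(BaseEstimator, TransformerMixin):
    """Hypothetical: relabel a transformer's output columns after the
    fact, deriving names from the original column name."""

    def __init__(self, prefix):
        self.prefix = prefix

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        out = pd.DataFrame(X)
        out.columns = [f"{self.prefix}_{i}" for i in range(out.shape[1])]
        return out

# e.g. relabel a two-column expansion of the original "age" column
renamed = PostNameTransformer("age").fit_transform([[1, 2], [3, 4]])
```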

What do you think?

chanansh avatar Aug 05 '16 08:08 chanansh

I agree with what @chanansh said. I think output feature naming would be very useful for feature selection and feature union.

As far as I know, pandas.get_dummies() can automatically generate feature names by setting the prefix parameter, and sklearn.feature_extraction.DictVectorizer() has a get_feature_names() method. Maybe we can borrow ideas from them?

However, feature naming may be very complex as there are too many ways to do feature engineering. It's hard to find a general rule for feature naming.

DataTerminatorX avatar Aug 24 '16 03:08 DataTerminatorX

@chanansh you're right that it is better to reuse existing proven and maintained code instead of replicating the same feature whenever possible. It's just that I'm afraid of breaking something that is working for current users. However if we make a 2.0 release this should not be an issue; people can always continue using the 1.x branch if needed.

Probably the output feature tracking can be implemented by subclassing FeatureUnion and storing the length of each output feature in an attribute. @DataTerminatorX I believe the issue here is to track which input features generated which output ones, since some transformers like OneHotEncoder or LabelBinarizer can expand features. The output names can simply be <input_feature_name>_0, <input_feature_name>_1, etc.
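A rough sketch of that subclassing idea (the class and method names are hypothetical, and for simplicity this version uses the sub-transformer's name rather than the input column name as the prefix):

```python
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import OneHotEncoder, StandardScaler

class NamedFeatureUnion(FeatureUnion):
    """Record how many output columns each sub-transformer produced,
    then derive output names as <prefix>_0, <prefix>_1, ..."""

    def fit_transform(self, X, y=None, **fit_params):
        Xt = super().fit_transform(X, y, **fit_params)
        # after fitting, transformer_list holds the fitted transformers
        self.widths_ = [t.transform(X).shape[1] for _, t in self.transformer_list]
        return Xt

    def output_names(self):
        names = []
        for (name, _), width in zip(self.transformer_list, self.widths_):
            names.extend(f"{name}_{i}" for i in range(width))
        return names

union = NamedFeatureUnion([("scaled", StandardScaler()), ("onehot", OneHotEncoder())])
Xt = union.fit_transform(np.array([[0.0], [1.0], [2.0]]))
```

Here the one-hot step expands a single input column into three outputs, and the recorded widths make the expansion traceable.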

dukebody avatar Aug 25 '16 14:08 dukebody

Could you please review my PR and tell me what you think?

dukebody avatar Aug 27 '16 15:08 dukebody

Sure, I will be happy to help doing the 2.0 version if you want to collaborate together.


hshteingart avatar Sep 02 '16 12:09 hshteingart

Awesome! So can you review https://github.com/paulgb/sklearn-pandas/pull/64 please, @chanansh ?

dukebody avatar Sep 03 '16 06:09 dukebody

cc @hshteingart

dukebody avatar Sep 03 '16 06:09 dukebody

I feel that if we move to FeatureUnion we should actually open a PR against scikit-learn so a much larger community can enjoy it. What do you say?


chanansh avatar Sep 03 '16 06:09 chanansh

AFAIK sklearn itself has no pandas dependencies and I believe they want to keep it that way for now. But feel free to ask in their issue tracker.

dukebody avatar Sep 03 '16 07:09 dukebody

Asked here: https://github.com/scikit-learn/scikit-learn/issues/7334

chanansh avatar Sep 03 '16 09:09 chanansh