Sebastian Raschka
That sounds like the effect size in Cochran's Q. One could return something like this; it could indeed be useful. Would you be interested in adding this as a PR?
Hm that's weird and shouldn't happen. I just ran a quick example and couldn't reproduce this issue. E.g., for backward selection:

```python
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector
...
```
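For reference, here is a minimal, self-contained sketch of the kind of backward-selection run described above; the dataset, estimator settings, and parameter values are my own assumptions for illustration, not the original snippet:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

# Toy data; the original report presumably used a different dataset.
X, y = load_iris(return_X_y=True)

knn = KNeighborsClassifier(n_neighbors=3)

# forward=False means backward selection: start from all features
# and remove one feature at a time.
sfs = SFS(knn,
          k_features=2,
          forward=False,
          floating=False,
          scoring='accuracy',
          cv=5)
sfs = sfs.fit(X, y)

print(sfs.k_feature_idx_)   # indices of the selected feature subset
print(sfs.k_score_)         # cross-validation score of that subset
```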
I can't spot an issue in the example above; it all looks fine to me. So, based on the plot above, I would expect the SFS to return a subset...
Oh, maybe this was a misunderstanding then. Say you set `k_features=(25, 30)` (see the sketch below).

- If you use forward selection, it will start with 0 features and then evaluate all features up...
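As a concrete illustration of the tuple form, here is a small sketch; the dataset and estimator are assumptions made for the example (the Wisconsin breast-cancer data simply happens to have 30 features):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

X, y = load_breast_cancer(return_X_y=True)  # 30 features

# With k_features=(25, 30), the selector considers subset sizes in that
# range and keeps the best-scoring subset it encounters within it.
sfs = SFS(KNeighborsClassifier(n_neighbors=3),
          k_features=(25, 30),
          forward=True,
          floating=False,
          scoring='accuracy',
          cv=3)
sfs = sfs.fit(X, y)

print(len(sfs.k_feature_idx_))  # somewhere between 25 and 30
print(sfs.k_score_)
```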
No worries, and I am glad to hear that there's no bug :)
Haven't used it myself, but yeah, this looks like it could get the job done:

```python
from galaxy_ml.keras_galaxy_models import KerasGClassifier
from keras.models import Sequential        # imports assumed for the snippet
from keras.layers import Dense, Activation

# build a DNN classifier
model = Sequential()
model.add(Dense(64))
model.add(Activation('relu'))
...
```
Sounds good. Please note that I have to enter the grades on Monday, so please don't wait too long.
Yes, it's still alive! And I would definitely welcome contributions! 🙌
Looks like we already have an example here: http://rasbt.github.io/mlxtend/user_guide/evaluate/permutation_test/#example-3-paired-two-sample-randomization-test
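For readers who want the gist without clicking through, here is a NumPy-only sketch of what a paired two-sample randomization test does conceptually. This is not the mlxtend implementation, and the toy numbers are made up; the linked example shows the actual `permutation_test` usage:

```python
import numpy as np

rng = np.random.RandomState(123)

# toy paired measurements (e.g., two treatments on the same subjects)
x = np.array([7.2, 6.9, 7.8, 6.5, 7.1, 7.4])
y = np.array([6.8, 6.7, 7.3, 6.6, 6.9, 7.0])

d = x - y                  # paired differences
observed = d.mean()

num_rounds = 10000
count = 0
for _ in range(num_rounds):
    signs = rng.choice([1, -1], size=d.shape)      # randomly flip each pair
    if abs((signs * d).mean()) >= abs(observed):
        count += 1

p_value = count / num_rounds   # two-sided approximate p-value
print(p_value)
```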
Overall, this sounds like a great idea, and I would be in favor of such a solution for both the exhaustive and sequential feature selectors. Refactoring this into custom iterators...
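To make the idea concrete, here is a rough, hypothetical sketch of what such candidate-subset iterators could look like; the function names and structure are purely illustrative and not the actual mlxtend design:

```python
from itertools import combinations

def exhaustive_candidates(n_features, min_size, max_size):
    """Yield every feature-index subset between min_size and max_size."""
    for size in range(min_size, max_size + 1):
        for subset in combinations(range(n_features), size):
            yield subset

def forward_candidates(current_subset, n_features):
    """Yield all subsets that add exactly one new feature to current_subset."""
    remaining = set(range(n_features)) - set(current_subset)
    for feature in remaining:
        yield tuple(sorted(current_subset + (feature,)))

# A selector could then loop over one of these generators and score each
# candidate subset, decoupling the search strategy from the scoring loop.
for candidate in forward_candidates((0, 2), n_features=5):
    print(candidate)
```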