df-dn-paper
Conceptual & empirical comparisons between decision forests & deep networks
Bumps [numpy](https://github.com/numpy/numpy) from 1.18.5 to 1.22.0. Release notes (sourced from numpy's releases): NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...
When creating train+val loaders for tuning CNNs on vision data, the following traceback is raised:

```
INFO 05-12 21:06:37] ax.service.utils.instantiation: Inferred value type of ParameterType.FLOAT for parameter lr. If that...
```
Specifically, use a left-out set of Tabular datasets to tune the hyperparameters. Possibly implement 5-fold cross-validation (tune on 4 folds, test on 1 fold) to evaluate classifier performance. Within...
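The tune-on-4-folds, test-on-1-fold scheme above can be sketched as nested cross-validation: an inner `GridSearchCV` tunes on the training folds while an outer 5-fold split scores the tuned model. The classifier, dataset, and parameter grid here are illustrative stand-ins, not the paper's actual configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Hypothetical stand-in for one left-out tabular dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Inner loop: GridSearchCV tunes hyperparameters on the 4 training folds.
# Outer loop: cross_val_score evaluates the tuned model on the held-out fold.
param_grid = {"n_estimators": [10, 50]}  # illustrative grid
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=4)
scores = cross_val_score(search, X, y, cv=5)
print(scores.mean())
```

Each of the 5 outer scores comes from a model whose hyperparameters were never tuned on its own test fold, which avoids the optimistic bias of tuning and testing on the same data.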
Step of #29
List of candidates to consider (all following the `sklearn` API):

- ~~[`GradientBoostingClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html)~~
- [`XGBClassifier`](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBClassifier)
- ~~[`HistGradientBoostingClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.HistGradientBoostingClassifier.html)~~
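Because every candidate follows the `sklearn` estimator API (`fit`/`predict`), they can all be swapped behind a single evaluation helper. A minimal sketch, using `RandomForestClassifier` as a stand-in (an `XGBClassifier` instance would drop into `evaluate` the same way):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical toy data; any dataset with (X, y) arrays works.
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def evaluate(clf):
    """Fit and score any estimator that follows the sklearn API."""
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

print(evaluate(RandomForestClassifier(random_state=0)))
```

Keeping the comparison behind one helper like this means adding or removing a candidate is a one-line change.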
e.g.

- F-score
- splitting by # of classes for Tabular
- more...
Saving the raw predictions of classification tasks makes it easy to change, add, or delete evaluation metrics later without rerunning the models. Since the test sets are randomly generated, the test labels need to be saved at the same time.
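One way to sketch this with numpy: store predictions and labels together in a single archive, then recompute or extend the metrics from the saved arrays alone. The arrays and the `task_results.npz` filename are hypothetical.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical predictions and labels from one randomly generated test set.
y_test = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# Save labels alongside predictions; the random test set cannot be regenerated.
np.savez("task_results.npz", y_test=y_test, y_pred=y_pred)

# Later: reload and compute (or add) metrics without rerunning any model.
saved = np.load("task_results.npz")
print(accuracy_score(saved["y_test"], saved["y_pred"]))  # → 0.8
print(f1_score(saved["y_test"], saved["y_pred"]))        # → 0.8
```

Adding a new metric such as F-score then only touches this post-processing step, not the training pipeline.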
Reference repo: https://github.com/NeuroDataDesign/manifold_random_forests/tree/optimize