Preston Parry
probability predictions have regressed: the model still seems to train well (same accuracy), it just doesn't return probability predictions anymore.
for some reason, keras optimizers ship with wildly different default learning rates. some of them default to the values from the original papers, which might work well on small datasets where you...
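A hedged illustration of pinning the rates explicitly instead of trusting each optimizer's default (newer tf.keras spells the argument `learning_rate`; older standalone Keras uses `lr`):

```python
from tensorflow.keras.optimizers import Adam, SGD

# Adam's conventional default is 0.001 while SGD's is 0.01, and some optimizers
# default to much larger paper values, so relying on the defaults can mean very
# different effective step sizes depending on which algorithm gets picked.
adam = Adam(learning_rate=0.001)
sgd = SGD(learning_rate=0.01)
```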
then we'd skip to that part of the fitting process, and just fit the final estimator, and continue as normal from there.
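A rough sketch of what "just fit the final estimator" could look like with a plain sklearn Pipeline; the step names and estimators here are illustrative, not the project's actual objects:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)
pipeline = Pipeline([('scaler', StandardScaler()), ('model', LogisticRegression())])
pipeline.fit(X, y)

# Skip the transformer fitting: push the data through the already-fitted
# transformers, then fit only a fresh final estimator on the transformed features.
X_transformed = X
for name, transformer in pipeline.steps[:-1]:
    X_transformed = transformer.transform(X_transformed)

new_final_estimator = LogisticRegression(C=0.1)
new_final_estimator.fit(X_transformed, y)
pipeline.steps[-1] = ('model', new_final_estimator)
```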
either a string, or ideally, their own custom function. could get messy. we'll see.
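The note doesn't say which parameter this refers to; purely as an illustration, here is how a string-or-callable argument is often resolved, using a scoring parameter as the stand-in:

```python
from sklearn.metrics import get_scorer

def resolve_scoring(scoring):
    """Accept either a named scorer string (e.g. 'roc_auc') or a user-supplied callable."""
    if callable(scoring):
        return scoring
    return get_scorer(scoring)
```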
in __init__, take in a couple new params and set them on the pipeline itself (we might have to extend the pipeline further to have these properties): auto_ml_version, sklearn_version, pandas_version...
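A minimal sketch of one way to do that, simply attaching the attributes to the fitted pipeline object (the helper name `stamp_versions` is hypothetical):

```python
import pandas
import sklearn

def stamp_versions(trained_pipeline, auto_ml_version):
    # Attach version metadata directly to the pipeline so that a saved model
    # records which library versions it was trained with.
    trained_pipeline.auto_ml_version = auto_ml_version
    trained_pipeline.sklearn_version = sklearn.__version__
    trained_pipeline.pandas_version = pandas.__version__
    return trained_pipeline
```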
as per https://github.com/ClimbsRocks/auto_ml/issues/322, column names being ints breaks things. so we should just go through and convert them ourselves to strings. this should be totally fine for dictionaries (keys...
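A hedged sketch of that conversion (the helper name `stringify_column_names` is hypothetical):

```python
import pandas as pd

def stringify_column_names(data):
    # Cast every column name to a string so downstream steps never see int columns.
    if isinstance(data, pd.DataFrame):
        data.columns = [str(col) for col in data.columns]
        return data
    # For dictionaries, stringifying the keys should be equally safe.
    return {str(key): value for key, value in data.items()}
```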
MVP idea:
- [ ] just two algorithms
- [ ] average them together (see the sketch after this list)
- [ ] no dataset modification - same dataset for both

MVP implementation:
- [ ] we'll...
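A minimal sketch of that averaging step, assuming two plain sklearn estimators trained on the same data (the specific algorithms are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)

# Train two different algorithms on the exact same dataset.
model_a = LogisticRegression().fit(X, y)
model_b = RandomForestClassifier(random_state=0).fit(X, y)

# The MVP ensemble: a plain average of their probability predictions.
averaged_probabilities = np.mean(
    [model_a.predict_proba(X), model_b.predict_proba(X)], axis=0
)
```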
fix how we log the params we tried when the params are just model names (logging already seems to work well for normal params); adjust some test bounds
a categorical column containing floats and np.nan throws an error when being converted to strings, but only with lgbm
maybe try explicitly casting to strings any column that should be categorical?
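A small sketch of that explicit cast, keeping np.nan as its own 'missing' category rather than letting it become the string 'nan' (the column name is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'category_col': [1.0, 2.0, np.nan, 3.0]})

# Cast everything to a string, treating NaN as an explicit 'missing' category.
df['category_col'] = df['category_col'].apply(
    lambda value: 'missing' if pd.isnull(value) else str(value)
)
```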
we should have some kind of saddle point detection, or bump up the learning rate, or something like that to make sure we don't just get stuck in local minima...
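One possible sketch, not the project's implementation: a Keras callback that bumps the learning rate when validation loss plateaus (roughly the inverse of ReduceLROnPlateau); the class name and defaults are made up:

```python
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import Callback

class BumpLearningRateOnPlateau(Callback):
    """Multiply the learning rate by `factor` after `patience` epochs without improvement."""

    def __init__(self, patience=3, factor=2.0):
        super().__init__()
        self.patience = patience
        self.factor = factor
        self.best_loss = float('inf')
        self.wait = 0

    def on_epoch_end(self, epoch, logs=None):
        current_loss = (logs or {}).get('val_loss', float('inf'))
        if current_loss < self.best_loss:
            # Validation loss improved, so reset the plateau counter.
            self.best_loss = current_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # Stuck on a plateau: raise the learning rate to try to escape it.
                old_lr = float(K.get_value(self.model.optimizer.lr))
                K.set_value(self.model.optimizer.lr, old_lr * self.factor)
                self.wait = 0
```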