apavlo89
Yes, I would like the answer to this as well. How do you get historical data for fields other than open, high, low, close, and volume, e.g., for a period of five years?
The accuracy score might increase even more, as the optimisation code is still running on my uni's cluster computer :P. I'll be updating the main post with the updated best
I also created a new variable called B_finish_decision_ratio (and likewise for R_), where I sum all the finish-decision columns and divide by the sum of the finish columns. This improves the score. Keep in mind when...
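As a sketch of that feature: assuming hypothetical column names like `B_finish_decision_1` and `B_finish_1` (the real dataset's names may differ), the ratio could be computed row-wise in pandas:

```python
import pandas as pd

# Toy frame with hypothetical column names; the real columns are assumptions.
df = pd.DataFrame({
    "B_finish_decision_1": [1, 0],
    "B_finish_decision_2": [0, 1],
    "B_finish_1": [2, 1],
    "B_finish_2": [1, 1],
})

def finish_decision_ratio(frame, side):
    """Row-wise sum of '<side>_finish_decision_*' columns divided by the
    row-wise sum of the plain '<side>_finish_*' columns."""
    decision = frame.filter(regex=rf"^{side}_finish_decision_").sum(axis=1)
    finish = frame.filter(regex=rf"^{side}_finish_\d").sum(axis=1)
    return decision / finish

df["B_finish_decision_ratio"] = finish_decision_ratio(df, "B")
```

The same helper works for the `R_` side by calling `finish_decision_ratio(df, "R")`.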
Question for people who've connected a Muse S. How is it the same? Are the electrode column locations the same? Hasn't the Muse S scrapped the mastoid/ear electrodes?
Having trouble adding classifier metric + question on which metric should be used from existing ones
Thank you @weixuanfu, I didn't realise log loss is already included, great! If I just call it from the default TPOT installation, then greater_is_better will be set automatically to false...
How would one go about calling neg_log_loss? I've set `scoring='neg_log_loss'` and declared `greater_is_better=False`, but I still get the same error.
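For reference, `'neg_log_loss'` is a built-in scikit-learn scorer string, and TPOT's `scoring` parameter accepts such strings directly; the scorer already encodes `greater_is_better=False` by returning negated values, so there is no need to set it yourself. A minimal sketch with plain scikit-learn (TPOT omitted for brevity) showing the sign convention:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import get_scorer

X, y = make_classification(n_samples=100, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 'neg_log_loss' returns the *negated* log loss, so the value is negative
# and "higher" (closer to zero) means better.
scorer = get_scorer("neg_log_loss")
score = scorer(clf, X, y)  # a negative float
```

Passing the same string as `TPOTClassifier(scoring='neg_log_loss')` should therefore work without any extra flags, assuming TPOT forwards scorer strings to scikit-learn.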
Thank you for looking at this. I'm putting my code first to see whether something jumps out.
```
import pandas as pd
import numpy as np
from tpot import TPOTClassifier
...
```
I found the issue: neg_log_loss doesn't work with LOOCV. I'm not 100% sure why that is.
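One likely explanation (an assumption on my part, not confirmed in the thread): with leave-one-out CV each test fold contains exactly one sample, so `log_loss` sees only a single class in `y_true` and cannot infer the full label set unless `labels=` is supplied explicitly. A small sketch of the failure mode:

```python
from sklearn.metrics import log_loss

# A single LOOCV test fold: one sample, so y_true holds only one class.
y_true = [1]
y_prob = [[0.3, 0.7]]

try:
    log_loss(y_true, y_prob)  # raises: label set can't be inferred
    single_fold_ok = True
except ValueError:
    single_fold_ok = False

# Supplying the label set explicitly makes the fold computable again.
loss = log_loss(y_true, y_prob, labels=[0, 1])
```

This suggests LOOCV plus log loss needs a scorer that passes the full label set, which the stock `'neg_log_loss'` scorer may not do.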
Fu, you are a rockstar! Thanks, it works great now. The number I am getting through the generations is negative (e.g., -0.15), but as it works through the generations it...
Well, I got a low balanced_accuracy score from a log_loss of 0.053, and then I realised that I had included the `X, y = make_classification(n_samples=20, n_features=263, random_state=16)` that you posted above...