Matt Hall

Results: 133 comments by Matt Hall

Hello! Yes, either that or I did not happen to find a seed that gave a high score. The PA Team model also had the lowest variance, so they were...

It's this: `f1_score(y_blind, y_pred, average='micro')`. [Read about it here.](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) This is the same as the metric provided by Brendon's `accuracy()` function.
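For anyone who wants to check that equivalence themselves, here's a minimal sketch. The label arrays are made up for illustration, not contest data; the point is that for single-label, multiclass predictions, micro-averaged F1 reduces to plain accuracy:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical facies labels, just to demonstrate the metric.
y_blind = [0, 1, 2, 2, 1, 0]  # 'true' labels (invented)
y_pred  = [0, 2, 2, 2, 1, 1]  # 'predicted' labels (invented)

micro = f1_score(y_blind, y_pred, average='micro')
acc = accuracy_score(y_blind, y_pred)

# Micro F1 counts TP/FP/FN globally across classes, so it
# equals the fraction of correct predictions.
assert abs(micro - acc) < 1e-12
```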

Thank you for raising this, Lukas. You mean because, even without using the well explicitly in the model, parameters, features, etc., will be chosen to fit that well? Any ideas...

Thank you again @LukasMosser for raising this issue of meta-overfitting, and @lperozzi for chiming in. We can (and expect to) learn as we go, so we'll look at k-fold CV...

Hi everyone... A quick update on this issue of meta-overfitting. We have some labels for the STUART and CRAWFORD wells. These labels were not part of the data package, so...

Hey @mycarta ... Yes, use all the wells in training. And yes, we will validate against STUART and CRAWFORD, and it's that score that will count. I'm not sure yet...

@CannedGeo Yes, please do put that in an Issue of its own so other people will be more likely to see it. Short answer: you can do anything you like...

On the k-fold CV issue... I just made [a small demo of stepwise dropout](https://github.com/seg/2016-ml-contest/blob/master/LiamLearn/K-fold_CV_F1_score__MATT.ipynb). Please have a look and see if you agree that it does what we've been talking...
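The linked notebook works through this with the contest wells. As a rough sketch of the same idea, leave-one-well-out cross-validation can be expressed with scikit-learn's `LeaveOneGroupOut`; the classifier choice, function name, and data shapes below are hypothetical, not what the notebook uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

def well_cv_scores(X, y, wells, clf=None):
    """Score a classifier by holding out one well at a time.

    X: (n_samples, n_features) feature array
    y: (n_samples,) facies labels
    wells: (n_samples,) well name for each sample
    Returns a dict mapping well name -> micro F1 on that well.
    """
    clf = clf or RandomForestClassifier(n_estimators=100, random_state=0)
    scores = {}
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=wells):
        clf.fit(X[train_idx], y[train_idx])
        y_hat = clf.predict(X[test_idx])
        held_out = wells[test_idx][0]  # every test sample is from one well
        scores[held_out] = f1_score(y[test_idx], y_hat, average='micro')
    return scores
```

Each fold trains on all wells but one and scores on the held-out well, which mimics the blind-well test better than random k-fold splits, since samples from the same well are never shared between train and test.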

You can submit as often as you like, but I can't guarantee that I can score more than one a day. Yours are fairly easy to score, since you give...

Hi... Indeed, I used the `accuracy` function in `utils.py`. This is the same as [`sklearn.metrics.f1_score`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) with `average='micro'`. This was discussed [in another issue](https://github.com/seg/2016-ml-contest/issues/49). It seems that using `average='weighted'` may have...
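To see why the choice of `average` matters, here's a toy comparison with invented labels: `'weighted'` averages the per-class F1 scores by class support, which in general gives a different number than `'micro'` (which reduces to accuracy):

```python
from sklearn.metrics import f1_score

# Invented, imbalanced labels purely for illustration.
y_true = [0, 0, 0, 1]
y_pred = [0, 0, 1, 1]

micro = f1_score(y_true, y_pred, average='micro')        # 3/4 correct
weighted = f1_score(y_true, y_pred, average='weighted')  # support-weighted mean of per-class F1

# Class 0: P=1.0, R=2/3 -> F1=0.8; class 1: P=0.5, R=1.0 -> F1=2/3.
# Weighted = 0.75*0.8 + 0.25*(2/3) ~ 0.767, not equal to micro's 0.75.
assert micro != weighted
```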