Statistical tests on a test set

Open dinga92 opened this issue 6 years ago • 17 comments

I would like to add a functionality to easily run statistical tests (against null, against other classifiers) on an independent test set. Since the test set is independent, this should be easy to do (no need to deal with dependencies between folds).

IMHO the main task will be to design a usable API.

dinga92 avatar Jul 04 '18 11:07 dinga92
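For illustration only (this is not neuropredict API, and the helper name below is made up): because the test set is independent, a test against the null can be as simple as permuting the held-out test labels to build a null distribution of the chosen score.

import numpy as np
from sklearn.metrics import accuracy_score

def permutation_pvalue(y_test, y_pred, n_perm=10000, seed=None):
    # p-value for test-set accuracy under the null of no association,
    # obtained by permuting the held-out test labels
    rng = np.random.default_rng(seed)
    observed = accuracy_score(y_test, y_pred)
    null = np.array([accuracy_score(rng.permutation(y_test), y_pred)
                     for _ in range(n_perm)])
    # add-one smoothing so the p-value is never exactly zero
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)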

Certainly.

Make something assuming the ideal version you need, and we will work backwards to have it in neuropredict.

raamana avatar Jul 19 '18 15:07 raamana

Hi Dinga, did you get a chance to work on this yet?

raamana avatar Aug 13 '18 15:08 raamana

Hi Richard, did you get a chance to think about this? Take a look at related discussion here: https://github.com/maximtrp/scikit-posthocs/issues/8

raamana avatar Sep 19 '18 15:09 raamana

Sorry for the delay, I was still in vacation mode, and before that I had to finish other papers.

I am working on this now. I was looking at the theory for the tests and also at how sklearn does things, so we can be consistent; many useful things are already implemented there and in statsmodels.

What kind of tests are you looking for in scikit-posthocs?

dinga92 avatar Sep 19 '18 21:09 dinga92

I don't think sklearn has anything in this regard - let me know if you see something.

I am particularly interested in the Friedman test and the Nemenyi post-hoc, but am open to learning, trying and testing others too.

raamana avatar Sep 19 '18 22:09 raamana
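For reference, a minimal sketch of the Friedman test followed by a Nemenyi post-hoc, using scipy and the scikit-posthocs package linked above (assuming scikit-posthocs is installed; the score matrix here is invented purely for illustration):

import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# rows = datasets, columns = classifiers; values are e.g. accuracies
scores = np.array([[0.71, 0.74, 0.69],
                   [0.80, 0.83, 0.78],
                   [0.65, 0.70, 0.66],
                   [0.77, 0.79, 0.75]])

# omnibus test: do the classifiers differ across datasets?
stat, p = friedmanchisquare(*scores.T)
print('Friedman chi2 = %.3f, p = %.3f' % (stat, p))

# pairwise Nemenyi post-hoc: matrix of p-values between classifiers
print(sp.posthoc_nemenyi_friedman(scores))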

They have a permutation test. This might also be of interest to you: https://arxiv.org/abs/1606.04316, together with code at https://github.com/BayesianTestsML/tutorial/

Comparing multiple models on multiple datasets is not as important to me at the moment; also, I think it is quite a niche feature in general.

I will focus now on getting valid external validation and some reporting for one model, and add something more complex later, probably for comparing competing models on the same test set. What do you say?

dinga92 avatar Sep 19 '18 22:09 dinga92
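For context, the sklearn permutation test referred to above is, presumably, sklearn.model_selection.permutation_test_score. A minimal sketch (note that it refits the model under permuted labels within cross-validation, rather than operating on a fixed held-out test set):

from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

X, y = datasets.make_classification(n_samples=100, n_features=5, n_informative=2)

# score on the original labels, scores under permuted labels, and a p-value
score, perm_scores, pvalue = permutation_test_score(
    LogisticRegression(), X, y, cv=5, n_permutations=200, scoring='accuracy')
print(score, pvalue)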

I am doing lots of power comparisons and model comparisons now, so I will try to make what I do usable and put it here.

dinga92 avatar Sep 19 '18 22:09 dinga92

Sure, we can start with something small.

Yeah, do it only if it helps your research and is something you will use in the short to medium term.

raamana avatar Sep 19 '18 22:09 raamana

any hints on how to write tests?

dinga92 avatar Sep 20 '18 14:09 dinga92

Funny you ask, I was just informing folks about this: https://twitter.com/raamana_/status/1039150311842164737

raamana avatar Sep 20 '18 15:09 raamana

Sounds good, but which one are you using here? (Sorry for the noob question.)

dinga92 avatar Sep 21 '18 07:09 dinga92

NP, I use pytest. It's easy to learn.

raamana avatar Sep 21 '18 10:09 raamana
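For reference, a minimal pytest example: files named test_*.py containing functions named test_* are discovered and run automatically by the pytest command. The assertions below are illustrative, not actual neuropredict tests.

# test_metrics_example.py
import numpy as np
from sklearn.metrics import accuracy_score

def test_perfect_predictions_give_accuracy_one():
    y = np.array([0, 1, 0, 1])
    assert accuracy_score(y, y) == 1.0

def test_inverted_predictions_give_accuracy_zero():
    y = np.array([0, 1, 0, 1])
    assert accuracy_score(y, 1 - y) == 0.0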

So this is a little demo of what I have now:

import numpy as np
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

dataset_size = 50
X, y = datasets.make_classification(n_samples=dataset_size,
                                    n_features=5,
                                    n_informative=2,
                                    flip_y=0,
                                    class_sep=0.5)
X_train, X_test, y_train, y_test = train_test_split(X,
                                                    y,
                                                    test_size=0.5,
                                                    stratify=y)
# liblinear solver supports the l1 penalty
fit = LogisticRegression(C=1, penalty='l1', solver='liblinear').fit(X_train, y_train)
predicted_probabilities = fit.predict_proba(X_test)

# validate_out_of_sample_predictions is my work-in-progress helper
# (see the commit linked further down in the thread)
results = validate_out_of_sample_predictions(y_test, predicted_probabilities)
print(np.array(results))

Out:

Accuracy:  [[0.76  0.007 0.593 0.927]
AUC:        [0.821 0.004 0.649 0.992]
Logscore:   [0.532 0.01  0.    0.   ]
Brierscore: [0.559 0.005 0.    0.   ]]

validate_out_of_sample_predictions takes (probabilistic) predictions as scikit-learn outputs them and computes accuracy, AUC, log score and Brier score, together with their p-values and CIs. At the moment I am using a permutation test to get p-values for the log score and Brier score, and I don't yet have a way to compute CIs for those, but I think I will do it with a bootstrap. I have these measures there because that's what I am using in my paper at the moment, but I would like to add other ones that are interpretable and in line with best practices.

Is this functionality something you would like here?

dinga92 avatar Sep 21 '18 12:09 dinga92
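The missing CIs could indeed come from a bootstrap, along these lines (a sketch only: the function name and defaults are made up, and prob_positive is assumed to be the predicted probability of the positive class, i.e. predicted_probabilities[:, 1]):

import numpy as np
from sklearn.metrics import brier_score_loss

def bootstrap_brier_ci(y_test, prob_positive, n_boot=2000, alpha=0.05, seed=None):
    # percentile bootstrap CI for the Brier score on a held-out test set
    rng = np.random.default_rng(seed)
    n = len(y_test)
    boot_scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample test cases with replacement
        boot_scores.append(brier_score_loss(y_test[idx], prob_positive[idx],
                                            pos_label=1))
    lower, upper = np.percentile(boot_scores,
                                 [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper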

Can you push the code for validate_out_of_sample_predictions to your repo and point me to it?

Also, please do take a look at the scikit-posthocs repo and play with some examples.

I think you and I are on slightly different pages.

raamana avatar Sep 21 '18 13:09 raamana

This is what I have now: https://github.com/dinga92/neuropredict/commit/8e7a445424f8c649a6583567f5692fdf73d7e1d9 (it is more at a script stage to run my own stuff, and not really ready for merging yet).

Now I need to compare models against the null; later I will also compare two models against each other. As far as I understand, the post-hoc tests you are referring to are for comparing multiple models against each other, am I right?

dinga92 avatar Sep 21 '18 13:09 dinga92
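When the time comes to compare two competing models on the same test set, one simple option is a paired (sign-flip) permutation test on the per-case losses; a sketch, assuming both models' predicted probabilities for the positive class on the same test cases are at hand:

import numpy as np

def paired_permutation_test(y_test, prob_a, prob_b, n_perm=10000, seed=None):
    # compares two models via the mean difference in per-case squared error
    # (Brier-type loss); signs are flipped per case to simulate the null of
    # no difference between the models
    rng = np.random.default_rng(seed)
    diff = (prob_a - y_test) ** 2 - (prob_b - y_test) ** 2
    observed = abs(diff.mean())
    flips = rng.choice([-1.0, 1.0], size=(n_perm, diff.size))
    null = np.abs((flips * diff).mean(axis=1))
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)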

Yes.

Also, will you be at OHBM next month?

raamana avatar May 21 '19 12:05 raamana

Most probably I will

dinga92 avatar May 23 '19 09:05 dinga92