Addition of Randomly Permuted Models for Predictive Performance Validation
Would it be possible to automatically generate a randomly permuted dataset (created by randomly permuting the class identities) and run these models in parallel, to further validate predictive performance as is done in the following paper (https://academic.oup.com/braincomms/article/3/2/fcab084/6237484?login=true)? Another paper (https://pubmed.ncbi.nlm.nih.gov/25596422/) demonstrates that when sample sizes are small (which is common in biological contexts), prediction accuracy by chance alone can reach 70% or higher.
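To make the request concrete, here is a minimal Python sketch of the idea (not the R example, and not part of any existing package): fit the same model on the real labels and on many label-permuted copies of the data, then compare. A toy leave-one-out nearest-centroid classifier stands in for the real model, and all function names are hypothetical.

```python
# Sketch of a label-permutation baseline for predictive performance.
# Illustrative only: a toy nearest-centroid classifier with leave-one-out
# accuracy stands in for whatever model the package actually fits.
import random
from statistics import mean

def loo_nearest_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier."""
    classes = sorted(set(y))
    correct = 0
    for i in range(len(X)):
        # Compute class centroids without sample i.
        centroids = {}
        for c in classes:
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == c]
            centroids[c] = [mean(col) for col in zip(*rows)]
        pred = min(classes, key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(X[i], centroids[c])))
        correct += pred == y[i]
    return correct / len(X)

def permutation_baseline(X, y, n_perm=200, seed=0):
    """Refit the model on randomly permuted class labels; return the true
    accuracy, the permuted accuracies, and an empirical p-value."""
    rng = random.Random(seed)
    true_acc = loo_nearest_centroid_accuracy(X, y)
    perm_accs = []
    for _ in range(n_perm):
        y_perm = list(y)
        rng.shuffle(y_perm)  # permute the class identities only
        perm_accs.append(loo_nearest_centroid_accuracy(X, y_perm))
    # Empirical p-value with the standard +1 correction.
    p = (1 + sum(a >= true_acc for a in perm_accs)) / (1 + n_perm)
    return true_acc, perm_accs, p

# Small synthetic two-class example (n = 20), chosen to illustrate how
# widely permuted accuracies can range at small sample sizes.
rng = random.Random(42)
X = ([[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(10)]
     + [[rng.gauss(3, 1), rng.gauss(3, 1)] for _ in range(10)])
y = [0] * 10 + [1] * 10
true_acc, perm_accs, p = permutation_baseline(X, y)
```

The distribution of `perm_accs` gives the chance-level reference the papers above describe; if the true-label accuracy does not clearly exceed it, the model's apparent performance may be an artifact of the small sample.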
I just wanted to circle back on the potential implementation of this. I have some R code that may help make this a reality.
Hi @abadgerw,
Thanks for the suggestion!
I need to go through your papers and check them out. Not sure if someone else wishes to contribute this, since my current availability is quite limited.
If you have a code example, please post it here; I am sure it will be of great help for getting insight into it!
Sorry for the delay. Attached is an example that provides some insight into the randomly permuted models. It also performs OPLS-DA which relates to the other open inquiry. Do you think this will help facilitate implementation?
Is the example code helpful for potential implementation?
Hi @abadgerw, thanks for sharing the example. I am sure it will be helpful as a reference point for any future implementation.
I wanted to see if there were still plans to add this?
Not discarded, still on the to-do list.