Adam Li
To implement this, we want a Cython splitter that takes in: - lists of numpy arrays of shape (n_patch, n_patch), which are the "kernels" that weight the image (a rough sketch of the idea follows below)...
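For intuition, here is a minimal NumPy sketch (not the actual Cython splitter) of what one such kernel does: it weights a patch of the image and reduces it to a single scalar that the splitter could threshold. The function name and signature here are hypothetical.

```python
import numpy as np

def apply_patch_kernel(image, kernel, top, left):
    # Hypothetical helper: weight one image patch by the kernel and
    # sum, yielding the scalar split feature for this
    # (kernel, location) pair.
    n_patch = kernel.shape[0]
    patch = image[top:top + n_patch, left:left + n_patch]
    return float(np.sum(patch * kernel))

# Example: a 3x3 averaging kernel on a random 8x8 "image".
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = np.full((3, 3), 1.0 / 9.0)
print(apply_patch_kernel(image, kernel, top=2, left=4))
```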
https://github.com/angus924/rocket and https://github.com/angus924/minirocket/tree/main/code and the related papers would be interesting to compare against.
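For context, ROCKET/MiniRocket transform each series with many random convolutional kernels and feed simple pooled statistics (per-kernel max and proportion of positive values) to a linear classifier. A rough sketch of the idea, with the kernel parameterization simplified relative to the papers:

```python
import numpy as np

def rocket_like_features(x, n_kernels=100, seed=0):
    # Simplified sketch of ROCKET-style random-kernel features on a
    # 1-D series; the real method also randomizes dilation, padding,
    # and bias ranges as described in the papers.
    rng = np.random.default_rng(seed)
    feats = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        weights = rng.standard_normal(length)
        weights -= weights.mean()          # zero-center, as in ROCKET
        bias = rng.uniform(-1.0, 1.0)
        conv = np.convolve(x, weights, mode="valid") + bias
        feats.append(conv.max())           # global max pooling
        feats.append((conv > 0).mean())    # proportion of positive values (PPV)
    return np.asarray(feats)

x = np.sin(np.linspace(0.0, 10.0, 200))
print(rocket_like_features(x).shape)  # (200,): 2 features per kernel
```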
Relevant too: https://projecteuclid.org/journals/annals-of-statistics/volume-50/issue-6/Local-permutation-tests-for-conditional-independence/10.1214/22-AOS2233.short
Do we wanna leave this open @PSSF23?
I asked @jdey4 if he could post a GH issue, so I'm unsure how he's running things. It is true that MORF is not very well tested or benchmarked currently...
Possibly something for Edward's team et al. to consider? @jovo It would be nice to have some measure of performance that we can run from n_samples = 100 up to n_samples >> 100.
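As a concrete starting point, here is a minimal sketch of that kind of sweep, using synthetic data and scikit-learn's RandomForestClassifier as a stand-in for MORF (the estimator and sample sizes are placeholders):

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sweep n_samples from 100 to well beyond 100, recording test
# accuracy and fit time at each size.
for n_samples in [100, 500, 1000, 5000, 10000]:
    X, y = make_classification(n_samples=n_samples, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    print(f"n={n_samples:>6} acc={clf.score(X_test, y_test):.3f} fit={elapsed:.2f}s")
```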
Ah I see. That's interesting. I wouldn't expect that to happen. How many trees are you training simultaneously?
Sorry, I am asking how many jobs you are training in parallel. I.e., if you're training 100 trees in parallel, I am less surprised that you're running out of RAM.
Ah I see... so that is training 1 tree at a time. Can you tell me: 1. How deep is one tree? 2. If you do `clf.estimators_[0].tree_.get_projection_matrix()`, what is an example...
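Something like the following would answer both questions. This assumes a fitted MORF-style ensemble `clf` with the quoted `get_projection_matrix()` method; the depth accessor follows scikit-learn's API, and the actual attribute names may differ:

```python
# Assumes `clf` is an already-fitted MORF-style forest, trained
# with n_jobs=1 so only one tree is built at a time.
first_tree = clf.estimators_[0]

# 1. How deep is one tree? (scikit-learn-style accessor)
print("depth:", first_tree.get_depth())

# 2. What does an example projection matrix look like?
proj = first_tree.tree_.get_projection_matrix()
print("type:", type(proj))
print("shape:", getattr(proj, "shape", None))
```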
My inclination is to just do 1.