Oliver Dunbar
We have come up with some preferred reviewers, in no particular order:
- matbesancon
- odow
- ziyiyin97
- tmigot
- juliohm

If there are significant delays, we would be...
Yeah - I definitely wouldn't rule it out as being something else. I was primarily suspicious of this package because the issue only seems to occur when calling the `SciKitLearn.jl` method...
Adding this here: another nice thing would be to add correlation/covariance transformations to the data processing (see https://stats.stackexchange.com/questions/53/pca-on-correlation-or-covariance).
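For context on the linked question, a minimal standalone sketch (not EnsembleKalmanProcesses code) of the distinction: PCA on the covariance matrix is scale-sensitive, while PCA on the correlation matrix amounts to standardizing each feature first.

```julia
using Statistics, LinearAlgebra

# Toy data: 100 samples of 3 features with very different scales
X = randn(100, 3) .* [1.0 100.0 0.01]

# PCA on the covariance matrix: dominated by the large-scale feature
F_cov = eigen(Symmetric(cov(X)))

# PCA on the correlation matrix: equivalent to standardizing each column first
F_cor = eigen(Symmetric(cor(X)))

@show F_cov.values   # eigenvalues in ascending order
@show F_cor.values
```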
Merging to support code availability in the submission https://arxiv.org/abs/2407.00584
The memory issue was resolved in `darcy_accelerated.jl` by adding explicit `GC` calls between the trials; however, it may be that there is still a way of making memory usage...
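A minimal sketch of the pattern, where the trial loop and `run_trial` below are hypothetical stand-ins rather than the actual Darcy example code:

```julia
n_trials = 5

function run_trial(i)
    # stand-in for per-trial work that allocates heavily
    A = randn(2_000, 2_000)
    return sum(A * A')
end

for trial in 1:n_trials
    run_trial(trial)
    GC.gc()   # explicit full collection so memory is reclaimed before the next trial
end
```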
Added a small note in the documentation for this https://clima.github.io/EnsembleKalmanProcesses.jl/dev/ensemble_kalman_inversion/#Sparsity-Inducing-Ensemble-Kalman-Inversion
As a continuation here, we have removed the `DataMisfitController()` tests as these have started leading to test failures.
I know I am late to the party here, but SKlearn also involves Python calls. And GaussianProcesses.jl is again very slow (with regard to hyperparameter optimization) when I tried...
Preliminarily, from @szy21 we see that 8 threads give only a 2x speed-up for sampling in the EDMF example; I'll continue the investigation with other examples.
Oftentimes, the downstream dependencies will greedily harness all available threads, so calling with `-t 8` without making any code changes (e.g., without adding `Threads.@threads`) often gives significant...
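As a rough illustration of the point (a toy function, not the EDMF code): a downstream routine that already uses `Threads.@threads` internally picks up whatever thread budget Julia was started with, so launching with `julia -t 8` speeds it up with no changes on the calling side.

```julia
using Base.Threads

# A dependency written like this uses all available threads automatically.
function threaded_square!(out, xs)
    @threads for i in eachindex(xs)
        out[i] = xs[i]^2   # each iteration writes a distinct slot: race-free
    end
    return out
end

@show nthreads()             # reports 8 when launched with `julia -t 8`
xs = randn(10^7)
threaded_square!(similar(xs), xs)
```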