Results: 47 comments by Cameron Hummels

For further reference, using `n_neighbors = 1` and `n_samples = 3e6` (the same number of particles now as the number of cells in the original dataset), we still don't entirely...
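
For concreteness, here is a minimal sketch of what `n_neighbors = 1` corresponds to, using SciPy's `cKDTree` on placeholder positions (the actual sampling from the grid dataset is not shown): each particle's smoothing length is its distance to its single nearest neighbor.

```python
import numpy as np
from scipy.spatial import cKDTree

# Placeholder particle positions; the real ones would come from sampling
# the grid dataset. n_samples matches the 3e6 quoted above.
rng = np.random.default_rng(0)
positions = rng.random((3_000_000, 3))

tree = cKDTree(positions)
# k=2 because the closest "neighbor" of each point is the point itself
# (column 0); column 1 is the true nearest neighbor, i.e. n_neighbors = 1.
# workers=-1 parallelizes the query (SciPy >= 1.6).
d, _ = tree.query(positions, k=2, workers=-1)
smoothing_length = d[:, 1]
```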

It seems to me that because we're producing the particle-based dataset from the original grid-based dataset, there is additional information present that can be used for the `smoothing_length` instead of...
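
A minimal sketch of that idea, assuming the sampling step records which cell each particle came from: the smoothing length is taken from the originating cell's width rather than from a neighbor search. The array name and the scale factor are assumptions for illustration.

```python
import numpy as np

# Hypothetical: cell_width_of_particle[i] is the width of the grid cell
# that particle i was sampled from, carried over from the sampling step.
cell_width_of_particle = np.full(300_000, 1.0 / 64)

# Tie the smoothing length to the local grid resolution; the O(1)
# prefactor is a free choice, not fixed by the discussion above.
smoothing_length = 2.0 * cell_width_of_particle
```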

Yeah, a bit like Google Translating things back and forth. I think it would introduce some noise, but it should in principle work.

@MatthewTurk The bottleneck in this process is never the Monte Carlo step. When the number of particles (`n_samples`) is large (>= 1e6), the bottleneck is the creation...
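
A rough timing sketch of that claim, with placeholder data: the Monte Carlo draw of cell indices is cheap next to building a KD-tree over the sampled positions. Cell masses and positions here are random stand-ins, not the actual dataset.

```python
import time
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Monte Carlo step: draw cells with probability proportional to cell mass.
mass = rng.random(64**3)  # placeholder cell masses
t0 = time.perf_counter()
idx = rng.choice(mass.size, size=n_samples, p=mass / mass.sum())
t1 = time.perf_counter()

# KD-tree construction over the sampled positions (placeholders here).
positions = rng.random((n_samples, 3))
t2 = time.perf_counter()
tree = cKDTree(positions)
t3 = time.perf_counter()

print(f"monte carlo draw: {t1 - t0:.2f}s   kdtree build: {t3 - t2:.2f}s")
```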

@agurvich I can try with 1e7 particles, but it's going to take like 8-12 hours to run, and I don't think this is a viable solution for most datasets, given...

I addressed a bunch of the feedback from the first iteration. It now works with a specified data object; it can either calculate the smoothing length from the kdtree or...
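
A sketch of the two smoothing-length paths described above, written as a plain helper over NumPy arrays; the function name and signature are hypothetical, not the PR's actual interface.

```python
import numpy as np
from scipy.spatial import cKDTree

def smoothing_lengths(positions, cell_widths=None, n_neighbors=32):
    """Hypothetical helper mirroring the two options: reuse the originating
    cell widths if the sampling step recorded them, otherwise fall back to
    an n_neighbors KD-tree search."""
    if cell_widths is not None:
        return np.asarray(cell_widths)
    tree = cKDTree(positions)
    # k includes the point itself, so ask for one extra neighbor.
    d, _ = tree.query(positions, k=n_neighbors + 1, workers=-1)
    return d[:, -1]
```

Either branch returns one length per particle, so the caller can switch between them without changing anything downstream.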

Yeah, I think this would be useful to go in, since all the basic functionality is there. I guess we just never settled on what to use as the smoothing...

Good idea. Perhaps we could list this in the docs as well as add a framework/note in the skeleton frontend.

You're both right that this PR has sat idle for a long while, and it should be addressed for everyone's benefit. Let's figure out a solution, so we can move...

To resolve the current issue and merge this PR, is it possible to just remove the text surrounding the deprecation cycle policy that still seems under discussion? I don't think...