Lehman Garrison
I'll mention that my own timeline for getting blosc compression working is fairly tight, as I'm working under deadline pressure and need to start compressing my data this week. But...
I agree that's a concern. I explored this a bit while tuning the ASDF blocking and blosc blocking factors for my dataset, and the answers seemed to vary according to...
The 4 MB is the default compression block size that's currently in `compression.py`. I was calling it the "ASDF block size" to distinguish it from the blosc block size, even...
In that case, perhaps it could be a method of the `AsdfFile` or the tree, rather than the `NDArrayType`? Such as:

```python
with asdf.open(fn) as af:
    tupleofkeys = ('data', 'a', 'b')  # ...
```
I also ran into this. One trick to work around this in GH Actions is to install the build requirements (including `Cython
I'm seeing the same. I've included a GDB backtrace in case it's useful. Google reveals all sorts of problems that cause segfaults in `elf_machine_rela`, but I haven't found anything that...
Sounds nice! Although I don't actually know the breakdown of time spent in the distance calculation versus binning these days. Do you know?
True, there might be an efficiency gain from larger histogramming chunks. If we want to optimize the histogram update for the case that `ravg` isn't needed, then the hybrid lookup...
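For reference, here's a minimal sketch of the hybrid-LUT idea as I understand it: a coarse, evenly spaced lookup table maps a squared distance to a candidate bin, and a short linear walk refines the answer for non-uniform bin edges. All names here (`build_lut`, `bin_index`, the squared-edge layout) are hypothetical, not the actual implementation:

```python
import numpy as np

def build_lut(bin_edges_sq, n_lut=1024):
    """Build a coarse LUT from squared distance to a candidate bin index.

    bin_edges_sq: sorted array of *squared* bin edges (hypothetical layout,
    so the pair loop never needs a sqrt when ravg isn't requested).
    """
    rmax_sq = bin_edges_sq[-1]
    # Evenly spaced squared-distance samples covering [0, rmax^2)
    samples = np.linspace(0.0, rmax_sq, n_lut, endpoint=False)
    # For each sample, the bin whose interval contains it (never overshoots,
    # since each LUT cell stores the bin of its *left* endpoint)
    lut = np.searchsorted(bin_edges_sq, samples, side='right') - 1
    return lut, n_lut / rmax_sq

def bin_index(r2, bin_edges_sq, lut, scale):
    """Hybrid lookup: O(1) LUT guess, then a short forward refinement."""
    i = lut[int(r2 * scale)]  # coarse guess (assumes 0 <= r2 < rmax^2)
    # Walk forward while r2 lies beyond the next edge
    while i + 1 < len(bin_edges_sq) and r2 >= bin_edges_sq[i + 1]:
        i += 1
    return i
```

With a LUT much finer than the bin spacing, the refinement loop almost never iterates, so the common path is one table load plus one comparison, regardless of whether the edges are linear or log-spaced.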
> I will check out the hybrid LUT. Though my experience with the linear binning suggests that the bottleneck is not at the bin-index computing.

That's what I would have...
> Huh - do you know what they did?

I think it's all in their paper; I haven't investigated beyond that. Looks like the source code is [here](https://github.com/cheng-zhao/FCFC/).

> We...