Tom White
Closing since this was fixed in #192
Hi @shoyer, thanks for the great questions! I haven't done any performance comparisons, but a lot of effort has been made to make Cubed scale horizontally, with conservative modelling/prediction of...
> Tom, sorry to hijack your issue tracker to discuss other projects! 🙃

That's fine! This is all very interesting - I feel it should be possible to combine efforts...
Thinking about this more, it should be possible for Cubed to *delegate* to Xarray-Beam for its two "primitive ops" (https://github.com/tomwhite/cubed#design): blockwise and rechunk. Cubed implements the whole of the array...
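To make the delegation idea concrete, here is a toy sketch of the rechunk primitive expressed as a Beam shuffle. This is purely illustrative: it handles a single 1-D array, and the `split_for_target`/`assemble` helpers are made up for this example rather than being Cubed's or Xarray-Beam's actual API.

```python
# Toy "rechunk as a Beam shuffle" for a 1-D array: slice each source chunk
# into pieces that align with the target chunking, regroup by target chunk
# index, and reassemble each target chunk. Illustrative only.
import apache_beam as beam
import numpy as np


def split_for_target(element, source_chunk, target_chunk):
    """Yield (target_index, (global_offset, piece)) for one source chunk."""
    index, block = element
    start = index * source_chunk          # global start of this source chunk
    stop = start + len(block)
    first = start // target_chunk
    last = (stop - 1) // target_chunk
    for t in range(first, last + 1):
        lo = max(start, t * target_chunk)
        hi = min(stop, (t + 1) * target_chunk)
        yield t, (lo, block[lo - start:hi - start])


def assemble(element):
    """Concatenate the pieces of one target chunk in offset order."""
    index, pieces = element
    ordered = [piece for _, piece in sorted(pieces, key=lambda t: t[0])]
    return index, np.concatenate(ordered)


source_chunk, target_chunk = 4, 6
data = np.arange(12)
source_chunks = [(i, data[i * source_chunk:(i + 1) * source_chunk]) for i in range(3)]

with beam.Pipeline() as p:
    (
        p
        | beam.Create(source_chunks)
        | beam.FlatMap(split_for_target, source_chunk, target_chunk)
        | beam.GroupByKey()
        | beam.Map(assemble)
        | beam.Map(print)  # prints (0, [0..5]) and (1, [6..11])
    )
```

Xarray-Beam's actual rechunking operates on keyed `xarray.Dataset` chunks and is considerably more sophisticated about memory, but the shuffle structure is the part a Cubed primitive could hand off.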
> it should be possible for Cubed to _delegate_ to Xarray-Beam for its two "primitive ops" (https://github.com/tomwhite/cubed#design): blockwise and rechunk

I've created a prototype that does this here: https://github.com/tomwhite/cubed/tree/xarray-beam The...
> I do wonder whether the overhead of building separate pcollections for each array will turn out to be problematic (vs. putting all arrays in an xarray.Dataset into a single...
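For context, here is a minimal sketch of the two designs being compared; the variable names and tiny in-memory datasets are invented for illustration. Option (a) builds one PCollection per array, while option (b) uses a single PCollection of keyed `xarray.Dataset` chunks, which is roughly the Xarray-Beam data model.

```python
# Two ways to model multiple arrays in one Beam pipeline (illustrative only;
# downstream transforms are omitted).
import apache_beam as beam
import numpy as np
import xarray as xr

with beam.Pipeline() as p:
    # (a) one PCollection per array: every array needs its own set of
    # transforms (and, for rechunking, its own shuffle).
    temps = p | "temps" >> beam.Create([(0, np.zeros(4)), (1, np.ones(4))])
    winds = p | "winds" >> beam.Create([(0, np.zeros(4)), (1, np.ones(4))])

    # (b) one PCollection of (chunk_key, xarray.Dataset) elements: all
    # variables travel through the same transforms and a single shuffle.
    chunks = p | "dataset" >> beam.Create([
        (0, xr.Dataset({"temp": ("x", np.zeros(4)), "wind": ("x", np.zeros(4))})),
        (1, xr.Dataset({"temp": ("x", np.ones(4)), "wind": ("x", np.ones(4))})),
    ])
```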
> I tried adding xarray to the lithops runtime `requirements.txt` but no change.

I did this then ran

```shell
lithops runtime build -f requirements.txt cubed-runtime -b gcp_functions
lithops runtime deploy...
```
> I think this is due to the large number of rounds in the reduce. This could possibly be improved by trying a larger `allowed_mem` (since it can then do...
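A rough back-of-the-envelope illustration of why a larger `allowed_mem` helps: if each task can hold more partial results in memory, the fan-in per round goes up and the number of rounds goes down. The numbers and the simple fan-in formula below are made up for illustration and are not Cubed's actual model.

```python
# Toy model of a tree reduce: with a budget of allowed_mem, each task can
# combine roughly allowed_mem // chunk_bytes partial results per round, so the
# number of rounds is about log base fan_in of the number of chunks.
import math

n_chunks = 10_000
chunk_bytes = 100 * 2**20  # 100 MiB per partial result

for allowed_mem in (200 * 2**20, 2 * 2**30, 8 * 2**30):
    fan_in = max(2, allowed_mem // chunk_bytes)
    rounds = math.ceil(math.log(n_chunks, fan_in))
    print(f"allowed_mem={allowed_mem / 2**30:.2f} GiB -> fan-in {fan_in}, {rounds} rounds")
```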
I changed the `mean` to a `sum` and the 21 min runtime went down to 7 min. I think this shows that the overhead of using structured arrays is significant,...
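For anyone wondering why `mean` involves structured arrays at all: a distributed mean has to carry a (sum, count) pair per chunk through the reduction, and one common way to do that is to pack the pair into a NumPy structured array so it flows through the same reduction machinery as a plain array. The sketch below is a generic illustration of that pattern, not Cubed's implementation.

```python
# Generic sketch: a chunked sum carries one plain value per chunk, while a
# chunked mean carries a structured (sum, count) record per chunk.
import numpy as np

chunks = [np.arange(4), np.arange(4, 8), np.arange(8, 12)]

# sum: plain partials, combined with a plain add
partial_sums = np.array([c.sum() for c in chunks])
total = partial_sums.sum()

# mean: structured partials, combined field by field, divided at the end
dtype = np.dtype([("sum", "f8"), ("n", "i8")])
partials = np.array([(c.sum(), c.size) for c in chunks], dtype=dtype)
mean = partials["sum"].sum() / partials["n"].sum()

print(total, mean)  # 66 5.5
```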
Closing as this is fixed now