
This is failing for some edge cases involving arrays of size 0 (tracked in #420), so I haven't re-enabled the array API tests for this.

Thanks @abourramouss. I got it working by using the instructions at https://github.com/tomwhite/cubed/tree/main/examples/lithops/gcf, but using Python 3.9 rather than 3.8 (as that's the minimum for Cubed now):

```
conda create --name...
```

Closing this as it's quite old; please reopen if it's still a problem.

Here's a basic proof of concept: https://github.com/tomwhite/cubed/commit/5f6e38e8e790a05298ca9b7ee89a55e7b7e9edfd

Glad it worked! We can update the docs to mention those environment variables.

> I also had to use sudo for some docker commands.

Was this when running `lithops runtime...

Another thing that's missing from the docs is setting `ulimit`:

```shell
ulimit -n 1024
```

I found I needed this for running the larger lithops examples.

> Another thing that's missing from the docs is setting `ulimit`

I added a note about this in https://github.com/cubed-dev/cubed/commit/4a371a9c5c7c9458ade56463f5b23d3271d1cafd

Thanks @dcherian - great suggestion. It would be interesting to see how we could implement this in Cubed. The Python Array API spec has a [proposal](https://github.com/data-apis/array-api/pull/653) for `cumulative_sum`, which is...
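
For reference, the semantics being proposed are the familiar prefix-sum behaviour that NumPy exposes today as `cumsum` (a minimal illustration only, not Cubed code; the final spec signature may differ):

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# The proposed cumulative_sum(x) is a prefix sum, i.e. the same
# result NumPy's cumsum gives today:
print(np.cumsum(x))  # [ 1  3  6 10]
```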

I think the relevant part of the [Nvidia doc](https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-computing/chapter-39-parallel-prefix-sum-scan-cuda) is "39.2.4 Arrays of Arbitrary Size", which explains how to apply the algorithm to chunked (or blocked) arrays. We could implement...
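
Here's a minimal sketch of that blocked approach using plain NumPy rather than Cubed's blockwise machinery (the function name, `chunk_size` parameter, and 1-D restriction are just for illustration and are not how Cubed would actually structure it):

```python
import numpy as np

def blocked_cumsum(x, chunk_size):
    """Cumulative sum over a 1-D array processed in fixed-size blocks.

    Mirrors the "arrays of arbitrary size" idea from the GPU Gems chapter:
    scan each block independently, scan the per-block totals, then add each
    block's offset back in.
    """
    blocks = [x[i:i + chunk_size] for i in range(0, len(x), chunk_size)]

    # 1. Scan each block independently (embarrassingly parallel).
    scanned = [np.cumsum(b) for b in blocks]

    # 2. Take the total of each block and do an exclusive scan over the
    #    totals, so block 0 gets an offset of 0.
    totals = np.array([s[-1] for s in scanned])
    offsets = np.concatenate(([0], np.cumsum(totals)[:-1]))

    # 3. Add each block's offset to its local scan.
    return np.concatenate([s + off for s, off in zip(scanned, offsets)])

x = np.arange(10)
assert np.array_equal(blocked_cumsum(x, 4), np.cumsum(x))
```

Steps 1 and 3 are independent per block, so they map naturally onto chunked execution; only the small scan over the block totals in step 2 is inherently sequential.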