Tom Nicholas
#297 looks good, thank you @elyall ! > The question is, do you want to create a `dataarray_from_kerchunk_refs` equivalent to `dataset_from_kerchunk_refs` or do you want to add an optional `key`...
I've established that my code is creating one extra S3 client per task...
I feel like if I were able to follow the `executor.get_result(futures)` pattern then this would work... But `RetryingFunctionExecutor` doesn't have that method yet (see https://github.com/lithops-cloud/lithops/pull/1291#issue-2212994457). So I tried looking at...
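For reference, the pattern I mean is just the standard lithops one — a minimal sketch, with `hello` as a placeholder task:

```python
import lithops


def hello(name: str) -> str:
    return 'Hello {}!'.format(name)


fexec = lithops.FunctionExecutor()              # backend/storage picked up from lithops config
futures = fexec.map(hello, ['Alice', 'Bob'])    # one future per input element
results = fexec.get_result(futures)             # block until all tasks finish, gather outputs
print(results)
```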
It seems the client connection gets created when the `S3Backend` class gets instantiated, which is a specific implementation of `StorageBackend`. So I'm somehow triggering lots of `StorageBackends` to be...
Yeah I thought the same thing. I looked at your implementation and tried experimenting locally to copy what you did in Cubed @TomWhite, but all your straggler logic made it...
I wondered if it had something to do with using / not using the executor as a context manager... I would have naively expected that the S3 client would connect...
Here's what I think should be a reproducer:

Script

```python
import lithops
from lithops.retries import RetryingFunctionExecutor


def hello(name: str) -> str:
    return 'Hello {}!'.format(name)


def map_over_names(names: list[str], *, use_retries: bool)...
```
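For context, a heavily hedged sketch of what the body of `map_over_names` might look like — this is not the continuation of the truncated script above, and the `RetryingFunctionExecutor.map` / `wait` / `result` calls are my assumption of the retries API from lithops-cloud/lithops#1291:

```python
import lithops
from lithops.retries import RetryingFunctionExecutor


def hello(name: str) -> str:
    return 'Hello {}!'.format(name)


def map_over_names(names: list[str], *, use_retries: bool) -> list[str]:
    # Hypothetical body: toggle between the plain executor and the retrying
    # wrapper so the two code paths can be compared (e.g. for S3 client creation).
    if not use_retries:
        fexec = lithops.FunctionExecutor()
        futures = fexec.map(hello, names)
        return fexec.get_result(futures)

    # Assumed API of the retries wrapper: map() returns RetryingFutures,
    # wait() blocks and returns (done, pending). Result order may differ
    # from input order.
    with RetryingFunctionExecutor(lithops.FunctionExecutor()) as executor:
        futures = executor.map(hello, names, retries=2)
        done, _pending = executor.wait(futures)
        return [f.result() for f in done]
```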
I did not, and still need to sort it out
I also think that easily saving the output of a single cell (as png/html) would be a useful feature. I think some people here are suggesting a new cell magic,...
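For example, a cell magic along those lines could be sketched on top of IPython's capture utilities — a rough sketch, where the `%%save_output` name and the HTML-writing details are hypothetical, not an existing magic:

```python
import base64

from IPython import get_ipython
from IPython.core.magic import register_cell_magic
from IPython.utils.capture import capture_output


@register_cell_magic
def save_output(line, cell):
    """Run the cell, re-display its output, and also write it to an HTML file."""
    with capture_output() as captured:
        get_ipython().run_cell(cell)
    captured.show()  # still show the output in the notebook as usual

    path = line.strip() or "cell_output.html"
    with open(path, "w") as f:
        for output in captured.outputs:
            data = output.data
            if "text/html" in data:
                f.write(data["text/html"])
            elif "image/png" in data:
                png = data["image/png"]
                if isinstance(png, bytes):
                    png = base64.b64encode(png).decode("ascii")
                f.write(f'<img src="data:image/png;base64,{png}">')
            else:
                f.write(f"<pre>{data.get('text/plain', '')}</pre>")
```

Usage in a notebook would then just be `%%save_output my_figure.html` at the top of the cell.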
Note that you can use this same idea to open data quickly that's referred to by Kerchunk / DMR++ files, because that's also already cloud-optimized:

```python
from virtualizarr.parsers import DMRPPParser...
```
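And if you just have a Kerchunk reference file and don't need VirtualiZarr's parsers at all, the classic pattern documented by kerchunk is to open it lazily via fsspec's reference filesystem through xarray's zarr engine — a sketch with placeholder paths and storage options:

```python
import xarray as xr

# Hypothetical reference file and bucket; adjust the storage options to your data.
ds = xr.open_dataset(
    "reference://",
    engine="zarr",
    backend_kwargs={
        "consolidated": False,
        "storage_options": {
            "fo": "s3://my-bucket/combined.json",  # the kerchunk reference file
            "remote_protocol": "s3",               # protocol of the referenced chunks
            "remote_options": {"anon": True},      # options for the underlying S3 filesystem
        },
    },
)
print(ds)
```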