
Shutdown in asynchronous context leaves session and connector open

Open gbip opened this issue 10 months ago • 9 comments

When opening an asynchronous S3FileSystem, it seems that some aiohttp resources are not cleaned up properly.

I can provide an example that uses xarray, fsspec, s3fs and zarr to reproduce this issue. However, I could not find any free zarr dataset. Here are the warnings that are raised upon leaving the Python interpreter:

Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fe2071f6c60>
Unclosed connector
connections: ['deque([(<aiohttp.client_proto.ResponseHandler object at 0x7fe2071ca6f0>, 24766.009516954), (<aiohttp.client_proto.ResponseHandler object at 0x7fe2071cac30>, 24766.015895832)])']
connector: <aiohttp.connector.TCPConnector object at 0x7fe2071ab800>

It seems that a similar issue existed with gcsfs; here is the PR that patched it: https://github.com/fsspec/gcsfs/pull/657/files. HTTPFileSystem might have a similar issue: https://github.com/zarr-developers/zarr-python/issues/2674#issuecomment-2581263695.

I tried to implement a similar thing, but after trying for a few hours I gave up because I am not familiar with Python async programming :(

gbip avatar Mar 05 '25 14:03 gbip

I believe there may already be an issue referencing this, although I can't immediately see it.

How are you in fact calling s3fs in your code? The documentation recommends explicitly stashing and closing the async session yourself, since automatic cleanup is indeed tricky: https://s3fs.readthedocs.io/en/latest/#async
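
In sketch form, the documented pattern is something like this (adapt to your own setup; the work in the middle is just a placeholder):

import asyncio

import s3fs

async def main():
    # create the filesystem while an event loop is running
    fs = s3fs.S3FileSystem(asynchronous=True)
    session = await fs.set_session()  # establishes the aiobotocore client
    # ... do your async work with fs here ...
    await session.close()  # explicit cleanup before the loop goes away

asyncio.run(main())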

martindurant avatar Mar 05 '25 14:03 martindurant

Thank you for your answer!

I do something like this:

import fsspec
import xarray
import zarr

s3_endpoint = "http://localhost:19900"

fs = fsspec.filesystem(
    "s3",
    key="ACCESS",
    secret="S3CRET",
    endpoint_url=s3_endpoint,
    asynchronous=True,
)
store = zarr.storage.FsspecStore(fs, path="/mybucket/myzarr.zarr")
dataset = xarray.open_zarr(store, zarr_format=3)

I'll try to make a context manager to perform resource cleanup, as this is for a library and I don't want to put this burden on the final user.
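
Something like this is what I have in mind (rough, untested sketch; open_s3 is a hypothetical helper and it assumes the set_session / close pattern from the s3fs docs):

import contextlib

import fsspec

@contextlib.asynccontextmanager
async def open_s3(**s3_kwargs):
    # hypothetical helper: create the async filesystem, hand it to the caller,
    # and close the underlying aiobotocore session on the way out
    fs = fsspec.filesystem("s3", asynchronous=True, **s3_kwargs)
    session = await fs.set_session()
    try:
        yield fs
    finally:
        await session.close()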

EDIT: Note that I do not wish to perform async operations at the moment; however, the zarr-python library raises a warning if I do not create an asynchronous object:

UserWarning: fs (<s3fs.core.S3FileSystem object at 0x7f654795dfd0>) was not created with `asynchronous=True`, this may lead to surprising behavior

gbip avatar Mar 06 '25 08:03 gbip

Zarr should be able to do this for you automatically, I think.

The problem is that the filesystem you make is created outside of an asynchronous context. This means that by the time of cleanup (probably at interpreter shutdown), there may be no event loop running anymore.

martindurant avatar Mar 06 '25 14:03 martindurant

I had a similar issue, but using async s3fs directly with this sort of pattern:

  • Created the async s3fs file system outside of any async loop, in this case in a class's constructor (stored as self.fs)
  • Used the file system inside the async loop in some function call (executed via loop.run_until_complete(...))
  • The loop would either stop or exit scope, and this error would occur

Calling s3fs.S3FileSystem.close_session(asyncio.get_event_loop(), self.fs.s3) at the end of the work done inside the async loop solved the problem.

This had to be called inside the async function, otherwise the loop would potentially not be running. I'm wondering if close_session could be slightly modified to grab the current event loop if the file system is in async mode. Not sure.
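
For reference, the pattern I ended up with looks roughly like this (simplified sketch; class and method names are illustrative, and whether calling close_session from inside the loop is safe may depend on the s3fs/fsspec versions):

import asyncio

import s3fs

class Worker:
    def __init__(self):
        # filesystem created outside of any running event loop
        self.fs = s3fs.S3FileSystem(asynchronous=True)

    async def run(self):
        # ... the actual S3 work with self.fs goes here ...
        # close the aiobotocore client while the loop is still running
        s3fs.S3FileSystem.close_session(asyncio.get_event_loop(), self.fs.s3)

worker = Worker()
loop = asyncio.get_event_loop()
loop.run_until_complete(worker.run())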

NickGeneva avatar May 13 '25 17:05 NickGeneva

I'm wondering if close_session could be slightly modified to grab the current event loop if the file system is in async mode.

I'm afraid not: running on a different loop would be an error; closed loops cannot be restarted; new loops cannot be created at shutdown time. I think it's OK to have the user warned about open resources when the interpreter is exiting anyway, and it does no harm.

martindurant avatar May 13 '25 18:05 martindurant

Thanks @martindurant

Maybe another naive question, but I'm curious why we don't see such warnings when using something like HTTP or, I think, even the gcs file systems, which are both async as well.

Is it because this is a problem that's unique to the use of aiobotocore (vs. the use of just aiohttp)? No need for a deep investigation, but just something I think I observed.

Something like HTTPFileSystem seems to follow the more context-manager route when dealing with aio sessions, so they always get properly closed, perhaps.
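
i.e. something along the lines of this generic aiohttp sketch (nothing s3fs-specific, just the shape of the pattern):

import aiohttp

async def fetch(url):
    # the session is closed when the block exits, so no
    # "Unclosed client session" warning shows up at shutdown
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()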

NickGeneva avatar May 13 '25 18:05 NickGeneva

http and gcsfs both depend on aiohttp, so they have the same behaviour, and it proves relatively easy to find and close the underlying connection even in non-async code. aiobotocore is better at hiding its internals.

The line s3._client._endpoint.http_session._connector._close() is supposed to do it when async methods fail, but perhaps it isn't getting called? Perhaps there is a reference cycle.

martindurant avatar May 13 '25 18:05 martindurant

I wonder if the upstream libraries could expose to the end user the explicit way to call something like await session.close() when the user knows the work is done and they can disconnect. Perhaps this would be a more straightforward approach than attempts to clean up resources on __del__ or other tricky workarounds.

alessio-locatelli avatar May 14 '25 06:05 alessio-locatelli

@alessio-locatelli, that is exactly what is suggested in https://filesystem-spec.readthedocs.io/en/latest/async.html#using-from-async . There are two issues:

  • aiobotocore is actually designed to be used with async context managers, which are a pain (see the sketch below)
  • at interpreter shutdown, __del__ might be called in any thread, and we can't depend on the state of the event loop at that point, so we still need some kind of fallback.
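
For reference, aiobotocore's own intended usage looks roughly like this (sketch; exact API may vary between aiobotocore versions):

from aiobotocore.session import get_session

async def list_keys(bucket):
    session = get_session()
    # the client itself is an async context manager and is closed on exit
    async with session.create_client("s3") as client:
        resp = await client.list_objects_v2(Bucket=bucket)
        return [obj["Key"] for obj in resp.get("Contents", [])]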

martindurant avatar May 14 '25 14:05 martindurant