Christopher Barber
Ok. For now, I am just going to try to hack my Azure subclasses to get this working for my case.
Ideally you should support both regular and gen2 storage in the same subclass because there is no way to tell from the URL alone which one it is. A URL...
No, that is not the case. gen2 storage can be accessed using `.blob.` paths using the blob api. I don't know if the `.dfs.` URL is just an alias or...
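The point that the same data is reachable from both endpoints can be illustrated with a simple host rewrite. This is only a sketch assuming the standard public-cloud host naming (`account.blob.core.windows.net` / `account.dfs.core.windows.net`); the function names are illustrative, not part of any existing API:

```python
from urllib.parse import urlparse, urlunparse

def to_blob_endpoint(url: str) -> str:
    """Rewrite a .dfs. storage URL to its .blob. equivalent.

    Gen2 accounts expose the same data on both endpoints; this just
    swaps the host component, assuming standard public-cloud naming.
    """
    parts = urlparse(url)
    host = parts.netloc.replace(".dfs.core.windows.net", ".blob.core.windows.net")
    return urlunparse(parts._replace(netloc=host))

def to_dfs_endpoint(url: str) -> str:
    """Inverse rewrite: .blob. host back to the .dfs. host."""
    parts = urlparse(url)
    host = parts.netloc.replace(".blob.core.windows.net", ".dfs.core.windows.net")
    return urlunparse(parts._replace(netloc=host))
```

Which endpoint a library should prefer is a separate design question; the rewrite only shows that the URL host alone does not distinguish the account type.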
FYI: here is my subclass that implements the `azure:////...` URL scheme and supports gen2 storage. I haven't tested it exhaustively but it seems to work for my use cases. ```python...
This is also broken for blob storage using Data Lake Storage Gen2, which DOES have a concept of hierarchical directory namespaces.
If you create a Data Lake Gen2 storage container and create a directory in it using `mkdir()` (which works), the current implementation returns False for `is_dir()` and subsequently does...
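On an HNS-enabled (Gen2) account, a directory created through the Data Lake API is visible through the blob API as a zero-length blob whose metadata contains `hdi_isfolder: "true"`. A hedged sketch of an `is_dir()` check that recognizes both that marker and the flat-namespace prefix convention; `fetch_metadata` and `list_prefix` are hypothetical stand-ins for the real SDK calls:

```python
from typing import Callable, Iterable, Mapping, Optional

def is_dir(
    key: str,
    fetch_metadata: Callable[[str], Optional[Mapping[str, str]]],
    list_prefix: Callable[[str], Iterable[str]],
) -> bool:
    """Directory test covering both flat and Gen2 (HNS) accounts.

    fetch_metadata(key) -> blob metadata dict, or None if no blob exists.
    list_prefix(prefix) -> iterable of blob names under the prefix.
    Both are illustrative stand-ins for the actual storage SDK calls.
    """
    meta = fetch_metadata(key)
    if meta is not None and meta.get("hdi_isfolder", "").lower() == "true":
        # Gen2 directory marker blob created by mkdir() on an HNS account
        return True
    # Fall back to the flat-namespace convention: any blob whose name
    # starts with "key/" implies a virtual directory.
    prefix = key.rstrip("/") + "/"
    return next(iter(list_prefix(prefix)), None) is not None
```

With only the prefix fallback, an empty directory created by `mkdir()` has no children and therefore looks like a non-directory, which matches the reported bug.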
FYI, here is a similar feature I added to the AzurePath hack I posted on #157: ```python @register_path_class('azure2') class AzurePath(AzureBlobPath): """CloudPath for accessing Azure blob/dfs storage using azure:// URL scheme."""...
The simplest thing to do would be to provide ways in the public API to clear or ignore the cached value. I would start out with that. E.g.: * Add...
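The shape of that public API could be as small as the following sketch; the names `clear_cache` and `ignore_cache` are suggestions, not an existing interface:

```python
from typing import Callable, Optional

class CachedDirState:
    """Holds a cached is_dir answer with explicit invalidation.

    Sketch of the suggested controls: clear_cache() drops the cached
    value, and is_dir(ignore_cache=True) bypasses it for a single call.
    """

    def __init__(self, probe: Callable[[], bool]):
        self._probe = probe            # callable doing the real storage check
        self._cached: Optional[bool] = None

    def clear_cache(self) -> None:
        self._cached = None

    def is_dir(self, ignore_cache: bool = False) -> bool:
        if ignore_cache or self._cached is None:
            self._cached = self._probe()
        return self._cached
```

Starting with these two escape hatches keeps the default fast path cached while letting callers who just mutated storage get a fresh answer.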
BTW, it probably would also be a good idea to cache other information returned in the directory entries that could be used to populate the `stat()` call, e.g. the file...
Note that methods that create an entry should make sure to update the cached state appropriately.
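Both ideas together, a stat cache filled from directory-listing entries and kept consistent by entry-creating methods, might look like this sketch (the entry fields and method names are illustrative, and the tuple listing stands in for the SDK's directory-entry objects):

```python
from dataclasses import dataclass
from typing import Dict, Iterable, Optional, Tuple

@dataclass
class StatEntry:
    size: int
    mtime: float          # POSIX timestamp from the listing's last-modified
    is_dir: bool

class StatCache:
    """Answers stat() from cached directory-listing results.

    Listing a directory already returns size and modified time per entry,
    so caching them avoids a per-file round trip when stat() is called
    later. Methods that create entries must update the cache so it never
    reports a just-created path as missing.
    """

    def __init__(self):
        self._entries: Dict[str, StatEntry] = {}

    def populate_from_listing(
        self, listing: Iterable[Tuple[str, int, float, bool]]
    ) -> None:
        # listing yields (name, size, mtime, is_dir) tuples
        for name, size, mtime, is_dir in listing:
            self._entries[name] = StatEntry(size, mtime, is_dir)

    def stat(self, name: str) -> Optional[StatEntry]:
        return self._entries.get(name)

    def on_created(self, name: str, is_dir: bool, mtime: float) -> None:
        # called by mkdir()/write methods to keep the cache consistent
        self._entries[name] = StatEntry(0, mtime, is_dir)
```

The `on_created` hook is the part the comment above asks for: without it, a cached miss would survive a successful `mkdir()` and reproduce the stale `is_dir()` behavior.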