Giulio D'Ippolito
I think @joel-mb is referring to the official Python wheel of carla available on PyPI (https://pypi.org/project/carla/) not supporting Python 3.9.
@bernatx would it be possible to provide wheels for Python 3.9 whenever a new (0.9.14) release is cut? At the moment there is no wheel available on PyPI...
Is there an ETA for this request to be merged? We are hitting the same issue and it would be nice to have it fixed.
You can try setting `TORCH_CUDA_ARCH_LIST=8.0`; it should be compatible with 8.6, since both are the Ampere architecture. 7.5 is a different CUDA architecture (Turing), so it is expected not to work.
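For example, when building a CUDA extension from source, the variable just needs to be set in the environment before the build runs. A minimal sketch (the exact build command depends on your project; setting it in the shell with `export` before `pip install` works equally well):

```python
import os

# Restrict the build to compute capability 8.0 (Ampere).
# Kernels compiled for 8.0 also run on 8.6 GPUs (e.g. RTX 30xx),
# but NOT on 7.5 (Turing), which is a different architecture.
os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0"

# ...then launch the build from this process, e.g. via subprocess:
#   subprocess.run(["pip", "install", "--no-build-isolation", "."])
# PyTorch's torch.utils.cpp_extension reads TORCH_CUDA_ARCH_LIST
# when it generates the nvcc flags.
```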
@xiangyan99 I have created a simple repository to illustrate how we worked around the rate limit on the managed identity endpoint [here](https://github.com/gdippolito/azure_python_sdk_issue_26177). Our use case is mostly during AI training....
Hi @xiangyan99, thanks for the response. Would you mind clarifying what a single credential or a single storage client means? In our case we spawn many threads and processes, and I'm not sure...
Hi @xiangyan99, thanks for the response. I think this will only work if you define multiple clients sequentially. However, the token won't be cached when using multiple threads and processes....
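To illustrate the multi-thread side of the problem: a common workaround is to share one token cache behind a lock, so that N threads trigger one request to the endpoint instead of N. This is a hypothetical stdlib-only sketch (`fetch_token` stands in for something like `credential.get_token(scope)`; it is not a real azure-identity API):

```python
import threading
import time

class CachedTokenProvider:
    """Share one cached token across threads, so concurrent callers
    don't each hit the (rate-limited) managed identity endpoint."""

    def __init__(self, fetch_token, ttl_seconds=300):
        self._fetch = fetch_token   # callable that does the real request
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._token = None
        self._expires_at = 0.0

    def get_token(self):
        with self._lock:
            now = time.monotonic()
            if self._token is None or now >= self._expires_at:
                # Only the first caller (or the first after expiry)
                # actually contacts the endpoint.
                self._token = self._fetch()
                self._expires_at = now + self._ttl
            return self._token
```

Note this only helps within one process: worker processes don't share memory, so each process would still need its own cache (or an external one), which is exactly why multiprocessing workloads keep hitting the limit.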
Is there a target date for this dependency to be removed from azure-cli? The repository has been archived as of 10 April 2024, so I think it would...
I got the same issue. In my case I'm using PyTorch 2.2.2 (compiled with the C++11 ABI) and CUDA 11.8. I tried flash-attention v2 both from the wheel and compiled from source....
Hi @tridao, thanks for your response. I'm not sure why it would fail. In my case I'm building flash-attention inside a Docker container with the following commands: ``` ENV...