
About the distributed inference

Open · YiningWang2 opened this issue 2 years ago · 2 comments

Hi, I saw you uploaded inference.py. I thought it could support multi-GPU inference, so I'm wondering how to set the "--model-device" parameter. Thanks so much.

YiningWang2 · Apr 06 '22 08:04

When I set --model-device=cuda, the following error occurred: RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)

YiningWang2 · Apr 06 '22 08:04

I updated inference.py and README.md. --model-device is no longer needed; the script now uses the visible devices to do the inference. For usage of inference.py, see https://github.com/hpcaitech/FastFold#inference
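For what it's worth, "visible devices" is usually controlled with the standard `CUDA_VISIBLE_DEVICES` environment variable, set before launching the script. A minimal sketch of the mechanism (the device IDs `0,1` are purely illustrative, and this does not invoke FastFold itself):

```python
import os

# CUDA_VISIBLE_DEVICES controls which physical GPUs CUDA frameworks
# (e.g. PyTorch) can see; set it before launching the inference script.
# "0,1" is an illustrative choice exposing the first two GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# The launched process would see these devices renumbered as 0..N-1.
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(f"{len(visible)} GPU(s) visible: {visible}")
```

In a shell, the equivalent is `CUDA_VISIBLE_DEVICES=0,1 python inference.py ...` (with the actual arguments taken from the README linked above).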

For the CUDA error, please provide more hardware details and describe how you run the code.
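To gather that kind of report, a small stdlib-only script like the following can collect the basics (it assumes an NVIDIA setup where `nvidia-smi` may be on PATH, and degrades gracefully if it is not):

```python
import platform
import subprocess
import sys

# Collect basic environment details useful in a bug report.
info = {
    "python": sys.version.split()[0],
    "os": platform.platform(),
}

# GPU/driver info via nvidia-smi, if available (assumption: NVIDIA GPU).
try:
    smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
    info["gpu"] = smi.stdout.strip() or smi.stderr.strip()
except FileNotFoundError:
    info["gpu"] = "nvidia-smi not found (no NVIDIA driver on PATH?)"

for key, value in info.items():
    print(f"{key}: {value}")
```

Pasting this output alongside the exact command line used is typically enough to start diagnosing a `CUBLAS_STATUS_NOT_INITIALIZED` error, which is often a symptom of GPU memory exhaustion or a driver/runtime mismatch rather than a cuBLAS bug itself.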

Shenggan · Apr 08 '22 02:04