shenggan

Results 47 comments of shenggan

Hi, you can try the current latest main branch, which fixes this timeout issue. The reason for the timeout is that multiprocessing inference launches multiple processes and uses one of...

I think you can try the new code in the main branch. Or you can send T1050.fasta to me, and I will run a performance test on our machines.

We ran T1050.fasta on a single A100 (80 GB PCIe) with the new code in the main branch (9ab281fea); the inference time of FastFold is 65.22272 s. Here is the log: [inference-only.log](https://github.com/hpcaitech/FastFold/files/9572925/inference-only.log) Issue closed,...

The most likely reason is indeed out of memory: for single-precision inference on a 40 GB card, a sequence length of about 5000 is the limit. It is recommended to use --inplace --chunk_size...
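To illustrate why a chunked option lowers peak memory, here is a minimal sketch (not FastFold's actual implementation): instead of materializing a full pairwise table over the sequence at once, only `chunk_size` rows are alive at a time. The function names here are hypothetical.

```python
def pairwise_sums_full(seq):
    # Naive: builds the full N x N table in memory at once.
    return [[a + b for b in seq] for a in seq]

def pairwise_sums_chunked(seq, chunk_size):
    # Chunked: only chunk_size rows exist at a time; a real kernel
    # would consume or reduce each chunk instead of keeping it.
    for start in range(0, len(seq), chunk_size):
        yield [[a + b for b in seq] for a in seq[start:start + chunk_size]]

seq = list(range(8))
full = pairwise_sums_full(seq)
chunked = [row for chunk in pairwise_sums_chunked(seq, 3) for row in chunk]
assert chunked == full  # same result, lower peak memory
```

The trade-off is the usual one: smaller chunks mean lower peak memory but more kernel launches, so throughput can drop slightly.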

I think you can check `args.gpus` in the code. It should be 3 if you added the parameter correctly. AlphaFold's embedding representations take up a lot of memory as...
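A quick way to verify that the value is parsed as intended is to test the parser in isolation. This is a hypothetical sketch: the flag name `--gpus` is assumed here, so check it against the project's actual argument parser.

```python
import argparse

# Minimal stand-in for the real parser, just to confirm parsing behavior.
parser = argparse.ArgumentParser()
parser.add_argument("--gpus", type=int, default=1)

args = parser.parse_args(["--gpus", "3"])
print(args.gpus)  # → 3
```

If this prints 1 instead of 3 in your setup, the flag is not reaching the parser (e.g. a typo in the launch command or a wrapper script swallowing it).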

Intel MKL has no support for `fftw3_mpi` by default; you may need to build your own MPI FFTW3 wrappers. Hope this document can help you: https://software.intel.com/en-us/mkl-developer-reference-c-mpi-fftw3-wrappers.

Could you please check your CUDA environment; you should have the nvcc compiler:

```shell
nvcc -V
```

If you do not have the CUDA compiler, the conda environment may only...
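If you prefer to check from Python, a small sketch like the following can tell you whether `nvcc` is on `PATH` at all. The assumption (which may not hold for every install layout) is that a missing `nvcc` means the environment ships only the CUDA runtime, not the compiler toolchain.

```python
import shutil

def cuda_compiler_available():
    """Return the path to nvcc if it is on PATH, else None."""
    return shutil.which("nvcc")

path = cuda_compiler_available()
if path is None:
    print("nvcc not found: the environment may ship only the CUDA runtime")
else:
    print(f"nvcc found at {path}")
```

This only checks `PATH`; if CUDA is installed under a custom prefix, you may also need to export `CUDA_HOME` or extend `PATH` accordingly.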

OK, could you please provide your CUDA path with `which nvcc`, and the way you installed `triton`? The simplest way is to uninstall `triton`, and the code will fall back to...
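The fallback typically works through a guarded import, so simply uninstalling `triton` is enough to disable that path. Here is a sketch of the pattern (an assumed structure for illustration, not FastFold's actual code):

```python
# Guarded import: if triton is absent, fall back to the CUDA kernel path.
try:
    import triton  # noqa: F401
    HAS_TRITON = True
except ImportError:
    HAS_TRITON = False

def pick_kernel():
    """Select the kernel backend based on what is importable."""
    return "triton" if HAS_TRITON else "cuda"

print(pick_kernel())
```

Because the selection happens at import time, no code changes are needed: reinstalling `triton` later re-enables that backend automatically.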

The log suggests a network problem: you cannot download LLVM from GitHub. You should use `pip install triton==2.0.0.dev20221005` to install that specific version of triton. The main branch...

The expected output file is correct. You can already get great acceleration with the CUDA kernel when triton is not installed. The Triton kernel is currently experimental; it can have some...