MarkA
@brianloyal would you mind sharing the AMI (I couldn't find a CUDA 12.4 one on AWS) and how you ultimately started the Docker container?
Thank you @brianloyal! When deploying with that 12.4 AMI and the Docker image you used (ghcr.io/soedinglab/mmseqs2:17-b804f-cuda12), it returns an error saying that version requires CUDA 12.6: ``` docker:...
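Not sure if this helps, but one thing worth checking is whether the host driver actually supports the CUDA version the image was built against. A rough sketch of what I mean (assuming the NVIDIA container toolkit is installed and that the image's entrypoint is `mmseqs`; adjust paths and arguments to your setup):

```bash
# Show the maximum CUDA version the host driver supports;
# it needs to be >= the version the container requires (12.6 per the error).
nvidia-smi

# Smoke-test the container with GPU access exposed.
# "version" is passed to the image's mmseqs entrypoint.
docker run --rm --gpus all \
  -v "$PWD:/data" \
  ghcr.io/soedinglab/mmseqs2:17-b804f-cuda12 \
  version
```

If `nvidia-smi` on the host reports a CUDA version lower than 12.6, updating the driver (or picking an AMI with a newer driver) may be what's needed rather than changing the image.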
I'm having a similar issue: about 10 minutes per sequence on a large instance with a GPU, which still seems long to me. What is the expected speed per sequence?