Learneducn
I used the SLAM framework to fine-tune the model and then ran inference. Why are the test results on LibriSpeech worse than when I use the open-source Whisper model directly?
Hello, excuse me. When I run the inference and training scripts, I specify the CUDA ID, but everything always defaults to cuda:0. How should I solve this? In short, an...
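A common workaround when a script hard-codes cuda:0 is to restrict which physical GPU the process can see via `CUDA_VISIBLE_DEVICES`. A minimal sketch, assuming the variable is set before the deep-learning framework is imported (the GPU index "2" is illustrative, not from the original post):

```python
import os

# Hedged sketch: CUDA_VISIBLE_DEVICES must be set before the framework
# (e.g. torch) is imported, otherwise it is ignored and work lands on the
# default device. The index "2" is an illustrative physical GPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# From this point on, the only visible device inside the process is
# addressed as cuda:0, so a hard-coded torch.device("cuda:0") maps to
# physical GPU 2.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 2
```

Equivalently, `CUDA_VISIBLE_DEVICES=2 python inference.py` on the command line achieves the same thing without editing the script.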
> I believe multi-GPU support has a simple implementation. You can wrap the existing script with an external script that handles splitting the test set and passes the GPU IDs accordingly. This approach is similar to what FunASR did previously.

Thank you very much. The problem of specifying a particular card for testing has been solved. I have now encountered another problem....
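The wrapper approach described in the quote can be sketched as follows. This is a minimal illustration assuming a line-per-utterance test manifest; the round-robin split and all names here are my own illustration, not SLAM's or FunASR's actual interface:

```python
def shard_manifest(lines, num_gpus):
    """Round-robin split of test-set entries into one shard per GPU."""
    shards = [[] for _ in range(num_gpus)]
    for i, line in enumerate(lines):
        shards[i % num_gpus].append(line)
    return shards

if __name__ == "__main__":
    # Illustrative manifest of 10 utterances split across 4 GPUs.
    manifest = [f"utt{i} /path/to/{i}.wav" for i in range(10)]
    for gpu_id, shard in enumerate(shard_manifest(manifest, 4)):
        # Each shard would be written to its own file, and a worker launched
        # with CUDA_VISIBLE_DEVICES=<gpu_id> pointing at that shard.
        print(gpu_id, len(shard))
```

The per-shard results can then be concatenated afterwards for scoring, since inference on each utterance is independent.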
I used four 46 GB GPUs, but I still get an OOM error every time. How should I fix this?