CodeGen
Code translation inference optimization
I noticed the inference time for code translation is quite slow. I assume it only uses the CPU when running translate.py? I can't find any information about whether it can use a GPU to speed up inference.
Inference is done on GPUs, but we didn't really optimize the translate.py script. For instance, it can only take one example at a time. If you want to translate several functions, batching them would be much more efficient.
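As a rough illustration of the batching idea (not translate.py's actual interface; `model`, `tokenize`, and `detokenize` below are placeholders you would replace with the real loading/BPE code), the sketch pads several tokenized functions into one tensor and runs a single generation call instead of looping over functions one by one:

```python
# Illustrative sketch only: batch several source functions so they go through
# the model in one forward pass instead of one call per function.
import torch
from torch.nn.utils.rnn import pad_sequence

def translate_batch(model, tokenize, detokenize, functions, pad_id=0, device="cuda"):
    # Tokenize each source function into a 1D tensor of token ids.
    token_ids = [torch.tensor(tokenize(f), dtype=torch.long) for f in functions]

    # Pad to the longest function in the batch to form a single 2D tensor.
    batch = pad_sequence(token_ids, batch_first=True, padding_value=pad_id).to(device)

    # Mask out padding positions so the model ignores them.
    mask = (batch != pad_id)

    with torch.no_grad():
        # One generation call for the whole batch (placeholder model interface).
        outputs = model.generate(batch, attention_mask=mask)

    # Convert each output id sequence back to source text.
    return [detokenize(out.tolist()) for out in outputs]
```

The speedup comes from keeping the GPU busy with a full batch rather than paying the per-call overhead for each function separately.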