Xiaohui Wang
If there is anything in common, it is probably that they are all used by projects from ByteDance. To generate your own config.pbtxt, you can check out the details here: https://docs.nvidia.com/deeplearning/triton-inference-server/archives/tensorrt_inference_server_120/tensorrt-inference-server-guide/docs/model_configuration.html...
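For reference, a minimal config.pbtxt might look like the sketch below. The model name, platform, tensor names, data types, and dims are all hypothetical placeholders here; check the linked model-configuration guide for the fields your deployment actually needs.

```
# A hypothetical minimal config.pbtxt; adjust every field to your model.
name: "my_transformer"        # placeholder model name
platform: "custom"            # custom backend, as used by TRTIS 1.x
max_batch_size: 8
input [
  {
    name: "source_ids"        # placeholder input tensor name
    data_type: TYPE_INT32
    dims: [ -1 ]
  }
]
output [
  {
    name: "target_ids"        # placeholder output tensor name
    data_type: TYPE_INT32
    dims: [ -1 ]
  }
]
```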
I need to clarify two questions: 1. Is the BLEU score during evaluation correct? 2. Do you run inference using PyTorch (or export the model to the LightSeq proto)? If yes, then you can...
You can try this way to run inference after training: https://github.com/bytedance/lightseq/tree/master/examples/inference/python
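For example, a minimal sketch of the LightSeq Python inference API, assuming you have already exported your trained model to a LightSeq proto file (the file name and token ids below are placeholders; see the linked examples for the full export-and-infer flow):

```python
import lightseq.inference as lsi

# Load the model exported to LightSeq proto; the second argument
# is the max batch size (the file name is a placeholder).
model = lsi.Transformer("lightseq_transformer.pb", 8)

# Run inference on a batch of source token id sequences
# (the ids here are dummy values for illustration).
result = model.infer([[63, 47, 65, 1507, 88, 74, 2]])
print(result)
```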
The master branch works fine: https://github.com/bytedance/lightseq/blob/master/examples/training/fairseq/ls_fairseq_wmt14en2de.sh Can you give us some details about your fix for the "No module named 'fairseq_user_dir_13687.ls_fs_transformer_decoder_layer'" error?
Maybe by the end of May or June this year.
It's not on the roadmap yet.
OK, we will review the new version and check out the differences.
Converting without calibration or fine-tuning will cause a loss of accuracy, so it is currently not supported.