xiao2mo
same problem here
For emergency use.
Waiting for your response! Please give me a hand.
Time update:
- fp32, HuggingFace (Python backend): 6.214 s
- fp32, lightseq: 4.908 s
- fp16, HuggingFace (Python backend): 1.916 s
- fp16, batch=8, lightseq: 3.697 s
- fp16, batch=128, lightseq: 3.481 s
- fp16, batch=64, lightseq...
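For reference, numbers like these are usually collected with a simple wall-clock loop around the forward pass. A minimal sketch of such a harness, assuming a callable `run_inference` that wraps either backend (the name is hypothetical, not an actual lightseq or HuggingFace API):

```python
import time

def benchmark(run_inference, n_iters=10):
    """Time n_iters calls of run_inference and return mean seconds per call."""
    # Warm up once so one-time costs (weight loading, CUDA context) are excluded.
    run_inference()
    start = time.perf_counter()
    for _ in range(n_iters):
        run_inference()
    return (time.perf_counter() - start) / n_iters

# Stand-in workload; in practice substitute the real model forward pass.
mean_s = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{mean_s:.6f} s per call")
```

Note that for GPU backends you would also need to synchronize the device before reading the clock, otherwise the measured time only covers kernel launch.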
In conclusion, I have finished comparing model results between lightseq and the PyTorch implementation, after resolving a number of implementation differences. It seems that the results are not...
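A result comparison like the one described usually comes down to a tolerance check on the two backends' output tensors. A minimal sketch, assuming the outputs have already been exported to NumPy arrays (the variable names are illustrative, not from either codebase):

```python
import numpy as np

def compare_outputs(ref, test, rtol=1e-3, atol=1e-5):
    """Return the max absolute difference and whether outputs agree within tolerance."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    max_abs_diff = float(np.max(np.abs(ref - test)))
    return max_abs_diff, bool(np.allclose(ref, test, rtol=rtol, atol=atol))

# Stand-in data; in practice these would be the PyTorch and lightseq logits.
a = np.array([1.0, 2.0, 3.0])
b = a + 1e-6
diff, ok = compare_outputs(a, b)
print(diff, ok)
```

For fp16 runs the tolerances typically need to be loosened, since half precision alone introduces differences around 1e-3.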
I'll call it done here.
Can I have your WeChat? I've run into some problems with the OpenAI ViT transform. Is the ViT you mentioned the modeling_vit in HuggingFace? It seems that the encoder implementation is...
Got it. Thanks a lot!