weisihao
The model is Llama-2-13b-chat-hf. I just tried setting max_total_token_num to 6000 and it worked, thanks!
Is there any relationship between the model size and the max_total_token_num parameter? And how should I set this parameter if I later test with the 70B Llama 2?
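For intuition on why the parameter depends on model size: max_total_token_num is roughly bounded by how many tokens' worth of KV cache fit in the GPU memory left over after loading the weights. The sketch below is only a back-of-the-envelope estimate under standard Llama-2 configs (fp16 cache, 13B: 40 layers / 40 KV heads; 70B: 80 layers / 8 KV heads via grouped-query attention, head_dim 128 for both); the function names are hypothetical and this is not LightLLM's actual allocator logic.

```python
def kv_bytes_per_token(num_layers, num_kv_heads, head_dim, dtype_bytes=2):
    # Each token stores one key and one value vector per layer per KV head.
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

def max_tokens(free_gpu_bytes, num_layers, num_kv_heads, head_dim, dtype_bytes=2):
    # Upper bound on max_total_token_num given leftover GPU memory (a rough estimate).
    return free_gpu_bytes // kv_bytes_per_token(num_layers, num_kv_heads, head_dim, dtype_bytes)

# Llama-2-13B (40 layers, 40 KV heads, head_dim 128, fp16):
per_tok_13b = kv_bytes_per_token(40, 40, 128)   # 819200 bytes ≈ 0.78 MiB/token
# Llama-2-70B (80 layers, 8 KV heads via GQA, head_dim 128, fp16):
per_tok_70b = kv_bytes_per_token(80, 8, 128)    # 327680 bytes ≈ 0.31 MiB/token

# e.g. if ~10 GiB remain free after loading the 13B weights:
print(max_tokens(10 * 1024**3, 40, 40, 128))
```

Counterintuitively, the 70B model spends less KV memory per token than 13B because of grouped-query attention, but its weights leave far less free memory, so in practice a smaller max_total_token_num may still be needed on the same GPU.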
Thank you, I found it. May I ask, what is the difference in code between joyhallo and hallo?
> The aggregation module is different.

I can't find the aggregation module. Can you point out its location?