hekaijie123
I ran into a similar problem. The first time, I ran with one RTX 3090 and just set "amp" to "True", "num_processes" to "1", and "num_workers" to "16", keeping the default settings otherwise...
@sukjunhwang
@myownskyW7 @LightDXY @lvhan028
@LightDXY Roughly when will this be open-sourced?
@lvhan028 @whai362
With the InternVL2.5 model and --backend turbomind, the returned logprob is also None:

for item in api_client.chat_completions_v1(model=model_name, temperature=0, max_tokens=length, messages=messages, logprobs=True):
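For reference, a minimal repro sketch along the lines of the snippet above, assuming an lmdeploy api_server is already running locally with InternVL2.5 and --backend turbomind (the server URL, prompt, and max_tokens value are placeholders, not from the original report):

```python
# Minimal repro sketch: query a running lmdeploy api_server and inspect logprobs.
from lmdeploy.serve.openai.api_client import APIClient

# Placeholder endpoint; adjust to wherever your api_server is listening.
api_client = APIClient("http://0.0.0.0:23333")
model_name = api_client.available_models[0]

messages = [{"role": "user", "content": "Describe this image."}]  # placeholder prompt

for item in api_client.chat_completions_v1(
        model=model_name,
        messages=messages,
        temperature=0,
        max_tokens=64,
        logprobs=True):
    # Expect per-token logprobs here; with InternVL2.5 on the turbomind
    # backend this field reportedly comes back as None.
    print(item["choices"][0].get("logprobs"))
```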
Same problem here.