Weimin Li

Results: 5 issues by Weimin Li

One more question: how can a trained SAT model be loaded and run for inference on two GPUs? I currently only have the A100 (40G) version, and inference sometimes reports an out-of-memory error. How do I set up multi-GPU inference? How should the model-loading part below be configured? Thanks. `# load model model, model_args = AutoModel.from_pretrained( args.from_pretrained, args=argparse.Namespace( deepspeed=None, local_rank=0, rank=0, world_size=1, model_parallel_size=1, mode='inference', skip_init=True, use_gpu_initialization=True if torch.cuda.is_available() else False, device='cuda', overwrite_args={'model_parallel_size': 2}, **vars(args) ))...
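Not an answer from the maintainers, just a minimal sketch of what a two-GPU, model-parallel launch might look like, reusing the call shape from the snippet above. It assumes the script is started with `torchrun --nproc_per_node=2 infer.py --from_pretrained <model_dir>`; the `infer.py` filename and the exact `AutoModel` import path are illustrative and may differ by SAT version.

```python
# Sketch only: launch with
#   torchrun --nproc_per_node=2 infer.py --from_pretrained <model_dir>
# so each of the two processes gets its own RANK / LOCAL_RANK.
import argparse
import os

import torch
from sat.model import AutoModel  # import path may differ by SAT version

parser = argparse.ArgumentParser()
parser.add_argument('--from_pretrained', type=str, required=True)
args = parser.parse_args()

# Read the distributed context injected by torchrun.
rank = int(os.environ.get('RANK', 0))
world_size = int(os.environ.get('WORLD_SIZE', 1))
local_rank = int(os.environ.get('LOCAL_RANK', 0))
torch.cuda.set_device(local_rank)

# Same call shape as the snippet above, but with world_size and
# model_parallel_size set to 2 so the weights are sharded across both GPUs.
model, model_args = AutoModel.from_pretrained(
    args.from_pretrained,
    args=argparse.Namespace(
        deepspeed=None,
        local_rank=local_rank,
        rank=rank,
        world_size=world_size,           # 2 when launched with --nproc_per_node=2
        model_parallel_size=world_size,  # shard the model across both A100s
        mode='inference',
        skip_init=True,
        use_gpu_initialization=torch.cuda.is_available(),
        device='cuda',
    ),
)
model.eval()
```

With `model_parallel_size=2`, each process holds roughly half the weights, so per-GPU memory pressure should drop compared to single-GPU inference.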

My code keeps hanging at torch.distributed.init_process_group; how can I resolve this? Environment: single machine, multiple GPUs. ![image](https://github.com/Coobiw/MiniGPT4Qwen/assets/29700371/d6caaca2-2fa5-46fc-9862-a54753a9d55c) Environment variable settings: os.environ['RANK'] = '0' os.environ['WORLD_SIZE'] = '4' # because I only want to use 4 of the cards os.environ['LOCAL_RANK'] = '0' os.environ['MASTER_ADDR'] = '127.0.0.1' # address of rank 0 os.environ['MASTER_PORT'] = '29500' # any free port os.environ['NCCL_IB_DISABLE'] =...
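A likely cause (an assumption, not a confirmed diagnosis): with `WORLD_SIZE=4` but only one process started by hand, `init_process_group` blocks waiting for ranks 1-3 to join. A minimal sketch of the usual fix is to let a launcher start all four processes and read the distributed context from the environment instead of hard-coding rank 0:

```python
# Sketch only: launch four processes on the chosen cards, e.g.
#   CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 train.py
import os

import torch
import torch.distributed as dist

# torchrun sets RANK / LOCAL_RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT
# for every process, so they should not all be hard-coded to rank 0.
local_rank = int(os.environ['LOCAL_RANK'])
torch.cuda.set_device(local_rank)

dist.init_process_group(backend='nccl', init_method='env://')
print(f'rank {dist.get_rank()} / world size {dist.get_world_size()} ready')
```

If only a single process is actually wanted, setting `WORLD_SIZE=1` should also stop the call from blocking.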

Notice: In order to resolve issues more efficiently, please raise your issue following the template and provide details. ## ❓ Questions and Help ### Before asking: 1. search the issues. 2. search the...

question

The grounding ability of the fine-tuned model still falls short of meeting production requirements, showing a significant gap compared to the CogAgent model. ## examples ![68e2fb4e6c95e66c829f8992aa6fb5a1](https://github.com/InternLM/InternLM-XComposer/assets/29700371/8f74c1cb-b558-45de-8858-0604620d9f9d) {"query": " In the...

For example: VoLTE should be read letter by letter as V-O-L-T-E, not as the word "喔替".
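One possible workaround on the application side (a sketch under my own assumptions, not a feature of the project): pre-process the text before synthesis so that whitelisted acronyms are spelled out letter by letter. The `SPELL_OUT` list and the `spell_out_acronyms` helper below are hypothetical names.

```python
import re

# Hypothetical whitelist of acronyms that must be read letter by letter.
SPELL_OUT = ['VoLTE', 'APN', 'IMS']
_PATTERN = re.compile('|'.join(re.escape(a) for a in SPELL_OUT))

def spell_out_acronyms(text: str) -> str:
    """Replace each whitelisted acronym with its letters separated by spaces."""
    return _PATTERN.sub(lambda m: ' '.join(m.group(0).upper()), text)

print(spell_out_acronyms('请开启VoLTE功能'))  # -> 请开启V O L T E功能
```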

stale