Weimin Li
I ran into the same problem. Have you found a solution?

`gr.Row(visible=True), gr.Accordion(visible=True))`

**TypeError: Accordion.__init__() missing 1 required positional argument: 'label'**

Traceback (most recent call last):
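A minimal sketch of a possible fix, based only on what the error message says: pass `label` explicitly so `Accordion.__init__()` gets its required argument. The label text "Details" is a placeholder.

```python
import gradio as gr

def show_panels():
    # Passing label explicitly satisfies the required positional argument
    # reported in the TypeError. "Details" is a placeholder label.
    return gr.Row(visible=True), gr.Accordion(label="Details", visible=True)
```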
Exactly. I just upgraded to 4.28.0 and got another error:

```
Exception in thread Thread-4 (_do_normal_analytics_request):
Traceback (most recent call last):
  File "/home/workspace/lwm/lwm-ven/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/home/workspace/lwm/lwm-ven/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "/home/workspace/lwm/lwm-ven/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py",...
```
According to the paper, training uses roughly a few thousand samples and proceeds in 4 stages. There are no further details on the exact training time; my estimate is that it needs 8x A100 (80G).
```python
# load model
model, model_args = AutoModel.from_pretrained(
    args.from_pretrained,
    args=argparse.Namespace(
        deepspeed=None,
        local_rank=0,
        rank=0,
        world_size=1,
        model_parallel_size=1,
        mode='inference',
        skip_init=True,
        use_gpu_initialization=True if torch.cuda.is_available() else False,
        device='cuda',
        overwrite_args={'model_parallel_size': 2},
        **vars(args)
    ))
model.add_mixin('auto-regressive', CachedAutoregressiveMixin())
self.model =...
```
Don't use deepspeed's distributed launcher. Run the finetune code directly and set `device_map="auto"` in `from_pretrained()`. In effect this switches to single-process, multi-GPU finetuning, with the model automatically spread across all GPUs.
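A minimal sketch of that loading call, assuming a standard `transformers` checkpoint (the path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" shards the model across all visible GPUs in a single
# process, so no deepspeed launcher is needed.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/checkpoint",   # placeholder; point at your actual model
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "path/to/checkpoint", trust_remote_code=True
)
```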
Alternatively, switch to deepspeed's PP mode (pipeline parallelism). But for pipeline parallelism you have to split the model into a sequence of layers yourself; see the sketch below.
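A minimal sketch of deepspeed's pipeline-parallel setup, with stand-in layers (a real model would be decomposed into its actual transformer blocks; this must run under a distributed launcher such as `deepspeed train.py`):

```python
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# PipelineModule needs torch.distributed to be initialized.
deepspeed.init_distributed()

# The model has to be flattened into an ordered list of layers; deepspeed
# then partitions that list across the pipeline stages.
layers = [nn.Linear(1024, 1024) for _ in range(24)]  # stand-ins for real blocks
model = PipelineModule(layers=layers, num_stages=4)
```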
> > Thanks for your feedback! Do you use the `AutoPeftModelForCausalLM` class [here](https://github.com/InternLM/InternLM-XComposer/blob/main/finetune/README.md#lora-finetuning) to load the model?
>
> Hello, and thanks for your work! After loading the model with `AutoPeftModelForCausalLM` and continuing training following the LoRA setup code in finetune.py, I get the error below. How can I fix it? I have confirmed that `model.tokenizer` is set, but it does not seem to take effect.

```
_to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
  File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226,...
```
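For reference, loading a LoRA checkpoint with that class usually looks like this minimal sketch (the adapter path is a placeholder):

```python
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "path/to/lora-checkpoint",  # placeholder adapter directory
    trust_remote_code=True,
    device_map="auto",
)
```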
> > Can you comment [this line](https://huggingface.co/internlm/internlm-xcomposer2-vl-7b/blob/main/modeling_internlm_xcomposer2.py#L214) and [these two lines](https://github.com/InternLM/InternLM-XComposer/blob/main/finetune/finetune.py#L141-L142), and re-try to see if this issue still exists?
>
> still error

I changed batch_size to 1 and changed the training data to:

```
{ "id":...
```
I'm also running into this problem; no solution so far.