Yeqi Sun
> For unwanted files, you can delete them manually.

Excuse me, if I only save the parameters corresponding to the adapter (the `.bin` file), how should I load the trained model?
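A minimal loading sketch for that case, assuming the adapter directory contains the `adapter_config.json` produced by PEFT's `save_pretrained()` alongside the saved `.bin` weights (the directory path and function name here are illustrative, not from the repo):

```python
def load_lora_model(adapter_dir: str):
    """Load a model whose LoRA adapter weights were saved separately.

    Assumes `adapter_dir` holds adapter_config.json plus the adapter .bin
    file; AutoPeftModelForCausalLM reads adapter_config.json to locate and
    load the base model, then attaches the adapter on top of it.
    Imports are kept inside the function so the sketch stays self-contained.
    """
    from peft import AutoPeftModelForCausalLM
    from transformers import AutoTokenizer

    model = AutoPeftModelForCausalLM.from_pretrained(
        adapter_dir, trust_remote_code=True, device_map="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(adapter_dir, trust_remote_code=True)
    return model.eval(), tokenizer
```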
> The default batch size is 1, thus no padding.

Hi, if I want to train or run inference with a larger batch size, how should I modify the code?
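Since the current code assumes batch size 1, a batched version would need explicit padding plus an attention mask. A pure-Python sketch of the left-padding logic such a change would involve (names are illustrative, not from the repo):

```python
def left_pad_batch(sequences, pad_id=0):
    """Left-pad variable-length token-id lists into a rectangular batch.

    Left padding keeps each sequence's last real token adjacent to the
    tokens generated after it, which is what causal-LM generation expects.
    Returns (input_ids, attention_mask) as plain nested lists; the mask is
    0 on pad positions and 1 on real tokens.
    """
    max_len = max(len(seq) for seq in sequences)
    input_ids, attention_mask = [], []
    for seq in sequences:
        n_pad = max_len - len(seq)
        input_ids.append([pad_id] * n_pad + list(seq))
        attention_mask.append([0] * n_pad + [1] * len(seq))
    return input_ids, attention_mask
```

For example, `left_pad_batch([[5, 6, 7], [8]])` yields `[[5, 6, 7], [0, 0, 8]]` with mask `[[1, 1, 1], [0, 0, 1]]`.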
> Thanks for your feedback! Do you use the `AutoPeftModelForCausalLM` class [here](https://github.com/InternLM/InternLM-XComposer/blob/main/finetune/README.md#lora-finetuning) to load the model?

Hello, and thank you for your work! After loading the model with `AutoPeftModelForCausalLM` and continuing training following the LoRA setup code in `finetune.py`, I get the error below. How can I fix it? I have confirmed that `model.tokenizer` is set, but it does not seem to take effect.

```
to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
  File "/root/.cache/huggingface/modules/transformers_modules/xcomposer2-4khd/modeling_internlm_xcomposer2.py", line 226, in interleav_wrap
    part_tokens...
```
> Currently returning the logits is not supported yet, but we are considering this feature for this sprint.

Thanks for the reply! I got logits output working locally by modifying the `model.generate()` call in `inference()` at `llm/utils/utils.py` line 800 to pass `return_dict_in_generate=True, output_logits=True` :)
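For reference, those are standard `transformers` generate kwargs (`output_logits` requires transformers >= 4.38): with them, `output.logits` is a tuple holding one `(batch, vocab)` tensor per generated token. Turning one step's logits into log-probabilities is then just a log-softmax; a self-contained sketch of that conversion:

```python
import math

def logits_to_logprobs(logits_row):
    """Numerically stable log-softmax over one step's logits (a list of floats).

    With model.generate(..., return_dict_in_generate=True, output_logits=True),
    applying this to each row of each step's logits tensor yields the
    per-token log-probabilities. Subtracting the max before exponentiating
    avoids overflow for large logits.
    """
    m = max(logits_row)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits_row))
    return [x - log_z for x in logits_row]
```

The resulting values exponentiate back to a distribution that sums to 1, and the argmax is unchanged from the raw logits.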
> How do I call it?

According to the official 3.0 documentation, you should be able to pass the `logprobs=True` parameter to obtain the log-probabilities.
> > It is supported on the main branch now.
>
> How do I call it?

See the `logprobs` parameter in the latest documentation.
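A hedged sketch of what that call might look like, assuming an LMDeploy-style `pipeline` API (the "3.0 docs" referenced above). Note that in this sketch `logprobs` is treated as the number of top log-probabilities to return per token rather than a boolean; both that and the response's `.logprobs` field are assumptions to verify against the current documentation:

```python
def generate_with_logprobs(model_path: str, prompt: str):
    """Request per-token log-probabilities from an LMDeploy pipeline.

    Assumptions: GenerationConfig accepts `logprobs` (top-k log-probs per
    generated token) and each response object exposes a `.logprobs` field;
    check both against the latest docs before relying on this.
    """
    from lmdeploy import pipeline, GenerationConfig

    pipe = pipeline(model_path)
    gen_config = GenerationConfig(logprobs=1, max_new_tokens=32)
    resp = pipe([prompt], gen_config=gen_config)[0]
    return resp.text, resp.logprobs
```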
> > Hello, and thank you very much for the solution you provided. I tried it and it works well. However, some of the downloaded images have problems: parts of them do not display completely, similar to the image below (the bottom half is missing). Do you know how to solve this?
> >
> > 
>
> How are you concatenating the image URL?

Like this?

```python
class MyImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        if len(item['weibo']['pics']) == 1:
            image_url = 'https://image.baidu.com/search/down?url=' + item['weibo']['pics'][0]
            yield scrapy.Request(
                image_url,
                meta={
                    'item': item,
                    'sign': ''
                })
        else:
            ...
```
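One detail worth checking in that snippet: the original image URL is concatenated into the proxy's query string without percent-encoding, so any `?`, `&`, or `=` inside it would be swallowed by the proxy's own query parsing. A small self-contained helper showing the encoded form (the proxy endpoint is taken from the snippet above; whether encoding actually fixes the truncated downloads is untested):

```python
from urllib.parse import quote

def proxied_image_url(pic_url: str) -> str:
    """Build the Baidu image-proxy URL used in the pipeline above,
    percent-encoding the original URL (safe='' encodes even '/') so
    its query characters survive inside the proxy's query string."""
    return 'https://image.baidu.com/search/down?url=' + quote(pic_url, safe='')
```

For example, `proxied_image_url('http://a.com/p.jpg?x=1')` returns `https://image.baidu.com/search/down?url=http%3A%2F%2Fa.com%2Fp.jpg%3Fx%3D1`.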