hardlipay

Results: 13 issues in hardlipay

AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import). I verified there were no errors in `pip install` or `pip list`, but when I...
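This "partially initialized module" error for `cv2` is frequently caused by a local file or folder named `cv2` shadowing the installed OpenCV package, rather than by a broken install. A minimal check (no assumptions beyond the standard library) is to ask Python where `cv2` would be imported from:

```python
import importlib.util

# If the printed path points into your own project directory instead of
# site-packages, a local file/folder named "cv2" is shadowing the installed
# OpenCV package -- a frequent cause of this partial-initialization error.
spec = importlib.util.find_spec("cv2")
print(spec.origin if spec else "cv2 is not importable here")
```

If the path is inside your project, rename the offending local `cv2.py` (and delete its `.pyc` cache) and the import should resolve to the real package again.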

I can't connect my training device to the network. I have downloaded the full folder of pre-trained weights locally from Hugging Face; how do I write bash scripts to load...
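One common way to do this (a sketch; the path and environment-variable approach assume a standard `transformers` setup) is to force offline mode before anything touches the hub, then point `from_pretrained` at the local directory:

```python
import os

# Set these BEFORE importing transformers so the library never attempts a
# network request and resolves everything from local files.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Hypothetical path: the folder downloaded from the hub, containing
# config.json, tokenizer files, and the weight shards.
model_path = os.path.expanduser("~/models/pretrained-weights")

# With the environment set, loading is the usual call, pinned to local files:
#   from transformers import AutoTokenizer, AutoModelForCausalLM
#   tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
#   model = AutoModelForCausalLM.from_pretrained(model_path, local_files_only=True)
```

From a bash script the same effect is `export HF_HUB_OFFLINE=1` and `export TRANSFORMERS_OFFLINE=1` before launching the Python training command.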

enhancement

```
model = load_checkpoint_and_dispatch(model, model_path, device_map="auto", no_split_module_classes=["?????"], dtype=torch.float16)
```
I don't know which modules can't be split; I hope the developers or others can provide a list. Thanks!
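`no_split_module_classes` takes the *class names* of blocks that must stay on a single device, typically the transformer decoder layer, since splitting one would cut through its residual connection (for OPT-based checkpoints that class is usually `OPTDecoderLayer`, but verify against your own model). One way to see the candidates is to collect the class names of every submodule; sketched here with stand-in classes instead of `torch.nn.Module`:

```python
def submodule_class_names(module):
    """Collect the class name of a module and every submodule under it."""
    names = {type(module).__name__}
    for child in getattr(module, "children_", []):
        names |= submodule_class_names(child)
    return names

# Stand-ins for a real model tree; with torch, the equivalent one-liner is
#   {type(m).__name__ for m in model.modules()}
class DecoderLayer:
    children_ = []

class Model:
    children_ = [DecoderLayer(), DecoderLayer()]

candidates = sorted(submodule_class_names(Model()))
print(candidates)  # ['DecoderLayer', 'Model']
```

From that set, pick the repeated layer class(es) that contain residual connections and pass them in, e.g. `no_split_module_classes=["OPTDecoderLayer"]`.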

**My code follows the official documentation:** https://www.deepspeed.ai/tutorials/inference-tutorial/
```
import os
import torch
import deepspeed
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

transformers.logging.set_verbosity_error()
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
local_rank = int(os.environ.get('LOCAL_RANK', '0'))
world_size...
```

bug
inference

I have deployed BLIP-2 locally and loaded the pre-trained 2.7B model. It performs well in the official demo, but when I apply it to my personal project it doesn't work...

The official docs mention that BLIP uses a resolution of 224, which may not be good for understanding image detail. Can fine-tuning modify the image size? If not, and we need...

Does this support returning detection bboxes? Many multimodal models already support it, with excellent performance, yet in my tests cpmv does not seem to. But cpmv's perception of spatial relationships is quite strong, so what led to this capability being dropped from training? If detection pre-training were added, and bbox inputs were paired with the corresponding text during fine-tuning, would it be easier for the model to acquire new knowledge?

```
encoder_outputs = self.encoder(
    embedding_output,
    head_mask=head_mask,
    output_attentions=output_attentions,
    output_hidden_states=output_hidden_states,
    return_dict=return_dict,
)
```
Here is the output returned by DINOv2 in HF transformers. Just use `encoder_outputs.hidden_states` to get the [bs, 257, 768] feature tensor....

**Describe the bug** The 1.8 script was fine and had already completed training; right after upgrading to 2.0 it errors out, and everything works again after rolling back. **Your hardware and system info** torch 2.2.0, py 3.10, cuda 11.8, A100-40g ![ec6bf8fb22f5e027bb88ae60d64988d](https://github.com/modelscope/swift/assets/126460983/58502b5c-b9f7-4d7e-8c97-cb91c24dd4aa)

Hello, can this script be used to convert llama2 and other models in the LLaMA family?