
When using web-demo on VL-6B

Open hellangleZ opened this issue 1 year ago • 7 comments

Reminder

  • [X] I have searched the GitHub Discussions and Issues and have not found anything similar to this.

Environment

- OS: WSL Ubuntu 20.04
- Python: 3.10
- PyTorch: 2.1.2
- CUDA: 12.2

Current Behavior

Hi experts,

When I use web_demo.py, why does it raise this error?

Thanks

```
python web_demo.py -c "/home/root123/aml/Yi-VL-6B"
Traceback (most recent call last):
  File "/home/root123/aml/Yi/demo/web_demo.py", line 211, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/root123/miniconda3/envs/zby/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 569, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
```
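For context, the ValueError comes from the Auto-class dispatch: AutoModelForCausalLM maps config classes to causal-LM model classes and raises when the checkpoint's config (here LlavaConfig) is absent from that mapping. A simplified sketch of the mechanism follows; the stand-in classes and registry are illustrations only, not transformers' actual implementation:

```python
# Simplified sketch of how an Auto-class dispatches on the config type.
# The classes and mapping below are stand-ins for illustration, not the
# real transformers code.

class LlamaConfig:
    pass

class LlavaConfig:  # multimodal config, absent from the causal-LM mapping
    pass

# Registry from supported config classes to causal-LM model class names.
CAUSAL_LM_MAPPING = {LlamaConfig: "LlamaForCausalLM"}

def auto_model_for_causal_lm(config):
    model_cls = CAUSAL_LM_MAPPING.get(type(config))
    if model_cls is None:
        raise ValueError(
            f"Unrecognized configuration class {type(config).__name__} "
            "for this kind of AutoModel: AutoModelForCausalLM."
        )
    return model_cls

print(auto_model_for_causal_lm(LlamaConfig()))  # LlamaForCausalLM
```

This is why a script that loads the model via AutoModelForCausalLM cannot open the Yi-VL-6B checkpoint: the checkpoint carries a llava-style config.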

Expected Behavior

No response

Steps to Reproduce

1. WSL
2. using Yi-VL-6B
3. using web_demo.py

Anything Else?

no

hellangleZ avatar Jan 27 '24 01:01 hellangleZ

I was unable to reproduce this error, but it might be because I was using a different OS. Would you please be more specific on your steps to reproduce the error? BTW, I believe you meant --model-path instead of -c in your code, if not please try using --model-path to specify the model path.

markli404 avatar Jan 27 '24 02:01 markli404

> I was unable to reproduce this error, but it might be because I was using a different OS. Would you please be more specific on your steps to reproduce the error? BTW, I believe you meant --model-path instead of -c in your code, if not please try using --model-path to specify the model path.

Yes, -c. This problem also occurs on an A100 with Ubuntu 18.04, torch 2.1, CUDA 12.1.

```
root@A100:/aml/Yi/demo# python web_demo.py -c '/aml2/Yi-VL-6B' --share
Traceback (most recent call last):
  File "/aml/Yi/demo/web_demo.py", line 211, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/aml/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 569, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.llava.configuration_llava.LlavaConfig'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
```

BTW, single inference works fine.

hellangleZ avatar Jan 27 '24 09:01 hellangleZ

> > I was unable to reproduce this error, but it might be because I was using a different OS. Would you please be more specific on your steps to reproduce the error? BTW, I believe you meant --model-path instead of -c in your code, if not please try using --model-path to specify the model path.
>
> Yes, -c. This problem also occurs on an A100 with Ubuntu 18.04, torch 2.1, CUDA 12.1.
>
> [same traceback as above]
>
> BTW, single inference works fine.

While I was running -c, I found this error:

web_demo.py: error: unrecognized arguments: -c /content/Yi-VL-6B

I don't think -c is a valid argument for web_demo.py, have you modified any files in the repo?

markli404 avatar Jan 27 '24 18:01 markli404

It was my fault, not the code's. Sorry, please close this issue.

hellangleZ avatar Jan 28 '24 02:01 hellangleZ

> It was my fault, not the code's. Sorry, please close this issue.

How did you solve this problem? I am running into it too.

Williamqiang avatar Jan 29 '24 02:01 Williamqiang

> It was my fault, not the code's. Sorry, please close this issue.
>
> How did you solve this problem? I am running into it too.

You need to run the file with the same name (web_demo.py) that lives under the VL directory (/VL), not the one under demo/.

hellangleZ avatar Jan 29 '24 03:01 hellangleZ
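Put concretely, the fix above looks something like the following. The model path is a placeholder, and the --model-path flag is assumed from the earlier comment in this thread; adjust both to your setup:

```shell
# Run the VL-specific demo from the VL directory of the Yi repo,
# not demo/web_demo.py (which loads the model via AutoModelForCausalLM
# and therefore rejects the LlavaConfig checkpoint).
cd Yi/VL
python web_demo.py --model-path /path/to/Yi-VL-6B
```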

> It was my fault, not the code's. Sorry, please close this issue.
>
> How did you solve this problem? I am running into it too.
>
> You need to run the file with the same name (web_demo.py) that lives under the VL directory (/VL), not the one under demo/.

Thank you. It works.

Williamqiang avatar Jan 30 '24 02:01 Williamqiang