InternLM-XComposer
The huggingface code example throws an error
Here is a screenshot of my file structure; the model has already been downloaded into it.
But when I run it, I get this error:
/home/shf/anaconda3/envs/llama/bin/python /media/shf/sda/code/InternLM-XComposer-main/test.py
You are using a model of type internlmxcomposer2 to instantiate a model of type internlm. This is not supported for all configurations of models and can yield errors.
[2024-03-25 14:12:17,398] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Set max length to 4096
Traceback (most recent call last):
File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
OSError: [Errno 101] Network is unreachable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 403, in _make_request
    self._validate_conn(conn)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1053, in _validate_conn
    conn.connect()
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fd9260ae160>: Failed to establish a new connection: [Errno 101] Network is unreachable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 798, in urlopen
    retries = retries.increment(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd9260ae160>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1261, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1667, in get_hf_file_metadata
    r = _request_wrapper(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
    response = _request_wrapper(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 408, in _request_wrapper
    response = get_session().request(method=method, url=url, **params)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 67, in send
    return super().send(request, *args, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd9260ae160>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 0f1ae55b-e895-4ed1-af62-de65d803ba20)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1406, in hf_hub_download
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/media/shf/sda/code/InternLM-XComposer-main/test.py", line 7, in <module>
    model = AutoModel.from_pretrained('internlm/internlm-xcomposer2-vl-7b', local_files_only=True, trust_remote_code=True).cuda().eval()
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 556, in from_pretrained
    return model_class.from_pretrained(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3375, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/modeling_internlm_xcomposer2.py", line 67, in __init__
    self.vit = build_vision_tower()
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/build_mlp.py", line 11, in build_vision_tower
    return CLIPVisionTower(vision_tower)
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/build_mlp.py", line 58, in __init__
    self.load_model()
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/build_mlp.py", line 62, in load_model
    self.vision_tower = CLIPVisionModel.from_pretrained(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2981, in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/models/clip/configuration_clip.py", line 251, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 633, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 688, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py", line 441, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14-336 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Why does this happen, and how can I fix it so that inference runs normally?
When using the modelscope code, the int4 version fails with a missing config file error: OSError: /home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit/None' for available files.
File "/root/miniconda3/lib/python3.8/site-packages/transformers/utils/hub.py", line 429, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14-336 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
I ran into this problem too.
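For reference, the offline mode mentioned at the end of that OSError is controlled by two environment variables. A minimal sketch (note: this only helps once every required file, including the CLIP config, is already in the local cache; it does not by itself fix a missing file):

```python
import os

# Must be set before transformers / huggingface_hub are imported.
# With these set, from_pretrained() never touches the network and
# fails fast if a file is missing from the local cache.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

print(os.environ["TRANSFORMERS_OFFLINE"])  # -> 1
```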
Did you solve it?
I also ran into this missing-config.json problem with the modelscope int4 version.
@panzhang0212 Could the authors take a look at this?
I ran into a similar problem when fine-tuning ShareGPT4V.
Problem description
When running Instruction Tuning on the pretrained model, the local paths passed to train_mem.py for the LLM & projector weights and for the Visual Encoder weights are both correct, yet the code still fails to locate the Visual Encoder's local weights.
Cause analysis: the code logic behind fine-tuning
- Run train_mem.py
- train_mem.py calls train.py
- Around line 938, train_mem.py calls this function from share4v_arch.py:

model.get_model().initialize_vision_modules(
    model_args=model_args,
    fsdp=training_args.fsdp
)

- Around line 36, share4v_arch.py calls build_vision_tower(model_args) from builder.py, whose definition contains the following line:

vision_tower = getattr(vision_tower_cfg, 'mm_vision_tower', getattr(vision_tower_cfg, 'vision_tower', None))
This line first looks up the attribute 'mm_vision_tower' in the config.json shipped with the LLM+projector weights (usually downloaded from huggingface). Taking the 7B model as an example, in the hf repo this attribute's value is "Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12". Only if that lookup fails does it fall back to the 'vision_tower' attribute from the command-line arguments (i.e. the local folder of the Visual Encoder weights that we passed in).
In short, this code makes loading the Visual Encoder prefer downloading weights from the huggingface remote repo over using the weights already downloaded locally.
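The lookup order can be reproduced with plain objects (a minimal sketch; `SimpleNamespace` stands in for the loaded HF config, and the local paths are made up for illustration):

```python
from types import SimpleNamespace

def pick_vision_tower(cfg):
    # Same lookup as in builder.py: 'mm_vision_tower' wins over 'vision_tower'.
    return getattr(cfg, 'mm_vision_tower',
                   getattr(cfg, 'vision_tower', None))

# config.json carries mm_vision_tower, so the hub repo id is chosen
# even though a local path was passed on the command line.
cfg = SimpleNamespace(
    mm_vision_tower='Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12',
    vision_tower='/data/weights/clip-vit-large-patch14-336')
print(pick_vision_tower(cfg))
# -> Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12

# Without mm_vision_tower, the command-line path wins.
cfg2 = SimpleNamespace(vision_tower='/data/weights/clip-vit-large-patch14-336')
print(pick_vision_tower(cfg2))
# -> /data/weights/clip-vit-large-patch14-336
```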
Solution
Manually edit config.json in the locally downloaded LLM+projector weights and change 'mm_vision_tower' to the local path of the Visual Encoder weights.
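That edit can also be scripted (a sketch: `patch_mm_vision_tower` is my own helper, and the demo runs on a throwaway temp file rather than a real checkpoint; point it at the actual config.json of your LLM+projector weights):

```python
import json
import os
import tempfile

def patch_mm_vision_tower(config_path, local_vit_dir):
    # Point 'mm_vision_tower' at a local directory instead of a hub repo id.
    with open(config_path) as f:
        cfg = json.load(f)
    cfg['mm_vision_tower'] = local_vit_dir
    with open(config_path, 'w') as f:
        json.dump(cfg, f, indent=2)

# Demo on a temporary config.json (hypothetical paths).
cfg_path = os.path.join(tempfile.mkdtemp(), 'config.json')
with open(cfg_path, 'w') as f:
    json.dump({'mm_vision_tower':
               'Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12'}, f)
patch_mm_vision_tower(cfg_path, '/data/weights/vit-large336-l12')
with open(cfg_path) as f:
    print(json.load(f)['mm_vision_tower'])
# -> /data/weights/vit-large336-l12
```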
There is no 'mm_vision_tower' in config.json, is there? I searched and couldn't find it.
I was fine-tuning ShareGPT4V; you can see 'mm_vision_tower' in https://huggingface.co/Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12_vicuna-7b-v1.5/blob/main/config.json . If you are using InternLM-XComposer2 it may be different.
Yes, I'm using InternLM-XComposer2. @panzhang0212 Could you take a look?
I hit the same problem. Has anyone solved it?
Solved it: the cause is that the CLIP model could not be downloaded. Download CLIP manually, then change build_mlp.py like this:

def build_vision_tower():
    # vision_tower = 'openai/clip-vit-large-patch14-336'
    vision_tower = '/InternLM-XComposer-main/internlm/clip-vit-large-patch14-336'
    return CLIPVisionTower(vision_tower)
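A quick way to sanity-check the directory you hard-code here (a sketch; `looks_like_local_checkpoint` is my own helper name): from_pretrained() treats an existing directory as a local checkpoint, and that directory must contain config.json, which is exactly what the OSError above complains about; any other string is sent to the Hub and triggers a network request.

```python
import json
import os
import tempfile

def looks_like_local_checkpoint(path):
    # An existing directory is loaded locally, but it must contain
    # config.json; a non-directory string is treated as a hub repo id.
    return os.path.isdir(path) and \
        os.path.isfile(os.path.join(path, 'config.json'))

# Demo with a throwaway directory standing in for the downloaded CLIP folder.
clip_dir = tempfile.mkdtemp()
print(looks_like_local_checkpoint(clip_dir))   # -> False (config.json missing)
with open(os.path.join(clip_dir, 'config.json'), 'w') as f:
    json.dump({'model_type': 'clip_vision_model'}, f)
print(looks_like_local_checkpoint(clip_dir))   # -> True
```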
Could you tell me which local file '/InternLM-XComposer-main/internlm/clip-vit-large-patch14-336' corresponds to? I couldn't find it. I'm using internlm-xcomposer2-vl-7b.
You have to download that model yourself. The file to change is build_mlp.py. I recommend stepping through it in a debugger.
I also hit the missing-config.json error with the modelscope int4 version. How do you solve it?