
Is the baichuan config in fastchat still usable?

Open 2533245542 opened this issue 1 year ago • 3 comments

It is currently this (https://github.com/lm-sys/FastChat/blob/56744d1d947ad7cc94763e911529756b17139505/fastchat/conversation.py#L782):

register_conv_template(
    Conversation(
        name="baichuan-chat",
        roles=("<reserved_102>", "<reserved_103>"),
        sep_style=SeparatorStyle.NO_COLON_SINGLE,
        sep="",
        stop_token_ids=[],
    )
)

But judging from Baichuan2, it looks like the roles should be changed to the following?

        roles=("<reserved_106>", "<reserved_107>")
>>> model.generation_config.user_token_id
195
>>> model.generation_config.assistant_token_id
196
>>> tokenizer.decode([195])
'<reserved_106>'
>>> tokenizer.decode([196])
'<reserved_107>'
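If so, a minimal sketch of the updated template would just swap the role tokens, assuming every other field stays the same as the existing baichuan-chat template; the template name baichuan2-chat below is only illustrative, not an existing FastChat identifier:

# Sketch only: identical to the current "baichuan-chat" template except for
# the roles, which use Baichuan2's user/assistant reserved tokens
# (token ids 195 and 196, as decoded above). The name is illustrative.
register_conv_template(
    Conversation(
        name="baichuan2-chat",
        roles=("<reserved_106>", "<reserved_107>"),
        sep_style=SeparatorStyle.NO_COLON_SINGLE,
        sep="",
        stop_token_ids=[],
    )
)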

2533245542 · Sep 08 '23 08:09

It doesn't work well.

blankxyz · Oct 09 '23 02:10

root@58c8455c9d58:/home/model_hub# CUDA_VISIBLE_DEVICES=1,2 python3.9 -m fastchat.serve.cli --model-path Baichuan2-13B-Chat-V1 --num-gpus 2
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
You are using an old version of the checkpointing format that is deprecated (We will also silently ignore gradient_checkpointing_kwargs in case you passed it). Please update to the new format on your modeling file. To use the new format, you need to completely remove the definition of the method _set_gradient_checkpointing in your model.
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]/usr/local/lib/python3.9/dist-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████| 3/3 [00:12<00:00,  4.14s/it]
<reserved_106>: hallo
<reserved_107>: Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 304, in <module>
    main(args)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 227, in main
    chat_loop(
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/inference.py", line 532, in chat_loop
    outputs = chatio.stream_output(output_stream)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/cli.py", line 63, in stream_output
    for outputs in output_stream:
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/_contextlib.py", line 56, in generator_context
    response = gen.send(request)
  File "/usr/local/lib/python3.9/dist-packages/fastchat/serve/inference.py", line 190, in generate_stream
    indices = torch.multinomial(probs, num_samples=2)
RuntimeError: probability tensor contains either inf, nan or element < 0

Why is this error being raised?

yuege613 · Jan 31 '24 02:01


This is the baichuan2-v1.0 version.

yuege613 · Jan 31 '24 02:01
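One way to narrow this down (a diagnostic sketch, not a confirmed fix) is to check whether the checkpoint already produces NaN/inf logits when loaded directly with transformers, outside of FastChat; if it does, the torch.multinomial failure is a weight-loading or dtype problem rather than a conversation-template problem. The prompt string and dtype below are assumptions, and the model path is the same local directory used in the command above:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Diagnostic sketch: load the checkpoint directly and inspect the raw logits.
# If NaN/inf already shows up here, the sampling error inside FastChat is not
# caused by the prompt template.
model_path = "Baichuan2-13B-Chat-V1"  # same local path as in the command above
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # assumption: try bfloat16 instead of fp16
    device_map="auto",           # requires accelerate; spreads layers over GPUs
    trust_remote_code=True,
)

inputs = tokenizer("hallo", return_tensors="pt").to(model.device)  # illustrative prompt
with torch.no_grad():
    logits = model(**inputs).logits

print("contains NaN:", torch.isnan(logits).any().item())
print("contains inf:", torch.isinf(logits).any().item())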