Results: 7 issues by vincent507cpu

After creating the `Linear` class, the first time I instantiate it with `linear = Linear(units=8)` the system raises an error. I have compared it with the original code repeatedly and cannot find any difference.

```
class Linear(layers.Layer):
    def __init__(self, units=32, **kwargs):
        super(Linear, self).__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight('w',
                                 shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b =...
```
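For reference, here is a minimal sketch of a complete custom layer that instantiates and builds cleanly under `tf.keras`; the `units=8` instantiation comes from the report, while the `call` method, the bias initializer, and the keyword-only `add_weight` arguments (which avoid the positional-name signature difference between Keras 2 and Keras 3) are assumptions, not the reporter's original code.

```python
# Minimal sketch, assuming tf.keras; not the reporter's full code.
import tensorflow as tf
from tensorflow.keras import layers


class Linear(layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Weights are created lazily once the input feature size is known.
        # Keyword arguments are used because passing the name positionally
        # fails under Keras 3 (an assumption about the likely cause).
        self.w = self.add_weight(
            name="w",
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )
        self.b = self.add_weight(
            name="b",
            shape=(self.units,),
            initializer="zeros",
            trainable=True,
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b


linear = Linear(units=8)        # the instantiation from the report
y = linear(tf.ones((2, 4)))     # first call triggers build()
print(y.shape)                  # (2, 8)
```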

Please help take a look, thank you very much! Script:

```
PYTHONPATH=../../.. \
CUDA_VISIBLE_DEVICES=1,2,3,4 \
torchrun \
    --nproc_per_node=4 \
    --master_port 29500 \
    llm_sft.py \
    --model_revision master \
    --tuner_backend swift \
    --template_type llama \
    --dtype fp16 \
    --output_dir output...
```

bug

### Bug Description

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[32], line 3
      1 query_engine = index.as_query_engine(similarity_top_k=3)
----> 3 response = query_engine.query('What year was Elizabeth Matory the opponent...
```

bug
triage
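For context on the `as_query_engine` / `query` call path in the traceback above, here is a minimal, self-contained sketch of how that query engine is typically constructed in LlamaIndex; the `similarity_top_k=3` argument comes from the report, while the `data/` directory, the placeholder question, and the default LLM/embedding backend are assumptions.

```python
# Minimal sketch, assuming the modern llama_index.core API and a configured
# default LLM/embedding backend (e.g. an OpenAI key in the environment).
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Hypothetical corpus; the reporter's actual documents are not shown.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine(similarity_top_k=3)  # from the report
response = query_engine.query("<question from the report>")
print(response)
```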

```
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/mmengine/runner/_flexible_runner.py", line 1271, in call_hook
    getattr(hook, fn_name)(self, **kwargs)
  File "/root/miniconda3/lib/python3.10/site-packages/xtuner/engine/hooks/evaluate_chat_hook.py", line 230, in before_train
    self._generate_samples(runner, max_new_tokens=50)
  File "/root/miniconda3/lib/python3.10/site-packages/xtuner/engine/hooks/evaluate_chat_hook.py", line 216, in _generate_samples
    self._eval_images(runner,...
```

As the title says, it would be great to add support for pretraining and fine-tuning llava on multi-turn dialogue. Thank you!

### Description of the bug

Running the official example fails with `ModuleNotFoundError: No module named 'mupdf'`. Tried installing mupdf: it reports it is already installed; trying to install it again raises an error.

### How to reproduce the bug

```
import os
from magic_pdf.data.data_reader_writer import FileBasedDataWriter,...
```

bug

Hello, when I run train_tokenizer.py it is killed at the final stage. Since there is no additional output, I don't know what the cause is. Then, when I run train_pretrain.py, it reports the following error:

```
LLM total trainable parameters: 25.830 million
Epoch:[1/1](0/44160) loss:8.933 lr:0.000550000000 epoch_Time:574.0min:
Traceback (most recent call last):
  File "/home/llm/Documents/GitHub/minimind/trainer/train_pretrain.py", line 198, in <module>
    train_epoch(epoch, wandb)
  File "/home/llm/Documents/GitHub/minimind/trainer/train_pretrain.py", line...
```