ykallan
Keep the updates coming, 老挖!
More updates, please!
I tried installing pywin32 in a conda env and used it like this: `from win32com import client; word = client.Dispatch("Word.Application")`, but I caught an error like: `Traceback...
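For reference, a minimal sketch of the same `Dispatch` call, wrapped so a missing pywin32 install (or a non-Windows host) is reported cleanly instead of raising a bare traceback. The helper name is illustrative, not part of pywin32:

```python
def open_word():
    """Try to start a Word COM instance; return None if unavailable."""
    try:
        # pywin32 exposes COM automation via win32com.client (Windows only)
        from win32com import client
    except ImportError:
        return None  # pywin32 not installed, or not running on Windows
    try:
        return client.Dispatch("Word.Application")
    except Exception:
        return None  # COM class not registered, Word not installed, etc.

word = open_word()
if word is None:
    print("Word automation unavailable on this machine")
```

Checking for `None` keeps the failure local; the original traceback usually points at either a broken pywin32 post-install step or Word itself not being registered with COM.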
### Is there an existing issue for this?

- [X] I have searched the existing issues and checked the recent builds/commits

### What happened?

I start the API server with the command:...
```python
import json
import argparse
import torch
from fengshen import UbertPipelines

total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = UbertPipelines.pipelines_args(total_parser)
args = total_parser.parse_args()
args.pretrained_model_path = r'D:\nlp_about\pretrained_model\Erlangshen-Ubert-110M-Chinese'
args.default_root_dir = './train/'
args.max_epochs = 5
...
```
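The snippet above follows a common pattern: the pipeline class contributes its own flags to a shared `argparse` parser, and the caller then overrides fields directly on the parsed namespace. A stdlib-only sketch of that pattern (the function and flag names here are illustrative, not the actual fengshen API):

```python
import argparse

def add_pipeline_args(parser):
    # The pipeline registers the flags it needs on the caller's parser.
    parser.add_argument("--pretrained_model_path", default="")
    parser.add_argument("--default_root_dir", default="./")
    parser.add_argument("--max_epochs", type=int, default=1)
    return parser

total_parser = argparse.ArgumentParser("TASK NAME")
total_parser = add_pipeline_args(total_parser)
args = total_parser.parse_args([])  # [] -> ignore sys.argv, use defaults

# Fields can still be overridden on the namespace after parsing,
# which is what the original script does with its model path and epochs.
args.max_epochs = 5
print(args.max_epochs)  # 5
```

Overriding the namespace after `parse_args()` is convenient in notebooks and scripts, but note that it silently bypasses any type checking the parser would have applied.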
Training code:

```python
import torch
import pandas as pd
from datasets import Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, \
    DataCollatorForSeq2Seq, TrainingArguments, Trainer
from peft import LoraConfig, TaskType, get_peft_model

json_path = r"E:\nlp_about\self-llm\dataset\huanhuan.json"
...
```
Problem description: I am fine-tuning llama3 8b with peft; the training code is basically the example with minor modifications. During training, the loss is slightly high for the first 10 steps, after which the reported loss is always 0.0. Fine-tuning code:

```python
import torch
import pandas as pd
from datasets import Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForSeq2Seq, TrainingArguments, Trainer
from peft import LoraConfig, TaskType, get_peft_model
...
```
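A loss that collapses to exactly 0.0 after a few steps is often a numeric-range problem rather than a data problem: with `fp16=True`, any intermediate value above ~65504 overflows to `inf`, gradients go non-finite, and the reported loss can degenerate. A small numpy demonstration of the fp16 ceiling (the commonly suggested remedy for llama3-class models, switching to `bf16=True`, is an assumption on my part, not something stated in the snippet):

```python
import numpy as np

# float16 has a maximum finite value of 65504; anything larger overflows.
print(np.finfo(np.float16).max)   # 65504.0
print(np.float16(70000.0))        # inf

# float32 (and bfloat16, which shares its exponent width) has far more
# headroom, which is why bf16 training is more robust for large logits.
print(np.float32(70000.0))        # 70000.0
```

It is also worth verifying that padding tokens in `labels` are masked with `-100` (the cross-entropy ignore index); unmasked padding is another classic cause of degenerate loss curves.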
**Describe the bug**

Fine-tuning `Qwen/Qwen2.5-72B-Instruct` with 3 × 2080 Ti, but it raises an error like:

```shell
==((====))== Unsloth - 2x faster free finetuning | Num GPUs used = 3
\\...
```
**Describe the bug**

A clear and concise description of what the bug is. Please fill out the following sections and provide a minimal reproduction script so that we can provide...
### Software environment

```Markdown
paddle2onnx           2.0.1
paddleformers         0.4.0
paddlefsl             1.1.0
paddlenlp             2.8.1
paddlepaddle-gpu      3.2.2
paddleslim            2.6.0
use_triton_in_paddle  0.1.0
```

### Duplicate check

- [x] I have searched the existing issues

### Error description...