
Easy-to-use and high-performance NLP and LLM framework based on MindSpore, compatible with models and datasets of 🤗Huggingface.

Results: 176 mindnlp issues

Task link: https://gitee.com/mindspore/community/issues/IAUP1T

Result: the reference metric is accuracy on the test set; the mindspore + mindnlp reproduction is slightly better than pytorch. Detailed code notes and results are in [README.md](https://github.com/Alemax067/mindnlp/blob/【开源实习】bert模型微调/llm/finetune/bert/README.md). Test-set accuracy:

### my results on mindspore

|Model Variant|Accuracy on Dev Set|
|-------------|-------------------|
|BERT (no finetuning)|81.25%|
|BERT (with finetuning)|90.07%|

### my results on pytorch...
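For readers who want a feel for the setup without opening the README: a minimal sketch of a BERT classification fine-tuning step through mindnlp's transformers-compatible API. The checkpoint name, label count, and input below are illustrative, not taken from the task.

```python
import mindspore
from mindnlp.transformers import BertTokenizer, BertForSequenceClassification

# Illustrative checkpoint/labels; the task's actual setup is in the linked README.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("an example sentence", return_tensors="ms")  # "ms" = MindSpore tensors
outputs = model(**inputs, labels=mindspore.Tensor([1]))
print(outputs.loss)  # the scalar loss that the fine-tuning loop minimizes
```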

Task link: https://gitee.com/mindspore/community/issues/IAUPL0

Result: the reference metric is the loss on the test set. Mixed precision is not yet compatible under mindspore, so it was left disabled; convergence is somewhat slower than pytorch with mixed precision enabled, but the convergence trend is the same. Detailed results are in [README.md](https://github.com/Alemax067/mindnlp/blob/【开源实习】blip模型微调/llm/finetune/blip/README.md).

Loss curve: ![image](https://github.com/user-attachments/assets/58202ffc-dcf0-42b4-981f-22104d6596a2)
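For context on the mixed-precision remark: a sketch of where AMP would plug in, assuming mindnlp mirrors the transformers BLIP class names and that MindSpore's `mindspore.amp.auto_mixed_precision` is the relevant entry point. The checkpoint is illustrative.

```python
import mindspore as ms
from mindnlp.transformers import BlipForConditionalGeneration

model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"  # illustrative checkpoint
)

# What the report had to skip: MindSpore's automatic mixed precision wrapper.
# Once BLIP is compatible, one line would enable the FP16 path:
# model = ms.amp.auto_mixed_precision(model, "O2")

# Until then, training runs in full FP32: slower convergence per step,
# but the same convergence trend as the PyTorch AMP run (see curve above).
```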

Added support for qwen2.5 from transformers/main
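A minimal usage sketch for the newly supported architecture; the checkpoint name and generation parameters are illustrative, not from this PR.

```python
from mindnlp.transformers import AutoModelForCausalLM, AutoTokenizer

# Any Qwen2.5 checkpoint should work once this lands; the id below is illustrative.
model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, MindNLP!", return_tensors="ms")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```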

[Open-source internship] chatglm-4 model fine-tuning: https://gitee.com/mindspore/community/issues/IB4YYU

Results on GPU + Pytorch compared against NPU + MindSpore. Because the full log is long, this table is only an excerpt; see the attached files for details.

| Step | Loss of GPU+Pytorch | Loss of NPU+MindSpore |
| -- | -- | -- |
| 10 | 1.253500 | 3.384100 |
| 20 | 0.384100 | 0.277700 |
| 30 | 0.051100 | ... |
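The issue text does not say which fine-tuning method produced these losses. As one plausible setup, a hedged LoRA sketch, assuming mindnlp's `mindnlp.peft` port mirrors the HF peft API; the checkpoint, target module name, and hyperparameters are all illustrative.

```python
from mindnlp.transformers import AutoModelForCausalLM
from mindnlp.peft import LoraConfig, TaskType, get_peft_model

# Hypothetical configuration: the issue does not state the exact PEFT setup.
model = AutoModelForCausalLM.from_pretrained("THUDM/glm-4-9b-chat")
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=32, lora_dropout=0.1,
    target_modules=["query_key_value"],  # module name varies by architecture
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA adapters train
```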

Issue link: https://gitee.com/mindspore/community/issues/IAN0OJ

At the start of the task I found that the VeRA fine-tuning method was not implemented yet, so this PR first implements VeRA fine-tuning and then uses it for the finetuning task. The comparison against the reference example is as follows.

First, the proportion of trainable parameters:
torch: ![image](https://github.com/user-attachments/assets/c450c221-1011-4c56-b466-f5545f60beae)
mindnlp: ![image](https://github.com/user-attachments/assets/0e820373-fdcc-4cba-be2b-ad8f21ca845c)

Then the fine-tuning results:
torch: ![image](https://github.com/user-attachments/assets/661ba959-5897-4286-8670-2c7930749531)
mindnlp: ![image](https://github.com/user-attachments/assets/c0044c52-b197-4ad9-8809-a024cad07d60)

The two are essentially identical.
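For readers unfamiliar with VeRA: it freezes one shared pair of random matrices A and B across all adapted layers and trains only per-layer scaling vectors d and b, so the delta is Λ_b·B·Λ_d·A·x and the trainable parameter count stays tiny. A minimal MindSpore sketch of one adapted linear layer; `VeraLinear` and the `shared_A`/`shared_B` names are hypothetical, and the real implementation is the one in this PR.

```python
import numpy as np
import mindspore as ms
from mindspore import nn, ops, Parameter, Tensor

class VeraLinear(nn.Cell):
    """Frozen base weight + VeRA delta: y = Wx + b_vec * (B @ (d_vec * (A @ x)))."""

    def __init__(self, base: nn.Dense, shared_A: Tensor, shared_B: Tensor):
        super().__init__()
        self.base = base  # frozen pretrained linear layer
        for p in self.base.trainable_params():
            p.requires_grad = False
        r = shared_A.shape[0]
        self.A = shared_A  # (r, in)  frozen random, shared across layers
        self.B = shared_B  # (out, r) frozen random, shared across layers
        # Only these two vectors are trained (hence the tiny parameter share):
        self.d = Parameter(Tensor(np.full(r, 0.1), ms.float32), name="vera_d")
        self.b = Parameter(Tensor(np.zeros(self.B.shape[0]), ms.float32), name="vera_b")

    def construct(self, x):
        # x: (batch, in) -> (batch, r) -> scaled -> (batch, out) -> scaled
        delta = self.b * ops.matmul(self.d * ops.matmul(x, self.A.T), self.B.T)
        return self.base(x) + delta
```

Only `d` and `b` enter the optimizer, which is why the trainable-parameter share in the screenshots above is so small compared with full fine-tuning.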

实现了"albert/albert-base-v1"模型在"SetFit/20_newsgroups"数据集上的微调实验。 任务链接在https://gitee.com/mindspore/community/issues/IAUONP transformers+pytorch+4060的benchmark是自己编写的,仓库位于https://github.com/outbreak-sen/albert_finetuned 更改代码位于llm/finetune/albert,只包含mindnlp+mindspore的 实验结果如下 # Albert的20Newspaper微调 ## 硬件 资源规格:NPU: 1*Ascend-D910B(显存: 64GB), CPU: 24, 内存: 192GB 智算中心:武汉智算中心 镜像:mindspore_2_5_py311_cann8 torch训练硬件资源规格:Nvidia 3090 ## 模型与数据集 模型:"albert/albert-base-v1" 数据集:"SetFit/20_newsgroups" ## 训练与评估损失 由于训练的损失过长,只取最后十五个loss展示 ### mindspore+mindNLP |...