Haodong Zhao

Results: 9 issues by Haodong Zhao

Question: in FATE-LLM's pellm, where the base_model is replaced with a peft_model that has an adapter (e.g. LoRA) attached, where in the code is it implemented that, after client-side training, only the adapter parameters are passed to the aggregator during aggregation, while the base_model parameters are not transmitted?
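
As context for the question above: PEFT keeps the base-model weights frozen and marks only the adapter weights as trainable, so a federated client only needs to ship that small state dict. The sketch below is not the FATE-LLM/pellm source itself, just a minimal illustration of the mechanism using the public peft API (TinyModel, the layer names, and the LoRA hyperparameters are made up for the example):

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model, get_peft_model_state_dict


class TinyModel(nn.Module):
    """Stand-in for a base language model (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.query = nn.Linear(32, 32)
        self.value = nn.Linear(32, 32)

    def forward(self, x):
        return self.value(self.query(x))


# Wrap the frozen base model with a LoRA adapter, as pellm does with its base_model.
config = LoraConfig(r=4, lora_alpha=8, target_modules=["query", "value"])
peft_model = get_peft_model(TinyModel(), config)

# Adapter-only weights: this is the kind of state dict a client would send
# to the aggregator, with no base_model parameters in it.
adapter_state = get_peft_model_state_dict(peft_model)
print(list(adapter_state.keys()))  # only lora_A / lora_B tensors appear

# Equivalent view: base weights are frozen, only adapter weights require grad.
trainable = {n: p for n, p in peft_model.named_parameters() if p.requires_grad}
```

Where exactly FATE-LLM performs this filtering would still need to be confirmed in the pellm trainer/aggregation code itself.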

Add tutorial: Split Learning for bank marketing (torch)

Fixed #776: Implement the SplitGuard detection/defense algorithm for split learning in SecretFlow
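
For background on the PR above: SplitGuard has the label-holding client occasionally inject batches with randomized ("fake") labels and compare the gradients it receives at the cut layer for fake versus regular batches; under honest training the two groups differ noticeably, whereas under a training-hijacking attack they look alike. The toy sketch below only illustrates that comparison; the function name, scaling, and score definition are simplifications chosen here for illustration, not the paper's exact formula:

```python
import numpy as np


def _cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def splitguard_like_score(fake_grads, regular_grads):
    """Toy SplitGuard-style score (illustrative, not the paper's formula).

    fake_grads / regular_grads: lists of flattened cut-layer gradients the
    client received for fake-label and regular batches, respectively.
    A score near 0 means fake and regular gradients are indistinguishable,
    which is the signature of a training-hijacking server.
    """
    f = np.mean(fake_grads, axis=0)
    r = np.mean(regular_grads, axis=0)

    # Disagreement between fake-label and regular gradients ...
    d_fake = 1.0 - _cosine(f, r)

    # ... compared against the "natural" disagreement among regular batches.
    half = len(regular_grads) // 2
    d_real = 1.0 - _cosine(np.mean(regular_grads[:half], axis=0),
                           np.mean(regular_grads[half:], axis=0))

    return max(0.0, d_fake - d_real)
```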

Issue Type: Feature Request · Source: binary · Secretflow Version: 1.2.0.dev20230918 · OS Platform and Distribution: Centos7 · Python version: 3.8.18 · Bazel version: _No response_ · GCC/Compiler version...

Issue Type: Feature Request · Source: source · Secretflow Version: 0.8.1b0 · OS Platform and Distribution: Centos7 · Python version: 3.8.13 · Bazel version: _No response_ · GCC/Compiler version...

Following the tutorial at https://zhuanlan.zhihu.com/p/621793987, after fine-tuning ChatGLM with LoRA, the model's answers consist entirely of long runs of "??". ![image](https://github.com/liguodongiot/llm-action/assets/45898026/d2392199-88ec-43d9-858a-3b2c7c606fd2)
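
A frequent cause of all-"??" answers after LoRA fine-tuning is an inference-side problem (adapter not actually loaded, or a precision/dtype mismatch) rather than the LoRA training itself. The sketch below only shows one plain way to load the adapter on top of the base ChatGLM for a test generation; the adapter directory path is a placeholder, and the half()/cuda() setup assumes a GPU environment like the tutorial's:

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "THUDM/chatglm-6b"            # base model used in the tutorial
adapter_dir = "./chatglm-lora-output"   # placeholder: your LoRA output directory

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModel.from_pretrained(base_id, trust_remote_code=True).half().cuda()
model = PeftModel.from_pretrained(model, adapter_dir)  # attach the LoRA weights
model.eval()

with torch.no_grad():
    response, _ = model.chat(tokenizer, "你好", history=[])
    print(response)
```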

Feature request: Does peft support custom selection of trainable parameters (for example, some parameters in word_embeddings)? · Motivation: to use the method [EP](https://arxiv.org/abs/2103.15543) · Your contribution: maybe
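
For reference, recent peft versions can already mark additional modules as fully trainable (and saved alongside the adapter) via LoraConfig's modules_to_save; note this works at module granularity rather than for arbitrary parameter subsets. A minimal sketch, assuming a GPT-2 base model where "c_attn" and "wte" are the attention-projection and word-embedding module names:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example base model

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],   # apply LoRA to the attention projection
    modules_to_save=["wte"],     # additionally train and save the word embeddings
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # the embedding weights now count as trainable
```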

Add a research report on backdoor attacks and defenses in horizontal federated learning. @zhaocaibei123