FATE-LLM
Federated Learning for LLMs.
https://github.com/FederatedAI/FATE-LLM/blob/main/doc/tutorial/parameter_efficient_llm/ChatGLM-6B_ds.ipynb — this tutorial contains an image that now returns 404: https://raw.githubusercontent.com/FederatedAI/FATE-LLM/4a5911b8903c4df559a03f7dda3f258ddd6aae6d/doc/tutorial/images/fate-llm-chatglm-6b.png
Hi, I want to know how to use the model after completing the GPT2 example — is there a sample or README? Screenshot below; I only fetched 3 sample rows from IMDB.csv...
I have deployed fate_flow with docker compose and successfully tested algorithms such as HeteroLinR and HomoNN. On top of this setup, how do I load a large language model? Do I simply change the algorithm in the configuration file to ALL? (The project is currently running on CPU.) Second question: I have only found pipeline-based FATE-LLM examples — can an LLM job be implemented by writing config and DSL files instead?
I found no documentation on FATE-LLM, and the GPT2 documentation has disappeared. Please explain how to use this project; I would like to use LLaMA.
When following the GPT2 example, the following errors occur. After debugging, this error is raised in json/encoder.py when it tries to serialize the target_modules = ['c_attn'] component of...
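A json/encoder.py traceback like the one reported above usually means a non-JSON-serializable object reached the job-config serializer. The sketch below is not FATE-LLM code — the config keys are assumptions for illustration — it only shows how the standard-library encoder behaves when it meets a plain list versus an unsupported type:

```python
import json

# A plain list of strings serializes fine, so target_modules = ['c_attn']
# itself is not the problem.
ok_config = {"peft_type": "LORA", "target_modules": ["c_attn"]}
print(json.dumps(ok_config))  # '{"peft_type": "LORA", "target_modules": ["c_attn"]}'

# But if the surrounding object holds any non-JSON type (a set here, as a
# stand-in for an arbitrary config object), json/encoder.py raises TypeError.
bad_config = {"peft_type": "LORA", "target_modules": {"c_attn"}}
try:
    json.dumps(bad_config)
except TypeError as e:
    print("serialization failed:", e)
```

If you hit this, inspect what actually gets passed into the job submission payload — a config object that has not been converted to plain dicts/lists is the typical culprit.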
The following error appears: the GPU request is too large, even though I actually have two GPUs. Also, I submitted the job via the Python command line, not Jupyter. [ERROR] [2023-10-07 22:58:01,892] [202310072257522498440] [22816:139678211446592] - [deepspeed_utils._run] [line:67]: failed to call CommandURI(_uri=v1/cluster-manager/job/submitJob) to xxx.xxx.xxx.xxx:4670:
1. Error description: when training a federated LLM with ChatGLM-6B, the job fails with: ValueError: IP not configured. Please use command line tool `pipeline init` to set it. After running `pipeline init` to configure it, the same error persists: ValueError: IP not configured. Please use command line tool `pipeline init`...
Can a standalone (single-machine) deployment of this project be used for federated LLM training — for example, the ChatGLM-6B example?
When I followed the **GPT2-example** in the tutorial, I encountered the following problem. By the way, in the **GPT2-example** in the tutorial, the imports for **TrainerParam** and **DatasetParam** are missing,...
How can I deploy and run a federated LLM on a single machine? Is there a guide document?