tanglu86

Results: 10 issues by tanglu86

How can the counter be reset through the console or the web client? No relevant instructions were found in the documentation.

documentation

Hello developers, I deployed AgentGPT successfully via `setup.sh --docker` and it is running normally. I passed the correct API key during the deployment process and checked that the...

question
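
A minimal way to double-check that the key actually reached the running container, assuming a standard Docker deployment and that AgentGPT reads the key from an `OPENAI_API_KEY` environment variable (both the container name and the variable name below are assumptions, not confirmed by the issue):

```
# List running containers to find the AgentGPT one (name is deployment-specific).
docker ps

# Print the API key as seen inside the container; an empty result means the
# value supplied to setup.sh never made it into the container environment.
# <agentgpt-container> is a placeholder for the actual container name or ID.
docker exec <agentgpt-container> printenv OPENAI_API_KEY
```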

![1](https://user-images.githubusercontent.com/38269740/236129203-04e6bec8-350f-44d7-adbf-f86b50743570.jpg) Deployment environment: self-hosted dedicated CentOS 7 server. Deployment method: ./setup.sh --docker. After deployment, the site on port 3000 opens normally, but any prompt gets stuck and stays in the "Running" state. Downgrading to 1.0.3 and following the same steps works fine, but 1.0.3 has the problem that answers cannot be displayed in Chinese.

question
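
When a prompt hangs in the "Running" state, the backend logs usually show whether the request to the model API ever completed. A generic troubleshooting sketch, assuming a Docker-based deployment; the container name is a placeholder:

```
# Follow the backend container's logs while reproducing the stuck prompt;
# upstream timeouts or authentication errors will show up here.
docker logs -f --tail 100 <agentgpt-backend-container>
```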

![1](https://github.com/TigerResearch/TigerBot/assets/38269740/da3b61ef-bdbe-43ae-a248-b354ebb04f5b) When I install the dependencies with pip3 install -r requirements.txt, I get the error shown in the screenshot. Checking requirements.txt, the pinned versions are lower than what the project actually needs. Should the file be updated? requires numpy>=1.21.6 requires torch==2.0.1
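
If the pins in requirements.txt are indeed too old, one workaround while waiting for the file to be updated is to install the versions named in the error first; a sketch assuming the two packages quoted above are the only conflicts:

```
# Bring the two packages up to the versions the project actually requires,
# then install the remaining dependencies as usual.
pip3 install "numpy>=1.21.6" "torch==2.0.1"
pip3 install -r requirements.txt
```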

Assuming Master (172.22.8.11:3306) has two slave nodes, SlaveA (172.22.8.12:3306) and SlaveB (172.22.8.13:3306), the RM configuration is as follows:
```
db-servers-hosts = "172.22.8.11:3306,172.22.8.12:3306,172.22.8.13:3306"
db-servers-prefered-master = "172.22.8.11:3306"
```
If Master (172.22.8.11:3306) goes down,...

help wanted
documentation
config
master slave

### Describe the bug I have four graphics cards on my server and I want to load the Qwen Model using GPU 3/4, but I can't seem to correctly specify...

bug
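
One common way to restrict a process to specific cards is the `CUDA_VISIBLE_DEVICES` environment variable, which the training command further down in this list also uses. GPU indices are zero-based, so if "GPU 3/4" means the third and fourth cards, the indices are 2 and 3. A sketch assuming a generic launch script (`load_qwen.py` is a placeholder, not a real file in the project):

```
# Expose only the third and fourth GPUs (indices 2 and 3) to the process;
# inside the process they then appear as cuda:0 and cuda:1.
CUDA_VISIBLE_DEVICES=2,3 python3 load_qwen.py
```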

When calling the API deployed via LLaMA-Efficient-Tuning, responses often take around two minutes to return, whereas the web UI is very fast, so I would like to test a different approach. Does the official project provide an API?
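
For reference, timing a single direct request can help tell whether the two-minute latency comes from the server itself or from the calling client. A sketch assuming the deployed API is OpenAI-compatible and listens on localhost:8000; the host, port, endpoint path, and model name are all assumptions, not values confirmed by the project:

```
# Time one chat request against the deployed API to isolate where the latency sits.
time curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Baichuan-13B-Chat", "messages": [{"role": "user", "content": "hello"}]}'
```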

![1](https://github.com/baichuan-inc/Baichuan-13B/assets/38269740/e31a53c3-5fb2-4633-9716-b06928b33ed7)
```
CUDA_VISIBLE_DEVICES=0 python3 src/train_bash.py \
    --stage sft \
    --model_name_or_path /data/LLM_Project/Baichuan-13B-Chat \
    --template baichuan \
    --do_train \
    --dataset xiaogang \
    --finetuning_type lora \
    --lora_rank 8 \
    --lora_target W_pack \
    --output_dir /data/LLM_OUTPUT_Project/Baichuan-13B-Chat...
```

The WeChat group now has more than 200 members, so it is invite-only; scanning the QR code no longer works for joining.