
Results: 3 issues by yanguangcang2019

(llama3_env) root@cuda22:~/llama3# torchrun --nproc_per_node 1 example_chat_completion.py --ckpt_dir /root/llama3/Meta-Llama-3-8B/ --tokenizer_path /root/llama3/Meta-Llama-3-8B/tokenizer.model --max_seq_len 512 --max_batch_size 6
> initializing model parallel with size 1
> initializing ddp with size 1
> initializing pipeline...
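For context, a minimal sketch of what the example script roughly does, assuming the `Llama.build` / `chat_completion` API from the meta-llama/llama3 repository; the paths and batch settings mirror the command above, and the dialog content is made up for illustration.

```python
# Minimal sketch, assuming the Llama API from the meta-llama/llama3 repo
# (Llama.build / chat_completion). Paths mirror the torchrun command above;
# this is illustrative, not the exact contents of example_chat_completion.py.
from llama import Llama

generator = Llama.build(
    ckpt_dir="/root/llama3/Meta-Llama-3-8B/",
    tokenizer_path="/root/llama3/Meta-Llama-3-8B/tokenizer.model",
    max_seq_len=512,      # must cover prompt plus generated tokens
    max_batch_size=6,     # dialogs processed per batch
)

# Hypothetical dialog for illustration only.
dialogs = [
    [{"role": "user", "content": "What is the recipe of mayonnaise?"}],
]

results = generator.chat_completion(
    dialogs,
    max_gen_len=None,
    temperature=0.6,
    top_p=0.9,
)

for dialog, result in zip(dialogs, results):
    print(result["generation"]["role"], ":", result["generation"]["content"])
```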

https://docs.dify.ai/getting-started/install-self-hosted/local-source-code ![image](https://github.com/langgenius/dify/assets/76649700/cb4aa70b-a316-4cfa-8704-181fd918dc3d)

I run Dify in Docker, not from the local source code. I am also encountering the same issue where the progress of my file upload to the knowledge base is always stuck at 0. I've...

🐞 bug

### System Info
CUDA 12.1, Python 3.10.12

### Running Xinference with Docker?
- [ ] docker
- [X] pip install /...

gpu
stale
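
For reference, a minimal sketch of collecting the system information that the Xinference issue template above asks for; it assumes PyTorch is installed, and the CUDA version it reports is the one PyTorch was built against, which may differ from the driver's CUDA version.

```python
# Minimal sketch for gathering the "System Info" requested by the issue template.
# Assumes PyTorch is installed in the same environment used to run Xinference.
import platform

import torch

print("Python:", platform.python_version())        # e.g. 3.10.12
print("PyTorch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)    # e.g. 12.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```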