torchchat
Run PyTorch LLMs locally on servers, desktop and mobile
### 🐛 Describe the bug I ran into an issue loading the tokenizer, which was root-caused to my use of a local PyTorch build. After building the AOTI runner,...
### 🚀 The feature, motivation and pitch It should be possible to pass `--generate_etrecord` to produce artifacts for debugging, like this: `python3 torchchat.py export llama3.1 --quantize torchchat/quant_config/mobile.json --output-pte-path llama3.1.pte --generate_etrecord`...