torchchat
Run PyTorch LLMs locally on servers, desktop and mobile
When exporting, an output path is required. This PR makes the requirement explicit: **python torchchat.py export stories15M** now reports
```
usage: torchchat export [-h] [--checkpoint-path CHECKPOINT_PATH] (--output-pte-path OUTPUT_PTE_PATH | --output-dso-path OUTPUT_DSO_PATH) [--dtype {fp32,fp16,bf16,float,half,float32,float16,bfloat16,fast,fast16}]...
```
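A minimal sketch of how a required, mutually exclusive output-path pair can be expressed with argparse. The flag names mirror the usage string above, but this is an illustration of the pattern, not torchchat's actual parser code:

```python
import argparse

# Illustrative parser: the (--output-pte-path | --output-dso-path) group is
# both mutually exclusive and required, so omitting an output path fails fast.
parser = argparse.ArgumentParser(prog="torchchat export")
parser.add_argument("--checkpoint-path")
out = parser.add_mutually_exclusive_group(required=True)
out.add_argument("--output-pte-path")
out.add_argument("--output-dso-path")

# Supplying exactly one output flag parses cleanly.
args = parser.parse_args(["--output-pte-path", "stories15M.pte"])
print(args.output_pte_path)
```

With `required=True` on the group, calling the command with no output flag at all produces the usage error shown above instead of silently exporting nowhere.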
### 🚀 The feature, motivation and pitch Having good basic PyTorch support for LLM inference is key to the continued success of PyTorch. Vision LLM models tend to have uneven support...
As described in Issue [932](https://github.com/pytorch/torchchat/issues/932), the legacy arg parser requires every subcommand to accept CLI args it doesn't actually use. This PR fixes this...
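A minimal sketch of the intended structure, where each subcommand declares only the args it uses, so unrelated flags never leak between commands. The subcommand and flag names are illustrative, not torchchat's actual implementation:

```python
import argparse

# Illustrative sketch: each subparser owns its own args, so `generate`
# never sees (or requires) export-only flags like --output-pte-path.
parser = argparse.ArgumentParser(prog="torchchat")
sub = parser.add_subparsers(dest="command", required=True)

gen = sub.add_parser("generate")
gen.add_argument("--prompt", default="Hello")

exp = sub.add_parser("export")
exp.add_argument("--output-pte-path", required=True)

# Parsing a generate command yields a namespace with only generate's args.
args = parser.parse_args(["generate", "--prompt", "Once upon a time"])
print(args.command, args.prompt)
```

Because the export-only flag lives on the `export` subparser, the resulting namespace for `generate` simply doesn't contain it, which is the behavior the legacy shared-parser design made impossible.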
### 🚀 The feature, motivation and pitch A crash occurs when the tokenizer bundled in the Android .aar doesn't match the tokenizer the model needs. We should...
### 🚀 The feature, motivation and pitch There's a lot of good content in the README, but a visual component will help reinforce the messaging. It'll be the first...
### 🚀 The feature, motivation and pitch The repo doesn't quite pass the smell test for being an example, in that the number of files and folders in root are...
### 🐛 Describe the bug See example below:
```
$ python3 torchchat.py generate --help
Traceback (most recent call last):
  File "/home/nshulga/git/pytorch/torchchat/torchchat.py", line 53, in <module>
    args = parser.parse_args()
           ^^^^^^^^^^^^^^^^^^^
  File "/home/nshulga/miniconda3/envs/py311/lib/python3.11/argparse.py",...
```
Provide the ability to hook up the browser / Flask app with the native execution binary runner/run.cpp.
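One way such glue could look: the web layer builds a command line for the native runner binary and invokes it with `subprocess`, streaming the captured stdout back to the browser. The binary path and the `-m`/`-p` flags here are assumptions for illustration, not the runner's actual interface:

```python
import subprocess

def build_runner_cmd(binary, model_path, prompt):
    # Hypothetical flags: -m for the model file, -p for the prompt.
    # Using a list (not a shell string) avoids shell-injection issues
    # with user-supplied prompts from the browser.
    return [binary, "-m", model_path, "-p", prompt]

def run_native(binary, model_path, prompt):
    # A Flask (or any web) handler could call this and return the text.
    result = subprocess.run(
        build_runner_cmd(binary, model_path, prompt),
        capture_output=True, text=True,
    )
    return result.stdout

cmd = build_runner_cmd("./run", "stories15M.pte", "Once upon a time")
print(cmd)
```

Keeping command construction in its own function lets the web handler stay thin and makes the argument mapping easy to adjust once the real runner flags are pinned down.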