[UX] We are too quiet about errors - in particular missing HF authentication...
The command `python3 torchchat.py where llama3` fails quietly, presumably because I don't have an HF token configured.
I assumed the code was broken, though, because all I got was a backtrace and a message that some files could not be found.
Running with stories15M convinced me (after reading the source and not finding anything disagreeable) that this must be a problem with downloading llama3, but nothing in the output told me that as a user.
And yes, I'm a typical user here: I sort of followed the instructions (well, I logged in a while ago, that should be enough?) and now I'm submitting a bug.

Point being, this might attract a lot of "(potential) operator error" messages that become bug reports we have to triage, and in parallel leave users unsatisfied, because they've convinced themselves that torchchat is broken rather than that they did an imperfect job of setup.
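One way to fix this would be a pre-flight authentication check before any gated download starts. A minimal sketch, not torchchat's actual code; `check_hf_auth` is a hypothetical helper, and the exact exceptions `whoami` raises on a missing or rejected token are an assumption about huggingface_hub's behavior:

```python
# Hypothetical pre-flight check (not torchchat's current code): fail fast
# with an actionable message when no usable HF token is available.
from huggingface_hub import whoami
from huggingface_hub.utils import HfHubHTTPError, LocalTokenNotFoundError

def check_hf_auth(hf_token=None):
    try:
        # whoami() hits the HF API and raises if no token is found
        # locally or the server rejects the one we pass.
        user = whoami(token=hf_token)
    except (LocalTokenNotFoundError, HfHubHTTPError) as e:
        raise RuntimeError(
            "Downloading this model requires HuggingFace authentication. "
            "Run `huggingface-cli login` (or pass --hf-token) and make sure "
            "you have accepted the model's license on huggingface.co."
        ) from e
    print(f"Authenticated to HuggingFace as {user['name']}")
```

For reference, here is the full session: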
```
(py311) mikekg@mikekg-mbp torchchat % python3 torchchat.py where llama3
Downloading meta-llama/Meta-Llama-3-8B-Instruct from HuggingFace...
Converting meta-llama/Meta-Llama-3-8B-Instruct to torchchat format...
known configs: ['13B', '70B', 'CodeLlama-7b-Python-hf', '34B', 'stories42M', '30B', 'stories110M', '7B', 'stories15M', 'Mistral-7B', 'Meta-Llama-3-8B']
Model config {'block_size': 2048, 'vocab_size': 128256, 'n_layers': 32, 'n_heads': 32, 'dim': 4096, 'hidden_dim': 14336, 'n_local_heads': 8, 'head_dim': 128, 'rope_base': 500000.0, 'norm_eps': 1e-05, 'multiple_of': 1024, 'ffn_dim_multiplier': 1.3, 'use_tiktoken': True, 'max_seq_length': 8192}
Traceback (most recent call last):
  File "/Users/mikekg/ci/torchchat/torchchat.py", line 169, in <module>
    check_args(args, "where")
  File "/Users/mikekg/ci/torchchat/cli.py", line 39, in check_args
    download_and_convert(args.model, args.model_directory, args.hf_token)
  File "/Users/mikekg/ci/torchchat/download.py", line 97, in download_and_convert
    _download_hf_snapshot(model_config, temp_dir, hf_token)
  File "/Users/mikekg/ci/torchchat/download.py", line 61, in _download_hf_snapshot
    convert_hf_checkpoint(
  File "/Users/mikekg/miniconda3/envs/py311/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/mikekg/ci/torchchat/build/convert_hf_checkpoint.py", line 60, in convert_hf_checkpoint
    raise RuntimeError(
RuntimeError: Could not find /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/pytorch_model.bin.index.json or /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/original/consolidated.00.pth plus /Users/mikekg/.torchchat/model-cache/downloads/meta-llama/Meta-Llama-3-8B-Instruct/original/tokenizer.model
(py311) mikekg@mikekg-mbp torchchat % python3 torchchat.py where stories15M
/Users/mikekg/.torchchat/model-cache/stories15M
(py311) mikekg@mikekg-mbp torchchat %
```
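Alternatively (or additionally), the download step itself could catch huggingface_hub's gated-repo errors and translate them into an actionable message. A sketch only; `_download_hf_snapshot` is the function name from the traceback above, but this body and its signature are illustrative, not torchchat's real code:

```python
# Illustrative sketch: surface auth/gating failures at download time
# instead of letting them show up later as a missing-file RuntimeError.
from huggingface_hub import snapshot_download
from huggingface_hub.utils import GatedRepoError, RepositoryNotFoundError

def _download_hf_snapshot(repo_id, models_dir, hf_token=None):
    try:
        snapshot_download(repo_id, local_dir=models_dir, token=hf_token)
    except GatedRepoError as e:
        # Gated repo: the user needs a valid token and an accepted license.
        raise RuntimeError(
            f"{repo_id} is gated on HuggingFace. Accept the license at "
            f"https://huggingface.co/{repo_id}, then authenticate with "
            "`huggingface-cli login` or pass --hf-token."
        ) from e
    except RepositoryNotFoundError as e:
        # Also raised for private/gated repos when the token lacks access.
        raise RuntimeError(
            f"Could not access {repo_id}; check the model name and your HF token."
        ) from e
```

Either way, the key point is that the authentication failure should be reported at the download step, not later as a missing-file error in convert_hf_checkpoint.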