Results: 4 issues of Deepankar Sharma
- **Feature request** (label: `feature`): the body shows only the issue-template headings *Feature motivation*, *Feature description*, and *Feature implementation*, with no details filled in.
- **Colab run of `example_text_completion.py`** (label: `needs-more-information`): "I tried this on colab:

  ```
  ! torchrun --nproc_per_node 1 example_text_completion.py \
  ! --ckpt_dir llama-2-7b-chat/ \
  ! --tokenizer_path tokenizer.model \
  ! --max_seq_len 64 --max_batch_size 1  # (instead of 4)
  ```

  and getting..." (a single-process way to launch this command is sketched after the list)
- **fixed flags**
- **Corrected `transformers.LlamaForCausalLM`** (a minimal usage sketch follows below)
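On the Colab report above, the per-line `!` prefixes are worth noting: in a notebook, each line beginning with `!` can be run as its own shell command, so a backslash-continued `torchrun` invocation split across `!` lines may not reach the script intact. One way to sidestep this (a sketch only; it assumes `torchrun` is on the PATH and that the checkpoint and tokenizer paths from the report exist in the runtime) is to launch the command as a single process from Python:

```python
# Sketch: run the reported torchrun invocation as one process from Python,
# avoiding notebook `!` line-continuation quirks. Paths are taken from the
# issue report; --max_batch_size is 1 there, reduced from the original 4.
import subprocess

subprocess.run(
    [
        "torchrun", "--nproc_per_node", "1",
        "example_text_completion.py",
        "--ckpt_dir", "llama-2-7b-chat/",
        "--tokenizer_path", "tokenizer.model",
        "--max_seq_len", "64",
        "--max_batch_size", "1",
    ],
    check=True,  # raise CalledProcessError if torchrun exits non-zero
)
```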
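On the last item, `LlamaForCausalLM` is the Hugging Face Transformers class for Llama checkpoints converted to the HF format. A minimal usage sketch follows; the checkpoint name `meta-llama/Llama-2-7b-chat-hf` and the prompt are assumptions for illustration, not taken from the listing:

```python
# Sketch: load a Llama 2 chat model via transformers.LlamaForCausalLM.
# Assumes the weights exist in Hugging Face format under the (gated)
# "meta-llama/Llama-2-7b-chat-hf" repo -- an assumption, not from the issue.
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)

inputs = tokenizer("I believe the meaning of life is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```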