Zack Manning


Same issue here with a 7b model:

```
base_model: -7b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
datasets:
val_set_size: 0.05
output_dir: ./out
dataset_prepared_path: last_run_prepared
load_in_8bit: false
load_in_4bit: false
strict: false
...
```

This also happens with a bitsandbytes build from source at hash `136721a8c1437042f0491972ddc5f35695e5e9b2`.

Would love to know this as well, as I have a custom build pushed to our internal repo that keeps me up at night.

I got the same error, but the workaround from the link worked for me:

```
import search from "flexsearch"

const i: search.Index = ...
```