Mohammed Faheem

Results: 6 issues by Mohammed Faheem

### Question
I ran llava.eval.run_llava.py with the model "liuhaotian/LLaVA-Lightning-MPT-7B-preview". What's the problem here? `/content/llava/model/mpt/attention.py:148: UserWarning: Using `attn_impl: torch`. If your model does not use `alibi` or `prefix_lm` we recommend using...`

Is there a way to access the special tokens via a `youtokentome.BPE()` object? It's in the `yttm.pyx` file, and the object is created inside the train method of the same...

In the NLP Specialization -> Course 2 -> Week 4 -> Assignment -> Ex. 4, we should separate the l1 computation into l1_relu and l1 variables; l1_relu should be used for the gradient...
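The point behind keeping both variables can be shown with a minimal sketch (pure Python, hypothetical variable names `z1`/`h1` standing in for the assignment's `l1`/`l1_relu`): the ReLU backward pass needs the pre-activation values to know which units were zeroed.

```python
def relu(v):
    # Forward pass: zero out negative entries.
    return [max(0.0, x) for x in v]

def relu_grad(pre_activation, upstream_grad):
    # Backward pass: the gradient flows through only where the
    # *pre-ReLU* value was positive, so that value must be kept.
    return [g if z > 0 else 0.0 for z, g in zip(pre_activation, upstream_grad)]

z1 = [-1.5, 0.2, 3.0]      # l1 before ReLU (kept for backprop)
h1 = relu(z1)              # l1_relu, used in the forward pass
dh1 = [0.4, 0.4, 0.4]      # gradient arriving from the next layer
dz1 = relu_grad(z1, dh1)   # first entry is masked to 0.0
```

If only the post-ReLU values were kept, the mask for the backward pass would have to be reconstructed from them, which is why the assignment separates the two.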

Is it possible to train the model to generate structured output with a custom JSON schema? Please help me ASAP.
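One common approach, sketched here with stdlib tools only (the function name `validate_against_schema` and the required-keys "schema" format are illustrative, not any library's API), is to describe the schema in the prompt and then validate the model's raw output before accepting it, retrying on failure:

```python
import json

def validate_against_schema(raw_output, schema):
    """Parse model output and check it has the schema's required keys
    with the expected Python types. Returns the parsed dict, or None."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(parsed, dict):
        return None
    for key, expected_type in schema.items():
        if key not in parsed or not isinstance(parsed[key], expected_type):
            return None
    return parsed

schema = {"name": str, "age": int}
ok = validate_against_schema('{"name": "Ada", "age": 36}', schema)   # accepted
bad = validate_against_schema('{"name": "Ada"}', schema)             # rejected: None
```

Dedicated constrained-decoding libraries enforce the schema during generation instead, but a validate-and-retry loop like this works with any model.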

You said we can pass the planner7b LoRA to run.py, which uses the merged model as the base model. You used a condition in nodes/Planner.py, and you formatted the prompt into instruction and input...

The Code:
```py
from transformers import AutoTokenizer, TextGenerationPipeline, TextStreamer, GenerationConfig
from auto_gptq import AutoGPTQForCausalLM
import torch
from transformers_stream_generator import init_stream_support

init_stream_support()
repo = "TheBloke/tulu-7B-GPTQ"
model_basename = "gptq_model-4bit-128g"
test_tokenizer = AutoTokenizer.from_pretrained(...
```