Mustangs007

Results: 7 comments by Mustangs007

### Help: where do I have to put the above code?

```python
input_text = [
    # 'What is the capital of United States?',
    'I like',
]
MAX_LENGTH = 128
input_tokens = model.tokenizer(input_text, return_tensors="np", return_attention_mask=False, ...
```

```python
mx.load(str(to_load_path))
```

Help: where do I have to put the above code?

```python
input_text = [
    # 'What is the capital of United States?',
    'I like',
]
MAX_LENGTH = 128
input_tokens = model.tokenizer(input_text, return_tensors="np", return_attention_mask=False, ...
```

Can you give a sample of the `expected_output` to put here? Thanks.

Thanks for replying, but I am only attaching the LLM to the agent. Can you please elaborate more? Hoping for a quick reply. Thanks.

The reason I set the config is that it is required; otherwise it will use GPT by default. Can you please provide a sample working example? Thanks.

> ```python
> from langchain.llms import Ollama
> ollama_llm = Ollama(model="llama2", temperature=0)
> ```

Is the above way of using the Ollama model the same as what you mentioned, or different?

I also need help with how to use CSV RAG with a local Llama model; it is not working.
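For anyone debugging this, the retrieval half of "CSV RAG" can be sketched with the standard library alone, independent of LangChain. This is a minimal, hypothetical stand-in (the sample CSV, the keyword-overlap scoring, and the function names are all assumptions, not any library's API); the retrieved rows would then be prepended to the prompt sent to the local Llama model:

```python
import csv
import io

# Hypothetical in-memory CSV standing in for the user's data file.
CSV_DATA = """name,description
llama2,An open-weight chat model from Meta
mistral,A 7B parameter open model
"""

def load_rows(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def retrieve(rows, query, k=1):
    """Naive keyword-overlap retrieval: score each row by how many
    query words appear in its concatenated field values."""
    q_words = set(query.lower().split())
    def score(row):
        text = " ".join(row.values()).lower()
        return sum(w in text for w in q_words)
    return sorted(rows, key=score, reverse=True)[:k]

rows = load_rows(CSV_DATA)
top = retrieve(rows, "open-weight chat model from Meta")
# The top row's text would be inserted into the LLM prompt as context.
print(top[0]["name"])
```

A real pipeline would replace the keyword scoring with embedding similarity, but checking that this retrieval step returns sensible rows is a useful first test before involving the model at all.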