CHesketh76
Still having issues with ```tokenizer = cAutoTokenizer.from_pretrained(model)```, but using ```Open-Orca/Mistral-7B-OpenOrca``` for the tokenizer appears to resolve it. I am not too happy about the speed, though. When using ```llm = cAutoModelForCausalLM.from_pretrained(...)```...
So I get 15x faster token output with no GPU layers... I think something is wrong.
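For reference, a minimal sketch of the tokenizer workaround described above, assuming ctransformers is imported under the aliases used in this thread and that the weights come from a GGUF repo; the repo and file names here are assumptions, not confirmed:

```python
# Sketch of the workaround, not a confirmed fix; repo/file names are assumptions.
from ctransformers import AutoModelForCausalLM as cAutoModelForCausalLM
from transformers import AutoTokenizer  # HF tokenizer instead of cAutoTokenizer

# Load the GGUF weights with ctransformers (hypothetical repo/file names).
llm = cAutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-GGUF",
    model_file="mistral-7b-openorca.Q4_K_M.gguf",
    gpu_layers=0,  # 0 = CPU only; the comment above saw faster output this way
)

# Pull the tokenizer from the original (non-GGUF) repo, which sidesteps the
# cAutoTokenizer.from_pretrained(model) failure described above.
tokenizer = AutoTokenizer.from_pretrained("Open-Orca/Mistral-7B-OpenOrca")
```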
@byshiue I am hoping to get this working without using Docker. I eventually want to move this work over to my company computer, but the company that I work...
@pipul Not yet.
`{'results': []}` This is the final output, so no memories are added.
I think I found the issue: mem0 fails when the added text is more than 100 characters long.
```python
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mistral-nemo",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    },
    "embedder": {
        "provider": "ollama",
        "model": "verdx/gte-base-zh",
        "embedding_dims": 768,
        "max_tokens": 2000,
    },
    "vector_store": ...
```
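For anyone trying to reproduce this, a minimal sketch using the config above (once the truncated `vector_store` section is filled in) and mem0's `Memory.from_config` API; the sample texts and the 100-character boundary are taken from the observation above, and the `user_id` is a placeholder:

```python
from mem0 import Memory

m = Memory.from_config(config)  # config as defined above

short_text = "User prefers green tea."                     # well under 100 chars
long_text = "User prefers green tea in the morning. " * 5  # well over 100 chars

print(m.add(short_text, user_id="demo"))  # memories are extracted and stored
print(m.add(long_text, user_id="demo"))   # reportedly returns {'results': []}
```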