[COG-1894] Try cognee with ollama with different small models
This issue suggests running the same data and pipeline workflow in cognee while swapping in different LLM models via Ollama, then showcasing the resulting graph outputs.
Comparing outputs across various small models can highlight differences in accuracy, speed, and resource usage. It can also show whether smaller, more efficient models handle certain tasks well enough to replace or supplement larger models.
Share your findings to help guide best practices when choosing a model for a given project.
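A minimal sketch of how the model swap could be driven, assuming cognee reads its LLM settings from environment variables (the variable names and Ollama model tags below are assumptions; verify them against cognee's configuration docs and your local `ollama list`):

```shell
# Hypothetical configuration pointing cognee at a local Ollama server.
# Variable names are assumed from cognee's provider-based config; adjust as needed.
export LLM_PROVIDER="ollama"
export LLM_ENDPOINT="http://localhost:11434/v1"

# Swap the model per run and re-execute the same add/cognify pipeline,
# e.g. llama3.2:3b, phi3:mini, qwen2.5:3b — pull each with `ollama pull <tag>` first.
export LLM_MODEL="llama3.2:3b"
```

Keeping the dataset and pipeline identical between runs and changing only `LLM_MODEL` makes the resulting graphs directly comparable across models.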
From SyncLinear.com | COG-1894
pls assign.