TwoAI
A simple experiment in letting two local LLMs have a conversation about anything!
TwoAI issues
Please add Ollama "num_gpu" as a parameter. It makes some LLMs respond faster.
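For context on the request: Ollama's API accepts a `num_gpu` option (the number of model layers to offload to the GPU) inside the `options` object of a generate request. A minimal sketch of how the parameter could be passed, assuming a local Ollama endpoint and a hypothetical model name:

```python
import json
import urllib.request

# Default local Ollama endpoint (an assumption; adjust for your setup).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str, num_gpu: int) -> dict:
    """Build an /api/generate request body with the num_gpu option set."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        # num_gpu: how many layers Ollama offloads to the GPU;
        # more layers on the GPU generally means faster responses.
        "options": {"num_gpu": num_gpu},
    }

def generate(model: str, prompt: str, num_gpu: int) -> str:
    """Send one generate request and return the model's response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt, num_gpu)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In a two-model conversation loop, `num_gpu` would simply be forwarded into each side's `options` dict alongside any existing settings such as temperature.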