Gianluca Sabato
Hi and thanks, @MohMaz. I’ve tried the following provisioning modes:

- **Ollama (local)**
- **Ollama with liteLLM (local)**
- **together.ai (remote)**

I’ve been focusing on various Llama versions and tested...
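For context, here is a minimal sketch of how these three modes can be wired into an autogen `config_list`. Every endpoint, port, model name, and key below is an assumption about a typical setup (a liteLLM proxy on port 4000, Ollama's OpenAI-compatible API on port 11434), not the exact configuration used here:

```
# Hypothetical config_list covering the three provisioning modes above.
# All endpoints, ports, model names, and keys are placeholders; adjust to your setup.
config_list = [
    {   # Ollama (local), via its OpenAI-compatible endpoint
        "model": "llama3.2",
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",  # Ollama ignores the key, but the client needs one
    },
    {   # Ollama behind a local liteLLM proxy
        "model": "ollama/llama3.2",
        "base_url": "http://localhost:4000",
        "api_key": "sk-anything",  # whatever the proxy is configured to accept
    },
    {   # together.ai (remote)
        "model": "meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo",
        "base_url": "https://api.together.xyz/v1",
        "api_key": "<TOGETHER_API_KEY>",
    },
]
```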
It happened even without truncation; I added the truncation while trying to solve the problem. I'll post an output without truncation later. Or you can try it yourself by disabling the truncation in the...
Here's an output without truncation; the same problem happens:

```
==============================
Starting Chat using model: meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo
==============================
Trying to create collection.
2024-11-29 21:45:19,089 - autogen.agentchat.contrib.retrieve_user_proxy_agent - INFO - Use...
```
@thinkall Hi and thanks. I didn't try this exact iteration of the code with OpenAI. In general, the same code performed better and more reliably with the OpenAI APIs. I...
@thinkall I tried to fix the `custom_speaker_selection_func` function like this:

```
def custom_speaker_selection_func(last_speaker: Agent, groupchat: GroupChat):
    """Define a customized speaker selection function.

    A recommended way is to define a transition...
```
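Since the snippet above is cut off, here is a minimal self-contained sketch of what a complete function of this shape can look like. The agent names ("planner", "coder", "executor") and the transition rules are hypothetical placeholders, not the ones from the actual group chat:

```
from autogen import Agent, GroupChat

def custom_speaker_selection_func(last_speaker: Agent, groupchat: GroupChat):
    """Pick the next speaker from the conversation state.

    Returning an Agent selects it directly, returning None ends the chat,
    and returning a string such as "auto" falls back to a built-in method.
    """
    messages = groupchat.messages
    if not messages:
        # Hypothetical starting agent
        return groupchat.agent_by_name("planner")
    if last_speaker.name == "coder":
        # Code was just produced: hand it to the executor
        return groupchat.agent_by_name("executor")
    if last_speaker.name == "executor":
        # After execution, let the planner review the result
        return groupchat.agent_by_name("planner")
    # Otherwise defer to LLM-based selection
    return "auto"
```

The function is then passed as `speaker_selection_method=custom_speaker_selection_func` when constructing the `GroupChat`.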