Suggestion: Custom AI Settings
I can't seem to find the documentation that guides you through Custom AI Settings. I'm trying to connect a model via Ollama, but the server is throwing an error. How can I fix this?
Hello, you can try version 0.3.6.
Hi, thanks for the great job on chat2db. I can't manage to change the settings; they seem fixed, impossible to set (on Mac and Linux, anyway).
Hello, can you provide a screenshot and the software version used?
0.3.6
Here are the settings, blocked and impossible to reset.
(By the way, what is the URL to provide for Ollama? Normally http://serverip:11434/.)
OK, it's working with Ollama on version 0.3.7 with http://serverollama:11434/v1/chat/completions/
Cheers!
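For anyone else wiring this up, here is a minimal sketch of the kind of request a client sends to Ollama's OpenAI-compatible chat endpoint. This is an illustration, not chat2db's actual code; the host name `serverollama` and the system prompt are examples, and the model tag assumes you have pulled it locally.

```python
import json

# Example endpoint from this thread; replace the host with your own server.
OLLAMA_URL = "http://serverollama:11434/v1/chat/completions"

def build_request(prompt, model="llama3.2:3b-instruct-q8_0"):
    """Build an OpenAI-style chat-completion body for Ollama.

    The system message nudges the model to answer with SQL only,
    which instruct-tuned models tend to respect.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Return only the SQL query, with no explanation."},
            {"role": "user", "content": prompt},
        ],
        "stream": False,  # one complete JSON response instead of a stream
    }

# Serialize the body; POST it to OLLAMA_URL with your HTTP client of choice.
body = json.dumps(build_request("get all users"))
print(body)
```

Posting that body with `Content-Type: application/json` should return an OpenAI-shaped response whose text sits under `choices[0].message.content`.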
@mehdi-belkhayat Thank you very much for your help! Your suggestion with Ollama worked perfectly. However, I have one issue: I am getting the entire response from the LLM instead of just the SQL query. Do you know how I can solve this problem? An example of the response is attached below.
prompt: get all users
Response: Based on the properties you provided, here is an example of a single column query to retrieve all users from the specified MySQL table:
SELECT * FROM users;
This will return all columns (*) for every row in the users table.
Please note that if your table has any constraints (like foreign keys), the above SQL might not work as expected due to the lack of joins.
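If you cannot switch models right away, a stopgap is to strip the SQL statement out of the verbose answer yourself. The sketch below is a hypothetical workaround using a simple regex; it only handles single statements ending in a semicolon, and switching to an instruct-tuned model (as suggested below) is the real fix.

```python
import re

def extract_sql(response: str):
    """Pull the first SQL statement out of a verbose LLM answer.

    Looks for a statement starting with a common SQL keyword and
    ending at the first semicolon. Returns None if nothing matches.
    """
    match = re.search(
        r"\b(SELECT|INSERT|UPDATE|DELETE)\b.*?;",
        response,
        re.IGNORECASE | re.DOTALL,
    )
    return match.group(0) if match else None

verbose = ("Based on the properties you provided, here is an example: "
           "SELECT * FROM users; This will return all columns.")
print(extract_sql(verbose))  # SELECT * FROM users;
```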
You are welcome. I think you are using a generic LLM like llama3.2; for cases like this, you must use the instruct version of the LLM, which is capable of using tools and following instructions.
I reproduced the same problem with generic llama3.2, which was very verbose rather than giving me the SQL request only, but when I used llama3.2:3b-instruct-q8_0 it worked as intended.
Visit https://ollama.com/library/llama3.2/tags and select the instruct version you like; you have many possible choices.
And I remind you of the miracle of DeepSeek R1 last week, which is capable of reasoning; you should try it too. There are small reasoning models here: https://ollama.com/library/deepseek-r1/tags
Good hunt for your best LLM!