devika
Not working with LLM
Describe the bug
I've tested the "Implement Conway's Game of Life in Python using Pygame" task, but I always encounter "Invalid response from the model, trying again..."
I've attempted using llama2:70b and qwen:72b - both yielded identical outcomes.
What I think is happening is that llama2 can't follow the prompt and return a response in the correct format. Try another LLM, like Claude 3 (for which you get $5 of free credit initially) or Gemini, which I believe is free until May.
I can reproduce this issue. Looks like the LLM's response is not in the intended format.
I have submitted a pull request with a modification that I believe will mitigate the issue.
The Llama2 model is not providing responses in the expected format:
JSON { ...... }
Instead, it includes additional explanation/justification sections in some of its responses, causing the JSON validation to fail.
Llama2 output: JSON{...} explanation: ....
This can be somewhat corrected by modifying the prompt to explicitly instruct the model to adhere to the requested output format. Please have a look at the prompt.jinja modification here: https://github.com/stitionai/devika/commit/ea4076945f458b2ba492d5beba576b87c21747d1 and confirm whether the suggested change helps.
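A client-side workaround is also possible: extract the first balanced JSON object from the raw response before validating it. Below is a minimal sketch of that idea; it is not Devika's actual parsing code, and `extract_json` is a hypothetical helper:

```python
import json

def extract_json(raw: str) -> dict:
    """Return the first balanced {...} object found in raw model output.

    Naive scanner: it does not account for braces inside JSON strings,
    which is acceptable for a sketch but not for production parsing.
    """
    start = raw.find("{")
    if start == -1:
        raise ValueError("no JSON object found in model output")
    depth = 0
    for i, ch in enumerate(raw[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(raw[start:i + 1])
    raise ValueError("unbalanced braces in model output")

# Example: the kind of wrapped output Llama2 was observed to produce.
raw = 'JSON {"response": "ok"} explanation: I chose this because...'
print(extract_json(raw))  # {'response': 'ok'}
```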
@stan1233 are you on Ollama?
Use one of:
- command-r:35b-v0.1-q4_K_M
- openchat:7b-v3.5-1210-q5_K_M
- mistral-openorca:7b-q6_K
The latest Devika version was working with Ollama; I have frozen it in my repo: https://github.com/hqnicolas/devika
1. git clone https://github.com/hqnicolas/devika
2. cd devika
3. create the config file
4. edit the docker compose to point to your Ollama URL
5. sudo docker compose build
6. sudo docker compose up
This is exactly my problem
@brPreetham How can you see the raw Ollama output?
I'm new to the project and would like to know where to look to debug exactly what my model is formatting incorrectly.
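One way to see the raw model output outside of Devika is to call Ollama's HTTP API directly (it listens on localhost:11434 by default). A minimal sketch, assuming the model is already pulled locally:

```python
import json
import urllib.request

# POST to Ollama's generate endpoint (default local port 11434).
payload = {
    "model": "llama2:70b",   # illustrative choice; use whichever model you pulled
    "prompt": "Reply with a JSON object only.",
    "stream": False,          # one complete response instead of streamed chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# body["response"] is the raw text the model produced, before any
# JSON validation Devika applies on top of it.
print(body["response"])
```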
I am currently working on the problem here :)
Wow, that sounds great!
Fixed. Fetch the latest changes. For more details, read the changelog on Discord.
I still face this issue with llama3-8B-Instruct-Q_8