
Need more robust information to explain why a response from the model was invalid, and to confirm I'm not stuck in an infinite loop

Open cafeTechne opened this issue 10 months ago • 7 comments

Describe the bug
After asking a question to intercede on the initial (unsatisfactory) plan, it seems as if I'm in some sort of loop:

    Invalid response from the model, trying again...
    24.04.02 10:50:32: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 71202}
    24.04.02 10:50:52: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 70863}
    Invalid response from the model, trying again...
    24.04.02 10:50:52: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 71760}
    24.04.02 10:51:08: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 71385}
    Invalid response from the model, trying again...
    24.04.02 10:51:08: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 72300}

To Reproduce
Steps to reproduce the behavior:

  1. Ask a question
  2. Ask another question when it is Active
  3. See above.

Expected behavior
I expect more console output explaining what is happening and when it is happening; a config option such as VERBOSE="true" would be useful.
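What I'm asking for could look something like the sketch below: a retry loop that is bounded instead of open-ended, and that logs why each attempt was rejected when a VERBOSE flag is set. This is purely illustrative; `query_with_retries`, `InvalidResponseError`, and the `VERBOSE` environment variable are hypothetical names, not Devika's actual code.

```python
import logging
import os

# Hypothetical verbosity switch: VERBOSE="true" enables debug-level output.
logging.basicConfig(
    level=logging.DEBUG if os.environ.get("VERBOSE") == "true" else logging.INFO
)
log = logging.getLogger("inference")


class InvalidResponseError(Exception):
    """Raised when the model's reply cannot be parsed."""


def query_with_retries(query_model, prompt, max_retries=3):
    """Call query_model(prompt), retrying a bounded number of times.

    Instead of retrying forever, raise InvalidResponseError after
    max_retries failures, logging the reason for each rejection.
    """
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            response = query_model(prompt)
            log.debug("attempt %d succeeded", attempt)
            return response
        except InvalidResponseError as e:
            last_error = e
            log.warning("attempt %d/%d invalid: %s", attempt, max_retries, e)
    raise InvalidResponseError(
        f"giving up after {max_retries} attempts: {last_error}"
    )
```

The key point is the hard cap: after `max_retries` failures the caller gets an exception with the last rejection reason attached, rather than an endless "trying again..." stream.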

Screenshots: [screenshot attached]

Desktop (please complete the following information):

  • OS: Ubuntu
  • Browser Firefox
  • Version [e.g. 22]

cafeTechne avatar Apr 02 '24 15:04 cafeTechne

ooof token usage :D

[screenshot attached]

sinanisler avatar Apr 02 '24 15:04 sinanisler

    24.04.03 00:43:51: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 291755}
    Invalid response from the model, trying again...
    24.04.03 00:43:51: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 297391}
    24.04.03 00:43:58: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 294960}
    Invalid response from the model, trying again...
    24.04.03 00:43:58: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 300647}
    24.04.03 00:44:05: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 297908}
    Invalid response from the model, trying again...
    24.04.03 00:44:05: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 303749}
    24.04.03 00:44:14: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 301448}
    Invalid response from the model, trying again...
    24.04.03 00:44:14: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 307070}
    24.04.03 00:44:23: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 305445}
    Invalid response from the model, trying again...
    24.04.03 00:44:23: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 310729}
    24.04.03 00:44:30: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 308266}
    Invalid response from the model, trying again...
    24.04.03 00:44:30: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 313969}
    24.04.03 00:44:41: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 313296}
    Invalid response from the model, trying again...
    24.04.03 00:44:41: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 318104}
    24.04.03 00:44:47: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 315421}
    Invalid response from the model, trying again...
    24.04.03 00:44:48: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 321234}
    24.04.03 00:44:54: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 318597}
    Invalid response from the model, trying again...
    24.04.03 00:44:54: root: INFO : SOCKET tokens MESSAGE: {'token_usage': 324387}

I'm also encountering the same problem. How can it be solved?

phcha avatar Apr 02 '24 16:04 phcha

Check your account with your LLM provider. They have hard limits on number of requests, context window, token usage, etc. Sometimes their error message in the API is cryptic like the one you're seeing. Also these LLMs go up and down randomly when heavily used during peak hours.
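One way to make those cryptic provider errors less opaque is to surface the raw error body from the API instead of a generic "Invalid response from the model" line. This is a hypothetical sketch using only the standard library; the URL and payload shape are illustrative, not any specific provider's API.

```python
import json
from urllib import request, error


def call_provider(url, payload, timeout=30):
    """POST a JSON payload to an LLM provider endpoint.

    On an HTTP error, raise RuntimeError carrying the provider's own
    message (rate limit, context overflow, quota exceeded, ...) so the
    real cause reaches the console instead of a generic retry line.
    """
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return json.load(resp)
    except error.HTTPError as e:
        body = e.read().decode(errors="replace")
        raise RuntimeError(
            f"provider returned {e.code}: {body[:500]}"
        ) from e
```

With this, a rate-limit rejection shows up as e.g. "provider returned 429: ..." rather than an unexplained retry.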

Whytehorse2022 avatar Apr 03 '24 19:04 Whytehorse2022

Which model are you using?

ARajgor avatar Apr 03 '24 19:04 ARajgor

I am using local Open LLaMA models. There is no token limit.

cafeTechne avatar Apr 04 '24 13:04 cafeTechne

Same error here.

alex89607 avatar Apr 08 '24 20:04 alex89607

same...

mast3rm1nd avatar Apr 22 '24 21:04 mast3rm1nd

Fixed. Fetch the latest changes; for more details, read the changelog on Discord.

ARajgor avatar Apr 25 '24 09:04 ARajgor