danieldekay

Results 30 comments of danieldekay

One solution: setting `SMART_TOKEN_LIMIT=8000` prevented the broken-JSON effect. TODO: better handling of the error when a JSON just cannot be repaired by json_repair.
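A minimal sketch of the "better handling" idea, using only the stdlib (the naive bracket-closing fallback here is a stand-in for json_repair; `safe_parse` is a hypothetical helper, not part of gpt-researcher):

```python
import json


def safe_parse(raw: str):
    """Parse LLM output; fall back to a naive repair, then to None."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # Naive repair: close dangling brackets/braces.
    # A real implementation would call json_repair here instead.
    repaired = (
        raw
        + "]" * (raw.count("[") - raw.count("]"))
        + "}" * (raw.count("{") - raw.count("}"))
    )
    try:
        return json.loads(repaired)
    except json.JSONDecodeError:
        # Surface a clear failure instead of crashing downstream.
        return None
```

The point is the final branch: when even the repair fails, the caller gets an explicit `None` (or could raise a descriptive error) rather than a crash deep in the report pipeline.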

In my env these files are actually installed at the library root level, e.g. `backend` is next to `gpt-researcher` and not inside it.

@assafelovic - it's not just a naming issue. I have the case now that I am installing gpt-researcher via poetry from a git branch, and the backend is just not...

Yes, this installs again. Is there a reason we don't go to 0.3 yet?

```toml
[tool.poetry.dependencies]
python = ">=3.11,
```

I used the following prompt on jules.google.com letting it operate on my fork. Unfortunately that is probably not fully deterministically reproducible. Maybe Github Copilot could have done it, as well....

> Can you document it somewhere in the help files? It is good to have a snapshot, it is better to be able to update it from time to time....

Love the idea, as you can build out a knowledge base through various queries this way. It adds a bit more human-in-the-loop for complex topics. One use case could be...

Did you not get a "Failed to get response from {llm_provider} API" somewhere? If the model cannot be loaded, the error should surface elsewhere, e.g. in `create_chat_completion` in llm.py, if not...
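For illustration, a hedged sketch of the error path being discussed — retry the provider call, then raise the descriptive error instead of failing silently (the `provider_call` parameter and this exact signature are assumptions for the sketch, not the actual llm.py code):

```python
def create_chat_completion(messages, provider_call,
                           llm_provider="openai", retries=3):
    """Sketch: call the provider, retry on failure, then raise clearly."""
    last_err = None
    for _ in range(retries):
        try:
            return provider_call(messages)
        except Exception as err:  # provider/network failures
            last_err = err
    # Raise the descriptive error the comment refers to,
    # chaining the underlying cause for the trace.
    raise RuntimeError(
        f"Failed to get response from {llm_provider} API"
    ) from last_err
```

With this shape, a model that cannot be loaded at all would fail before this function is ever reached, which is why the error should show up elsewhere in the trace.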

Just an idea: Poetry could have different groups for different custom models, and the package installation could parse the .env file for the activated model and then include the corresponding...
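A minimal sketch of what such optional Poetry groups might look like in pyproject.toml (group and dependency names here are illustrative assumptions, not the project's actual layout):

```toml
[tool.poetry.group.ollama]
optional = true

[tool.poetry.group.ollama.dependencies]
langchain-ollama = "*"

[tool.poetry.group.google]
optional = true

[tool.poetry.group.google.dependencies]
langchain-google-genai = "*"
```

A user would then pull in only the provider they activated, e.g. `poetry install --with ollama`; the .env-driven selection would just decide which `--with` flag to pass.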

Can you be more specific about where this error comes from? Do you have a log file or trace to share?