Fix prompt length calculation
The current `_buildMessages()` calculates the prompt token count incorrectly: there's a small discrepancy with the `usage` amount reported by ChatGPT (in non-stream mode).
I fixed it by borrowing the logic from here.
Our estimated `numTokens` should now equal `message.detail.usage.prompt_tokens`.
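For reference, here's a minimal sketch of the cookbook-style counting logic this fix follows. It assumes the `js-tiktoken` package; the helper name, message shape, and per-model constants are illustrative, not the chatgpt package's actual code:

```ts
import { encodingForModel, type TiktokenModel } from "js-tiktoken";

interface ChatMessage {
  role: string;
  content: string;
  name?: string;
}

// Hypothetical helper mirroring the OpenAI cookbook's
// num_tokens_from_messages; not the package's actual API.
function numTokensFromMessages(
  messages: ChatMessage[],
  model: TiktokenModel = "gpt-3.5-turbo"
): number {
  const enc = encodingForModel(model);

  // Per-message overhead differs by model (cookbook values:
  // gpt-3.5-turbo-0301 uses 4 / -1, gpt-4 uses 3 / 1).
  const isGpt35 = model.startsWith("gpt-3.5");
  const tokensPerMessage = isGpt35 ? 4 : 3; // <|start|>{role}\n{content}<|end|>\n
  const tokensPerName = isGpt35 ? -1 : 1; // for 0301, a name replaces the role

  let numTokens = 0;
  for (const message of messages) {
    numTokens += tokensPerMessage;
    numTokens += enc.encode(message.role).length;
    numTokens += enc.encode(message.content).length;
    if (message.name) {
      numTokens += enc.encode(message.name).length + tokensPerName;
    }
  }
  // Every reply is primed with <|start|>assistant<|message|>.
  return numTokens + 3;
}
```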
It would be better to check the model first and then calculate; gpt-3.5 and gpt-4 seem to count tokens differently.
I also made this change in #546, but your code looks better 😀
After testing, at least with gpt-3.5, `tokens_per_message` is 5. I think there's one `\n` that isn't being counted.
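If that measurement holds, the gpt-3.5 per-message constant in the sketch above would become 5. A hypothetical tweak, not verified against every model snapshot:

```ts
// Empirical adjustment per the comment above: gpt-3.5 lines up with
// 5 tokens of overhead per message, plausibly one uncounted "\n".
function tokensPerMessage(model: string): number {
  return model.startsWith("gpt-3.5") ? 5 : 3;
}
```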
Is there anyone who can review or merge this PR?
This project is undergoing a major revamp; closing out old PRs as part of the prep process.
Sorry I never got around to reviewing this PR. The `chatgpt` package is pretty outdated at this point. I recommend that you use the `openai` package or the `openai-fetch` package.