
StreamResponseEnumerableFromChatbotAsync Usage is null

Open 62316e opened this issue 2 years ago • 3 comments

MostRecentApiResult is not null, but MostRecentApiResult.Usage is null in this specific case. Other endpoints return a Usage object. Please advise.

62316e avatar May 02 '23 21:05 62316e

Exactly the same here: MostRecentApiResult.Usage is null when using StreamResponseEnumerableFromChatbotAsync.

jsalichos avatar May 20 '23 20:05 jsalichos

I've run into this limitation as well. As a workaround, you can count the tokens in your conversation yourself using https://github.com/dluc/openai-tools: prompt tokens are everything before the new message, and completion tokens are everything after it. Admittedly there may be some invisible prefixing on the prompt, so the count may not be perfectly accurate, but it should be close. You can cross-reference your results against what you see on the OpenAI billing dashboard.
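The accounting scheme behind this workaround can be sketched language-agnostically. The Python below is a minimal illustration, not part of either library: the function name is hypothetical, the encoder is pluggable (in practice you would pass a real BPE tokenizer's encode function; here a whitespace splitter stands in so the snippet runs standalone), and the default per-message overheads follow OpenAI's published token-counting guidance.

```python
# Hypothetical sketch of per-conversation token accounting.
# `encode` maps a string to a list of tokens; a real tokenizer
# (e.g. tiktoken's encode) would be passed in practice.

def count_conversation_tokens(messages, encode,
                              tokens_per_message=3, tokens_per_name=1,
                              reply_priming=3):
    """messages: list of dicts with 'content' and an optional 'name'."""
    total = 0
    for m in messages:
        total += tokens_per_message            # per-message framing overhead
        total += len(encode(m["content"]))     # content tokens
        if m.get("name"):
            total += tokens_per_name           # extra overhead for named messages
    total += reply_priming                     # every reply is primed with assistant
    return total

# Usage with a stand-in tokenizer (one token per whitespace-separated word):
msgs = [{"content": "Hello there"}, {"content": "General Kenobi"}]
print(count_conversation_tokens(msgs, str.split))  # -> 13
```

The overhead constants vary by model family, which is exactly what the SharpToken-based helper in the next comment handles.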

ZakFahey avatar Jul 01 '23 19:07 ZakFahey

I'm using SharpToken:

public static int NumTokensFromMessages(OpenAI_API.Chat.Conversation convo)
{
    var model = convo.Model.ModelID;

    // Normalize snapshot names (e.g. "gpt-3.5-turbo-0613") to their base model
    // and pick the per-message overheads published for each model family.
    int tokensPerMessage;
    int tokensPerName;
    if (model.StartsWith("gpt-3.5-turbo"))
    {
        model = "gpt-3.5-turbo";
        tokensPerMessage = 4;
        tokensPerName = -1; // if there's a name, the role token is omitted
    }
    else if (model.StartsWith("gpt-4"))
    {
        model = "gpt-4";
        tokensPerMessage = 3;
        tokensPerName = 1;
    }
    else
    {
        throw new NotImplementedException($"NumTokensFromMessages() is not implemented for model {model}.");
    }

    GptEncoding encoding = GptEncoding.GetEncodingForModel(model);

    int numTokens = 0;
    foreach (var message in convo.Messages)
    {
        numTokens += tokensPerMessage;
        numTokens += encoding.Encode(message.Content).Count;
        if (message.Name != null)
        {
            numTokens += tokensPerName;
        }
    }
    numTokens += 3; // every reply is primed with assistant
    return numTokens;
}

62316e avatar Jul 01 '23 19:07 62316e