Outline where in the Engine class you can access OpenAI API usage information
Sample Code:
from guidance import models, gen
model = models.OpenAI("gpt-3.5-turbo", api_key=openai_key)
model += "I am sending a request" + gen(name="return_val")
print(model.__dict__.keys())
print(model.engine.__dict__.keys())
Is your feature request related to a problem? Please describe. I'm trying to determine whether Guidance is cheaper than direct prompting, but the actual OpenAI call is hidden inside the model Engine's logic, I believe here (for example): https://github.com/guidance-ai/guidance/blob/main/guidance/models/_openai.py#L158
I want to access the response.usage dict that OpenAI returns after a prompt is sent. I see that guidance's model.engine._token_ids is returned, but it's unclear what this actually contains, and I also see model.engine._token_count, but this doesn't seem to be the number of tokens in the prompt.
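For comparison, this is what the direct path exposes. A minimal sketch, assuming the openai>=1.0 Python client and the same openai_key variable as in the sample code above:

from openai import OpenAI

# Plain client call: every non-streaming response carries a usage block
# with prompt_tokens, completion_tokens, and total_tokens.
client = OpenAI(api_key=openai_key)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "I am sending a request"}],
)
print(response.usage)

This per-request information is what I'd like to be able to read back out of the Guidance wrapper.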
Describe the solution you'd like I'd like it to be clearer how to access usage-related information through the Guidance wrapper code around the OpenAI API.
Describe alternatives you've considered I could fork the Guidance repo and inspect the generator object myself to verify whether it has the usage information, then expose it as a class variable, but I'd rather the library surface this directly.
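One interim workaround along these lines, sketched below as a hypothetical (it is not part of guidance's API and depends on the internals of the openai>=1.0 Python client), is to patch the client's chat-completion method so every request guidance sends also records the usage block OpenAI returns; it only captures usage for non-streaming responses:

from openai.resources.chat.completions import Completions

usage_log = []  # accumulated usage, one dict per completed request

_original_create = Completions.create

def _create_with_usage(self, *args, **kwargs):
    # Forward the call, then record the usage block if the response has one
    # (streaming responses yield chunks and may not carry usage).
    response = _original_create(self, *args, **kwargs)
    usage = getattr(response, "usage", None)
    if usage is not None:
        usage_log.append(usage.model_dump())
    return response

Completions.create = _create_with_usage

After running the sample code above, sum(u["total_tokens"] for u in usage_log) would approximate the tokens billed across the non-streaming requests, but having the library expose this would be much better.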
Hi @lashmore, apologies for the delay here, but we're working on adding a new token counting API that should hopefully expose token metrics in more detail :). We'll leave this issue open in the interim while we develop the feature.