autolabel
Modify get_cost to account for prompt caching
OpenAI enables prompt caching by default for gpt-4o-2024-08-06 and gpt-4o-mini-2024-07-18. When part of a prompt hits the cache, OpenAI automatically applies a 50% discount to those cached prompt tokens. This change adjusts get_cost to account for that discount.
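A minimal sketch of the adjustment, not autolabel's actual implementation: the helper name and per-token prices below are illustrative assumptions. OpenAI reports the cached portion of the prompt in the API response under `usage.prompt_tokens_details.cached_tokens`, and bills those tokens at 50% of the normal input rate.

```python
# Illustrative per-million-token input prices (USD); assumed values,
# not read from autolabel's config.
INPUT_PRICE_PER_M = {
    "gpt-4o-2024-08-06": 2.50,
    "gpt-4o-mini-2024-07-18": 0.15,
}
CACHED_DISCOUNT = 0.5  # cached prompt tokens are billed at 50% of the input rate


def get_input_cost(model: str, prompt_tokens: int, cached_tokens: int) -> float:
    """Input-token cost with the prompt-caching discount applied.

    cached_tokens is a subset of prompt_tokens, so the uncached portion
    is billed at full price and the cached portion at half price.
    """
    price = INPUT_PRICE_PER_M[model] / 1_000_000
    uncached = prompt_tokens - cached_tokens
    return uncached * price + cached_tokens * price * CACHED_DISCOUNT
```

With no cache hit the cost reduces to the original full-price formula; with every prompt token cached, the input cost is exactly halved.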
Testing: Tested using model gpt-4o-2024-08-06