                        OpenAI Cost calculation could have a bug!
System Info
I've run a prompt that said 1 + 1 = ? with my agent, and I've used get_openai_callback to show some metrics (see the image):

The LLM model used is GPT-3.5-turbo.
On the OpenAI website, the GPT-3.5-turbo model is priced at $0.002/1K tokens, which means my test prompt should cost:
2206 ÷ 1000 × 0.002 = 0.004412
The weird thing in my test is the cost: it shows $0.04412, not $0.004412.
It could be a bug. Any ideas?
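The expected cost above is easy to double-check with plain arithmetic (the $0.002/1K rate is the gpt-3.5-turbo pricing quoted at the time this issue was filed):

```python
# Expected cost for 2206 tokens at the advertised gpt-3.5-turbo rate
# of $0.002 per 1K tokens.
total_tokens = 2206
rate_per_1k = 0.002

expected_cost = total_tokens / 1000 * rate_per_1k
print(round(expected_cost, 6))  # → 0.004412
```

So the reported $0.04412 is exactly ten times the expected figure.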
Here is the code:
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    response = agent.run(prompt)
    # Show OpenAI Cost
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
Can anyone please explain what's going on? Thanks in advance.
Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
@agola11 @hwchase17
I've found the issue: I was using model_name instead of model.
-llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
+llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
With model_name, it fell back to the davinci model by default, which affected the cost ($0.02/1K tokens for davinci).
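This also explains the exact number I saw: at the davinci rate of $0.02/1K tokens (ten times the gpt-3.5-turbo rate), 2206 tokens come out to precisely the $0.04412 the callback reported:

```python
# The observed $0.04412 matches the davinci pricing exactly,
# which is consistent with the wrong model being used for costing.
total_tokens = 2206
davinci_rate_per_1k = 0.02

davinci_cost = total_tokens / 1000 * davinci_rate_per_1k
print(round(davinci_cost, 6))  # → 0.04412
```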
Haha, by the way, I've just received an email from OpenAI informing me of price changes. I've checked the website and it seems the pricing has changed, and new models have just been released :D
Hi, @medram! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you noticed a discrepancy in the cost calculation for the GPT-3.5-turbo model. It seems that using model_name instead of model resulted in the default davinci model being used, which affected the cost. You also mentioned receiving an email from OpenAI about price changes and new models being released.
Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository!