To create a public link, set share=True in launch().
Files before translation: ['compressed-200.pdf', 'Mark Levine - The Jazz Piano Book-3-10.pdf']
{'files': ['pdf2zh_files\compressed-200.pdf'], 'pages': None, 'lang_in': 'en', 'lang_out': 'zh', 'service': 'google', 'output': WindowsPath('pdf2zh_files'), 'thread': 4, 'callback': <function translate_file.<locals>.progress_bar at 0x00000234858AD9D0>}
100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.81s/it]
Traceback (most recent call last):
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\blocks.py", line 1935, in process_api
result = await self.call_function(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\anyio\_backends\_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\anyio\_backends\_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\gradio\utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\gui.py", line 165, in translate_file
translate(**param)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\high_level.py", line 278, in translate
s_mono, s_dual = translate_stream(s_raw, **locals())
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\high_level.py", line 213, in translate_stream
obj_patch: dict = translate_patch(fp, **locals())
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\high_level.py", line 148, in translate_patch
interpreter.process_page(page)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\pdfinterp.py", line 266, in process_page
ops_new = self.device.end_page(page)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\converter.py", line 56, in end_page
return self.receive_layout(self.cur_item)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\converter.py", line 224, in receive_layout
or vflag(child.fontname, child.get_text()) # 3. formula font
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python39\lib\site-packages\pdf2zh\converter.py", line 175, in vflag
font = font.decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcb in position 0: invalid continuation byte
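The error is raised because the font name bytes extracted from the PDF are not valid UTF-8 (byte 0xcb cannot start a UTF-8 continuation sequence), yet converter.py calls font.decode() with the default codec. As a minimal sketch of a defensive workaround (not the actual pdf2zh fix; safe_decode is a hypothetical helper), one can fall back to a single-byte encoding instead of crashing:

    def safe_decode(raw, encodings=("utf-8", "latin-1")):
        """Decode font-name bytes, falling back when the data is not UTF-8.

        PDF font names (e.g. from CJK or symbol fonts) are not guaranteed
        to be valid UTF-8, which is what triggers the UnicodeDecodeError
        above; latin-1 can decode any byte sequence.
        """
        if isinstance(raw, str):  # already decoded, nothing to do
            return raw
        for enc in encodings:
            try:
                return raw.decode(enc)
            except UnicodeDecodeError:
                continue
        # last resort: never raise, substitute undecodable bytes
        return raw.decode("utf-8", errors="replace")

With this helper, safe_decode(b"\xcb...") returns a string instead of aborting the whole translation run.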
Thank you for reporting this issue.
It seems likely that the problem lies in the version of the OpenAI Python library you have installed. I tested a similar setup using the following versions:
• Python: 3.12.6
• langchain-openai: 0.2.14
• openai: 1.58.1
Here’s the code I used for testing:
    from langchain_openai import ChatOpenAI

    chat_model = ChatOpenAI(
        model_name="gpt-4o",
        max_completion_tokens=10,
        openai_api_key="your_key",
    )

    response = chat_model.invoke("Hello, how are you?")
    print(response)
The call to the OpenAI client is done here in the LangChain codebase: https://github.com/langchain-ai/langchain/blob/ccf69368b424acf65644a9ed0a1fb9058a5e2a8d/libs/partners/openai/langchain_openai/chat_models/base.py#L717
In this test, the payload being sent to OpenAI contained:
    {
        "messages": [...],
        "model": "gpt-4o",
        "stream": False,
        "n": 1,
        "temperature": 0.7,
        "max_completion_tokens": 10
    }
This payload worked correctly without any errors.
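As an illustration only (build_request_body is a hypothetical helper, not the actual LangChain internals), the forwarding behavior the payload above demonstrates can be thought of as assembling a request body from only the keyword arguments that were actually set:

    def build_request_body(model, messages, **params):
        # Hypothetical sketch: collect only the parameters that were set,
        # so max_completion_tokens reaches the request body exactly as
        # passed, and the legacy max_tokens key is simply absent.
        body = {"model": model, "messages": messages, "stream": False, "n": 1}
        body.update({k: v for k, v in params.items() if v is not None})
        return body

    body = build_request_body(
        "gpt-4o",
        [{"role": "user", "content": "Hello, how are you?"}],
        temperature=0.7,
        max_completion_tokens=10,
        max_tokens=None,  # legacy name, deliberately unset
    )
    # body contains max_completion_tokens but no max_tokens key

This matches what I observed: with the versions above, max_completion_tokens arrives in the payload unchanged.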
Could you double-check the version of the OpenAI Python library in your environment? Specifically, ensure you are using openai==1.58.1.
If you confirm that you’re using the correct version and the issue persists, please share additional details about your setup or any modifications you might have made to the code. Maybe the OpenAI lib uses different parameters depending on the model you use?
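To make the version check easy, here is a small sketch (check_openai is a hypothetical helper, not part of LangChain or the OpenAI SDK) that reads the installed version and compares it against the one I tested with:

    from importlib.metadata import PackageNotFoundError, version

    def version_tuple(v: str) -> tuple:
        # "1.58.1" -> (1, 58, 1); ignores any non-numeric suffixes
        return tuple(int(p) for p in v.split(".") if p.isdigit())

    def check_openai(minimum: str = "1.58.1") -> bool:
        try:
            installed = version("openai")
        except PackageNotFoundError:
            print("openai is not installed")
            return False
        ok = version_tuple(installed) >= version_tuple(minimum)
        print(f"openai {installed} ({'OK' if ok else 'older than ' + minimum})")
        return ok

Running check_openai() in your environment and pasting the output here would help narrow this down.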
Hi, @Armasse. I'm Dosu, and I'm helping the LangChain team manage their backlog. I'm marking this issue as stale.
Issue Summary:
- You reported a bug with the max_completion_tokens parameter in the ChatOpenAI() function.
- QuentinFuxa suggested the issue might be related to the OpenAI Python library version.
- QuentinFuxa tested with specific versions and found no errors, recommending openai version 1.58.1.
Next Steps:
- Please confirm if this issue is still relevant with the latest LangChain version. If so, feel free to comment to keep the discussion open.
- If there are no updates, this issue will be automatically closed in 7 days.
Thank you for your understanding and contribution!