
Output cutoff with ChatOpenAI

Anil-matcha opened this issue 1 year ago • 5 comments

With the newly released ChatOpenAI model, the completion output is being cut off randomly partway through.

For example, I used the input below:

Write me an essay on Pune

I got this output:

Pune, also known as Poona, is a city located in the western Indian state of Maharashtra. It is the second-largest city in the state and is often referred to as the "Oxford of the East" due to its reputation as a center of education and research. Pune is a vibrant city with a rich history, diverse culture, and a thriving economy.\n\nThe history of Pune dates back to the 8th century when it was founded by the Rashtrakuta dynasty. Over the centuries, it has been ruled by various dynasties, including the Marathas, the Peshwas, and the British. Pune played a significant role in India's struggle for independence, and many freedom fighters, including Mahatma Gandhi, spent time in the city.\n\nToday, Pune is a bustling metropolis with a population of over 3 million people. It is home to some of the most prestigious educational institutions in India, including the University of Pune, the Indian Institute of Science Education and Research, and the National Defense Academy. The city is also a hub for research and development, with many multinational companies setting up their research centers in Pune.\n\nPune is a city of contrasts, with modern skyscrapers standing alongside ancient temples and historical landmarks. The city's

As you can see, the message is cut off midway. I followed the official documentation here: https://github.com/hwchase17/langchain/blob/master/docs/modules/chat/getting_started.ipynb

This was not an issue with OpenAIChat, but with ChatOpenAI it is.

Anil-matcha avatar Mar 14 '23 05:03 Anil-matcha

I believe max_tokens defaults to 256 in ChatOpenAI (whereas it was unset in OpenAIChat).

You can try adjusting the parameter when you initialize the LLM:

from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(temperature=0, max_tokens=2056)

There is an open issue about allowing -1 to be passed so it defaults to the model's maximum: https://github.com/hwchase17/langchain/issues/1532

sbc-max avatar Mar 14 '23 13:03 sbc-max

@sbc-max What is the maximum value that can be passed as max_tokens to ChatOpenAI? Is it 2056?

Anil-matcha avatar Mar 14 '23 14:03 Anil-matcha

4096 https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens

sbc-max avatar Mar 14 '23 14:03 sbc-max

@sbc-max Thanks for the reference. Should I close this issue?

Anil-matcha avatar Mar 14 '23 14:03 Anil-matcha

Also experienced this. Setting max_tokens fixed the problem.

Adding a few words about it to the getting-started tutorial would help newcomers.

huerlisi avatar Mar 15 '23 12:03 huerlisi

When you set chat = ChatOpenAI(temperature=0, max_tokens=2056),

you may get an exception like: This model's maximum context length is 4097 tokens. However, you requested 6005 tokens (3949 in the messages, 2056 in the completion)

ahmed-bhs avatar Jul 07 '23 08:07 ahmed-bhs
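
The exception in the last comment arises because the prompt tokens and max_tokens must together fit within the model's context window (4097 tokens for gpt-3.5-turbo at the time), so a fixed max_tokens can overflow once the prompt grows. A minimal sketch of the budgeting arithmetic, assuming a crude 4-characters-per-token heuristic (use a real tokenizer such as tiktoken for accurate counts; the function names here are illustrative, not part of langchain):

```python
# Sketch: size max_tokens so prompt + completion fit in the context window.
CONTEXT_LIMIT = 4097  # gpt-3.5-turbo context window at the time of this thread

def estimate_tokens(text: str) -> int:
    """Rough token count (~4 characters per token heuristic)."""
    return max(1, len(text) // 4)

def safe_max_tokens(prompt: str, headroom: int = 50) -> int:
    """Largest completion budget that still fits, minus a safety margin."""
    return CONTEXT_LIMIT - estimate_tokens(prompt) - headroom

budget = safe_max_tokens("Write me an essay on Pune")
# The result could then be passed as max_tokens when constructing ChatOpenAI.
```

This keeps the requested completion budget dynamic instead of hard-coding 2056, which avoids the "maximum context length" error when the messages themselves are long.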