Standardize stop token and extra kwargs across all LLM wrappers
Would it make sense for stop to be an optional attribute on the LLMChain class? When I formulate a prompt, I might craft it such that "##" separates examples and is therefore a good stop token across all inputs. In that case, I don't want to have to call chain.run({...inputs..., stop='##'}) on every invocation; I'd rather instantiate the LLMChain with that stop token so it is applied automatically.
The generate method of LLMChain could then fall back to this attribute, and validate that any stop passed in the inputs matches it if both are specified. A rough sketch of that behavior is below.
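A minimal, self-contained sketch of how this could look, assuming a chain-level `stop` attribute and a per-call override (the class, field, and method names here are illustrative, not the actual LangChain API):

```python
from typing import Callable, List, Optional


class LLMChainWithStop:
    """Illustrative only: a chain that carries a default stop token.

    Not the real LangChain implementation; it just shows how a chain-level
    `stop` attribute could interact with a per-call override.
    """

    def __init__(
        self,
        llm: Callable[[str, Optional[List[str]]], str],
        prompt_template: str,
        stop: Optional[List[str]] = None,
    ):
        self.llm = llm                      # any callable: (prompt, stop) -> completion
        self.prompt_template = prompt_template
        self.stop = stop                    # chain-level default, e.g. ["##"]

    def _resolve_stop(self, call_stop: Optional[List[str]]) -> Optional[List[str]]:
        # Per-call stop wins; otherwise fall back to the chain-level default.
        # If both are given, require them to agree so conflicts surface early.
        if call_stop is not None and self.stop is not None and call_stop != self.stop:
            raise ValueError(
                f"stop {call_stop!r} conflicts with chain-level stop {self.stop!r}"
            )
        return call_stop if call_stop is not None else self.stop

    def run(self, stop: Optional[List[str]] = None, **inputs) -> str:
        prompt = self.prompt_template.format(**inputs)
        return self.llm(prompt, self._resolve_stop(stop))


if __name__ == "__main__":
    # Usage: set the stop token once at construction time; no per-call stop needed.
    fake_llm = lambda prompt, stop: f"(completion of {prompt!r}, stop={stop})"
    chain = LLMChainWithStop(fake_llm, "Q: {question}\nA:", stop=["##"])
    print(chain.run(question="What separates examples?"))  # ["##"] applied automatically
```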
Hi, @hwchase17! I'm here to help the LangChain team manage their backlog and I wanted to let you know that we are marking this issue as stale.
From what I understand, the issue you opened is about standardizing the stop token and extra keyword arguments across all language model wrappers. kahkeng commented suggesting that stop could be made an optional attribute on the LLMChain class, allowing a default stop token to be set at instantiation. This would simplify usage of the generate method.
I wanted to check with you if this issue is still relevant to the latest version of the LangChain repository. If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.
Thank you for your contribution to the LangChain repository!