Accept message-like things in Chat models, LLMs and MessagesPlaceholder

Open nfcampos opened this issue 1 year ago • 3 comments

nfcampos avatar Jan 23 '24 02:01 nfcampos


PR Analysis

  • Main theme: Enhancing language model input handling
  • PR summary: This PR introduces the ability to accept message-like inputs for language models, chat models, and message placeholders, improving flexibility and input processing.
  • Type of PR: Enhancement
  • Score (0-100, higher is better): 85
  • Relevant tests added: No
  • Estimated effort to review (1-5, lower is better): 2. The changes are clear and well-structured, but require careful consideration of type handling and potential edge cases.

PR Feedback

  • General suggestions: The PR successfully introduces a more flexible input handling mechanism for language models, allowing for various message-like representations. However, it is crucial to ensure that type safety and input validation are maintained throughout the changes. Additionally, consider adding relevant unit tests to cover the new input formats, ensuring that they integrate seamlessly with existing functionalities. It's also important to evaluate the performance implications of these changes, as input processing is often a critical path in language model operations.

Code feedback

file: libs/core/langchain_core/language_models/base.py

  • Suggestions:
  1. Consider adding type checks or validation within the _convert_to_message function to ensure that the provided MessageLikeRepresentation adheres to the expected formats. [important] Relevant line: in libs/core/langchain_core/language_models/base.py at line 54
+LanguageModelInput = Union[PromptValue, str, Sequence[MessageLikeRepresentation]]

file: libs/core/langchain_core/language_models/chat_models.py

  • Suggestions:
  1. Ensure convert_to_messages handles all the edge cases, and document what happens if the conversion fails: what exception is raised, and is it caught and handled properly? [important] Relevant line: in libs/core/langchain_core/language_models/chat_models.py at line 145
+ return ChatPromptValue(messages=convert_to_messages(input))
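To make the failure mode concrete, here is a toy stand-in for convert_to_messages together with a caller that catches and re-wraps conversion errors. The exception type and the safe_convert helper are assumptions for illustration, not the library's actual behaviour:

```python
def convert_to_messages(values):
    # Toy stand-in: only bare strings are accepted here
    out = []
    for v in values:
        if isinstance(v, str):
            out.append(("human", v))
        else:
            raise ValueError(
                f"Cannot coerce element of type {type(v).__name__} to a message"
            )
    return out

def safe_convert(values):
    # Surface conversion failures with context rather than letting them
    # escape from deep inside prompt construction
    try:
        return convert_to_messages(values)
    except ValueError as exc:
        raise ValueError(f"Invalid chat model input {values!r}") from exc
```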

file: libs/core/langchain_core/language_models/llms.py

  • Suggestions:
  1. As with the suggestion for chat_models.py above, verify the exception handling around convert_to_messages in this file. [important] Relevant line: in libs/core/langchain_core/language_models/llms.py at line 215
+ return ChatPromptValue(messages=convert_to_messages(input))

file: libs/core/langchain_core/messages/__init__.py

  • Suggestions:
  1. Include unit tests to ensure that _convert_to_message handles all supported formats and raises appropriate exceptions for unsupported ones. [important] Relevant line: in libs/core/langchain_core/messages/__init__.py at line 117
+def _convert_to_message(

file: libs/core/langchain_core/prompts/chat.py

  • Suggestions:
  1. Optimize the format_messages method to avoid unnecessary conversions when the input is already of the correct type. [medium] Relevant line: in libs/core/langchain_core/prompts/chat.py at line 127
+ for v in convert_to_messages(value):
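One way the suggested fast path could look, using a stand-in Message type; the identity check avoids rebuilding a list that is already fully converted:

```python
from dataclasses import dataclass

@dataclass
class Message:
    # Stand-in for a fully converted message
    role: str
    content: str

def convert_to_messages(values):
    out = []
    for v in values:
        if isinstance(v, Message):
            out.append(v)                      # already converted
        elif isinstance(v, str):
            out.append(Message("human", v))    # bare string -> human message
        else:
            raise ValueError(f"unsupported: {v!r}")
    return out

def format_messages(value):
    # Fast path: a list that is already all Message objects needs no work
    if isinstance(value, list) and all(isinstance(v, Message) for v in value):
        return value
    return convert_to_messages(value)
```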

bitomukesh avatar Jan 23 '24 15:01 bitomukesh

PR Analysis

  • Main theme: Enhancing language model input flexibility
  • PR summary: This PR introduces changes to allow the language models within the LangChain library to accept a wider variety of message-like inputs, improving the flexibility and usability of the models.
  • Type of PR: Enhancement
  • Score (0-100, higher is better): 85
  • Relevant tests added: No
  • Estimated effort to review (1-5, lower is better): 2. The changes are isolated and the added functionality seems straightforward, but careful attention is needed to ensure correct type handling and conversion.

PR Feedback

  • General suggestions: The PR effectively broadens the input types that can be processed by the language models, which is a valuable enhancement. However, it is important to ensure that the new input types are thoroughly tested to prevent any type-related issues. The use of the convert_to_messages function seems to be a critical change and should be verified for all possible input scenarios to maintain robustness.

Code feedback

file: libs/core/langchain_core/messages/__init__.py

  • Suggestions:
  1. It's important to ensure that the new MessageLikeRepresentation type is fully compatible with all existing message processing functions. Since the convert_to_messages function is a significant addition that affects several parts of the system, it should be covered by comprehensive unit tests to validate its behavior with various input types. [important] Relevant line: in libs/core/langchain_core/messages/__init__.py at line 117
+MessageLikeRepresentation = Union[BaseMessage, Tuple[str, str], str]
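To illustrate the compatibility concern, a toy convert_to_messages that normalises each branch of this union to the same BaseMessage shape; both names are simplified stand-ins for the langchain_core definitions:

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass
class BaseMessage:
    # Simplified stand-in for langchain_core's BaseMessage
    type: str
    content: str

MessageLikeRepresentation = Union[BaseMessage, Tuple[str, str], str]

def convert_to_messages(values):
    """Normalise every supported branch of the union to BaseMessage."""
    out = []
    for v in values:
        if isinstance(v, BaseMessage):
            out.append(v)                        # passthrough, no copy
        elif isinstance(v, tuple):
            out.append(BaseMessage(v[0], v[1]))  # (role, content) pair
        elif isinstance(v, str):
            out.append(BaseMessage("human", v))  # bare string
        else:
            raise ValueError(f"unsupported message-like value: {v!r}")
    return out
```

Because every branch ends up as a BaseMessage, downstream code that expects a sequence of BaseMessage instances keeps working even with mixed-type input sequences.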

file: libs/core/langchain_core/language_models/base.py

  • Suggestions:
  1. Consider verifying that the LanguageModelInput type change does not introduce any regressions or unexpected behavior when interacting with existing code that expects a sequence of BaseMessage instances. This is particularly important since the type has been broadened to include MessageLikeRepresentation. [important] Relevant line: in libs/core/langchain_core/language_models/base.py at line 49
+LanguageModelInput = Union[PromptValue, str, Sequence[MessageLikeRepresentation]]

file: libs/core/langchain_core/language_models/chat_models.py

  • Suggestions:
  1. Ensure that the convert_to_messages function properly handles all types of input, especially when converting sequences that may contain mixed types. This is to maintain the integrity of the ChatPromptValue messages. [important] Relevant line: in libs/core/langchain_core/language_models/chat_models.py at line 144
+ return ChatPromptValue(messages=convert_to_messages(input))

file: libs/core/langchain_core/language_models/llms.py

  • Suggestions:
  1. Similar to the suggestion for chat_models.py, verify that convert_to_messages correctly converts all input types to maintain the integrity of the ChatPromptValue messages in the context of LLMs. [important] Relevant line: in libs/core/langchain_core/language_models/llms.py at line 210
+ return ChatPromptValue(messages=convert_to_messages(input))

file: libs/core/langchain_core/prompts/chat.py

  • Suggestions:
  1. Ensure that the convert_to_messages function is invoked with the correct assumptions about the input types, as it is now used in a loop that previously expected BaseMessage instances. [important] Relevant line: in libs/core/langchain_core/prompts/chat.py at line 126
+ for v in convert_to_messages(value):

bitomukesh avatar Jan 23 '24 15:01 bitomukesh

That's message prompt templates, this is messages

nfcampos avatar Jan 26 '24 17:01 nfcampos

> That's message prompt templates, this is messages

All the Union[str, tuple, dict, Message] -> (type, content/template) logic could be shared; you'd then pass the result to either a _prompt_from_type or a _message_from_type method. This can also be refactored later.

baskaryan avatar Jan 26 '24 20:01 baskaryan