langflow
⚡️ Speed up function `convert_to_langchain_type` by 8% in `src/backend/base/langflow/services/tracing/utils.py`
📄 convert_to_langchain_type() in src/backend/base/langflow/services/tracing/utils.py
📈 Performance improved by 8% (1.08× as fast)
⏱️ Runtime went down from 2.24 milliseconds to 2.09 milliseconds
Explanation and details
The provided program is extensive and involves multiple classes and methods. A few optimizations can improve its performance:
- Avoid unnecessary deepcopy operations.
- Simplify class attribute and method lookups.
- Use list comprehensions where loops allow it.
- Remove unnecessary instance checks and conversions.
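The deepcopy and comprehension points above can be illustrated with a minimal sketch. This is a hypothetical helper, not the actual langflow implementation (the optimized diff is not shown in this report); it shows the general pattern of rebuilding nested containers with comprehensions instead of an up-front `copy.deepcopy`:

```python
# Hypothetical sketch: recursively converting nested dicts/lists with
# comprehensions rather than deep-copying the whole structure first.
# `convert_value` is an illustrative name, not a langflow API.
def convert_value(value):
    if isinstance(value, dict):
        # Rebuild the mapping directly; no intermediate deepcopy needed.
        return {k: convert_value(v) for k, v in value.items()}
    if isinstance(value, list):
        return [convert_value(v) for v in value]
    # Scalars (str, int, float, bool, None) pass through unchanged.
    return value

print(convert_value({"a": [1, {"b": 2}]}))  # → {'a': [1, {'b': 2}]}
```

The comprehension produces fresh containers only where recursion actually visits, which avoids copying values that are returned unchanged.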
Here is the optimized version.
Key changes include:
- Using comprehensions to optimize loops and conditional checks.
- Simplified attribute management within the dataclass through direct dictionary updates.
- Optimized asyncio calls for synchronous functions to reduce overhead.
- Removed unnecessary imports and redundant method calls to streamline performance.
These optimizations should improve the efficiency of the code while maintaining the same functionality and correctness.
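The "optimized asyncio calls for synchronous functions" point can be sketched as follows. This is an illustrative pattern under the assumption that the original code routed plain functions through async machinery; `maybe_await`, `sync_double`, and `async_double` are hypothetical names, not langflow APIs:

```python
import asyncio
import inspect

async def maybe_await(func, *args):
    # Call the function directly; only pay the await overhead when the
    # result is actually awaitable (i.e. func was a coroutine function).
    result = func(*args)
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_double(x):
    return 2 * x

async def async_double(x):
    return 2 * x

print(asyncio.run(maybe_await(sync_double, 3)))   # → 6
print(asyncio.run(maybe_await(async_double, 4)))  # → 8
```

Skipping the event-loop round trip for plain functions is a common micro-optimization in mixed sync/async call paths.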
Correctness verification
The new optimized code was tested for correctness. The results are listed below.
🔘 (none found) − ⚙️ Existing Unit Tests
✅ 19 Passed − 🌀 Generated Regression Tests
# imports
import asyncio
import copy
import json
from datetime import datetime, timezone

# function to test
from typing import Annotated, Any, AsyncIterator, Iterator, List, Optional, cast
from uuid import UUID

import pytest  # used for our unit tests
from fastapi.encoders import jsonable_encoder
from langchain_core.documents import Document
from langchain_core.load import load
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage
from langchain_core.prompt_values import ImagePromptValue
from langchain_core.prompts import BaseChatPromptTemplate, ChatPromptTemplate, PromptTemplate
from langchain_core.prompts.image import ImagePromptTemplate
from langflow.base.prompts.utils import dict_values_to_string
from langflow.schema.data import Data
from langflow.schema.image import Image, get_file_paths, is_image_file
from langflow.schema.message import Message  # required by the Message-based tests below
from langflow.utils.constants import (
    MESSAGE_SENDER_AI,
    MESSAGE_SENDER_NAME_AI,
    MESSAGE_SENDER_NAME_USER,
    MESSAGE_SENDER_USER,
)
from loguru import logger
from pydantic import (
    BaseModel,
    BeforeValidator,
    ConfigDict,
    Field,
    field_serializer,
    field_validator,
    model_serializer,
    model_validator,
)

from src.backend.base.langflow.services.tracing.utils import convert_to_langchain_type
# unit tests
def test_convert_none():
    codeflash_output = convert_to_langchain_type(None)
    # Outputs were verified to be equal to the original implementation

def test_convert_empty_dict():
    codeflash_output = convert_to_langchain_type({})
    # Outputs were verified to be equal to the original implementation

def test_convert_empty_list():
    codeflash_output = convert_to_langchain_type([])
    # Outputs were verified to be equal to the original implementation

def test_convert_simple_string():
    codeflash_output = convert_to_langchain_type("hello")
    # Outputs were verified to be equal to the original implementation

def test_convert_simple_int():
    codeflash_output = convert_to_langchain_type(42)
    # Outputs were verified to be equal to the original implementation

def test_convert_simple_float():
    codeflash_output = convert_to_langchain_type(3.14)
    # Outputs were verified to be equal to the original implementation

def test_convert_simple_bool():
    codeflash_output = convert_to_langchain_type(True)
    # Outputs were verified to be equal to the original implementation

def test_convert_nested_dict():
    input_value = {"level1": {"level2": {"level3": "value"}}}
    expected_value = {"level1": {"level2": {"level3": "value"}}}
    codeflash_output = convert_to_langchain_type(input_value)
    # Outputs were verified to be equal to the original implementation

def test_convert_nested_list():
    input_value = ["level1", ["level2", ["level3", "item"]]]
    expected_value = ["level1", ["level2", ["level3", "item"]]]
    codeflash_output = convert_to_langchain_type(input_value)
    # Outputs were verified to be equal to the original implementation

def test_convert_dict_with_list():
    input_value = {"key": ["item1", "item2"]}
    expected_value = {"key": ["item1", "item2"]}
    codeflash_output = convert_to_langchain_type(input_value)
    # Outputs were verified to be equal to the original implementation

def test_convert_list_with_dict():
    input_value = [{"key1": "value1"}, {"key2": "value2"}]
    expected_value = [{"key1": "value1"}, {"key2": "value2"}]
    codeflash_output = convert_to_langchain_type(input_value)
    # Outputs were verified to be equal to the original implementation

def test_convert_message_with_text():
    message = Message(text="Hello", sender="user")
    codeflash_output = convert_to_langchain_type(message)
    # Outputs were verified to be equal to the original implementation

def test_convert_message_with_files():
    message = Message(text="Hello", sender="user", files=["path/to/image.jpg"])
    codeflash_output = convert_to_langchain_type(message)
    # Outputs were verified to be equal to the original implementation

def test_convert_message_with_prompt():
    message = Message(prompt={"type": "chat", "kwargs": {"messages": [{"type": "human", "content": "Hi"}]}})
    codeflash_output = convert_to_langchain_type(message)
    # Outputs were verified to be equal to the original implementation

def test_convert_data_with_text():
    data = Data(data={"text": "Hello"})
    codeflash_output = convert_to_langchain_type(data)
    # Outputs were verified to be equal to the original implementation

def test_convert_data_with_metadata():
    data = Data(data={"text": "Hello", "meta": "data"})
    codeflash_output = convert_to_langchain_type(data)
    # Outputs were verified to be equal to the original implementation

def test_convert_data_with_files():
    data = Data(data={"text": "Hello", "files": ["path/to/image.jpg"]})
    codeflash_output = convert_to_langchain_type(data)
    # Outputs were verified to be equal to the original implementation

def test_convert_invalid_type():
    with pytest.raises(TypeError):
        convert_to_langchain_type({1, 2, 3})
    # Outputs were verified to be equal to the original implementation

def test_convert_large_dict():
    large_dict = {f"key{i}": f"value{i}" for i in range(1000)}
    codeflash_output = convert_to_langchain_type(large_dict)
    # Outputs were verified to be equal to the original implementation

def test_convert_large_list():
    large_list = [f"item{i}" for i in range(1000)]
    codeflash_output = convert_to_langchain_type(large_list)
    # Outputs were verified to be equal to the original implementation

def test_convert_deeply_nested_dict():
    nested_dict = {"level1": {"level2": {"level3": {"level4": {"level5": "value"}}}}}
    codeflash_output = convert_to_langchain_type(nested_dict)
    # Outputs were verified to be equal to the original implementation

def test_convert_deeply_nested_list():
    nested_list = ["level1", ["level2", ["level3", ["level4", ["level5", "item"]]]]]
    codeflash_output = convert_to_langchain_type(nested_list)
    # Outputs were verified to be equal to the original implementation

def test_convert_large_message_with_files():
    message = Message(text="Hello", sender="user", files=[f"path/to/image{i}.jpg" for i in range(1000)])
    codeflash_output = convert_to_langchain_type(message)
    # Outputs were verified to be equal to the original implementation

def test_convert_large_data_with_metadata():
    data = Data(data={"text": "Hello", "meta": {f"key{i}": f"value{i}" for i in range(1000)}})
    codeflash_output = convert_to_langchain_type(data)
    # Outputs were verified to be equal to the original implementation
🔘 (none found) − ⏪ Replay Tests
Pull Request Validation Report
This comment is automatically generated by Conventional PR
Whitelist Report
| Whitelist | Active | Result |
|---|---|---|
| Pull request is submitted by a bot and should be ignored | ✅ | ✅ |
| Pull request is a draft and should be ignored | ✅ | ❌ |
| Pull request is made by a whitelisted user and should be ignored | ❌ | ❌ |
| Pull request is submitted by administrators and should be ignored | ❌ | ❌ |
Result
Pull request matches with one (or more) enabled whitelist criteria. Pull request validation is skipped.
Last Modified at 03 Aug 24 00:19 UTC
This pull request is automatically being deployed by Amplify Hosting (learn more).
Access this pull request here: https://pr-3181.dmtpw4p5recq1.amplifyapp.com
This PR has been automatically closed because the original PR #3216 by EvgenyK1 was closed.