[WIP] LangChain Agent
Tracking issue
https://github.com/flyteorg/flyte/issues/3936
Example
```python
import os
from typing import Any, Union

from flytekit import workflow
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts.prompt import PromptTemplate
from langchain_openai import ChatOpenAI

api_key = os.environ.get("OPENAI_API_KEY")

model = ChatOpenAI(
    model="gpt-3.5-turbo",
    openai_api_key=api_key,
    openai_organization="org-NayNG68kGnVXMJ8Ak4PMgQv7",
)

prompt = PromptTemplate(
    input_variables=["question"],
    template="Question: {question}?",
)

output_parser = StrOutputParser()


@workflow
def wf(input: str) -> Union[str, Any]:
    message = prompt(input=input)
    o0 = model(input=message)
    o1 = output_parser(input=o0)
    return o1
```
Screenshots
local execution
remote execution
Details for discussion and review
LangChain LCEL
https://hackmd.io/@Future-Outlier/HykgW1P70
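For context, LCEL composes components like the ones in the example above with the `|` operator (`prompt | model | output_parser`). A minimal pure-Python sketch of that composition style (the `Runnable` class and all names here are stand-ins, not LangChain's actual API):

```python
class Runnable:
    """Toy stand-in for an LCEL runnable: wraps a function and supports
    `|` composition, where (a | b).invoke(x) == b.invoke(a.invoke(x))."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Runnable") -> "Runnable":
        # Compose left-to-right: feed this runnable's output into `other`.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)


prompt = Runnable(lambda q: f"Question: {q}?")
model = Runnable(lambda m: f"model-reply-to[{m}]")
output_parser = Runnable(str)

chain = prompt | model | output_parser
print(chain.invoke("hi"))  # model-reply-to[Question: hi?]
```

Mapping each pipeline stage to a flytekit task is essentially the same left-to-right data flow that the `wf` example expresses with explicit intermediate variables.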
LangChain Agent Roadmap
https://hackmd.io/J3erYppETQe0OigP74-uJw?view
Potential Problems
We need to support `Any -> str` conversion when the downstream input expects `str`, since we want `StrOutputParser` to produce the final string output.
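One possible shape for that coercion, as a sketch only (the `coerce_output` helper and the `.content` fallback are assumptions, not the agent's actual implementation):

```python
from typing import Any


def coerce_output(value: Any, expected: type) -> Any:
    """Hypothetical helper: coerce an Any-typed upstream output to str when
    the downstream input (e.g. StrOutputParser) expects a string."""
    if expected is str and not isinstance(value, str):
        # Chat model outputs often carry their text in a `.content` attribute.
        content = getattr(value, "content", None)
        if isinstance(content, str):
            return content
        return str(value)
    return value
```

With this, a chat-model message object passed to a `str`-typed input would be reduced to its text content instead of failing type resolution.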
How to set up LangChain dev mode to trace code in a debugger
reference: https://python.langchain.com/docs/contributing/code
- Install LangChain from GitHub:

```shell
git clone https://github.com/langchain-ai/langchain.git
cd langchain

cd libs/community
poetry install --with lint,typing,test,test_integration

cd ../core
poetry install --with test

cd ../experimental
poetry install --with test

cd ../partners/openai
poetry add --optional openai
```
- Check your dependencies:

```shell
$ pip list | grep -i lang
langchain              0.1.5    /Users/future-outlier/code/langchain/libs/langchain
langchain-community    0.0.19   /Users/future-outlier/code/langchain/libs/community
langchain-core         0.1.21   /Users/future-outlier/code/langchain/libs/core
langchain-experimental 0.0.50   /Users/future-outlier/code/langchain/libs/experimental
langchain-openai       0.0.5    /Users/future-outlier/code/langchain/libs/partners/openai
```
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 83.48%. Comparing base (fa2aa0b) to head (0980ad0). Report is 3 commits behind head on master.
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master    #2191      +/-   ##
==========================================
- Coverage   83.89%   83.48%   -0.42%
==========================================
  Files         342      324      -18
  Lines       25483    24716     -767
  Branches     3725     3516     -209
==========================================
- Hits        21380    20634     -746
+ Misses       3472     3450      -22
- Partials      631      632       +1
```
Current thought:
- A ChatGPT-like implementation in LangChain is definitely doable and might not be that hard.
- But that route requires supporting too much of LangChain's implementation surface.
- It is better to find a way to support an Airflow-like implementation.
Current Test Example

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
import os

from flytekit import task, workflow

# before
prompt = ChatPromptTemplate.from_template("{topic}")
# print(prompt)

# after
# prompt = ChatPromptTemplate(task_id="tmp")
# template = prompt.from_template("{topic}")
"""
<class 'tuple'> <class 'dict'>
args: (<class 'langchain_core.prompts.prompt.PromptTemplate'>,)
args len: 1
kwargs: {'input_variables': ['topic'], 'template': '{topic}', 'template_format': 'f-string', 'partial_variables': {}}
kwargs len: 4
"""

# model = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key=os.environ.get('OPENAI_API_KEY'))
"""
<class 'tuple'> <class 'dict'>
args: (<class 'langchain_openai.chat_models.base.ChatOpenAI'>,)
args len: 1
kwargs: {'model': 'gpt-3.5-turbo', 'openai_api_key': 'xxx'}
kwargs len: 2
"""

# output_parser = StrOutputParser()
"""
<class 'tuple'> <class 'dict'>
args: (<class 'langchain_core.output_parsers.string.StrOutputParser'>,)
args len: 1
kwargs: {}
kwargs len: 0
"""


@workflow
def wf(input: str) -> str:
    p = prompt(input=input)
    model = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key=os.environ.get('OPENAI_API_KEY'))
    output = model(input=p)
    return output


if __name__ == "__main__":
    print(wf(input="hi"))
```
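The `(class, kwargs)` dumps in the docstrings above look like the result of intercepting constructor calls. A minimal sketch of such an interceptor (the `trace_init` decorator, the `records` list, and the local `StrOutputParser` stand-in are all hypothetical, not the agent's code):

```python
import functools

records = []  # each entry mirrors the dumps above: (args tuple, kwargs dict)


def trace_init(cls):
    """Hypothetical debug decorator: record (class, kwargs) for every
    construction of `cls`, similar to the args/kwargs dumps above."""
    original_init = cls.__init__

    @functools.wraps(original_init)
    def traced_init(self, *args, **kwargs):
        # Prepend the class itself, matching the `args: (<class ...>,)` shape.
        records.append(((cls,) + args, kwargs))
        original_init(self, *args, **kwargs)

    cls.__init__ = traced_init
    return cls


@trace_init
class StrOutputParser:  # stand-in, not the real LangChain class
    def __init__(self):
        pass


StrOutputParser()
print(records)  # one entry: ((<class ...StrOutputParser'>,), {})
```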
I have made some notes on Heptabase; if someone can help, I can send you my notes. Thank you.
Current Working Version
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
import os
from typing import Any

from flytekit import workflow

api_key = os.environ.get('OPENAI_API_KEY')
model = ChatOpenAI(model="gpt-3.5-turbo", openai_api_key=api_key)
output_parser = StrOutputParser()


@workflow
def wf() -> str:
    o = model(input="hello")
    return output_parser(input=o)


if __name__ == '__main__':
    print(wf())
```
Update: maybe it will be doable if we monkey patch the class, but not the method.
https://stackoverflow.com/questions/3765222/monkey-patch-python-class
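As a sketch of that class-level patch idea (all names here are stand-ins for the real LangChain components, and `fake_module` simulates the third-party module we would patch):

```python
import types

captured = []  # records the original class and constructor kwargs


class PromptTemplate:
    """Stand-in for a class living in a third-party module."""

    def __init__(self, template: str):
        self.template = template


# Pretend this namespace is the third-party module we want to patch.
fake_module = types.SimpleNamespace(PromptTemplate=PromptTemplate)


class TrackedPromptTemplate(PromptTemplate):
    """Drop-in replacement that records constructor kwargs before deferring
    to the original class's __init__."""

    def __init__(self, template: str):
        captured.append((PromptTemplate, {"template": template}))
        super().__init__(template)


# Monkey patch the class, not a method: rebind the module attribute so
# every later construction goes through the subclass.
fake_module.PromptTemplate = TrackedPromptTemplate

p = fake_module.PromptTemplate(template="{topic}")
```

Patching at the class level keeps the original methods untouched, so instances still behave like the real class while construction is observable.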