[BUG] Langchain RunnableParallel has no attribute "steps"
Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the issues policy
Where did you encounter this bug?
Local machine
Willingness to contribute
Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
MLflow version
- Client: 2.12.1
- Tracking server: 2.12.1
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu
- Python version: Python 3.10.12
- yarn version, if running the dev UI:
Describe the problem
Trying to log a `RunnableParallel` in LangChain results in an error.

Workaround: in `mlflow.langchain.runnables`, inside `_save_runnable_with_steps`, change

```python
steps = model.steps
```

to

```python
if hasattr(model, "steps"):
    steps = model.steps
else:
    steps = model.steps__
```

This makes saving the chain possible. Alternatively, one could check whether the model is an instance of `RunnableParallel` and use `steps__` if that's the case.
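The fallback in the workaround can be illustrated with plain Python. The classes below are hypothetical stand-ins for LangChain runnables (`RunnableSequence` exposes a `steps` attribute, while `RunnableParallel` only exposes `steps__`), so the sketch runs without LangChain installed:

```python
class FakeSequence:
    """Stand-in for RunnableSequence, which exposes `steps`."""

    def __init__(self, steps):
        self.steps = steps


class FakeParallel:
    """Stand-in for RunnableParallel, which exposes `steps__` only."""

    def __init__(self, steps):
        self.steps__ = steps


def get_steps(model):
    """Return the runnable's steps, falling back to `steps__`
    when a `steps` attribute is absent (as on RunnableParallel)."""
    if hasattr(model, "steps"):
        return model.steps
    return model.steps__


print(get_steps(FakeSequence(["prompt", "llm"])))       # ['prompt', 'llm']
print(get_steps(FakeParallel({"question": "pass"})))    # {'question': 'pass'}
```

With this helper, both runnable shapes resolve to their step mapping instead of raising `AttributeError`.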
Tracking information
Tracking URI: file:///home/user/mlruns
Artifact URI: mlflow-artifacts:/0/81284ebe1b844f2bab7b990aad40e276/artifacts
System information: Linux #1 SMP Fri Jan 27 02:56:13 UTC 2023
Python version: 3.10.12
MLflow version: 2.12.1
MLflow module location: .venv/lib/python3.10/site-packages/mlflow/__init__.py
Active experiment ID: 0
Active run ID: 81284ebe1b844f2bab7b990aad40e276
Active run artifact URI: mlflow-artifacts:/0/81284ebe1b844f2bab7b990aad40e276/artifacts
MLflow dependencies:
Flask: 3.0.3
Jinja2: 3.1.3
aiohttp: 3.9.3
alembic: 1.13.1
click: 8.1.7
cloudpickle: 3.0.0
docker: 7.0.0
entrypoints: 0.4
gitpython: 3.1.43
graphene: 3.3
gunicorn: 21.2.0
importlib-metadata: 7.1.0
markdown: 3.6
matplotlib: 3.8.4
numpy: 1.26.4
packaging: 23.2
pandas: 2.2.1
protobuf: 4.25.3
pyarrow: 15.0.2
pydantic: 2.7.0
pytz: 2024.1
pyyaml: 6.0.1
querystring-parser: 1.2.4
requests: 2.31.0
scikit-learn: 1.4.2
scipy: 1.13.0
sqlalchemy: 2.0.29
sqlparse: 0.4.4
tiktoken: 0.6.0
virtualenv: 20.25.1
Code to reproduce issue
```python
import mlflow
import mlflow.langchain
from langchain.llms.fake import FakeListLLM
from langchain.schema.runnable.passthrough import RunnablePassthrough
from langchain.schema.output_parser import StrOutputParser
from langchain.prompts import PromptTemplate

llm = FakeListLLM(responses=["not relevant"])
chain = (
    {"question": RunnablePassthrough()}
    | PromptTemplate.from_template("hello {question}")
    | llm
    | StrOutputParser()
)
chain.invoke("Hey")

# mlflow.set_tracking_uri("http://127.0.0.1:5000")
mlflow.langchain.log_model(
    chain,
    artifact_path="model",
)
```
Stack trace
```
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home//.vscode-server/extensions/ms-python.debugpy-2024.4.0/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
    cli.main()
  File "/home//.vscode-server/extensions/ms-python.debugpy-2024.4.0/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
    run()
  File "/home//.vscode-server/extensions/ms-python.debugpy-2024.4.0/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/home//.vscode-server/extensions/ms-python.debugpy-2024.4.0/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/home//.vscode-server/extensions/ms-python.debugpy-2024.4.0/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/home//.vscode-server/extensions/ms-python.debugpy-2024.4.0/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "/home//projects//minimal.py", line 23, in <module>
    mlflow.langchain.log_model(
  File "/home//projects//.venv/lib/python3.10/site-packages/mlflow/langchain/__init__.py", line 519, in log_model
    return Model.log(
  File "/home//projects//.venv/lib/python3.10/site-packages/mlflow/models/model.py", line 625, in log
    flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs)
  File "/home//projects//.venv/lib/python3.10/site-packages/mlflow/langchain/__init__.py", line 311, in save_model
    model_data_kwargs = _save_model(lc_model, path, loader_fn, persist_dir)
  File "/home//projects//.venv/lib/python3.10/site-packages/mlflow/langchain/__init__.py", line 549, in _save_model
    return _save_runnables(model, path, loader_fn=loader_fn, persist_dir=persist_dir)
  File "/home//projects//.venv/lib/python3.10/site-packages/mlflow/langchain/runnables.py", line 401, in _save_runnables
    _save_runnable_with_steps(
  File "/home//projects//.venv/lib/python3.10/site-packages/mlflow/langchain/runnables.py", line 322, in _save_runnable_with_steps
    raise MlflowException(f"Failed to save runnable sequence: {unsaved_runnables}.")
mlflow.exceptions.MlflowException: Failed to save runnable sequence: {'0': "steps__={'question': RunnablePassthrough()} -- 'RunnableParallel' object has no attribute 'steps'"}.
```
Other info / logs
What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [X] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
This should be fixed by https://github.com/mlflow/mlflow/pull/11774
I found the same problem.
@mlflow/mlflow-team Please assign a maintainer and start triaging this issue.