Investigate options to apply MLEM models within Airflow DAGs
As discussed with @mnrozhkov and @tapadipti, it would be great if MLEM could somehow simplify model application within Airflow DAGs. Two questions to start with:
- We can create something like MLEMOperator. What would its functionality be? How would it help users?
- We need to either build a virtual environment or a Docker image to apply the model in the required environment. Two options to provide those: build them in CI, or build them as a task in the same DAG. We need to explore these options and find out how MLEM can simplify this for users. Note: if you run multiple workers, it may be beneficial to build the env in advance. If you have one worker, you may be OK with building it while running MLEMOperator.
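To make the MLEMOperator idea concrete, here is a minimal sketch. The class name, its parameters, and the fallback `BaseOperator` stand-in are all assumptions for illustration; only `mlem.api.apply` is an actual MLEM API call:

```python
try:
    from airflow.models import BaseOperator  # real Airflow, if installed
except ImportError:  # minimal stand-in so the sketch runs without Airflow
    class BaseOperator:
        def __init__(self, task_id=None, **kwargs):
            self.task_id = task_id


class MLEMOperator(BaseOperator):
    """Hypothetical operator: apply a saved MLEM model to a dataset.

    `apply_fn` defaults to `mlem.api.apply`; it is injectable here so the
    sketch can be exercised without MLEM installed on the scheduler.
    """

    def __init__(self, *, model_uri, data_uri, apply_fn=None, **kwargs):
        super().__init__(**kwargs)
        self.model_uri = model_uri
        self.data_uri = data_uri
        self._apply_fn = apply_fn

    def execute(self, context=None):
        if self._apply_fn is None:
            from mlem.api import apply  # real MLEM call (assumed signature)
            self._apply_fn = apply
        # The returned value lands in XCom, so downstream tasks can use it.
        return self._apply_fn(self.model_uri, self.data_uri)
```

Whether the env build happens before this task or inside it is exactly the CI-vs-DAG-task question above; the operator itself stays the same either way.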
Other notes:
- Sometimes data is huge and you need to process it in chunks (this may or may not be the case with pyspark; without pyspark it can be too hard to fit all data in RAM). We need some way to resolve this, e.g. iterate over batches and then compile an answer containing predictions from all batches.
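The batch idea in the note above can be sketched in plain Python; the function name is illustrative, not a MLEM API:

```python
def apply_in_batches(predict, rows, batch_size):
    """Apply `predict` to `rows` one batch at a time and merge the results.

    Avoids materializing the full dataset in RAM at once; `predict` is any
    callable that maps a list of rows to a list of predictions (e.g. a
    loaded model's predict method).
    """
    predictions = []
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            predictions.extend(predict(batch))
            batch = []
    if batch:  # trailing partial batch
        predictions.extend(predict(batch))
    return predictions
```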
- Usually, your DAG = processing + scoring. Roughly, in 25% of cases you load data from disk; in another 50% you work with big data (pyspark, Hadoop); in the last 25% you work with distributed computing (Spark).
DAGs example https://gitlab.com/iterative.ai/cse/use_cases/home_credit_default/-/blob/airflow-dags/dags/scoring.py Showcase of different options: https://gitlab.com/iterative.ai/cse/rnd/deploy-to-airflow
Summary from Mikhail https://iterativeai.slack.com/archives/C0249LW0HAQ/p1631885782026400
For the reference, there is Airflow extension that adds support for DVC operations: https://github.com/covid-genomics/airflow-dvc
UPD: we can take a look at other orchestration tools (dagster, prefect, etc.)
Posting here a short discussion with @mike0sv:
- `mlem apply model hdfs://…` should work already thanks to fsspec (need to check this, though)
- if map-reduce saved many files that should be merged, the above command may not work. We may implement a special `Reader` in MLEM that will receive many artifacts and merge them into a single dataframe before `apply` (or will output each file as a batch)
- to `apply` the models in Spark, we can create a UDF for MLEM models
- we can think about supporting `sparkml` in MLEM
- we can implement export to a Spark UDF model that will start `spark` and run the data through it when you do `mlem apply spark-udf-model …` (could have a wow-effect)
- with Hadoop we can implement `Reader`s as well, that will start `spark` and read data in batches
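The multi-artifact `Reader` idea could start from something like this; `read_parts` is a hypothetical name, and a real implementation would go through fsspec so `hdfs://` URIs also work, but the sketch below sticks to the standard library:

```python
import csv
import glob


def read_parts(pattern):
    """Merge many part files (e.g. map-reduce `part-*` output) into one
    list of rows, so they can be handed to apply as a single dataset.

    A real MLEM Reader would use fsspec to resolve remote URIs and could
    alternatively yield each file as a batch instead of merging.
    """
    rows = []
    for path in sorted(glob.glob(pattern)):  # sort for deterministic order
        with open(path, newline="") as f:
            rows.extend(csv.reader(f))
    return rows
```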
To my mind, we need to get a working demo as simple as possible first. If `mlem apply model hdfs://…` works, nice: this can be used. If not, downloading the data as CSV and using `mlem apply model data.csv` can work as well. @mnrozhkov mentioned they have a project with batch scoring in GitLab; we can start with using MLEM there.
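The Spark UDF idea from the discussion above could start from a wrapper like this. `make_mlem_udf` is a hypothetical name; lazy loading keeps the model out of the pickled closure so each executor loads it once, and only `mlem.api.load` is assumed to be a real call:

```python
def make_mlem_udf(model_uri, load_fn=None):
    """Return a plain Python function suitable for registration as a Spark UDF.

    The model is loaded lazily on first call, so it is not serialized with
    the closure when Spark ships the function to executors. `load_fn`
    defaults to `mlem.api.load` and is injectable for testing.
    """
    state = {}

    def predict_one(*features):
        if "model" not in state:
            if load_fn is None:
                from mlem.api import load  # real MLEM call, on the executor
                state["model"] = load(model_uri)
            else:
                state["model"] = load_fn(model_uri)
        # Predict a single row; with Spark you would register this via
        # spark.udf.register("mlem_predict", predict_one, <return type>).
        return state["model"].predict([list(features)])[0]

    return predict_one
```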
One issue I found: when you want to build a Docker image for the batch scoring scenario, you absolutely need to specify the `--server` option. Why? Let's either make `fastapi` the default so people won't need to confuse themselves about this, or allow building without any server to skip installing extra dependencies (like FastAPI) that people may not need.
UPD: it was an issue with my local MLEM installation. We don't need to specify `--server`; by default it's FastAPI. It may still be good to give an option to skip installing it, but this is not a priority now.
For reference, here is a product similar to MLEM that exports to Airflow pipelines. We need to take a deeper look at it: https://docs.lineapy.org/en/latest/guide/build_pipelines/pipeline_basics.html
For now we are building Docker images only for serving, so the server option is required and the run command is `mlem serve`.
That can be changed, but first we need to understand what it means to build a Docker image for batch scoring.