
Investigate options to apply MLEM models within Airflow DAGs

Open · aguschin opened this issue 2 years ago • 6 comments

As discussed with @mnrozhkov and @tapadipti, it would be great if MLEM could somehow simplify model application within Airflow DAGs. Two questions to start with:

  1. We can create something like MLEMOperator. What would its functionality be? How would it help users? (A sketch of what it could look like follows after this list.)
  2. We need to either build a virtual environment or a Docker image to apply the model in the required environment. Two options are to do this in CI or as a task in the same DAG. We need to explore both and figure out how MLEM can simplify this for users. Note: if you run multiple workers, it may be beneficial to build the environment in advance; if you have one worker, building it while running MLEMOperator may be fine.
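A rough sketch of what such an operator could look like, purely as a discussion starter. MLEMOperator does not exist today; the class name, its parameters, and the paths below are all assumptions, and it simply wraps mlem.api.apply:

```python
# Hypothetical MLEMOperator: not part of MLEM or Airflow today.
from airflow.models import BaseOperator
from mlem.api import apply


class MLEMOperator(BaseOperator):
    """Apply a saved MLEM model to a dataset as a single Airflow task."""

    def __init__(self, model_path, data_path, output_path,
                 method="predict", **kwargs):
        super().__init__(**kwargs)
        self.model_path = model_path
        self.data_path = data_path
        self.output_path = output_path
        self.method = method

    def execute(self, context):
        # mlem.api.apply loads the model and the data, runs the chosen
        # method, and saves the predictions to `output`.
        apply(self.model_path, self.data_path,
              method=self.method, output=self.output_path)


# Usage inside a DAG (paths are illustrative):
# MLEMOperator(task_id="score", model_path="models/clf",
#              data_path="data/features.csv", output_path="data/preds")
```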

Other notes:

  1. Sometimes the data is huge and you need to process it in chunks (this may or may not be the case with pyspark; without pyspark it can be too hard to fit all the data in RAM). We need some way to handle this, e.g. iterate over batches and then compile an answer containing the predictions from all batches (see the sketch after this list).
  2. Usually, your DAG = processing + scoring. Roughly, in 25% of cases you load data from disk; in another 50% you work with big data (pyspark, Hadoop); in the last 25% you work with distributed computing (Spark).
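A minimal sketch of the batch-iteration idea from note 1, assuming a CSV too large for RAM and a sklearn-like model; the model path, data path, and chunk size are made up for illustration:

```python
import pandas as pd
from mlem.api import load

model = load("models/clf")  # MLEM returns the plain Python model object
chunks = pd.read_csv("data/big.csv", chunksize=100_000)

# Score each chunk independently, then concatenate the per-batch
# predictions into a single answer.
preds = pd.concat(
    pd.DataFrame({"prediction": model.predict(chunk)}) for chunk in chunks
)
preds.to_csv("data/predictions.csv", index=False)
```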

DAG example: https://gitlab.com/iterative.ai/cse/use_cases/home_credit_default/-/blob/airflow-dags/dags/scoring.py
Showcase of different options: https://gitlab.com/iterative.ai/cse/rnd/deploy-to-airflow

Summary from Mikhail: https://iterativeai.slack.com/archives/C0249LW0HAQ/p1631885782026400

aguschin avatar Sep 10 '21 12:09 aguschin

For reference, there is an Airflow extension that adds support for DVC operations: https://github.com/covid-genomics/airflow-dvc

aguschin avatar Oct 19 '21 06:10 aguschin

UPD: we can take a look at other orchestration tools (Dagster, Prefect, etc.)

aguschin avatar Oct 06 '22 04:10 aguschin

Posting here a short discussion with @mike0sv:

  • mlem apply model hdfs://… should already work thanks to fsspec (needs verification, though)
  • if map-reduce saved many files that should be merged, the above command may not work. We could implement a special Reader in MLEM that receives many artifacts and merges them into a single dataframe before apply (or outputs each file as a batch)
  • to apply the models in Spark, we can create a UDF for MLEM models (see the sketch after this list)
  • we can think about supporting sparkml in MLEM
  • we can implement export to a Spark UDF model that will start Spark and run the data through it when you do mlem apply spark-udf-model … (could have a wow effect)
  • with Hadoop we can implement Readers as well, which would start Spark and read the data in batches
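To make the UDF idea concrete, here is a minimal sketch of wrapping a MLEM model as a Spark pandas UDF by hand, assuming a running Spark session and a saved sklearn-like model; the model path, data path, and feature column names f1/f2 are hypothetical:

```python
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType
from mlem.api import load

spark = SparkSession.builder.getOrCreate()
model = load("models/clf")  # MLEM returns the plain Python model object


@pandas_udf(DoubleType())
def predict_udf(f1: pd.Series, f2: pd.Series) -> pd.Series:
    # Each call receives a batch of rows as pandas Series, so the model
    # scores whole batches rather than one row at a time.
    features = pd.DataFrame({"f1": f1, "f2": f2})
    return pd.Series(model.predict(features))


df = spark.read.parquet("hdfs://cluster/path/features")
scored = df.withColumn("prediction", predict_udf("f1", "f2"))
```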

To my mind, we need to get a working demo that is as simple as possible first. If mlem apply model hdfs://… works, great: we can use it. If not, downloading the data as a CSV and using mlem apply model data.csv can work as well. @mnrozhkov mentioned they have a project with batch scoring in GitLab; we can start by using MLEM there.
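For illustration, the fallback flow expressed in Python, assuming mlem apply model hdfs://… is not usable yet and that mlem.api.apply accepts paths the way the CLI does; the HDFS URL, local paths, and model name are made up:

```python
import fsspec
from mlem.api import apply

# Step 1: pull the data out of HDFS into a local CSV.
with fsspec.open("hdfs://cluster/path/data.csv") as remote:
    with open("data.csv", "wb") as local:
        local.write(remote.read())

# Step 2: the Python equivalent of `mlem apply model data.csv`.
apply("model", "data.csv", method="predict", output="predictions")
```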

aguschin avatar Oct 15 '22 04:10 aguschin

One issue I found: when you want to build a Docker image for the batch scoring scenario, you absolutely need to specify the --server option. Why? Let's either make fastapi the default so people don't get confused about this, or allow building without any server to skip installing extra dependencies like FastAPI that people may not need.

UPD: it was an issue with my local MLEM installation. We don't need to specify --server; by default it's fastapi. It might still be good to give an option to skip installing it, but this is not a priority now.

aguschin avatar Oct 17 '22 04:10 aguschin

For reference, here is a product similar to MLEM that exports to Airflow pipelines. We need to take a deeper look at it: https://docs.lineapy.org/en/latest/guide/build_pipelines/pipeline_basics.html

aguschin avatar Oct 20 '22 07:10 aguschin

For now we are building Docker images only for serving, so the server option is required and the run command is mlem serve. That can be changed, but first we need to understand what it means to build a Docker image for batch scoring.

mike0sv avatar Oct 27 '22 17:10 mike0sv