
Triton ensemble export

abidwael opened this issue 3 years ago

Exports Triton configs and scripted models, as well as an ensemble config.

  1. Train a Ludwig model:
from ludwig.api import LudwigModel
from ludwig.datasets import titanic

training_set, test_set, _ = titanic.load(split=True)
model = LudwigModel(config="./titanic.yaml")

train_stats, preprocessed_data, output_directory = model.train(training_set=training_set,
                                                               test_set=test_set,
                                                               experiment_name="simple_experiment",
                                                               model_name="simple_model",
                                                               skip_save_processed_input=True)
  2. Export the models and configs to a Triton-compliant structure:
export_triton(model, data_example, output_path, model_name, model_version, device, device_count)
  3. Find the exported models under model_repository/.
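For reference, a Triton model repository follows the layout sketched below. The specific directory and file names here are illustrative; the actual names are derived from the `model_name` and `model_version` arguments passed to `export_triton`:

```
model_repository/
└── <model_name>/
    ├── config.pbtxt          # Triton model configuration
    └── <model_version>/
        └── model.pt          # TorchScript model exported by Ludwig
```

Triton requires the `config.pbtxt` at the root of each model directory and one numbered subdirectory per model version.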

abidwael avatar Jul 11 '22 01:07 abidwael

Unit Test Results

       6 files ±0   6 suites ±0   2h 44m 41s :stopwatch: −3m 47s
       2 966 tests ±0   2 914 :heavy_check_mark: −3   52 :zzz: +3   0 :x: ±0
       8 898 runs ±0   8 706 :heavy_check_mark: −9   192 :zzz: +9   0 :x: ±0

Results for commit e34d411a. ± Comparison against base commit f654591b.

:recycle: This comment has been updated with latest results.

github-actions[bot] avatar Jul 11 '22 02:07 github-actions[bot]

  • The preprocessor and postprocessor are always exported with max_batch_size = 0. For the preprocessor, this is because Triton cannot batch certain input types, such as strings. There is no specific reason behind setting max_batch_size = 0 for the postprocessor.
  • The default for the predictor is max_batch_size = 1 with dynamic batching enabled. In the future, we can have other defaults based on feature types and model size.
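To make the batching defaults above concrete, here is a minimal sketch of what a Triton `config.pbtxt` for the predictor might look like, assuming a TorchScript backend (the model name is hypothetical; input/output tensor declarations are omitted for brevity):

```
# Illustrative predictor config (names are assumptions, not the exported output)
name: "simple_model_predictor"
platform: "pytorch_libtorch"
max_batch_size: 1
dynamic_batching { }
```

Setting `max_batch_size: 0` instead, as done for the preprocessor and postprocessor, disables Triton-side batching entirely, which is necessary when inputs include non-batchable types like strings.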

abidwael avatar Aug 09 '22 03:08 abidwael