Triton ensemble export
Exports Triton configs and scripted models, as well as an ensemble config.
- Train a Ludwig model:

```python
from ludwig.api import LudwigModel
from ludwig.datasets import titanic

training_set, test_set, _ = titanic.load(split=True)
model = LudwigModel(config="./titanic.yaml")
train_stats, preprocessed_data, output_directory = model.train(
    training_set=training_set,
    test_set=test_set,
    experiment_name="simple_experiment",
    model_name="simple_model",
    skip_save_processed_input=True,
)
```
- To export the models and configs to a Triton-compliant structure:

```python
export_triton(model, data_example, output_path, model_name, model_version, device, device_count)
```
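For concreteness, a filled-in call might look like the sketch below. The import path `ludwig.utils.triton_utils` and all argument values (including using a few rows of the training set as `data_example`) are assumptions, not taken from this PR; check the actual signature in your Ludwig version.

```python
# Hypothetical usage sketch -- import path and argument values are assumptions.
from ludwig.utils.triton_utils import export_triton

export_triton(
    model,                               # the trained LudwigModel from the step above
    data_example=training_set.head(10),  # small sample batch used to script the model
    output_path="./model_repository",    # root of the Triton model repository
    model_name="simple_model",
    model_version=1,
    device="cpu",
    device_count=1,
)
```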
- Find the exported models under `model_repository/`.
Unit Test Results

6 files ±0 · 6 suites ±0 · ⏱ 2h 44m 41s (−3m 47s)
2 966 tests ±0 · 2 914 ✔️ (−3) · 52 💤 (+3) · 0 ❌ (±0)
8 898 runs ±0 · 8 706 ✔️ (−9) · 192 💤 (+9) · 0 ❌ (±0)

Results for commit e34d411a. ± Comparison against base commit f654591b.
- The preprocessor and postprocessor are always exported with `max_batch_size = 0`. For the preprocessor, this is because Triton cannot batch certain input types, such as strings. There is no specific reason behind setting `max_batch_size = 0` for the postprocessor.
- The default for the predictor is `max_batch_size = 1` with dynamic batching enabled. In the future, we can have other defaults based on feature types and model size.
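To illustrate what these defaults mean at the Triton level, the exported `config.pbtxt` files might contain fragments like the following. The names, backend, and field values here are illustrative assumptions, not copied from the exporter's actual output.

```
# preprocessor/config.pbtxt (illustrative)
name: "simple_model_preprocessor"
backend: "pytorch"
max_batch_size: 0    # no batching: Triton cannot batch inputs like strings

# predictor/config.pbtxt (illustrative)
name: "simple_model_predictor"
backend: "pytorch"
max_batch_size: 1    # default predictor batch size
dynamic_batching { } # let Triton group requests server-side
```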