transformerlab-app
It's not clear how to start an inference server via the API
We have a /worker/start endpoint in the API, but it doesn't let you choose which inference engine to use or pass inference parameters to it.
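For illustration, here is a minimal sketch of what a request to /worker/start could look like if it accepted an engine choice and engine-specific parameters. Only the /worker/start path comes from the current API; the `engine` and `parameters` fields (and the base URL and example values) are hypothetical proposals, not the existing interface:

```python
# Hypothetical sketch: a /worker/start call that also specifies the
# inference engine and its parameters. The "engine" and "parameters"
# fields are proposed here for discussion -- they do not exist today.
import requests

BASE_URL = "http://localhost:8000"  # assumed local API address

resp = requests.post(
    f"{BASE_URL}/worker/start",
    json={
        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model id
        "engine": "vllm",             # proposed: which inference engine to launch
        "parameters": {               # proposed: engine-specific settings
            "max_model_len": 4096,
            "gpu_memory_utilization": 0.9,
        },
    },
)
resp.raise_for_status()
print(resp.json())
```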
It's also not clear what "model_filename" refers to in the app, since the API refers to a model's unique identifier in several different ways (uniqueId, filename, huggingface_id).
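To make the ambiguity concrete, here is a hypothetical example of the same model being identified three different ways. The field names come from the issue text above; the payload shapes and the idea that all three should point at the same model are assumptions for illustration:

```python
# All three hypothetical payloads are meant to point at the same model,
# but each uses a different identifier field -- exactly the ambiguity
# described above. These dicts are illustrative, not the real API.
same_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

payload_a = {"uniqueId": same_model}        # one part of the API
payload_b = {"model_filename": same_model}  # another part ("filename"?)
payload_c = {"huggingface_id": same_model}  # yet another

# A caller can't tell whether model_filename expects a local file name,
# the model's unique id, or the Hugging Face repo id.
```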