transformerlab-app
Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
The MLX plugin, for example, installs pip dependencies. If you delete the transformerlab conda environment and then re-install, the MLX plugin will think it is installed but it won't be...
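One way to avoid trusting a stale "installed" marker is to verify that the plugin's pip dependencies are actually importable in the current environment. A minimal sketch (the function name and module list are illustrative, not the app's actual code):

```python
import importlib.util

def plugin_deps_installed(required_modules):
    """Return True only if every required module can actually be imported
    in the current environment, rather than trusting a marker file that
    survives a conda environment re-install."""
    return all(importlib.util.find_spec(m) is not None for m in required_modules)
```

Running this at plugin load time would catch the case where the conda environment was recreated but the plugin's marker still says "installed".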
Investigate user al-sabr's report: > The HG download from the UI is not resuming when the wifi gets disconnected which is pain in the eye and there is no status...
We have a /worker/start endpoint in the API but it doesn't allow you to set the inference engine or inference parameters. It's also not clear what "model_filename" refers to in...
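To make the ask concrete, an extended `/worker/start` request body could look something like the following. Every field name here is a hypothetical proposal, not the current API:

```python
# Illustrative payload for a richer /worker/start endpoint.
payload = {
    "model_name": "mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx",
    # model_filename would then be documented as: only required for
    # single-file model formats (e.g. GGUF), otherwise None.
    "model_filename": None,
    "inference_engine": "mlx_server",  # which inference plugin to launch
    "inference_params": {
        "temperature": 0.7,
        "max_tokens": 1024,
    },
}
```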
Jobs can create artifacts that need to be kept but don't have a place to go, e.g.: output logs from training jobs currently get stored in the plugin directory...
Model: mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx Dataset: samsum Plugin: mlx-lora-trainer `Loading pretrained model Fetching 7 files: 0%| | 0/7 [00:00`...
log attached [message.txt](https://github.com/transformerlab/transformerlab-app/files/14482240/message.txt)
An empty conversation with no system prompt already starts at 500+ tokens in the counter
For most users it would be much easier to just configure epochs + batch size and auto-calculate the number of iterations from the size of the training data. # GPT-4 solution:...
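The conversion is straightforward: steps per epoch is the example count divided by the batch size (rounded up), and total iterations is that times the number of epochs. A sketch:

```python
import math

def iterations_from_epochs(num_examples, epochs, batch_size):
    """Derive the trainer's iteration count from the user-facing
    epochs + batch-size settings and the dataset size."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    return epochs * steps_per_epoch
```

The UI could then show the computed iteration count read-only, while advanced users retain a manual override.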
Currently exported models just append the export format to the original model ID (e.g. Mistral-7B-Instruct-v0.2 - MLX). It would be better if they included any quantization in the name, as most...
When creating a training template, add a configuration setting that decides whether to fuse the LoRA adapter into the base model after training, instead of always doing...
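In the trainer, this becomes a single branch on a per-template flag. A sketch, where `fuse_adapter` is a hypothetical setting name and the actual fusing would be delegated to the plugin (e.g. MLX's fuse step):

```python
def post_training_step(config, adapter_path, base_model_path):
    """Decide what the training run should produce, based on a
    hypothetical per-template `fuse_adapter` flag (default: keep
    the adapter separate)."""
    if config.get("fuse_adapter", False):
        # The plugin would fuse base_model_path + adapter_path here.
        return ("fuse", base_model_path, adapter_path)
    return ("keep_adapter", adapter_path)
```

Keeping the adapter separate is cheaper on disk and lets one base model serve many adapters; fusing produces a standalone model, so both are useful defaults depending on the workflow.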