
Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.

Results: 54 transformerlab-app issues, sorted by recently updated

The MLX plugin, for example, installs pip dependencies. If you delete the transformerlab conda environment and then re-install it, the MLX plugin will still think it is installed, but it won't be...
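One possible fix, sketched below under the assumption that each plugin declares its pip dependencies somewhere (the `PLUGIN_REQUIREMENTS` mapping and the module names are illustrative, not the actual Transformer Lab code): check that the required modules are actually importable instead of trusting a stored "installed" flag.

```python
# Hypothetical sketch: verify that a plugin's pip dependencies are really importable
# instead of trusting a previously recorded "installed" flag.
import importlib.util

# Illustrative mapping; the real plugin metadata may look different.
PLUGIN_REQUIREMENTS = {"mlx-lora-trainer": ["mlx", "mlx_lm"]}

def plugin_dependencies_present(plugin_name: str) -> bool:
    """Return True only if every required module can be found in the current environment."""
    required = PLUGIN_REQUIREMENTS.get(plugin_name, [])
    return all(importlib.util.find_spec(mod) is not None for mod in required)

if not plugin_dependencies_present("mlx-lora-trainer"):
    print("Plugin is marked installed, but its dependencies are missing; reinstall needed.")
```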

bug
good first issue

Investigate user al-sabr's report: > The Hugging Face download from the UI does not resume when the wifi disconnects, which is painful, and there is no status...

bug

We have a /worker/start endpoint in the API but it doesn't allow you to set the inference engine or inference parameters. It's also not clear what "model_filename" refers to in...
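For illustration only, a richer request might look like the sketch below; the `inference_engine` and `inference_params` fields and the local URL are assumptions about the proposed change, not the current API.

```python
# Hedged sketch of a richer /worker/start request; field names and the URL are assumptions.
import requests

payload = {
    "model_name": "mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx",
    "inference_engine": "mlx",                                   # proposed: pick the engine explicitly
    "inference_params": {"temperature": 0.7, "max_tokens": 512},  # proposed: pass engine settings
}
resp = requests.post("http://localhost:8338/worker/start", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```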

bug

Jobs can create artifacts that need to be kept but don't have a place to go. E.g.:
- output logs from training jobs are currently stored in the plugin directory...

enhancement

Model: mlx-community/TinyDolphin-2.8-1.1b-4bit-mlx
Dataset: samsum
Plugin: mlx-lora-trainer
`Loading pretrained model Fetching 7 files: 0%| | 0/7 [00:00`...

bug

log attached [message.txt](https://github.com/transformerlab/transformerlab-app/files/14482240/message.txt)

bug

An empty conversation without any system prompt already starts at 500+ tokens in the counter. ![image](https://github.com/transformerlab/transformerlab-app/assets/149181237/6de3d7f2-628d-4817-93d2-6f2a8532e277)

bug

For most users it would be much easier to just configure epochs + batch size and auto-calculate the number of iterations from the size of the training data. # GPT-4 solution:...
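A minimal sketch of the calculation, assuming a plain training loop with one optimizer step per batch and no gradient accumulation:

```python
# Minimal sketch: derive the iteration count from epochs, batch size, and dataset size.
import math

def iterations_from_epochs(num_training_examples: int, batch_size: int, epochs: int) -> int:
    steps_per_epoch = math.ceil(num_training_examples / batch_size)
    return steps_per_epoch * epochs

# e.g. 10,000 training examples with batch size 4 for 3 epochs -> 7,500 iterations
print(iterations_from_epochs(10_000, 4, 3))
```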

enhancement

Currently exported models just append the export format to the original model ID (e.g. Mistral-7B-Instruct-v0.2 - MLX). It'd be better if they included any quantization in the name, as most...
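As a sketch of the suggested naming scheme (the helper and its fields are illustrative, not the existing export code):

```python
# Illustrative helper for building an exported model name that includes quantization.
def exported_model_name(base_id: str, export_format: str, quant_bits: int | None = None) -> str:
    quant = f" {quant_bits}bit" if quant_bits else ""
    return f"{base_id} - {export_format}{quant}"

print(exported_model_name("Mistral-7B-Instruct-v0.2", "MLX", 4))
# -> Mistral-7B-Instruct-v0.2 - MLX 4bit
```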

bug

When creating a training template, add a configuration setting to decide whether to fuse the adapter/LoRA into the base model after training, instead of always doing...
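A rough sketch of how such a setting could flow through, assuming a hypothetical `fuse_adapter_after_training` key on the training template and a placeholder fuse step (neither is existing Transformer Lab or MLX code):

```python
# Hypothetical template flag and placeholder fuse step; names are illustrative only.
def fuse_adapter(base_model: str, adapter_path: str) -> None:
    # Placeholder for the real fuse step that merges the LoRA weights into the base model.
    print(f"Fusing {adapter_path} into {base_model}")

def finish_training(template: dict, base_model: str, adapter_path: str) -> None:
    # Default True to match the current always-fuse behaviour described in the issue.
    if template.get("fuse_adapter_after_training", True):
        fuse_adapter(base_model, adapter_path)
    else:
        print(f"Keeping adapter at {adapter_path}; base model left unfused.")

finish_training({"fuse_adapter_after_training": False}, "TinyDolphin-2.8-1.1b", "adapters/lora.safetensors")
```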

enhancement
good first issue