
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.

Results: 27 LLM-Finetuning-Toolkit issues

A lot of people want LLMs to output JSON for easy parsing/post-processing. A test that passes if the output is valid JSON and fails otherwise would help here.

DRAFT FOR NOW. I'm working on some documentation to help users test their LLMs. It's a really difficult task, and there's no agreed-upon best practice. This should both help guide...

Is it possible to provide a config file that shows how to run inference on an already fine-tuned model? I have run the starter config, and it looks like the...
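Until such an example lands in the docs, a hypothetical fragment of an inference-only config might look like the following; every key name here is illustrative only and not the toolkit's actual schema:

```yaml
# Hypothetical inference-only config; key names are illustrative,
# not llmtune's real schema.
model:
  path: ./experiment/assets   # directory holding the already fine-tuned weights
inference:
  max_new_tokens: 256
  temperature: 0.7
```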

The command `llmtune inference [experiment_dir]` aims to provide a versatile interface for running inference on pre-trained language models, allowing users to: 1. Load and run inference on a dataset; or...

enhancement
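The two modes described above could be sketched, independently of llmtune's actual implementation, as a small dispatch function; `generate_fn` is a stand-in for the fine-tuned model loaded from `experiment_dir` (loading is elided in this sketch):

```python
from typing import Callable, Iterable, List, Optional

def run_inference(
    experiment_dir: str,
    generate_fn: Callable[[str], str],
    dataset: Optional[Iterable[str]] = None,
    prompt: Optional[str] = None,
) -> List[str]:
    """Dispatch between dataset inference and single-prompt inference.

    `generate_fn` stands in for the model loaded from `experiment_dir`;
    model loading itself is out of scope for this sketch.
    """
    if dataset is None and prompt is None:
        raise ValueError("Provide either a dataset or a single prompt.")
    if dataset is not None:
        # Mode 1: run inference over every example in the dataset.
        return [generate_fn(example) for example in dataset]
    # Mode 2: run inference on a single ad-hoc prompt.
    return [generate_fn(prompt)]
```

For instance, `run_inference("exp/", str.upper, prompt="hi")` returns `["HI"]`, while passing `dataset=[...]` maps the model over every example.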

Ran `llmtune generate config` and then `llmtune run ./config.yml`. Things worked well (once I fixed my mistake with Mistral/Hugging Face repo permissions). The job ran very fast and put results into the "experiment"...

This improves user experience when the desired output is JSON. E.g.:

```yaml
prompt_stub: >-
  {
    "foo": {col_1},
    "bar": {col_2}
  }
```

The current approach would not work as we're capturing everything...

enhancement
Breaking Change
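One likely reason the current approach breaks: if the stub is filled via Python `str.format`-style substitution (an assumption about the implementation, not confirmed by the issue), the literal JSON braces are themselves parsed as placeholders. A short illustration:

```python
template = '{ "foo": {col_1}, "bar": {col_2} }'
try:
    template.format(col_1=1, col_2=2)
except (KeyError, ValueError):
    # The leading '{' opens a replacement field, so ' "foo"' is
    # treated as a field name and the substitution fails.
    print("naive format failed")

# Doubling the braces escapes them as literals:
escaped = '{{ "foo": {col_1}, "bar": {col_2} }}'
print(escaped.format(col_1=1, col_2=2))  # { "foo": 1, "bar": 2 }
```

So any fix would need either brace escaping or a templating scheme whose delimiters cannot collide with JSON.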

- Right now debug outputs and warnings are suppressed in favor of a cleaner UI
- Should let users choose a more verbose output by running something like ```shell...

enhancement
good first issue
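A common way to implement the suggestion above is to map a verbosity flag onto the logging level. This is a sketch only; the flag name `--verbose` and the use of Python's `logging` module are assumptions, not llmtune's actual design:

```python
import argparse
import logging

def configure_logging(verbose: bool) -> None:
    # Default: suppress debug output and warnings for a cleaner UI;
    # with --verbose, surface everything down to DEBUG.
    level = logging.DEBUG if verbose else logging.ERROR
    logging.basicConfig(level=level, format="%(levelname)s: %(message)s")

def main(argv=None) -> bool:
    parser = argparse.ArgumentParser(prog="llmtune")  # program name taken from the issues above
    parser.add_argument("--verbose", action="store_true",
                        help="show debug output and warnings")
    args = parser.parse_args(argv)
    configure_logging(args.verbose)
    logging.debug("only shown when --verbose is set")
    return args.verbose

if __name__ == "__main__":
    main()
```

Running with `--verbose` would then surface the currently suppressed warnings without changing the default clean UI.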

I have some examples of historical chat logs. How should I ideally incorporate these into the input?

- Makes it clear that this method performs inference on the test set