h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
### 🔧 Proposed code refactoring

Allow simple import of a dataset from Hugging Face:

```python
from datasets import load_dataset
import pandas as pd

# Load your dataset
dataset = load_dataset('fka/awesome-chatgpt-prompts')  # replace...
```
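The follow-on conversion to a CSV that LLM Studio can import could look like the sketch below. The records stand in for the `load_dataset(...)` output (whose `Dataset.to_pandas()` yields an equivalent DataFrame), and the column names are placeholders, not the real schema:

```python
import pandas as pd

# Placeholder records standing in for load_dataset(...)["train"];
# the column names here are assumptions, not the dataset's real schema.
records = [
    {"act": "Linux Terminal", "prompt": "I want you to act as a linux terminal."},
    {"act": "Translator", "prompt": "I want you to act as an English translator."},
]

# datasets.Dataset.to_pandas() would yield an equivalent DataFrame
df = pd.DataFrame(records)

# Save as CSV so it can be imported into H2O LLM Studio
df.to_csv("prompts.csv", index=False)
print(df.columns.tolist())  # ['act', 'prompt']
```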
### 🚀 Feature

We currently have a few settings for 4-bit and 8-bit quantization hard-coded:

```python
load_in_8bit=True,
llm_int8_threshold=0.0,
```

```python
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_quant_type="nf4",
```

We may want to open these...
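A minimal sketch of what exposing these as user-facing settings could look like, building the keyword arguments that would then be passed to `transformers.BitsAndBytesConfig`. The setting names here are hypothetical, not LLM Studio's actual config keys:

```python
# Hypothetical user-facing settings; names are illustrative only.
settings = {
    "quantization": "int4",          # "int8", "int4", or "none"
    "llm_int8_threshold": 6.0,       # currently hard-coded to 0.0
    "bnb_4bit_quant_type": "nf4",    # e.g. "nf4" or "fp4"
}

def quantization_kwargs(cfg: dict) -> dict:
    """Translate user settings into kwargs for transformers.BitsAndBytesConfig."""
    if cfg["quantization"] == "int8":
        return {
            "load_in_8bit": True,
            "llm_int8_threshold": cfg["llm_int8_threshold"],
        }
    if cfg["quantization"] == "int4":
        return {
            "load_in_4bit": True,
            "bnb_4bit_compute_dtype": "float16",  # torch.float16 in practice
            "bnb_4bit_quant_type": cfg["bnb_4bit_quant_type"],
        }
    return {}

print(quantization_kwargs(settings))
```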
### 🔧 Proposed code refactoring

During import, the user can currently only specify a prompt and an answer column.
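Supporting more than one prompt column could be sketched as joining several user-selected columns into a single prompt string. The column names and helper below are hypothetical, not the actual import code:

```python
def build_prompt(row: dict, prompt_columns: list, sep: str = "\n") -> str:
    """Join the user-selected columns of one record into a single prompt.

    Empty or missing columns are skipped so partially filled rows still work.
    """
    return sep.join(str(row[c]) for c in prompt_columns if row.get(c))

# Hypothetical record with a system and a question column
row = {"system": "You are helpful.", "question": "What is H2O?", "answer": "Water."}
prompt = build_prompt(row, ["system", "question"])
print(prompt)
```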
Add a guide on how to deploy LLM Studio with HTTPS support, with links to the Wave docs. Necessary steps in brief:

- Run the Wave server (waved) and the Wave app...
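As a deployment config fragment, starting the Wave server with TLS could look like the following. The flag names follow the H2O Wave docs and the certificate paths are placeholders; verify both against your Wave version:

```shell
# Start the Wave server with TLS; cert/key paths are placeholders.
# Flag names per the H2O Wave docs -- check them for your waved version.
./waved -tls-cert-file ./cert.pem -tls-key-file ./key.pem
```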
### 🚀 Feature

Improve the responsiveness of the web pages so they can be viewed and used effectively on mobile devices.

- [ ] Home page
- [ ] Experiments...
### 🚀 Feature

Add the capability to the UI to kick off a grid search over a set of hyperparameters (with specified search increments for continuous parameters, and specified attributes for...
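The grid expansion itself can be sketched with the standard library: a continuous parameter expanded by a user-specified increment, combined with discrete attributes via `itertools.product`. The parameter names below are illustrative, not LLM Studio's actual settings:

```python
from itertools import product

def frange(start, stop, step):
    """Inclusive range of floats with a fixed increment."""
    vals, v = [], start
    while v <= stop + 1e-9:
        vals.append(round(v, 10))
        v += step
    return vals

# Hypothetical search space: one continuous parameter with an increment,
# one discrete attribute list.
space = {
    "learning_rate": frange(1e-4, 3e-4, 1e-4),   # 0.0001, 0.0002, 0.0003
    "lora_r": [4, 8, 16],
}

# One experiment config per point in the grid
grid = [dict(zip(space, combo)) for combo in product(*space.values())]
print(len(grid))  # 9 configs
```

Each entry of `grid` is a complete hyperparameter assignment that could be submitted as one experiment.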
### 🚀 Feature

Integrate with https://wandb.ai/

### Motivation

Track and compare your model performance visually with Weights and Biases. Similar to https://neptune.ai/

### Implementation Notes

- See Neptune for reference:...
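W&B runs are typically initialized with a flat config dict, so one small piece of the integration would be flattening LLM Studio's nested experiment config before the `wandb.init(config=...)` call. A stdlib-only sketch with hypothetical config keys:

```python
def flatten_cfg(cfg: dict, prefix: str = "") -> dict:
    """Flatten a nested config into dotted keys for wandb.init(config=...)."""
    flat = {}
    for key, value in cfg.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_cfg(value, name))
        else:
            flat[name] = value
    return flat

# Hypothetical nested experiment config
cfg = {"training": {"learning_rate": 3e-4, "epochs": 1}, "llm_backbone": "some-org/some-model"}
flat = flatten_cfg(cfg)
print(flat)
# then: wandb.init(project="llmstudio", config=flat) and wandb.log({"train/loss": ...})
```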
Removes `save_best_checkpoint` and adds a new setting `save_checkpoint` with options:

- last
- best
- disable (which will not save any checkpoint)

Additional checks to disable chat, download and pushing of the model...
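The dispatch for the new setting could be sketched as below; the helper name and call sites are illustrative, not the actual LLM Studio internals:

```python
def should_save(save_checkpoint: str, is_best: bool, is_last: bool) -> bool:
    """Decide whether to write a checkpoint after the current evaluation.

    Hypothetical helper mirroring the proposed `save_checkpoint` options.
    """
    if save_checkpoint == "disable":
        return False
    if save_checkpoint == "best":
        return is_best
    if save_checkpoint == "last":
        return is_last
    raise ValueError(f"unknown save_checkpoint option: {save_checkpoint}")
```

With "disable", the chat, download, and push actions would also need to be blocked, since no weights exist on disk to serve them.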
### 🚀 Feature

It would be helpful to have a setting to disable saving checkpoints, e.g. for tests or benchmark runs, so they do not fill up the local disk. Specifically useful...
### 🐛 Bug

A native bfloat16 model fine-tuned with bfloat16 gets pushed to Hugging Face as float16.

### To Reproduce

1. Choose an HF model like [Llama-3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with weights natively in bfloat16...
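The expected behavior can be pinned down with a small sketch of the dtype choice in the push path: reduced-precision training dtypes should be preserved rather than silently cast to float16. The helper below is hypothetical, not the actual export code:

```python
def push_dtype(training_dtype: str) -> str:
    """Return the dtype to use for save_pretrained/push_to_hub.

    Hypothetical sketch: preserve reduced-precision training dtypes as-is
    (so bfloat16 stays bfloat16); only downcast full-precision float32.
    """
    if training_dtype in ("bfloat16", "float16"):
        return training_dtype
    return "float16"

print(push_dtype("bfloat16"))  # bfloat16
```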