alpaca-lora
Questions about training on Google Colab.
My Colab notebook runs these cells:

```
# Cell 1: start clean and clone the fork
%cd /content
!rm -rf ./*
!git clone https://github.com/acheong08/alpaca-lora
%cd alpaca-lora

# Cell 2: make sure the checkout is current
!git pull

# Cell 3: install dependencies
!pip3 install -r requirements.txt

# Cell 4: fetch the training data
!wget https://huggingface.co/datasets/acheong08/nsfw_reddit/resolve/main/data/data.jsonl

# Cell 5: fine-tune
!python3 finetune.py --base_model='decapoda-research/llama-7b-hf'
```
After training with this, the output always just repeats the input. Is there something wrong with how I'm training?
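One possible culprit: the notebook downloads data.jsonl but never tells finetune.py about it, so the script presumably trains on whatever its default dataset is. Assuming this fork keeps the upstream tloen/alpaca-lora flags (an assumption; check `python3 finetune.py --help`), the training cell would need something like:

```
!python3 finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path './data.jsonl' \
    --output_dir './lora-alpaca'
```

Upstream finetune.py also builds its prompt from `instruction`/`input`/`output` fields in each record; if data.jsonl uses a different schema, the training examples will not line up with the prompt template, which can produce a model that just continues (repeats) its input.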
It would be nice to have an example notebook.
{"train_runtime": 2584.0063, "train_samples_per_second": 0.531, "train_steps_per_second": 0.003, "train_loss": 2.3480387793646917, "epoch": 2.5}
@acheong08 I have an example notebook for this: https://github.com/TianyiPeng/Colab_for_Alpaca_Lora
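For anyone hitting the same echoing behavior, it is also worth ruling out the generation side before blaming training. A minimal sketch using transformers and peft (the adapter path and the prompt template here are illustrative assumptions, not this repo's exact code):

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer, GenerationConfig

base = "decapoda-research/llama-7b-hf"
adapter = "./lora-alpaca"  # wherever finetune.py wrote the adapter (assumption)

tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter)
model.eval()

# Use the same instruction-style prompt the model was trained on;
# feeding a raw string invites the model to simply continue (repeat) it.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me a story.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.15,  # discourages verbatim echoing
    max_new_tokens=256,
)
with torch.no_grad():
    out = model.generate(**inputs, generation_config=config)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Greedy decoding with no repetition penalty, or a prompt that does not match the training template, can both produce the "output repeats the input" symptom even when training went fine.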