autotrain-advanced
Error with --project-name argument in !autotrain llm command
I'm getting an error when running on Google Colab, telling me that I didn't provide a project name, yet I did.
!autotrain llm --train --project_name 'Llama2 testing-model' --model meta-llama/Llama-2-7b-chat-hf --data_path vicgalle/alpaca-gpt4 --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id student100/llama2-testing -block_size 2048 > training.log &
usage: autotrain
Change it to --project_name 'Llama2testing-model'
No space.
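To illustrate, assuming the CLI rejects whitespace in project names, the change is only inside the quoted value:
--project_name 'Llama2 testing-model'   # value contains a space: rejected
--project_name 'Llama2testing-model'    # space removed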
I removed the space as you mentioned, but I still get the very same result. No idea why.
!autotrain llm --train --project_name 'Llama2testing-model' --model meta-llama/Llama-2-7b-chat-hf --data_path vicgalle/alpaca-gpt4 --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id student100/llama2-testing -block_size 2048 > training.log &
usage: autotrain
project-name?
--project_name 'Llama2testing-model'
See the hyphen :) I'll fix it so it allows underscores too.
Well, the output remains the same.
!autotrain llm --train --project_name 'Llama2testingmodel' --model meta-llama/Llama-2-7b-chat-hf --data_path vicgalle/alpaca-gpt4 --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id student100/llama2-testing -block_size 2048 > training.log &
See the hyphen between project and name, please.
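As a sketch of what changes (only the flag spelling, not the value; this assumes the installed version only recognizes the hyphenated flag):
--project_name 'Llama2testingmodel'   # underscore in the flag name: not recognized
--project-name 'Llama2testingmodel'   # hyphen in the flag name: what the parser expects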
Sorry, but I don't get it. Do you mean I should remove the hyphen within the project name?
I also tried removing it from the command:
!autotrain llm --train --projectname 'Llama2testingmodel' --model meta-llama/Llama-2-7b-chat-hf --data_path vicgalle/alpaca-gpt4 --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id student100/llama2-testing -block_size 2048 > training.log &
!autotrain llm --train --project-name 'Llama2testingmodel' --model meta-llama/Llama-2-7b-chat-hf --data_path vicgalle/alpaca-gpt4 --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id student100/llama2-testing -block_size 2048 > training.log &
Okay, different output now, but I'm still getting other kinds of problems haha.
!autotrain llm --train --project-name 'Llama2testingmodel' --model meta-llama/Llama-2-7b-chat-hf --data_path vicgalle/alpaca-gpt4 --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id student100/llama2-testing -block_size 2048 > training.log &
usage: autotrain
I changed all the underscores in the command into hyphens:
!autotrain llm --train --project-name 'Llama2testingmodel' --model meta-llama/Llama-2-7b-chat-hf --data-path vicgalle/alpaca-gpt4 --text-column text --use-peft --use-int4 --learning-rate 2e-4 --train-batch-size 2 --num-train-epochs 3 --trainer sft --model_max_length 2048 --push-to-hub --repo-id student100/llama2-testing -block-size 2048 > training.log &
usage: autotrain
You can get all arguments using "autotrain llm --help". I'm not sure where you took the command from, but it seems quite off and from an old version.
You can also follow the Colab link in the README.
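For reference, here is a sketch of the previous command with every flag hyphenated; verify each flag name against what autotrain llm --help prints for your installed version. Note the two leftovers in the last attempt: --model_max_length still used underscores, and -block-size had only a single leading dash.
!autotrain llm --train --project-name 'Llama2testingmodel' --model meta-llama/Llama-2-7b-chat-hf --data-path vicgalle/alpaca-gpt4 --text-column text --use-peft --use-int4 --learning-rate 2e-4 --train-batch-size 2 --num-train-epochs 3 --trainer sft --model-max-length 2048 --push-to-hub --repo-id student100/llama2-testing --block-size 2048 > training.log &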
This issue is stale because it has been open for 15 days with no activity.
This issue was closed because it has been inactive for 20 days since being marked as stale.