autotrain-advanced

[BUG] No UI available

Open · dejankocic opened this issue 2 months ago · 6 comments

Prerequisites

  • [X] I have read the documentation.
  • [X] I have checked other issues for similar problems.

Backend

Local

Interface Used

CLI

CLI Command

autotrain app --port 8080 --host 127.0.0.1

UI Screenshots & Parameters

(screenshot attached in the original issue)

Error Logs

autotrain app --port 8080 --host 127.0.0.1
Your installed package nvidia-ml-py is corrupted. Skip patch functions nvmlDeviceGet{Compute,Graphics,MPSCompute}RunningProcesses. You may get incorrect or incomplete results. Please consider reinstall package nvidia-ml-py via pip3 install --force-reinstall nvidia-ml-py nvitop.
Your installed package nvidia-ml-py is corrupted. Skip patch functions nvmlDeviceGetMemoryInfo. You may get incorrect or incomplete results. Please consider reinstall package nvidia-ml-py via pip3 install --force-reinstall nvidia-ml-py nvitop.
INFO | 2024-05-09 15:07:07 | autotrain.app::33 - Starting AutoTrain...
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, train_split, username, logging_steps, lora_alpha, evaluation_strategy, text_column, save_total_limit, model_max_length, valid_split, token, rejected_text_column, data_path, scheduler, push_to_hub, trainer, warmup_ratio, prompt_text_column, weight_decay, max_grad_norm, use_flash_attention_2, model, gradient_accumulation, optimizer, auto_find_batch_size, lr, lora_r, dpo_beta, project_name, seed, merge_adapter, disable_gradient_checkpointing, lora_dropout, model_ref, max_prompt_length, add_eos_token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, text_column, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, username, epochs, train_split, weight_decay, max_grad_norm, image_column, logging_steps, model, evaluation_strategy, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, data_path, scheduler, push_to_hub, target_column, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: max_target_length, batch_size, username, train_split, epochs, logging_steps, lora_alpha, evaluation_strategy, text_column, save_total_limit, valid_split, peft, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio, weight_decay, max_grad_norm, model, gradient_accumulation, optimizer, auto_find_batch_size, lr, quantization, lora_r, project_name, seed, lora_dropout, token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: id_column, username, train_split, num_trials, model, time_limit, numerical_columns, valid_split, task, project_name, seed, data_path, push_to_hub, categorical_columns, target_columns, token
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: adam_weight_decay, pre_compute_text_embeddings, validation_epochs, epochs, text_encoder_use_attention_mask, username, image_path, scale_lr, adam_epsilon, adam_beta1, checkpoints_total_limit, dataloader_num_workers, checkpointing_steps, prior_preservation, scheduler, lr_power, push_to_hub, validation_images, local_rank, logging, revision, resume_from_checkpoint, tokenizer_max_length, num_class_images, max_grad_norm, model, class_image_path, xl, validation_prompt, rank, class_prompt, adam_beta2, prior_generation_precision, warmup_steps, project_name, seed, class_labels_conditioning, prior_loss_weight, num_validation_images, center_crop, allow_tf32, sample_batch_size, num_cycles, token, tokenizer
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, tags_column, lr, valid_split, token, project_name, seed, tokens_column, max_seq_length, data_path, scheduler, push_to_hub, warmup_ratio
WARNING | 2024-05-09 15:07:07 | autotrain.trainers.common:init:174 - Parameters not supplied by user and set to default: batch_size, epochs, train_split, username, weight_decay, max_grad_norm, logging_steps, model, evaluation_strategy, text_column, save_total_limit, gradient_accumulation, optimizer, auto_find_batch_size, lr, valid_split, token, project_name, seed, max_seq_length, data_path, scheduler, push_to_hub, target_column, warmup_ratio
INFO | 2024-05-09 15:07:10 | autotrain.app::157 - AutoTrain started successfully

Additional Information

After the application starts successfully, no UI is available.

Running environment: WSL.
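As a quick diagnostic (not part of AutoTrain itself), one can check from inside WSL whether anything is actually listening on the host/port the app was launched with. The snippet below is a minimal sketch; the default host and port match the command in this report, but the function name and structure are my own:

```python
import socket


def ui_reachable(host: str = "127.0.0.1", port: int = 8080, timeout: float = 2.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    try:
        # create_connection performs the full TCP handshake, so success
        # means some process is listening on that address.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # If this prints False while `autotrain app` reports a successful
    # start, the server is not listening where the browser is looking.
    print(ui_reachable())
```

If this returns True inside WSL but the Windows browser still shows nothing, the problem is likely WSL-to-Windows localhost forwarding rather than AutoTrain; binding to 0.0.0.0 instead of 127.0.0.1 is a common workaround.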

dejankocic · May 09 '24 13:05