autotrain-advanced
# RuntimeError: operator torchvision::nms does not exist
### Prerequisites
- [X] I have read the documentation.
- [X] I have checked other issues for similar problems.
### Backend
Colab
### Interface Used
UI
### CLI Command
No response
### UI Screenshots & Parameters
No response
### Error Logs

```
INFO | 2024-06-28 13:00:09 | autotrain.cli.autotrain:main:58 - Using AutoTrain configuration: conf.yaml
INFO | 2024-06-28 13:00:09 | autotrain.parser:post_init:133 - Running task: lm_training
INFO | 2024-06-28 13:00:09 | autotrain.parser:post_init:134 - Using backend: local
INFO | 2024-06-28 13:00:09 | autotrain.parser:run:194 - {'model': 'meta-llama/llama-2-7b-chat-hf', 'project_name': 'devmansur', 'data_path': 'data/', 'train_split': 'train', 'valid_split': None, 'add_eos_token': True, 'block_size': 1024, 'model_max_length': 2048, 'padding': 'right', 'trainer': 'default', 'use_flash_attention_2': False, 'log': 'tensorboard', 'disable_gradient_checkpointing': False, 'logging_steps': -1, 'eval_strategy': 'epoch', 'save_total_limit': 1, 'auto_find_batch_size': False, 'mixed_precision': 'fp16', 'lr': 0.0002, 'epochs': 1, 'batch_size': 1, 'warmup_ratio': 0.1, 'gradient_accumulation': 4, 'optimizer': 'adamw_torch', 'scheduler': 'linear', 'weight_decay': 0.01, 'max_grad_norm': 1.0, 'seed': 42, 'chat_template': None, 'quantization': 'none', 'target_modules': 'all-linear', 'merge_adapter': False, 'peft': True, 'lora_r': 8, 'lora_alpha': 32, 'lora_dropout': 0.05, 'model_ref': None, 'dpo_beta': 0.1, 'max_prompt_length': 128, 'max_completion_length': None, 'prompt_text_column': None, 'text_column': 'text', 'rejected_text_column': None, 'push_to_hub': False, 'username': 'abc', 'token': '', 'unsloth': False}
Saving the dataset (1/1 shards): 100% 4/4 [00:00<00:00, 422.24 examples/s]
Saving the dataset (1/1 shards): 100% 4/4 [00:00<00:00, 1860.83 examples/s]
INFO | 2024-06-28 13:00:09 | autotrain.backends.local:create:8 - Starting local training...
INFO | 2024-06-28 13:00:09 | autotrain.commands:launch_command:400 - ['accelerate', 'launch', '--num_machines', '1', '--num_processes', '1', '--mixed_precision', 'fp16', '-m', 'autotrain.trainers.clm', '--training_config', 'devmansur/training_params.json']
INFO | 2024-06-28 13:00:09 | autotrain.commands:launch_command:401 - {'model': 'meta-llama/llama-2-7b-chat-hf', 'project_name': 'devmansur', 'data_path': 'devmansur/autotrain-data', 'train_split': 'train', 'valid_split': None, 'add_eos_token': True, 'block_size': 1024, 'model_max_length': 2048, 'padding': 'right', 'trainer': 'default', 'use_flash_attention_2': False, 'log': 'tensorboard', 'disable_gradient_checkpointing': False, 'logging_steps': -1, 'eval_strategy': 'epoch', 'save_total_limit': 1, 'auto_find_batch_size': False, 'mixed_precision': 'fp16', 'lr': 0.0002, 'epochs': 1, 'batch_size': 1, 'warmup_ratio': 0.1, 'gradient_accumulation': 4, 'optimizer': 'adamw_torch', 'scheduler': 'linear', 'weight_decay': 0.01, 'max_grad_norm': 1.0, 'seed': 42, 'chat_template': None, 'quantization': 'none', 'target_modules': 'all-linear', 'merge_adapter': False, 'peft': True, 'lora_r': 8, 'lora_alpha': 32, 'lora_dropout': 0.05, 'model_ref': None, 'dpo_beta': 0.1, 'max_prompt_length': 128, 'max_completion_length': None, 'prompt_text_column': 'autotrain_prompt', 'text_column': 'autotrain_text', 'rejected_text_column': 'autotrain_rejected_text', 'push_to_hub': False, 'username': 'abc', 'token': '', 'unsloth': False}
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 5, in <module>
    ...
RuntimeError: operator torchvision::nms does not exist
```
Additional Information
No response
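Although the traceback is cut off, the error named in the title ("operator torchvision::nms does not exist") is characteristic of a torch/torchvision binary mismatch: torchvision's compiled operators are built against one specific torch version, and importing it under a different one fails to register `torchvision::nms`. On Colab this often happens when installing `autotrain-advanced` upgrades torch while leaving the preinstalled torchvision behind. A minimal version check, assuming a standard Colab Python runtime:

```python
# Illustrative diagnostic (not part of the original report). On a mismatched
# install, `import torchvision` itself is often where the RuntimeError is
# raised, because torchvision registers its compiled ops (torchvision::nms
# among them) at import time against the running torch build.
import torch

print("torch:", torch.__version__)

import torchvision  # may raise RuntimeError here if the builds don't match

print("torchvision:", torchvision.__version__)
```

If the two reported versions are not a compatible pair, reinstalling torch and torchvision together in one `pip install` command, so the resolver picks matching builds, usually clears the error.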