
Issue while executing Train.py

Open saikumarkethi opened this issue 5 years ago • 10 comments

```
--------------PyTorch VERSION: 1.7.0+cu101
..............device cpu
Traceback (most recent call last):
  File "/content/memAE-2/Train.py", line 76, in <module>
    args.dataset_augment_test_type)
  File "/content/memAE-2/data/utils.py", line 148, in give_data_folder
    return train_folder, test_folder
UnboundLocalError: local variable 'train_folder' referenced before assignment
```

saikumarkethi avatar Nov 11 '20 07:11 saikumarkethi

Hi, the `dataset_type` should be one of these three datasets: Avenue, UCSDped2, or UCSDped1.
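If an unrecognized `dataset_type` is passed, `give_data_folder` never assigns `train_folder` before the `return`, which is exactly the `UnboundLocalError` in the first comment. A minimal sketch of the idea, with the folder layout inferred from the log output in this thread (not the actual source):

```python
def give_data_folder(dataset_type, dataset_path):
    """Hypothetical sketch: map a dataset name to its train/test folders.

    If dataset_type matches no branch, train_folder is never bound and
    Python raises UnboundLocalError at the return statement, so failing
    fast with a clear message is preferable.
    """
    if dataset_type == "Avenue":
        train_folder = f"{dataset_path}/Avenue/frames/training/"
        test_folder = f"{dataset_path}/Avenue/frames/testing/"
    elif dataset_type in ("UCSDped1", "UCSDped2"):
        train_folder = f"{dataset_path}/{dataset_type}/Train_jpg/"
        test_folder = f"{dataset_path}/{dataset_type}/Test_jpg/"
    else:
        raise ValueError(f"Unknown dataset_type: {dataset_type!r}; "
                         "expected Avenue, UCSDped1, or UCSDped2")
    return train_folder, test_folder
```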

lyn1874 avatar Nov 12 '20 16:11 lyn1874

Can you please tell me where I should load my data in your code? My data path is "/content/drive/MyDrive/UCSD_Anomaly_Dataset.v1p2/UCSD_Anomaly_Dataset/UCSDped1"

saikumarkethi avatar Nov 19 '20 19:11 saikumarkethi

Hi, I have provided scripts that show how to prepare the dataset. Maybe they can give you some help.

lyn1874 avatar Nov 20 '20 14:11 lyn1874

> Hi, I have provided scripts that show how to prepare the dataset. Maybe they can give you some help.

Hello ma'am, I am getting an error while executing run.sh. Could you help me resolve it? Thank you.

```
./run.sh UCSDped2 /home/sai/Desktop/UCSDped2 /home/sai/Desktop/save
--------------PyTorch VERSION: 1.6.0+cpu
..............device cpu
The training path /home/sai/Desktop/UCSDped2/UCSDped2/Train_jpg/
The testing path /home/sai/Desktop/UCSDped2/UCSDped2/Test_jpg/
--There is no other augmentation except resizing, grayscale and normalization--
['Train001', 'Train002', 'Train003', 'Train004', 'Train005', 'Train006', 'Train007', 'Train008', 'Train009', 'Train010', 'Train011', 'Train012', 'Train013', 'Train014', 'Train015', 'Train016']
['Test001', 'Test002', 'Test003', 'Test004', 'Test005', 'Test006', 'Test007', 'Test008', 'Test009', 'Test010', 'Test011', 'Test012']
Training data shape 192
Validation data shape 152
AutoEncoderCov3DMem
Traceback (most recent call last):
  File "Train.py", line 132, in <module>
    for batch_idx, frame in enumerate(train_batch):
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/sai/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/sai/Desktop/memAE-master/data/utils.py", line 49, in __getitem__
    image = self.load_image(self.videos[video_name]['frame'][frame_name+i])
  File "/home/sai/Desktop/memAE-master/data/utils.py", line 37, in load_image
    image = Image.open(filename)
  File "/home/sai/anaconda3/lib/python3.8/site-packages/PIL/Image.py", line 2891, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'Train001_082.jpg'
```

saikumarkethi avatar Dec 02 '20 21:12 saikumarkethi

> Hi, I have provided scripts that show how to prepare the dataset. Maybe they can give you some help.

```
./run.sh Avenue /home/sai/Desktop/Avenue/ /home/sai/Desktop/save
--------------PyTorch VERSION: 1.6.0+cpu
..............device cpu
The training path /home/sai/Desktop/Avenue/Avenue/frames/training/
The testing path /home/sai/Desktop/Avenue/Avenue/frames/testing/
--There is no other augmentation except resizing, grayscale and normalization--
['training/training_video_01', 'training/training_video_02', 'training/training_video_03', 'training/training_video_04', 'training/training_video_05', 'training/training_video_06', 'training/training_video_07', 'training/training_video_08', 'training/training_video_09', 'training/training_video_10', 'training/training_video_11', 'training/training_video_12', 'training/training_video_13', 'training/training_video_14', 'training/training_video_15', 'training/training_video_16']
['testing/testing_video_01', 'testing/testing_video_02', 'testing/testing_video_03', 'testing/testing_video_04', 'testing/testing_video_05', 'testing/testing_video_06', 'testing/testing_video_07', 'testing/testing_video_08', 'testing/testing_video_09', 'testing/testing_video_10', 'testing/testing_video_11', 'testing/testing_video_12', 'testing/testing_video_13', 'testing/testing_video_14', 'testing/testing_video_15', 'testing/testing_video_16', 'testing/testing_video_17', 'testing/testing_video_18', 'testing/testing_video_19', 'testing/testing_video_20', 'testing/testing_video_21']
Training data shape 1257
Validation data shape 1250
AutoEncoderCov3DMem
./run.sh: line 5: 16405 Killed    python Train.py --dataset_path $datapath --dataset_type $datatype --version 0 --EntropyLossWeight 0 --lr 1e-4 --exp_dir $expdir
```

saikumarkethi avatar Dec 05 '20 07:12 saikumarkethi

As for the "no such file or directory" error: I made a mistake in the data loading file, sorry. Please change line 93 in `data/utils.py` from

```python
all_video_frames = np.array([v for v in os.listdir(path) if '.jpg' in v])
```

to

```python
all_video_frames = np.array([path + v for v in os.listdir(path) if '.jpg' in v])
```

As for the second error, "Killed": in my experience that normally means you are out of memory, so maybe reduce the batch size.

lyn1874 avatar Dec 09 '20 10:12 lyn1874

> As for the "no such file or directory" error: I made a mistake in the data loading file, sorry. Please change line 93 in `data/utils.py` from `all_video_frames = np.array([v for v in os.listdir(path) if '.jpg' in v])` to `all_video_frames = np.array([path + v for v in os.listdir(path) if '.jpg' in v])`
>
> As for the second error, "Killed": in my experience that normally means you are out of memory, so maybe reduce the batch size.

This error occurs for the UCSDped2 dataset:

```
--------------PyTorch VERSION: 1.7.0+cu101
..............device cuda
The training path /content/drive/MyDrive/UCSDped2/UCSDped2/Train_jpg/
The testing path /content/drive/MyDrive/UCSDped2/UCSDped2/Test_jpg/
--There is no other augmentation except resizing, grayscale and normalization--
['Train001', 'Train002', 'Train003', 'Train004', 'Train005', 'Train006', 'Train007', 'Train009', 'Train012', 'Train013', 'Train014', 'Train015', 'Train016']
['Test001', 'Test002', 'Test003', 'Test004', 'Test005', 'Test007', 'Test008', 'Test009', 'Test010', 'Test011', 'Test012']
Training data shape 8
Validation data shape 0
AutoEncoderCov3DMem
2020-12-09 10:34:36.288661: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "/content/memAE-2/Train.py", line 132, in <module>
    for batch_idx, frame in enumerate(train_batch):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/content/memAE-2/data/utils.py", line 49, in __getitem__
    image = self.load_image(self.videos[video_name]['frame'][frame_name+i])
IndexError: index 26 is out of bounds for axis 0 with size 26
```
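The IndexError above typically means the sliding-window sampler picks a start index too close to the end of a clip; note also that only 13 of 16 Train folders and 11 of 12 Test folders were found, so the dataset copy looks incomplete. A hedged sketch of the windowing arithmetic, with hypothetical names:

```python
# Sketch of why the IndexError appears: __getitem__ in data/utils.py
# reads a window of time_step + 1 consecutive frames starting at the
# sampled index, roughly:
#   frames = [video['frame'][start + i] for i in range(time_step + 1)]
# so a clip with n_frames frames only supports start indices
# 0 .. n_frames - time_step - 1. Sampling beyond that runs past the
# last frame (here: index 26 out of bounds for size 26).
def num_valid_windows(n_frames, time_step):
    # Number of valid start indices; 0 if the clip is shorter than a window.
    return max(0, n_frames - time_step)
```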


This error occurs for the Avenue dataset:

```
--------------PyTorch VERSION: 1.7.0+cu101
..............device cuda
The training path /content/drive/MyDrive/Avenue/Avenue/frames/training/
The testing path /content/drive/MyDrive/Avenue/Avenue/frames/testing/
--There is no other augmentation except resizing, grayscale and normalization--
Traceback (most recent call last):
  File "/content/memAE-2/Train.py", line 81, in <module>
    train_dataset = data_utils.DataLoader(train_folder, frame_trans, time_step=args.t_length - 1, num_pred=1)
  File "/content/memAE-2/data/utils.py", line 22, in __init__
    self.videos, video_string = setup(self.dir, self.videos)
  File "/content/memAE-2/data/utils.py", line 89, in setup
    video_string = sorted(video_string, key=lambda s: int(s.strip().split('_')[-1]))
  File "/content/memAE-2/data/utils.py", line 89, in <lambda>
    video_string = sorted(video_string, key=lambda s: int(s.strip().split('_')[-1]))
ValueError: invalid literal for int() with base 10: 'checkpoints'
```

saikumarkethi avatar Dec 09 '20 10:12 saikumarkethi

Hi, I have validated the scripts for both the UCSD and Avenue datasets. Except for the error on line 93 of `data/utils.py`, everything seems fine. Could you please run it again?

lyn1874 avatar Dec 09 '20 17:12 lyn1874

> Hi, I have validated the scripts for both the UCSD and Avenue datasets. Except for the error on line 93 of `data/utils.py`, everything seems fine. Could you please run it again?

I already ran your code, ma'am, but I am getting the errors I commented about. Since I don't have a GPU in my system, I ran your code in Colab: https://colab.research.google.com/drive/1iVlZDh5N42Gds3vHvasixprbFC0fn_81?usp=sharing. Can you please view the link and tell me what mistake I have made?

saikumarkethi avatar Dec 09 '20 19:12 saikumarkethi

How did you get this file, Avenue_gt.npy?

chenming1999 avatar Dec 21 '20 03:12 chenming1999