Issue while executing Train.py
--------------PyTorch VERSION: 1.7.0+cu101
..............device cpu
Traceback (most recent call last):
File "/content/memAE-2/Train.py", line 76, in
Hi, the dataset_type should be one of these three datasets: "Avenue", "UCSDped2", or "UCSDped1".
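A sketch of how that restriction is typically enforced with argparse choices; the flag names match the run.sh invocation quoted later in this thread, but the actual check in Train.py may differ:

```python
# Sketch only: restrict --dataset_type to the three supported datasets.
# Flag names follow the run.sh call in this thread; Train.py may differ.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--dataset_type', required=True,
                    choices=['Avenue', 'UCSDped1', 'UCSDped2'],
                    help='one of the three supported datasets')
parser.add_argument('--dataset_path', required=True,
                    help='root folder that contains the dataset')
args = parser.parse_args()
print(args.dataset_type, args.dataset_path)
```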
Can you please tell me where I should load my data in your code? My data path is "/content/drive/MyDrive/UCSD_Anomaly_Dataset.v1p2/UCSD_Anomaly_Dataset/UCSDped1"
Hi, I have provided scripts on how to prepare the dataset. Maybe they can give you some help.
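For reference, a minimal sketch of what such a preparation step could look like, assuming the raw UCSD frames ship as .tif images (as in the official UCSD Anomaly Dataset release) and that the loader expects the Train_jpg/Test_jpg layout printed in the logs below; the repo's own scripts may differ:

```python
# Hypothetical prep step: convert raw UCSD .tif frames into the Train_jpg/
# and Test_jpg/ layout that the loader's printed paths suggest.
import os
from PIL import Image

def convert_split(src_root, dst_root):
    # e.g. src_root='.../UCSDped1/Train', dst_root='.../UCSDped1/Train_jpg'
    for clip in sorted(os.listdir(src_root)):
        src_clip = os.path.join(src_root, clip)
        if not os.path.isdir(src_clip):
            continue
        dst_clip = os.path.join(dst_root, clip)
        os.makedirs(dst_clip, exist_ok=True)
        for frame in sorted(os.listdir(src_clip)):
            if frame.endswith('.tif'):
                img = Image.open(os.path.join(src_clip, frame)).convert('L')
                img.save(os.path.join(dst_clip, frame.replace('.tif', '.jpg')))

convert_split('UCSDped1/Train', 'UCSDped1/Train_jpg')
convert_split('UCSDped1/Test', 'UCSDped1/Test_jpg')
```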
Hello ma'am, I am getting an error while executing run.sh. Could you help me resolve it? Thank you.
./run.sh UCSDped2 /home/sai/Desktop/UCSDped2 /home/sai/Desktop/save
--------------PyTorch VERSION: 1.6.0+cpu
..............device cpu
The training path /home/sai/Desktop/UCSDped2/UCSDped2/Train_jpg/
The testing path /home/sai/Desktop/UCSDped2/UCSDped2/Test_jpg/
--There is no other augmentation except resizing, grayscale and normalization--
['Train001', 'Train002', 'Train003', 'Train004', 'Train005', 'Train006', 'Train007', 'Train008', 'Train009', 'Train010', 'Train011', 'Train012', 'Train013', 'Train014', 'Train015', 'Train016']
['Test001', 'Test002', 'Test003', 'Test004', 'Test005', 'Test006', 'Test007', 'Test008', 'Test009', 'Test010', 'Test011', 'Test012']
Training data shape 192
Validation data shape 152
AutoEncoderCov3DMem
Traceback (most recent call last):
File "Train.py", line 132, in
Hi, I have provided scripts on how to prepare the dataset. Maybe they can give you some help.
./run.sh Avenue /home/sai/Desktop/Avenue/ /home/sai/Desktop/save
--------------PyTorch VERSION: 1.6.0+cpu
..............device cpu
The training path /home/sai/Desktop/Avenue/Avenue/frames/training/
The testing path /home/sai/Desktop/Avenue/Avenue/frames/testing/
--There is no other augmentation except resizing, grayscale and normalization--
['training/training_video_01', 'training/training_video_02', 'training/training_video_03', 'training/training_video_04', 'training/training_video_05', 'training/training_video_06', 'training/training_video_07', 'training/training_video_08', 'training/training_video_09', 'training/training_video_10', 'training/training_video_11', 'training/training_video_12', 'training/training_video_13', 'training/training_video_14', 'training/training_video_15', 'training/training_video_16']
['testing/testing_video_01', 'testing/testing_video_02', 'testing/testing_video_03', 'testing/testing_video_04', 'testing/testing_video_05', 'testing/testing_video_06', 'testing/testing_video_07', 'testing/testing_video_08', 'testing/testing_video_09', 'testing/testing_video_10', 'testing/testing_video_11', 'testing/testing_video_12', 'testing/testing_video_13', 'testing/testing_video_14', 'testing/testing_video_15', 'testing/testing_video_16', 'testing/testing_video_17', 'testing/testing_video_18', 'testing/testing_video_19', 'testing/testing_video_20', 'testing/testing_video_21']
Training data shape 1257
Validation data shape 1250
AutoEncoderCov3DMem
./run.sh: line 5: 16405 Killed python Train.py --dataset_path $datapath --dataset_type $datatype --version 0 --EntropyLossWeight 0 --lr 1e-4 --exp_dir $expdir
As for the error "no such file or directory", I made a mistake in the data loading file, sorry. Please change line 93 in file data.utils.py from all_video_frames = np.array([v for v in os.listdir(path) if '.jpg' in v]) to all_video_frames = np.array([path + v for v in os.listdir(path) if '.jpg' in v])
As for the second error, "Killed": from my experience it normally means you are out of memory, so try reducing the batch size.
As for the error "no such file or directory", I made a mistake in the data loading file, sorry. Please change line 93 in file data.utils.py from all_video_frames = np.array([v for v in os.listdir(path) if '.jpg' in v]) to all_video_frames = np.array([path + v for v in os.listdir(path) if '.jpg' in v])
As for the second error, killed. From my experience, normally it means you are out of memory, maybe reduce the batch size
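If the training script exposes the batch size through its DataLoader, the fix is just the batch_size argument. A generic PyTorch sketch with a stand-in dataset (the tensor shape here is hypothetical, chosen to resemble 16-frame grayscale clips; it is not the repo's actual Dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the real video-clip Dataset built in Train.py (hypothetical
# shape: 8 clips of 16 grayscale 256x256 frames).
train_dataset = TensorDataset(torch.zeros(8, 1, 16, 256, 256))

# A smaller batch_size lowers peak memory, which is the usual remedy when the
# OS kills the process ("Killed") for running out of RAM.
train_loader = DataLoader(train_dataset, batch_size=2, shuffle=True)
for (batch,) in train_loader:
    print(batch.shape)  # torch.Size([2, 1, 16, 256, 256])
```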
# This error occurs for the UCSDped2 dataset
--------------PyTorch VERSION: 1.7.0+cu101
..............device cuda
The training path /content/drive/MyDrive/UCSDped2/UCSDped2/Train_jpg/
The testing path /content/drive/MyDrive/UCSDped2/UCSDped2/Test_jpg/
--There is no other augmentation except resizing, grayscale and normalization--
['Train001', 'Train002', 'Train003', 'Train004', 'Train005', 'Train006', 'Train007', 'Train009', 'Train012', 'Train013', 'Train014', 'Train015', 'Train016']
['Test001', 'Test002', 'Test003', 'Test004', 'Test005', 'Test007', 'Test008', 'Test009', 'Test010', 'Test011', 'Test012']
Training data shape 8
Validation data shape 0
AutoEncoderCov3DMem
2020-12-09 10:34:36.288661: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "/content/memAE-2/Train.py", line 132, in
# This error occurs for the Avenue dataset
--------------PyTorch VERSION: 1.7.0+cu101
..............device cuda
The training path /content/drive/MyDrive/Avenue/Avenue/frames/training/
The testing path /content/drive/MyDrive/Avenue/Avenue/frames/testing/
--There is no other augmentation except resizing, grayscale and normalization--
Traceback (most recent call last):
File "/content/memAE-2/Train.py", line 81, in
Hi, I have validated the scripts for both UCSD and Avenue datasets. Except for the error in line 93 in file data/utils.py, everything seems fine. Could you please run it again?
I already ran your code, ma'am, but I am getting the errors I commented about. Since I don't have a GPU in my system, I ran your code in Colab: https://colab.research.google.com/drive/1iVlZDh5N42Gds3vHvasixprbFC0fn_81?usp=sharing Can you please view the link and tell me what mistake I have made?
How did you get this file, Avenue_gt.npy?