memae-anomaly-detection

Testing AUC on Ped2 Unmatched

Open Wolfybox opened this issue 4 years ago • 26 comments

I ran the testing script on the Ped2 dataset and got an AUC of only around 85.1%.

Wolfybox avatar Jul 01 '20 10:07 Wolfybox

Can you share the Ped2 data preparation code or give me some guidance? I've been stuck on Ped2 data preparation for many weeks.

WYZhang999 avatar Jul 02 '20 02:07 WYZhang999

Thank you very much.

WYZhang999 avatar Jul 02 '20 02:07 WYZhang999

Can you share the Ped2 data preparation code or give me some guidance? I've been stuck on Ped2 data preparation for many weeks.

import math
import os

import numpy as np
from tqdm import tqdm


def gen_frame_index(clip_len=16):
    # split each test video's frame names into consecutive, non-overlapping
    # clips of clip_len frames and save each clip's name list as a .npy file
    vfolder = r'F:\dataset\UCSD\ped2\testing\frames'
    save_dir = r'F:\dataset\UCSD\ped2\testing\indices'
    for vname in tqdm(os.listdir(vfolder)):
        vdir = os.path.join(vfolder, vname)
        flist = sorted(os.listdir(vdir))
        fnum = len(flist)
        clip_num = math.ceil(fnum / clip_len)
        clip_num_len = len(str(clip_num))
        target_dir = os.path.join(save_dir, vname)
        if not os.path.exists(target_dir):
            os.makedirs(target_dir)
        for clip_i in range(clip_num):
            start_fi = clip_i * clip_len
            end_fi = (clip_i + 1) * clip_len if (clip_i + 1) * clip_len < fnum else fnum
            clip_list = flist[start_fi: end_fi]
            clip_list = np.array(clip_list)
            np.save(os.path.join(target_dir, f'{str(clip_i).zfill(clip_num_len)}.npy'), clip_list)

Well, this is how I generate the so-called indices that the author's code requires. However, to use these indices, you will also have to modify a few lines of code in 'video_dataset.py'. The data preparation is actually not troublesome at all. The basic logic is simple: the frame indices (or, more specifically, the names of the images in each frame folder) are divided into clips, and each clip's list is saved to its own index file.
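For reference, here is a minimal sketch of what one of the generated index files contains (the path, zero-padding, and .tif frame names are only illustrative and depend on your own layout):

import numpy as np

# hypothetical path to the first clip index of one test video
idx_file = r'F:\dataset\UCSD\ped2\testing\indices\Test001\00.npy'
frame_names = np.load(idx_file)
print(frame_names)       # e.g. ['001.tif' '002.tif' ... '016.tif']
print(len(frame_names))  # clip_len, except possibly for the last clip of a video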

Wolfybox avatar Jul 02 '20 02:07 Wolfybox

You are so nice! I still have a lot to learn. Have a nice day!

WYZhang999 avatar Jul 02 '20 02:07 WYZhang999

Excuse me, it's me again. Can you please share the training code? I want to learn from it, and I really appreciate your help.

WYZhang999 avatar Jul 02 '20 03:07 WYZhang999

Excuse me, it's me again. Can you please share the training code? I want to learn from it, and I really appreciate your help.

Welp, I haven't done the training part. :P

Wolfybox avatar Jul 02 '20 03:07 Wolfybox

Okay, fine. Thank you again~

WYZhang999 avatar Jul 02 '20 03:07 WYZhang999

Excuse me, I want to use these indices but I failed to modify the code in 'video_datasets.py'. Can I see your modified code in 'video_datasets.py'?

WYZhang999 avatar Jul 02 '20 07:07 WYZhang999

Excuse me, I want to use these indices but I failed to modify the code in 'video_datasets.py'. Can I see your modified code in 'video_datasets.py'?

import os

import cv2
import numpy as np
import torch
from torch.utils.data import Dataset


class VideoDatasetOneDir(Dataset):
    def __init__(self, idx_dir, frame_root, is_testing=False, use_cuda=False, transform=None):
        self.idx_dir = idx_dir
        self.frame_root = frame_root
        self.idx_name_list = [name for name in os.listdir(self.idx_dir)]
        self.idx_name_list.sort()
        self.use_cuda = use_cuda
        self.transform = transform
        self.is_testing = is_testing

    def __len__(self):
        return len(self.idx_name_list)

    def __getitem__(self, clip_idx):
        """Get a video clip with the stacked frames indexed by clip_idx."""
        idx_name = self.idx_name_list[clip_idx]
        frame_idx = np.load(os.path.join(self.idx_dir, idx_name))
        v_dir = self.frame_root

        sample_frame = cv2.imread(os.path.join(v_dir, frame_idx[0]), cv2.IMREAD_GRAYSCALE)

        sample_frame_shape = sample_frame.shape
        h = sample_frame_shape[0]
        w = sample_frame_shape[1]

        # each sample is a concatenation of the indexed frames
        clip = []
        for fname in frame_idx:
            cur_frame = cv2.imread(os.path.join(v_dir, fname), cv2.IMREAD_GRAYSCALE)
            # pass interpolation by keyword: the third positional argument of cv2.resize is dst
            cur_frame = cv2.resize(cur_frame, (w + 8, h), interpolation=cv2.INTER_CUBIC)
            cur_frame = torch.from_numpy(cur_frame)
            clip.append(cur_frame)
        # pad short clips (the last clip of a video) to 16 frames by repeating the last frame
        if len(clip) < 16:
            clip += [clip[-1]] * (16 - len(clip))
        clip = torch.stack(clip, dim=0)
        clip = clip.unsqueeze(dim=0).float()
        return clip_idx, clip
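For completeness, a minimal usage sketch (the per-video paths are hypothetical and assume the index files produced by the earlier gen_frame_index snippet):

from torch.utils.data import DataLoader

# hypothetical directories for one Ped2 test video
idx_dir = r'F:\dataset\UCSD\ped2\testing\indices\Test001'
frame_dir = r'F:\dataset\UCSD\ped2\testing\frames\Test001'

dataset = VideoDatasetOneDir(idx_dir, frame_dir, is_testing=True)
loader = DataLoader(dataset, batch_size=1, shuffle=False)

for clip_idx, clip in loader:
    print(clip_idx.item(), clip.shape)  # clip: [batch, 1, 16, H, W] grayscale tensor
    break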

Wolfybox avatar Jul 03 '20 01:07 Wolfybox

Thank you very much!!! Have a nice day! I love HIT!

WYZhang999 avatar Jul 03 '20 01:07 WYZhang999

Thanks for sharing your code. I have a little confusion: I obtained the indices from the frames using the shared code, but what about the gt files? I downloaded the dataset directly from UCSD, where the gt is also provided as frames, but the dataset structure the author showed here suggests the gt frames are transformed into one matrix file instead of clips.

I do not know the purpose of video_datasets.py

callbarian avatar Jul 03 '20 06:07 callbarian

Thanks for sharing your code. I have a little confusion: I obtained the indices from the frames using the shared code, but what about the gt files? I downloaded the dataset directly from UCSD, where the gt is also provided as frames, but the dataset structure the author showed here suggests the gt frames are transformed into one matrix file instead of clips.

I do not know the purpose of video_datasets.py

The gt file for Ped2 is named "ped2.mat"; it is an array-like structure of 12 tuples, each indicating the starting and ending frame of the anomalous event in the corresponding test video. The corresponding evaluation part lies in 'script_eval_video.py' and 'util/eval.py'. However, the format of the gt data doesn't affect 'script_testing.py', since testing and evaluation are two separate scripts. This is how I load the gt file for Ped2:

import scipy.io as sio

gt_path = r'F:\dataset\UCSD\ped2\ped2.mat'
gt_list = []
# 'gt' is an array of 12 entries; each holds the (start, end) frames of the anomalous event
gt_data = sio.loadmat(gt_path)['gt'][0]
for gt_tuple in gt_data:
    gt_tuple = gt_tuple.squeeze()
    start, end = gt_tuple[0], gt_tuple[1]
    gt_list.append((start, end))

To generate the per-frame ground-truth labels, I applied the following simple processing:

y_trues = []  # per-frame labels over all test videos, concatenated
for i in range(len(gt_list)):
    start, end = gt_list[i]
    fnum = fnum_list[i]  # fnum_list: number of frames in each test video (built elsewhere)
    y_true = [0] * start + [1] * (end - start) + [0] * (fnum - end)
    y_trues.extend(y_true)
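Building on this, a minimal sketch of how the frame-level AUC can then be computed, assuming anomaly_scores is a list of per-frame anomaly scores (e.g. normalized reconstruction errors) in the same order as y_trues:

from sklearn.metrics import roc_auc_score

# frame-level AUC: ROC AUC of per-frame anomaly scores against per-frame gt labels
auc = roc_auc_score(y_trues, anomaly_scores)
print(f'frame-level AUC: {auc:.4f}')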

As for the 'video_dataset.py' code, I'm afraid I can't explain much further, since describing my ideas for data loading would be a long story.

Wolfybox avatar Jul 03 '20 11:07 Wolfybox

Thanks a lot. It's very helpful.

WYZhang999 avatar Jul 04 '20 01:07 WYZhang999

Thank you for the comment! While reading the paper, I found that the number of index files per video should be 'frame_number - 15', since the 16-frame-long clips are built with a sliding window. If there are 180 frames, then there will be 165 clips. The center frame of each clip is evaluated against the ground truth, which is why the ground truth for the first 8 and the last 7 frames is excluded. I obtained an AUC of 86.63% for MemAE on Ped2 (Test005 was excluded since its frames were missing).

I have modified the code that you suggested

import os

import numpy as np


def gen_frame_index(clip_len=16):
    print(os.getcwd())
    vfolder = os.getcwd() + '/dataset/UCSD_P2_256/testing'
    save_dir = os.getcwd() + '/dataset/UCSD_P2_256/testing_idx'
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    for vname in os.listdir(vfolder):
        vdir = os.path.join(vfolder, vname)
        if vname == ".DS_Store":
            continue
        flist = sorted(os.listdir(vdir))
        fnum = len(flist)
        fnum_len = len(str(fnum))
        target_dir = os.path.join(save_dir, vname)
        if not os.path.exists(target_dir):
            os.makedirs(target_dir)
        # sliding window: one 16-frame clip starting at every frame except the last 15
        for clip_i in range(fnum - 15):
            start_fi = clip_i
            end_fi = start_fi + 16
            clip_list = flist[start_fi: end_fi]
            clip_list = np.array(clip_list)
            save_numpy = f'{str(clip_i).zfill(fnum_len)}.npy'
            np.save(os.path.join(target_dir, save_numpy), clip_list)

Thank you for sharing the code. It was very helpful.
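As a quick sanity check (just a sketch, using the same paths as in the function above), each video should end up with frame_count - 15 index files:

import os

vfolder = os.getcwd() + '/dataset/UCSD_P2_256/testing'
save_dir = os.getcwd() + '/dataset/UCSD_P2_256/testing_idx'
for vname in sorted(os.listdir(save_dir)):
    n_clips = len(os.listdir(os.path.join(save_dir, vname)))
    n_frames = len(os.listdir(os.path.join(vfolder, vname)))
    print(vname, n_frames, n_clips)  # expect n_clips == n_frames - 15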

callbarian avatar Jul 06 '20 09:07 callbarian

@callbarian

Actually, I think the way you prepared the dataset is more consistent with the original paper (i.e., the 16-frame-long sliding-clip strategy).

LiUzHiAn avatar Jul 08 '20 10:07 LiUzHiAn

Thank you for the comment! While reading the paper, I found that the number of index files per video should be 'frame_number - 15', since the 16-frame-long clips are built with a sliding window. [...]

I just noticed that the paper says "the normality of each frame is evaluated by the reconstruction error of the cuboid centering on it." So I guess the authors are referring to an overlapping sliding-window strategy.

Wolfybox avatar Jul 08 '20 10:07 Wolfybox

@Wolfybox

Yep, the 'cuboid centering on it' phrase might be the clue. BTW, have you guys finished the training process?

LiUzHiAn avatar Jul 08 '20 11:07 LiUzHiAn

@Wolfybox

Yep, the 'cuboid centering on it' phrase might be the clue. BTW, have you guys finished the training process?

I wrote a training script, yet it only got me an AUC of around 86% on Ped2. BTW, I noticed the author didn't use cosine similarity when computing the attention weights.
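For reference, a minimal sketch of what cosine-similarity attention over the memory items could look like (this is my reading of the paper's formulation, not the code in this repository; the function and tensor names are illustrative):

import torch.nn.functional as F


def memory_attention_weights(z, mem):
    # z:   [batch, feature_dim] encoded queries
    # mem: [mem_size, feature_dim] memory items
    # returns [batch, mem_size] weights from a softmax over cosine similarities
    sim = F.linear(F.normalize(z, dim=1), F.normalize(mem, dim=1))
    return F.softmax(sim, dim=1)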

Wolfybox avatar Jul 18 '20 01:07 Wolfybox

@Wolfybox can you share some training details, like the initial learning rate, optimizer, and total epochs? I am re-implementing this paper this week. Since the author's code has gt_labels[8:-7], I suppose he ignores the border frames of each video when evaluating AUC. Did you do that too?

sdjsngs avatar Aug 11 '20 14:08 sdjsngs

Thanks for the fruitful discussion. I got an AUC of 94% on UCSD Ped2 using the pretrained model checkpoint. The only difference from @Wolfybox's dataloader is that I simply used the torchvision transformation pipeline:

frame_trans = transforms.Compose([
    transforms.Resize([height, width]),
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason for doing this is that they assign the averaged reconstruction error of a video clip to its center frame: i.e., if the video clip starts at frame_001.jpg and ends at frame_016.jpg, then the averaged reconstruction error is assigned to frame_008.jpg.
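To make the alignment concrete, here is a minimal sketch under this interpretation (assuming clip_scores[i] is the score of the 16-frame clip starting at frame i of one test video, and gt_labels holds one label per frame of that video):

import numpy as np

fnum = len(gt_labels)                  # number of frames in the test video
assert len(clip_scores) == fnum - 15   # one sliding clip per possible start frame

# clip i covers frames i..i+15 (0-indexed), so its score is assigned to center frame i+8;
# the scored frames are therefore 8..fnum-8, which is exactly gt_labels[8:-7]
frame_scores = np.asarray(clip_scores)
frame_labels = np.asarray(gt_labels[8:-7])
assert len(frame_scores) == len(frame_labels)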

lyn1874 avatar Aug 18 '20 11:08 lyn1874

Thanks for the fruitful discussion. I got an AUC of 94% on UCSD Ped2 using the pretrained model checkpoint. [...]

Can you share the training code, please?

https://github.com/lyn1874/memAE

lyn1874 avatar Oct 20 '20 13:10 lyn1874

Hi guys, thanks for the discussion and clarification. Specifically, thanks @lyn1874 for the wonderful repo and reproduction. I uploaded an example for dataset preparation and training. Hope that can be helpful.

donggong1 avatar Apr 27 '21 07:04 donggong1

Thanks for the fruitful discussion. I got an AUC of 94% on UCSD Ped2 using the pretrained model checkpoint. [...]

Exactly! Using the PyTorch built-in transformation pipeline and the pretrained model provided by the author, I could also get 94.1277% on Ped2. Thanks very much.

gdwang08 avatar Apr 28 '21 02:04 gdwang08

@gdwang08 @donggong1 @lyn1874 In the testing script for a given video, why don't we compare the scores frame-wise? We can always save the reconstruction error, and hence the score, for every frame. Why would that be incorrect or different? Why use only the center frame, as you explained in earlier comments?

It would be great if you could define what "frame-level AUC" means. I was under the impression that we compare each frame's score, but that doesn't seem to be the case.

abhishekaich27 avatar Aug 18 '21 03:08 abhishekaich27

Thanks for the fruitful discussion. I got an AUC of 94% on UCSD Ped2 using the pretrained model checkpoint. [...]

Exactly! Using the PyTorch built-in transformation pipeline and the pretrained model provided by the author, I could also get 94.1277% on Ped2. Thanks very much.

The author is lyn1874?

huyi1998 avatar May 14 '22 05:05 huyi1998

On the Ped2 dataset, I only get 0.86 AUC. Why?

ZJX-CV avatar Sep 13 '24 02:09 ZJX-CV