wtalc-pytorch
ActivityNet 1.2 feature
Hi, will you release the feature of ActivityNet 1.2?
@sujoyp @zhenyingfang Hi! I recently re-implemented the W-TALC code on ActivityNet v1.2 and obtained a classification mAP of 94%, but the detection performance is poor. Why is that? Is it a parameter-setting issue?
@Rheelt We smoothed the output of our model before applying the threshold for detection on ActivityNet 1.2. Good to hear that you obtained better results than what we reported in the paper.
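For reference, a minimal sketch of what "smoothing before thresholding" looks like with scipy's savgol_filter (the scores and window choice here are illustrative, not the paper's exact values):

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy per-snippet activation scores for one class (illustrative values).
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.15, 0.85, 0.1], dtype=float)

# savgol_filter needs an odd window length no longer than the sequence.
window = len(scores) - (1 - len(scores) % 2)  # largest odd length <= len(scores)
smoothed = savgol_filter(scores, window, polyorder=1)

# Threshold halfway between the min and max of the smoothed sequence.
threshold = np.max(smoothed) - (np.max(smoothed) - np.min(smoothed)) * 0.5
detections = smoothed > threshold
```

Smoothing suppresses the frame-to-frame jitter in the activation sequence, so the threshold produces a few contiguous segments instead of many short fragments.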
@zhenyingfang you can find the features in the link under the Data section.
@sujoyp Thanks for your reply; I appreciate your help very much. I downloaded the ActivityNet v1.2 features and used the following settings to get mAP@0.5 = 16.62.
def smooth(v):
    l = min(351, len(v))
    l = l - (1 - l % 2)
    if len(v) <= 3:
        return v
    return savgol_filter(v, l, 1)

threshold = np.max(tmp) - (np.max(tmp) - np.min(tmp)) * 0.5
But when I change the threshold to threshold = np.max(tmp) - (np.max(tmp) - np.min(tmp)) * 0.7, I get mAP@0.5 = 33.99. I think this is because action instances in ActivityNet v1.2 occupy most of each video, and each video contains only about 1.5 action instances on average, so turning the threshold down gives better results. Is there a trick to the parameter settings to reproduce the results in the paper?
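The effect of the threshold fraction on segment extent can be seen in a small sketch (the grouping helper below is illustrative, not the repo's code):

```python
import numpy as np

def mask_to_segments(mask):
    """Group a boolean mask into (start, end) index segments, end exclusive."""
    segments, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(mask)))
    return segments

# Toy smoothed score sequence for one class.
scores = np.array([0.2, 0.6, 0.9, 0.8, 0.7, 0.5, 0.3, 0.2])

# Compare the two threshold fractions discussed above.
results = {}
for frac in (0.5, 0.7):
    thr = np.max(scores) - (np.max(scores) - np.min(scores)) * frac
    results[frac] = mask_to_segments(scores > thr)
```

With frac = 0.7 the threshold is lower, so the detected segment extends further, which matches the intuition that ActivityNet actions cover most of the video.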
@Rheelt we actually used the savgol_filter, which is already there in the smooth() function of detectionMAP.py
Hi, when I train on ActivityNet v1.2, the mIoU I get is NaN. How can I solve this problem?
@sunsiyang2015 First, check whether the detection weights are right. Then check the evaluation code.
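As a quick sanity check (an illustrative sketch, not the repo's code), one can verify that the model's scores contain no NaN/Inf values before running the evaluation, since a single NaN score propagates into a NaN mIoU:

```python
import numpy as np

def check_finite(name, arr):
    """Raise early if an array of scores contains NaN or Inf."""
    arr = np.asarray(arr, dtype=float)
    if not np.all(np.isfinite(arr)):
        raise ValueError(f"{name} contains NaN/Inf values")
    return arr

# Passes silently on clean scores; raises on NaN before they poison the metric.
scores = check_finite("detection scores", [0.1, 0.5, 0.9])
```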
I use the same evaluation code as detectionMAP.py.
I use the same evaluation code as detectionMAP.py, but the result is NaN.
@sujoyp Hi! Thanks for your excellent work. Can you share the window-length parameter of savgol_filter?