LSTA
Question for GTEA61 Dataset
Hi @swathikirans
For the EPIC-KITCHENS dataset, as described in your paper, the outputs are the verb, noun, and activity classes.
For the GTEA61 dataset, however, I found that the output is only the activity class. In Table 1 of the Supplementary Document, you also evaluate the action and object separately on GTEA61. Could you please tell me how to split the activity class into action and object?
For example in GTEA61:
- Activity: close_chocolate, Action: close, Object: chocolate
- Activity: pour_ketchup,hotdog,bread, Action: pour, Object: ketchup, hotdog, bread

For multi-object activities, how does the network output a different number of objects for different clips?
We trained the network with action-level supervision and, during inference, separated each action into a verb-noun pair. This was done to analyze how the network improved on verbs and nouns. We combined multiple objects into one single class, for example ketchup_hotdog_bread.
Thanks! So how can we separate the action-level output into verb-noun pairs?
The activity labels are of the format "verb_nounGroup". I mapped the activity labels (from the output of the network) to their names and split them at the first '_' to get the verb and noun classes.
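The split described above can be sketched in a few lines. This is a minimal illustration, assuming the label names follow the "verb_nounGroup" format from the thread (the function name is mine, not from the repo):

```python
def split_activity(label):
    # Activity labels are "verb_nounGroup"; splitting at the first
    # underscore yields the verb and the (possibly multi-object) noun group.
    verb, noun_group = label.split('_', 1)
    return verb, noun_group

print(split_activity("close_chocolate"))           # ('close', 'chocolate')
print(split_activity("pour_ketchup,hotdog,bread")) # ('pour', 'ketchup,hotdog,bread')
```

Note that a multi-object label such as "pour_ketchup,hotdog,bread" stays a single noun-group class; the commas are part of the class name, not separate outputs.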
Hi @swathikirans, I am very glad to have read your paper on LSTA, but when I ran your source code on Windows I hit a problem at line 213 in main_run.py: `avg_loss = epoch_loss / iterPerEpoch` raises `ZeroDivisionError: division by zero`. I suppose the dataset used in your paper, such as 'gtea_61', needs to be downloaded in advance to the root directory? Looking forward to your reply, thanks.
@KrisLee512 I am getting the same error. I think the data needs to be rearranged before the code will run. Have you found a solution?
@KrisLee512 and @i-amgeek, you do indeed need to download the data beforehand. You can check this repo for information on setting up the GTEA61 dataset: https://github.com/swathikirans/ego-rnn
Hi @swathikirans
- I am very glad to have read your papers on egocentric action recognition. I trained the LSTA_rgb model on GTEA61, but my accuracy never reaches 74.14%; my best result is 62%. I will try the other datasets you mentioned in ego-rnn.
- I tried to apply LSTA to flow following your paper and the "RGB" recipe, but it did not converge. Could you tell me how to apply LSTA to flow?
- Due to its smaller size and class imbalance, the GTEA61 dataset is a bit tricky. You may try the EGTEA dataset, which is large enough.
- The results reported in the paper use a standard ConvLSTM for optical flow, not LSTA.
@GinTsuki9349 After running main_rgb.py, I only get an accuracy of just over 40%. How did you get above 60%?
As I recall, I got that result after running the training several times. But I'm sorry I can't remember the details, because it has been too long and the relevant records on the server have also been cleared. Maybe there were some special settings or tricks.