MGANs
How to get the np files for training?
Hello, thank you for your work. I don't quite understand how to prepare my np files, as instructed by "Please prepare your training data as np files" in the README. Is this np file just the .mat file that is loaded when I run gait.py?
I am sorry for the negligence in the README; you only need to prepare the .mat file. For example, in CASIA-B, 'data' contains the PEIs of all sequences, 'view' is the view information (1-11) of each sequence, 'label' is the identity information (1-124), and 'cov' is the walking condition (1-10). The meaning of these numbers can be found in the .ipynb file.
Sorry, I'm still confused about how to build the right .mat file. Should it contain the data produced from all the gait sequence folders (for example, 124 subjects × 11 views × 4 covs = 5456 training examples)? Could you give some details on how to produce the .mat file, such as what the dimensions of 'view', 'label', and 'cov' should be? I look forward to your reply.
In the CASIA-B dataset, the number of sequences is about 124 × 11 × 10 = 13640. Therefore, the dimension of 'label', 'view', and 'cov' is [13640], and the dimension of 'data' is [13640, channel_num, height, width].
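The layout above can be sketched as follows. This is a minimal example, assuming the four keys and shapes described in this thread; the tiny sizes (N=6, 1×64×64 PEIs) are placeholders, not the real CASIA-B dimensions, and the filename `train_toy.mat` is hypothetical.

```python
import numpy as np
from scipy.io import savemat, loadmat

# Build a toy .mat file with the four keys described above.
# 'data' is [N, channel_num, height, width]; 'label', 'view', 'cov'
# are flat arrays of length N (one entry per sequence).
N, C, H, W = 6, 1, 64, 64
mat = {
    "data": np.random.rand(N, C, H, W).astype(np.float32),  # PEIs
    "label": np.arange(1, N + 1, dtype=np.int64),  # identity (1-124 in CASIA-B)
    "view": np.full(N, 1, dtype=np.int64),         # view (1-11 in CASIA-B)
    "cov": np.full(N, 1, dtype=np.int64),          # walking condition (1-10)
}
savemat("train_toy.mat", mat)

# Verify the round trip; note that savemat stores 1-D arrays as 2-D (1, N).
loaded = loadmat("train_toy.mat")
assert loaded["data"].shape == (N, C, H, W)
assert loaded["label"].size == N
```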
I'm sorry for making you confused. I didn't know how to structure deep learning code cleanly when I wrote this project, so I strongly recommend rewriting the data-loading code using the PyTorch APIs 'torch.utils.data.Dataset' and 'torch.utils.data.DataLoader'.
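A minimal sketch of that recommendation, assuming the .mat keys described earlier in this thread ('data', 'label', 'view', 'cov'); the class name `GaitMatDataset` and the file `toy.mat` are illustrative, not part of the repository.

```python
import numpy as np
from scipy.io import loadmat, savemat
import torch
from torch.utils.data import Dataset, DataLoader

class GaitMatDataset(Dataset):
    """Wrap the .mat file described in this thread as a PyTorch Dataset."""

    def __init__(self, mat_path):
        mat = loadmat(mat_path)
        self.data = torch.from_numpy(mat["data"].astype(np.float32))
        # savemat stores 1-D arrays as (1, N); ravel them back to flat vectors.
        self.label = torch.from_numpy(mat["label"].ravel().astype(np.int64))
        self.view = torch.from_numpy(mat["view"].ravel().astype(np.int64))
        self.cov = torch.from_numpy(mat["cov"].ravel().astype(np.int64))

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        return self.data[idx], self.label[idx], self.view[idx], self.cov[idx]

# Usage with a toy file (shapes follow the reply above, sizes are placeholders):
N = 8
savemat("toy.mat", {
    "data": np.zeros((N, 1, 64, 64), dtype=np.float32),
    "label": np.arange(1, N + 1),
    "view": np.ones(N),
    "cov": np.ones(N),
})
ds = GaitMatDataset("toy.mat")
batch = next(iter(DataLoader(ds, batch_size=4, shuffle=True)))
```

With the DataLoader, batching, shuffling, and multiprocess loading come for free instead of being hand-rolled in the training script.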