
How to prepare the data for projects/mbt?

Open wentaozhu opened this issue 2 years ago • 11 comments

The dataset processing is unclear. The readme only says "Additionally, pre-process the training dataset in the same way as done by the ViViT project here," and ViViT in turn points to the HMDB pre-processing in DMVR.

From this information alone, it is pretty hard for users to run the code. The only data processing provided is for HMDB, but MBT uses RGB + spectrograms. Could you please provide a bit more information on how to process the data so that users can run the code?

Thank you so much!

wentaozhu avatar Mar 31 '22 04:03 wentaozhu

Thank you so much, Mostafa! @MostafaDehghani and @anuragarnab

wentaozhu avatar Apr 01 '22 16:04 wentaozhu

Hi,

The audio for all datasets is sampled at 16kHz and converted to mono channel. We then extract log mel spectrograms with a frequency dimension of 128 computed using a 25ms Hamming window with hop length 10ms. This gives us an input of size 128 × 100t for t seconds of audio. No other processing is applied to the spectrograms before they are stored in tfrecords.
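
For reference, a minimal sketch of an equivalent computation using librosa (an assumption on my part; this is not the pipeline actually used to build the tfrecords) would look roughly like this:

```python
# Sketch of the spectrogram computation described above, assuming librosa.
import librosa
import numpy as np

def log_mel_spectrogram(wav_path):
  # 16 kHz, mono.
  audio, sr = librosa.load(wav_path, sr=16000, mono=True)
  # 25 ms Hamming window (400 samples), 10 ms hop (160 samples), 128 mel bins.
  mel = librosa.feature.melspectrogram(
      y=audio, sr=sr, n_fft=400, win_length=400, hop_length=160,
      window='hamming', n_mels=128)
  # Log compression; the exact reference/offset is not specified in the paper.
  log_mel = np.log(mel + 1e-6)
  # Shape: (128, ~100 * t) for t seconds of audio (100 hops per second at 10 ms).
  return log_mel
```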

The details are described in Sec. 4.2 here: https://arxiv.org/pdf/2107.00135.pdf. Please let me know if you have any more questions!

a-nagrani avatar Apr 11 '22 14:04 a-nagrani

Could you provide a script for generating the corresponding tfrecord files from the AudioSet dataset?

LogicSense1 avatar Apr 21 '22 12:04 LogicSense1

Hi, sorry, but we can't release our data processing scripts. However, you can follow the instructions here: https://github.com/deepmind/dmvr/tree/master/examples to create tfrecord files in the correct DMVR format.
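
As a rough illustration only, packing one clip into a tf.train.SequenceExample could look like the sketch below. The feature keys are placeholders/assumptions; the exact keys and layout must match the DMVR examples and the dataset reader in projects/mbt.

```python
# Hypothetical sketch of serializing one clip into a tf.train.SequenceExample.
# The keys 'image/encoded', 'melspec/feature/floats' and 'clip/label/index' are
# assumptions -- verify them against the DMVR examples and projects/mbt.
import tensorflow as tf

def make_sequence_example(jpeg_frames, spectrogram, label_indices):
  """jpeg_frames: list of JPEG bytes; spectrogram: [128, T] array; label_indices: list of ints."""
  seq = tf.train.SequenceExample()
  # Per-frame JPEG-encoded RGB.
  for jpeg in jpeg_frames:
    seq.feature_lists.feature_list['image/encoded'].feature.add(
        ).bytes_list.value.append(jpeg)
  # Spectrogram stored one time step at a time (128 floats per step).
  for spec_step in spectrogram.T:
    seq.feature_lists.feature_list['melspec/feature/floats'].feature.add(
        ).float_list.value.extend(float(v) for v in spec_step)
  # Multi-label class indices in the context.
  seq.context.feature['clip/label/index'].int64_list.value.extend(label_indices)
  return seq

# Each serialized example then goes into a TFRecord shard:
# writer.write(make_sequence_example(...).SerializeToString())
```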

a-nagrani avatar Apr 25 '22 19:04 a-nagrani

Hi all, I have a few questions about reproducing this project as well:

  1. I suppose this means that we have to download the YouTube videos ourselves and apply the pre-processing as per https://github.com/deepmind/dmvr/tree/master/examples - is that correct?

  2. Also, to expand on @wentaozhu's point, MBT supports RGB as well as spectrograms, but in your projects/mbt/configs/audioset/balanced_audioset_base.py config file, config.dataset_configs.tables only seems to contain spectrogram tfrecords, i.e. balanced_train.se.melspec.tfrecord.sst@1024. How can the RGB component of the data be integrated into this config file?

  3. It is also not very clear to me how the dataset split is generated. For training and validation, I assume we use the .csv files provided by AudioSet, but they make no mention of a test set. Do we just use the same records as the validation set for the test set as well?

  4. Finally, a minor query about the naming convention of the tfrecord files: what is the significance of the .sst@1024 at the end of each record? This is from the config file I mentioned earlier. Does this have something to do with the number of shards the dataset is split into?
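
If it helps make question 4 concrete, my rough guess is that @1024 is a sharded-file spec, i.e. the table is split into 1024 shard files. Something like the sketch below is what I have in mind (plain TFRecord shards used purely as an illustration; the .sst container itself appears to be a different, internal format):

```python
# Sketch of writing a table as 1024 shard files, illustrating what a shard
# spec like "@1024" is assumed to mean.
import tensorflow as tf

def write_sharded(serialized_examples, base_path, num_shards=1024):
  """serialized_examples: list of serialized SequenceExample bytes."""
  for shard_idx in range(num_shards):
    shard_path = f'{base_path}-{shard_idx:05d}-of-{num_shards:05d}'
    with tf.io.TFRecordWriter(shard_path) as writer:
      # Round-robin assignment of examples to shards.
      for example in serialized_examples[shard_idx::num_shards]:
        writer.write(example)
```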

Sorry for the barrage of questions; this is my first foray into deep learning research and I'm trying to get an understanding of best practices, etc.!

Thank you!

jayathungek avatar Aug 04 '22 09:08 jayathungek

Hello everyone,

I also have some questions about the data preparation process. I couldn't find answers to these in the paper.

  1. To get the log mel spectrograms from the audio data, when converting from amplitude to dB, what reference point was used (1, max, median, ...)? (See the small example after this list.)

  2. I didn't see it mentioned in the paper, but I've seen in the code that there is optional zero-centering for both the RGB frames and the spectrogram. I wanted to know whether this was used for the data that trained the released model checkpoints.

  3. Lastly, if possible, where can we get the build configurations that the checkpoints expect from AVTFRecordDatasetFactory? There seems to be a mismatch that has caused some confusion. For example, when we create the dataset with AVTFRecordDatasetFactory, the default num_spec_frame is 5, but the checkpoint seems to expect, and the paper mentions, 8 seconds of sampled audio. I might have seen additional mismatches as well, so I would like to be sure.
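
Regarding question 1, this is what I mean by the reference choice, using librosa's API just to make the question concrete (whether either of these matches what was actually done is exactly what I'm asking):

```python
# The two common dB-conversion references I'm asking about (librosa API;
# librosa.amplitude_to_db is analogous for magnitude spectrograms).
import librosa
import numpy as np

mel = np.abs(np.random.randn(128, 800)).astype(np.float32)  # placeholder power spectrogram

log_mel_abs = librosa.power_to_db(mel, ref=1.0)     # absolute reference of 1.0
log_mel_max = librosa.power_to_db(mel, ref=np.max)  # relative to the loudest bin
```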

Sorry for piling on more questions :) I am warming up to these topics, so if you want to point me to additional resources, that would be great as well.

Thanks a lot!

uck16m1997 avatar Aug 06 '22 17:08 uck16m1997

Hi @uck16m1997, have you resolved these issues?

yangjiangeyjg avatar Sep 12 '22 09:09 yangjiangeyjg

Hi @jayathungek, have you resolved these issues?

yangjiangeyjg avatar Sep 12 '22 09:09 yangjiangeyjg

Hi @LogicSense1, have you resolved these issues?

yangjiangeyjg avatar Sep 12 '22 09:09 yangjiangeyjg

Hi @a-nagrani, could you release the processed data instead?

yangjiangeyjg avatar Sep 13 '22 08:09 yangjiangeyjg

@wentaozhu Following up on this, any updates?

BDHU avatar Oct 18 '22 21:10 BDHU

No, sorry, we cannot release the processed data either.

a-nagrani avatar Oct 18 '22 21:10 a-nagrani