The training dataset preparation

Open qianduoduolr opened this issue 2 years ago • 29 comments

Hi, I am following your work to prepare the training data for MOVI-f. Could you please give more details about the instructions for annotation generation?

I generally follow the function create_kubric_eval_train_dataset in this link, setting train_size=(512,512) and tracks_to_sample=2000 in create_point_tracking_dataset. I also modify the 'movi_e/256x256' in this link in order to generate MOVI-f.

Is that right?

qianduoduolr avatar Jul 20 '23 12:07 qianduoduolr

Hi @qianduoduolr, these are the settings that we used to prepare 11000 training sequences for Kubric MOVI-f:

create_point_tracking_dataset(
  train_size=(512, 512),
  shuffle_buffer_size=None,
  split="train",
  batch_dims=tuple(),
  repeat=True,
  vflip=False,
  random_crop=True,
  tracks_to_sample=2048,
  sampling_stride=4,
  max_seg_id=25,
  max_sampled_frac=0.1,
  num_parallel_point_extraction_calls=16
)

Most of these sequences are repeated with different crops because we used repeat=True and random_crop=True.
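
Since repeat=True makes the dataset loop over the sequences indefinitely, the export loop has to be stopped explicitly once enough sequences have been written. A minimal sketch, assuming create_point_tracking_dataset from kubric's dataset.py is in scope (only a couple of the settings above are repeated for brevity):

import tensorflow_datasets as tfds

NUM_SEQUENCES = 11000  # how many training sequences to export

# create_point_tracking_dataset comes from kubric's challenges/point_tracking/dataset.py
ds = tfds.as_numpy(create_point_tracking_dataset(
  train_size=(512, 512),
  tracks_to_sample=2048,
  repeat=True,
  random_crop=True,
))

for i, data in enumerate(ds):
  # ... write the frames and point tracks of sequence i to disk ...
  if i + 1 >= NUM_SEQUENCES:
    break  # with repeat=True the iterator never ends, so it has to be stopped manually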

nikitakaraevv avatar Jul 20 '23 15:07 nikitakaraevv

Thanks for your reply. Did you modify the 'movi_e/256x256' in this link to movi_f?

qianduoduolr avatar Jul 20 '23 15:07 qianduoduolr

As far as I remember, we had to download the dataset first for it to work. Then, we load MOVI-f (not MOVI-e) from a local path.

nikitakaraevv avatar Jul 20 '23 15:07 nikitakaraevv

Ok, thanks, I will try later.

qianduoduolr avatar Jul 20 '23 16:07 qianduoduolr

Hi @nikitakaraevv, can you provide the link to download the MOVI-f data you used? Did you simply run create_point_tracking_dataset() with the mentioned parameters to generate annotations on the dataset? And what file structure was used for training?

FYI, it seems that the description in the main README for annotation generation is not complete. It is at least not so easy to follow, although I might be missing something here...

Thanks for your help!

Anderstask1 avatar Oct 06 '23 14:10 Anderstask1

Hello @nikitakaraevv and @ernestchu, I'm encountering the same problem that @Anderstask1 described in the previous post. Could you please take a look and let us know if there's any update or potential solution on the horizon? Your assistance would be greatly appreciated.

NEbrahimi avatar Nov 08 '23 02:11 NEbrahimi

Hi @Anderstask1 and @NEbrahimi, I apologize for missing your messages as the issue was closed. I modified this function:

https://github.com/google-research/kubric/blob/e140e24e078d5e641c4ac10bf25743059bd059ce/challenges/point_tracking/dataset.py#L992

Here are the changes I made:

import os

import numpy as np
import torch
from PIL import Image

# tfds, media (mediapy), and plot_tracks come from kubric's dataset.py itself

dataset_dir = "./kubric_movi_f"
os.makedirs(dataset_dir, exist_ok=True)

ds = tfds.as_numpy(create_point_tracking_dataset(
  train_size=(512, 512),
  shuffle_buffer_size=None,
  split="train",
  batch_dims=tuple(),
  repeat=True,
  vflip=False,
  random_crop=True,
  tracks_to_sample=2048,
  sampling_stride=4,
  max_seg_id=25,
  max_sampled_frac=0.1,
  num_parallel_point_extraction_calls=16
))

for i, data in enumerate(ds):
    print(i)
    # zero-pad the sequence index to four digits, e.g. 0000, 0001, ...
    seq_num = "0" * (4 - len(str(i))) + str(i)
    os.makedirs(os.path.join(dataset_dir, seq_num), exist_ok=True)
    os.makedirs(os.path.join(dataset_dir, seq_num, "frames"), exist_ok=True)
    # frames come out in [-1, 1]; rescale to [0, 255] before saving as PNG
    for i_frame, frame in enumerate(data["video"]):
        Image.fromarray((((frame + 1) / 2.0) * 255.0).astype("uint8")).save(
            os.path.join(dataset_dir, seq_num, "frames", f"{i_frame:03d}.png")
        )
    # save the point tracks and occlusion flags for this sequence
    traj_annots = {"coords": data["target_points"], "visibility": data["occluded"]}
    np.save(os.path.join(dataset_dir, seq_num, seq_num + ".npy"), traj_annots)

    # visualize the result with the tracks overlaid
    disp = plot_tracks(
        data["video"] * 0.5 + 0.5, data["target_points"], data["occluded"]
    )
    media.write_video(os.path.join(dataset_dir, f"{seq_num}.mp4"), disp, fps=10)

Then I just ran python dataset.py to create point tracking annotations.
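
The per-sequence output can then be read back for training. A minimal sketch, assuming the directory layout produced by the loop above:

import os

import numpy as np
from PIL import Image

seq_dir = "./kubric_movi_f/0000"  # any one of the generated sequence folders

# the dict saved above with np.save: {"coords": target_points, "visibility": occluded}
annots = np.load(os.path.join(seq_dir, "0000.npy"), allow_pickle=True).item()
coords, visibility = annots["coords"], annots["visibility"]

# frames were written as 000.png, 001.png, ... in the "frames" subfolder
frame_files = sorted(os.listdir(os.path.join(seq_dir, "frames")))
video = np.stack(
    [np.asarray(Image.open(os.path.join(seq_dir, "frames", f))) for f in frame_files]
)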

nikitakaraevv avatar Nov 13 '23 15:11 nikitakaraevv

Hi @nikitakaraevv, I'm encountering the same problem that @Anderstask1 described in the previous post. When I ran python dataset.py, I got the message shown in the attached screenshot and no data was downloaded. How can I solve this, or could you provide the link to download the training data you used? Thanks for your help!

wwsource avatar Nov 14 '23 07:11 wwsource

Hi @wwsource, I also had this problem. I had to install gsutil and download Kubric locally first with gsutil -m cp -r gs://kubric-public/tfds/movi_f ./kubric_movi_f. Then I was able to load it with:

# inside create_point_tracking_dataset, replacing the original 'movi_e/256x256' load
ds = tfds.load(
    "512x512",
    data_dir="./kubric_movi_f",
    shuffle_files=shuffle_buffer_size is not None,
    download=False,
    **kwargs,
)
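
Which name works here depends on how the gsutil copy laid out the folders, since tfds expects the layout data_dir/<dataset_name>/<config>/<version>/. A quick way to check, using the same local path as above:

import os

# if this prints something like ['512x512'], the load call above works as-is;
# if it prints ['movi_f'], the name has to be "movi_f/512x512" instead
print(os.listdir("./kubric_movi_f"))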

nikitakaraevv avatar Nov 16 '23 13:11 nikitakaraevv

Hi @nikitakaraevv,

I simply downloaded the dataset.py script and updated the main function with the modifications you suggested.

When I tried running dataset.py, I got the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__IteratorGetNext_output_types_4_device_/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes at component 0: expected [2048,24] but got [924,24]. [Op:IteratorGetNext] name:

Then I changed tracks_to_sample=2048 to tracks_to_sample=924 when calling create_point_tracking_dataset. With that, it ran for 11 iterations of the loop, but eventually failed with the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: {{function_node __wrapped__IteratorGetNext_output_types_4_device_/job:localhost/replica:0/task:0/device:CPU:0}} Incompatible shapes at component 0: expected [924,24] but got [810,24]. [Op:IteratorGetNext] name:

Do you have any suggestions on what I should do or what I might have done wrong?

Also, the file structure being generated seemed wrong for the train.py script, since no kubric_movi_f/movi_f/frames directory was being created, but rather a bunch of kubric_movi_f/0000/frames, kubric_movi_f/0001/frames, kubric_movi_f/0002/frames, etc., one per iteration of the loop. This behavior seems expected from dataset.py, since you do:

os.makedirs(os.path.join(dataset_dir, seq_num, "frames"), exist_ok=True)
for i_frame, frame in enumerate(data["video"]):
    Image.fromarray((((frame + 1) / 2.0) * 255.0).astype("uint8")).save(
        os.path.join(dataset_dir, seq_num, "frames", f"{i_frame:03d}.png")
    )

but I just wanted to check that this is the correct file structure, since it seemed off to me.

Thank you!

rogerioagjr avatar Nov 18 '23 01:11 rogerioagjr

Hi @rogerioagjr, sorry, somehow I missed your message. Honestly, I don't know why the Kubric data is failing now. You might want to open an issue here.

As for your second question, seq_num corresponds to sequence numbers: 0001, 0002, 0003,... So, it seems to be correct, right?

nikitakaraevv avatar Jan 22 '24 20:01 nikitakaraevv

Hi @nikitakaraevv. May I ask how large the training data and annotations are? And would it be possible for you to upload the generated data to ease the data preparation process? Thanks!

XiaoyuShi97 avatar Feb 27 '24 05:02 XiaoyuShi97

Hi @XiaoyuShi97, it would be better to ask the creators of Kubric to do it, as we simply generated a dataset using their provided script.

nikitakaraevv avatar Feb 28 '24 16:02 nikitakaraevv

Hi @XiaoyuShi97, as I recall it requires 500+ GB. Following the explanation earlier in this thread, generating the annotations is not that difficult in my view.

sinkers-lan avatar Mar 01 '24 06:03 sinkers-lan

Hi @nikitakaraevv, I generated the training data following the instructions, but the total size is only ~65 GB, which is much smaller than @sinkers-lan's experience. Could you please confirm the size of the training data? Thanks!

XiaoyuShi97 avatar Mar 17 '24 14:03 XiaoyuShi97

Hi @XiaoyuShi97, that seems to be correct. 65 GB must be the size of the generated annotations, while the annotations plus the TensorFlow records consume 500+ GB.

nikitakaraevv avatar Mar 17 '24 21:03 nikitakaraevv

Thanks for your kind reply!

XiaoyuShi97 avatar Mar 18 '24 06:03 XiaoyuShi97

I downloaded part of the kubric_movi_f/512x512 dataset, then ran create_point_tracking_dataset(), but got the following issue:

File "D:\A_research\tapnet-pytorch\tapnet-main\generateKubric\kubric_dataset.py", line 1036, in <module>
    main()
File "D:\A_research\tapnet-pytorch\tapnet-main\generateKubric\kubric_dataset.py", line 1016, in main
    for i, data in enumerate(ds):
File "D:\ProgramData\Anaconda3\envs\tapnet\lib\site-packages\tensorflow_datasets\core\dataset_utils.py", line 76, in _eager_dataset_iterator
    for elem in ds:
File "D:\ProgramData\Anaconda3\envs\tapnet\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 809, in __next__
    return self._next_internal()
File "D:\ProgramData\Anaconda3\envs\tapnet\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 772, in _next_internal
    ret = gen_dataset_ops.iterator_get_next(
File "D:\ProgramData\Anaconda3\envs\tapnet\lib\site-packages\tensorflow\python\ops\gen_dataset_ops.py", line 3050, in iterator_get_next
    _result = pywrap_tfe.TFE_Py_FastPathExecute(
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 274: invalid continuation byte
2\1.0.0\movi_f-train.tfrecord-00021-of-01024 GetLastError: 2

Why does this happen?

pubyLu avatar May 26 '24 10:05 pubyLu

Thanks for the great work, and thanks @nikitakaraevv for sharing these instructions here. It would perhaps be useful to cross-reference them on the kubric repo as well.

shivanimall avatar Jun 11 '24 15:06 shivanimall

Hi @nikitakaraevv, I downloaded the TensorFlow records to local disk (only the 512x512 version of movi_f, which takes about ~552 GB) and arranged them as shown in the attached screenshot. After I run python dataset.py, I get the errors shown in the attached screenshots (my dataset.py is also attached). I must be messing up the directory or dataset name; could you give some hints about it?

qsisi avatar Jun 27 '24 03:06 qsisi

It has been solved.

qsisi avatar Jul 01 '24 09:07 qsisi

Hi @nikitakaraevv, @qsisi Thank you for sharing your excellent work. I've been attempting to create a training dataset by following your instructions, which involved downloading the dataset and modifying the conversion code you provided. However, when I run the dataset.py script, I encounter an error. Do you have any insights regarding the error I'm experiencing? The error message is as follows:

(screenshot of the error message)

Any assistance you could provide would be greatly appreciated. Thank you for your time.

whikwon avatar Jul 28 '24 00:07 whikwon

I remember this kind of issue being caused either by an incompatible tensorflow_datasets version or by a wrong directory or dataset name. Try changing the tensorflow_datasets version, and try a different dataset name such as "512x512" or "movi_f/512x512".
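
For example, something like this (a sketch; data_dir is assumed to point at the local gsutil download mentioned earlier in the thread):

import tensorflow_datasets as tfds

# "movi_f/512x512" instead of just "512x512"; download=False keeps tfds
# from trying to fetch anything from GCS again
ds = tfds.load(
    "movi_f/512x512",
    data_dir="./kubric_movi_f",
    download=False,
)
print(ds.keys())  # lists the available splits if the local copy is found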

qsisi avatar Jul 29 '24 02:07 qsisi

@qsisi Setting a different dataset name, "movi_f/512x512", works. Thank you so much.

whikwon avatar Jul 30 '24 03:07 whikwon

Hi @nikitakaraevv, I followed your instructions and it is generating annotations now, but the generation speed is quite slow on my end, roughly 2 seconds per iteration. Is it the same on your side? Or any speed-up suggestions?

littlepure2333 avatar Oct 01 '24 04:10 littlepure2333

Hi @nikitakaraevv , I'm following your CoTracker3, and I found that the previous Kubric dataset for CoTracker2 is not suitable for CoTracker3, as CoTracker3 requires a seq_len of 64, while CoTracker2 only needs a seq_len of 24. I am wondering how to create a Kubric dataset with 64 frames. Thank you for your guidance!

sinkers-lan avatar Oct 27 '24 10:10 sinkers-lan

Hi @littlepure2333 @sinkers-lan, this is how we rendered the dataset used for training CoTracker3: https://github.com/facebookresearch/co-tracker/issues/130#issuecomment-2511130424

nikitakaraevv avatar Dec 02 '24 10:12 nikitakaraevv

Hi, if I set repeat=True, does it mean that data generation will loop indefinitely? When should I stop it?

jin-sss avatar Dec 02 '24 20:12 jin-sss

Perhaps you can refer to this repo, which contains pre-processed data for Kubric MOVI-f: 60+ GB of videos as well as annotation information (not verified): https://github.com/16lemoing/dot

Air1000thsummer avatar Dec 19 '24 08:12 Air1000thsummer