
Adds bridgedatav2 dataset

Open husain-zaidi opened this issue 1 year ago • 16 comments

What this does

Adds the bridgedatav2 format. Currently supports the scripted_raw directory. Dataset link

How to checkout & try? (for the reviewer)

Run from_raw_to_lerobot_format in bridgedatav2_format.py with the raw files in ./data. Example main function:

from pathlib import Path

if __name__ == "__main__":
    dataset, _, _ = from_raw_to_lerobot_format(Path("./data"), Path("./out"), 30, False, True)
    print(dataset)

Testing using the scripts:

python lerobot/scripts/push_dataset_to_hub.py --raw-dir data/bridge_data_v2 --repo-id lerobot/bridgedatav2_scripted --raw-format bridgedatav2 --dry-run 1 --local-dir data/lerobot/bridgedatav2_scripted --push-to-hub 0 --debug 1 --video 0 

export DATA_DIR='data/bridge_data_v2'
python lerobot/scripts/visualize_dataset.py --repo-id lerobot/bridgedatav2_scripted  --episode-index 0


husain-zaidi avatar May 25 '24 14:05 husain-zaidi

Really cool @husain-zaidi ! Thanks so much :)

I am wondering if we should add a function to download the raw data, like we did for pusht: https://github.com/huggingface/lerobot/blob/5ad8170c37000ff2bdeaf1fa062573fd83235f83/lerobot/common/datasets/push_dataset_to_hub/_download_raw.py#L68-L77

Did you visualize the dataset with rerun? (python lerobot/scripts/visualize_dataset.py --help)

Looks good?

How is this dataset used in the literature? Could we reproduce a SOTA result with our existing policies (ACT, diffusion policy)?

Best

Cadene avatar May 28 '24 09:05 Cadene
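The download helper suggested above could look roughly like the following. This is a minimal sketch, not the actual lerobot helper: the URL constant is hypothetical (the real BridgeData V2 host path would go there), and the function name mirrors the pusht helper in _download_raw.py only loosely.

```python
import urllib.request
import zipfile
from pathlib import Path

# Hypothetical URL -- replace with the real BridgeData V2 archive location.
BRIDGE_RAW_URL = "https://example.org/bridge_data_v2/scripted_raw.zip"


def download_raw(raw_dir: Path, url: str = BRIDGE_RAW_URL) -> Path:
    """Download and extract the raw BridgeData V2 archive into raw_dir.

    Skips the download if the archive is already on disk, so repeated
    conversion runs don't re-fetch the data.
    """
    raw_dir = Path(raw_dir)
    raw_dir.mkdir(parents=True, exist_ok=True)
    archive = raw_dir / Path(url).name
    if not archive.exists():
        urllib.request.urlretrieve(url, archive)  # fetch the archive once
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(raw_dir)  # unpack next to the archive
    return raw_dir
```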

I have added all the basic code to load the scripted_raw folder. The scripts run fine, but the images are not in order when viewed in rerun. Will fix and then publish the PR. Will train with ACT.

husain-zaidi avatar Jun 02 '24 17:06 husain-zaidi
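Out-of-order frames like the ones described above are often caused by sorting filenames lexicographically, which puts im_10.jpg before im_2.jpg. One common fix (a sketch of the general technique, not necessarily what this PR did) is a natural sort on the embedded frame index:

```python
import re
from pathlib import Path


def natural_sort_frames(paths):
    """Sort frame paths by their embedded integer index rather than
    lexicographically, so im_10.jpg comes after im_2.jpg."""
    def key(p):
        m = re.search(r"(\d+)", Path(p).stem)
        return int(m.group(1)) if m else -1
    return sorted(paths, key=key)


frames = ["im_10.jpg", "im_2.jpg", "im_1.jpg"]
print(natural_sort_frames(frames))  # ['im_1.jpg', 'im_2.jpg', 'im_10.jpg']
```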

Images are in order now, and I have tested visualizing with rerun.

husain-zaidi avatar Jun 16 '24 19:06 husain-zaidi

@Cadene was able to train an ACT model for 250 steps, got the loss down to: 1.3996

husain-zaidi avatar Jul 04 '24 17:07 husain-zaidi

> @Cadene was able to train the model for 250 steps, got the loss down to: 1.3996

Really cool! Is there anything we can do to validate this dataset? For instance, comparing against results in the literature (if any? ^^)

cc @michel-aractingi for visibility

Cadene avatar Jul 04 '24 18:07 Cadene

The paper (https://arxiv.org/pdf/2308.12952) has an evaluation of ACT on the dataset, but it is a success rate averaged over 10 trials per task, and the model was evaluated on a real robot. We'd need to build a sim environment to roll out the trained policy and get comparable scores.

So far, I think things look good in the rerun visualizer and ACT training seems to be learning

husain-zaidi avatar Jul 05 '24 13:07 husain-zaidi

Hi, I was wondering if there was a specific reason for downloading from rail-berkeley rather than using the HuggingFace version: https://huggingface.co/datasets/jxu124/OpenX-Embodiment .

Is it due to any known issues with the OpenX-Embodiment bridge version?

Best,

Sebastian

sebbyjp avatar Jul 09 '24 14:07 sebbyjp

Hi. I didn't really check that out. I had worked with the RAIL dataset earlier for a project, so I just started with the code I had and integrated it here. I can take a look at using the Hugging Face version.

husain-zaidi avatar Jul 09 '24 17:07 husain-zaidi

@sebbyjp @husain-zaidi what's the difference between OpenX and Bridgev2? By the way, @michel-aractingi is going to assist you to merge this PR. Sorry for the delay but the rest of the team is focused on other urgent stuff at the moment.

Cadene avatar Jul 09 '24 20:07 Cadene

The bridge data in OpenX (RLDS format) is converted from the original bridge_data_v2 hosted here. The OpenX version is a near-identical but lossy version of the raw data (e.g. the OpenX RLDS resized the images to 256x256).

youliangtan avatar Jul 15 '24 19:07 youliangtan

@Cadene OpenX is 60 datasets, of which bridge_v2 is a part: https://robotics-transformer-x.github.io/ . Unless you were asking about the data format difference: it's the same RLDS format, but with different action, state, and observation spaces for each dataset.

sebbyjp avatar Jul 15 '24 21:07 sebbyjp
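For readers unfamiliar with the RLDS format mentioned above: an RLDS episode is a nested sequence of steps, each holding an observation and an action, which a converter like this PR's must flatten into per-frame rows with episode and frame indices. A schematic sketch with plain dicts (the field names are illustrative, not the exact LeRobot schema):

```python
def flatten_episode(episode, episode_index):
    """Flatten an RLDS-style episode (a dict with a list of step dicts)
    into per-frame rows carrying explicit episode/frame indices."""
    rows = []
    for frame_index, step in enumerate(episode["steps"]):
        rows.append({
            "episode_index": episode_index,
            "frame_index": frame_index,
            "observation.state": step["observation"]["state"],
            "action": step["action"],
        })
    return rows


# A toy two-step episode standing in for real RLDS data.
episode = {"steps": [
    {"observation": {"state": [0.0]}, "action": [0.1]},
    {"observation": {"state": [0.1]}, "action": [0.2]},
]}
rows = flatten_episode(episode, episode_index=0)
```

The real converter would additionally decode image tensors and compute timestamps from the fps, but the index bookkeeping is the same idea.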

@youliangtan how big was the download size? I downloaded it before but it was far too small to contain the 60k trajectories the website says it has. But maybe I did something wrong.

sebbyjp avatar Jul 15 '24 21:07 sebbyjp

@youliangtan Do you know why the BridgeDataV2 is reported in the original paper to have 60K trajectories while in openX it only has 25K?

michel-aractingi avatar Jul 16 '24 07:07 michel-aractingi

Nice that you noticed the diff. The official OpenX bridge_data (version 0.1.0) hosted on gs://gresearch/robotics isn't up to date (slow update on Google's side :smiling_face_with_tear: ). It also doesn't include the optional 2nd or 3rd cameras, since the RT-X architecture only uses a single camera.

The up-to-date version (v1.0.0), which contains all 60k trajectories, is hosted on the RAIL Berkeley NFS and as a Hugging Face dataset.

youliangtan avatar Jul 16 '24 20:07 youliangtan

Since the OpenX PR would be able to load more datasets (including BridgeData), shall we proceed with that rather than maintain individual data formats for each dataset?

husain-zaidi avatar Jul 31 '24 18:07 husain-zaidi

Yes @husain-zaidi! Since BridgeData is included in OpenX, it makes more sense to add it under the OpenX format, which is unified across more than 60 other datasets. We are currently in the process of importing the OpenX datasets to LeRobot. The code is available on the branch user/michel_aractingi/2024_07_17_oxe_data_format (branched from PR#286) if you wish to have a look.

michel-aractingi avatar Jul 31 '24 23:07 michel-aractingi

Closing this in favor of #354. This was good practice in using 🤗 datasets as well as understanding robotic datasets!

husain-zaidi avatar Aug 25 '24 05:08 husain-zaidi