ChangeNet
How to generate the 'change_dataset_train.pkl' using the downloaded VL-CMU-CD dataset?
I have downloaded the VL-CMU-CD dataset; however, the dataset folder contains only the bi-temporal .png images and GTs. How can I generate 'change_dataset_train.pkl' and 'change_dataset_val.pkl' from the downloaded VL-CMU-CD dataset, so that I can run Train.py in this project?
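For reference, Train.py appears to expect these .pkl files to already exist; a minimal sketch of how such a file could be produced is below. This assumes each entry is simply a (reference, test, label) path triple — the actual entry schema is whatever ExploreDataset.ipynb writes, so check the notebook before relying on this.

```python
import pickle

# Hypothetical schema: one (reference_img, test_img, label_img) path
# triple per sample. The real layout is defined in ExploreDataset.ipynb.
samples = [
    ("t0/0001.png", "t1/0001.png", "gt/0001.png"),
    # ... one triple per bi-temporal pair in the training split
]

with open("change_dataset_train.pkl", "wb") as f:
    pickle.dump(samples, f)
```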
@Anysomeday Hello - could you let me know how to access the VL-CMU-CD dataset? I've not found a way yet. Much thanks
You can download the dataset from:
https://drive.google.com/open?id=0B-IG2NONFdciOWY5QkQ3OUgwejQ
https://drive.google.com/file/d/1hN2xcOxg-Za1Lp-Rg7uyD5cYqLAr4_qB/view?usp=sharing
If you reproduce the experiment results using this dataset, could you please contact me to share some instructions? My email address is [email protected]. Thank you.
We did a code read-through today and believe we need to map the RGB files into the right order to build a file list that the ExploreDataset notebook can ingest. The GT is also color-coded and needs to be remapped into a form suited for training ChangeNet — we surmise any changed object maps to 1 and no-change maps to 0. We may need to piece this together ourselves. @leonardoaraujosantos, any pointers on how to create the right assets from the VL-CMU-CD dataset so it can be prepped for training? Thanks
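If that binary interpretation is right, the remapping could be as simple as the sketch below. It assumes the GT PNGs are palette-indexed with index 0 meaning no-change — that is a guess about the encoding, so verify it against the dataset's own GT reader before use.

```python
import numpy as np
from PIL import Image

def to_binary_mask(gt_path):
    """Collapse a multi-class GT mask into change (1) / no-change (0).

    Assumption: the GT PNG is palette-indexed and index 0 means
    no-change; every other index is treated as some changed object.
    """
    gt = Image.open(gt_path)
    if gt.mode != "P":
        raise ValueError("expected a palette-indexed GT PNG, got mode " + gt.mode)
    idx = np.array(gt)                  # [H, W] palette indices
    return (idx != 0).astype(np.uint8)  # 1 = any change class, 0 = no change
```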
We have a (crude) script to recreate train.txt and val.txt in a format that appears to work with ExploreDataset.ipynb for creating the pkl files; a sketch is below. However, we are still trying to understand the labeling: the raw labels do not appear to be in a form suitable for the current model output and loss calculation. Any insights would be valuable; meanwhile we are reconstructing the labels into classes in a manner we believe is consistent with the model's intent.
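A rough sketch of such a script follows. The directory layout it assumes — per-sequence folders holding RGB/1_xx.png and RGB/2_xx.png pairs plus GT/gtxx.png masks — is a guess about the downloaded dataset, so adjust the globbing to whatever your copy actually contains.

```python
import os
import glob

def write_file_list(root, out_txt, sequences):
    """Write one 'ref_img test_img gt_img' line per bi-temporal pair.

    Assumed layout (adjust to your download):
      <root>/<seq>/RGB/1_xx.png   reference (t0) image
      <root>/<seq>/RGB/2_xx.png   test (t1) image
      <root>/<seq>/GT/gtxx.png    ground-truth mask
    """
    with open(out_txt, "w") as f:
        for seq in sequences:
            for t0 in sorted(glob.glob(os.path.join(root, seq, "RGB", "1_*.png"))):
                idx = os.path.basename(t0).split("_")[1].split(".")[0]
                t1 = os.path.join(root, seq, "RGB", "2_%s.png" % idx)
                gt = os.path.join(root, seq, "GT", "gt%s.png" % idx)
                if os.path.exists(t1) and os.path.exists(gt):
                    f.write("%s %s %s\n" % (t0, t1, gt))

seqs = sorted(os.listdir("VL-CMU-CD"))
split = int(0.8 * len(seqs))  # arbitrary 80/20 split, for illustration only
write_file_list("VL-CMU-CD", "train.txt", seqs[:split])
write_file_list("VL-CMU-CD", "val.txt", seqs[split:])
```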
I think the answer to this question is in imreadgtpng.m and in the paper. The input of the model is two pictures, but the output has shape [224, 224, 11], which represents a mask for each class. I think the code is not complete; we need to add a conversion step between the label images and the real output format. I will try to code this conversion tomorrow, but any comments would be welcome!
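Something along these lines might serve as that conversion — a sketch only, assuming the label arrives as a [224, 224] integer map with values 0–10; whether the network's class order actually matches the GT palette order would still need checking against imreadgtpng.m.

```python
import torch
import torch.nn.functional as F

def labels_to_onehot(label_map, num_classes=11):
    """Convert an integer label map [H, W] (values 0..num_classes-1)
    into a one-hot mask [H, W, num_classes], matching the
    [224, 224, 11] output shape described above."""
    label = torch.as_tensor(label_map, dtype=torch.long)
    return F.one_hot(label, num_classes=num_classes).float()
```

Note that if the loss is torch.nn.CrossEntropyLoss, this conversion may be unnecessary: that loss takes the integer label map directly and the one-hot step can be skipped.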
Hello, could you please share the script to recreate train.txt and val.txt?