SSD-Tensorflow
ValueError: No data files found in ./VOCdevkit/VOC2007/voc_2007_test_*.tfrecord
Hi, when I run eval_ssd_network.py, it gives this error:
Traceback (most recent call last):
File "eval_ssd_network.py", line 346, in
Maybe you can substitute 'train' for 'test'.
I met the same problem. Have you solved it?
@Salight @maxlchina I got the same problem. Have you solved it?
Run this command line (if your data is VOC2007): python tf_convert_data.py --dataset_name=pascalvoc --dataset_dir=./voc2007_standard/VOCtest_06-Nov-2007/VOCdevkit/VOC2007/ --output_name=voc_2007_test --output_dir=./tfrecords/
I was stuck at the same point; try this:
DATASET_DIR=./VOC2007/test/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_test \
    --output_dir=${OUTPUT_DIR}
and then copy ./tfrecords/* into ./VOC2007/test/
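The copy step matters because the dataset reader globs for shards named `voc_2007_test_*.tfrecord` inside the directory given to `--dataset_dir`, and raises the ValueError from this issue's title when nothing matches. A minimal sketch of that lookup (the helper name `find_shards` is mine, not from the repo):

```python
import glob
import os

def find_shards(dataset_dir, split_name="voc_2007_test"):
    """Glob for TFRecord shards the way the dataset reader does.

    The reader builds the pattern '<split_name>_*.tfrecord' under
    dataset_dir; an empty result is exactly the 'No data files found'
    failure reported in this issue.
    """
    pattern = os.path.join(dataset_dir, split_name + "_*.tfrecord")
    return sorted(glob.glob(pattern))
```

If `find_shards("./VOC2007/test/")` returns an empty list, eval_ssd_network.py will fail the same way, so this is a quick sanity check after conversion.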
Hello, I am wondering whether you split your dataset (the .xml and .jpg files) into two different folders, test and train, inside your VOC2007.
I used an auto-split program to create some txt files in a new folder named /ImageSet/Main that list the file names of the train and test datasets, but I find they are completely ignored during training. Did you use them before?
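For what it's worth, an auto-split of that kind can be sketched as below. Note that tf_convert_data.py does not read these lists (it converts every annotation it finds under --dataset_dir), so they are only useful for deciding which files to move into the test/ folder. The function name and the 0.2 test ratio are my own assumptions:

```python
import os
import random

def write_split_lists(annotations_dir, imagesets_main_dir, test_ratio=0.2, seed=0):
    """Write ImageSets/Main/train.txt and test.txt from the .xml file names.

    Shuffles the image IDs deterministically (seed) and reserves
    test_ratio of them for the test split.
    """
    names = sorted(os.path.splitext(f)[0]
                   for f in os.listdir(annotations_dir) if f.endswith(".xml"))
    random.Random(seed).shuffle(names)
    n_test = int(len(names) * test_ratio)
    splits = {"test.txt": names[:n_test], "train.txt": names[n_test:]}
    os.makedirs(imagesets_main_dir, exist_ok=True)
    for fname, subset in splits.items():
        with open(os.path.join(imagesets_main_dir, fname), "w") as f:
            f.write("\n".join(sorted(subset)) + "\n")
    return splits
```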
I modified this about three months ago. I can't be sure, but I remember I didn't touch it after downloading and decompressing the dataset from the SSD Caffe version (https://github.com/weiliu89/caffe/tree/ssd). While debugging SSD-Tensorflow, I thought the data path was twisted in my case, so I left comments like that.
Maybe this is a way to solve it. According to https://github.com/balancap/SSD-Tensorflow/issues/18:
The dataset needs to be split into different files at the conversion stage. Pascal VOC 2007 has two different zip files, one for training and one for testing, and you can run the script on both to get the two split TFRecords datasets. The split_name parameter is just there for matching filenames from training or testing files.
So you should have the following folders:
VOC2007
├─test
│ ├─Annotations
│ ├─ImageSets
│ │ └─Main
│ └─JPEGImages
└─train
  ├─Annotations
  ├─ImageSets
  │ └─Main
  └─JPEGImages
Then manually split some images into train/test: copy the .xml files of the test set into /test/Annotations and the test images into /test/JPEGImages. Then do the same thing for train, replacing 'test' with 'train'.
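The manual copy step above can be scripted. A minimal sketch, assuming a standard VOC source layout and a list of test image IDs (the helper name `split_voc` is hypothetical):

```python
import os
import shutil

def split_voc(src_dir, dst_dir, ids):
    """Copy the .xml and .jpg files for the given image IDs from a VOC
    source tree (src_dir/Annotations, src_dir/JPEGImages) into the same
    layout under dst_dir."""
    for sub in ("Annotations", "JPEGImages"):
        os.makedirs(os.path.join(dst_dir, sub), exist_ok=True)
    for image_id in ids:
        shutil.copy(os.path.join(src_dir, "Annotations", image_id + ".xml"),
                    os.path.join(dst_dir, "Annotations"))
        shutil.copy(os.path.join(src_dir, "JPEGImages", image_id + ".jpg"),
                    os.path.join(dst_dir, "JPEGImages"))
```

Run it once with the test IDs and dst_dir=VOC2007/test, and once with the train IDs and dst_dir=VOC2007/train.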
Then run these commands:
- make test dataset:
DATASET_DIR=./VOC2007/test/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_test \
    --output_dir=${OUTPUT_DIR}
- make train dataset:
DATASET_DIR=./VOC2007/train/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_train \
    --output_dir=${OUTPUT_DIR}
Finally, go to the /tfrecords folder and copy voc_2007_test_000.tfrecord into VOC2007/test.
This is how I handled the problem. I hope it helps.
Thank you for your reply. Best wishes!
/tfrecords
python3: can't open file 'tf_convert_data.py': [Errno 2] No such file or directory
I ran into this error. I followed the steps you mentioned, but I could not find the /tfrecords folder. Is it supposed to exist already? Please let me know. Thank you in advance.