
ValueError: No data files found in ./VOCdevkit/VOC2007/voc_2007_test_*.tfrecord

Open maxlchina opened this issue 6 years ago • 10 comments

Hi, when I run `eval_ssd_network.py`, it gives this error:

```
Traceback (most recent call last):
  File "eval_ssd_network.py", line 346, in <module>
    tf.app.run()
  File "/home/maxl/anaconda3/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "eval_ssd_network.py", line 136, in main
    dataset.data_sources, FLAGS.eval_dir)
  File "/home/maxl/SSD-Tensorflow-master/tf_utils.py", line 82, in print_configuration
    print_config(None)
  File "/home/maxl/SSD-Tensorflow-master/tf_utils.py", line 78, in print_config
    data_files = parallel_reader.get_data_files(data_sources)
  File "/home/maxl/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/slim/python/slim/data/parallel_reader.py", line 311, in get_data_files
    raise ValueError('No data files found in %s' % (data_sources,))
ValueError: No data files found in ./VOCdevkit/VOC2007/voc_2007_test_*.tfrecord
```
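The `ValueError` is raised by slim's `parallel_reader.get_data_files` when the glob pattern matches no files on disk. As a quick sanity check, you can run the same glob yourself before launching the script. This is a minimal sketch; the pattern is taken from the error message, so adjust it to your own layout:

```python
import glob

# Pattern taken from the error message; adjust to your layout.
pattern = "./VOCdevkit/VOC2007/voc_2007_test_*.tfrecord"

matches = sorted(glob.glob(pattern))
if not matches:
    # An empty result here is exactly what makes get_data_files raise ValueError.
    print("No data files found in %s" % pattern)
else:
    for path in matches:
        print(path)
```

If this prints nothing but the "No data files found" line, the `.tfrecord` shards either were never generated or live in a different directory than the one `eval_ssd_network.py` is pointed at.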

maxlchina avatar Apr 17 '18 07:04 maxlchina

Maybe you can substitute `train` for `test`.

changtingwai avatar Jun 20 '18 08:06 changtingwai

I met the same problem,have you solved it?

Salight avatar Jun 25 '18 09:06 Salight

@Salight @maxlchina I got the same problem. Have you solved it?

tgbamg avatar Jul 17 '18 00:07 tgbamg

Run this command (if your data is VOC2007):

```
python tf_convert_data.py --dataset_name=pascalvoc --dataset_dir=./voc2007_standard/VOCtest_06-Nov-2007/VOCdevkit/VOC2007/ --output_name=voc_2007_test --output_dir=./tfrecords/
```

SmallMunich avatar Jul 19 '18 11:07 SmallMunich

I was in the same situation; try this:

```
DATASET_DIR=./VOC2007/test/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_test \
    --output_dir=${OUTPUT_DIR}
```

and then copy `./tfrecords/*` into `./VOC2007/test/`.

TaehoLi avatar Jan 28 '19 11:01 TaehoLi

> I was in the same situation; try this: `DATASET_DIR=./VOC2007/test/ OUTPUT_DIR=./tfrecords python tf_convert_data.py --dataset_name=pascalvoc --dataset_dir=${DATASET_DIR} --output_name=voc_2007_test --output_dir=${OUTPUT_DIR}` and then copy `./tfrecords/*` into `./VOC2007/test/`

Hello, I am wondering whether you split your dataset (the .xml and .jpg files) into two different folders, like test and train, inside VOC2007.

I used an auto-split script to create .txt files listing the file names of the train and test datasets in a new folder named /ImageSets/Main, but I found those files are completely unused during training. Did you use them before?

xinkai-jiang avatar May 01 '19 07:05 xinkai-jiang

> Hello, I am wondering whether you split your dataset (the .xml and .jpg files) into two different folders, like test and train, inside VOC2007.
>
> I used an auto-split script to create .txt files listing the file names of the train and test datasets in a new folder named /ImageSets/Main, but I found those files are completely unused during training. Did you use them before?

I wrote that about three months ago, so I can't be sure, but I remember I didn't touch anything after downloading and decompressing the dataset from the SSD Caffe version (https://github.com/weiliu89/caffe/tree/ssd). While debugging SSD-Tensorflow, I thought the data path was twisted in my case, so I left comments like that.

TaehoLi avatar May 07 '19 13:05 TaehoLi

Maybe this is a way to solve it. According to https://github.com/balancap/SSD-Tensorflow/issues/18, which says:

The dataset needs to be split into different files at the conversion stage. Pascal VOC 2007 has two different zip files, one for training and one for testing, and you can run the script on both to get the two split TFRecords datasets. The split_name parameter is just there for matching filenames from training or testing files.


So you should have the following folders:

```
VOC2007
├─test
│  ├─Annotations
│  ├─ImageSets
│  │  └─Main
│  └─JPEGImages
└─train
   ├─Annotations
   ├─ImageSets
   │  └─Main
   └─JPEGImages
```

Then manually split the images into train/test: copy the .xml files of the test split into /test/Annotations, and copy the test images into /test/JPEGImages. Do the same thing for train, replacing 'test' with 'train'.
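The manual split described above can be sketched in Python. This is only an illustration under assumptions: the folder names follow the VOC layout shown, and the 80/20 train/test ratio is arbitrary; adjust both to your dataset.

```python
import os
import random
import shutil

def split_voc(root="VOC2007", ratio=0.8, seed=0):
    """Copy .jpg/.xml pairs from root into root/train and root/test."""
    src_img = os.path.join(root, "JPEGImages")
    src_xml = os.path.join(root, "Annotations")
    # Collect base names (without extension) of all images.
    names = sorted(os.path.splitext(f)[0]
                   for f in os.listdir(src_img) if f.endswith(".jpg"))
    random.Random(seed).shuffle(names)
    cut = int(ratio * len(names))
    subsets = {"train": names[:cut], "test": names[cut:]}
    for subset, subset_names in subsets.items():
        img_dir = os.path.join(root, subset, "JPEGImages")
        xml_dir = os.path.join(root, subset, "Annotations")
        os.makedirs(img_dir, exist_ok=True)
        os.makedirs(xml_dir, exist_ok=True)
        for name in subset_names:
            shutil.copy(os.path.join(src_img, name + ".jpg"), img_dir)
            shutil.copy(os.path.join(src_xml, name + ".xml"), xml_dir)
    return subsets

# Only run when the expected source layout is present.
if os.path.isdir(os.path.join("VOC2007", "JPEGImages")):
    split_voc()
```

Note this copies rather than moves files, so the original flat VOC2007 folders stay intact in case you need to re-split.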


Then run this code:

  • make the test dataset:

```
DATASET_DIR=./VOC2007/test/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_test \
    --output_dir=${OUTPUT_DIR}
```

  • make the train dataset:

```
DATASET_DIR=./VOC2007/train/
OUTPUT_DIR=./tfrecords
python tf_convert_data.py \
    --dataset_name=pascalvoc \
    --dataset_dir=${DATASET_DIR} \
    --output_name=voc_2007_train \
    --output_dir=${OUTPUT_DIR}
```

Finally, go to the /tfrecords folder and copy voc_2007_test_000.tfrecord into VOC2007/test.
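The final copy step can be sketched in Python so every generated test shard is picked up, not just the first one. The source and destination paths here are the ones used in this thread; adjust them to your layout:

```python
import glob
import os
import shutil

def copy_shards(src_glob, dst_dir):
    """Copy every TFRecord shard matching src_glob into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    shards = sorted(glob.glob(src_glob))
    for shard in shards:
        shutil.copy(shard, dst_dir)
    return shards

# Paths from the thread; adjust to your layout.
copy_shards("./tfrecords/voc_2007_test_*.tfrecord", "./VOC2007/test/")
```

After this, the glob that `eval_ssd_network.py` evaluates should find the shards and the `ValueError` should go away.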

This is how I handled the problem. I hope it helps.

DamonDBT avatar Aug 23 '19 04:08 DamonDBT

Thank you for your reply. Best wishes!



chenzhengdeeplearning avatar Aug 29 '19 05:08 chenzhengdeeplearning

```
python3: can't open file 'tf_convert_data.py': [Errno 2] No such file or directory
```

I got this error when I followed the steps you mentioned. I also did not find the /tfrecords folder; is it supposed to exist already? Please let me know. Thank you in advance!

Neetusigger avatar May 03 '20 01:05 Neetusigger