GCNetwork
I cannot find the result picture
Thanks for your work! I downloaded it and wanted to test its performance, but after running the command I found no picture in my result folder. This is my command: "python test.py -data ./pic" (the 'pic' folder is where I put my test pictures, under 'left' and 'right', e.g. "./pic/left/0l.png" and "./pic/right/0r.png").
The contents of 'test_params.json' are:
{
  "pspath": "./res",
  "batch_size": 1,
  "w_path": "./model/pretrained_model_weight.hdf5",
  "max_q_size": 3,
  "verbose": 1
}
The contents of 'util_param.json' are:
{
  "crop_width": 128,
  "crop_height": 96,
  "new_max": 1,
  "new_min": -1,
  "old_max": 256,
  "old_min": 0,
  "val_ratio": 0.1,
  "file_extension": "png",
  "seed": 1234,
  "fraction": 1
}
Thank you for your help!
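For reference, here is a minimal sketch (not part of the repository) that checks whether every left image in the layout described above has a matching right image before running test.py; the '0l.png' / '0r.png' naming pattern is simply the one used in this thread.

import os

# Hypothetical sanity check for the ./pic/left and ./pic/right layout;
# adjust the suffix convention if your files are named differently.
data_root = "./pic"
left_dir = os.path.join(data_root, "left")
right_dir = os.path.join(data_root, "right")

lefts = sorted(f for f in os.listdir(left_dir) if f.endswith(".png"))
for name in lefts:
    # '0l.png' -> '0r.png' under the naming used in this thread
    right_name = name.replace("l.png", "r.png")
    if not os.path.exists(os.path.join(right_dir, right_name)):
        print("Missing right image for", name)
print("Checked", len(lefts), "left images")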
Hi, could you please post the relevant parts of the error message?
Btw, I am still training the model with the SceneFlow dataset, so the result might not be good.
Now it has this problem:
Using TensorFlow backend.
2017-09-16 10:22:56.473879: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-16 10:22:56.473910: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-16 10:22:56.473918: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-16 10:22:56.601133: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-09-16 10:22:56.601492: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.7845
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.58GiB
2017-09-16 10:22:56.601512: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-09-16 10:22:56.601521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-09-16 10:22:56.601532: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
Predict data using generator...
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 568, in data_generator_task
    generator_output = next(self._generator)
StopIteration
Traceback (most recent call last):
File "test.py", line 49, in
Hi, please set max_q_size to 1, then test your model again. Show me the error message if it still occurs.
Here is test_params.json:
{
  "data": "./pic",
  "pspath": "./prediction",
  "batch_size": 1,
  "w_path": "model_weight.hdf5",
  "max_q_size": 1,
  "verbose": 1
}
The error message:
Using TensorFlow backend.
Loading pretrained cost weight...
2017-09-18 13:54:29.217078: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-18 13:54:29.217100: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-18 13:54:29.217107: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-18 13:54:29.342172: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-09-18 13:54:29.342498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1060 6GB
major: 6 minor: 1 memoryClockRate (GHz) 1.7845
pciBusID 0000:01:00.0
Total memory: 5.93GiB
Free memory: 5.59GiB
2017-09-18 13:54:29.342513: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-09-18 13:54:29.342519: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y
2017-09-18 13:54:29.342528: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0)
Predict data using generator...
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/utils/data_utils.py", line 568, in data_generator_task
    generator_output = next(self._generator)
StopIteration
1/1 [==============================] - 1s
Process finished with exit code 0
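As a side note, since the run finishes with exit code 0 after the "1/1" progress bar, the prediction may already have been written under pspath ("./prediction"). A minimal, hypothetical sketch for inspecting whatever ended up there (it assumes PNG output; the actual format and filenames depend on test.py) could look like this:

import os
from PIL import Image

# Hypothetical viewer for whatever test.py wrote into pspath ("./prediction");
# the exact filenames and format depend on how the script saves its outputs.
pspath = "./prediction"
for fname in sorted(os.listdir(pspath)):
    if fname.lower().endswith(".png"):
        disparity = Image.open(os.path.join(pspath, fname))
        print(fname, disparity.size, disparity.mode)
        disparity.show()  # opens the image in the system viewer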
1. I use the newest version of your code.
When I run python train.py, I find that the loss keeps getting bigger.
train_params.json is as follows:
{
"weight_save_path": "model/model_weight.hdf5",
"period": 1,
"verbose": 1,
"log_save_path": "log",
"max_q_size": 1,
"save_best_only": 1,
"weight_path": "model/pretrained_model_weight.hdf5",
"learning_rate": 0.001,
"batch_size": 1,
"epochs": 1,
"epsilon": 0.00000001,
"rho": 0.9,
"decay": 0.0,
"loss_function": "mean_absolute_error",
"cost_volume_weight_save_path": "model/cost_weight.hdf5",
"cost_volume_weight_path": "model/cost_weight.hdf5",
"linear_output_weight_save_path": "model/linear_output_weight.hdf5",
"linear_output_weight_path": "model/linear_output_weight.hdf5",
"pspath": "./prediction",
"psdata":"./pic"
}
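For context, the learning_rate/rho/epsilon/decay fields above match the signature of Keras's RMSprop optimizer, so a plausible mapping of this config onto the compile step might look like the sketch below. This is only an illustration; how train.py actually consumes these fields may differ.

import json
from keras.optimizers import RMSprop

# Hypothetical illustration: map the train_params.json fields above onto a
# Keras optimizer. The real training script may wire them up differently.
with open("train_params.json") as f:
    tp = json.load(f)

optimizer = RMSprop(lr=tp["learning_rate"],
                    rho=tp["rho"],
                    epsilon=tp["epsilon"],
                    decay=tp["decay"])
# model.compile(optimizer=optimizer, loss=tp["loss_function"])  # e.g. mean_absolute_error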
Here is environment.json:
{
  "sceneflow_root": "/home/lvhao/data/",
  "driving_root": "driving",
  "driving_train": "frames_cleanpass",
  "driving_label": "disparity",
  "monkaa_root": "monkaa",
  "monkaa_train": "frames_cleanpass",
  "monkaa_label": "disparity",
  "train_all": 0,
  "train_driving": 1,
  "train_monkaa": 0
}
(I only use the Driving sample.)
2. I made some changes in test.py: I added the line "psdata = tp['psdata']" after line 23 ("pspath = tp['pspath']"), and changed line 27 to "parser.add_argument('-data', help = 'data used for prediction', default = psdata)". I put the left and right test pictures at, e.g., "./pic/left/0_left.png" and "./pic/right/0_right.png". (I did this because I want to run just 'python test.py'.)
These are all the changes I made. I don't know why it does not work! Thank you for your time!
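For clarity, here is a hedged reconstruction of the test.py modification described above. The line numbers refer to the user's copy of the script, and the variable names tp and parser are assumed from the description; the actual file may differ.

import argparse
import json

# Assumed: tp is the dict loaded from test_params.json and it contains a
# "psdata" entry pointing at the prediction data folder.
with open("test_params.json") as f:
    tp = json.load(f)

pspath = tp['pspath']
psdata = tp['psdata']          # added line: read the default data path from the config

parser = argparse.ArgumentParser()
# changed line: fall back to the config value so plain `python test.py` works
parser.add_argument('-data', help='data used for prediction', default=psdata)
args = parser.parse_args()
print(args.data)               # e.g. "./pic"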
The contents of util_params.json are:
{
  "crop_width": 128,
  "crop_height": 128,
  "val_ratio": 0.1,
  "file_extension": "png",
  "seed": 1234,
  "fraction": 1
}
- Hi, the error occurs because you feed only one data sample. However, you should see the prediction result in the directory.
- What value do you set for pspath?
- Since the model is trained on patches randomly cropped from the images, it needs many epochs before it converges. I am still training the model on the Driving and Monkaa datasets and will upload the result to GitHub as soon as possible.
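To illustrate the random-crop training mentioned in the last point, here is a minimal sketch (not the repository's code) of cropping a stereo pair and its disparity map at the same random location, using the crop_width/crop_height and seed values from util_params.json:

import numpy as np

def random_crop_stereo(left, right, disparity, crop_height, crop_width, rng):
    # Crop all three arrays at the same (row, col) offset so the pixels
    # stay aligned across the left image, right image and disparity map.
    h, w = left.shape[:2]
    row = rng.randint(0, h - crop_height + 1)
    col = rng.randint(0, w - crop_width + 1)
    return (left[row:row + crop_height, col:col + crop_width],
            right[row:row + crop_height, col:col + crop_width],
            disparity[row:row + crop_height, col:col + crop_width])

# Toy usage with random arrays standing in for a real SceneFlow sample.
rng = np.random.RandomState(1234)          # "seed" from util_params.json
left = rng.rand(540, 960, 3)
right = rng.rand(540, 960, 3)
disp = rng.rand(540, 960)
l, r, d = random_crop_stereo(left, right, disp, 128, 128, rng)
print(l.shape, r.shape, d.shape)           # (128, 128, 3) (128, 128, 3) (128, 128)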
Hi, after I copied '0l.png' so that I had both '0l.png' and '1l.png' (and did the same for the right images), the error message disappeared, and I can use PIL to show the result image. Now I want to know: why does having only one data sample trigger the error message? Thanks! I am looking forward to your training result!
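For context, here is a minimal sketch (hypothetical, not the repository's generator) of why Keras's background queue thread prints StopIteration when a finite generator runs out of batches, and why a generator that loops over its samples indefinitely avoids it:

def finite_batches(samples, batch_size=1):
    # Yields each batch exactly once; after the last batch, next() raises
    # StopIteration, which the queue-filling thread reports as in the log above.
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]

def endless_batches(samples, batch_size=1):
    # Keras-style generator: loop forever so the queue thread never exhausts it.
    while True:
        for i in range(0, len(samples), batch_size):
            yield samples[i:i + batch_size]

gen = finite_batches([0])       # one data sample, one batch
next(gen)                       # consumes the only batch
try:
    next(gen)                   # the background thread's extra next() call
except StopIteration:
    print("StopIteration: the generator is exhausted")

gen2 = endless_batches([0])
print(next(gen2), next(gen2))   # keeps yielding the same batch, no StopIteration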
666, awesome!