
Facing the following issue while executing the eval script of liteflownet-tf2

dilipv09 opened this issue 4 years ago · 3 comments

```
2020-04-15 15:11:29.506122: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-15 15:11:29.522529: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.523168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P2000 major: 6 minor: 1 memoryClockRate(GHz): 1.4805 pciBusID: 0000:01:00.0
2020-04-15 15:11:29.523294: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-15 15:11:29.524134: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-15 15:11:29.524814: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-15 15:11:29.524966: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-15 15:11:29.525879: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-15 15:11:29.526613: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-15 15:11:29.528157: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-15 15:11:29.528250: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.528731: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.529145: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-04-15 15:11:29.529463: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-04-15 15:11:29.555211: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3696000000 Hz
2020-04-15 15:11:29.556537: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x565252f88ad0 executing computations on platform Host. Devices:
2020-04-15 15:11:29.556551: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
2020-04-15 15:11:29.616349: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.616989: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x565252fbbe20 executing computations on platform CUDA. Devices:
2020-04-15 15:11:29.617002: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Quadro P2000, Compute Capability 6.1
2020-04-15 15:11:29.617136: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.617879: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P2000 major: 6 minor: 1 memoryClockRate(GHz): 1.4805 pciBusID: 0000:01:00.0
2020-04-15 15:11:29.617904: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-15 15:11:29.617914: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-15 15:11:29.617938: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-15 15:11:29.617959: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-15 15:11:29.617966: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-15 15:11:29.617991: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-15 15:11:29.618034: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-15 15:11:29.618128: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.618708: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.619273: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-04-15 15:11:29.619298: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-15 15:11:29.620168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-15 15:11:29.620178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2020-04-15 15:11:29.620186: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2020-04-15 15:11:29.620361: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.620937: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.621506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4009 MB memory) -> physical GPU (device: 0, name: Quadro P2000, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From /home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
2020-04-15 15:11:29.888289: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.888938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: Quadro P2000 major: 6 minor: 1 memoryClockRate(GHz): 1.4805 pciBusID: 0000:01:00.0
2020-04-15 15:11:29.888976: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-15 15:11:29.888989: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-15 15:11:29.888996: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-15 15:11:29.889004: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-15 15:11:29.889011: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-15 15:11:29.889019: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-15 15:11:29.889031: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-15 15:11:29.889071: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.889662: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.890228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-04-15 15:11:29.890247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-15 15:11:29.890252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0
2020-04-15 15:11:29.890256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N
2020-04-15 15:11:29.890431: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.891040: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-15 15:11:29.891619: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4009 MB memory) -> physical GPU (device: 0, name: Quadro P2000, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-04-15 15:11:32.498632: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key flownet/feature_extractor/sequential/conv2d/bias not found in checkpoint
Traceback (most recent call last):
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1365, in _do_call
    return fn(*args)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1350, in _run_fn
    target_list, run_metadata)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1443, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Key flownet/feature_extractor/sequential/conv2d/bias not found in checkpoint
	 [[{{node save/RestoreV2}}]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1290, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key flownet/feature_extractor/sequential/conv2d/bias not found in checkpoint
	 [[node save/RestoreV2 (defined at /home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]

Original stack trace for 'save/RestoreV2':
  File "eval.py", line 42, in <module>
    saver = tf.train.Saver()
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 828, in __init__
    self.build()
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
    build_restore=build_restore)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
    restore_sequentially, reshape)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 328, in _AddRestoreOps
    restore_sequentially)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 575, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_io_ops.py", line 1696, in restore_v2
    name=name)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 793, in _apply_op_helper
    op_def=op_def)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3360, in create_op
    attrs, op_def, compute_device)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3429, in _create_op_internal
    op_def=op_def)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1751, in __init__
    self._traceback = tf_stack.extract_stack()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1300, in restore
    names_to_keys = object_graph_key_mapping(save_path)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1618, in object_graph_key_mapping
    object_graph_string = reader.get_tensor(trackable.OBJECT_GRAPH_PROTO_KEY)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 915, in get_tensor
    return CheckpointReader_GetTensor(self, compat.as_bytes(tensor_str))
tensorflow.python.framework.errors_impl.NotFoundError: Key _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "eval.py", line 43, in <module>
    saver.restore(sess, args.model)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 1306, in restore
    err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Key flownet/feature_extractor/sequential/conv2d/bias not found in checkpoint
	 [[node save/RestoreV2 (defined at /home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]

Original stack trace for 'save/RestoreV2':
  File "eval.py", line 42, in <module>
    saver = tf.train.Saver()
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 828, in __init__
    self.build()
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 840, in build
    self._build(self._filename, build_save=True, build_restore=True)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 878, in _build
    build_restore=build_restore)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 508, in _build_internal
    restore_sequentially, reshape)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 328, in _AddRestoreOps
    restore_sequentially)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/training/saver.py", line 575, in bulk_restore
    return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_io_ops.py", line 1696, in restore_v2
    name=name)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 793, in _apply_op_helper
    op_def=op_def)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3360, in create_op
    attrs, op_def, compute_device)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3429, in _create_op_internal
    op_def=op_def)
  File "/home/abaghaie/anaconda2/envs/liteFlow/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1751, in __init__
    self._traceback = tf_stack.extract_stack()
```
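
For reference, it may help to compare the keys stored in the checkpoint against the variable names the eval graph expects. Below is a minimal diagnostic sketch, assuming the LiteFlowNet graph has already been built in the default graph (as eval.py does before creating the Saver); the checkpoint prefix is a placeholder, not the repo's actual layout:

```python
import tensorflow as tf

# Placeholder checkpoint prefix; point this at the downloaded model files.
CKPT_PREFIX = "models/liteflownet"

# Keys actually stored in the checkpoint file.
ckpt_keys = {name for name, _ in tf.train.list_variables(CKPT_PREFIX)}

# Variables the current (already-built) graph expects to restore.
graph_keys = {v.op.name for v in tf.compat.v1.global_variables()}

print("Expected by the graph but missing from the checkpoint:")
for name in sorted(graph_keys - ckpt_keys):
    print("  ", name)

print("Present in the checkpoint but not in the graph:")
for name in sorted(ckpt_keys - graph_keys):
    print("  ", name)
```

If `flownet/feature_extractor/sequential/conv2d/bias` shows up only on the graph side, the downloaded checkpoint simply does not match the graph that eval.py builds.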

dilipv09 · Apr 15 '20

I am having the same issue.

JoshuaMathew · Apr 20 '20

I updated the model file on Google Drive.

https://drive.google.com/drive/folders/1apeRotQKMsFji8MKKNzcx-QO4udkJcdJ

You can check one more time.

keeper121 · May 19 '20

But I still have the same issue.

> I updated the model file on Google Drive.
>
> https://drive.google.com/drive/folders/1apeRotQKMsFji8MKKNzcx-QO4udkJcdJ
>
> You can check one more time.
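
In case it helps anyone still stuck here: if the key listing shows that the graph names and the checkpoint keys differ only by a scope prefix or similar renaming, one possible workaround (a sketch, not the repository's official fix) is to give the Saver an explicit name-to-variable mapping. The rename rule below is purely illustrative and has to be adapted to whatever the real key names look like:

```python
import tensorflow as tf

def ckpt_key(var):
    # Illustrative rule only: assumes the checkpoint keys lack the leading
    # "flownet/" scope that the graph variables carry. Adjust as needed.
    return var.op.name.replace("flownet/", "", 1)

# Map each checkpoint key to the graph variable it should be restored into.
var_map = {ckpt_key(v): v for v in tf.compat.v1.global_variables()}
saver = tf.compat.v1.train.Saver(var_list=var_map)

with tf.compat.v1.Session() as sess:
    # "models/liteflownet" is a placeholder for the real checkpoint prefix.
    saver.restore(sess, "models/liteflownet")
```

If the checkpoint was produced by a genuinely different network definition, remapping names will not help and the matching model file is needed.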

lvbubu · Mar 19 '22