Error when loading my own trained model
Hi, thanks for your nice work. I have a question about how to obtain a working model. I used "nwojke/cosine_metric_learning" to train my own model, but the C++ program raises an error when I replace the "tt1.pb" model with mine. The following is the error message:
create graph in session failed: Invalid argument: Cannot assign a device for operation 'map/TensorArray': Could not satisfy explicit device specification '/gpu:4' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices:
TensorArrayReadV3: CPU
TensorArrayV3: CPU
Enter: GPU CPU
Placeholder: GPU CPU
TensorArrayScatterV3: CPU
[[Node: map/TensorArray = TensorArrayV3[clear_after_read=true, dtype=DT_UINT8, dynamic_size=false, element_shape=<unknown>, tensor_array_name="", _device="/gpu:4"](map/strided_slice)]]
CUDA Error: driver shutting down
test: /data4/gcy/ds-master/yoda/darknet/src/cuda.c:36: check_error: Assertion `0' failed.
Aborted (core dumped)
The program works when I comment out this line of code:
tf::graph::SetDefaultDevice("/gpu:4", &graph_def);
But I want to set a GPU id, so I can't comment it out. The workaround I came up with was to train a model in the same way "tt1.pb" was trained. I used TensorFlow 1.2.1, 1.4.0 and 1.5 to train models, but none of them can be used. So I would like to know how the "tt1.pb" model was produced. Can you help me? Thank you very much.
The solution has been found. Keep the device assignment and enable soft placement when creating the session:
tf::graph::SetDefaultDevice("/gpu:4", &graph_def);
opts.config.set_allow_soft_placement(true);
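For anyone hitting the same error, here is a minimal sketch of how those two lines fit into the graph-loading code. It assumes the frozen graph is read with ReadBinaryProto and that "tf" in the snippets above is an alias for the tensorflow namespace; the function name LoadGraphOnGpu and the path "my_model.pb" are placeholders, not from the original program.

#include <memory>
#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

tensorflow::Status LoadGraphOnGpu(const std::string& model_path,
                                  std::unique_ptr<tensorflow::Session>* session) {
  tensorflow::GraphDef graph_def;
  // Load the frozen graph from disk (model_path is a placeholder).
  tensorflow::Status status = tensorflow::ReadBinaryProto(
      tensorflow::Env::Default(), model_path, &graph_def);
  if (!status.ok()) return status;

  // Pin ops to GPU 4 by default.
  tensorflow::graph::SetDefaultDevice("/gpu:4", &graph_def);

  // Let ops without a GPU kernel (e.g. TensorArrayV3) fall back to CPU
  // instead of failing with "Cannot assign a device for operation ...".
  tensorflow::SessionOptions opts;
  opts.config.set_allow_soft_placement(true);

  session->reset(tensorflow::NewSession(opts));
  return (*session)->Create(graph_def);
}

// Usage (hypothetical path):
//   std::unique_ptr<tensorflow::Session> session;
//   TF_CHECK_OK(LoadGraphOnGpu("my_model.pb", &session));

With allow_soft_placement enabled, ops that only have CPU kernels (such as the TensorArrayV3 node in the error above) are placed on the CPU, while the rest of the graph stays on /gpu:4.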