How to use tensorflow-rust with estimators in TensorFlow
Hi, I am using tf.contrib.learn.KMeansClustering in my code and exporting the graph with the export_savedmodel() method. But when I try to load the graph with Graph.import_graph_def, it fails with InvalidArgument: Invalid GraphDef. What's the right way to use tensorflow-rust with the predefined estimators in TensorFlow?
There's an example in examples/regression_savedmodel.rs. Short answer: use Session::from_saved_model. If it still doesn't work, share your code and we'll help.
By the way, we encourage people to ask "How do I...?" questions on the mailing list.
Thanks for your kind help.
I think my code can load the saved model now, but it fails with:

```
NotFound: Op type not registered 'NearestNeighbors' in binary running on rust-tf. Make sure the Op and Kernel are registered in the binary running in this process.
```
My code is basically:

```rust
use tensorflow::{Graph, Session, SessionOptions, StepWithGraph};

let mut graph = Graph::new();
let session = Session::from_saved_model(&SessionOptions::new(),
                                        &["serve"],
                                        &mut graph,
                                        "saved")?;
let mut step = StepWithGraph::new();
session.run(&mut step)?;
```
The model is exported by the tf.contrib.learn.KMeansClustering.export_savedmodel() method. Is there a problem with my code?
I might be wrong, but I guess the KMeansClustering ops are not (yet) available through the C and/or Rust APIs.
Since contrib ops are lazily registered when the module is first accessed, we need to access them before loading a saved graph that uses ops from tf.contrib. In Python this can be done with a single line, but I haven't found a way to do it in tf-rust.
I solved this problem by compiling clustering_ops into a dynamic library and loading it through Library::load, but I wonder if there's a better way.
Sorry, I was out for a week for family reasons.
That seems like a reasonable way to load the ops. I can't find any handy shared library of all contrib ops. I tried

```
bazel query 'rdeps(attr(linkshared, 1, ...), tensorflow/contrib/factorization/kernels:clustering_ops)' --keep_going
```

inside the TensorFlow repo, and besides errors it only found

```
//tensorflow/contrib/factorization:python/ops/_clustering_ops.so
//tensorflow/contrib/factorization:python/ops/_clustering_ops.so_check_deps
//tensorflow/contrib/factorization/kernels:clustering_ops
```
@jhseu How are other languages handling contrib ops? Is building individual shared libraries and loading them the only approach at the moment? Is it possible to get these ops into the prebuilt tarball (whether as several small shared libs or one big one), or would that be too much bloat?
Other languages external to Google aren't really handling it yet. You'll need to call the TF_LoadLibrary() C API function on the specific .so file that's included with the pip install to make it work right now (it has to be built in the exact same environment because of C++ ABI issues). We use another method internally for other languages.