
tensorflow_text support for Triton

SimZhou opened this issue 2 years ago • 7 comments

Description Triton 21.10 does not support the RegexSplitWithOffsets op. Similar to https://github.com/tensorflow/text/issues/200 or https://github.com/tensorflow/serving/issues/1490

Triton Information What version of Triton are you using? I am using Triton version 21.10-py3.

Are you using the Triton container or did you build it yourself? I am using the Triton container.

To Reproduce I trained my model using TensorFlow 2.5.0 + Google's BERT. Basically, it is a text classification model built from BertTokenizer + BertModel. Its input is a piece of text (string); its output is an array of logits for the 31 multi-label classification results.

I saved my model in the SavedModel format and wrote my config.pbtxt file as follows:

name: "aihivebox-intent"
platform: "tensorflow_savedmodel"
max_batch_size : 0
input [
  {
    name: "text_input"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
output [
  {
    name: "mlp"
    data_type: TYPE_FP32
    dims: [-1,31]
  }
]

The Triton server starts normally, but when I run inference, it gives me this:

I0311 01:29:19.137185 1 grpc_server.cc:4117] Started GRPCInferenceService at 0.0.0.0:8001
I0311 01:29:19.137930 1 http_server.cc:2815] Started HTTPService at 0.0.0.0:8000
I0311 01:29:19.179941 1 http_server.cc:167] Started Metrics Service at 0.0.0.0:8002
2022-03-11 01:30:41.481080: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:568] function_optimizer failed: Not found: Op type not registered 'RegexSplitWithOffsets' in binary running on bd61380e8e17. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2022-03-11 01:30:41.629883: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:568] function_optimizer failed: Not found: Op type not registered 'RegexSplitWithOffsets' in binary running on bd61380e8e17. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2022-03-11 01:30:42.270766: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:568] function_optimizer failed: Not found: Op type not registered 'RegexSplitWithOffsets' in binary running on bd61380e8e17. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2022-03-11 01:30:42.347079: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:568] function_optimizer failed: Not found: Op type not registered 'RegexSplitWithOffsets' in binary running on bd61380e8e17. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2022-03-11 01:30:42.445530: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at partitioned_function_ops.cc:113 : Not found: Op type not registered 'RegexSplitWithOffsets' in binary running on bd61380e8e17. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
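For context, RegexSplitWithOffsets is a custom op shipped by the tensorflow_text package, so the stock TensorFlow build inside Triton does not register it. Roughly speaking (a pure-Python sketch with Python's `re` module, not the real C++ kernel), the op splits a string on a delimiter regex and also reports where each resulting token begins and ends in the input:

```python
import re

def regex_split_with_offsets(text, delim_pattern=r"\s+"):
    """Pure-Python sketch of what a RegexSplitWithOffsets-style op returns:
    the tokens, plus begin/end offsets of each token in the input string.
    (Illustrative only; the real op lives in the tensorflow_text library.)"""
    tokens, begins, ends = [], [], []
    pos = 0
    for m in re.finditer(delim_pattern, text):
        if m.start() > pos:          # text between delimiters is a token
            tokens.append(text[pos:m.start()])
            begins.append(pos)
            ends.append(m.start())
        pos = m.end()
    if pos < len(text):              # trailing token after the last delimiter
        tokens.append(text[pos:])
        begins.append(pos)
        ends.append(len(text))
    return tokens, begins, ends

print(regex_split_with_offsets("hello bert world"))
# → (['hello', 'bert', 'world'], [0, 6, 11], [5, 10, 16])
```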

SimZhou avatar Mar 11 '22 02:03 SimZhou

By the way, I am migrating from TF Serving to Triton, and the model works fine on TensorFlow Serving 2.7.0.

SimZhou avatar Mar 11 '22 02:03 SimZhou

Duplicate of https://github.com/triton-inference-server/server/issues/2443

SimZhou avatar Mar 11 '22 03:03 SimZhou

Tried Triton version 22.02-py3: the issue remains. Tried Triton version 21.08 (whose TensorFlow version, 2.5.0, matches the one I trained my model with, per https://github.com/triton-inference-server/server/issues/3604#issuecomment-982125998 and the support matrix): the issue remains.

SimZhou avatar Mar 11 '22 10:03 SimZhou

Sorry @SimZhou, this issue seems to have fallen through the cracks. Have you tried the LD_PRELOAD trick with the custom op RegexSplitWithOffsets? doc link.

tanmayv25 avatar Apr 06 '22 02:04 tanmayv25

> Sorry @SimZhou, this issue seems to have fallen through the cracks. Have you tried the LD_PRELOAD trick with the custom op RegexSplitWithOffsets? doc link.

Yes, I am trying, but it seems I have run into another issue that prevents me from loading the custom op library: #4212

SimZhou avatar Apr 13 '22 04:04 SimZhou

Hi @SimZhou I'm facing a similar issue, were you able to resolve this?

suhailbarot avatar Jun 21 '22 14:06 suhailbarot

> Hi @SimZhou I'm facing a similar issue, were you able to resolve this?

Unfortunately, no. But there are two possible alternatives to embedding the custom op into Triton:

  1. Use the Python backend to do the text-to-vector transformation and expose it as an API in Triton. Then, every time before you run a task, call that API first to get the vector.
  2. Do the text-to-vector transformation yourself on the client side.
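A rough sketch of what the second alternative could look like on the client side (the toy vocabulary and the `text_to_ids` helper below are hypothetical; a real deployment would use the model's actual BERT vocabulary and WordPiece tokenization):

```python
# Hypothetical sketch: tokenize on the client and send token IDs, not raw
# text, to Triton, so the server never needs the tensorflow_text custom
# ops. The toy vocabulary below is illustrative only.
TOY_VOCAB = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3,
             "hello": 4, "world": 5}

def text_to_ids(text, vocab=TOY_VOCAB, max_len=8):
    """Map a piece of text to a fixed-length list of token IDs."""
    tokens = ["[CLS]"] + text.lower().split() + ["[SEP]"]
    ids = [vocab.get(t, vocab["[UNK]"]) for t in tokens]
    # pad/truncate to the fixed length the model expects
    ids = (ids + [vocab["[PAD]"]] * max_len)[:max_len]
    return ids

print(text_to_ids("hello world"))
# → [2, 4, 5, 3, 0, 0, 0, 0]
```

The resulting ID array would then be sent as an integer tensor in the inference request, with the model's config.pbtxt input changed accordingly from TYPE_STRING to an integer type.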

SimZhou avatar Jul 07 '22 01:07 SimZhou

> Tried Triton version 22.02-py3: the issue remains. Tried Triton version 21.08 (whose TensorFlow version, 2.5.0, matches the one I trained my model with, per #3604 (comment) and the support matrix): the issue remains.

The issue is still there with Triton 22.07.

elina-israyelyan avatar Aug 19 '22 06:08 elina-israyelyan

We have several customers who have already deployed tensorflow-text models successfully in Triton with LD_PRELOAD. As described in the linked issue, you have to make sure the version of TensorFlow used in Triton matches the version of tensorflow-text you are pulling the custom ops from. You can look at my response here to learn more: https://github.com/triton-inference-server/server/issues/3604#issuecomment-982125998

A standard pip install of tensorflow-text on Python 3.8 installs TensorFlow libs from the 2.x line. When launching tritonserver, you have to pass --backend-config=tensorflow,version=2 to use the 2.x TF version. For 22.07, the TF versions in the Triton containers are 2.9.1 and 1.15.5. So you should copy _regex_split_ops.so from tensorflow-text==2.9.1 or tensorflow-text==1.15.5 into the Triton container image.
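The exact location of _regex_split_ops.so inside an installed tensorflow-text package can vary between versions, so it may be easiest to just search for it. A small hedged helper (the `find_custom_op` name is ours, not part of any library):

```python
import os

def find_custom_op(pkg_dir, name="_regex_split_ops.so"):
    """Walk an installed package directory and return every path at which
    the named custom-op shared library appears, so it can be copied into
    the Triton image and passed to LD_PRELOAD."""
    hits = []
    for root, _dirs, files in os.walk(pkg_dir):
        if name in files:
            hits.append(os.path.join(root, name))
    return hits

# Usage inside the container (assumes tensorflow-text is pip-installed):
#   import tensorflow_text
#   print(find_custom_op(os.path.dirname(tensorflow_text.__file__)))
```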

Then launch the Triton server as follows. If using TensorFlow v1:

export LD_LIBRARY_PATH=/opt/tritonserver/backends/tensorflow1:$LD_LIBRARY_PATH
LD_PRELOAD=/<path_to_custom_ops_from_tensorflow-text==1.15.5>/_regex_split_ops.so tritonserver --model-store=my_model/ --backend-config=tensorflow,version=1 

If using TensorFlow v2:

export LD_LIBRARY_PATH=/opt/tritonserver/backends/tensorflow2:$LD_LIBRARY_PATH
LD_PRELOAD=/<path_to_custom_ops_from_tensorflow-text==2.9.1>/_regex_split_ops.so tritonserver --model-store=my_model/ --backend-config=tensorflow,version=2

The issue in #4212 is:

ERROR: ld.so: object '_regex_split_ops.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

It just means that _regex_split_ops.so was not found at that path. It looks like the entire ops directory was copied into the image, hence

--env LD_PRELOAD=ops/_regex_split_ops.so \

should have solved the issue. That being said, many tensorflow-text models are supported in Triton via LD_PRELOAD and are in production use by many Triton users.

Closing the issue to avoid future confusion. Please open a new GitHub issue if you run into any other problem with the integration.

tanmayv25 avatar Aug 20 '22 00:08 tanmayv25