HarmitMinhas96
Hello, it seems it does work if:
- my inputs are TYPE_UINT8 and TYPE_FP32 with different dimensions
- my first input and second input are both TYPE_STRING with different...
Unfortunately that does not work either. This is how I created both the JSON and YAML files:
```
import numpy as np
import json
import yaml

d = { "data"...
```
It works when saving it as a 1D array in the input-data file. I can use this as a workaround until the release. Thank you!
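A minimal sketch of the 1D-array workaround described above, assuming the perf_analyzer-style input-data JSON layout (a `"data"` list of per-request entries with `"content"` and `"shape"` keys); the input name `INPUT0` and the shape are illustrative, not taken from the original issue:

```python
# Hypothetical sketch: write the tensor as a flat 1D list in the input-data
# JSON instead of a nested array, and carry the real shape separately.
import json
import numpy as np

arr = np.arange(12, dtype=np.float32).reshape(3, 4)

d = {
    "data": [
        {
            "INPUT0": {
                "content": arr.flatten().tolist(),  # 1D list, not nested
                "shape": list(arr.shape),           # original shape: [3, 4]
            }
        }
    ]
}

with open("input_data.json", "w") as f:
    json.dump(d, f)
```

The key point is that `content` stays one-dimensional; the server-side tooling reconstructs the tensor from the accompanying `shape` field.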
Here are the server logs:
```
I0801 13:59:29.354984 1 grpc_server.cc:3585] New request handler for ModelInferHandler, 0
I0801 13:59:29.355063 1 infer_request.cc:710] prepared: [0x0x7f382c001110] request id: Thread Num: 0, Iter: 0, First,...
```
Also, I've noticed that setting `compression_algorithm` to `gzip` in the infer call seems to significantly reduce how often these errors are thrown.
I'm currently doing some testing and it seems this issue started in `Release 2.20.0 corresponding to NGC container 22.03`. I can't seem to recreate this in 22.02 so far.
I was able to recreate this with the first TensorFlow saved model I found: https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/feature_vector/2

The config.pbtxt I used:
```
name: "my_model"
platform: "tensorflow_savedmodel"
max_batch_size: 16
input {
  name: "input_1"
  data_type:...
```
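For context, a complete config.pbtxt for this kind of model might look like the sketch below; the data types, input dims, and the entire output block are my assumptions for a 224x224 RGB feature-vector model (with an assumed 1280-dimensional output), not values confirmed by the truncated config above:

```protobuf
name: "my_model"
platform: "tensorflow_savedmodel"
max_batch_size: 16
input [
  {
    name: "input_1"
    data_type: TYPE_FP32     # assumed
    dims: [ 224, 224, 3 ]    # assumed input resolution
  }
]
output [
  {
    name: "output_1"         # hypothetical output name
    data_type: TYPE_FP32
    dims: [ 1280 ]           # assumed feature-vector size
  }
]
```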
Thank you for the fix! Will it be included in the upcoming NGC container 22.08?