TensorFlowSharp
Retrained graph unusable
Hi, I'm trying to use a retrained graph (from TensorFlow's retraining tutorial) in a Unity project with the WebCamTexture. But as a first test I simply tried to load a picture and run it through the graph, basically following the "ExampleInceptionInference" sample.
I ran optimize_for_inference and transform_graph on it (with different calls and settings, so I "collected" several graphs).
The graphs work perfectly in Python before and after each editing step (retrain, optimize, and transform). My guess is that I'm doing something wrong when converting the picture to a tensor. I'm using the code from the example with slight changes (W, H, and Mean are different):
```csharp
private static void ConstructGraphToNormalizeImage(out TFGraph graph, out TFOutput input, out TFOutput output, TFDataType destinationDataType = TFDataType.Float)
{
    const int W = 299;
    const int H = 299;
    const float Mean = 128;
    const float Scale = 1;

    graph = new TFGraph();
    input = graph.Placeholder(TFDataType.String);

    output = graph.Cast(graph.Div(
        x: graph.Sub(
            x: graph.ResizeBilinear(
                images: graph.ExpandDims(
                    input: graph.Cast(
                        graph.DecodeJpeg(contents: input, channels: 3), DstT: TFDataType.Float),
                    dim: graph.Const(0, "make_batch")),
                size: graph.Const(new int[] { W, H }, "size")),
            y: graph.Const(Mean, "mean")),
        y: graph.Const(Scale, "scale")), destinationDataType);
}
```
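For reference, the normalization graph above boils down to a few array operations. Here is a minimal NumPy sketch of the same steps (my own approximation, not TensorFlowSharp code; it assumes the JPEG has already been decoded and resized to 299x299):

```python
import numpy as np

def normalize_image(pixels, mean=128.0, scale=1.0):
    """Mimic the TFGraph above: cast to float, add a batch
    dimension, subtract `mean`, then divide by `scale`.
    `pixels` is an (H, W, 3) uint8 array, already resized."""
    x = pixels.astype(np.float32)      # Cast to Float
    x = np.expand_dims(x, axis=0)      # ExpandDims -> (1, H, W, 3)
    return (x - mean) / scale          # Sub "mean", Div "scale"

# A mid-grey image maps to all zeros with Mean=128; note that with
# Scale=1 other pixels stay in roughly [-128, 127], not [-1, 1].
batch = normalize_image(np.full((299, 299, 3), 128, dtype=np.uint8))
```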
I initialize my class with:
```csharp
var model = File.ReadAllBytes(Utils.getFilePath("dnn/10k_trans_t.pb"));
g.Import(model, "");
labels = File.ReadAllLines(Utils.getFilePath("dnn/retrained_labels_t.txt"));
session = new TFSession(g);
g_input = g["Mul"][0];
g_output = g["final_result"][0];
resultObject = "not sure";
resultProb = 0;
_file = Utils.getFilePath("Passion.jpg");
```
and basically after a few seconds I trigger:
```csharp
public void HandleJPG()
{
    Debug.Log($"HandlePictures {TensorPics.Count}");
    var tensor = ImageUtil.CreateTensorFromImageFile(_file);
    var runner = session.GetRunner();
    runner.AddInput(g_input, tensor).Fetch(g_output);
    var output = runner.Run();

    var bestIdx = 0;
    float best = 0;
    var result = output[0];
    var rshape = result.Shape;
    var probabilities = ((float[][])result.GetValue(jagged: true))[0];
    for (int r = 0; r < probabilities.Length; r++)
    {
        if (probabilities[r] > best)
        {
            bestIdx = r;
            best = probabilities[r];
        }
    }
    Debug.Log("Tensorflow thinks this is: " + labels[bestIdx] + " Prob : " + best * 100);
}
```
But in TensorFlowSharp I get completely different results.
So, in short, I have two questions:
- Does anyone have an idea what has to be done differently for a retrained graph?
- Has anyone tried to convert a WebCamTexture into a TFTensor? (I could also use OpenCV's Mats as a helper.)
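Regarding the second question: whatever the Unity-side API ends up being, the general shape of the conversion is a flat RGBA pixel buffer (which is what `WebCamTexture.GetPixels32` yields) being reshaped, stripped of its alpha channel, and normalized into a batched float tensor. A NumPy sketch of that idea (illustrative only, not TensorFlowSharp or Unity API):

```python
import numpy as np

def rgba_buffer_to_tensor(buffer, width, height, mean=128.0, scale=128.0):
    """Turn a flat RGBA byte buffer into a (1, H, W, 3) float tensor:
    reshape, drop the alpha channel, and apply the same mean/scale
    normalization as the JPEG path."""
    rgba = np.frombuffer(buffer, dtype=np.uint8).reshape(height, width, 4)
    rgb = rgba[:, :, :3].astype(np.float32)   # drop alpha
    return ((rgb - mean) / scale)[np.newaxis, ...]

# 2x2 mid-grey RGBA test image -> all zeros after normalization
buf = bytes([128, 128, 128, 255]) * (2 * 2)
t = rgba_buffer_to_tensor(buf, 2, 2)
```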
I was able to fix this with my own model by changing
`const float Scale = 1`
to
`const float Scale = 128f`
See if it works for you.
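For anyone wondering why this helps (my understanding, beyond what the thread states): the retraining tutorial's Inception v3 input expects pixels mapped to roughly [-1, 1], i.e. (x - 128) / 128, whereas Scale = 1 leaves values in roughly [-128, 127]. A quick check of the arithmetic:

```python
# With Mean=128: compare Scale=1 vs Scale=128 on the pixel extremes.
def normalize(pixel, mean=128.0, scale=1.0):
    return (pixel - mean) / scale

assert normalize(0, scale=128.0) == -1.0    # black -> -1
assert normalize(255, scale=128.0) < 1.0    # white -> ~0.992
assert normalize(255, scale=1.0) == 127.0   # far outside [-1, 1]
```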
@hwvs I just spent 4 days troubleshooting why my trained models coming out of a Python system weren't working correctly in my C# inference code, and this was the culprit. The pre-processing between C# and Python is otherwise identical.
@Robertb84 I get two different results: with Python my model works well, but with TensorflowSharp my model is not good. Help me. Is TensorflowSharp good to use, or ...?