Andrey Guskov
There is a kludge to get around this problem: allocating a 'fake' linear DHT whose size is equal to the intended rank and whose elements equal the intended...
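The workaround above can be sketched in NumPy terms. This is an illustration only, not the TFRT `DenseHostTensor` API; all names below are hypothetical:

```python
import numpy as np

# Hypothetical illustration of the workaround: instead of passing a shape
# object directly, encode it as a 'fake' 1-D tensor whose length equals the
# intended rank and whose elements are the intended dimension sizes.
intended_shape = (1, 224, 224, 3)  # rank-4 NHWC shape for a 224x224 image

# The fake linear DHT: size == rank, elements == dimension sizes.
fake_shape_tensor = np.array(intended_shape, dtype=np.int64)

assert fake_shape_tensor.ndim == 1
assert fake_shape_tensor.size == len(intended_shape)

# A consumer can later reconstruct the intended shape from the fake tensor.
reconstructed = tuple(int(d) for d in fake_shape_tensor)
assert reconstructed == intended_shape
```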
In my build I extended the [DenseHostTensor value dump limit](https://github.com/tensorflow/runtime/blob/2b0062e34b35a6a354ee17bea2496de947d5a89b/lib/tensor/dense_host_tensor.cc#L77) from 32 to `SSIZE_MAX`, but what the network returns does not look normal:

```
$ bazel-bin/tools/tfrt_translate --mlir-to-bef integrationtest/resnet/resnet50_graph_inference.mlir -o=integrationtest/resnet/resnet50_graph_inference.bef
$ bazel-bin/tools/bef_executor...
```
@lindong28 In an ideal scenario I'd like to be able to upload my own 224×224 BGR images and get back the recognition percentages from SoftMax. Right now (on some implicit default...
@lindong28 So, regardless of the presence of SoftMax, is this output value of `58990869004419072.f` across all 1001 recognition classes the expected result? What is the input image I could use...
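One mathematical property is worth noting here: if every logit in a vector is the same constant, SoftMax maps all of them to the uniform probability 1/N, so an identical value repeated across all 1001 classes would be consistent with degenerate (constant) network output rather than a meaningful prediction. A minimal check, purely illustrative:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax: shift by the max before exponentiating."""
    shifted = logits - logits.max()
    exps = np.exp(shifted)
    return exps / exps.sum()

# If all 1001 logits hold the same constant, softmax is uniform: 1/1001 each.
logits = np.full(1001, 58990869004419072.0)
probs = softmax(logits)

# Every class gets probability 1/1001, i.e. no class is preferred.
assert np.allclose(probs, 1.0 / 1001)
```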
@lindong28 I did everything according to [these instructions](https://github.com/tensorflow/runtime/blob/master/documents/resnet.md), with one notable exception: I don't execute the test via Bazel but run both the conversion app and the inference app directly...
@lindong28 Thanks, will be looking forward to that!
@lindong28 And, since I'm developing a custom backend, could you please share the version of RN50 with SoftMax enabled in MLIR? It'll allow me both to see how TFRT reacts...
```
make test
set test_scope=NIGHTLY
disable test_device_cpu
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_deconv
enable benchdnn_reorder
enable benchdnn_pool
enable arch_gpu_xe-hpc
enable arch_gpu_xe-hpg-atsm
enable arch_gpu_xe-hpg-dg2
enable arch_gpu_xe-lp
enable arch_gpu_xe-lpg
enable arch_gpu_xe-lpg+
enable arch_gpu_xe2-hpg-bmg...
```
Not relevant anymore.
> @petercad Do we need a similar PR for gemmstone repo?

We do. I'll prepare a separate PR and merge when the CI passes here.