Unable to find an entry point named 'TfLiteGpuDelegateV2BindInputBuffer'
Environment (please complete the following information):
- OS/OS Version: Android 11
- Source Version: master/v2.9.1-p1
- Unity Version: Unity 2020.3.36f1
Describe the bug
Logcat shows the following error in a Unity app when using the OpenCL delegate. I confirmed that the OpenCL delegate works correctly without tensor bindings.
2022/08/28 17:43:53.141 10652 14851 Error Unity EntryPointNotFoundException: Unable to find an entry point named 'TfLiteGpuDelegateV2BindInputBuffer' in 'libtensorflowlite_gpu_jni.so'.
Additional context
I know the TfLiteGpuDelegateV2Bind*** entry points require your tensorflow patch.
Could you check whether the native plugins include these entry points?
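As a side note, a minimal sketch like the following can probe at runtime whether a bundled .so actually exports an entry point. The helper class name, the "libdl.so" import, and the flag value are assumptions for 64-bit Android, not part of the plugin:

using System;
using System.Runtime.InteropServices;

// Hypothetical helper, not part of the plugin: probes a native library for a
// symbol via libdl. The "libdl.so" name and RTLD_LAZY value assume 64-bit Android.
static class NativeSymbolProbe
{
    private const int RTLD_LAZY = 1;

    [DllImport("libdl.so")]
    private static extern IntPtr dlopen(string filename, int flags);

    [DllImport("libdl.so")]
    private static extern IntPtr dlsym(IntPtr handle, string symbol);

    public static bool Exports(string library, string symbol)
    {
        IntPtr handle = dlopen(library, RTLD_LAZY);
        if (handle == IntPtr.Zero)
        {
            return false; // the library itself could not be loaded
        }
        return dlsym(handle, symbol) != IntPtr.Zero;
    }
}

// e.g. NativeSymbolProbe.Exports("libtensorflowlite_gpu_jni.so", "TfLiteGpuDelegateV2BindInputBuffer")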
Right, I forgot to cherry-pick the patch into the latest release. Thanks.
@asus4
As for this issue, TfLiteGpuDelegateBindGlBufferToTensor seems helpful for supporting tensor bindings.
https://github.com/tensorflow/tensorflow/blob/2.9.0/tensorflow/lite/delegates/gpu/cl/gpu_api_delegate.cc#L380
I tried to call TfLiteGpuDelegateBindGlBufferToTensor with code like the following.
/// <summary>
/// TfLiteGpuDataLayout
/// </summary>
public enum DataLayout
{
    BHWC = 0,
    DHWC4 = 1,
}

public bool BindBufferToInputTensor(Interpreter interpreter, int index, ComputeBuffer buffer)
{
    // Pass the GL buffer object name backing the ComputeBuffer to the delegate.
    var bufferID = (uint)buffer.GetNativeBufferPtr().ToInt32();
    var tensorIndex = interpreter.GetInputTensorIndex(index);
    var dataType = interpreter.GetInputTensorInfo(index).type;
    var status = TfLiteGpuDelegateBindGlBufferToTensor(Delegate, bufferID, tensorIndex, dataType, DataLayout.BHWC);
    return status == Interpreter.Status.Ok;
}

public bool BindBufferToOutputTensor(Interpreter interpreter, int index, ComputeBuffer buffer)
{
    var bufferID = (uint)buffer.GetNativeBufferPtr().ToInt32();
    var tensorIndex = interpreter.GetOutputTensorIndex(index);
    var dataType = interpreter.GetOutputTensorInfo(index).type;
    var status = TfLiteGpuDelegateBindGlBufferToTensor(Delegate, bufferID, tensorIndex, dataType, DataLayout.BHWC);
    return status == Interpreter.Status.Ok;
}

[DllImport(TensorFlowLibraryGPU)]
private static extern Interpreter.Status TfLiteGpuDelegateBindGlBufferToTensor(
    TfLiteDelegate gpuDelegate, uint buffer, int tensor_index, Interpreter.DataType data_type, DataLayout data_layout);
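For context, this is roughly how I expected to use it; the tensor shapes and the gpuDelegate/interpreter names below are placeholders, not actual project code:

// Hypothetical usage; buffer sizes and variable names are placeholders.
var inputBuffer = new ComputeBuffer(1 * 224 * 224 * 3, sizeof(float));
var outputBuffer = new ComputeBuffer(1 * 1000, sizeof(float));

if (!gpuDelegate.BindBufferToInputTensor(interpreter, 0, inputBuffer) ||
    !gpuDelegate.BindBufferToOutputTensor(interpreter, 0, outputBuffer))
{
    Debug.LogError("Failed to bind ComputeBuffers to the GPU delegate tensors");
}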
However, logcat showed that libtensorflowlite_gpu_jni.so does not contain TfLiteGpuDelegateBindGlBufferToTensor.
Do you have any idea how to support TfLiteGpuDelegateBindGlBufferToTensor in the native plugins?
@stakemura I hadn't noticed that there is another entry point in the GPU delegate. You cannot call TfLiteGpuDelegateBindGlBufferToTensor because gpu_api_delegate.h is not linked into libtensorflowlite_gpu_delegate.so.
So I tested adding a new build target in tensorflow/lite/delegates/gpu/BUILD, as shown below, and the exported .so file includes the TfLiteGpuDelegateBindGlBufferToTensor API. Although it requires another C# binding (see the sketch after the BUILD snippet), it may be worth trying.
# bazel build -c opt --config android_arm64 --copt -Os --copt -DTFLITE_GPU_BINARY_RELEASE --linkopt -s --strip always :libtensorflowlite_gpu_api_delegate.so
cc_binary(
    name = "libtensorflowlite_gpu_api_delegate.so",
    linkopts = [
        "-Wl,-soname=libtensorflowlite_gpu_api_delegate.so",
    ] + gpu_delegate_linkopts() + select({
        "//tensorflow:windows": [],
        "//conditions:default": [
            "-fvisibility=hidden",
        ],
    }),
    linkshared = 1,
    linkstatic = 1,
    tags = [
        "nobuilder",
        "notap",
    ],
    deps = ["//tensorflow/lite/delegates/gpu/cl:gpu_api_delegate"],
)
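And here is a rough sketch of the extra C# binding mentioned above; the library constant and its value are assumptions that depend on how the new .so is named and packaged in the Unity project:

// Hypothetical P/Invoke against the new library; the constant value is an assumption.
internal const string TensorFlowLibraryGPUApiDelegate = "tensorflowlite_gpu_api_delegate";

[DllImport(TensorFlowLibraryGPUApiDelegate)]
private static extern Interpreter.Status TfLiteGpuDelegateBindGlBufferToTensor(
    TfLiteDelegate gpuDelegate, uint buffer, int tensor_index,
    Interpreter.DataType data_type, DataLayout data_layout);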
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.