tensorbuffer
Ok, I figured out that I didn't copy over opencv.jar. Now my problem is the add(), subtract(), and multiply() functions. I didn't see them in http://bytedeco.org/javacpp-presets/opencv/apidocs/overview-summary.html. Are they supported?
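If I understand the presets correctly, these are exposed as static functions on org.bytedeco.opencv.global.opencv_core rather than as Mat methods. A minimal, untested sketch of what I'm assuming works:

```kotlin
import org.bytedeco.opencv.global.opencv_core.CV_32F
import org.bytedeco.opencv.global.opencv_core.add
import org.bytedeco.opencv.global.opencv_core.multiply
import org.bytedeco.opencv.global.opencv_core.subtract
import org.bytedeco.opencv.opencv_core.Mat
import org.bytedeco.opencv.opencv_core.Scalar

fun main() {
    val a = Mat(3, 3, CV_32F, Scalar(2.0))   // 3x3 matrix filled with 2.0
    val b = Mat(3, 3, CV_32F, Scalar(5.0))   // 3x3 matrix filled with 5.0
    val sum = Mat()
    val diff = Mat()
    val prod = Mat()
    add(a, b, sum)        // element-wise a + b
    subtract(b, a, diff)  // element-wise b - a
    multiply(a, b, prod)  // element-wise a * b (per-element, not matrix product)
}
```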
Since the current app is written in Kotlin, I found an issue that is hard to resolve. For example, there's code that converts a DoubleArray to a Scalar, something like Scalar(DoubleArray(...)). And...
I think the problem is: the OpenCV Scalar has "public Scalar(double[] vals)" https://docs.opencv.org/3.4/javadoc/org/opencv/core/Scalar.html, but the javacpp Scalar does not have this (array of double) constructor; it only takes double / Pointer / Scalar http://bytedeco.org/javacpp-presets/opencv/apidocs/org/bytedeco/opencv/opencv_core/Scalar.html
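One workaround I'm considering (a sketch, assuming the four-double constructor Scalar(v0, v1, v2, v3) in the presets behaves like cv::Scalar) is a small Kotlin extension that unpacks the array, since cv::Scalar always holds exactly four doubles:

```kotlin
import org.bytedeco.opencv.opencv_core.Scalar

// Hypothetical helper: pads with 0.0 or truncates to the four channels a cv::Scalar holds.
fun DoubleArray.toScalar(): Scalar {
    val v = copyOf(4)
    return Scalar(v[0], v[1], v[2], v[3])
}

// Usage in place of the org.opencv.core-style Scalar(double[]) call:
val mean: Scalar = doubleArrayOf(104.0, 117.0, 123.0).toScalar()
```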
Another question: for Mat, I should use org.bytedeco.opencv.opencv_core, not org.opencv.core, right? The OpenCV one has a submat() function, but the bytedeco one does not.
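As far as I can tell, the closest equivalent in the presets is the ROI constructor Mat(Mat, Rect), which returns a view sharing data with the source, just like submat() does. A sketch under that assumption:

```kotlin
import org.bytedeco.opencv.global.opencv_core.CV_8UC1
import org.bytedeco.opencv.opencv_core.Mat
import org.bytedeco.opencv.opencv_core.Rect

// Hypothetical helper standing in for org.opencv.core.Mat.submat(Rect).
fun submat(src: Mat, x: Int, y: Int, width: Int, height: Int): Mat =
    Mat(src, Rect(x, y, width, height))   // shares data with src, no copy

fun main() {
    val image = Mat(480, 640, CV_8UC1)
    val roi = submat(image, 10, 20, 100, 50)
    println("${roi.cols()} x ${roi.rows()}")   // 100 x 50
}
```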
Ok, thanks. MatOfPoint() is also missing.
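In case it helps anyone else, the workaround I'm sketching (assuming the Mat(rows, cols, type, Pointer) wrapping constructor and the CV_32SC2 constant behave as in C++) is to build a CV_32SC2 Mat with the same layout MatOfPoint uses:

```kotlin
import org.bytedeco.javacpp.IntPointer
import org.bytedeco.opencv.global.opencv_core.CV_32SC2
import org.bytedeco.opencv.opencv_core.Mat

// Hypothetical helper: interleaved x0, y0, x1, y1, ... -> Nx1 two-channel int Mat,
// which is the layout org.opencv.core.MatOfPoint stores its points in.
fun pointsToMat(xy: IntArray): Mat {
    require(xy.size % 2 == 0) { "expected interleaved x,y pairs" }
    val data = IntPointer(*xy)
    // clone() so the Mat owns its buffer and does not depend on `data` staying alive
    return Mat(xy.size / 2, 1, CV_32SC2, data).clone()
}
```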
Actually, how hard is it to add UMat support to the OpenCV Java build? Where should I start, and which code in javacv could be a reference point? I am thinking maybe...
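For reference, my understanding is that the JavaCPP presets already wrap cv::UMat as org.bytedeco.opencv.opencv_core.UMat and generate UMat overloads for functions taking InputArray/OutputArray, so that code may be the reference point. A hedged sketch under that assumption (I would verify the GaussianBlur UMat overload against the preset version first):

```kotlin
import org.bytedeco.opencv.global.opencv_core.CV_8UC1
import org.bytedeco.opencv.global.opencv_imgproc.GaussianBlur
import org.bytedeco.opencv.opencv_core.Size
import org.bytedeco.opencv.opencv_core.UMat

fun main() {
    val src = UMat(480, 640, CV_8UC1)         // backed by OpenCL-managed memory when available
    val dst = UMat()
    GaussianBlur(src, dst, Size(5, 5), 1.5)   // dispatched through the transparent T-API
}
```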
"the Java API wasn't designed for performance" --- yes performance is important to us. Why performance would be an issue, is it due to the JNI layer cost? I thought...
would like to use TF2 once this is resolved
@Xhark Any update? We will need to rewrite our model with TF1.x if this is not resolved. If it's not ready, do you have a timeline?
I found it's not an opset issue, but torch.onnx.export() with dynamo=True. If I use dynamo=False together with opset_version=11, I can generate a new model: https://drive.google.com/file/d/1nnfuZ7MhhhhyNDRy20nUT96fLC97AGQC/view?usp=sharing onnx2tf has an issue with...