Error: System.Runtime.InteropServices.SEHException: "External component has thrown an exception." - using PaddleDevice.Gpu()
Description
Today I downloaded your magnificent project and wrote the sample from https://github.com/sdcb/PaddleSharp/blob/master/docs/ocr.md . It worked perfectly, but it was very CPU demanding. So I tried to run the program using the GPU, but I get an error here -> _paddleOcrAll.Run(src).Text;
System.Runtime.InteropServices.SEHException: "External component has thrown an exception."
at Sdcb.PaddleInference.Native.PaddleNative.PD_TensorCopyFromCpuFloat(IntPtr pd_tensor, IntPtr data)
at Sdcb.PaddleInference.PaddleTensor.SetData(Single[] data)
at Sdcb.PaddleOCR.PaddleOcrDetector.RunRaw(Mat src, Size& resizedSize)
at Sdcb.PaddleOCR.PaddleOcrDetector.Run(Mat src)
at Sdcb.PaddleOCR.PaddleOcrAll.Run(Mat src, Int32 recognizeBatchSize)
at ScreTran.ExecutionService.PaddleOCRRecognize(Byte[] sampleImageData) in C:\Users\bbben\source\repos\ScreTran\Services\ExecutionService.cs:line 131
at ScreTran.ExecutionService.RecognizeTextAndTranslate() in C:\Users\bbben\source\repos\ScreTran\Services\ExecutionService.cs:line 141
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
BTW: I have an Nvidia 4060 GPU. Maybe I need to install something else besides the NuGet packages?
Steps to reproduce the bug
Install the latest NuGet packages: Sdcb.PaddleInference, Sdcb.PaddleInference.runtime.win64.cu120-sm86-89, Sdcb.PaddleOCR, Sdcb.PaddleOCR.Models.Local, OpenCvSharp4.runtime.win
Minimal program:
using System.IO;
using OpenCvSharp;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models.Local;

byte[] myPng = File.ReadAllBytes("..filename.png");
using var _paddleOcrAll = new PaddleOcrAll(LocalFullModels.EnglishV4, PaddleDevice.Gpu())
{
    AllowRotateDetection = false,
    Enable180Classification = false,
};
using var src = Cv2.ImDecode(myPng, ImreadModes.Color);
var resultText = _paddleOcrAll.Run(src).Text;
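Since the SEHException here surfaces from the native layer at run time (typically because the CUDA/cuDNN DLLs are missing or mismatched, not because of a C# bug), a defensive variant of the repro could fall back to CPU when the GPU path fails. This is a sketch under the assumption that the same PaddleSharp APIs are used; PaddleDevice.Mkldnn() is the CPU (oneDNN) device factory in Sdcb.PaddleInference:

```csharp
using System.Runtime.InteropServices;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models.Local;

static class OcrFactory
{
    // Build the OCR pipeline on GPU, falling back to CPU if the native
    // CUDA stack throws. Note the original trace shows the exception at
    // Run(), not construction, so callers may also want a catch there.
    public static PaddleOcrAll Create()
    {
        try
        {
            return new PaddleOcrAll(LocalFullModels.EnglishV4, PaddleDevice.Gpu());
        }
        catch (SEHException)
        {
            // CUDA/cuDNN unavailable or incompatible: use the CPU device.
            return new PaddleOcrAll(LocalFullModels.EnglishV4, PaddleDevice.Mkldnn());
        }
    }
}
```

This does not fix the root cause (missing CUDA/cuDNN on the machine), but it keeps the app usable instead of crashing.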
IDE
Visual Studio 2022
OS version
Windows 11
I've installed CUDA and cuDNN, and it already works with the GPU. As I understand it, I also need to install TensorRT, but this page https://www.pythonf.cn/read/64495 is unavailable.
@sdcb does the end user need to install CUDA and cuDNN? I compiled the sample project and sent it to my friend, and the program doesn't work. My friend hasn't installed CUDA and cuDNN.
does the end user need to install CUDA and cuDNN?
Yes.
Yes.
That's very sad. I thought it compiled with all the needed libs. I can't force end users to download CUDA and cuDNN, so the only thing left is to stay on CPU recognition :C
Is it possible to use the GPU without the end user having to install CUDA, cuDNN, etc.? I did it: you just need to download the .whl files of the CUDA runtime, cuDNN, cuBLAS, and cuFFT from https://pypi.org/ , https://developer.nvidia.com/rdp/cudnn-archive , or https://pypi.nvidia.com/ . Remember to find the correct version for your operating system. After downloading, rename the .whl file to .zip, open it with WinRAR, and copy the DLL files inside into the software folder (on Windows, in my case). This works perfectly for the GPU. However, I can't get TensorRT to work yet. This is the error message:
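If you ship the DLLs this way, it may help to verify at startup that they can actually be loaded before selecting the GPU device. A minimal sketch using only the .NET standard library; the DLL file names below are assumptions for a CUDA 11.x / cuDNN 8.x setup, so adjust them to match the files you copied out of the .whl packages:

```csharp
using System.Runtime.InteropServices;

static class CudaProbe
{
    // Assumed names for CUDA 11.x / cuDNN 8.x; verify against your bundle.
    static readonly string[] RequiredDlls =
    {
        "cudart64_110.dll", // CUDA runtime (11.x)
        "cudnn64_8.dll",    // cuDNN 8.x
        "cublas64_11.dll",  // cuBLAS for CUDA 11.x
    };

    // True only if every required DLL resolves from the app folder or PATH.
    public static bool GpuUsable()
    {
        foreach (string dll in RequiredDlls)
        {
            if (!NativeLibrary.TryLoad(dll, out _))
                return false;
        }
        return true;
    }
}
```

An app could call `CudaProbe.GpuUsable()` once at startup and pick `PaddleDevice.Gpu()` or a CPU device accordingly, instead of crashing deep inside the native layer.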
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0826 03:01:11.569357 26032 analysis_config.cc:1475] In CollectShapeInfo mode, we will disable optimizations and collect the shape information of all intermediate tensors in the compute graph and calculate the min_shape, max_shape and opt_shape.
--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
Not support stack backtrace yet.
----------------------
Error Message Summary:
----------------------
PreconditionNotMetError: To use Paddle-TensorRT, please compile with TENSORRT first. (at D:\a\PaddleSharp\PaddleSharp\paddle-src\paddle\fluid\inference\api\analysis_config.cc:787)
D:\Cong Viec\2025 Project\TestTensorRT\TestTensorRT\bin\x64\Debug\net9.0-windows10.0.19041.0\win-x64\TestTensorRT.exe (process 12236) exited with code -1 (0xffffffff).
code:
public (PaddleOcrDetector, PaddleOcrRecognizer) InitializeOcrModels()
{
    var detModel = DetectionModel.FromDirectory(@"Models\PP-OCRv5_mobile_det_infer", ModelVersion.V5);
    var recModel = RecognizationModel.FromDirectoryV5(@"Models\PP-OCRv5_mobile_rec_infer");

    // This variant works on plain GPU:
    //return (new PaddleOcrDetector(detModel, PaddleDevice.Gpu()), new PaddleOcrRecognizer(recModel, PaddleDevice.Gpu()));

    var det = new PaddleOcrDetector(detModel, PaddleDevice.TensorRt(@"Models\txt\det.txt"));
    var rec = new PaddleOcrRecognizer(recModel, PaddleDevice.TensorRt(@"Models\txt\rec.txt"));
    return (det, rec);
}
My English isn't very good, and after reading the sample I still don't understand where det.txt and rec.txt come from. Will they be generated automatically? And what is wrong or missing in my configuration? Any sample code or more detailed instructions would be appreciated. Info: I installed the correct Paddle version for CUDA 11.8 / cuDNN 8.9 (which supports TensorRT, according to PaddleOCR). All the CUDA, cuDNN, cuBLAS, and TensorRT DLLs are in the software folder. .NET 9, Windows 11.
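Two observations, hedged since I am inferring from the log above rather than from the PaddleSharp source: the "CollectShapeInfo mode" line suggests the .txt file passed to PaddleDevice.TensorRt is a shape-range cache that is collected and written on the first run, so it need not exist beforehand; and the PreconditionNotMetError ("please compile with TENSORRT first") indicates the installed native runtime package itself was built without TensorRT, so no configuration change in C# can enable it; a TensorRT-enabled Paddle runtime is required. Until then, a sketch that tries TensorRT and falls back to plain CUDA, reusing the factories and cache path from the code above:

```csharp
using System;
using Sdcb.PaddleInference;
using Sdcb.PaddleOCR;
using Sdcb.PaddleOCR.Models;

static class DetectorFactory
{
    // Prefer TensorRT, but fall back to the CUDA device when the native
    // runtime was built without TensorRT support (the error above).
    public static PaddleOcrDetector Create(DetectionModel detModel)
    {
        try
        {
            // det.txt is the shape-range cache path; it appears to be
            // generated automatically on the first run (assumption).
            return new PaddleOcrDetector(detModel, PaddleDevice.TensorRt(@"Models\txt\det.txt"));
        }
        catch (Exception)
        {
            return new PaddleOcrDetector(detModel, PaddleDevice.Gpu());
        }
    }
}
```

The same pattern would apply to the recognizer with rec.txt.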