An error dialog appears when I try to load a TorchScript model.
Description
I am using DJL 0.29.0 with JDK 17 on Windows 11. The libtorch dependencies load correctly and the PyTorch engine initializes without issue, but when I try to load a TorchScript model, a JNI error dialog pops up.
Expected Behavior
The TorchScript model should load without errors.
Error Message
Stack trace:
22:04:57.143 [Test worker] DEBUG ai.djl.engine.Engine -- Registering EngineProvider: PyTorch
22:04:57.143 [Test worker] DEBUG ai.djl.engine.Engine -- Found default engine: PyTorch
22:04:57.159 [Test worker] DEBUG ai.djl.util.cuda.CudaUtils -- Found cudart: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\bin\cudart64_12.dll
22:04:57.263 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\asmjit.dll
22:04:57.263 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\c10.dll
22:04:57.263 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cublas64_12.dll
22:04:57.335 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cublasLt64_12.dll
22:04:57.385 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudart64_12.dll
22:04:57.385 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cufft64_11.dll
22:04:57.401 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cufftw64_11.dll
22:04:57.401 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cupti64_2023.1.1.dll
22:04:57.401 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\curand64_10.dll
22:04:57.401 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cusolver64_11.dll
22:04:57.401 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cusolverMg64_11.dll
22:04:57.401 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cusparse64_12.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\libiomp5md.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\libiompstubs5md.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\mkl_core.1.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\mkl_def.1.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\mkl_intel_thread.1.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\mkl_vml_def.1.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\nvJitLink_120_0.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\nvrtc-builtins64_121.dll
22:04:57.416 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\nvrtc64_120_0.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\nvToolsExt64_1.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\uv.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\zlibwapi.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn64_8.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn_ops_infer64_8.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn_ops_train64_8.dll
22:04:57.432 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn_cnn_infer64_8.dll
22:04:57.447 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn_cnn_train64_8.dll
22:04:57.447 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn_adv_infer64_8.dll
22:04:57.447 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\cudnn_adv_train64_8.dll
22:04:57.447 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\fbgemm.dll
22:04:57.447 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\caffe2_nvrtc.dll
22:04:57.447 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\torch_cpu.dll
22:04:59.879 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\c10_cuda.dll
22:04:59.879 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\torch_cuda.dll
22:05:00.020 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\torch.dll
22:05:00.020 [Test worker] DEBUG ai.djl.pytorch.jni.LibUtils -- Loading native library: D:\Env\Libtorch\2.3.1.CU121\lib\djl_torch.dll
22:05:00.020 [Test worker] INFO ai.djl.pytorch.engine.PtEngine -- PyTorch graph executor optimizer is enabled, this may impact your inference latency and throughput. See: https://docs.djl.ai/docs/development/inference_performance_optimization.html#graph-executor-optimization
22:05:00.051 [Test worker] INFO ai.djl.pytorch.engine.PtEngine -- Number of inter-op threads is 20
22:05:00.051 [Test worker] INFO ai.djl.pytorch.engine.PtEngine -- Number of intra-op threads is 14
22:05:00.067 [Test worker] DEBUG ai.djl.pytorch.jni.JniUtils -- mapLocation: false
22:05:00.067 [Test worker] DEBUG ai.djl.pytorch.jni.JniUtils -- extraFileKeys: []
22:05:00.067 [Test worker] DEBUG ai.djl.pytorch.jni.JniUtils -- open file failed because of errno 22 on fopen: Invalid argument, file path:
open file failed because of errno 22 on fopen: Invalid argument, file path:
ai.djl.engine.EngineException: open file failed because of errno 22 on fopen: Invalid argument, file path:
at ai.djl.pytorch.jni.PyTorchLibrary.moduleLoad(Native Method)
at ai.djl.pytorch.jni.JniUtils.loadModule(JniUtils.java:1761)
at ai.djl.pytorch.engine.PtModel.load(PtModel.java:99)
at ai.djl.Model.load(Model.java:110)
at buddha.djl.PytorchEngineTest.testLoadModel(PytorchEngineTest.java:31)
How to Reproduce?
The failing code is shown below; the model file was downloaded from the PyTorch model zoo provided by DJL.
@Test
public void testLoadModel() throws Throwable {
    // Use try-with-resources so the native model handle is released even on failure.
    try (Model model = Model.newInstance("YoloV8n")) {
        model.load(Paths.get("ModelZoo/yolov8n.pt"));
    }
}
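Note that the file path in the error message is empty, which suggests the path never reached the native layer intact. As a quick sanity check before the load call (a minimal sketch using only java.nio.file, not DJL API; the class and method names are hypothetical), this prints the absolute path the test would actually try to open. Relative paths resolve against the Gradle test worker's working directory, which may differ from the project root:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ModelPathCheck {

    /** Returns the absolute, normalized path the JVM would try to open. */
    public static Path resolve(String relative) {
        return Paths.get(relative).toAbsolutePath().normalize();
    }

    public static void main(String[] args) {
        // Hypothetical model location from the failing test above.
        Path modelPath = resolve("ModelZoo/yolov8n.pt");
        System.out.println("Resolved to: " + modelPath);
        System.out.println("Exists: " + Files.exists(modelPath));
    }
}
```

If `Exists: false` is printed, the fopen errno 22 is most likely a path-resolution problem in the test setup rather than a bug in the model file itself.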
Steps to reproduce
1. Download yolov8n.pt from the DJL PyTorch model zoo into the ModelZoo directory.
2. Run the JUnit test above; the EngineException is thrown from Model.load.
What have you tried to solve it?
Environment Info
Please run the command ./gradlew debugEnv from the root directory of DJL (if necessary, clone DJL first). It will output information about your system, environment, and installation that can help us debug your issue. Paste the output of the command below: