Trying to use deeplearning4j in Web App (.war) with WildFly (JBoss) - RuntimeException
Hello there,
I've been trying to create a simple vanilla deeplearning4j application using WildFly 29 on both Windows and Linux (Docker container), but I'm getting the same error in both:
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) Caused by: java.lang.RuntimeException: java.lang.NullPointerException: Cannot invoke "org.bytedeco.javacpp.indexer.Raw.putLong(long, long)" because "org.bytedeco.javacpp.indexer.LongRawIndexer.RAW" is null
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.nd4j.linalg.api.buffer.BaseDataBuffer.readContent(BaseDataBuffer.java:1678)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.nd4j.linalg.api.buffer.BaseDataBuffer.read(BaseDataBuffer.java:1572)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) ... 55 more
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) Caused by: java.lang.NullPointerException: Cannot invoke "org.bytedeco.javacpp.indexer.Raw.putLong(long, long)" because "org.bytedeco.javacpp.indexer.LongRawIndexer.RAW" is null
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.bytedeco.javacpp.indexer.LongRawIndexer.putRaw(LongRawIndexer.java:107)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.bytedeco.javacpp.indexer.LongRawIndexer.put(LongRawIndexer.java:111)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.nd4j.linalg.api.buffer.BaseDataBuffer.put(BaseDataBuffer.java:1268)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.nd4j.linalg.api.buffer.BaseDataBuffer.putByDestinationType(BaseDataBuffer.java:1035)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) at deployment.sa-deeplearning-1.0.0-M2.1.war//org.nd4j.linalg.api.buffer.BaseDataBuffer.readContent(BaseDataBuffer.java:1630)
2025-04-30 17:31:28,633 ERROR [stderr] (default task-1) ... 56 more
I've tried all the tricks I could find on the web, from setting the platform attribute during the Maven build:
mvn -Djavacpp.platform=windows-x86_64 clean install
to creating an uber JAR on both Windows and Linux, but I'm still getting the same error. Sometimes I get the ND4J backend error instead, but that one is fixed by adding the proper dependency:
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native</artifactId>
    <version>${dl4j-master.version}</version>
</dependency>
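From what I could find in the ND4J docs, when packaging for a different OS you may also need the platform-specific classifier artifact (or nd4j-native-platform to bundle every platform). For my Linux container that would presumably be something like:
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native</artifactId>
    <version>${dl4j-master.version}</version>
    <classifier>linux-x86_64</classifier>
</dependency>
I'm not sure whether that is the right fix for the indexer error, though.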
When I run the test directly with plain Java, everything works perfectly on both my local Windows machine and inside the Docker container's terminal: the example creates the model, fits the training data, and stores the model + normalizer. But when I try to either train a new model or load the saved model inside WildFly, I always get this error. Please help 🤕
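In case it helps picture it, here is a minimal sketch of the kind of save/load code involved, assuming the standard ModelSerializer API and a MultiLayerNetwork (the helper class and method names are mine):

import java.io.File;
import java.io.IOException;

import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

// Hypothetical helper: the same two calls are used from plain Java (works)
// and from the code deployed in the .war (fails with the NPE above).
public class ModelStore {

    // Persist the trained network to disk, including the updater state.
    public static void save(MultiLayerNetwork net, File target) throws IOException {
        ModelSerializer.writeModel(net, target, true);
    }

    // Reload the network; inside WildFly this is where the LongRawIndexer NPE shows up.
    public static MultiLayerNetwork load(File source) throws IOException {
        return ModelSerializer.restoreMultiLayerNetwork(source);
    }
}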
@diavole hmm the basics seem ok. Are you building on your own local system then deploying to another one?
Note: why do you need the UI after the model is deployed? Updates? To be honest it's mainly meant to be a local interface to watch how a model is training, and then you leave it. Are you doing updates remotely on your server? That will come with a whole host of problems. If you are doing training from scratch and using the UI, you could look at setting up a remote connector or something as a separate service instead.
One thing that comes to mind is maybe setting org.bytedeco.logger.debug to true (a system property set via a -D argument).
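If it helps, on a standalone WildFly a -D flag like that usually goes into JAVA_OPTS in bin/standalone.conf (standalone.conf.bat on Windows), e.g. something along the lines of:
JAVA_OPTS="$JAVA_OPTS -Dorg.bytedeco.logger.debug=true"
(property name as above; worth double-checking it against the JavaCPP docs)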
Thanks for the quick response @agibsonccc, you guys are awesome.
I just added the property to the server and ran the tests again; here is the full log, same issue.
Our main process runs on app servers like WildFly, which is why it's faster and safer for us to run the model in the same app server where our main solution runs. It's not that we want the UI accessing the model; we use a UI for other purposes, but we want the models available locally for our solution to use. I'm actually writing the workaround right now, which is running the model in a separate container.
Cheers, Jorge