elasticsearch-jdbc
ERROR [importer.jdbc] unable to create new native thread
I run a scheduler with "schedule" : "0/5 0-59 0-23 ? * *", but after about 20 hours an OOM error happened:

[04:11:11,016][INFO ][importer.jdbc ][pool-3-thread-2] already scheduled
[04:11:11,017][ERROR][importer.jdbc ][pool-3-thread-2] unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method) ~[?:1.8.0_101]
    at java.lang.Thread.start(Thread.java:714) ~[?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950) ~[?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357) ~[?:1.8.0_101]
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134) ~[?:1.8.0_101]
    at org.xbib.pipeline.SimplePipelineExecutor.execute(SimplePipelineExecutor.java:100) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
    at org.xbib.tools.JDBCImporter.execute(JDBCImporter.java:240) ~[elasticsearch-jdbc-2.3.4.1.jar:?]
    at org.xbib.tools.JDBCImporter.run(JDBCImporter.java:149) [elasticsearch-jdbc-2.3.4.1.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_101]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_101]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
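This flavor of OutOfMemoryError means the JVM could not obtain a new OS thread; it usually points to too many live threads (a leak, or a low ulimit) rather than heap exhaustion. A quick way to check, assuming Linux and adjusting the pgrep pattern to however you launch the importer:

# Watch the importer's live thread count; steady growth across
# scheduled runs indicates a thread leak.
PID=$(pgrep -f org.xbib.tools.JDBCImporter | head -n 1)
while true; do
    echo "$(date) threads: $(ls /proc/${PID}/task | wc -l)"
    sleep 60
done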
Versions: elasticsearch-jdbc-2.3.4.1, elasticsearch-2.4.0
My running jdbc script:

#!/bin/sh
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
bin=${DIR}/../bin
lib=${DIR}/../lib
echo '
{
    "type" : "jdbc",
    "jdbc" : {
        "url" : "jdbc:mysql://localhost:3306/my_db",
        "statefile" : "statefile.json",
        "schedule" : "0/5 0-59 0-23 ? * *",
        "user" : "root",
        "password" : "password",
        "sql" : [
            {
                "statement" : "select * from my_table where mytimestamp > ?",
                "parameter" : [ "$metrics.lastexecutionstart" ]
            }
        ],
        "index" : "my_index",
        "type" : "my_table",
        "elasticsearch" : {
            "cluster" : "my_cluster",
            "host" : "localhost",
            "port" : 9300
        }
    }
}
' | java \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter
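The stack trace above suggests each scheduled run goes through SimplePipelineExecutor and submits work to a fresh pool; if those pools are never shut down, threads accumulate until the OS refuses to create more. A workaround some people use (a sketch, not an official fix) is to drop the "schedule" field from the config and let the OS scheduler launch one-shot imports instead. Note that cron's finest granularity is one minute, so this cannot match a 5-second Quartz schedule. Assuming the script above is saved at /opt/importer/bin/import.sh (hypothetical path):

# crontab entry: run a one-shot import every 5 minutes
*/5 * * * * /opt/importer/bin/import.sh >> /var/log/jdbc-import.log 2>&1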
One more piece of perhaps useful information: I have checked the Elasticsearch logs, and when elasticsearch-jdbc hit the OOM errors, Elasticsearch itself was still running well.
Looking forward to your answer.
One other problem: when I insert 300 rows into the MySQL table, about 5 rows are missing from the index. I have tested this many times.
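Not confirmed from your logs, but a common cause of missing rows with this setup: rows committed while a run is still in progress can carry a mytimestamp older than the next run's $metrics.lastexecutionstart, so the next query skips them. One mitigation is to re-read a small overlap window; the DATE_SUB call and the 1-minute window below are illustrative values, and this assumes your select exposes a stable _id column so re-read rows overwrite their earlier copies instead of duplicating:

"sql" : [
    {
        "statement" : "select id as _id, t.* from my_table t where mytimestamp > date_sub(?, interval 1 minute)",
        "parameter" : [ "$metrics.lastexecutionstart" ]
    }
]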
From the exception you faced, it seems to be an out-of-memory error, so scale up your heap size when starting the Java process. For your reference: Increase Java Heap Size
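If you try that, the heap is set with JVM flags on the importer's java command; the values below are illustrative, not recommendations. One caveat: for the "unable to create new native thread" variant specifically, the hard limit is OS threads rather than heap, so a larger heap may not help on its own; keep an eye on the thread count as well.

# illustrative heap settings added to the java invocation in the run script
java -Xms512m -Xmx2g \
    -cp "${lib}/*" \
    -Dlog4j.configurationFile=${bin}/log4j2.xml \
    org.xbib.tools.Runner \
    org.xbib.tools.JDBCImporter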
Did you resolve this issue? I also ran into it.