
Database inflating to huge size with high CPU usage, triggered by "delete old search results" task

Open benmullins opened this issue 6 years ago • 16 comments

I'm having an issue where my nzbhydra.mv.db file will inflate hugely shortly after startup, seemingly after the first time the problem detection task runs. I've seen the other issues on here about large database files, but I haven't seen anyone mention this specific problem.

Basically, if I shut down or restart Hydra, the database file shrinks to a little under 300 MB, but within half an hour to at most a couple of hours after startup, it begins inflating to 15 GB (it stops at exactly 15 GB, every time). Once the inflation starts, it completes within roughly 10-12 minutes, hammering the CPU the entire time.

I turned on debug logging, and this is an example of what is logged, starting around the time the database starts inflating and ending when it gets to its final destination of 15 GB:

2019-06-18 01:02:08.627 DEBUG --- [     HydraTask4] o.n.p.ProblemDetectorTask                : Exectung problem detector org.nzbhydra.problemdetection.OutOfMemoryDetector
2019-06-18 01:02:08.630 DEBUG --- [     HydraTask4] o.n.p.ProblemDetectorTask                : Exectung problem detector org.nzbhydra.problemdetection.OutdatedWrapperDetector
2019-06-18 01:14:37.170  WARN --- [     HydraTask2] o.h.engine.jdbc.spi.SqlExceptionHelper   : SQL Error: 50200, SQLState: HYT00
2019-06-18 01:14:37.171 ERROR --- [     HydraTask2] o.h.engine.jdbc.spi.SqlExceptionHelper   : Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND  < DATEADD('SECOND', ?, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD  where SEARCH_RESULT_ID is not null) [50200-199]
2019-06-18 01:14:37.250 ERROR --- [     HydraTask2] o.s.s.s.TaskUtils$LoggingErrorHandler    : Unexpected error occurred in scheduled task.
javax.persistence.PessimisticLockException: could not execute statement
	at org.hibernate.internal.ExceptionConverterImpl.wrapLockException(ExceptionConverterImpl.java:273)
	at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:108)
	at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:181)
	at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1604)
	at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:56)
	at org.nzbhydra.searching.cleanup.OldResultsCleanup$$FastClassBySpringCGLIB$$cad1559f.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
	at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
	at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
	at org.nzbhydra.searching.cleanup.OldResultsCleanup$$EnhancerBySpringCGLIB$$f2169b7b.deleteOldResults(<generated>)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:84)
	at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
	at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:93)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.hibernate.PessimisticLockException: could not execute statement
	at org.hibernate.dialect.H2Dialect$3.convert(H2Dialect.java:353)
	at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
	at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
	at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99)
	at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:178)
	at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
	at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
	at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
	at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1594)
	... 23 common frames omitted
Caused by: org.h2.jdbc.JdbcSQLTimeoutException: Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND  < DATEADD('SECOND', ?, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD  where SEARCH_RESULT_ID is not null) [50200-199]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:508)
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
	at org.h2.message.DbException.get(DbException.java:194)
	at org.h2.command.Command.filterConcurrentUpdate(Command.java:333)
	at org.h2.command.Command.executeUpdate(Command.java:274)
	at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
	at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
	at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
	at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
	at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
	... 27 common frames omitted
Caused by: org.h2.message.DbException: Row not found when trying to delete from index "PUBLIC.FKR5G21PDW3HHS1SEFVJY30TGMI_INDEX_C: 1330732348526739664" [90112-199]
	at org.h2.message.DbException.get(DbException.java:205)
	at org.h2.message.DbException.get(DbException.java:181)
	at org.h2.mvstore.db.MVSecondaryIndex.remove(MVSecondaryIndex.java:236)
	at org.h2.mvstore.db.MVTable.removeRow(MVTable.java:514)
	at org.h2.command.dml.Delete.update(Delete.java:123)
	at org.h2.command.CommandContainer.update(CommandContainer.java:133)
	at org.h2.command.Command.executeUpdate(Command.java:267)
	... 32 common frames omitted
Caused by: org.h2.jdbc.JdbcSQLTransientException: Row not found when trying to delete from index "PUBLIC.FKR5G21PDW3HHS1SEFVJY30TGMI_INDEX_C: 1330732348526739664" [90112-199]
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:621)
	at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
	... 39 common frames omitted
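
For context, the failing statement deletes SEARCHRESULT rows older than a cutoff unless they are referenced from INDEXERNZBDOWNLOAD. Here is a minimal sqlite3 sketch of that logic (table and column names are taken from the statement above; the schema is simplified and the `DATEADD` epoch arithmetic is replaced by a plain integer timestamp, so this only illustrates the query's shape, not H2's locking behavior):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SEARCHRESULT (ID INTEGER PRIMARY KEY, FIRST_FOUND INTEGER);
    CREATE TABLE INDEXERNZBDOWNLOAD (ID INTEGER PRIMARY KEY, SEARCH_RESULT_ID INTEGER);
    INSERT INTO SEARCHRESULT VALUES (1, 100), (2, 100), (3, 900);
    INSERT INTO INDEXERNZBDOWNLOAD VALUES (1, 2);  -- result 2 was downloaded, so it must be kept
""")

cutoff = 500  # everything first found before this counts as "old"
conn.execute("""
    DELETE FROM SEARCHRESULT
    WHERE FIRST_FOUND < ?
      AND ID NOT IN (SELECT SEARCH_RESULT_ID FROM INDEXERNZBDOWNLOAD
                     WHERE SEARCH_RESULT_ID IS NOT NULL)
""", (cutoff,))

remaining = [row[0] for row in conn.execute("SELECT ID FROM SEARCHRESULT ORDER BY ID")]
print(remaining)  # → [2, 3]: the downloaded result and the recent one survive
```

On a SEARCHRESULT table with millions of rows, a single DELETE like this runs as one large transaction, which would be consistent with the long lock hold and the large temporary/undo data reported here.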

I've watched this happen a few times, and the DB inflation seems to be triggered immediately after Hydra runs these problem detection tasks; at that point large temporary db files start getting written to the database directory and (I presume) rolled into the main db file. The SQL exception and Java traceback are identical each time.

Although I don't think it's related, since memory usage was comfortably below 256 MB, I tried increasing the allocation to 512 MB because of the OOM detector task; nothing changed.

And again, the database shrinks back to less than 300 MB when Hydra is shut down, but after it starts again the cycle repeats. I've also tried the manual shutdown defrag command mentioned in another issue; while it did reduce the database size, the inflation happened again afterward with no apparent change.

I'm using the linuxserver.io container, and it's up-to-date with the latest 2.6.11 release of Hydra.

benmullins avatar Jun 18 '19 09:06 benmullins

I apologize, I posted this issue a bit too hastily. I don't think it's the problem detection task, but the delete old search results task.

Manually triggering that task causes one of the regular bouts of several minutes of high CPU usage that got me looking into what Hydra was up to in the first place. It also starts generating temp.db files (though the main nzbhydra.mv.db file doesn't grow much beyond the 15 GB it reached on the previous run; 15.1 or 15.2 GB at most).

2019-06-18 02:39:51.542  INFO --- [0.0-5076-exec-1] org.nzbhydra.tasks.HydraTaskScheduler    : [Search: 840714, Host: 192.168.1.50] Running task "Delete old search results" now
2019-06-18 02:52:31.485  WARN --- [     HydraTask4] o.h.engine.jdbc.spi.SqlExceptionHelper   : SQL Error: 50200, SQLState: HYT00
2019-06-18 02:52:31.485 ERROR --- [     HydraTask4] o.h.engine.jdbc.spi.SqlExceptionHelper   : Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND  < DATEADD('SECOND', ?, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD  where SEARCH_RESULT_ID is not null) [50200-199]
2019-06-18 02:52:31.527 ERROR --- [     HydraTask4] o.s.s.s.TaskUtils$LoggingErrorHandler    : Unexpected error occurred in scheduled task.

javax.persistence.PessimisticLockException: could not execute statement

Traceback omitted; identical to the one in the post above.

This makes more sense given what the error said in the first place. Although the database file only inflates the first time, the roughly 12 minutes of CPU thrashing followed by an error dump in the log happens every time the "delete old search results" task runs, i.e. about once an hour.

benmullins avatar Jun 18 '19 09:06 benmullins

There's already an issue about the database size but until now I never knew what caused the problem. I'm currently on vacation but will take a closer look when I'm back. Thanks for the analysis.


theotherp avatar Jun 18 '19 10:06 theotherp

~Would you be willing to send me your database file (as in upload it and send me the URL)? Full disclaimer: it contains not only your full history but also your API keys. I will not look at the contents of the database itself, just the metadata. I don't care what you download and do not intend to steal anybody's API keys.~

theotherp avatar Jun 26 '19 07:06 theotherp

Nevermind, I can reproduce this.

theotherp avatar Jun 26 '19 08:06 theotherp

Seeing this same issue as well

ghost avatar Jun 29 '19 10:06 ghost

Seeing the same thing

LachlanGytha avatar Aug 05 '19 14:08 LachlanGytha

Same here as well. Went from 256 MB to 1.1 GB.

SBBH avatar Aug 26 '19 14:08 SBBH

@SBBH 1.1GB is somewhat reasonable. In other cases the database may grow to 20 or 30 GB which certainly is not...

theotherp avatar Aug 26 '19 15:08 theotherp

Have this as well. Size is at 22GB.

naq90 avatar Oct 01 '19 18:10 naq90

@naq90

Try this: Add a new folder `config` in the Hydra main folder. There, create a new file `application.properties` with this content: `spring.datasource.url=jdbc:h2:file:${nzbhydra.dataFolder:.}/database/nzbhydra;MAX_COMPACT_TIME=${main.databaseCompactTime:15000};WRITE_DELAY=5000;TRACE_MAX_FILE_SIZE=16;WRITE_DELAY=20000;RETENTION_TIME=1000`. Restart Hydra, give it a minute, and restart again.

Monitor the database file size over a couple of days.
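
One thing to watch with that URL: it sets `WRITE_DELAY` twice (5000 and then 20000), and H2 may refuse a connection URL that repeats a setting (a later comment in this thread ran into exactly that). A cleaned-up version keeping a single `WRITE_DELAY` might look like the following; note this is a hypothetical sketch that assumes the longer 20000 ms delay was the intended value:

```properties
# config/application.properties — single WRITE_DELAY (20000 assumed to be the intended value)
spring.datasource.url=jdbc:h2:file:${nzbhydra.dataFolder:.}/database/nzbhydra;MAX_COMPACT_TIME=${main.databaseCompactTime:15000};TRACE_MAX_FILE_SIZE=16;WRITE_DELAY=20000;RETENTION_TIME=1000
```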

theotherp avatar Oct 01 '19 19:10 theotherp

Same here on Windows 7 x64, Hydra2 at the latest build as of today. Some time ago (not sure which build) my database just exploded in size and seems to refuse to compact despite my increasing the compact time to a much larger value (up to 15 minutes now); it still grows daily unless I manually compact it.

CollinChaffin avatar Oct 07 '19 10:10 CollinChaffin

Collin, have you tried the steps I described above?


theotherp avatar Oct 07 '19 11:10 theotherp

Yes - the db now gets corrupted on every shutdown, and Hydra will not restart. After a DB restore it starts fine, then again ends in corruption and a failure to restart. Could it be due to the last 2 builds having implemented a db update and then a rollback? I have iteratively updated through EVERY build on Windows for quite some time without missing a single one, so my db was updated 2 builds ago, then rolled back via the 10-06 build.

CollinChaffin avatar Oct 27 '19 12:10 CollinChaffin

@CollinChaffin Please zip and upload your database file and send me the link via email to [email protected] so I can test it myself. Disclaimer: The database not only contains your history but also indexer API keys. I'll remove the data before doing any tests and have no interest whatsoever in what you download or in your API keys as I have enough of them...

theotherp avatar Oct 27 '19 12:10 theotherp

@theotherp I am using the latest docker container, and have the same issue after every "delete old search results" task. I "shutdown defrag" the H2 db, stop and remove the container and recreate it afterwards. The container is then running fine, db size is about 350MB and the process is behaving nicely.

As soon as I trigger the above mentioned task, CPU and memory usage skyrockets and the db is growing until it stops at 53GB.

I have tried the application.properties mounted in /app/hydra2/config/, but I had to remove WRITE_DELAY=5000; from the config because it complained about the setting appearing twice (so WRITE_DELAY=20000 is now my current setting). The issue remains.

2019-10-24 13:01:54.700 ERROR --- [     HydraTask2] o.h.engine.jdbc.spi.SqlExceptionHelper   : Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND  < DATEADD('SECOND', 1570704119, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD where SEARCH_RESULT_ID is not null) [50200-199]
2019-10-24 13:01:54.742 ERROR --- [     HydraTask2] o.s.s.s.TaskUtils$LoggingErrorHandler    : Unexpected error occurred in scheduled task.

javax.persistence.PessimisticLockException: could not execute statement
        at org.hibernate.internal.ExceptionConverterImpl.wrapLockException(ExceptionConverterImpl.java:273)
        at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:108)
        at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:181)
        at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1604)
        at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:58)
        at org.nzbhydra.searching.cleanup.OldResultsCleanup$$FastClassBySpringCGLIB$$cad1559f.invoke(<generated>)
        at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
        at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
        at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
        at org.nzbhydra.searching.cleanup.OldResultsCleanup$$EnhancerBySpringCGLIB$$3d0fc2c8.deleteOldResults(<generated>)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Caused by: org.hibernate.PessimisticLockException: could not execute statement
        at org.hibernate.dialect.H2Dialect$3.convert(H2Dialect.java:353)
        at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
        at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
        at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99)
        at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:178)
        at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
        at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
        at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
        at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1594)
        at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:58)
        at org.nzbhydra.searching.cleanup.OldResultsCleanup$$FastClassBySpringCGLIB$$cad1559f.invoke(<generated>)
        at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
        at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
Caused by: org.h2.jdbc.JdbcSQLTimeoutException: Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND  < DATEADD('SECOND', 1570704119, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD where SEARCH_RESULT_ID is not null) [50200-199]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:508)
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
        at org.h2.message.DbException.get(DbException.java:194)
        at org.h2.command.Command.filterConcurrentUpdate(Command.java:333)
        at org.h2.command.Command.executeUpdate(Command.java:274)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
        at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
        at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
        at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
        at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
        at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
        at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
        at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1594)
        at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:58)
Caused by: org.h2.message.DbException: Row not found when trying to delete from index "PUBLIC.FKR5G21PDW3HHS1SEFVJY30TGMI_INDEX_C: 3952181201435365985" [90112-199]
        at org.h2.message.DbException.get(DbException.java:205)
        at org.h2.message.DbException.get(DbException.java:181)
        at org.h2.mvstore.db.MVSecondaryIndex.remove(MVSecondaryIndex.java:236)
        at org.h2.mvstore.db.MVTable.removeRow(MVTable.java:514)
        at org.h2.command.dml.Delete.update(Delete.java:123)
        at org.h2.command.CommandContainer.update(CommandContainer.java:133)
        at org.h2.command.Command.executeUpdate(Command.java:267)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
        at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
        at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
        at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
        at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
        at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
        at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
Caused by: org.h2.jdbc.JdbcSQLTransientException: Row not found when trying to delete from index "PUBLIC.FKR5G21PDW3HHS1SEFVJY30TGMI_INDEX_C: 3952181201435365985" [90112-199]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:621)
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
        at org.h2.message.DbException.get(DbException.java:205)
        at org.h2.message.DbException.get(DbException.java:181)
        at org.h2.mvstore.db.MVSecondaryIndex.remove(MVSecondaryIndex.java:236)
        at org.h2.mvstore.db.MVTable.removeRow(MVTable.java:514)
        at org.h2.command.dml.Delete.update(Delete.java:123)
        at org.h2.command.CommandContainer.update(CommandContainer.java:133)
        at org.h2.command.Command.executeUpdate(Command.java:267)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
        at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
        at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
        at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
        at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)

hanzoh avatar Oct 28 '19 09:10 hanzoh

The update to 2.7.6 should have created a new database. Did that fix anything?

theotherp avatar Nov 17 '19 10:11 theotherp