
[Bug]: Process deadlock causes the Amoro process to be stuck in committing permanently.

Open lintingbin opened this issue 11 months ago • 3 comments

What happened?

If the clean-orphan-file.enabled table parameter is set to false before the Amoro optimizing process commits successfully, the process deadlocks and remains stuck in the committing state permanently, never completing.
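
Judging from the stack traces below, this looks like a classic lock-ordering inversion: the refresh thread takes the TableRuntime lock (via StatedPersistentBase.invokeConsistency) and then tries to acquire the optimizing process lock in TableOptimizingProcess.close, while the commit thread holds the process lock and tries to acquire the TableRuntime lock in completeProcess. A minimal, self-contained sketch of that pattern follows; the class and lock names are illustrative only and do not mirror the actual Amoro code.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal illustration of the lock-ordering inversion seen in the stack traces.
// Names are hypothetical; this is not the real Amoro code.
public class DeadlockSketch {

    static final ReentrantLock tableRuntimeLock = new ReentrantLock();       // held by the refresh thread
    static final ReentrantLock optimizingProcessLock = new ReentrantLock();  // held by the commit thread

    public static void main(String[] args) {
        // Thread 1: like async-table-runtime-refresh-executor-1,
        // locks the TableRuntime first, then tries to close the optimizing process.
        Thread refresh = new Thread(() -> {
            tableRuntimeLock.lock();
            try {
                sleep(100); // give the other thread time to take its lock
                optimizingProcessLock.lock(); // blocks forever once the commit thread holds it
                optimizingProcessLock.unlock();
            } finally {
                tableRuntimeLock.unlock();
            }
        }, "refresh-executor");

        // Thread 2: like async-optimizing-commit-executor-48,
        // locks the optimizing process first, then tries to update the TableRuntime state.
        Thread commit = new Thread(() -> {
            optimizingProcessLock.lock();
            try {
                sleep(100);
                tableRuntimeLock.lock(); // blocks forever once the refresh thread holds it
                tableRuntimeLock.unlock();
            } finally {
                optimizingProcessLock.unlock();
            }
        }, "commit-executor");

        refresh.start();
        commit.start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```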

Affects Versions

0.7.1

What table formats are you seeing the problem on?

Iceberg

What engines are you seeing the problem on?

AMS

How to reproduce

No response

Relevant log output

"async-table-runtime-refresh-executor-1" #68 prio=5 os_prio=0 tid=0x00007d750de2b000 nid=0x91 waiting on condition [0x00007d688acfe000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x00007d6a51002b70> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
	at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
	at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
	at org.apache.amoro.server.optimizing.OptimizingQueue$TableOptimizingProcess.close(OptimizingQueue.java:426)
	at org.apache.amoro.server.table.TableRuntime.updateConfigInternal(TableRuntime.java:358)
	at org.apache.amoro.server.table.TableRuntime.lambda$refresh$9(TableRuntime.java:241)
	at org.apache.amoro.server.table.TableRuntime$$Lambda$789/1085405103.get(Unknown Source)
	at org.apache.amoro.server.persistence.StatedPersistentBase.invokeConsistency(StatedPersistentBase.java:64)
	at org.apache.amoro.server.table.TableRuntime.refresh(TableRuntime.java:238)
	at org.apache.amoro.server.table.executor.TableRuntimeRefreshExecutor.execute(TableRuntimeRefreshExecutor.java:86)
	at org.apache.amoro.server.table.executor.BaseTableExecutor.executeTask(BaseTableExecutor.java:84)
	at org.apache.amoro.server.table.executor.BaseTableExecutor.lambda$scheduleIfNecessary$4(BaseTableExecutor.java:95)
	at org.apache.amoro.server.table.executor.BaseTableExecutor$$Lambda$668/265438551.run(Unknown Source)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

   Locked ownable synchronizers:
	- <0x00007d69deae2070> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	- <0x00007d6a1f792648> (a java.util.concurrent.ThreadPoolExecutor$Worker)


"async-optimizing-commit-executor-48" #80263 prio=5 os_prio=0 tid=0x00007d680004e000 nid=0x139d7 waiting on condition [0x00007d6783ffe000]
   java.lang.Thread.State: WAITING (parking)
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0x00007d69deae2070> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
	at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
	at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
	at org.apache.amoro.server.persistence.StatedPersistentBase.invokeConsistency(StatedPersistentBase.java:48)
	at org.apache.amoro.server.table.TableRuntime.completeProcess(TableRuntime.java:288)
	at org.apache.amoro.server.optimizing.OptimizingQueue$TableOptimizingProcess.lambda$persistProcessCompleted$12(OptimizingQueue.java:664)
	at org.apache.amoro.server.optimizing.OptimizingQueue$TableOptimizingProcess$$Lambda$647/576698622.run(Unknown Source)
	at org.apache.amoro.server.persistence.PersistentBase$$Lambda$648/735764202.accept(Unknown Source)
	at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
	at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:647)
	at org.apache.amoro.server.persistence.PersistentBase.doAsTransaction(PersistentBase.java:77)
	at org.apache.amoro.server.optimizing.OptimizingQueue.access$400(OptimizingQueue.java:73)
	at org.apache.amoro.server.optimizing.OptimizingQueue$TableOptimizingProcess.persistProcessCompleted(OptimizingQueue.java:652)
	at org.apache.amoro.server.optimizing.OptimizingQueue$TableOptimizingProcess.commit(OptimizingQueue.java:575)
	at org.apache.amoro.server.table.executor.OptimizingCommitExecutor.execute(OptimizingCommitExecutor.java:53)
	at org.apache.amoro.server.table.executor.BaseTableExecutor.executeTask(BaseTableExecutor.java:84)
	at org.apache.amoro.server.table.executor.BaseTableExecutor.lambda$scheduleIfNecessary$4(BaseTableExecutor.java:95)
	at org.apache.amoro.server.table.executor.BaseTableExecutor$$Lambda$668/265438551.run(Unknown Source)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

   Locked ownable synchronizers:
	- <0x00007d69e3000aa0> (a java.util.concurrent.ThreadPoolExecutor$Worker)
	- <0x00007d6a51002b70> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)


	Found one Java-level deadlock:
	=============================
	"JettyServerThreadPool-37129725":
	  waiting for ownable synchronizer 0x00007d6a51002b70, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
	  which is held by "async-optimizing-commit-executor-48"
	"async-optimizing-commit-executor-48":
	  waiting for ownable synchronizer 0x00007d69deae2070, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
	  which is held by "async-table-runtime-refresh-executor-1"
	"async-table-runtime-refresh-executor-1":
	  waiting for ownable synchronizer 0x00007d6a51002b70, (a java.util.concurrent.locks.ReentrantLock$NonfairSync),
	  which is held by "async-optimizing-commit-executor-48"
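
The cycle reported above is between the TableRuntime lock (0x00007d69deae2070) and the optimizing process lock (0x00007d6a51002b70), which the refresh and commit threads acquire in opposite orders. One common way to break such a cycle is to take the second lock with a bounded tryLock and back off when it cannot be acquired, so neither thread ever waits forever while holding its own lock. The sketch below shows that generic pattern only; the helper name is hypothetical and this is not the actual Amoro fix.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical helper: runs an action that needs two locks without risking a
// lock-ordering deadlock, by releasing the first lock and retrying whenever the
// second lock cannot be obtained in time. Generic pattern sketch, not the Amoro fix.
public final class OrderedLocking {

    public static void runWithBothLocks(ReentrantLock first,
                                        ReentrantLock second,
                                        Runnable action) throws InterruptedException {
        while (true) {
            first.lockInterruptibly();
            try {
                // Bounded wait on the second lock instead of blocking forever.
                if (second.tryLock(200, TimeUnit.MILLISECONDS)) {
                    try {
                        action.run();
                        return;
                    } finally {
                        second.unlock();
                    }
                }
            } finally {
                first.unlock();
            }
            // Could not get the second lock: the first lock has been released above,
            // so the other thread can finish and the cycle never forms. Retry shortly.
            Thread.sleep(50);
        }
    }
}
```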

Anything else

No response

Are you willing to submit a PR?

  • [X] Yes I am willing to submit a PR!

Code of Conduct

  • [X] I agree to follow this project's Code of Conduct

lintingbin avatar Jan 06 '25 06:01 lintingbin

The probability of this issue occurring is not high, but it is indeed a problem that needs to be addressed. Have you made any progress on your end?

zhoujinsong avatar Mar 12 '25 07:03 zhoujinsong

@zhoujinsong It seems that this issue has been fixed. We have merged the fix code from https://github.com/apache/amoro/pull/3366 into our production environment, and everything seems okay for now.

lintingbin avatar Mar 12 '25 09:03 lintingbin

It seems #3366 does not address the deadlock problem. Maybe we need to try to reproduce this issue; let's leave it open here for now.

zhoujinsong avatar Mar 12 '25 09:03 zhoujinsong

This issue has been automatically marked as stale because it has been open for 180 days with no activity. It will be closed in next 14 days if no further activity occurs. To permanently prevent this issue from being considered stale, add the label 'not-stale', but commenting on the issue is preferred when possible.

github-actions[bot] avatar Sep 09 '25 00:09 github-actions[bot]

This issue has been closed because it has not received any activity in the last 14 days since being marked as 'stale'.

github-actions[bot] avatar Sep 24 '25 00:09 github-actions[bot]