Disk-only pools freeze / go OFFLINE when 100% full and users keep writing to them
This is dCache 9.2.
A pool group composed of disk-only pools has filled up, and users continue to write to it.
Instead of failing the transfers, the pools go OFFLINE and become unresponsive
(one can connect to the pool admin interface, but commands like "info -a" or "mover ls" hang).
Attempting to connect to System@, the only commands that work are "help" and "show pinboard".
Attempting to copy a file located on the pool using dccp hangs at:
2 -> io [null] [0000FFE484FBCDE84EAFBAFD6674638374F8] {timer=2665424} [2][8637][2415422] WaitingForGetPool(34)
I have taken a thread dump (attached).
There are 278 threads waiting for this lock:
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: "p-sbn-pubstor2202-5-0" #38 prio=5 os_prio=0 cpu=2977.32ms elapsed=4871.22s tid=0x00007ff7e9aad800 nid=0x18492f waiting for monitor entry [0x00007ff720bf6000]
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: java.lang.Thread.State: BLOCKED (on object monitor)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.pool.classic.MoverRequestScheduler.getLastAccessed(MoverRequestScheduler.java:591)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011- waiting to lock <0x0000000703a9dfa0> (a org.dcache.pool.classic.MoverRequestScheduler)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.pool.classic.JobTimeoutManager.schedulerEntryInfo(JobTimeoutManager.java:41)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.pool.classic.JobTimeoutManager.lambda$getDataObject$0(JobTimeoutManager.java:70)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.pool.classic.JobTimeoutManager$$Lambda$398/0x00000008406c8440.accept(Unknown Source)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining([email protected]/ConcurrentHashMap.java:3603)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at java.util.stream.ReferencePipeline$Head.forEach([email protected]/ReferencePipeline.java:658)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.pool.classic.JobTimeoutManager.getDataObject(JobTimeoutManager.java:69)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.pool.classic.JobTimeoutManager.getInfo(JobTimeoutManager.java:61)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.cells.UniversalSpringCell.getInfo(UniversalSpringCell.java:465)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at dmg.cells.nucleus.CellAdapter.getInfo(CellAdapter.java:788)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at dmg.cells.nucleus.CellNucleus._getCellInfo(CellNucleus.java:345)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at dmg.cells.nucleus.CellNucleus.getCellInfo(CellNucleus.java:305)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at dmg.cells.nucleus.CellAdapter.getCellInfo(CellAdapter.java:485)
Aug 18 15:58:18 pubstor2202 dcache@p-sbn-pubstor2202-5Domain[1591516]: #011at org.dcache.cells.UniversalSpringCell.getCellInfo(UniversalSpringCell.java:477)
which looks like the dCache monitoring (getInfo) being stuck.
The threads are waiting for lock 0x0000000703a9dfa0, which is held by a thread cancelling a mover; that cancel is in turn blocked on the repository channel, whose lock is held by an allocation that is waiting for space:
java.lang.Thread.State: BLOCKED (on object monitor)
#011at org.dcache.pool.repository.AllocatorAwareRepositoryChannel.close(AllocatorAwareRepositoryChannel.java:85)
#011- waiting to lock <0x000000072a5b3ab8> (a java.lang.Object)
#011- locked <0x000000072a5b3a88> (a org.dcache.pool.repository.AllocatorAwareRepositoryChannel)
#011at org.dcache.pool.movers.MoverChannel.close(MoverChannel.java:101)
#011at org.dcache.pool.movers.MoverChannelDecorator.close(MoverChannelDecorator.java:86)
#011at com.google.common.io.Closer.close(Closer.java:205)
#011at org.dcache.util.TryCatchTemplate.failed(TryCatchTemplate.java:103)
#011at org.dcache.pool.movers.NettyTransferService$NettyMoverChannel.cancel(NettyTransferService.java:569)
#011at org.dcache.util.TryCatchTemplate.cancel(TryCatchTemplate.java:78)
#011at org.dcache.util.TryCatchTemplate.cancel(TryCatchTemplate.java:78)
#011at org.dcache.pool.classic.MoverRequestScheduler$PrioritizedRequest.kill(MoverRequestScheduler.java:772)
#011- locked <0x000000071a6ceac8> (a org.dcache.pool.classic.MoverRequestScheduler$PrioritizedRequest)
#011at org.dcache.pool.classic.MoverRequestScheduler.cancel(MoverRequestScheduler.java:430)
#011- locked <0x0000000703a9dfa0> (a org.dcache.pool.classic.MoverRequestScheduler)
java.lang.Thread.State: BLOCKED (on object monitor)
#011at org.dcache.pool.repository.Account.allocate(Account.java:110)
#011- waiting to lock <0x0000000700cc2118> (a org.dcache.pool.repository.Account)
#011at org.dcache.pool.classic.BlockingAllocator.allocate(BlockingAllocator.java:35)
#011at org.dcache.pool.repository.ForwardingAllocator.allocate(ForwardingAllocator.java:32)
#011at org.dcache.pool.classic.FairQueueAllocator.allocate(FairQueueAllocator.java:67)
#011at org.dcache.pool.repository.AllocatorAwareRepositoryChannel.allocate(AllocatorAwareRepositoryChannel.java:239)
#011at org.dcache.pool.repository.AllocatorAwareRepositoryChannel.preallocate(AllocatorAwareRepositoryChannel.java:226)
#011- locked <0x000000072a5b3ab8> (a java.lang.Object)
On the other hand, the message thread tries to process the get-info request (coming from the web collector, the admin interface, or the pool heartbeat), which tries to access the allocator to get the free/used space numbers:
#011at org.dcache.pool.classic.MoverRequestScheduler.getQueueInfo(MoverRequestScheduler.java:395)
#011- waiting to lock <0x0000000703a9dfa0> (a org.dcache.pool.classic.MoverRequestScheduler)
#011at org.dcache.pool.classic.PoolV4.toNamedPoolQueueInfo(PoolV4.java:1789)
#011at org.dcache.pool.classic.PoolV4$PoolInfo.lambda$movers$1(PoolV4.java:1824)
#011at org.dcache.pool.classic.PoolV4$PoolInfo$$Lambda$389/0x00000008406b7440.apply(Unknown Source)
#011at java.util.stream.ReferencePipeline$3$1.accept([email protected]/ReferencePipeline.java:195)
#011at java.util.stream.ReferencePipeline$2$1.accept([email protected]/ReferencePipeline.java:177)
#011at java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining([email protected]/ConcurrentHashMap.java:3603)
#011at java.util.stream.AbstractPipeline.copyInto([email protected]/AbstractPipeline.java:484)
#011at java.util.stream.AbstractPipeline.wrapAndCopyInto([email protected]/AbstractPipeline.java:474)
#011at java.util.stream.ReduceOps$ReduceOp.evaluateSequential([email protected]/ReduceOps.java:913)
#011at java.util.stream.AbstractPipeline.evaluate([email protected]/AbstractPipeline.java:234)
#011at java.util.stream.ReferencePipeline.collect([email protected]/ReferencePipeline.java:578)
#011at org.dcache.pool.classic.PoolV4$PoolInfo.movers(PoolV4.java:1825)
#011at diskCacheV111.pools.PoolCostInfo.<init>(PoolCostInfo.java:46)
#011at org.dcache.pool.classic.PoolV4.getPoolCostInfo(PoolV4.java:1361)
#011at org.dcache.pool.classic.PoolV4$PoolManagerPingThread.getPoolManagerMessage(PoolV4.java:1336)
#011at org.dcache.pool.classic.PoolV4$PoolManagerPingThread.sendPoolManagerMessage(PoolV4.java:1329)
#011- locked <0x0000000700cbe8a0> (a org.dcache.pool.classic.PoolV4$PoolManagerPingThread)
#011at org.dcache.pool.classic.PoolV4$PoolManagerPingThread.run(PoolV4.java:1310)
#011at java.lang.Thread.run([email protected]/Thread.java:829)
and if the message thread is dead, then everything is dead... :skull:
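To make the cycle concrete, here is a minimal standalone sketch (hypothetical class and method names, not the actual dCache code) of the pattern visible in the dump: a cancel grabs the scheduler monitor and then blocks on an allocation that cannot complete on a full disk-only pool, so every info/heartbeat call that needs the same monitor goes BLOCKED:

// Hypothetical names only, not the real dCache classes.
import java.util.concurrent.CountDownLatch;

public class FullPoolDeadlockSketch {

    // Stands in for the account/allocator: allocate() blocks until space is freed.
    static class Allocator {
        private final CountDownLatch spaceFreed = new CountDownLatch(1);

        void allocate(long bytes) throws InterruptedException {
            // On a 100% full disk-only pool this never returns
            // unless a user deletes a file.
            spaceFreed.await();
        }
    }

    // Stands in for the scheduler: one monitor guards both cancel() and getInfo().
    static class Scheduler {
        private final Allocator allocator = new Allocator();

        synchronized void cancelMover() throws InterruptedException {
            // Cancelling a write mover ends up (in this sketch) in the allocator
            // while the scheduler monitor is still held.
            allocator.allocate(1024);
        }

        synchronized String getInfo() {
            // "info", "mover ls" and the pool heartbeat all need this monitor,
            // so they stay BLOCKED for as long as cancelMover() is stuck above.
            return "queue info";
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = new Scheduler();

        Thread cancel = new Thread(() -> {
            try {
                scheduler.cancelMover();          // takes the monitor, then waits forever
            } catch (InterruptedException ignored) {
            }
        }, "mover-cancel");
        cancel.setDaemon(true);
        cancel.start();

        Thread.sleep(100);                        // let the cancel thread win the monitor

        Thread ping = new Thread(
                () -> System.out.println(scheduler.getInfo()),  // never prints
                "pool-manager-ping");
        ping.setDaemon(true);
        ping.start();

        Thread.sleep(500);
        System.out.println("ping thread state: " + ping.getState());  // prints BLOCKED
    }
}

The real chain has the repository channel lock in between, but the shape is the same: the scheduler monitor is held across a wait that only a user deletion can end.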
Great, any idea what to do about it? The pool is disk-only, and space is not going to appear unless users remove files; meanwhile, the pools should not hang when users try to write.
I am trying to rewrite the Allocator with non-blocking mechanisms, at least for querying.
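Roughly the direction I mean, as a sketch only (hypothetical names, not the real Allocator/Account API): keep the accounting in atomic counters so info/heartbeat queries never take the allocation lock, and make the allocation itself a bounded wait that fails the write instead of wedging the pool:

// Sketch only; hypothetical names, not the dCache Allocator/Account API.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class NonBlockingQueryAllocator {

    private final long capacity;
    private final AtomicLong used = new AtomicLong();

    private final ReentrantLock lock = new ReentrantLock();
    private final Condition spaceFreed = lock.newCondition();

    public NonBlockingQueryAllocator(long capacity) {
        this.capacity = capacity;
    }

    // Lock-free queries: info/heartbeat calls never touch the allocation lock.
    public long getUsed() { return used.get(); }
    public long getFree() { return capacity - used.get(); }

    // Bounded allocation: wait up to the given timeout for space, then give up
    // so the mover can fail the transfer instead of hanging the pool.
    public boolean allocate(long bytes, long timeout, TimeUnit unit) throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        lock.lock();
        try {
            while (used.get() + bytes > capacity) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0) {
                    return false;               // out of space: fail, don't hang
                }
                spaceFreed.awaitNanos(remaining);
            }
            used.addAndGet(bytes);
            return true;
        } finally {
            lock.unlock();
        }
    }

    // Called when a replica is removed; wakes up any bounded waiters.
    public void free(long bytes) {
        used.addAndGet(-bytes);
        lock.lock();
        try {
            spaceFreed.signalAll();
        } finally {
            lock.unlock();
        }
    }
}

With something along these lines, getFree()/getUsed() stay lock-free for monitoring, and a full pool turns into failed transfers after the timeout rather than an OFFLINE pool.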