linstor-server

RuntimeException: The resource would have to be deleted from nodes to reach the placement count.

Open · kvaps opened this issue on Nov 28, 2020 · 2 comments

Hi, I just got this error report while the script was running:

# linstor r c one-vm-6493-disk-2 --auto-place 2 --replicas-on-same=opennebula-1 --replicas-on-different=moonshot
ERROR:
Description:
    The resource 'one-vm-6493-disk-2' was already deployed on 3 nodes: 'm13c36', 'm14c15', 'm8c29'. The resource would have to be deleted from nodes to reach the placement count.
Details:
    Auto-placing resource: one-vm-6493-disk-2
Show reports:
    linstor error-reports show 5FBED2E6-00000-000020
command terminated with exit code 10
╭───────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName       ┊ Node   ┊ Port  ┊ Usage  ┊ Conns ┊    State ┊ CreatedOn           ┊
╞═══════════════════════════════════════════════════════════════════════════════════════╡
┊ one-vm-6493-disk-2 ┊ m13c36 ┊ 56375 ┊ Unused ┊ Ok    ┊ UpToDate ┊ 2020-11-23 22:22:55 ┊
┊ one-vm-6493-disk-2 ┊ m14c15 ┊ 56375 ┊ InUse  ┊ Ok    ┊ UpToDate ┊ 2020-11-18 22:18:46 ┊
┊ one-vm-6493-disk-2 ┊ m8c29  ┊ 56375 ┊ Unused ┊ Ok    ┊ UpToDate ┊ 2020-11-18 22:18:50 ┊
╰───────────────────────────────────────────────────────────────────────────────────────╯
ERROR REPORT 5FBED2E6-00000-000020

============================================================

Application:                        LINBIT® LINSTOR
Module:                             Controller
Version:                            1.9.0
Build ID:                           678acd24a8b9b73a735407cd79ca33a5e95eb2e2
Build time:                         2020-10-03T22:33:15+00:00
Error time:                         2020-11-28 21:56:20
Node:                               linstor-controller-0
Peer:                               RestClient(127.0.0.1; 'PythonLinstor/1.4.0 (API1.0.4)')

============================================================

Reported error:
===============

Category:                           RuntimeException
Class name:                         ApiRcException
Class canonical name:               com.linbit.linstor.core.apicallhandler.response.ApiRcException
Generated at:                       Method 'autoPlaceInTransaction', Source file 'CtrlRscAutoPlaceApiCallHandler.java', Line #201

Error message:                      The resource 'one-vm-6493-disk-2' was already deployed on 3 nodes: 'm13c36', 'm14c15', 'm8c29'. The resource would have to be deleted from nodes to reach the placement count.

Error context:
    The resource 'one-vm-6493-disk-2' was already deployed on 3 nodes: 'm13c36', 'm14c15', 'm8c29'. The resource would have to be deleted from nodes to reach the placement count.

Asynchronous stage backtrace:

    Error has been observed at the following site(s):
    	|_ checkpoint ⇢ Auto-place resource
    Stack trace:

Call backtrace:

    Method                                   Native Class:Line number
    autoPlaceInTransaction                   N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscAutoPlaceApiCallHandler:201

Suppressed exception 1 of 1:
===============
Category:                           RuntimeException
Class name:                         OnAssemblyException
Class canonical name:               reactor.core.publisher.FluxOnAssembly.OnAssemblyException
Generated at:                       Method 'autoPlaceInTransaction', Source file 'CtrlRscAutoPlaceApiCallHandler.java', Line #201

Error message:                      
Error has been observed at the following site(s):
	|_ checkpoint ⇢ Auto-place resource
Stack trace:

Error context:
    The resource 'one-vm-6493-disk-2' was already deployed on 3 nodes: 'm13c36', 'm14c15', 'm8c29'. The resource would have to be deleted from nodes to reach the placement count.

Call backtrace:

    Method                                   Native Class:Line number
    autoPlaceInTransaction                   N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscAutoPlaceApiCallHandler:201
    lambda$autoPlace$0                       N      com.linbit.linstor.core.apicallhandler.controller.CtrlRscAutoPlaceApiCallHandler:142
    doInScope                                N      com.linbit.linstor.core.apicallhandler.ScopeRunner:147
    lambda$fluxInScope$0                     N      com.linbit.linstor.core.apicallhandler.ScopeRunner:75
    call                                     N      reactor.core.publisher.MonoCallable:91
    trySubscribeScalarMap                    N      reactor.core.publisher.FluxFlatMap:126
    subscribeOrReturn                        N      reactor.core.publisher.MonoFlatMapMany:49
    subscribe                                N      reactor.core.publisher.Flux:8311
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
    request                                  N      reactor.core.publisher.Operators$ScalarSubscription:2317
    onSubscribe                              N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:134
    subscribe                                N      reactor.core.publisher.MonoCurrentContext:35
    subscribe                                N      reactor.core.publisher.Flux:8325
    onNext                                   N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain:188
    onNext                                   N      reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber:121
    complete                                 N      reactor.core.publisher.Operators$MonoSubscriber:1755
    onComplete                               N      reactor.core.publisher.MonoCollect$CollectSubscriber:152
    onComplete                               N      reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner:252
    checkTerminated                          N      reactor.core.publisher.FluxFlatMap$FlatMapMain:838
    drainLoop                                N      reactor.core.publisher.FluxFlatMap$FlatMapMain:600
    drain                                    N      reactor.core.publisher.FluxFlatMap$FlatMapMain:580
    onComplete                               N      reactor.core.publisher.FluxFlatMap$FlatMapMain:457
    checkTerminated                          N      reactor.core.publisher.FluxFlatMap$FlatMapMain:838
    drainLoop                                N      reactor.core.publisher.FluxFlatMap$FlatMapMain:600
    innerComplete                            N      reactor.core.publisher.FluxFlatMap$FlatMapMain:909
    onComplete                               N      reactor.core.publisher.FluxFlatMap$FlatMapInner:1013
    onComplete                               N      reactor.core.publisher.FluxMap$MapSubscriber:136
    onComplete                               N      reactor.core.publisher.Operators$MultiSubscriptionSubscriber:1989
    onComplete                               N      reactor.core.publisher.FluxSwitchIfEmpty$SwitchIfEmptySubscriber:78
    complete                                 N      reactor.core.publisher.FluxCreate$BaseSink:438
    drain                                    N      reactor.core.publisher.FluxCreate$BufferAsyncSink:784
    complete                                 N      reactor.core.publisher.FluxCreate$BufferAsyncSink:732
    drainLoop                                N      reactor.core.publisher.FluxCreate$SerializedSink:239
    drain                                    N      reactor.core.publisher.FluxCreate$SerializedSink:205
    complete                                 N      reactor.core.publisher.FluxCreate$SerializedSink:196
    apiCallComplete                          N      com.linbit.linstor.netcom.TcpConnectorPeer:455
    handleComplete                           N      com.linbit.linstor.proto.CommonMessageProcessor:363
    handleDataMessage                        N      com.linbit.linstor.proto.CommonMessageProcessor:287
    doProcessInOrderMessage                  N      com.linbit.linstor.proto.CommonMessageProcessor:235
    lambda$doProcessMessage$3                N      com.linbit.linstor.proto.CommonMessageProcessor:220
    subscribe                                N      reactor.core.publisher.FluxDefer:46
    subscribe                                N      reactor.core.publisher.Flux:8325
    onNext                                   N      reactor.core.publisher.FluxFlatMap$FlatMapMain:418
    drainAsync                               N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:414
    drain                                    N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:679
    onNext                                   N      reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber:243
    drainFused                               N      reactor.core.publisher.UnicastProcessor:286
    drain                                    N      reactor.core.publisher.UnicastProcessor:322
    onNext                                   N      reactor.core.publisher.UnicastProcessor:401
    next                                     N      reactor.core.publisher.FluxCreate$IgnoreSink:618
    next                                     N      reactor.core.publisher.FluxCreate$SerializedSink:153
    processInOrder                           N      com.linbit.linstor.netcom.TcpConnectorPeer:373
    doProcessMessage                         N      com.linbit.linstor.proto.CommonMessageProcessor:218
    lambda$processMessage$2                  N      com.linbit.linstor.proto.CommonMessageProcessor:164
    onNext                                   N      reactor.core.publisher.FluxPeek$PeekSubscriber:177
    runAsync                                 N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:439
    run                                      N      reactor.core.publisher.FluxPublishOn$PublishOnSubscriber:526
    call                                     N      reactor.core.scheduler.WorkerTask:84
    call                                     N      reactor.core.scheduler.WorkerTask:37
    run                                      N      java.util.concurrent.FutureTask:264
    run                                      N      java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask:304
    runWorker                                N      java.util.concurrent.ThreadPoolExecutor:1128
    run                                      N      java.util.concurrent.ThreadPoolExecutor$Worker:628
    run                                      N      java.lang.Thread:834


END OF ERROR REPORT.

kvaps · Nov 28 '20 22:11

Well... I guess we could skip the ErrorReport and convert this to a warning, and have LINSTOR simply no-op, since there are already more resources deployed than you requested... Or am I missing the point here?

ghernadi · Nov 30 '20 05:11
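Until such a no-op behavior exists, a calling script can implement the guard itself by checking the current replica count before invoking auto-place. A minimal sketch, assuming the human-readable table format shown above stays stable (parsing it with grep is fragile and only illustrative):

#!/usr/bin/env bash
# Hypothetical guard: skip auto-place when the resource already has at
# least the requested number of replicas, avoiding the hard error above.
RES="one-vm-6493-disk-2"
WANT=2

# Count this resource's rows in the table output; this relies on the
# human-readable format, not on a stable machine interface.
have=$(linstor resource list | grep -c "┊ ${RES} ")
if [ "${have}" -ge "${WANT}" ]; then
    echo "${RES} already placed on ${have} node(s); nothing to do."
else
    linstor resource create "${RES}" --auto-place "${WANT}" \
        --replicas-on-same=opennebula-1 --replicas-on-different=moonshot
fi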

LINSTOR just treated this as an error. I wasn't sure whether that was expected behavior, so I decided it was better to report it to you.

kvaps · Nov 30 '20 07:11
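For scripts that cannot easily pre-check, the other direction is to run auto-place unconditionally and tolerate exactly this failure. A sketch, assuming the error wording stays stable across versions (exit code 10 is what the CLI returned above, but it may not be unique to this error, so the message is matched as well):

# Hypothetical post-hoc handling: treat this specific over-placement
# error as a no-op and re-raise anything else.
if ! out=$(linstor resource create "${RES}" --auto-place "${WANT}" \
        --replicas-on-same=opennebula-1 --replicas-on-different=moonshot 2>&1); then
    if echo "${out}" | grep -q "would have to be deleted from nodes to reach the placement count"; then
        echo "${RES} is already over-placed; treating as a no-op."
    else
        echo "${out}" >&2
        exit 1
    fi
fi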