celestia-node
Bridge nodes listener gets stuck until resub
Implementation ideas
This issue was initially reported by the P-OPS team, who observed bridge nodes occasionally falling behind consensus nodes. The issue is consistently marked by the following log line:
listener: subscriber error, resubscribing...
Upon further investigation across other bridge nodes, similar occurrences were noted a few times each week. These are generally resolved automatically by the node's internal logic, which triggers a resubscription in response to such events. Notably, these events affect multiple nodes simultaneously, which suggests either connectivity issues at the time of the incident or a problem with the core node RPC becoming stuck.
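Schematically, the recovery path is a resubscribe-on-error loop. The sketch below is an illustrative Python rendering of that logic, not celestia-node's actual Go implementation; all names are made up:

import logging
import time

log = logging.getLogger("listener")

class SubscriptionError(Exception):
    """Stands in for whatever error the real subscription surfaces."""

def run_listener(subscribe, handle, backoff_seconds=1.0):
    # Schematic resubscribe-on-error loop (illustrative only).
    while True:
        try:
            for event in subscribe():  # blocks while the subscription is healthy
                handle(event)
        except SubscriptionError:
            # Mirrors the log line above: drop the broken subscription
            # and open a fresh one instead of staying stuck.
            log.warning("listener: subscriber error, resubscribing...")
            time.sleep(backoff_seconds)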
Hi, you can refer to this guide for how to modify soft constraints: https://dig.cmu.edu/draco2/applications/debug_draco.html#how-to-modify-the-knowledge-base-with-constraints
Note that you will need to create a new Draco object to pass in the new soft constraint set, and use it in the debugger.
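For example, a minimal sketch of that workflow (the import paths, constructor keywords, and the added constraint and weight names here are assumptions based on the linked guide, so verify them against your Draco version):

from draco import Draco
from draco.programs import soft    # default soft.lp program (assumed path)
from draco.weights import weights  # default weights.lp mapping (assumed path)

# A hypothetical extra soft constraint, appended to the default program.
custom_soft = soft.program + """
% @Soft(my_custom_pref) Hypothetical example preference.
preference(my_custom_pref,M) :- attribute((mark,type),M,line).
"""

# Weights are keyed as "<name>_weight" in weights.lp, so mirror that here.
custom_weights = dict(weights)
custom_weights["my_custom_pref_weight"] = 5

# Build a new Draco object over the modified knowledge base; this object
# (rather than files edited in place) is what the debugger should receive.
d = Draco(soft=custom_soft, weights=custom_weights)

You can then hand d to the debugger (per the guide) so the reported chart costs reflect the modified constraints.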
I've been writing new constraints directly into soft.lp and then adding their weights in weights.lp, and they have worked fine. The problem is specifically with the constraints that ship with Draco by default (I presume).
Even if I radically change their intended behaviour, for example changing from:
% @Soft(c_d_overlap_line) Continuous by discrete for line mark.
preference(c_d_overlap_line,M) :-
    helper(is_c_d,M),
    not helper(no_overlap,M),
    attribute((mark,type),M,line).
to
% @Soft(c_d_overlap_line) Continuous by discrete for line mark.
preference(c_d_overlap_line,M) :-
    helper(is_c_d,M),
    not helper(no_overlap,M),
    attribute((mark,type),M,point).
I'm still getting the "c_d_overlap_line" weight counted for a chart specification that looks like this:
CHART 3
COST: [50]
{'field': [{'name': 'discount', 'type': 'number', 'unique': 12},
{'name': 'sales', 'type': 'number', 'unique': 5825},
{'name': 'category', 'type': 'string', 'unique': 3}],
'number_rows': 9994,
'task': 'compare',
'view': [{'coordinates': 'cartesian',
'mark': [{'encoding': [{'channel': 'x', 'field': 'discount'},
{'aggregate': 'sum',
'channel': 'y',
'field': 'sales'},
{'channel': 'color', 'field': 'category'}],
'type': 'line'}],
'scale': [{'channel': 'x', 'type': 'ordinal'},
{'channel': 'y', 'type': 'linear', 'zero': 'true'},
{'channel': 'color', 'type': 'categorical'}]}]}
attribute(task,root,compare).
attribute(number_rows,root,9994).
attribute((field,name),(f,0),discount).
attribute((field,type),(f,0),number).
attribute((field,unique),(f,0),12).
attribute((field,name),(f,1),sales).
attribute((field,type),(f,1),number).
attribute((field,unique),(f,1),5825).
attribute((field,name),(f,2),category).
attribute((field,type),(f,2),string).
attribute((field,unique),(f,2),3).
attribute((mark,type),(m,0),line).
attribute((encoding,field),(e,0),discount).
attribute((encoding,field),(e,1),sales).
attribute((encoding,field),(e,2),category).
attribute((encoding,channel),(e,2),color).
attribute((encoding,channel),(e,1),y).
attribute((encoding,channel),(e,0),x).
attribute((view,coordinates),(v,0),cartesian).
attribute((encoding,aggregate),(e,1),sum).
attribute((scale,type),(s,(e,2)),categorical).
attribute((scale,channel),(s,(e,2)),color).
attribute((scale,type),(s,(e,1)),linear).
attribute((scale,channel),(s,(e,1)),y).
attribute((scale,type),(s,(e,0)),ordinal).
attribute((scale,channel),(s,(e,0)),x).
attribute((scale,zero),(s,(e,1)),true).
entity(field,root,(f,0)).
entity(field,root,(f,1)).
entity(field,root,(f,2)).
entity(view,root,(v,0)).
entity(mark,(v,0),(m,0)).
entity(encoding,(m,0),(e,0)).
entity(encoding,(m,0),(e,1)).
entity(encoding,(m,0),(e,2)).
entity(scale,(v,0),(s,(e,0))).
entity(scale,(v,0),(s,(e,1))).
entity(scale,(v,0),(s,(e,2))).
It shouldn't happen, since this spec doesn't contain an attribute((mark,type),M,point) fact. Therefore, I'm really confused as to why this is happening.
I'm rather sorry about opening an issue for such a petty thing, but I'm not able to wrap my head around this.
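A minimal way to see which soft constraints actually fire for these facts (a sketch; count_preferences is assumed from the Draco docs, and the fact list is abbreviated):

from draco import Draco

# Facts from the chart above, joined into a single ASP program string
# (remaining facts elided here for brevity).
spec = "\n".join([
    "attribute(task,root,compare).",
    "attribute((mark,type),(m,0),line).",
    # ...
])

d = Draco()  # swap in the Draco built from the edited soft program

# count_preferences maps each soft-constraint name to how often it fires
# (assumed from the Draco docs). If c_d_overlap_line still shows up here,
# the edited rule is not the one actually being evaluated.
print(d.count_preferences(spec))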
What happens when you run the tests after the edit?
Thanks for opening this issue @AMPAlves. That's what it's for, so no need to apologize. Did you find the issue?