Lumphead
+vats output
```
> +vats %bitcoin
/sys/kelvin:      [%zuse 417]
base hash:        0vt.2qfd4.s5bt4.95ia2.o4e82.1ckgr.ltmd5.ddmsb.p9utd.ecsf8.5dqva
%cz hash:         0vt.2qfd4.s5bt4.95ia2.o4e82.1ckgr.ltmd5.ddmsb.p9utd.ecsf8.5dqva
app status:       running
force on:         ~
force off:        ~
publishing ship:  [~ ~mister-dister-dozzod-dozzod]
updates:          remote
source...
```
I suspended most of my apps because I originally thought one of them might be the cause, but that does not seem to be the case.
It hosts several groups but they are all pretty small.
Is there some way I can figure out what is sending the ames messages? Is there a way I can make them stop?
@yosoyubik if there is any way I can get an alpha version of `|close-flows`, I would be willing to give it a try.
It took a while to install but I got it now.
```
> +diagnose!ames
#7.982 flows from %nacking %watches
#487 backward flows with >10 retries
#20.759 forward flows with >10...
```
@yosoyubik Unfortunately it looks like the issue still occurs - `|ames-verb %snd` still spamming endlessly.
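For anyone following along, a minimal sketch of how the tracing above is toggled from the dojo (my understanding of `|ames-verb`; as I recall, running it with no flags should switch all ames verbosity back off):

```
:: enable logging of outgoing packets
|ames-verb %snd
:: ... observe the %snd spam, then run with no flags
:: to disable all ames verbosity again
|ames-verb
```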
I ran it a couple more times.
```
ames: packet dropped (54000 total)
ames: packet dropped (55000 total)
ames: packet dropped (56000 total)
ames: packet dropped (57000 total)
> +diagnose!ames...
```
thanks @yosoyubik - I don't think it relates to new groups, as my log-size issue predates that upgrade.
thanks @yosoyubik - this is what it prints now:
```
> +diagnose!ames, =veb %21
#7.392 flows from %nacking %watches
#491 backward flows with >10 retries
#20.904 forward flows with >10 retries...
```