
Fix: '::handle_key_gen_ack: State must be `GeneratingKeys`.'

Open c0gent opened this issue 6 years ago • 5 comments

c0gent avatar Oct 20 '18 04:10 c0gent

I'm using wireguard to form a private network: A (my home PC) connects to B (a VPS), and B connects to C (my office PC), but A and C can't connect to each other directly; they are both behind NAT that wireguard can't traverse. Their VPN IPs are A (10.1.0.5), B (10.1.0.1), and C (10.1.0.3).

on A: ./target/release/peer_node --bind-address 10.1.0.5:9431 --remote-address 10.1.0.1:9431 --remote-address 10.1.0.3:9431
on B: ./target/release/peer_node --bind-address 10.1.0.1:9431 --remote-address 10.1.0.5:9431 --remote-address 10.1.0.3:9431
on C: ./target/release/peer_node --bind-address 10.1.0.3:9431 --remote-address 10.1.0.1:9431 --remote-address 10.1.0.5:9431

Their responses:

A:

thread 'tokio-runtime-worker-0' panicked at '::handle_key_gen_part: State must be `GeneratingKeys`. State: 
AwaitingMorePeersForKeyGeneration 

[FIXME: Enqueue these parts!]

', src/hydrabadger/handler.rs:271:18
note: Run with `RUST_BACKTRACE=1` for a backtrace.
2018-10-24T02:49:11 [ERROR]: Unable to send on internal tx. Internal rx has dropped: send failed because receiver is gone

B:

thread 'tokio-runtime-worker-0' panicked at 'FIXME: RESTART KEY GENERATION PROCESS AFTER PEER DISCONNECTS.', src/hydrabadger/state.rs:398:17
note: Run with `RUST_BACKTRACE=1` for a backtrace.
2018-10-24T10:49:33 [ERROR]: Unable to send on internal tx. Internal rx has dropped: send failed because receiver is gone

C:

thread 'tokio-runtime-worker-1' panicked at '::handle_key_gen_part: State must be `GeneratingKeys`. State: 
AwaitingMorePeersForKeyGeneration 

[FIXME: Enqueue these parts!]

', src/hydrabadger/handler.rs:271:18
note: Run with `RUST_BACKTRACE=1` for a backtrace.
2018-10-24T10:49:11 [ERROR]: Unable to send on internal tx. Internal rx has dropped: send failed because receiver is gone

diyism avatar Oct 24 '18 02:10 diyism

So first let me make sure I understand you correctly. Are you saying that, in your network, machines A and C are not visible to each other (i.e. can't ping each other, etc.)?

Have you tried starting three nodes on the same machine (using 127.0.0.1 as the address and varying the ports)? Two nodes on one machine, one node on a second machine?
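
For example, something along these lines, reusing your flags from above (the port numbers here are arbitrary, just pick three free ones):

./target/release/peer_node --bind-address 127.0.0.1:9431 --remote-address 127.0.0.1:9432 --remote-address 127.0.0.1:9433
./target/release/peer_node --bind-address 127.0.0.1:9432 --remote-address 127.0.0.1:9431 --remote-address 127.0.0.1:9433
./target/release/peer_node --bind-address 127.0.0.1:9433 --remote-address 127.0.0.1:9431 --remote-address 127.0.0.1:9432

That would tell us whether the problem is specific to your wireguard topology or reproduces with any set of nodes.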

Obviously, the error messages you are getting are caused by an issue I've been meaning to fix for a little while, but I'd still like to understand precisely what is causing the problem in your case. I might be able to offer a temporary workaround until I have time to implement a proper fix to Hydrabadger (which will be a few more days at least).

c0gent avatar Oct 24 '18 03:10 c0gent

I get the same error. This is a log from one of the devices:

E/HYDRABADGERTAG: thread 'tokio-runtime-worker-0' panicked at '::handle_key_gen_part: State must be `GeneratingKeys`. State:
    AwaitingPeers { required_peers: [], available_peers: [] }

    [FIXME: Enqueue these parts!]

    ', src/hydrabadger/key_gen.rs:333

2019-03-15 20:25:10.190 1942-2285/net.korul.hbbft V/HYDRABADGERTAG: Received message: Some(WireMessage { kind: KeyGen(BuiltIn, Message { kind: Ack(Ack(0, "<3 values>")) }) })
2019-03-15 20:25:10.190 1942-2285/net.korul.hbbft E/HYDRABADGERTAG: Unable to send on internal tx. Internal rx has dropped: send failed because receiver is gone

This problem also occurs in the mobile application when one phone uses a Wi-Fi network and the other uses mobile Internet. At the moment this is the biggest problem in the test version: sometimes the connection is established and everything is fine, and other times this error occurs.

KORuL avatar Mar 15 '19 17:03 KORuL

This bug is due to an edge case in the timing of connections, and I haven't yet gotten around to implementing the fix. I probably won't have time to work on this right away, so I'll try to explain the issue in detail so that someone else can take a look.
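
The panic text itself hints at the intended fix ("[FIXME: Enqueue these parts!]"): key-gen messages that arrive while a node is still in AwaitingMorePeersForKeyGeneration should be buffered and replayed once the node transitions to GeneratingKeys, instead of crashing the handler. Here is a minimal sketch of that idea; the types and names (PartMsg, the queued_parts field) are hypothetical and don't match Hydrabadger's real internals, they just illustrate the buffering:

use std::collections::VecDeque;

// Hypothetical stand-in for a key-generation `Part` payload.
#[derive(Debug, Clone)]
struct PartMsg;

#[derive(Debug)]
enum KeyGenState {
    // Parts that arrive too early are buffered here instead of panicking.
    AwaitingMorePeersForKeyGeneration { queued_parts: VecDeque<(u64, PartMsg)> },
    GeneratingKeys,
}

impl KeyGenState {
    fn handle_part(&mut self, from: u64, part: PartMsg) {
        match self {
            // Previously this arm panicked: "State must be `GeneratingKeys`".
            KeyGenState::AwaitingMorePeersForKeyGeneration { queued_parts } => {
                queued_parts.push_back((from, part));
            }
            KeyGenState::GeneratingKeys => {
                // Process the part immediately (real handling elided).
                let _ = (from, part);
            }
        }
    }

    // Called once enough peers are connected: switch state, then replay
    // anything that was queued while we were still waiting.
    fn begin_key_generation(&mut self) {
        if let KeyGenState::AwaitingMorePeersForKeyGeneration { queued_parts } =
            std::mem::replace(self, KeyGenState::GeneratingKeys)
        {
            for (from, part) in queued_parts {
                self.handle_part(from, part);
            }
        }
    }
}

fn main() {
    let mut state = KeyGenState::AwaitingMorePeersForKeyGeneration {
        queued_parts: VecDeque::new(),
    };
    state.handle_part(3, PartMsg); // arrives early: buffered, no panic
    state.begin_key_generation(); // replays the buffered part
}

The same buffering would apply to Ack messages, and on a peer disconnect the queue could simply be cleared and key generation restarted, which would also address the "RESTART KEY GENERATION PROCESS AFTER PEER DISCONNECTS" panic in state.rs.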

c0gent avatar Mar 26 '19 16:03 c0gent

I'm hitting this error too. Was it ever solved?

VegeBun-csj avatar Mar 04 '23 12:03 VegeBun-csj