go-libp2p
swarm: inconsistent listening address reporting
During our punchr measurement campaign, we observed that the listening addresses returned by host.Addrs() didn't match the ones that were used during a hole punch.
After a punchr client has asked a peer for a hole punch, it immediately records all the addresses it is listening on (here: https://github.com/libp2p/punchr/blob/4d2343ff01f2250a7b88314dde0fa2e6ca9a1775/pkg/client/host.go#L302). In our database, we find plenty of cases where the client reported a set of listening addresses that didn't contain any public address, yet the hole punch still succeeded.
I saw that go-libp2p calls OwnObservedAddrs on the ID service here: https://github.com/libp2p/go-libp2p/blob/313b080ea4e27f47dbfb9f872133b9fba4a9d183/p2p/protocol/holepunch/svc.go#L173. The observed addresses are also used if no port mappings are in place: https://github.com/libp2p/go-libp2p/blob/313b080ea4e27f47dbfb9f872133b9fba4a9d183/p2p/host/basic/basic_host.go#L944. punchr clients also reported whether they had active port mappings. If we only consider the clients without port mappings, we still end up with plenty of successful hole punches.
The only explanation that comes to my mind is that the set of listening addresses changed between the moment we extracted them (before the hole punch) and the moment all addresses were transmitted to the remote peer.
Could this have something to do with #2046?
I have documented the distribution of hole punch outcomes for peers that reported listening on a public address here in Notion: https://www.notion.so/pl-strflt/Final-Report-NAT-Hole-Punching-Measurement-Campaign-94366124f4e34b29bf55fb860a3d8c72#dd8f6e54043b4190bdefcf362cfc74f2
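One way to test the hypothesis that the address set changed between recording and transmission would be to snapshot it at both points and diff the two sets. A minimal, self-contained sketch (plain strings instead of the real multiaddr type; not taken from the punchr code base):

```go
// Sketch: diff the addresses recorded before the hole punch against the
// addresses actually transmitted, to see whether the set changed in between.
package main

import (
	"fmt"
	"sort"
)

// addrDiff returns the addresses present in `after` but not in `before`
// (gained) and those present in `before` but not in `after` (lost).
func addrDiff(before, after []string) (gained, lost []string) {
	beforeSet := make(map[string]struct{}, len(before))
	for _, a := range before {
		beforeSet[a] = struct{}{}
	}
	afterSet := make(map[string]struct{}, len(after))
	for _, a := range after {
		afterSet[a] = struct{}{}
		if _, ok := beforeSet[a]; !ok {
			gained = append(gained, a)
		}
	}
	for _, a := range before {
		if _, ok := afterSet[a]; !ok {
			lost = append(lost, a)
		}
	}
	sort.Strings(gained)
	sort.Strings(lost)
	return gained, lost
}

func main() {
	// Hypothetical snapshots: a public address shows up only in the second one.
	before := []string{"/ip4/192.168.1.5/tcp/4001"}
	after := []string{"/ip4/192.168.1.5/tcp/4001", "/ip4/203.0.113.7/tcp/4001"}
	gained, lost := addrDiff(before, after)
	fmt.Println("gained:", gained)
	fmt.Println("lost:", lost)
}
```

Logging such a diff alongside each hole punch attempt would show directly whether public addresses appeared (or disappeared) after the initial recording.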
We used the bootstrappers to obtain knowledge of our public addresses first in the original flare experiment, exactly for this reason.
Unfortunately, you may have inadvertently skewed the experimental results somewhat.
@vyzo Is there already an issue that describes this problem which I've missed?
I think the data is indeed skewed. However, we may be able to exclude reported hole punches that don't make sense, e.g., hole punches where the client didn't report a public address but still ended up with a successful connection. The other way around (a hole punch where the client didn't actually have a public address but reported one) is probably not detectable. I'd assume that's the rarer case, though.
No issue, just tribal knowledge, unfortunately.
Let's use this one.
Is this the same issue as https://github.com/libp2p/go-libp2p/issues/1930?
Perhaps related. So far, I have only observed missing public addresses, not wrong ones, although I can't really tell whether a client reported wrong ones. The addresses my personal client reported were correct AFAICT, so I believe this is true for others as well.
You connect to the peer over the relay a couple of lines later here. Could it be that the connection to the relay triggers an identify exchange through which the client learns about new addresses after you recorded them a few lines earlier?
These clients are long-running libp2p hosts. The first thing they do upon startup is connect to the bootstrap nodes. Then they wait until they have observed a public address. For that, I'm reusing the logic from the holepunch package, which just periodically checks whether a public address is among the observed ones.
Only then do the clients reach your linked code path. I actually don't know whether a connection through a relay to a remote peer triggers an identify exchange - it probably does? If that's indeed the case, I think you're right that the observed addresses may change between the point where I take note of them and the point where they are used during the hole punch. However, I wouldn't expect this to happen very frequently unless the client is behind an endpoint-dependent NAT. Also, I wouldn't expect any previously identified public address to be removed in the meantime.
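The "wait until a public address is observed" step can be sketched as a simple polling loop. This is a self-contained illustration, not the holepunch package's actual code: the address source is a stubbed closure standing in for the identify service, and the multiaddr check is simplified to parsing `/ip4/<addr>/...` strings with the standard library:

```go
// Sketch: poll an address source until it reports a public address,
// mimicking the periodic check described above.
package main

import (
	"fmt"
	"net"
	"strings"
)

// isPublicAddr reports whether a (simplified) multiaddr string starts
// with a publicly routable IPv4 address.
func isPublicAddr(maddr string) bool {
	parts := strings.Split(maddr, "/")
	if len(parts) < 3 || parts[1] != "ip4" {
		return false
	}
	ip := net.ParseIP(parts[2])
	if ip == nil {
		return false
	}
	return !ip.IsPrivate() && !ip.IsLoopback() &&
		!ip.IsUnspecified() && !ip.IsLinkLocalUnicast()
}

// waitForPublicAddr repeatedly queries the address source until a public
// address shows up, or gives up after maxPolls attempts.
func waitForPublicAddr(observedAddrs func() []string, maxPolls int) (string, bool) {
	for i := 0; i < maxPolls; i++ {
		for _, a := range observedAddrs() {
			if isPublicAddr(a) {
				return a, true
			}
		}
	}
	return "", false
}

func main() {
	polls := 0
	// Stubbed source: the public address only appears on the third poll.
	source := func() []string {
		polls++
		if polls < 3 {
			return []string{"/ip4/192.168.1.5/tcp/4001"}
		}
		return []string{"/ip4/192.168.1.5/tcp/4001", "/ip4/203.0.113.7/tcp/4001"}
	}
	addr, ok := waitForPublicAddr(source, 10)
	fmt.Println(ok, addr)
}
```

The point of the sketch: even after this loop succeeds once, nothing freezes the address set, so the addresses used later during the hole punch can still differ from the snapshot.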
Can you reproduce this consistently?
Let me verify that it happened to my personal client as well, and if it did, add some more tracing and run it again.
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
Posting here just to prevent this issue from being auto-closed. I haven't had the chance to revisit this yet but still think this is something to investigate.
Is it possible to check whether these were relay addresses? When the node is private, host.Addrs() will just give you the list of relay addresses and not your observed public address.
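For filtering the dataset, relay addresses can be recognized by their /p2p-circuit component. A minimal sketch using plain string handling (the real code would use the multiaddr library's protocol codes rather than string comparison):

```go
// Sketch: classify multiaddr strings as relay (circuit) addresses by
// looking for the /p2p-circuit component.
package main

import (
	"fmt"
	"strings"
)

// isRelayAddr reports whether the multiaddr string contains a
// p2p-circuit component, i.e. the address goes through a relay.
func isRelayAddr(maddr string) bool {
	for _, part := range strings.Split(maddr, "/") {
		if part == "p2p-circuit" {
			return true
		}
	}
	return false
}

func main() {
	// Hypothetical addresses, including one circuit address via a relay.
	addrs := []string{
		"/ip4/147.75.83.83/tcp/4001/p2p/QmRelay/p2p-circuit/p2p/QmMe",
		"/ip4/192.168.1.5/tcp/4001",
	}
	for _, a := range addrs {
		fmt.Println(a, "relay:", isRelayAddr(a))
	}
}
```

If the "public" addresses in the database turn out to be circuit addresses, that would support the explanation that private hosts reported only their relay addresses.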