go-libp2p
[WIP] Self dialing
The beginning of self dialing work, eventually closing #328.
This work starts with the transports. I developed a ring-buffer-based `MemoryTransport`, but then, in documenting it, realized I really should just use pipes. I've implemented both and left them in for posterity. I'd be happy to delete the `MemoryTransport` if we decide there is no use for it.
The memory transport expects to listen on `/p2p` or `/ipfs` multiaddrs populated with the peer's identity.
I think we should stick to a single transport for this, pipes look nice and simple.
thanks @vyzo! def agree, will remove the memorytransport.
To actually hook this in, I think we'll need a `/memory` multiaddr. We strip out the `/p2p/Qm...` part when looking up transports, as that isn't actually a part of the transport address. We can even introduce a memory network concept so all peers within a process can communicate through it.

* Listening on `/memory` would cause this transport to register a listener (for its peer ID) with the memory network.
* Dialing a peer with `/memory` would cause this transport to try to connect to that peer using the memory network.
We could even name these networks `/memory/SOME_NAME` if we want to support multiple local networks, but we shouldn't get too fancy (we can also control this in the in-memory network itself).
i wanted to avoid having to extend the multiformats repo, though it seems it's unavoidable. i'd advocate for both `/memory` and `/pipe`, even if we don't choose to implement memory at the moment (or reconfigure it to use channels).
should the pipe transport have a muxer and no security or should i simply create a new pipe for every new stream?
i think we should ditch the memory transport, since your proposed channel-based solution is almost exactly what `net.Pipe()` is doing
> i wanted to avoid having to extend the multiformats repo, though it seems it's unavoidable. i'd advocate for both /memory and /pipe, even if we don't choose to implement memory at the moment (or reconfigure to use channels).
By "memory" I just mean in-memory, not necessarily a specific in-memory transport (the user can pick which transport they want to use for process-local connections). Maybe `/local` is better? The idea is that any application would get to define what the "local" protocol is and how it should work.
> should the pipe transport have a muxer and no security or should i simply create a new pipe for every new stream?
I'd just create a new pipe for every stream. That'll be a lot less overhead in practice.
> By "memory" I just mean in-memory, not necessarily a specific in-memory transport
tomaka needs the same: multiformats/multiaddr#71
oh wow @lgierth thanks for the link
> i wanted to avoid having to extend the multiformats repo, though it seems it's unavoidable. i'd advocate for both /memory and /pipe, even if we don't choose to implement memory at the moment (or reconfigure to use channels).

> By "memory" I just mean in-memory, not necessarily a specific in-memory transport (the user can pick which transport they want to use for process-local connections). Maybe `/local` is better? The idea is that any application would get to define what the "local" protocol is and how it should work.
should these slots be named, then? i.e. `/memory/foo`? should `/memory` be a path multiaddr? so we sandbox by peer ID but allow arbitrary named ports?
> should these slots be named, then? i.e. /memory/foo should /memory be a path multiaddr? so we sandbox by peer ID but allow arbitrary named ports?
I wouldn't make it a path multiaddr, but having arbitrary slots may make sense. However, I can't think of any reason I'd want to use slots where I wouldn't just want to set up a per-peer policy (i.e., tell the local network service "allow peer X to dial peer Y").
What I'm thinking of is basically @tomaka's approach.
@Stebalien i've fixed the implementation of the pipetransport to eschew upgraders and the like. i'm going to focus my attention on the multiformats side to get that set up, then i'll continue work on this change. remaining:
- [ ] update multiaddr handling to use /memory
- [ ] add options and new transport to libp2p hosts
- [ ] implement self dialing via pipetransport when enabled on a host
@bigs – how is this going? do you need a review from me or should I wait for the next iteration? In the latter case, when is it due, so I can plan it? Thanks!
@raulk the only blocker right now is consensus around `/memory` multiaddrs at pierre's linked issue. i've already implemented a version that uses opaque uint64 slots, should we thumbs-up that!
regarding the dialer work, i think the transport is still necessary (you still have to listen somewhere in your code), but it will be much cleaner determining where we want to use it, just as you've described.
edit: the shimming is already there as well. down to benchmark. what's the goal or target? being slower than a tcp transport would be an absolute failure; should we just aim to be better than that?
@bigs

> i think the transport is still necessary (you still have to listen somewhere in your code)
Hm, I think listening is not necessary. I think of this as a persistent, in-memory bridge rather than a traditional transport.
There's only ever a singleton virtual connection. We create it at system startup, and whenever the swarm processes a dial to ourselves, we return that connection.
We probably don't even want to generate any connection Notifee events, nor trigger the connection handler. It doesn't need to be registered in the swarm either, and it shouldn't be subject to connection management.
We'll need access to the swarm's `StreamHandler`, so we can invoke it when a new stream is opened. That should give way to multistream-select negotiation, and eventually a callback to the appropriate stream handler.
One detail here is that we should only consider handlers that have annotated themselves as "supporting self-dialling", as per the original design: https://github.com/libp2p/go-libp2p/issues/328#issuecomment-465275237
i suppose so long as a stream handler exists, listening becomes irrelevant, though that does break pierre's testing use case, which could be quite nice. it allows us to allocate an address that doesn't consume any resources beyond cpu/memory, which could potentially enable large tests. i do see what you mean, though.
doesn't look like my comment posted, but as of last friday this is ready for final review. only thing left after review is the go.mod replace directive and publishing.
@raulk @Stebalien @vyzo
Hi,
is there any progress on this effort? It seems to have been abandoned for more than a year now, and I'm facing a similar issue in my own project. It looks like I will have to spend time on a feature which would definitely be great to have in libp2p...
In my case I would like to simulate several identities in a tiny environment, given the number of p2p node roles I have to manage. I'm currently starting many p2p nodes (currently 12), but that has its limits (many concurrent IOs leading to many timeouts... Note: libp2p nodes are not the only IO-consuming components in my test scenario). Let me know if you see any helpers...
Thank you
If you're doing local testing, I recommend using a "mock network": https://pkg.go.dev/github.com/libp2p/[email protected]/p2p/net/mock#Mocknet.
Well, this is not only local testing; I would like some of my libp2p nodes to handle several identities/IPFS addresses, a little like nginx can handle several URLs.
Currently I'm running one libp2p node per identity because of the `dial to self attempted` issue. But in my business case, most of these identities should be managed by a super libp2p node, or let's say an nginx-like libp2p node, according to their roles on the network...
Thank you for this first answer, however; it may help even if it doesn't target my entire scenario ;)
You want multiple identities but one node? Every node will always have its own identity. Are you sure you need multiple identities and not just multiple protocols? Let's move this discussion over to https://discuss.libp2p.io, I think we can solve your issue without any new features.