rust-ipfs
floodsub compatibility with go-ipfs
When connecting with a go-ipfs daemon --enable-pubsub-experiment (0.4.23 at least), both processes start consuming a lot of CPU time. Looking at the go-ipfs logs with log all=debug,mdns=error,bootstrap=error:
18:19:30.046 DEBUG swarm2: [QmbLzuAXTG1yLt3ghR6RbvbZUBaJtDqPpcV2hVSESKYAej] opening stream to peer [QmbxRPux7zZmSkMUMF8aHuQdV8gQCXNExvwg7Vgx9qLjQf] swarm.go:280
18:19:30.049 WARNI pubsub: peer declared dead but still connected; respawning writer: QmbxRPux7zZmSkMUMF8aHuQdV8gQCXNExvwg7Vgx9qLjQf pubsub.go:326
With rust-ipfs the logs are similar, mostly about yamux and multistream_select opening and closing a substream. I suspect the issue is that the go-ipfs side wants a long-running substream, while the rust-ipfs side only opens a substream when there is something to send.
Ok, I managed to reproduce this and investigate a little. I am still not sure where the connection gets closed on the rust side. I think you are correct: the logic in the go implementation is to always reconnect on EOF.
This is the log line and logic: for dead peers the connection state is checked, and if the peer is still connected, the pubsub RPC is started again: https://github.com/libp2p/go-libp2p-pubsub/blob/ea5d2e6d6dcddf8daca51e8e742dd03571047fbf/pubsub.go#L463
Here the peer is marked as dead when EOF is reached (e.g. the stream is closed on the rust side): https://github.com/libp2p/go-libp2p-pubsub/blob/1f147c24576a60c9b718456c98de964032e3b38e/comm.go#L94
Not sure what the correct behavior is, so this has to be clarified first.
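To make the interaction easier to reason about, here is a tiny self-contained model of the loop I believe we are hitting. This is not the real go-libp2p-pubsub code; the function name and structure below are made up and only paraphrase the two linked lines:

// The rust side opens a substream per RPC and closes it afterwards.
// The go side sees EOF on its writer, marks the peer dead, notices the
// underlying connection is still alive, and respawns the writer, i.e.
// opens yet another substream -- which the rust side will close again.
fn go_side_on_writer_eof(still_connected: bool) -> &'static str {
    if still_connected {
        // corresponds to "peer declared dead but still connected; respawning writer"
        "respawn writer (open a new substream)"
    } else {
        "remove peer"
    }
}

fn main() {
    // The connection itself never drops, only the substreams do, so every
    // round takes the "respawn" branch and both peers keep spinning.
    for round in 0..3 {
        println!("round {}: rust closes substream after sending", round);
        println!("round {}: go -> {}", round, go_side_on_writer_eof(true));
    }
}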
I came across a potentially related issue using the following code:
use async_std::task;
use futures::StreamExt;
use ipfs::{IpfsOptions, Types, UninitializedIpfs};

fn main() {
    env_logger::init();
    let options = IpfsOptions::<Types>::default();
    task::block_on(async move {
        println!("IPFS options: {:?}", options);
        let (ipfs, future) = UninitializedIpfs::new(options).await.start().await.unwrap();
        task::spawn(future);

        // Subscribe
        let topic = "test1234".to_owned();
        let mut subscription = ipfs.pubsub_subscribe(topic.clone()).await.unwrap();
        ipfs.pubsub_publish(topic.clone(), vec![41, 41]).await.unwrap();
        while let Some(message) = subscription.next().await {
            println!("Got message: {:?}", message)
        }

        // Exit
        ipfs.exit_daemon().await;
    })
}
This will not connect to another instance of itself (i.e. running it twice), or to the go-ipfs client.
Rust Client A
IPFS options: IpfsOptions { ipfs_path: PathBuf { inner: "/Users/nicktaylor/.rust-ipfs" }, bootstrap: [("/ip4/104.131.131.82/tcp/4001", PeerId("QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ")), ("/ip4/104.236.179.241/tcp/4001", PeerId("QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM")), ("/ip4/104.236.76.40/tcp/4001", PeerId("QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64")), ("/ip4/128.199.219.111/tcp/4001", PeerId("QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu")), ("/ip4/178.62.158.247/tcp/4001", PeerId("QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd")), ("/ip6/2400:6180:0:d0::151:6001/tcp/4001", PeerId("QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu")), ("/ip6/2604:a880:1:20::203:d001/tcp/4001", PeerId("QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM")), ("/ip6/2604:a880:800:10::4a:5001/tcp/4001", PeerId("QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64")), ("/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001", PeerId("QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"))], keypair: Keypair::Ed25519, mdns: true }
Got message: PubsubMessage { source: PeerId("12D3KooWJXVxo6wY8T1CHuou3Nct4JrnYojRpnTnqKygv8tiymTp"), data: [41, 41], sequence_number: [176, 19, 137, 113, 220, 129, 183, 152, 104, 245, 121, 8, 46, 107, 141, 152, 255, 248, 3, 160], topics: ["test1234"] }
Rust Client B
IPFS options: IpfsOptions { ipfs_path: PathBuf { inner: "/Users/nicktaylor/.rust-ipfs" }, bootstrap: [("/ip4/104.131.131.82/tcp/4001", PeerId("QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ")), ("/ip4/104.236.179.241/tcp/4001", PeerId("QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM")), ("/ip4/104.236.76.40/tcp/4001", PeerId("QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64")), ("/ip4/128.199.219.111/tcp/4001", PeerId("QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu")), ("/ip4/178.62.158.247/tcp/4001", PeerId("QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd")), ("/ip6/2400:6180:0:d0::151:6001/tcp/4001", PeerId("QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu")), ("/ip6/2604:a880:1:20::203:d001/tcp/4001", PeerId("QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM")), ("/ip6/2604:a880:800:10::4a:5001/tcp/4001", PeerId("QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64")), ("/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001", PeerId("QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"))], keypair: Keypair::Ed25519, mdns: true }
Got message: PubsubMessage { source: PeerId("12D3KooWJXVxo6wY8T1CHuou3Nct4JrnYojRpnTnqKygv8tiymTp"), data: [41, 41], sequence_number: [194, 9, 29, 179, 181, 171, 142, 16, 229, 4, 162, 118, 44, 19, 7, 248, 3, 186, 49, 231], topics: ["test1234"] }
go-ipfs
nicktaylor@Nicks-MBP ~ % ipfs pubsub pub test1234 "test"
nicktaylor@Nicks-MBP ~ % ipfs pubsub pub test1234 "test"
nicktaylor@Nicks-MBP ~ % ipfs pubsub pub test1234 "test"
nicktaylor@Nicks-MBP ~ % ipfs pubsub pub test1234 "test"
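One thing that might help narrow this down is checking whether the two nodes ever see each other as floodsub peers on the topic at all before publishing. A rough sketch below; note that pubsub_peers is my assumption of what the Ipfs handle exposes (the exact method name and signature should be double-checked), and on the go-ipfs side the equivalent check would be ipfs pubsub peers test1234.

use async_std::task;
use ipfs::{IpfsOptions, Types, UninitializedIpfs};
use std::time::Duration;

fn main() {
    env_logger::init();
    let options = IpfsOptions::<Types>::default();
    task::block_on(async move {
        let (ipfs, future) = UninitializedIpfs::new(options).await.start().await.unwrap();
        task::spawn(future);

        let topic = "test1234".to_owned();
        let _subscription = ipfs.pubsub_subscribe(topic.clone()).await.unwrap();

        // Give discovery a moment, then list who else is subscribed to the topic.
        // NOTE: pubsub_peers is an assumption on my part; check the actual
        // method name and signature in the current Ipfs API.
        task::sleep(Duration::from_secs(10)).await;
        let peers = ipfs.pubsub_peers(Some(topic.clone())).await.unwrap();
        println!("floodsub peers on {}: {:?}", topic, peers);

        ipfs.exit_daemon().await;
    })
}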
There is a lot of INFO-level logging in the daemon, so I am not sure which other logs might be helpful.
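If it helps, the rust-side noise can be reduced with a narrower log filter instead of a plain env_logger::init() (or equivalently by setting RUST_LOG, which init() respects). The target names below are my guesses based on the crate names visible in the logs (yamux, multistream_select), so adjust as needed:

fn main() {
    // Replaces env_logger::init() in the repro above; the target names are
    // assumptions based on the crates seen in the logs, adjust as needed.
    env_logger::Builder::new()
        .parse_filters("ipfs=info,libp2p_floodsub=trace,libp2p_swarm=debug,multistream_select=debug,yamux=debug")
        .init();

    // ... rest of the repro ...
}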
The real solution to this is to upgrade to gossipsub.
In #186 you mentioned:
The current build has an equivalent of --enable-pubsub-experiment always on, and in fact it cannot be turned off via configuration. The inability to configure it off, together with #132, makes it so that you cannot connect to a go-ipfs 0.5 which is running with --enable-pubsub-experiment.
Is this also the case for go-ipfs 0.6? And js-ipfs? Does this mean the pubsub functionality is currently only working among rs-ipfs nodes?
Does this mean the pubsub functionality is currently only working among rs-ipfs nodes?
As far as I know, yes, but I haven't looked into this for a while! While it is possible that floodsub has been changed in the meantime (if so, I have missed those PRs), gossipsub is, as far as I remember, still on track to support both gossipsub and floodsub.
Is this also the case for go-ipfs 0.6? And js-ipfs?
I don't think I ever tested js-ipfs with floodsub, nor am I sure about go-ipfs 0.x where x > 5.