js-ipfs
ipfs-http-client should warn about limited number of pubsub topic subscriptions
TLDR: this is an artificial limit imposed by the number of socket connections open or a browser limit, see explainer and suggestion to warn users in https://github.com/ipfs/js-ipfs/issues/3983#issuecomment-996771955
Version:
- ipfs-http-client v0.55.0 (this was also happening with v0.54.2)
- go-ipfs v0.11.0 (this was also happening with v0.9.1)
Platform:
- Linux localhost 5.11.0-37-generic #41-Ubuntu SMP Mon Sep 20 16:39:20 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- Darwin Christoss-MBP 21.1.0 Darwin Kernel Version 21.1.0: Wed Oct 13 17:33:23 PDT 2021; root:xnu-8019.41.5~1/RELEASE_X86_64 x86_64 (seems platform irrelevant)
Node Version: v16.13.1 (LTS)
Subsystem: ipfs-http-client
Severity: Medium - A non-essential functionality does not work, performance issues, etc.
Description:
When subscribing to more than 6 pubsub topics through an ipfs-http-client instance using ipfs.pubsub.subscribe(<topic>), the number of open topics, as reported by ipfs.pubsub.ls() or by the CLI (ipfs pubsub ls), caps at 6.
Expected Behaviour
Running ipfs pubsub ls should output as many pubsub topics as there were invocations of ipfs.pubsub.subscribe(<unique_topic>).
Actual Behaviour
The number of open pubsub topics reported by ipfs pubsub ls caps at 6.
Steps to reproduce the error:
- Subscribe to more than 6 pubsub topics through ipfs-http-client
- Run ipfs pubsub ls to compare the number of open topics.
const IPFS = require("ipfs-http-client");

const GO_IPFS_HOST = "localhost";
const GO_IPFS_PORT = 5001;

// Constant to change
const NUMBER_OF_TOPICS = 10; // If this is more than 6, fewer pubsub topics end up open than were subscribed to

(async () => {
  const ipfs = IPFS.create({ host: GO_IPFS_HOST, port: GO_IPFS_PORT });
  console.log("IPFS is ready");

  const topics = [];
  for (let i = 0; i < NUMBER_OF_TOPICS; i++) {
    // Subscribe with a no-op message handler
    await ipfs.pubsub.subscribe(`test-topic-${i}`, () => {});
    topics.push(`test-topic-${i}`);
  }

  console.log("All topics are ready");
  console.log("\n");
  topics.forEach((t) => console.log(t));

  const pubsubTopics = await ipfs.pubsub.ls();
  console.log("\n");
  console.log("Open pubsub topics:");
  console.log(pubsubTopics);

  if (pubsubTopics.length < topics.length) {
    console.log("There are fewer pubsub topics than what we opened! :(");
  } else if (pubsubTopics.length === topics.length) {
    console.log("There are the same number of pubsub topics as the ones we opened! :)");
  } else {
    // THIS WON'T EVER HAPPEN IF THE IPFS NODE IS FRESH
    console.log("There are more pubsub topics than the ones we opened! :( You might need to restart your IPFS node.");
  }
})();
The above code reproduces the issue. You can also clone and run node test-pubsub-only.js from the following repository:
https://github.com/chrispanag/orbit-db-pubsub-issue-replication
Additional Information - Observations
The issue seems to be specific to ipfs-http-client not being able to open more than 6 pubsub subscriptions: when a separate instance of it is run concurrently, that instance can also open at most 6 additional pubsub subscriptions.
Additionally, when more than 6 subscription topics are opened through the go-ipfs CLI, the behaviour is correct.
Finally, the same thing happens when a 1s delay is added between opening each pubsub topic. So it seems it's not a performance or rate-limiting issue.
This sounds like a duplicate of https://github.com/ipfs/js-ipfs/issues/3741 - have you read that issue?
Folks will get bitten by this in the browser context over and over again. We should find a way to proactively avoid silent failures / bugs like this.
This will get even worse when enabling ipns-over-pubsub, where each name requires listening on a separate topic (we could change the spec to work better in browsers, but it is like that atm).
HTTP/1.1 vs HTTP/2
iiuc this limit of 6 connections is specific to HTTP/1.1. If the RPC API is exposed over HTTP/2 then everything is multiplexed over a single TCP connection, and the Chromium limit for the number of multiplexed streams is 100 (which is way more manageable).
@chrispanag if you need a quick workaround, put go-ipfs behind a reverse proxy (like nginx) with HTTP/2 set up, and see if that helps.
Ceiling detection?
@achingbrain perhaps js-ipfs running in the browser could run a self-test to progressively determine what the ceiling of the current runtime is, and then throw an error before the ceiling (if present) is reached, instead of failing silently?
To make the error actionable: if the ceiling is <10 then we should print a console message suggesting exposing /api/v0 over HTTP/2 to increase the topic limit to ~100.
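Something like the following, perhaps. This is purely a sketch of the self-test idea - the probe count, topic names and warning text are made up here, and it assumes subscribe() still resolves even when a subscription is silently dropped, as in the repro script above.

// Hypothetical self-test: subscribe to a batch of throwaway topics, compare
// against ipfs.pubsub.ls() and warn if subscriptions are silently dropped.
async function detectSubscriptionCeiling (ipfs, probes = 12) {
  const prefix = `ceiling-probe-${Date.now()}`;

  for (let i = 0; i < probes; i++) {
    await ipfs.pubsub.subscribe(`${prefix}-${i}`, () => {});
  }

  const open = (await ipfs.pubsub.ls()).filter((t) => t.startsWith(prefix)).length;

  // Best-effort cleanup of the probe topics
  for (let i = 0; i < probes; i++) {
    await ipfs.pubsub.unsubscribe(`${prefix}-${i}`).catch(() => {});
  }

  if (open < probes) {
    console.warn(
      `pubsub subscriptions appear to cap at ${open} in this runtime - ` +
      "consider exposing /api/v0 over HTTP/2 or raising the connection limit"
    );
    return open;
  }

  return Infinity;
}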
This sounds like a duplicate of #3741 - have you read that issue?
No I didn't! Thanks for pointing this out, setting a custom http.Agent solved the issue!
Apart from that I can only agree with @lidel on "Folks will get bitten by this in the browser context over and over again". Maybe there should be an explicit mention of this in the docs and also some way so that it doesn't fail silently.
@lidel thanks for the suggestion, fortunately I'm running this on Node.js so the solution was easy (just increasing the number of open socket connections).
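For anyone else hitting this under Node.js, a minimal sketch of that workaround, assuming a local go-ipfs node on port 5001 (maxSockets: Infinity is just one possible choice - any value above the number of concurrent subscriptions works):

const http = require("http");
const IPFS = require("ipfs-http-client");

const ipfs = IPFS.create({
  host: "localhost",
  port: 5001,
  // Replace the client's default agent so concurrent requests (each open
  // subscription holds one) are not capped at a handful of sockets per host
  agent: new http.Agent({ keepAlive: true, maxSockets: Infinity })
});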
I've run up against similar issues when working on exposing hypercore protocol extension messages for pub/sub over HTTP gateways.
The approach I used was to have all of the incoming messages sent over a single server-sent events connection with the event type set to the gossip topic, and have clients POST to a URL with the topic name (and optionally a peer ID) to broadcast out.
The main benefit over something like a WebSocket protocol is that it's very webby and doesn't require any extra APIs beyond what's already in the browser.
I've been thinking of integrating something similar into js-ipfs-fetch and the Agregore Browser
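For illustration, a rough browser-side sketch of that pattern - the endpoint paths (/pubsub/events and /pubsub/<topic>) and the JSON payloads are made up here, not part of any existing API; the point is that one EventSource connection carries every topic, so only a single HTTP request stays open no matter how many topics are subscribed:

const events = new EventSource("/pubsub/events");

function subscribe (topic, handler) {
  // Each gossip topic arrives as a named SSE event type
  events.addEventListener(topic, (ev) => handler(JSON.parse(ev.data)));
}

function publish (topic, message) {
  // Outgoing messages are plain POSTs keyed by topic
  return fetch(`/pubsub/${encodeURIComponent(topic)}`, {
    method: "POST",
    body: JSON.stringify(message)
  });
}

subscribe("test-topic-0", (msg) => console.log("got", msg));
publish("test-topic-0", { hello: "world" });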
js-ipfs is being deprecated in favor of Helia. You can learn more about this at https://github.com/ipfs/js-ipfs/issues/4336 and read the migration guide.
Please feel free to reopen with any comments by 2023-06-02. We will do a final pass on reopened issues afterward (see https://github.com/ipfs/js-ipfs/issues/4336).
@achingbrain assigning to you because I'm not sure if we're handling the root issue of this problem with pubsub in helia/libp2p.
Please port your app to use https://github.com/ipfs/js-kubo-rpc-client in place of ipfs-http-client - it's a drop-in replacement and is where Kubo fixes and compatibility updates will land in future.