
Deprecate then Remove /api/v0/pubsub/* RPC API and `ipfs pubsub` Commands

Open Jorropo opened this issue 1 year ago • 49 comments

This is about /api/v0/pubsub/*, and not IPNS-over-Pubsub.

Kubo's PubSub RPC API (/api/v0/pubsub/*) provides access to a somewhat reliable multicast protocol.

However it has many issues:

Lack of a real contract on the reliability of messages

The message reliability contract is more or less:

  1. It's mostly reliable: no retransmission, but reliable transports and flood-like propagation (AKA whatever go-libp2p-pubsub implements).
  2. Duplicate messages shouldn't be shared too much across the mesh and should die off after some time (an attempt to avoid storms).

Regarding the first point: go-libp2p-pubsub has a bunch of options and callbacks you can configure to change the behaviour of the mesh (see the Production examples of correct usage of go-libp2p-pubsub section below). These options are not currently configurable in Kubo, and it's not just a question of exposing them: the really impactful options are the ones locked behind callbacks, like validators. Two potential solutions: have clients keep a websocket, gRPC, ... stream open with Kubo, so that when Kubo receives a callback from go-libp2p-pubsub it can forward the arguments over the API stream and wait for the response; or add something like WASM to run custom application code inside Kubo, so you could configure WASM blobs which implement the validator you want. The latter is much harder than just throwing in a WASM interpreter and writing a few hundred SLOC of glue code, because most validators you would want to write need to access and store application-related state (for example, in a CRDT application: do not relay messages that advertise a HEAD lower than the currently known HEAD).

Regarding the second point: our current implementation of message deduplication uses a bounded cache to find duplicates. If the mesh gets wider than the cache size, you can trigger an exponential broadcast-storm-like event: https://github.com/ipfs/kubo/issues/9665. Sadly this links back to point one: even though the fix is supposed to be transparent and implements visibly similar message-deduping logic (except without a bounded size), it makes our interop tests very flaky and thus might break various things in the ecosystem.
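A toy sketch (not Kubo's actual code) of why a bounded "seen messages" cache can re-admit duplicates: once more distinct message IDs pass through than the cache holds, an old ID is evicted, so its duplicate is treated as new and gets re-broadcast. The cache type and sizes here are illustrative.

```go
// Toy FIFO-bounded "seen" set demonstrating the eviction failure mode.
package main

import "container/list"

type seenCache struct {
	cap   int
	order *list.List
	set   map[string]struct{}
}

func newSeenCache(capacity int) *seenCache {
	return &seenCache{cap: capacity, order: list.New(), set: map[string]struct{}{}}
}

// FirstSeen records id and reports whether it looked new to us.
func (c *seenCache) FirstSeen(id string) bool {
	if _, ok := c.set[id]; ok {
		return false
	}
	if c.order.Len() >= c.cap {
		oldest := c.order.Remove(c.order.Front()).(string)
		delete(c.set, oldest) // eviction: we "forget" having seen it
	}
	c.order.PushBack(id)
	c.set[id] = struct{}{}
	return true
}

func main() {
	c := newSeenCache(2)
	c.FirstSeen("a")
	c.FirstSeen("b")
	c.FirstSeen("c") // evicts "a"
	c.FirstSeen("a") // reports true again: "a" would be re-broadcast
}
```

In a wide mesh, each such re-admission is re-flooded to neighbours, which is how the storm compounds.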

Confusing Architecture

I have had more discussions than I have fingers with various people who complain that Kubo pubsub does not work: they never receive messages. Almost always the issue is that they are running IPFS HTTP clients in the browser, open two browser tabs, and then try to receive messages from the other tab. This does not work because Kubo does not see the two clients as two clients; from Kubo's point of view, the HTTP API is remote-controlling the Kubo node. The fact that the tabs are different browser instances is not taken into account: as far as Kubo can see, the messages are sent by the same node (itself), and it does not return your own messages to you, because receiving messages you sent yourself is confusing.

This is a perfectly valid usecase, just not what the API was designed for. (One way to implement it is to use js-libp2p in the browser; your browser node would then use floodsub to talk to a local Kubo node, with messages going through the libp2p swarm instead of the HTTP API.)

Future of the API

Currently the pubsub API is not in a good place and correctly advertises this:

EXPERIMENTAL FEATURE
  
    It is not intended in its current state to be used in a production
    environment.  To use, the daemon must be run with
    '--enable-pubsub-experiment'.

Our current team goal is to move away from maintaining two ABIs (HTTP & Go) for people who want to build applications on top of IPFS, by providing a consistent Go ABI story (go-libipfs) and a comprehensive example collection on how to use it (go-libipfs/examples). Fixing the PubSub API requires lots of work which does not align with these goals and thus does not justify allocating much time when engineering time is at a premium.

go-libp2p-pubsub's Go API is already competent and capable of satisfying consumers' needs, as proven by the Production examples of correct usage of go-libp2p-pubsub section below. go-libp2p-pubsub will continue to be part of libp2p, given it really has very little to do with IPFS and can be used by any libp2p project (ETH2 for example).

Ways for creating a soft landing

To ease the pain of people currently using Kubo's PubSub HTTP API, we could:

  1. Create a new daemon binary that provides the same endpoints. That said, we have no plans to maintain it, and it is most likely going to have the same issues as the current API; someone else would need to pick it up and maintain it.
  2. As part of @ipfs/kubo-maintainer's example-driven development strategy, we could create missing examples on how to bootstrap a project using go-libp2p-pubsub if that is useful. (TBD where that example will live, but it could be something like a full-example in libp2p/go-libp2p-pubsub that is validated as part of CI.)

Production examples of correct usage of go-libp2p-pubsub

For good examples of how to use go-libp2p-pubsub effectively, see things like:

  • Lotus's pubsub, used in many layers of the Filecoin consensus to discover state and share state updates.
  • Prysm's beacon chain which does something similar but for the ETH2 consensus layer.
## Tasks
- [x] @Jorropo Close https://github.com/ipfs/kubo/issues/6621 (pointing to the issue above)
- [X] @Jorropo Create a topic in discuss.ipfs.tech (https://discuss.ipfs.tech/t/help-kubo-maintainers-about-usecases-for-the-http-pubsub-api/16097). Include that we are planning to remove pubsub (link to this issue) and "Please comment below to share your usecase and why you have used this in Kubo."
- [x] @Jorropo For Kubo 0.19, PR for moving pubsub from experimental to deprecated (referencing this issue). This should have a changelog update. Hide/remove all the docs about this as well since we don't want anyone else putting weight on this.  https://github.com/ipfs/kubo/pull/9718
- [ ] post 0.20 / IPFS Thing, create the migration path / soft landing so we can fully remove pubsub from Kubo.
- [ ] Remove pubsub from Kubo

Jorropo avatar Mar 10 '23 16:03 Jorropo

Thanks for creating this issue @Jorropo . I made some adjustments to the issue description including adding a formal task list. (Feel free to look at the changes in the issue history.)

BigLep avatar Mar 10 '23 18:03 BigLep

Thanks for the write up @Jorropo.

This makes total sense for all the reasons you mentioned.

Broadly speaking, we just need to do some better advocacy and education about PubSub in libp2p and establish some best practices from the known real world usecases you listed.

As far as I understand, this would deprecate the following endpoints:

  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-ls
  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-peers
  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-pub
  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-sub

Tagging @TheDiscordian as this would likely break Discochat, which relies on the Kubo-RPC client and a Kubo daemon to subscribe to topics.

It looks like it does, from a search of the code.

Either way, we're already planning a new example to showcase universal connectivity with libp2p (https://github.com/libp2p/universal-connectivity/issues/1), which demonstrates an app architecture where every user is a full libp2p peer.

2color avatar Mar 13 '23 11:03 2color

Reopening since this isn't complete (only https://github.com/ipfs/kubo/pull/9718 is).

BigLep avatar Mar 15 '23 14:03 BigLep

As far as I understand, this would deprecate the following endpoints:

  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-ls
  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-peers
  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-pub
  • https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-pubsub-sub

Tagging @TheDiscordian as this would likely break Discochat, which relies on the Kubo-RPC client and a Kubo daemon to subscribe to topics.

So kubo v0.19 will be the last kubo release to have the pubsub RPC API baked in, it seems. My project can't work without pubsub, so I'll just freeze the kubo version for now. There's a post here that suggests it'll still be possible somehow to use pubsub via a separate repo; any info on that?

pinnaculum avatar Mar 21 '23 20:03 pinnaculum

@pinnaculum this is point number 1 in Ways for creating a soft landing in the issue above.

Ideally what we want is for you to write ~100 lines of Go and use go-libp2p-pubsub directly, because this gives you access to the lower-level callbacks which let you configure your pubsub mesh correctly (with message validation, and without relaying outdated messages).

Jorropo avatar Mar 21 '23 21:03 Jorropo

Ideally what we want is for you to write ~100 lines of Go and use go-libp2p-pubsub directly, because this gives you access to the lower-level callbacks which let you configure your pubsub mesh correctly (with message validation, and without relaying outdated messages).

Thanks @Jorropo :) Yes, that works if you're writing in Go. I've read the "chat" example in go-libp2p-pubsub, but since your program runs outside of kubo and has to be written in Go, this makes things more difficult for a project that wants to use pubsub together with the rest of kubo's APIs. IPFS's pubsub on its own is truly great (hooray gossipsub), but IMO it's when you can combine it with the rest of the IPFS APIs that it truly shines, because there are many other pubsub implementations out there.

The beauty of having an integrated pubsub RPC API is that you can write in any language that has bindings for kubo's RPC, without knowing (or caring, tbh) about the internals of all this, and take advantage of the rest of the IPFS API.

You guys are the experts, and I understand that it takes a lot of time to maintain and fix all of this code. I've used the pubsub RPC API from the go-ipfs 0.4.x days up to the recent kubo releases, and it has steadily improved, so thank you and congratulations to the geniuses involved.

pinnaculum avatar Mar 22 '23 12:03 pinnaculum

@pinnaculum I'm not saying that you would rewrite your complete app in Go. I'm saying that you would write a small wrapper that exposes the pubsub features you need over HTTP. This is different from Kubo doing it because you have application knowledge: you can write something that correctly fits your usecase (for example, if you are doing a CRDT, you can have a validator that compares message heights and drops messages that are no longer current).

The beauty of having an integrated pubsub RPC API is that you can write in any language that has bindings for kubo's RPC, without knowing (or caring, tbh) about the internals of all this, and take advantage of the rest of the IPFS API.

There will be a binary that provides the same features if you are happy with them. I would be surprised if only a few people ran into issues like #9665, which are not trivial to solve. Also, the current message-seen-cache code seems to use more and more CPU for each message as the cache grows.

Jorropo avatar Mar 22 '23 19:03 Jorropo

My project relies on both IPFS and pubsub. By pulling it out of kubo and having projects rely on libp2p instead, wouldn't that lead us to effectively load libp2p twice: once in kubo, plus a separate instance of libp2p to run pubsub?

It seems that folks using pubsub via kubo are perfectly happy with the level of accessibility that's currently offered and would prefer to continue using it. The alternate solutions provided above make projects much more cumbersome to manage: requiring support for multiple languages isn't ideal (the wrapper), and having to manage an additional process (go-libp2p-pubsub) adds further complexity in deployment, maintenance, and usage.

I'd propose leaving the API in its current state, and if someone is unhappy with how it's working, leave it to them to implement their changes, and just remove it from the primary devs' roadmap.

The feature is only a performance issue for nodes that explicitly enable it, correct?

fcbrandon avatar Mar 22 '23 21:03 fcbrandon

I rely on the Kubo RPC API for accessing pubsub from an Electron app. Removing this would mean I could no longer use the go-ipfs NPM module or Kubo in general and would need to totally rework how I integrate IPFS into my applications.

I suppose this could be a reason to ditch Kubo entirely and embed just a subset of it into a custom HTTP API? It would certainly make it harder to deploy and reuse things like IPFS-Cluster with the node.

RangerMauve avatar Mar 22 '23 22:03 RangerMauve

I just wanted to say thanks to folks for sharing their usecases and needs. For transparency, Kubo maintainers haven't done any work on this yet. In case it wasn't clear, the migration path/plan will be designed and communicated before we undertake this work. Updates will be posted here. In the meantime, feel free to continue to share.

BigLep avatar Mar 23 '23 04:03 BigLep

I'm saying that you would write a small wrapper that exposes the pubsub features you need over HTTP. This is different from Kubo doing it because you have application knowledge: you can write something that correctly fits your usecase

I've thought about that (exposing some kind of RPC in the wrapper) but did not mention it in my message. The reason why this approach would be problematic for my project (and @fcbrandon talks about this as well) is that, as I understand it, you would have two parallel libp2p instances/nodes: kubo's libp2p instance (which would not have pubsub "capabilities") and the libp2p instance running in the "wrapper" that uses go-libp2p-pubsub and therefore can exchange pubsub messages. Am I wrong about that? I use IPFS peer IDs as a "unique key" in a PeerID <=> DID mapping. Having two nodes, and therefore distinct peer IDs, makes this much more difficult (though not impossible), and performance-wise it's not ideal.

Just exposing my thoughts about this approach; maybe for other people's usecases it wouldn't be a problem at all. I wonder if a compromise could be found by having kubo keep using the "default" pubsub validator (the "BasicSeqnoValidator"?, which is probably inefficient), while giving people who want better control the option of using go-libp2p-pubsub's API to set up their custom validators, etc.

I've read validation_builtin.go. If better validators are implemented in the future, there could be a /api/v0/pubsub/set_topic_validator kubo RPC API call that would just pass the name of a builtin pubsub validator (and maybe some optional settings) to set the validator to use for a given topic. Then there would be no need for messy callbacks, and kubo's pubsub implementation would strengthen over time with different kinds of validators. I think the implementation of validators should stay in go-libp2p-pubsub. Once you take this problem out of the equation, the case for deprecating the pubsub API in kubo is not as strong, because most people using pubsub with kubo are probably fine with letting kubo use the best validator available.

pinnaculum avatar Mar 23 '23 21:03 pinnaculum

I just wanted to say thanks for folks sharing about their usecases and needs.

One of our usecases would be to know the IP address of the direct peer who sent a message (not the origin), to be able to block the IPs of peers that relay bad messages. Being able to block IPs in pubsub would also be nice. I know you can do it with a swarm filter, but that blocks the IP everywhere.

IMO having a basic pubsub API included with kubo is good for testing, prototyping and discovering that it even exists, even if it can't be customized. I'm not sure I would know it exists if I didn't randomly see it as an option in IPFS one day.

In our app we bundle kubo with electron, and use both ipfs and pubsub, so having 2 binaries, one for ipfs and one for pubsub would mean a larger bundle. Could it also mean more resource/bandwidth usage if the user has to run both at the same time?

Also, no one on our team knows Go, so the ideal scenario would be for pubsub to remain in kubo, but with more configuration options, for example in our case blocking IPs of direct peers. The second-best scenario would be a separate pubsub binary and RPC with more configuration options. The least ideal (but not dealbreaking) would be to use go-libp2p-pubsub.

estebanabaroa avatar Mar 28 '23 18:03 estebanabaroa

Thank you all for feedback and reaching out.

I had some related conversations about this during IPFS Thing, and people would appreciate having basic pubsub built in a bit longer, with whatever opinionated defaults we want.

A cold cut, removing the pubsub RPC commands outright, would cause bigger pain than libp2p-relay-daemon did, because we are talking about an end-user API. Pain for both users and maintainers ("there will be a binary that provides the same features if you are happy with them" → someone has to build it, maintenance costs time, maybe we can spend it elsewhere?).

Given that:

  1. js-ipfs and js-ipfs-http-client are deprecated, and were specific to JS-IPFS, not Kubo
  2. we point people at Helia and kubo-rpc-client, and these no longer share an API nor have interop tests for RPC commands

maybe we could consider a more gentle deprecation path, or at least do a stop-gap:

  • :point_right: deprecate interop with JS-IPFS RPC, but keep it in Kubo RPC
  • keep /api/v0/pubsub as a basic feature without ability to customize validator
    • apply default one from https://github.com/ipfs/kubo/pull/9684
    • document the opinionated behavior in /docs/config.md#pubsub + state that if user needs more, they should build their own binary

This way we could keep /api/v0/pubsub as deprecated, discouraging its use, but would not have to remove it nor invest time into creating yet another daemon as a drop-in replacement.

This could be the safer path. Removing interop with JS-IPFS, but keeping the commands in Kubo for now, buys us some time/options:

  • we can always fully remove it at some point in the future
  • for now it will be enough to unblock merging https://github.com/ipfs/kubo/pull/9684
  • the limited team does not need to sink time into any follow-up work here, and we create no unnecessary waves across the ecosystem

lidel avatar May 11 '23 11:05 lidel

This way we could keep /api/v0/pubsub as deprecated, discouraging its use, but would not have to remove it nor invest time into creating yet another daemon as a drop-in replacement.

Excellent reasoning. Will the merge of #9684 break pubsub communications with previous kubo versions (say 0.18.x), or does it only affect the validator?

pinnaculum avatar May 11 '23 13:05 pinnaculum

2023-05-18 conversation:

  1. Generally aligned to go with the less disruptive approach: https://github.com/ipfs/kubo/issues/9717#issuecomment-1543827199
  2. We need to check which validator "ipns over pubsub" uses, so we know what we're in for.
  3. Changelog entry to make it clear that we're breaking interoperability with js-ipfs (i.e., no longer testing for it or guaranteeing it), but js-ipfs is deprecated so that is fine.

BigLep avatar May 18 '23 15:05 BigLep

@pinnaculum :

Excellent reasoning. Will the merge of https://github.com/ipfs/kubo/pull/9684 break pubsub communications with previous kubo versions (say 0.18.x), or does it only affect the validator?

Good question. We don't believe it breaks compatibility between Kubo versions.

BigLep avatar May 18 '23 23:05 BigLep

Will this affect IPNS publishing over pubsub (the --enable-namesys-pubsub option), and if so, how?

zacharywhitley avatar May 27 '23 12:05 zacharywhitley

Here's another developer who'll mourn the loss of PubSub. It's been an amazing feature that enabled so much. Now I'll have to rebuild similar functionality for my projects that rely on it, as switching away from IPFS to go-libp2p isn't an option. I'll post here when I've got something usable, but as it's built on top of IPFS it won't be integrable into every project. 😔

emendir avatar May 29 '23 07:05 emendir

@zacharywhitley It won't thankfully, see https://github.com/ipfs/kubo/issues/9795.

Winterhuman avatar May 29 '23 13:05 Winterhuman

Let's discuss some issues of concern when deprecating pubsub (and other libp2p features in the future?). These can be interpreted as:

  • arguments against the deprecation
  • problems to solve when deprecating pubsub and other libp2p features

So deprecating IPFS' access to libp2p's pubsub feature focuses IPFS' implementation more on the filesystem, and reduces the number of libp2p capabilities that developers using IPFS have access to. This means that applications which have used IPFS' pubsub interface have to move away from IPFS and implement libp2p themselves. This in turn means that many users of those applications will end up running multiple instances of libp2p: one in the application that uses pubsub, and one in IPFS (which they may want to use for other purposes; I mean, IPFS is so cool!).

Many (I hope most!) of us developers here have a vision of building a P2P internet, and we're using libp2p to realise that. More and more applications will be built on top of libp2p's capabilities. Do we want them each to run their own libp2p instance, or would it be more resource-efficient for them to access the libp2p instance inside of IPFS, so that each computer only needs to run one libp2p instance? Another issue of concern is ease of development: using IPFS' pubsub feature in an application was fairly easy and could be done in a variety of programming languages, from shell to Python. Implementing libp2p in a project is a whole lot more difficult, and so reduces the ability of developers to build P2P applications.

Conclusion:

Before deprecating pubsub and other libp2p features in IPFS, we need to answer the following questions:

  • Do we want IPFS to just be a filesystem or THE infrastructure for 99% of P2P projects?
  • Do we really want to separate content publishing (filesystem) from direct P2P communication (PubSub, LibP2P-StreamMounting) or keep them integrated when building Web4.0?
  • Is it more efficient (in terms of computer processing power and network efficiency) if P2P applications each run their own instance of libp2p or all access IPFS’ libp2p instance?
  • If we do go ahead with deprecating pubsub and other IPFS libp2p features, we need to think of good alternatives for developers, such as:
    • make implementing libp2p in own projects of a variety of languages easier: APIs, documentation & tutorials
    • give libp2p its own RPC API? (more work than maintaining IPFS libp2p!)
  • How much of IPFS’ functionality can projects that use libp2p’s pubsub expect to rewrite?
    • e.g. peer management?
  • What implications does this deprecation have for projects that need both IPFS’ filesystem and pubsub?
    • peer management: will they have two peer IDs for every peer they have, one from IPFS and one from their own libp2p implementation?

emendir avatar May 31 '23 04:05 emendir

Thanks for the continued feedback here. Just an update from the maintainers that changes here aren't going to make it in for Kubo 0.21. We'll be targeting it for Kubo 0.22. We'll engage more during that development iteration.

BigLep avatar May 31 '23 18:05 BigLep

PubSub is a critical component of Kubo and removing it in any release would be a massive loss. PubSub should remain a core component of ipfs reference implementations.

sevenrats avatar Jun 02 '23 21:06 sevenrats

@emendir in your message you conflate IPFS and Kubo. AFAICT you assume that all applications that build on IPFS need to run a Kubo daemon and interact with IPFS through Kubo's HTTP API, hence the two libp2p instances. This is exactly what we do not want to happen anymore. :sparkles: With the push for boxo we are moving to a library-oriented story for people who want to build on our stack, not Kubo's HTTP API. So you wouldn't run Kubo next to your app anymore; you would use boxo (and/or other libs) inside your own application process. That way you could use a single libp2p instance shared by boxo, go-libp2p-pubsub, ... or even do IPFS without libp2p.

Here is a talk about boxo if you are into that: https://www.youtube.com/watch?v=uFr4EtySorY, or you can browse the repo: https://github.com/ipfs/boxo. Right now boxo is still pretty much made of the same libraries that powered Kubo before, except in one repo (with a few notable exceptions, for example the new reusable gateway API handler). We are working on improving this, and it is one of the main reasons I don't have time to fix pubsub.

If you have feedback on how you could have learnt that Kubo != IPFS, that would be nice; we tried putting it everywhere, but I guess there is some path we missed because you slipped through the cracks. :slightly_smiling_face:


The main reason I want to remove pubsub from Kubo is that right now it is broken. There is an open pull request that makes it slightly less broken (but still very broken); however, it also breaks more things in the process.

The underlying issue is that implementing a good pubsub mesh (like the Filecoin ones, for example) requires very application-specific knowledge (you need to use your own application state to implement message filtering, discarding of outdated messages, and rate limiting), which we can't do in Kubo because it is unique to each application. go-libp2p-pubsub's API is in terms of callbacks: every time the pubsub daemon receives a new message, it needs to invoke some magic piece of code the consumer wrote that tells it whether the message is relevant, whether it should be discarded, and whether the node should be rate-limited. Implementing callbacks over HTTP sucks; in Go, however, this is very easy: you just pass a function pointer.

Thus from my selfish view (and I would like to hear the other view):

I could spend my time fixing bugs that harm more critical and more used features (filesharing), making things faster, writing new libraries that are easy to use, ... Why should that time instead be spent on writing an unperformant HTTP callback API that will be hard to use, confusing, and easy to get wrong, when a type-safe, performant, easy solution already exists (calling go-libp2p-pubsub directly)? Having an everything-HTTP API was a trap: it takes way too much manpower to make it close to performant, it is not customizable enough, and it makes it very hard for newcomers to get into the weeds because the language they learn (the HTTP API) is different from the language used at the lower level in the libraries. This leads to a perverse view where everyone who wants to build or add IPFS features has to get them merged into Kubo, instead of having a diverse ecosystem of implementations.

I see ~~4~~ 3 options:

  1. Let consumers (you) write a bit of Go: you would import go-libp2p-pubsub directly, and we could also provide libraries (in boxo) and examples on how to get started. Unlike what was suggested before, you wouldn't need to re-implement all of Kubo's features like peer management, because this is / will be provided by libraries; we want a story where in a few tens of lines of code you can pick and choose all the modules you need and have a working starting point. Kubo's codebase is way bigger than yours would be because it has to handle a huge interdimensional matrix of various features and configs.
  2. We would provide the same API as go-libp2p-pubsub but over HTTP; that means callbacks, so you could write the same application decision code from outside Kubo. This is not an easy task and will take lots of time. It would require lots of work, and the API will always suck; it will be confusing (for example, if you ran pubsub without your app listening for callbacks, the pubsub service inside Kubo would stall).
  3. Leave it as is. The pubsub meshes created by Kubo are and will remain unstable. We would change pubsub's API description from experimental to something along the lines of "the pubsub API does not work and is limited; for a performant, reliable alternative please use go-libp2p-pubsub", and we would rename --enable-pubsub-experiment to something along the lines of --enable-pubsub-broken. We might also add arbitrary restrictions in order to limit its usage to PoCs, demos, and learning (again pushing power users toward solution 1), for example no more than X messages per second.
  4. Buy into the treadmill problem (for example PR #9684): that means adding more and more complex checks as newer bug reports come in. I don't think this is an acceptable solution, because we already don't have enough free time for pubsub; we really can't commit to running a treadmill race against the bugs and bug reports, as treadmill problems drain all available time and more. This would also yield a more and more complex pubsub layer with higher and higher costs and reduced utility. For example, a nuclear option would be to make Kubo's pubsub channels a Proof-Of-Work blockchain and require paying gas to send messages; it sounds like we could make it work (given a lot of time), but then you would need to sync TiBs of data and run a GPU mining rig to send pubsub messages, and I don't think anyone wants this.

Note: solutions 1 and 3 are reasonable enough in time investment. However, even if there is an overwhelming consensus around solution 2, I'll still lack the time to implement and maintain it; I guess this could maybe happen if someone steps up to help? (idk) Solution 4 is a half-serious joke; I won't even consider doing it.

Now I have two questions for everyone who gave us feedback (:heart: btw):

  • Exclusively about resolving Kubo's pubsub issue: are there more options I didn't consider?
  • If you had to choose, which options would you pick (feel free to choose and rank multiple options)?

My own votes are on 1 and 3 (leave the pubsub API in Kubo but be extremely clear that it is broken and push people to use go-libp2p-pubsub), or 1 alone (that means removing the pubsub API completely from Kubo and pushing people to go-libp2p-pubsub).

Jorropo avatar Jun 03 '23 08:06 Jorropo

Three sounds like a perfect solution. Everybody complaining is using it as-is without problems, presumably. Also, it doesn't diminish your ability to deprecate and remove it later, while still pushing people towards the correct long-term solution. Who knows, maybe if you give us another three years of broken pubsub, we will all adjust to the idea of writing a few dozen lines of Go (but probably not).

sevenrats avatar Jun 03 '23 14:06 sevenrats

If you had to choose, which options would you pick (feel free to choose and rank multiple options)?

I am building an app that uses all of IPFS, IPNS-over-pubsub, and pubsub, and I don't know Go. I looked into creating a default pubsub validator, adding it to the kubo codebase, and building kubo with my custom validator, and it only took me a few days to get working.

However, if the pubsub APIs were completely removed from kubo, and I had to handle the RPC endpoints, customizing the libp2p host, dealing with the kubo CLI options, etc. that would be overwhelming.

One thing that would have made my work even easier is a minimal example project using a default validator and peer filter. The pubsub examples in https://github.com/libp2p/go-libp2p/tree/master/examples/pubsub don't use validators or filters, so it was confusing, especially since I had never used Go before.

estebanabaroa avatar Jun 03 '23 18:06 estebanabaroa

@estebanabaroa IPNS over pubsub is not affected by this issue.

The pubsub examples in https://github.com/libp2p/go-libp2p/tree/master/examples/pubsub don't use validators or filters so it was confusing, especially since I had never used go before.

Thanks, that's important to know. I think the work item then would be to turn those into proper examples.

Jorropo avatar Jun 03 '23 18:06 Jorropo

@estebanabaroa IPNS over pubsub is not affected by this issue.

Yes, I know; what I meant is that since my app uses almost everything included in kubo right now, it would be overwhelming to have to create it from scratch using components from boxo, especially since I don't know Go. I think boxo is a great idea, though, for people who don't use all the features in kubo.

estebanabaroa avatar Jun 03 '23 18:06 estebanabaroa

First of all, thank you very much for putting so much time into your clarification. I find the library-oriented approach with Boxo a great idea and a sensible way to push forward. Can't wait to dive in deeper and check it out more thoroughly. It's probably my own fault that I didn't learn about Boxo before as I've been way too busy with all sorts of things and am still learning to get and filter through news.

emendir avatar Jun 04 '23 08:06 emendir

My vote on how to move forward is options 1 & 3 (leave the PubSub API in Kubo but be extremely clear that it is broken and push people to use go-libp2p-pubsub).

My reasons for wanting to leave the Kubo PubSub Endpoint are:

  • To allow people to update Kubo without breaking applications they've already built on its PubSub endpoint, which is important for sustaining the community of IPFS application developers who don't have the time and resources to react quickly to changes such as moving from the HTTP API to Boxo. I, for example, work in Python, using IPFS via a library built on Kubo's HTTP client. Until I or others have built a new Python library based on Boxo, many months (hopefully not years) will go by in which we don't want to pause/limit all running prototypes built with the old library.
  • To ease IPFS evangelisation by enabling PubSub demonstrations on a CLI. What I mean is: when I want to excite other people about the power and simplicity of IPFS, there are two things I do: demonstrate file sharing (via CLI or WebUI) between two computers, and demonstrate PubSub communication between two computers (via CLI). I find the simplicity and accessibility of the second demonstration a powerful tool in helping people realise that P2P is no longer a tricky dream but an easy reality. It's great to be able to demonstrate P2P communication without coding.

emendir avatar Jun 04 '23 08:06 emendir

Thank you @Jorropo for the detailed explanations.

Solution 2 is time-consuming and using callbacks is wrong.

  3. Leave it as is.

I think it's acceptable: flag the feature as "incomplete" or "broken", users are warned, and you can point to go-libp2p-pubsub. That way you halt the flow of tears of the developers already using kubo's pubsub who don't have a backup plan, but you also keep the possibility of fixing pubsub's flaws in the near future if a better solution comes along.

pinnaculum avatar Jun 04 '23 09:06 pinnaculum