Proposal: Reliable Mutability Primitive
The original draft was edited at https://hackmd.io/@gozala/ipns-pitch
@carsonfarmer provided some valuable feedback, which I'll migrate here so it is visible to others
cc @hugomrdias @aschmahmann @lidel
I'd like a review approval from @aschmahmann on this one
I think it is worth pointing out that this proposal was mostly about:
- Documenting current state of mutability in IPFS
- Making a case for addressing this problem
During iterations it became clear that different strategies (with various tradeoffs) could be used to address it. I have proposed #66 as one such strategy and was hoping that other proposals would surface as well. I also attempted to further document the current limitations of IPNS across environments at https://hackmd.io/@gozala/state-of-ipns, and I challenge the other previously discussed strategies to improve IPNS across the board.
@Gozala should we close this PR in favor of #66 then?
Not sure. I feel like the current proposal structure does not support decoupling a problem statement, which clarifies why we should address the problem, from a concrete plan for how to do it. In many cases, including this one, it would make sense to have a single problem statement (like this document) and multiple proposals to address it (which could link to it instead of reiterating).
Currently the "Brief plan of attack" attempts to capture what @aschmahmann had been saying on various channels; I'd defer this call to him.
I feel like the current proposal structure does not support decoupling a problem statement, which clarifies why we should address the problem, from a concrete plan for how to do it. In many cases, including this one, it would make sense to have a single problem statement (like this document) and multiple proposals to address it (which could link to it instead of reiterating).
💯
IMO going down the path of this proposal better prepares us for a decentralized solution while also having "exit points" towards #66 if needed.
Briefly my understanding of this proposal is:
- Implement third party IPNS republishing
- go-ipfs enabling PubSub and IPNS over PubSub by default
- js-ipfs fully implementing IPNS over PubSub (i.e. spec which includes the fetch protocol) and using it by default
Briefly my understanding of #66 is:
- Define fast lane name resolution specification.
- Define name keeper service specification.
- Define name routing service specification.
- Implement name routing service in go-ipfs
- Implement name keeper service in go-ipfs
- Implement fast lane name resolution across web, go, node ipfs.
- Deploy name routing service to PL operated bootstrap nodes
But what if instead these were almost the same:
- Define fast lane name resolution specification -> ✅ fetch protocol
- Define name keeper service specification -> third party IPNS republishing + "pinning API" to ask a third party to do it for you
- Define name routing service specification -> ✅ DHT provider records
- Implement name routing service in go-ipfs -> ✅ DHT provider records
- Implement name keeper service in go-ipfs -> third party IPNS republishing + "pinning API" to ask a third party to do it for you
- Implement fast lane name resolution across web, go, node ipfs -> IPNS over PubSub (Go ✅, JS ❌)
- Deploy name routing service to PL operated bootstrap nodes -> ✅ delegated routing
There's a reasonable point made in #66 that maybe IPNS records should have more information in them like routing hints (e.g. where is a good node to start my query), and while we can do this with pretty low effort because of how the DHT processes IPNS records I also don't think it's really going to be necessary once we've got IPNS over PubSub actually working.
There is a risk/issue with the IPNS over PubSub approach wherein PubSub falls over if you try to subscribe to too many topics. However, if this becomes an issue we can easily pivot towards #66 by making a new router type that only uses the Fetch protocol instead of also using PubSub. The tradeoff is that we no longer get updates pushed to us and have to poll for them, but it's an easy pivot to make if we need to.
Is this something we can grant to an ecosystem team? Third-party IPNS publishing sounds like something that could be executed through an RFP/devgrant...
Is this something we can grant to an ecosystem team? third party IPNS publishing sounds like something that could be executed through an RFP/devgrant...
Maybe? The toughest part of the job is doing the plumbing in go-ipfs which shouldn't be too hard to do but is also not intuitive. I'd be ok giving this a shot as long as we do regular check-ins to check for alignment.
But what if instead these were almost the same:
I think it is worth considering:
- Complexity of the approach proposed in #66 versus the one below. Broadly speaking, #66 attempts to avoid network querying by caching name keeper addresses forever at:
  - the local level
  - the name routing level

  It also removes the need to disturb the network when publishing updates by making publishing a point-to-point operation. Furthermore, by embedding the name keeper address in the DNSLink, all human-readable name resolutions could effectively be point-to-point operations. It is less decentralized, but that is a conscious decision to remove all of the overhead on the happy path. It does not replace the need for a fully distributed solution, which is still needed as a slower fallback.
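To make the DNSLink point concrete: today a `_dnslink` TXT record carries only the content path, so the idea would be to also carry a name keeper hint. The `nskeeper` key and addresses below are purely hypothetical syntax for illustration, not part of any spec:

```
; Today: the _dnslink TXT record carries only the path
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipns/k51qzi5uqu5d..."

; Hypothetical: an additional hint naming the authoritative name keeper,
; letting resolution go point to point instead of querying the network
_dnslink.example.com.  300  IN  TXT  "nskeeper=/dns4/keeper.example.com/tcp/443"
```

A resolver that already has to do a DNS lookup for the name would get the keeper address for free in the same response.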
- Dependencies. #66 intentionally does not specify the name routing service API, the name keeper API, or how things get replicated. That is because I want to be able to make progress on IPNS without blocking on pubsub, the DHT in JS, or other things.
* Define fast lane name resolution specification -> ✅ fetch protocol
Not sure what you're referring to here. Assuming it's name keeper resolution, I think the fetch protocol is probably a good way to go about it, although the protocol seems generic enough that adding some semantic meaning might be useful.
* Define name keeper service specification -> third party IPNS republishing + "pinning API" to ask a third party to do it for you
#66 intentionally makes publishing an operation that does not require disturbing the rest of the network. I think it is a good compromise, but maybe it is not? Worth comparing the tradeoffs at least.
* Define name routing service specification -> ✅ DHT provider records
The intention in #66 was to decouple name routing from the DHT, pubsub, etc. If a node is aware of some name keeper(s), it can provide routing. It does not even need to be an IPFS node.
* Implement name routing service in go-ipfs -> ✅ DHT provider records
👍
* Implement name keeper service in go-ipfs -> third party IPNS republishing + "pinning API" to ask a third party to do it for you
In #66 the intention was that anything, even something as simple as a cloud lambda, could play that role in the system. All it needs is a key-value store, as opposed to a whole IPFS node.
* Implement fast lane name resolution across web, go, node ipfs -> IPNS over PubSub (Go ✅, JS ❌)
I really want us to arrive at a place where you can build distributed applications on the web without having to operate IPFS nodes on the server, because that is where many choose AWS as a cheaper and simpler alternative. #66 was geared towards enabling that, so it might be worth considering.
It is true, however, that this creates a system in which name resolution depends on an authoritative node; the remedy is that the name owner can freely swap that authority, which I think provides a good compromise.
* Deploy name routing service to PL operated bootstrap nodes -> ✅ delegated routing
The load on routing nodes in #66 is designed to be a lot smaller than I imagine it would be with delegated routing.
Assigning to grants since it won't get worked on in the short term with the current program structure. @mikeal is going to evaluate grant possibility.