Keep others’ IPNS records alive
ipfs name keep-alive add <friend’s node id>
Periodically get and store the IPNS record and keep serving the latest seen version to the network until the record’s EOL.
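Roughly, what I have in mind is a loop like the sketch below (Go, illustrative only: `resolve` and `provide` are hypothetical hooks standing in for the DHT lookup and DHT put, not existing kubo APIs):

```go
package keepalive

import (
	"context"
	"time"
)

// ipnsRecord is a simplified stand-in for a real IPNS record; only the
// fields relevant to keep-alive are modelled here.
type ipnsRecord struct {
	Value    []byte    // the path the name currently points to
	Sequence uint64    // monotonically increasing version counter
	EOL      time.Time // end-of-life (the record's "validity")
}

// keepAlive periodically fetches the latest record for name and keeps
// re-providing the newest version seen, until that record expires.
func keepAlive(ctx context.Context, name string,
	resolve func(context.Context, string) (ipnsRecord, error),
	provide func(context.Context, string, ipnsRecord) error,
) error {
	var latest ipnsRecord
	ticker := time.NewTicker(12 * time.Hour) // refresh well inside the DHT's ~24h horizon
	defer ticker.Stop()

	for {
		if rec, err := resolve(ctx, name); err == nil && rec.Sequence >= latest.Sequence {
			latest = rec // remember the newest version we have seen
		}
		if !latest.EOL.IsZero() {
			if time.Now().After(latest.EOL) {
				return nil // record reached its EOL; stop serving it
			}
			if err := provide(ctx, name, latest); err != nil {
				return err
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```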
You'll be able to pin IPNS records like anything else once we have IPRS
Awesome
Waiting for this feature 👍
But doesn't it make more sense if they are automatically pinned by nodes? Or would that be resource-heavy?
Consider that if pinned, those records would have to be updated constantly via signatures, etc.
The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key. We expire them because the DHT isn't persistent and will eventually forget these records anyways. When it does, an attacker would be able to replay an old IPNS record from any point in time.
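Concretely, the freshness check a resolving node applies today is roughly the following; the types here are illustrative, not the actual go-ipns ones:

```go
package keepalive

import (
	"errors"
	"time"
)

// signedRecord is an illustrative stand-in for an IPNS record: the EOL
// is part of the signed payload, so a third party cannot extend it
// without the author's private key.
type signedRecord struct {
	EOL       time.Time // signed end-of-life ("validity")
	Signature []byte    // covers the EOL, so it cannot be altered after signing
}

var errExpired = errors.New("ipns record expired; refusing to serve it")

// checkFreshness rejects a record once its signed EOL has passed, which
// is what bounds how far back an old record can be replayed once the
// DHT has forgotten the newer ones.
func checkFreshness(rec signedRecord, now time.Time) error {
	if now.After(rec.EOL) {
		return errExpired
	}
	return nil
}
```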
When it does, an attacker would be able to replay an old IPNS record from any point in time.
Is it really considered more dangerous than the possibility of effectively losing all the material published under a certain IPNS key if one (just one!) publisher node holding the private key disappears? Doesn't this publisher node look like a central point of failure? Are outdated but valid records really worse than no records at all?
I think that the ability to replay is not a critical security issue, at least on the condition that the user is explicitly notified that the obtained result could be outdated. After all, «it will always return valid records (even if a bit stale)», as mentioned in the 0.4.18 changelog.
So what do you think about a --show-publish-time flag on the ipfs name resolve command? Do the IPNS records themselves contain this data?
@lockedshadow I've been thinking about (and discussing) this and, well, you're right. Record authors should be able to specify a timeout, but there's no reason to remove expired records from the network. Whether or not to accept an expired record would be up to the client.
@Stebalien What is the best way to go about introducing this change to the protocol?
@T0admomo since this is mostly a client and UX change rather than a spec one, I would propose what the UX should be, along with the various changes that would need to happen in order to enable it.
Some of the work here is in ironing out the UX, and some is in the implementation. Discussing your proposed plan in advance makes it easier to ensure that your work is likely to be reviewed and accepted.
Some related issues: #7572 #4435 #3117
The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key.
According to the IPNS spec, the signature contains the concatenated value, validity, and validityType fields.
That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.
Moreover, since validity is controlled by the key holder when they sign the record, they have the flexibility to pick any validity, at the potential cost of users getting an expired/stale record (in the case of a new record published within the validity period that isn't propagated to all nodes holding the previous one). This is arguably better than getting no resolution, as pointed out by @lockedshadow.
Am I understanding this correctly?
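For concreteness, here's a sketch of how I read the V1 signing step; the helper names are mine rather than go-ipns's exported API, and the Ed25519 key is just for illustration (kubo keys can be other types too):

```go
package keepalive

import "crypto/ed25519"

// sigPayloadV1 mirrors how the classic IPNS signature payload is built:
// the value, validity, and (stringified) validityType fields are simply
// concatenated, so all three are fixed by the author's signature.
func sigPayloadV1(value, validity, validityType []byte) []byte {
	payload := make([]byte, 0, len(value)+len(validity)+len(validityType))
	payload = append(payload, value...)
	payload = append(payload, validity...)
	payload = append(payload, validityType...)
	return payload
}

// signRecordV1 signs that payload with the author's key; since the
// validity is inside the signed bytes, no one else can change it.
func signRecordV1(priv ed25519.PrivateKey, value, validity, validityType []byte) []byte {
	return ed25519.Sign(priv, sigPayloadV1(value, validity, validityType))
}
```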
That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.
I think this could be an attack vector as a malicious node could publish a lot of signed records with near infinite validity. They will accumulate on the DHT and clog it sooner or later, and never be flushed out.
So other clients need to reject very old records, even if the original publisher wanted them to have a very long validity.
(An attacker could also spawn many nodes and publish records from them, with the same effect)
I think this could be an attack vector as a malicious node could publish a lot of signed records with infinite validity. They will accumulate on the DHT and clog it sooner or later, and never be flushed out.
I recently read that DHT nodes will drop stored values after ~24 hours, no matter what Lifetime and TTL you set. So it's not really possible to clog the DHT or use this as an attack vector.
As far as I understand, clients don't reject old records as they have no way of knowing a record's age, they just drop them after 24 hours, when a newer sequence comes or once they expire (the earliest of the three).
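So on any given node the record effectively survives only until the earliest of those three events. A rough sketch of that retention rule (the names and the 24-hour constant are illustrative, not kubo's actual code):

```go
package keepalive

import "time"

// maxDHTRecordAge models the ~24h cap DHT nodes apply regardless of the
// Lifetime/TTL the publisher chose.
const maxDHTRecordAge = 24 * time.Hour

// shouldDrop reports whether a stored record should be discarded: when
// it has sat in the store for ~24 hours, when a record with a higher
// sequence number has been seen, or when its signed EOL has passed,
// whichever comes first.
func shouldDrop(storedAt, eol time.Time, storedSeq, newestSeenSeq uint64, now time.Time) bool {
	switch {
	case now.Sub(storedAt) >= maxDHTRecordAge:
		return true // node-local age cap, independent of the record itself
	case newestSeenSeq > storedSeq:
		return true // superseded by a newer version of the name
	case now.After(eol):
		return true // the record's own validity has run out
	default:
		return false
	}
}
```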
(An attacker could also spawn many nodes and publish records from them, with the same effect)
I believe that this is what Fierro allows you to do, though without any malicious intent.
As far as I understand, clients don't reject old records as they have no way of knowing a record's age, they just drop them after 24 hours, when a newer sequence comes or once they expire (the earliest of the three).
Yes, you're right. Dropping records is not based on age; I oversimplified. The point is that they are not in the DHT after some time if they are not republished, so they can't accumulate.
I believe that this is what Fierro allows you to do, though without any malicious intent.
Yes, but since records are dropped by clients after about 24 hours, they still can't accumulate.