EOSE extensions
Concept NACK -- this adds too much complexity that I don't think is warranted, and adds a path for ever-increasing complexity with the x_ prefixes and protocol fragmentation.
Even if we didn't have another way of doing pagination (and we do), I think this approach is too complex.
I think it's better to just include a new message response instead of lumping it into EOSE:

["CURSOR", <id>]

This MUST be sent before EOSE; clients can resume fetching results by sending another request with the cursor ID: ["REQ", {"cursor": <id>}]

Something like that. I don't think you need those other extensions, only a way to continue reading the cursor.
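A hypothetical exchange under this sketch, with subscription IDs elided the same way they are in the messages above:

```
client: ["REQ", {"kinds": [1], "limit": 500}]
relay:  ... up to 500 EVENT messages ...
relay:  ["CURSOR", <id>]
relay:  ["EOSE"]
client: ["REQ", {"cursor": <id>}]
relay:  ... the next batch, then another CURSOR/EOSE pair ...
```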
> I think it's better to just include a new message response instead of lumping it into EOSE: ["CURSOR", <id>] MUST be sent before EOSE; clients can resume fetching results by sending another request with the cursor ID ["REQ", {"cursor": <id>}]
Now clients have to deal with extra state to track.
> Concept NACK -- this adds too much complexity that I don't think is warranted, and adds a path for ever-increasing complexity with the x_ prefixes and protocol fragmentation.
Anything could get the protocol fragmented. Tag names, for example.

Okay sure, we won't have x_ prefixes then. So what, did the problem disappear? No! People will just now have incompatible shit in the main namespace.
> Even if we didn't have another way of doing pagination (and we do), I think this approach is too complex.
That method of doing pagination is broken. Relays aren't just single-node deployments with an entire view of their database, unlike what everyone assumes they are, and that method of pagination breaks down when you try doing stuff like Bostr. See latest_created_at for a fix.
This is a bad idea. Relays are not databases, but you can already paginate with until. No relay that exists today implements persistent cursors, and they probably shouldn't. Keep some weird state in between queries? That's unnecessary cumbersomeness.

But yes, if you really want cursors, then ["CURSOR"] messages make more sense.
And what is the point of x_-prefixed extensions? Has that worked well for HTTP? "Let's standardize the unstandardized!" makes no sense.
Yeah, I think it's unnecessary. One or two events sent twice is no big deal; just shift the "until" timestamp to the oldest (last) returned event.
Cursors and pagination are state, and if you try to set arbitrary markers other than timestamps, you have a problem of race conditions and total event ordering. This is definitely functionality that belongs in a cache, not in a relay.
> but you can already paginate with until.
That breaks down when you use something like Bostr, or when you try to do anything that isn't a single-DB-backed, single-server relay. There's a reason I also proposed next_created_at.
> cursors and pagination are state

> Keep some weird state in between queries
Here's a very easy way to implement cursors that does not store any state, in a relay backed by an SQL database (as an example, but this translates easily to your own DB):

- Your query is: SELECT * FROM events WHERE <user conditions> ORDER BY created_at DESC, id DESC LIMIT <limit>
- When you are done, you send the client the following as the cursor: a hash of the filters (this is actually not needed), and the created_at and id of the last event sent.
- When the client asks for more events with the cursor, you check the filter hash (just in case the client fucked up) and run the query again, but now like this: SELECT * FROM events WHERE <user conditions> AND (created_at < $created_at_in_cursor OR (created_at = $created_at_in_cursor AND id < $id_in_cursor)) ORDER BY created_at DESC, id DESC LIMIT <limit>
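As a concrete sketch of the two queries above, here is the same keyset technique against SQLite (the events schema is cut down to just id and created_at for illustration; this is not taken from any actual relay implementation):

```python
import sqlite3

def page(conn, limit, cursor=None):
    """Fetch one page of events, newest first, keyset-paginated on (created_at, id).

    `cursor` is the (created_at, id) of the last event previously sent,
    or None for the first page. Returns (rows, next_cursor).
    """
    if cursor is None:
        rows = conn.execute(
            "SELECT id, created_at FROM events "
            "ORDER BY created_at DESC, id DESC LIMIT ?",
            (limit,)).fetchall()
    else:
        c_created, c_id = cursor
        # Resume strictly after the cursor row, including ties on created_at.
        rows = conn.execute(
            "SELECT id, created_at FROM events "
            "WHERE created_at < ? OR (created_at = ? AND id < ?) "
            "ORDER BY created_at DESC, id DESC LIMIT ?",
            (c_created, c_created, c_id, limit)).fetchall()
    next_cursor = (rows[-1][1], rows[-1][0]) if rows else None
    return rows, next_cursor
```

The cursor is just the (created_at, id) pair of the last row sent, so the relay holds nothing at all between requests.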
I am against this proposed solution. I am against any stateful queries on relays. The Chorus database is not SQL, and I can't just whip up some SQL to do it. Pagination isn't ideal right now, but it is workable.
Sending the cursor is better than cutting clients off when relays hit their internal LIMIT config, as happens today. Many relays send an EOSE at 500 events even though they have more. It's up to clients to figure out whether an EOSE is real or fake. That is bad. The current way to figure out if an EOSE is real is to repeat the filter with a separate until and see if 0 events return. If not, keep changing until at every EOSE.
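That client-side workaround can be sketched as a loop. Here, fetch is a hypothetical helper that sends one REQ with the given filter and returns the events the relay sent before EOSE; the sketch also assumes no more than limit events share a single created_at second (otherwise events at the boundary can be skipped, which is the flaw discussed above):

```python
def paginate_until(fetch, base_filter, limit=500):
    """Page backwards through a relay's events by shifting `until`."""
    seen, events = set(), []
    until = None
    while True:
        f = dict(base_filter, limit=limit)
        if until is not None:
            f["until"] = until
        batch = fetch(f)
        for e in batch:  # dedup: boundary seconds get re-queried below
            if e["id"] not in seen:
                seen.add(e["id"])
                events.append(e)
        if len(batch) < limit:
            break  # the relay ran out of matching events: a "real" EOSE
        oldest = min(e["created_at"] for e in batch)
        # Re-query the oldest second so ties at the boundary are not lost;
        # if we are stuck on a fully-seen second, step past it.
        until = oldest if oldest != until else until - 1
    return events
```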
I am ok with extending EOSE but I do not understand the benefits of the auth extension as proposed.
> I am against any stateful queries on relays.

There is no state that you need to store. How many times do I have to say this? You can very well do this on a K/V database also.
>> I am against any stateful queries on relays.
>
> There is no state that you need to store. How many times do I have to say this? You can very well do this on a K/V database also.
It's state. The query is state, and the only way to implement what you describe requires storing at least all of the IDs of the events that the query matched.

More useful would be to propose exactly that: a new variant like COUNT, but instead of a count it's just an ID list, and then the client is free to paginate it as it likes.

I'm not being difficult; query results are a snapshot of the database at the time of the query. The only way this could be made possible is how I say -- a special type of REQ that only gets event IDs. And even then, it requires a new index to be added to most databases to find it efficiently.
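Under that ID-list variant, pagination moves entirely client-side. A minimal sketch of the client half (the ID-list response itself is hypothetical and not specified anywhere):

```python
def chunk_ids(ids, page_size):
    """Split a relay-supplied ID list into client-chosen pages.

    Each page can then be fetched with an ordinary ids filter,
    e.g. ["REQ", {"ids": [...]}], on whatever schedule the client likes.
    """
    return [ids[i:i + page_size] for i in range(0, len(ids), page_size)]
```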
This will be replaced with a general NIP for multiplexing over one connection and the edge cases that come with it.