Client starts looping requests when response is malformed or not sse
So I've noticed that if I make mistakes in my SSE events and Datastar can't understand them, it will start looping the original request without end. This also happens if the server responds with something other than SSE. I'm thinking this is related to the reconnect handling needed to be able to initiate SSE requests from a POST, as mentioned in https://data-star.dev/docs/streaming_backend#sse-backend-fetch-on-the-frontend
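For reference, here's a sketch of what a well-formed event looks like on the wire. The field names come from the SSE spec; Datastar's own event names and data payloads are version-specific, so this serializer is generic, not Datastar's actual code:

```javascript
// Serialize one SSE event per the EventStream wire format: optional
// "event:" and "id:" fields, one "data:" line per payload line, and a
// blank line to terminate the event. A missing blank line or a stray
// field name is enough for a parser to reject the whole event.
function formatSseEvent({ event, id, data }) {
  let out = "";
  if (event) out += `event: ${event}\n`;
  if (id) out += `id: ${id}\n`;
  for (const line of String(data).split("\n")) {
    out += `data: ${line}\n`;
  }
  return out + "\n"; // the blank line is what ends the event
}

console.log(formatSseEvent({ event: "message", data: "<div>hello</div>" }));
```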
I think this behavior is a bug, but maybe it's working as intended? It sure is annoying, though, since the requests will keep looping until I reload the client, and that means I lose the network inspection logs.
It's part of the SSE retry contract, BUT if you're seeing that, please debug the backend plugin, and let's get better docs in the form of throwing better/more specific errors.
@Superpat are you still seeing this or can we close this issue?
Keep it open, I haven't had time to investigate and I'd really like a better understanding of when Datastar retries.
Might try the latest, 0.15.3. I've updated the expression eval to have more explicit error handling.
bump @Superpat
@delaneyj I noticed the same. If the client makes the request and server returns 404 (or probably any non 2xx), the client starts looping, which feels incorrect as it breaks HTTP.
It starts looping even if the MIME type is wrong. This is wacky behavior, because you can easily DDoS yourself. Can this retry-on-error be opt-in on the client, for when the client actually needs to be listening for a stream of data from an endpoint constantly? My gut feeling is that you would want this in the minority of cases. Am I missing some big use case / using Datastar the wrong way?
@blazmrak it's a valid point. Do you have any suggestions for the message?
<pipy>Hey I notice you 404 your own site, probably didn't mean to do that!</pipy>
https://www.youtube.com/watch?v=Nn_LLLyMwAs
I suggest giving up and being quiet/logging something into the console :smile:
I thought about it a bit more and I'll just put this out there as food for thought:
I don't think SSE has many benefits, and it is being abused more than anything, as it serves just as a glorified "array of objects" serialization format. For the vast majority of cases it could have just been a synchronous HTTP request that returned JSON (e.g. `{type: string, fragment?: string, ...}`), which would complete the circle and have r/htmx malding :rofl: But it still solves the issues HTMX has with OOB swaps and client state, and the custom SSE implementation thing, since you could just use the browser's SSE components, because you would not need methods other than GET. You would still have the same core functionality with both, just `$$get` would become `$$sse` or something. Am I crazy for thinking this?
> I suggest giving up and being quiet/logging something into the console 😄
It's an invariant case: either you need to be loudly asserting or retrying. Quiet seems to me to be the worst option.
> I don't think SSE has many benefits and is being more abused than anything, as it serves just as a glorified "array of objects" serialization format
This is just not true. The point of it is that it allows streaming of the response instead of paying for a round trip on every message. For latency-sensitive apps this at least halves the response times.
> for the vast majority of cases it could have just been a sync HTTP that returned JSON (e.g. {type:string,fragment?:string,...}), which would complete the circle and have the r/htmx malding 🤣
If you think polling is better than on-demand push we will have to agree to disagree. SSE is a superset of what a normal HTTP response can do.
> But it still solves the issues HTMX has with OOB and client state, and the custom SSE implementation thing, as you could just use browser's SSE components, because you would not need methods other than GET.
You now have multiple ways of doing things, each with less functionality than what already exists.
> You would still have the same core and functionalities with both, just $$get would become $$sse or something. Am I crazy for thinking this?
IMO, yes. If you want lots of ways to solve a problem, a focus on 0-1 message support only, and only being able to stream on GET, then the best options are:
- Do HTMX + (Alpine+hyperscript+vanilla)
- Writing your own Datastar plugins with the HTMX paradigm. Datastar actually started with these plugins, but I never committed them; I found a better way to support all the use cases in a single interface. @Superpat has talked about making them, but I personally think it's a step back. SSE on the server costs no more than any other response.
> It's an invariant case, either you need to be loudly asserting or retrying. Quiet seems to me to be the worst option

Yes and no. It's a bug on the server; the worst option is to blow up the client, imo. Maybe there could be an event emitted that the developer can handle to display a popup or something, but blowing up would be similar to browsers blowing up on invalid HTML. It's just not a good user experience. Maybe exponential backoff with max retries could be implemented, to prevent spamming the server? But even this would have to be done in a way where it is seen as the request being constantly in progress until max retries are reached; only then would you show that it errored out.
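A minimal sketch of that "exponential backoff with max retries" idea. The names and numbers here are illustrative, not Datastar's actual internals: the request looks "in progress" until the retries are exhausted, and only then does the error surface to the user.

```javascript
// Retry an async request with exponential backoff; the error is only
// thrown (i.e. shown to the user) once every attempt has failed.
async function withRetries(fn, { maxRetries = 5, baseMs = 500 } = {}) {
  let lastErr;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      lastErr = err;
      // back off 500ms, 1s, 2s, ... between attempts
      await new Promise((r) => setTimeout(r, baseMs * 2 ** attempt));
    }
  }
  throw lastErr; // only now show that it errored out
}
```

A caller would wrap the request in `withRetries` and keep a spinner up until the returned promise settles either way.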
Just a disclaimer before I address the other points: the JSON thing was a bit tongue in cheek. I thought about it a bit more, and the issue is not so much with SSE as with the client behavior, I think. I hope that I didn't come off as entitled/demanding of a change, as that is the last thing I want. I know that this is not my project and that I can fork it if I don't like it :sweat_smile:
> This is just not true. The point of it is it allows streaming of response instead of paying for round-tripping every message. For latency sensitive apps this halves the response times at least.
Am I misunderstanding HTTP? Can't HTTP responses be streamed as well? Also, when the client makes a request, it usually gets just one response back, unless there is something on the server that is constantly updating in the background. But the examples all show calling `res.end()` in the same handler. Although, thinking about it a bit more, JSON is probably not the serialization format to choose for this, and anything else would just be reimplementing SSE :sweat_smile:
> If you think polling is better than on-demand push we will have to agree to disagree
I'm not talking about polling, but there are not many cases, at least not in the official examples (and in my experience, though I guess your work might differ, as you built this for a reason), where you have long-lived streams. There could be one for notifications, and maybe if you have some dashboard where you update values live, but for every action on the client, a new request has to be made to the server anyway.
> You are now having multiple ways of doing thing, each with less functionality than what already exists.
True, but the counter-argument is that the intent is clearer. SSE has the intent of staying open and receiving a constant stream of events, essentially for as long as the client is open.
> SSE is a superset of what a normal HTTP response can do.
It might be just a language thing, but SSE is a subset (it has all the rules of HTTP + a bit more) - something can be HTTP but not SSE, but if it is SSE, then it is 100% HTTP :nerd_face:
> a focus on 0-1 message support only
It wouldn't be a focus on 0-1 messages. It would be the same as it is now, it can even be the same format, but one action on the client would make one request; if the request fails, that has to be handled on the client and not retried automatically ad infinitum. I haven't checked the code, but are POST requests also retried? If they are, isn't that risky?
The retries are probably the biggest surprise here, especially if the server responds in an unexpected way. Imo, automatic retry should happen only if the connection cannot be established or has unexpectedly died, because if the request is idempotent, there is no reason to assume that the second time will be any different, and if it isn't idempotent, you probably don't want to retry it anyway.
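The distinction being proposed could be sketched like this (illustrative only, not Datastar's code): retry only when the connection itself failed, and treat a non-2xx status or a wrong content type as a hard error, since a second identical request will very likely get the same answer.

```javascript
// Decide what to do with the outcome of an SSE request attempt.
function classifyOutcome({ networkError, status, contentType }) {
  if (networkError) return "retry"; // connection never established or died mid-stream
  if (status < 200 || status >= 300) return "error"; // the server answered; believe it
  if (!String(contentType || "").startsWith("text/event-stream")) {
    return "error"; // wrong MIME type is a server bug, not a flaky network
  }
  return "stream";
}

console.log(classifyOutcome({ status: 404 })); // "error", not an endless retry loop
```

With `fetch()` this maps naturally: a network failure rejects the promise (the `retry` branch), while a 404/500 resolves normally with `res.ok === false` (the `error` branch).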
Like I said, this is just my opinion. I think you did an amazing job with this library and it brings a ton of quality of life improvements when developing from the server.
> Just a disclaimer, before I address the other points, the JSON thing was a bit tongue and cheek. I thought about it a bit more, and the issue is not that much with SSE, but with client behavior I think. I hope that I didn't come off as entitled/demanding of a change, as it is the last thing I want. I know that this is not my project and that I can fork it if I don't like it 😅
No worries, I'm happy to have the discussion. Also, this is good for posterity, for anyone interested in my current mindset. I've been living with this for quite a while at this point and am happy to defend the ideas.
> Am I misunderstanding HTTP? Can't HTTP responses be streamed as well?
So SSE is HTTP streaming. Normally HTTP streaming is used for binary content like video; SSE declares your intent to do it with a plain-text response.
> Also, When client makes the request, it usually gets just one response back, unless there is something on the server that is constantly updating in the background.
That's the key point. You are making the assumption that it gets one thing back. It could be 0 (nack/ack), 1 (success/fail), or a stream of N+ updates. The focus on the "normal" case is the key place where we disagree. If I have one way to do things, it's smaller, faster, less bug-prone. The desire to constrain it is a human issue, not a machine one.
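To make the 0/1/N point concrete: SSE is just one HTTP response whose body is written in pieces. Sketched here with a stand-in for Node's `res` object (the collector object is hypothetical; any streaming-capable server API behaves the same way):

```javascript
// A stand-in for a streaming HTTP response: each write() is a chunk
// flushed to the client over the one open response.
const res = {
  headers: {},
  chunks: [],
  writeHead(status, headers) { this.status = status; this.headers = headers; },
  write(chunk) { this.chunks.push(chunk); },
  end() { this.ended = true; },
};

res.writeHead(200, { "Content-Type": "text/event-stream" }); // declares SSE intent
res.write("data: first update\n\n");  // the client sees this immediately
res.write("data: second update\n\n"); // same response, no extra round trip
res.end(); // 0, 1, or N events before ending -- the sender's choice
```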
> But the examples all show calling res.end() in the same handle. Although thinking about it a bit more, JSON is probably not the serialization format to choose for this and anything else would be just reimplementing SSE
:point_up: Most of the basic examples are ports of HTMX examples, so you send back a simple response. But, for example, the progress update one uses a single GET, whereas the HTMX example has to poll. IMO, polling is a code smell for bad design when push-based is available, especially when it's the same cost. Look at the DBMon example and try to do that in HTMX. You may not need that kind of interaction, but I do, and if you can make it scale from simple to real-time with no logical change, I'm on board.
> It wouldn't be a focus on 0-1 message. It would be the same as it is now, it can even be the same format, but it would be 1 action on the client makes one request, if request fails, that has to be handled on the client and not retried automatically ad infinitum. I haven't checked the code, but are POST requests also retried? If they are, it's maybe risky?
It's part of the SSE contract: if you fail, you retry. It's up to your handler to decide how idempotent you are. Again, if it's a "normal" POST then you'd expect 0-1 responses. But if you are sending a command that can affect multiple systems (say, a multi-stage ETL or a distributed build system), you definitely want to see partial successes come back instead of a single pass/fail at the end.
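Partial success over SSE might look something like this. The stage names and JSON shape are made up for illustration; the point is that each stage reports back on the same open response instead of one pass/fail at the very end:

```javascript
// Emit one SSE "progress" event per completed stage of a multi-stage
// command, then a final "complete" event.
function* progressEvents(stages) {
  for (const [i, stage] of stages.entries()) {
    const payload = JSON.stringify({ stage, done: i + 1, total: stages.length });
    yield `event: progress\ndata: ${payload}\n\n`;
  }
  yield "event: complete\ndata: {}\n\n";
}

for (const e of progressEvents(["extract", "transform", "load"])) {
  process.stdout.write(e); // in a real handler this would be res.write(e)
}
```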
> Like I said, this is just my opinion. I think you did an amazing job with this library and it brings a ton of quality of life improvements when developing from the server.
Happy to pay it forward. I'm still :100: open to a stop-the-world assert if you 404, but that can happen in a real-time system, especially with hot reloading. Currently I'm less concerned with the "initial vibes" of the SSE and more with the robustness it's giving me in solving large-scale real-time problems. If you are interested I'd be happy to talk more about something like `$$hxGet` / `$$hxPost` as plugins. I can see the value if you want to stay in that paradigm; I just think it's a step back after living with this for over a year.
For what it's worth, I use the SSE stuff extensively on a contract: not only for typical data streaming of a long-running response, but also to give precise status updates at key points in a request-processing pipeline.
These status updates also serve as checkpoints so that the server can just restart from the last checkpoint (when applicable) if it receives a retry.
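The checkpoint/resume pattern can lean on the SSE `id` field: the server stamps each status event with an id, the browser resends the last one as a `Last-Event-ID` header on reconnect, and the server skips the stages already completed. A minimal sketch with hypothetical checkpoint names:

```javascript
// Pipeline checkpoints in order; each completed stage is sent as an SSE
// event whose "id:" field is the checkpoint name.
const CHECKPOINTS = ["validated", "processed", "stored"];

// On a retry, work out which stages still need to run based on the
// Last-Event-ID the client sent back.
function stagesToRun(lastEventId) {
  const idx = CHECKPOINTS.indexOf(lastEventId);
  return CHECKPOINTS.slice(idx + 1); // unknown/absent id => start from the top
}

console.log(stagesToRun("validated")); // only the remaining checkpoints run
```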
I think it would be better DX for server errors to not trigger retries, but we'd have to make sure we're not getting into situations where it should have retried and we're being too strict.