Fix wrong end_time
Hopefully closes #935
Also, attempt to clean up/document event types, and event type validation in the API.
For some reason I'm not allowed to reply to @elfjes' comments on lines 507 to 510 in incident/views:
```python
if event_type in Event.CLOSING_TYPES and not incident.open:
    self._abort_due_to_type_validation_error("The incident is already closed.")
if event_type == Event.Type.REOPEN and incident.open:
    self._abort_due_to_type_validation_error("The incident is already open.")
```
I presume these lines are also here for completeness' sake, just like the stateless checks in repair_end_time.
A source system is not allowed to reopen but that is handled earlier in the process, in validate_event_type_for_user.
If a client tries to reopen something that is correctly reopened already, or close something that is correctly closed already, then the client has an outdated view of the world. The server cannot fix that, or repair anything in itself to fix that, it can only report.
All we can do is either:
- fail quietly and ship back either
  - nothing, or
  - the original event that the client doesn't know about
- what we are currently doing: fail loudly and ship back a ValidationError
- fail loudly with something other than a ValidationError, though I haven't found any good status codes for that
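For reference, the stateful checks and the current fail-loudly behavior can be sketched in plain Python. This is a simplified stand-in, not the actual DRF view: the `ValidationError` class, the placeholder type constants, and the `{"type": message}` error shape are assumptions standing in for the real `rest_framework` and `Event` definitions.

```python
class ValidationError(Exception):
    """Stand-in for rest_framework.serializers.ValidationError."""
    def __init__(self, detail):
        self.detail = detail
        super().__init__(detail)

# Assumptions: placeholder values standing in for Event.CLOSING_TYPES
# and Event.Type.REOPEN from the real models.
CLOSING_TYPES = {"CLO"}
REOPEN = "REO"

def check_event_type_against_state(event_type, incident_open):
    # Mirrors the checks quoted above: if the client's view of the
    # incident state is outdated, fail loudly with a ValidationError
    # keyed on the event type field (the current behavior).
    if event_type in CLOSING_TYPES and not incident_open:
        raise ValidationError({"type": "The incident is already closed."})
    if event_type == REOPEN and incident_open:
        raise ValidationError({"type": "The incident is already open."})
```

When the client's view matches the server's state, the checks pass silently; otherwise the error detail is shipped back for the client to report.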
> For some reason I'm not allowed to reply to @elfjes' comments on lines 507 to 510 in incident/views. I presume these lines are also here for completeness' sake, just like the stateful checks in repair_end_time.
>
> A source system is not allowed to reopen but that is handled earlier in the process, in validate_event_type_for_user. If a client tries to reopen something that is correctly reopened already, or close something that is correctly closed already, then the client has an outdated view of the world. The server cannot fix that, or repair anything in itself to fix that, it can only report.
>
> All we can do is either:
> - fail quietly and ship back either
>   - nothing, or
>   - the original event that the client doesn't know about
> - what we are currently doing: fail loudly and ship back a ValidationError
Yes, in case there was nothing to repair and it's all the client's fault, a ValidationError would be OK, even though I think failing quietly is a nicer experience. Not sure what the value would be of sending back the original event; imho the current/actual state of the incident (i.e. the incident itself) would be more useful.
My comment was on 510 only (I keep forgetting that GH adds the other lines and that I thus need to be more explicit in my comments): the text "Found end_time mismatch was in error, see logs" is not grammatically correct.
> All we can do is either:
> - fail quietly and ship back either
>   - nothing, or
>   - the original event that the client doesn't know about
> - what we are currently doing: fail loudly and ship back a ValidationError
>
> Yes, in case there was nothing to repair and it's all the client's fault, a ValidationError would be OK, even though I think failing quietly is a nicer experience. Not sure what the value would be of sending back the original event; imho the current/actual state of the incident (i.e. the incident itself) would be more useful.
The endpoint returns a serialized copy of the newly created event if everything is alright; having it return a serialized incident after a repair makes no sense. After the repair the client needs to re-fetch the incident, yes, so it needs to know that the repair happened.
We could change the shape of the ValidationError: it now sends {"type": message}; we could make it send {"repair": message}.
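To make that distinction concrete, here is a minimal sketch of how a client could tell the proposed repair error apart from an ordinary type validation error. The `{"repair": message}` key is only the proposal from this thread, not existing behavior, and `repair_happened` is a hypothetical helper name.

```python
def repair_happened(error_detail):
    # Sketch of client-side handling under the proposed error shape:
    #   {"type": message}   -> the request itself was invalid
    #   {"repair": message} -> the server repaired state; the client
    #                          should re-fetch the incident
    return "repair" in error_detail

# A client receiving the error detail could then decide whether to re-fetch:
assert repair_happened({"repair": "end_time mismatch repaired, see logs"})
assert not repair_happened({"type": "The incident is already closed."})
```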
> My comment was on 510 only (keep forgetting that GH adds the other lines and that I thus need to be more explicit in my comments): the text "Found end_time mismatch was in error, see logs" is not grammatically correct
Quality Gate passed

Issues
- 0 New issues
- 0 Accepted issues

Measures
- 0 Security Hotspots
- 0.0% Coverage on New Code
- 0.0% Duplication on New Code
Any chance of this getting some more attention? We have accrued more incidents that we cannot close...