spark-operator
Change metav1.Time type to pointer in spark application struct of types.go
When submitting a SparkApplication or a ScheduledSparkApplication on Kubernetes v1.13.0, the status update fails with a validation error on the metav1.Time fields.
SparkApplication
failed to update SparkApplication spark-job/spark-pi:
...
"lastSubmissionAttemptTime":"2020-12-10T02:23:23Z", "terminationTime":interface {}(nil),
...
status.terminationTime in body must be of type string: "null"
ScheduledSparkApplication
failed to sync ScheduledSparkApplication "spark-job/spark-pi-scheduled
...
"lastRun":interface {}(nil), "nextRun":"2020-12-10T02:28:44Z"
...
status.lastRun in body must be of type string: "null"
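For what it's worth, the nulls come from how metav1.Time serializes: its MarshalJSON emits null for the zero value, and omitempty never skips a non-pointer struct field, so an unset time always reaches the API server as null. A minimal sketch of that behavior (the struct below is made up just for illustration, not the operator's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// miniStatus is a made-up miniature of a status struct, only to show the
// serialization behavior.
type miniStatus struct {
	// Non-pointer field: omitempty never skips a struct value, and
	// metav1.Time's MarshalJSON renders the zero value as null.
	TerminationTime metav1.Time `json:"terminationTime,omitempty"`
	// Pointer field: a nil pointer really is dropped by omitempty.
	LastRun *metav1.Time `json:"lastRun,omitempty"`
}

func main() {
	out, err := json.Marshal(miniStatus{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints: {"terminationTime":null}
}
```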
This does not affect the submission itself, but it means the SparkApplication status is never updated. On Kubernetes v1.19.2 it works, so this seems to have been addressed in newer Kubernetes versions; I found the related issue https://github.com/kubernetes/kubernetes/issues/86811.
The correct local fix is to make the field a pointer, so it can actually be omitted during serialization. However, that adds significant risk of NPE for all callers that currently do not check whether the field is nil. If we did this, I think we would need to add nil-safe implementations of all the current methods on metav1.Time and time.Time that tolerated a nil receiver and behaved like the zero-value currently does. I also explored a custom serialization of ObjectMeta that would strip a null serialization, but making ObjectMeta implement MarshalJSON also made all types that inline ObjectMeta (which is all types, currently) implement MarshalJSON, which prevented marshaling the rest of their data.
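To illustrate the kind of nil-safe wrappers described there, a rough sketch (the package and helper names are made up for illustration, not from the codebase):

```go
// Hypothetical nil-safe helpers for callers that previously relied on the
// zero value of a non-pointer metav1.Time.
package nilsafe

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// IsUnset treats a nil pointer the same way callers currently treat the zero
// value of a non-pointer metav1.Time.
func IsUnset(t *metav1.Time) bool {
	return t == nil || t.IsZero()
}

// ValueOrZero converts a possibly-nil pointer back to the value form, so code
// that still expects a metav1.Time keeps its current zero-value behavior once
// the struct fields become pointers.
func ValueOrZero(t *metav1.Time) metav1.Time {
	if t == nil {
		return metav1.Time{}
	}
	return *t
}
```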
So I changed the metav1.Time fields to pointer types.
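Roughly what that change could look like for the time fields shown in the logs above (a sketch only, with the other struct fields omitted and omitempty assumed on the JSON tags):

```go
// Sketch of the pointer change in types.go; only the time fields from the
// logs above are shown, and the package name here is an assumption.
package v1beta2

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type SparkApplicationStatus struct {
	// With a pointer type, an unset time is a nil pointer, which omitempty
	// drops instead of serializing as null.
	LastSubmissionAttemptTime *metav1.Time `json:"lastSubmissionAttemptTime,omitempty"`
	TerminationTime           *metav1.Time `json:"terminationTime,omitempty"`
	// ... other fields unchanged
}

type ScheduledSparkApplicationStatus struct {
	LastRun *metav1.Time `json:"lastRun,omitempty"`
	NextRun *metav1.Time `json:"nextRun,omitempty"`
	// ... other fields unchanged
}
```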
@nicholas-fwang Can this be closed instead? It never got any attention in 4 years, and Kubernetes versions 1.13-1.19 have been unsupported for years now: https://endoflife.date/kubernetes