It should be out-of-the-way but findable. If all is going well, you won't need these logs, but if things are failing in weird ways, you'll need them.
FWIW, here is a sample of a status condition for a pod that is currently unable to schedule:

```yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-08T02:24:05Z"
    message: '0/3 nodes are...
```
Sometimes pods fail to schedule, and k8s has no way to tell you whether they ever will or not. We should fail after a certain amount of time of being...
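A minimal sketch of what that timeout could look like, assuming the `kubernetes` Python client; `PENDING_TIMEOUT_SECONDS` and `fail_if_unschedulable` are illustrative names, not existing Sematic code:

```python
import time

from kubernetes import client, config

# Illustrative threshold; the real value would need tuning.
PENDING_TIMEOUT_SECONDS = 600


def fail_if_unschedulable(namespace: str, pod_name: str) -> None:
    """Raise if the pod is still Pending after the timeout (hypothetical helper)."""
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    deadline = time.monotonic() + PENDING_TIMEOUT_SECONDS
    while time.monotonic() < deadline:
        pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
        if pod.status.phase != "Pending":
            return  # the pod got scheduled (or reached a terminal state)
        time.sleep(10)
    raise TimeoutError(
        f"Pod {pod_name} still Pending after {PENDING_TIMEOUT_SECONDS}s; "
        "treating it as unschedulable and failing the run."
    )
```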
Sometimes the resolver itself fails. In this circumstance, we should still show the error, but most likely should present it in a different way. Here's an example stack trace from...
Currently we upload the entirety of a Sematic function's logs at once. We buffer them on disk. For long-running jobs, this may not be viable; the container may eat up...
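One possible direction, sketched under assumptions: flush logs in fixed-size chunks as they accumulate instead of buffering the whole file. `upload_chunk` and `CHUNK_BYTES` are hypothetical names, not existing Sematic APIs.

```python
import io

# Illustrative chunk size; the real value would be tuned.
CHUNK_BYTES = 1 << 20  # 1 MiB


def upload_chunk(run_id: str, index: int, data: bytes) -> None:
    """Hypothetical storage write, e.g. to an object store under a per-run prefix."""
    ...


def stream_logs(run_id: str, log_stream: io.BufferedReader) -> None:
    """Upload logs chunk-by-chunk so disk usage stays bounded for long jobs."""
    index = 0
    while True:
        chunk = log_stream.read(CHUNK_BYTES)
        if not chunk:
            break
        upload_chunk(run_id, index, chunk)
        index += 1
```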
`get_user_by_api_key` can fail if there are intermittent DB connection errors. We should retry if it fails.
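A minimal retry sketch, assuming intermittent connection failures surface as SQLAlchemy's `OperationalError`; the wrapper name and retry parameters are illustrative:

```python
import time
from typing import Callable, TypeVar

from sqlalchemy.exc import OperationalError

T = TypeVar("T")


def with_db_retry(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.5) -> T:
    """Retry a DB call with exponential backoff on connection errors."""
    for attempt in range(attempts):
        try:
            return fn()
        except OperationalError:
            if attempt == attempts - 1:
                raise  # out of retries; propagate the original error
            time.sleep(base_delay * 2**attempt)
    raise AssertionError("unreachable")


# Usage: user = with_db_retry(lambda: get_user_by_api_key(api_key))
```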
To be used within a func like this:

```python
import sematic


@sematic.func
def my_func() -> None:
    print("Root id is: {}".format(sematic.context().root_id))
    return None
```
One uniquely difficult k8s error to surface is if the resolver container itself fails to start. This can happen if, for example, k8s fails to pull the...
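A hedged sketch of how that failure could be detected, using the `kubernetes` Python client to inspect container waiting reasons; the function name and the exact set of reasons checked are assumptions:

```python
from typing import Optional

from kubernetes import client, config

# Waiting reasons that indicate a container can't start (illustrative set).
START_FAILURE_REASONS = {"ImagePullBackOff", "ErrImagePull", "CreateContainerError"}


def get_container_start_failure(namespace: str, pod_name: str) -> Optional[str]:
    """Return a human-readable reason if a container cannot start, else None."""
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
    for status in pod.status.container_statuses or []:
        waiting = status.state.waiting
        if waiting is not None and waiting.reason in START_FAILURE_REASONS:
            return f"{waiting.reason}: {waiting.message}"
    return None
```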
When people upgrade regularly, it can be helpful to know what version of the server is running. We should display this information discreetly in the UI.
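A minimal sketch of the server side, assuming a Flask app; the endpoint path, app wiring, and version source are illustrative, not Sematic's actual implementation:

```python
import flask

# Hypothetical app/endpoint; the real server wiring may differ.
app = flask.Flask(__name__)

SERVER_VERSION = "0.0.0"  # placeholder; would come from the package metadata


@app.route("/api/v1/meta/version")
def version() -> flask.Response:
    """Expose the running server version so the UI can render it discreetly."""
    return flask.jsonify({"server_version": SERVER_VERSION})
```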