Maxim Martynov

Results 137 comments of Maxim Martynov

https://github.com/pypi/warehouse/issues/8254 led to backfilling metadata for all `.whl` files on PyPI, so the new API can be used for most packages. It is not yet clear what the total...

Isn't that the same as the existing metric? `starlette_requests_total{method="...",path="...",status_code="500"} 1.0`

This is because there is no Spark-specific dialect implementation for ClickHouse, so Spark does not know how to convert this type to a ClickHouse JDBC-compatible one: https://github.com/apache/spark/blob/b41ea9162f4c8fbc4d04d28d6ab5cc0342b88cb0/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L139-L167
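The lookup order can be illustrated with a small Python sketch (a hypothetical analog, not Spark's actual Scala code): `JdbcUtils` asks the dialect for a type mapping first, falls back to a generic table, and fails for anything unmapped — which is exactly what happens when no ClickHouse dialect is registered. The type names and mapping below are illustrative only.

```python
# Hypothetical sketch of the lookup order in Spark's JdbcUtils.getJdbcType:
# dialect override first, then a generic mapping, then an error.
# The type names and the mapping are illustrative, not Spark's real tables.

GENERIC_JDBC_TYPES = {
    "IntegerType": "INTEGER",
    "LongType": "BIGINT",
    "StringType": "TEXT",
    "DoubleType": "DOUBLE PRECISION",
}

def jdbc_type_for(catalyst_type, dialect_overrides=None):
    """Return a JDBC type name for a Catalyst type, preferring dialect overrides."""
    if dialect_overrides and catalyst_type in dialect_overrides:
        return dialect_overrides[catalyst_type]
    if catalyst_type in GENERIC_JDBC_TYPES:
        return GENERIC_JDBC_TYPES[catalyst_type]
    # With no ClickHouse dialect registered, unsupported types fall through here.
    raise ValueError(f"Can't get JDBC type for {catalyst_type}")
```

A dialect implementation would supply the missing override, e.g. `jdbc_type_for("SomeUnsupportedType", {"SomeUnsupportedType": "Int32"})` succeeds while the bare call raises.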

Well, `_make_delete_pod_request` already handles a missing pod: https://github.com/jupyterhub/kubespawner/blob/5c6801b9d87508e2435b2dd11da8b89040a72ef6/kubespawner/spawner.py#L2867 So `kubectl delete` should not be an issue.
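The pattern there is "delete and swallow 404". A minimal standalone sketch (with a stand-in exception class, not kubespawner's actual code):

```python
# Minimal sketch of "delete a pod, tolerating a pod that is already gone".
# ApiError is a stand-in for kubernetes_asyncio's ApiException; the real
# _make_delete_pod_request checks e.status == 404 in the same spirit.

class ApiError(Exception):
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def delete_pod(delete_call, name):
    """Issue the delete; treat 404 (pod already missing) as success."""
    try:
        delete_call(name)
        return True
    except ApiError as e:
        if e.status == 404:
            return True  # pod is already gone - nothing to do
        raise  # any other error (403, 500, ...) still propagates

def missing_pod(name):
    # Simulates the API server responding "not found".
    raise ApiError(404)
```

So deleting an already-absent pod returns success, while e.g. a 403 still surfaces to the caller.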

I still don't get why an evicted pod cannot be deleted. It's the same case as a pod that has not started yet.

`reflector._list_and_update` calls [api.list_namespaced_ingress](https://github.com/tomplus/kubernetes_asyncio/blob/b291a3a7475e9c6ea1cae74f65f43e9e3ad4af54/kubernetes_asyncio/client/api/networking_v1_api.py#L2339), but it does not raise an exception if Kubernetes returned a 403 error. This is because `_preload_content=False` is used, and the method parses the response object manually: https://github.com/jupyterhub/kubespawner/blob/8766d364a29b9a688a268592fbee5e7a44190fad/kubespawner/reflector.py#L218-L223 It was added...

If `await list_method(**kwargs)` were called without `_preload_content=False` and the Kubernetes API returned a 403 error, this function would raise an exception before ever reaching `for p in initial_resources["items"]`.
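The difference can be sketched with a stand-in client (hypothetical names, not the real kubernetes_asyncio API): with content preloading, the client parses the response and raises on a non-2xx status before the caller sees any items; with `_preload_content=False`, the raw response is handed back even on 403, and the caller must check `resp.status` itself.

```python
# Sketch of the two behaviors (stand-in classes, not kubernetes_asyncio itself).

class ApiError(Exception):
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

class RawResponse:
    def __init__(self, status, body):
        self.status = status
        self.body = body

def list_call(status, body, _preload_content=True):
    resp = RawResponse(status, body)
    if _preload_content:
        # Preloading parses the response and raises for non-2xx statuses,
        # so a 403 never reaches a "for p in items" loop in the caller.
        if not 200 <= status < 300:
            raise ApiError(status)
        return body
    # Without preloading, the raw response is returned even on 403;
    # the caller must inspect resp.status manually (as reflector.py does).
    return resp
```

With `_preload_content=False`, `list_call(403, {"items": []}, _preload_content=False)` returns a response with `status == 403` and no exception, which is why the reflector must check the status explicitly.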

Why not just increase `k8s_api_request_timeout` to `5s` or `10s`?
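In `jupyterhub_config.py` that would be a one-line change (sketch; 10 seconds is an arbitrary choice, the default is a few seconds):

```python
# jupyterhub_config.py - raise KubeSpawner's per-request timeout
# for calls to the Kubernetes API.
c.KubeSpawner.k8s_api_request_timeout = 10  # seconds
```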

Administrators of standalone installations may activate this kind of integration; it'd be better not to mess with it.