Kiefer Chang
We added the volumes health check in v1.1.0 before upgrading. Let's keep monitoring to see if this happens again.
@shuo-wu I think we're mixing two problems in the same issue. Vicente's comment https://github.com/harvester/harvester/issues/2053#issuecomment-1200829171 is about the support bundle in https://github.com/harvester/harvester/issues/2053#issuecomment-1193863957; if you have time, please take a look again,...
Supposed to be fixed in LH 1.5.0: https://github.com/longhorn/longhorn/issues/4305
@torchiaf If you need help with the load balancer, feel free to reach out to @starbops.
@innobead I think we need to wait until the self-hosted runner from the EIO team is ready, right? **Update**: I have received guidance.
I can't reproduce this. `cattle-cluster-agent` pods run with prime images. The `cattle-cleanup` pod runs with a non-existent community image after deleting the Harvester cluster; it might be worth filing an...
@WebberHuang1118 please help look into this.
> So my first question would be, what runtime means in this context, are reboots acceptable? I guess it's acceptable. One scenario is adding new hardware on day 2,...
> 2. We can make some paths persistent and writable by tweaking the layout configuration. That is to say, we could make `/lib` RW and persistent. I dislike the idea; `/lib`...
@ibrokethecloud mentioned Nvidia has a way to ship the driver in the container image: https://gitlab.com/nvidia/container-images/driver/-/tree/main/sle15