PhanLe1010
The current [helm chart](https://github.com/longhorn/upgrade-responder/tree/master/chart) doesn't automatically roll the deployment when the ConfigMap changes, so the upgrade-responder pods don't pick up the new ConfigMap. **Workaround**: kill the upgrade-responder pods one by one to reload...
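For reference, a rough sketch of that workaround; the label selector and namespace are assumptions, so adjust them to match your install:

```bash
# Find the upgrade-responder pods (label selector and namespace are assumptions;
# adjust them to your install).
kubectl get pods -n <namespace> -l app.kubernetes.io/name=upgrade-responder

# Delete the pods one at a time; the Deployment recreates each pod,
# and the new pod mounts the updated ConfigMap.
kubectl delete pod <upgrade-responder-pod-name> -n <namespace>
```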
@williamlin-suse The fix has been pushed and released: https://github.com/longhorn/upgrade-responder/releases/tag/v0.1.5. Can you give it another try?
* Pull the new chart changes
* Update the response config
* Verify that...
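Roughly what I mean by those steps, assuming the chart is used from a local git checkout; the release name, namespace, values file, ports, and the request body for `/v1/checkupgrade` are placeholders/assumptions:

```bash
# Pull the new chart changes (assumes a local checkout of the repo)
git -C upgrade-responder pull

# Update the response config in your values file, then upgrade the release
helm upgrade <release-name> ./upgrade-responder/chart -n <namespace> -f <your-values.yaml>

# Verify the pods were recreated and are serving the new response config
kubectl get pods -n <namespace> -l app.kubernetes.io/name=upgrade-responder
kubectl port-forward -n <namespace> <upgrade-responder-pod> 8080:<container-port> &
curl -s -X POST http://localhost:8080/v1/checkupgrade -d '{"appVersion":"v1.3.0"}'
```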
Seems that [the 33% CPU time of the longhorn-instance-manager](https://user-images.githubusercontent.com/1158428/181378206-dbdd7dfc-2173-45c5-8fd7-308822db8dcd.png) is not an aggregate of the engine processes below it. The `top` command only aggregates the CPU time of child processes in forest view mode...
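To double-check outside of `top`, here is a quick sketch that compares the %CPU of the instance-manager process with the sum over its direct children; the process name is an assumption, and `ps` reports a lifetime-average %CPU rather than an instantaneous one:

```bash
# Find the longhorn-instance-manager PID (process name is an assumption)
IM_PID=$(pgrep -o -f longhorn-instance-manager)

# %CPU of the instance manager itself, as reported by ps (lifetime average)
ps -o %cpu= -p "$IM_PID"

# Sum of the %CPU of its direct children (the engine/replica processes)
ps -o %cpu= --ppid "$IM_PID" | awk '{sum += $1} END {print sum}'
```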
@mbleichner Do you observe this behavior in Longhorn v1.2.4?
These logs are from the proxy in Longhorn v1.3.0. How about in Longhorn v1.2.4, @mbleichner?
Can you file a ticket in the Kasten K10 repo to see if they can provide any info about why it marks Longhorn as failed? Once we know the reason, it is...
Putting this in planning for now. If we see more traction from users, we can raise the priority.
After checking the support bundle, we see that:
* The volume cannot finish attaching because one or more replicas cannot be started
* The replicas cannot be started because Longhorn cannot...
@wang-xiaowu you can do:
> Workaround: delete one of the duplicated instance managers: `kubectl delete instancemanagers instance-manager-r-84356b81 -n longhorn-system`
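To spot the duplicate before deleting, something like this should work; the custom-columns field paths are my assumption and may differ between Longhorn versions:

```bash
# List the instance managers; two instance-manager-r objects pointing at the
# same node indicate the duplicate.
kubectl get instancemanagers.longhorn.io -n longhorn-system \
  -o custom-columns=NAME:.metadata.name,TYPE:.spec.type,NODE:.spec.nodeID

# Then delete the duplicated one, as in the workaround above.
kubectl delete instancemanagers.longhorn.io instance-manager-r-84356b81 -n longhorn-system
```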
Things are looking better. The volume is running. Can you send another support bundle? We can check for the unschedulable condition.