Xiang Ji
Sure, these are the logs from one such incident where the pod disconnected for a moment (the logs are from the pod `assistant-service-2`). The membership is `:auto` and the strategy...
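For context, this is roughly how we start the Horde processes (a minimal sketch; `Assistant.Registry` is an assumed name, while `Assistant.Inbox.Sync.Supervisor` matches the supervisor that appears in the logs further down):

```
# In the application's supervision tree. `members: :auto` makes Horde
# track cluster membership on its own as nodes come and go.
children = [
  {Horde.Registry, [name: Assistant.Registry, keys: :unique, members: :auto]},
  {Horde.DynamicSupervisor,
   [name: Assistant.Inbox.Sync.Supervisor, strategy: :one_for_one, members: :auto]}
]

Supervisor.start_link(children, strategy: :one_for_one)
```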
Sure. We took a look at that when the bug happened, but from what we could tell, it says:

> The CRDT resolves the conflict and Horde.Registry sends an exit...
So what you mean is that those termination messages in the logs are actually normal: the registry shutting down a duplicate process. Then perhaps it was actually some...
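For reference, handling that shutdown gracefully might look like the sketch below, assuming the `:name_conflict` exit reason that Horde.Registry documents (the worker has to trap exits for the signal to arrive as a message):

```
# In the worker's init/1: trap exits so the registry's exit signal
# is delivered as a message instead of killing the process outright.
def init(state) do
  Process.flag(:trap_exit, true)
  {:ok, state}
end

# The process that loses the name conflict receives an exit signal;
# stopping with :normal keeps the shutdown quiet in the logs.
def handle_info({:EXIT, _from, {:name_conflict, {_key, _value}, _registry, _winning_pid}}, state) do
  {:stop, :normal, state}
end
```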
The worker module:

```
def start_link(%{account: account}) do
  inbox_name = account.identifier

  case GenServer.start_link(__MODULE__, %{account: account}, name: via_tuple(inbox_name)) do
    {:ok, pid} ->
      {:ok, pid}

    {:error, {:already_started, pid}} ->
      Logger.info("#{inspect(inbox_name)} already started...
```
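(For completeness, `via_tuple/1` here is the usual Horde via tuple; the registry name is an assumption:)

```
# Registers the worker under its inbox name in the distributed registry.
# Assistant.Registry is an assumed name for the Horde.Registry instance.
defp via_tuple(inbox_name) do
  {:via, Horde.Registry, {Assistant.Registry, inbox_name}}
end
```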
After switching to static membership instead of `:auto`, similar termination messages still occur (I assume it's still the registry shutting down duplicate processes, as mentioned), but the cluster behaves correctly...
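Concretely, the static membership looks roughly like this (the node names are placeholders for our four actual nodes):

```
# Static membership: every {name, node} pair is listed explicitly,
# so Horde no longer reacts to nodes joining or leaving on its own.
members = [
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node1"},
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node2"},
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node3"},
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node4"}
]

{Horde.DynamicSupervisor,
 [name: Assistant.Inbox.Sync.Supervisor, strategy: :one_for_one, members: members]}
```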
Well, actually, today we saw a new error even with static membership...

```
10:28:56.775 assistant 08:28:56.768 [info] SIGTERM received - shutting down
10:28:56.777 assistant 08:28:56.773 [error] GenServer Assistant.Inbox.Sync.Supervisor terminating
10:28:56.783...
```
Right, so that's what happened in this case: apparently the new Registry/DynamicSupervisor will still try to join the cluster regardless of the static list, which doesn't actually include...
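In case it helps anyone, membership can also be corrected at runtime with `Horde.Cluster.set_members/2` (a sketch, using the same assumed names as above):

```
# Force the supervisor's member list back to the intended static set
# after dropping the fourth node.
Horde.Cluster.set_members(Assistant.Inbox.Sync.Supervisor, [
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node1"},
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node2"},
  {Assistant.Inbox.Sync.Supervisor, :"assistant@node3"}
])
```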
By the way, when I tried to scale down from 4 to 3 nodes again, an `(EXIT) no process` error similar to https://github.com/derekkraan/horde/issues/202 happened on all 3 of the remaining nodes...
In my case, I realized I hadn't called `await LocalNotifications.requestPermissions()` before trying to send the notification. Note that the latest version of the project is now at https://github.com/ionic-team/capacitor-plugins/
Sure. I haven't used skhd in a while, though from what I remember it seemed to work the last time I tried.