feat: Allow using custom clusters with groups
But only if ALL devices in the group support the cluster. Whether read/write/command works for a custom cluster depends entirely on the individual device. In my testing, my Inovelli switches respond to writing attributes, but not to commands (such as LED commands).
I'm sorry I haven't had time to get back to this. Other projects have taken priority. I still plan on getting back to it, but I don't have a specific estimate as to when. Feel free to finish it yourself if you feel it warrants it. Otherwise... I'll get to it sometime!
Will finish https://github.com/Koenkk/zigbee-herdsman/pull/1330 first, then we can clean up this one.
This pull request is stale because it has been open for 60 days with no activity. Remove the stale label or comment, or this will be closed in 7 days.
Adding a comment to hopefully make this not stale. I'm really hoping this functionality is included soon so I can keep a bunch of settings on dozens of Inovelli switches in sync.
@Nerivec: were you planning on getting back to finishing this sometime soon?
Made a couple of attempts a while back, but nothing satisfactory. The logic is too convoluted, and there is a fatal flaw in the detection that requires some refactoring first to support this (a group would be considered to have a common custom cluster even if the group doesn't contain the specific device endpoint that actually supports that cluster, which would make requests partially fail).
I also don't have any use case myself, so I can check the logic, but not test any of this.
I'm not sure I fully understand what the next steps are. The logic and the fatal flaw... in this PR as-is... are too convoluted? Or do you just not think this feature should be implemented at all?
Are you still planning on getting to this, but will need help testing? Or, because you can't test, do you no longer plan on working on it?
Basically... are you expecting me to pick it back up, and if so, can you give any pointers on what I should address first? Or is it just a "no, this isn't going to happen"?
The convoluted part was about the logic to keep the common clusters in sync. There should be a way to make this simpler.
The flaw part was about the fact that custom clusters need to be endpoint-aware before this can be implemented ("what custom clusters are supported on this endpoint on this device", so the group limits the checking to the endpoint actually in the group). Two PRs are needed, one in ZH and one in ZHC, so devices can declare which endpoints are concerned when custom clusters are added.
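For illustration, here's a rough sketch of what the ZHC side of such a declaration could look like. It mirrors the shape of the existing `deviceAddCustomCluster()` modern extend, but the `endpoints` field is hypothetical (it's exactly the refactor being discussed), and the IDs/names are illustrative:

```ts
import {Zcl} from 'zigbee-herdsman';
import {deviceAddCustomCluster} from 'zigbee-herdsman-converters/lib/modernExtend';

// Sketch only: the `endpoints` field does not exist today; it is the
// hypothetical declaration that would let group logic know which endpoints
// actually implement the cluster. Cluster/attribute/command IDs are illustrative.
const inovelliCluster = deviceAddCustomCluster('manuSpecificInovelli', {
    ID: 0xfc31,
    manufacturerCode: 0x122f,
    attributes: {
        ledColorWhenOn: {ID: 0x005f, type: Zcl.DataType.UINT8},
    },
    commands: {
        ledEffect: {
            ID: 0x01,
            parameters: [{name: 'effect', type: Zcl.DataType.UINT8}],
        },
    },
    commandsResponse: {},
    // Hypothetical addition: restrict the cluster to the endpoints that
    // actually implement it, so group logic can skip e.g. endpoint 242.
    endpoints: [1],
});
```

The returned extend would then go in the device definition's `extend` array as usual; group code could later intersect each member endpoint against the declared `endpoints` list.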
I don't have enough time to get into big PRs at the moment, but later in the year, sure.
We should probably first discuss the best approach for the endpoint-aware part. @Koenkk we only skimmed over the subject... not sure if you've had any ideas on this.
> "what custom clusters are supported on this endpoint on this device", so the group limits the checking to the endpoint actually in the group
I would expect that if a device has 2 endpoints where both endpoints support a manuspecific cluster with id 99, both use the same cluster definition (so cluster definitions are always device-wide, not endpoint-specific). Or am I missing something?
That's not the problem here.
A device could have 2, 3, 4... endpoints, and say only 1 supports the custom cluster. If the device is included in a group, the current detection would always assume that custom cluster is common, no matter which endpoint is actually in the group and whether that endpoint supports the cluster or not. That would result in partial failures, since the device will obviously not respond properly (I'm not even sure what would happen in this scenario, probably some stack-specific outcome...).
- If you send a group command to a non-existent custom cluster on a device's endpoint because the group assumed it was supported, you might run into trouble.
- Same goes if, for example, that device's endpoint actually implements the same cluster ID but with a different structure/behavior (mismatched types, out-of-bounds values, etc.; that kind of parsing failure is likely to crash lesser-quality devices...). Actually, this one might be a problem even without multi-endpoint in the mix.
At best it's a failed request that the device silently drops, but it's something Z2M should not allow, for safety.
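To make the flaw concrete, here's a minimal sketch contrasting today's device-level assumption with the endpoint-level check the refactor would enable. All shapes and helpers are hypothetical, not the actual zigbee-herdsman types:

```ts
// Hypothetical shapes for illustration; not the actual zigbee-herdsman types.
interface Endpoint {
    ID: number;
    inputClusters: number[]; // cluster IDs from the endpoint's simple descriptor
}

interface Device {
    customClusters: {ID: number}[]; // today: declared device-wide
    endpoints: Endpoint[];
}

// Flawed: "the device declares the custom cluster" is treated as
// "whichever endpoint of this device is in the group supports it".
function deviceSupportsCluster(device: Device, clusterId: number): boolean {
    return device.customClusters.some((c) => c.ID === clusterId);
}

// Endpoint-aware: only the endpoint that is actually in the group counts.
function endpointSupportsCluster(endpoint: Endpoint, clusterId: number): boolean {
    return endpoint.inputClusters.includes(clusterId);
}

// Example: Inovelli-like device, custom cluster only on endpoint 1.
const device: Device = {
    customClusters: [{ID: 0xfc31}],
    endpoints: [
        {ID: 1, inputClusters: [0xfc31]},
        {ID: 242, inputClusters: []}, // Green Power endpoint
    ],
};
console.log(deviceSupportsCluster(device, 0xfc31)); // true, even if only ep 242 is in the group
console.log(endpointSupportsCluster(device.endpoints[1], 0xfc31)); // false for ep 242
```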
Indeed. In this case, the Inovelli switch has the manuSpecificInovelli cluster as an input on endpoint 1 and as an output on endpoints 2 and 3 (and it's not present on endpoint 242).
I think a group should only support a manuspecific cluster if all endpoints in the group support it.
That would bring the number down to pretty much zero, since a lot of devices have a Green Power (GP) endpoint 242 that won't support custom clusters. Also, several devices have either "mistake" secondary endpoints (bad/leftover firmware config) or a secondary endpoint that's just for a specific use (identify, etc.), which would have the same impact.
What I mean is e.g. (see the sketch after these examples):
- Group 1: Inovelli device 1 ep 1 which supports cluster X, Inovelli device 2 ep 1 which supports cluster X -> group supports cluster X
- Group 2: Inovelli device 1 ep 1 which supports cluster X, Inovelli device 2 ep 1 which supports cluster X, some Ikea device ep 1 -> group does not support cluster X
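In code, that rule could look roughly like the following. This is a hypothetical sketch, not the actual Z2M implementation; the shapes (`GroupMember`, `inputClusters`) are assumptions made for illustration:

```ts
// Hypothetical sketch of the rule above; not the actual Z2M implementation.
interface Endpoint {
    ID: number;
    inputClusters: number[]; // cluster IDs this endpoint implements
}

interface GroupMember {
    deviceIeeeAddr: string;
    endpoint: Endpoint; // the specific endpoint that was added to the group
}

// A group supports a (custom) cluster only if EVERY member endpoint supports it.
function groupSupportsCluster(members: GroupMember[], clusterId: number): boolean {
    return members.length > 0 && members.every((m) => m.endpoint.inputClusters.includes(clusterId));
}

const X = 0xfc31; // illustrative custom cluster ID
const inovelliEp1: Endpoint = {ID: 1, inputClusters: [X]};
const ikeaEp1: Endpoint = {ID: 1, inputClusters: []};

// Group 1: two Inovelli endpoints that both support X -> true
console.log(groupSupportsCluster(
    [
        {deviceIeeeAddr: '0x01', endpoint: inovelliEp1},
        {deviceIeeeAddr: '0x02', endpoint: inovelliEp1},
    ],
    X,
)); // true

// Group 2: same two plus an Ikea endpoint without X -> false
console.log(groupSupportsCluster(
    [
        {deviceIeeeAddr: '0x01', endpoint: inovelliEp1},
        {deviceIeeeAddr: '0x02', endpoint: inovelliEp1},
        {deviceIeeeAddr: '0x03', endpoint: ikeaEp1},
    ],
    X,
)); // false
```

Because the check runs over the endpoints that are actually group members, a device's Green Power endpoint (242) or leftover secondary endpoints don't matter unless they were explicitly added to the group.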