device interfaces
Idea brought up by @Kane610 during SOTU.
Context
This applies mainly to Zigbee and deCONZ. Z-Wave too, but it doesn't support device automations yet.
Currently with device automations we're allowing users to automate one button at a time. That's great, but it can also be cumbersome: for a four-button remote, a user now needs to set up four automations.
There are also the new IKEA remotes that can turn and click.
Another example is contact/motion sensors that should enable/disable a light. (Thanks @dmulcahey)
Proposal
Allow linking up remotes/sensors directly to one or more compatible devices or an area (maybe with type selection, so only lights/switches?).
Examples:
- light: turn on/off, control brightness
- media player: play/pause, control volume
- fan: on/off, control speed
- motion -> light on for 30 seconds
- closet open/close -> light on/off
I'm thinking that we can specify interfaces that devices can implement. We could allow making this part of device_info. We then should be able to ask an integration to attach an interface to its device, so it will call the right triggers.
```python
class RemoteUpDownToggleInterface:
    interface_id = "remote_up_down_toggle"

    async def async_trigger_up(self):
        pass

    async def async_trigger_down(self):
        pass

    async def async_trigger_toggle(self):
        pass
```
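For illustration, a device could advertise the interfaces it implements through device_info. A sketch only; the "interfaces" key is hypothetical and not part of the current device_info schema:

```python
class MyRemoteEntity(Entity):  # illustrative entity subclass
    @property
    def device_info(self):
        return {
            "identifiers": {(DOMAIN, self._ieee)},
            "name": "Living room remote",
            # Hypothetical key: interface IDs this device implements.
            "interfaces": ["remote_up_down_toggle"],
        }
```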
We will introduce a new interface_automation integration to manage this, and integrations create platforms like zha/interface_automation.py that contain async_attach_interface:
```python
async def async_attach_interface(device_id, interface):
    ieee = get_ieee(device_id)
    detach = [
        async_listen_zha_event(ieee, "up", interface.async_trigger_up),
        async_listen_zha_event(ieee, "down", interface.async_trigger_down),
        async_listen_zha_event(ieee, "on_off", interface.async_trigger_toggle),
    ]

    @callback
    def unsub():
        for det in detach:
            det()

    return unsub
```
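The interface_automation integration could then glue these together: look up the device's integration, load its interface_automation platform, and attach. A rough sketch; the helpers used here are assumptions, not confirmed APIs:

```python
# Hypothetical glue inside the interface_automation integration.
# async_get_device_integration and async_get_platform are assumed
# helpers, not confirmed Home Assistant APIs.
async def async_setup_link(hass, device_id, interface):
    integration = await async_get_device_integration(hass, device_id)
    platform = await integration.async_get_platform("interface_automation")
    # Returns the unsubscribe callback from the sketch above.
    return await platform.async_attach_interface(device_id, interface)
```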
Consequences
It gets easier to deal with remotes in the UI.
Yes! This is more or less what I was talking about at SOTU. The difference from my original thought was that we would "package" a set of automations that would just use existing device automations. E.g. if I use the Hue dimmer remote (On, Off, Dim Up, Dim Down) with a light, four automations would be created with the trigger and action filled in automatically: one for turning the light on, one for turning it off, one for dim up and one for dim down, which the user could then adjust accordingly.
Oh that is an interesting thought. So we would scaffold 4 automations that the user can adjust afterwards.
I think that I would slightly prefer to use interfaces, but offer a "convert to automations" button for users that want to do more.
Ah, in that case, we could even have async_attach_interface just return device automation config. That way interface_automation can convert it when the user wants to.
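To make that concrete, a sketch under the assumption that the platform returns plain device-automation configs paired with handlers (the trigger fields are illustrative ZHA-style values):

```python
# Sketch: instead of subscribing directly, return device automation
# configs that interface_automation can either run itself or
# materialize into editable automations. Values are illustrative.
async def async_attach_interface(device_id, interface):
    return [
        {
            "trigger": {
                "platform": "device",
                "domain": "zha",
                "device_id": device_id,
                "type": "remote_button_short_press",
                "subtype": "dim_up",
            },
            "handler": interface.async_trigger_up,  # hypothetical pairing
        },
        # ...one entry per command the remote supports
    ]
```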
Yeah, I think it would be really simple to define, similar to current device triggers. You define the automation from the perspective of the remote model and what entity platform it should control.
Could this interface be made more generic than something just for remotes? I'm thinking about allowing integrations to provide a way to expose their available "signals" (or events), which could be bound to "slots" (or actions).
A simple example would be the SYMFONISK button and a "rotate_right" signal, which could be bound to "increase_volume" of a media player (or increase_brightness of a bulb). This would also allow mapping the interface to the Web of Things spec (or was it the second IoT thing spec).
This would also allow easy integration of any integration-specific functionality without needing to resort to manually calling custom services, and these "actions" could also be easily exposed to Lovelace or other UIs.
Really, it could work for anything that has device automation support.
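As a rough illustration of the signal/slot idea (everything here is hypothetical, not an existing API):

```python
# A hypothetical data-driven binding: a device "signal" wired to an
# entity "slot". UIs or a Web of Things mapping could consume this.
BINDING = {
    "signal": {"device_id": "symfonisk_button_1", "type": "rotate_right"},
    "slot": {"entity_id": "media_player.kitchen", "action": "increase_volume"},
}
```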
This is something I have really been missing. But I'd like to throw in an idea for some more powerful ways to hook up dimmers (and volume controllers etc.). I think it would make sense not to connect the buttons directly to the brightness of a light, but instead expose a dimmer control entity, so that the actual automation of remote buttons is separate from the dimming state. This allows e.g. a Lovelace UI to show and control the same dimming state as that controlled by the remote.
I made a prototype and wrote up some more details in
https://github.com/dkagedal/dimmer and https://community.home-assistant.io/t/dimmer-control/164563
Obviously, you'd still need the high-level automation of the remotes and the interfaces on the controlled devices. But we should consider having something in between.
The "dimmer control" entity above would generalize to some kind of "state/attribute mutator" with a state that describes how it is currently mutating values, typically increasing/decreasing at a given rate.
Maybe this is only tangential to the core proposal here, but I think it is worth bringing up.
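A very rough sketch of that mutator idea, assuming a hypothetical entity whose state describes how it is ramping a value:

```python
# Hypothetical "mutator" entity: its state describes how it is
# currently changing a target value. Names are illustrative.
class DimmerControl:
    def __init__(self, rate_per_second=25):
        self.state = "idle"  # "increasing", "decreasing" or "idle"
        self.rate = rate_per_second  # brightness units per second

    def start(self, direction):
        self.state = direction  # remote button press lands here

    def stop(self):
        self.state = "idle"  # button release lands here
```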
Users can control the brightness of their lights themselves via the normal controls. Using entities is out of scope for this.
The goal is to be able to offer something simple. If the user doesn't want that anymore, it can convert it to automations that the user can customize. Anything beyond this goal is out of scope.
It will be up to the interfaces to decide what devices they can control.
I've been thinking about this a bit more. Two thoughts:
Producers and Consumers
For each interface there will be:
- producers: devices that generate the events
- consumers: devices that consume the events
Each device interface would need to describe the requirements for both the producer and the consumer. Some examples:
- motion_on_off: Producer is a binary sensor with the motion device class. Consumer is any device that can be turned on and off (i.e. lights).
- remote_on_off: Producer is a remote with on/off buttons. Consumer is any device that can be turned on and off (i.e. lights).
- remote_on_off_up_down: Producer is a remote with on, off, up and down commands. Consumer is any device that can be turned on/off but also increase/decrease something (light, fan, thermostats?).
- Other interfaces can be remote_toggle_up_down, remote_toggle, remote_up_down.
As you can see there is already some overlap between producers and consumers of different interfaces, so we can generalize there. Both motion_on_off and remote_on_off need a consumer that can handle on/off commands.
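To illustrate, each interface could be described declaratively with its producer and consumer requirements. A sketch only; none of these structures exist today:

```python
# Sketch of declarative interface descriptors; the shared on/off
# consumer requirement shows where we could generalize.
ON_OFF_CONSUMER = {"supports": ["turn_on", "turn_off"]}

INTERFACES = {
    "motion_on_off": {
        "producer": {"domain": "binary_sensor", "device_class": "motion"},
        "consumer": ON_OFF_CONSUMER,
    },
    "remote_on_off": {
        "producer": {"commands": ["on", "off"]},
        "consumer": ON_OFF_CONSUMER,
    },
    "remote_on_off_up_down": {
        "producer": {"commands": ["on", "off", "up", "down"]},
        "consumer": {"supports": ["turn_on", "turn_off", "increase", "decrease"]},
    },
}
```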
Layered Approach
- Device Interfaces should be expressed in configs using device automations
- Device automations should be expressed in configs using automation primitives (state trigger, call service etc)
Now we can offer a user that wants to customize things the option to break it down.
- Don't like a single device interface? HA can break it down to 5 automations using device triggers and device actions.
- Still too high level? Convert a device action/condition/trigger to its automation primitives.
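For example, the same trigger at two layers could look like this. A sketch: the device trigger fields mimic ZHA-style device triggers, the primitive form assumes the zha_event payload, and the IDs are made up:

```python
# The same trigger at two layers of abstraction. Breaking a device
# interface down would emit the first form; breaking that down
# further would emit the second. Values are illustrative.
DEVICE_TRIGGER = {
    "platform": "device",
    "domain": "zha",
    "device_id": "abc123",
    "type": "remote_button_short_press",
    "subtype": "turn_on",
}
PRIMITIVE_TRIGGER = {
    "platform": "event",
    "event_type": "zha_event",
    "event_data": {"device_ieee": "00:0d:6f:00:0a:bc:de:f0", "command": "on"},
}
```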
FYI, another example of a gimmicky but still very popular home-automation remote-like device, with a few other interfaces not mentioned in the original post, is the "Aqara Cube" by Xiaomi (a.k.a. Xiaomi Aqara Magic Cube Controller):
https://www.aqara.com/us/cube.html
It is Zigbee-compatible and has a six-axis motion sensor which, via six gestures, can produce several commands, including:
- shake = Shake cube in hand in an angry motion.
- push = Push cube on a flat surface in any direction.
- rotate = Twist the cube on a flat surface in any direction.
- flip 90° = Rotate the cube to any side.
- flip 180° = Rotate the cube and put it down on the opposite face.
- double tap = Knock the cube twice against a surface.
https://www.youtube.com/watch?v=LeuHpBdwmag&t=395s
https://notenoughtech.com/review/xiaomi-aqara-cube-controller-review/
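Mirroring the RemoteUpDownToggleInterface sketch from the proposal, the cube's gestures could map onto a hypothetical interface like:

```python
# Hypothetical interface for the Aqara Cube's gestures.
class CubeGestureInterface:
    interface_id = "cube_gestures"

    async def async_trigger_shake(self):
        pass

    async def async_trigger_rotate(self):
        pass

    async def async_trigger_flip_90(self):
        pass
```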
I would like to have generic events for handling remotes, so I don't first need to find out what kind of events a device sends and how I need to use them in an automation.
"single press", "double press", "short press", and "long press" are relatively common generic events(?)
Those four distinctive events could allow you to assign four different actions with just one button.
You could allow for any number of actions by making that more generic, allowing for configurable sequences. It could be a sequence of presses and delays of a button, each of configurable lengths (within some limits). Or the sequences could be physical movement, like the cube mentioned above or a wearable. Or maybe gestures on a touch-screen (or even eventually, a camera input).
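A minimal sketch of such a sequence matcher, assuming press events arrive as simple labels (timing windows for the delays are omitted for brevity):

```python
from collections import deque

# Match a configured sequence of press kinds on one button.
SEQUENCE = ["short", "short", "long"]

def make_sequence_matcher(sequence, on_match):
    recent = deque(maxlen=len(sequence))

    def handle_press(kind):
        recent.append(kind)
        if list(recent) == sequence:
            recent.clear()
            on_match()  # fire the bound action

    return handle_press

# Usage: feed it press events as they arrive.
matcher = make_sequence_matcher(SEQUENCE, lambda: print("matched!"))
for press in ["short", "short", "long"]:
    matcher(press)
```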
Has this been superseded / implemented by blueprints? Should we close here?
I think that blueprints are a step towards this goal but not a full solution. The problem with blueprints is that we still expect users to find the blueprint for their remote and then configure their area. I think that we should still have a way to automatically find/activate blueprints for remotes + areas.
This architecture issue is old, stale, and possibly obsolete. Things have changed a lot over the years. Additionally, we have been moving to discussions for these architectural conversations.
For that reason, I'm going to close this issue.
../Frenck