Automation trigger is incorrectly fired in 2024.6
The problem
I have an automation trigger that fires on a power level transition. In 2024.5.4 it fired only when there was an actual change; in 2024.6 it fires many times a day even when no such transition occurred.
Looks like a regression to me. I also do not see an open issue for something like this, or any mention of such a change in the release notes.
What version of Home Assistant Core has the issue?
core-2024.6.0
What was the last working version of Home Assistant Core?
core-2024.5.4
What type of installation are you running?
Home Assistant OS
Integration causing the issue
No response
Link to integration documentation on our website
No response
Diagnostics information
No response
Example YAML snippet
alias: Power on
description: ""
trigger:
- platform: numeric_state
entity_id: sensor.solax_x1_hybrid_g4_grid_power
below: -1
condition: []
action:
- device_id: 77469a77222a02e4022c230d82564dd6
domain: mobile_app
type: notify
message: ⚡ Power on
mode: single
Anything in the logs that might be useful for us?
No response
Additional information
No response
Which integration provides sensor.solax_x1_hybrid_g4_grid_power? Also, please check and provide the history graph of this entity.
It is the solax integration.
It looks like this: sensor.solax_x1_hybrid_g4_grid_power.csv
I now see there are some "unavailable" entries there that are likely the cause; these did not happen in 2024.5.4. Is that new behavior in the integration or in HA itself?
Hey there @squishykid, mind taking a look at this issue as it has been labeled with an integration (solax) you are listed as a code owner for? Thanks!
Code owner commands
Code owners of solax can trigger bot actions by commenting:
@home-assistant close: Closes the issue.
@home-assistant rename Awesome new title: Renames the issue.
@home-assistant reopen: Reopens the issue.
@home-assistant unassign solax: Removes the current integration label and assignees on the issue; add the integration domain after the command.
@home-assistant add-label needs-more-information: Adds a label (needs-more-information, problem in dependency, problem in custom component) to the issue.
@home-assistant remove-label needs-more-information: Removes a label (needs-more-information, problem in dependency, problem in custom component) from the issue.
(message by CodeOwnersMention)
solax documentation solax source (message by IssueLinks)
I now see there are some "unavailable" entries there that likely cause this, which were not happening in 2024.5.4. Is that a new behavior in the integration or HA itself?
This is caused by https://github.com/home-assistant/core/pull/117767
It should only happen after the solax library is unable to receive a valid response three times in a row. I think updating your automation is the best way forward.
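A minimal sketch of the availability behavior described above (illustrative only, not the actual Home Assistant coordinator code; `Poller`, `MAX_CONSECUTIVE_FAILURES`, and `record_poll` are made-up names): the sensor only flips to unavailable after three consecutive failed polls, and recovers on the first valid response.

```python
# Illustrative sketch of the described behavior, NOT the real HA/solax code:
# the entity is marked unavailable only after three consecutive failed polls,
# and a single successful poll makes it available again.
MAX_CONSECUTIVE_FAILURES = 3


class Poller:
    def __init__(self):
        self.failures = 0
        self.available = True

    def record_poll(self, success: bool) -> None:
        if success:
            # Any valid response resets the failure counter.
            self.failures = 0
            self.available = True
        else:
            self.failures += 1
            if self.failures >= MAX_CONSECUTIVE_FAILURES:
                self.available = False
```

Under this model, brief single-poll hiccups would not produce "unavailable" entries; only three failed polls in a row would.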
Hm... in my experience, Solax inverters (I have had two) are notoriously bad with Wi-Fi and sometimes stop responding for a few seconds or have multi-second pings. Is there a way to override this behavior somehow?
I asked Solax support to upgrade the Wi-Fi firmware, and it changed from this:
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=1 ttl=254 time=10463 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=2 ttl=254 time=9462 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=3 ttl=254 time=8465 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=4 ttl=254 time=7464 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=5 ttl=254 time=6463 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=6 ttl=254 time=5460 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=7 ttl=254 time=4459 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=8 ttl=254 time=3458 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=9 ttl=254 time=2458 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=10 ttl=254 time=1457 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=11 ttl=254 time=457 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=12 ttl=254 time=3861 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=13 ttl=254 time=2861 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=14 ttl=254 time=1860 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=15 ttl=254 time=852 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=16 ttl=254 time=79.3 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=17 ttl=254 time=102 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=18 ttl=254 time=7877 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=19 ttl=254 time=6878 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=20 ttl=254 time=5879 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=21 ttl=254 time=4880 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=22 ttl=254 time=3880 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=23 ttl=254 time=2884 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=24 ttl=254 time=1887 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=25 ttl=254 time=888 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=26 ttl=254 time=108 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=27 ttl=254 time=51.1 ms
...
--- solax-x1-hybrid-g4.localdomain ping statistics ---
186 packets transmitted, 181 received, 2.68817% packet loss, time 185113ms
rtt min/avg/max/mdev = 2.088/3230.751/10462.508/2979.903 ms
To this:
PING solax-x1-hybrid-g4.localdomain (192.168.2.13) 56(84) bytes of data.
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=1 ttl=254 time=2.19 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=2 ttl=254 time=2.14 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=3 ttl=254 time=1.94 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=4 ttl=254 time=2.17 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=5 ttl=254 time=2.26 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=6 ttl=254 time=2.05 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=7 ttl=254 time=4.14 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=8 ttl=254 time=4.10 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=9 ttl=254 time=1.97 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=10 ttl=254 time=2.16 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=11 ttl=254 time=1.97 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=12 ttl=254 time=1.85 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=13 ttl=254 time=1.96 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=14 ttl=254 time=3.34 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=15 ttl=254 time=1.97 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=16 ttl=254 time=1.89 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=17 ttl=254 time=8.33 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=18 ttl=254 time=1.91 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=19 ttl=254 time=1.93 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=20 ttl=254 time=1.83 ms
64 bytes from solax-x1-hybrid-g4.localdomain (192.168.2.13): icmp_seq=21 ttl=254 time=9.19 ms
^C
--- solax-x1-hybrid-g4.localdomain ping statistics ---
21 packets transmitted, 21 received, 0% packet loss, time 19992ms
rtt min/avg/max/mdev = 1.829/2.917/9.190/2.014 ms
With everything else being the same.
Clearly the buggy Wi-Fi firmware was the reason before. Closing now.
Still happening 😕
There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest Home Assistant version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.
This is actually correct behavior. When the integration responds again / gets back online, the state goes from unavailable (while it was not responding) back to its value.
As Home Assistant can't possibly know the states during the offline period (no matter how short it was), it will trigger again on the below value.
If this behavior is not what you are looking for, you could guard against it with a condition:
- condition: template
value_template: >
{{ trigger.from_state is not none and trigger.from_state.state != 'unavailable' }}
In your example, it would look like:
alias: Power on
description: ""
triggers:
- trigger: numeric_state
entity_id: sensor.solax_x1_hybrid_g4_grid_power
below: -1
conditions:
- condition: template
value_template: >
{{ trigger.from_state is not none and trigger.from_state.state != 'unavailable' }}
actions:
- device_id: 77469a77222a02e4022c230d82564dd6
domain: mobile_app
type: notify
message: ⚡ Power on
mode: single
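The template guard above boils down to a single predicate on the previous state string. A small Python stand-in (illustrative only; `should_fire` is not a Home Assistant API) makes the three cases it handles explicit:

```python
def should_fire(from_state):
    """Mirror the template guard above: fire only when a previous state
    exists and it was not 'unavailable'. Illustrative helper, not HA API.
    `from_state` stands in for trigger.from_state.state (a string), or
    None when there is no previous state at all."""
    return from_state is not None and from_state != "unavailable"


# Normal transition from a real reading: the automation should run.
print(should_fire("-5"))            # True
# Recovery from an outage: suppress the spurious trigger.
print(should_fire("unavailable"))   # False
# No previous state recorded at all: suppress as well.
print(should_fire(None))            # False
```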
Closing this issue on this end, as this would be expected and correct behavior.
../Frenck