SonoffLAN
Integration still requires reloading after 3.4 release
Hello,
After the new release, I had hoped it would fix my reload issues, but it didn't :-( So I am opening this issue to try to provide the right amount of detail, even though it seems I don't have much to give at this point.
Relates to https://github.com/AlexxIT/SonoffLAN/issues/1072
General info
I am running the following on a RaspberryPi3.
OS Version: Home Assistant OS 9.5
Home Assistant Core: 2023.2.2
This issue arose all of a sudden some time ago; the issue linked above was the one I found already opened. None of the devices in the impacted list below had any firmware upgrade between the time it worked and the time it stopped working. However, Home Assistant updates happened fairly regularly, and sadly I didn't write down the HA versions :-/
Devices impacted by this issue, all on the latest firmware:
- SonOff ZBBridge
- 5+ devices connected to the above:
- 1 SonOff temperature sensor SNZB-02
- 2 SonOff motion sensors SNZB-03 (ultimately driving lights)
- 2 SonOff S26R1 elec plugs (connected to heaters)
- Not sure how Hue plays with all of this, but maybe some lights too...! It's all magic, right...
Subpar workaround
My current workaround, which is not perfect as it impedes some other automations, is the following automation that reloads the integration every 2 minutes. Very often, the integration stops updating within that time window:
alias: HACK - Force reload of SonOff integration
description: Force SonOff reloading
trigger:
  - platform: time_pattern
    minutes: "/2"
action:
  - service: shell_command.reload_sonoff_integration
    data: {}
mode: single
with the following shell script (the config entry ID is passed as the first argument):
#!/bin/bash
set -e
# Usage: reload_sonoff_integration.sh <config_entry_id>
[[ -z $1 ]] && exit 1
INTEGRATION_ENTRY=$1
# BEARER_TOKEN is defined in this secrets file
source /config/script_secrets
curl -X POST -H "Authorization: Bearer ${BEARER_TOKEN}" \
  "http://192.168.10.168:8123/api/config/config_entries/entry/${INTEGRATION_ENTRY}/reload"
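For anyone who prefers to drive the same reload from Python instead of curl, here is a minimal sketch using only the standard library. The base URL, entry ID, and token are placeholders for your own setup; the endpoint path is the same one the curl command above hits.

```python
# Hypothetical Python equivalent of the curl-based reload call above.
# "ENTRY_ID" and "TOKEN" are placeholders, not real values.
import urllib.request


def build_reload_request(base_url: str, entry_id: str, token: str) -> urllib.request.Request:
    """Build the POST request that asks Home Assistant to reload one config entry."""
    url = f"{base_url}/api/config/config_entries/entry/{entry_id}/reload"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )


req = build_reload_request("http://192.168.10.168:8123", "ENTRY_ID", "TOKEN")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)  # uncomment to actually fire the reload
```

This avoids needing a secrets file sourced by bash; the token can come from an environment variable instead.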
Update to 3.4
I updated this morning to 3.4 and restarted HA. I disabled my workaround, but the same symptoms appeared immediately.
The diagnostics for the integration do not indicate anything special, except what seems to be a one-time error upon HA restart after the update. I'll still put it here; it might be that my weak Raspberry Pi could not cope with the restart load while processing the response... There are no further occurrences of it, and if the integration is in "cloud" mode only, it fetches the devices from the cloud with no error (even in auto mode, see below... but I wanted to make sure it didn't get them locally, by forcing the mode).
2023-02-06 15:26:04.001 WARNING (MainThread) [custom_components.sonoff] Can't login with mode: auto
Traceback (most recent call last):
File "/config/custom_components/sonoff/__init__.py", line 198, in async_setup_entry
await registry.cloud.login(username, password)
File "/config/custom_components/sonoff/core/ewelink/cloud.py", line 132, in login
r = await self.session.post(
File "/usr/local/lib/python3.10/site-packages/aiohttp/client.py", line 559, in _request
await resp.start(conn)
File "/usr/local/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 893, in start
with self._timer:
File "/usr/local/lib/python3.10/site-packages/aiohttp/helpers.py", line 721, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
2023-02-06 15:26:04.023 WARNING (MainThread) [homeassistant.config_entries] Config entry '[email protected]' for sonoff integration not ready yet: None; Retrying in background
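The traceback above shows a plain `asyncio.TimeoutError` bubbling out of the aiohttp login call, after which HA retries the config entry in the background. As a minimal sketch (this is NOT the integration's actual code; `login` is a stand-in coroutine), the retry-with-backoff pattern around such a timeout looks like this:

```python
# Sketch of retrying a login coroutine that can time out, as in the
# traceback above. All names and timings here are illustrative.
import asyncio


async def login_with_retry(login, attempts: int = 3,
                           timeout: float = 5.0, base_delay: float = 1.0) -> bool:
    """Retry a login coroutine a few times, backing off after each timeout."""
    for attempt in range(attempts):
        try:
            await asyncio.wait_for(login(), timeout=timeout)
            return True
        except asyncio.TimeoutError:
            # exponential backoff before the next attempt
            await asyncio.sleep(base_delay * 2 ** attempt)
    return False


async def demo() -> bool:
    calls = {"n": 0}

    async def flaky_login():
        calls["n"] += 1
        if calls["n"] < 3:            # first two attempts hang past the timeout
            await asyncio.sleep(1)

    # tiny timeouts so the demo finishes instantly
    return await login_with_retry(flaky_login, timeout=0.01, base_delay=0.001)


print(asyncio.run(demo()))  # → True (succeeds on the third attempt)
```

If the one-time error really was just restart load on the Pi, a scheme like this would absorb it, which matches the "Retrying in background" warning from `config_entries`.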
Auto mode
I enabled debug mode for the integration. A typical restart shows the following, with no error:
2023-02-06 22:04:00 [D] CLOUD True => None
2023-02-06 22:04:01 [D] 18 devices loaded from Cloud
2023-02-06 22:04:01 [D] a480045bf0 UIID 1770 | {'subDevId': 'b4e4ce22004b12001770', 'parentid': '100155ebc5', 'battery': 100, 'supportPowConfig': 1, 'trigTime': '1675711617104', 'ASYNC': 2, 'temperature': '2222', 'humidity': '3256', 'switch': 'off', 'timeZone': 1}
2023-02-06 22:04:01 [D] a480045b42 UIID 2026 | {'subDevId': '8de8b423004b12002026', 'parentid': '100155ebc5', 'motion': 0, 'trigTime': '1675716235570', 'battery': 61, 'supportPowConfig': 1}
2023-02-06 22:04:01 [D] 100139e6ba UIID 0137 | {'version': 8, 'init': 1, 'fwVersion': '1000.2.1050', 'switch': 'off', 'colorR': 255, 'colorG': 0, 'colorB': 0, 'mode': 1, 'bright': 1, 'light_type': 1, 'timeZone': 1, 'lineSequence': {'1': 'b', '2': 'g', '3': 'r'}, 'speed27': 50, 'bright27': 50, 'speed28': 50, 'bright28': 50, 'speed29': 50, 'bright29': 50, 'speed30': 50, 'bright30': 50}
2023-02-06 22:04:01 [D] 100139ef32 UIID 0137 | {'version': 8, 'init': 1, 'fwVersion': '1000.2.1050', 'switch': 'off', 'colorR': 0, 'colorG': 0, 'colorB': 255, 'mode': 1, 'bright': 30, 'light_type': 1, 'speed27': 50, 'bright27': 50, 'speed28': 50, 'bright28': 50, 'speed29': 50, 'bright29': 50, 'speed30': 50, 'bright30': 50, 'timeZone': 1, 'lineSequence': {'1': 'b', '2': 'g', '3': 'r'}, 'NO_SEND_TO_APP': 0, 'speed14': 40, 'bright14': 50, 'speed07': 30, 'bright07': 70, 'speed18': 50, 'bright18': 50, 'speed20': 60, 'bright20': 85}
2023-02-06 22:04:01 [D] 1000cabd0f UIID 0001 | {'version': 8, 'sledOnline': 'on', 'switch': 'off', 'fwVersion': '2.9.0', 'rssi': -58, 'startup': 'off', 'init': 1, 'pulse': 'off', 'pulseWidth': 500}
2023-02-06 22:04:01 [D] 1001100395 UIID 0059 | {'version': 8, 'only_device': {'ota': 'success'}, 'sledOnline': 'on', 'rssi': -70, 'fwVersion': '3.4.3', 'switch': 'on', 'light_type': 1, 'colorR': 255, 'colorG': 96, 'colorB': 175, 'bright': 82, 'mode': 1, 'speed': 100, 'sensitive': 10}
2023-02-06 22:04:01 [D] a480037c1a UIID 2026 | {'subDevId': 'bdf81b25004b12002026', 'parentid': '100155ebc5', 'motion': 0, 'trigTime': '1675716238599', 'battery': 100, 'supportPowConfig': 1}
2023-02-06 22:04:01 [D] 1000bfaed8 UIID 0001 | {'version': 8, 'sledOnline': 'on', 'switch': 'off', 'fwVersion': '3.5.1', 'rssi': -70, 'startup': 'off', 'init': 1, 'pulse': 'off', 'pulseWidth': 500, 'only_device': {'ota': 'success'}}
2023-02-06 22:04:01 [D] 100155ebc5 UIID 0066 | {'version': 8, 'subDevNum': 3, 'subDevMaxNum': 32, 'sledOnline': 'on', 'zled': 'off', 'fwVersion': '1.7.0', 'subDevices': [{'subDevId': 'bdf81b25004b12002026', 'deviceid': 'a480037c1a', 'uiid': '2026', 'index': 0}, {'subDevId': '8de8b423004b12002026', 'deviceid': 'a480045b42', 'uiid': '2026', 'index': 0}, {'subDevId': 'b4e4ce22004b12001770', 'deviceid': 'a480045bf0', 'uiid': '1770', 'index': 0}], 'rssi': -75, 'only_device': {'ota': 'success'}, 'addSubDevState': 'off', 'addDevTime': 180, 'timeZone': 1}
2023-02-06 22:04:01 [D] 1000dc0dbb UIID 0077 | {'version': 8, 'switches': [{'switch': 'off', 'outlet': 0}, {'switch': 'off', 'outlet': 1}, {'switch': 'off', 'outlet': 2}, {'switch': 'off', 'outlet': 3}], 'configure': [{'startup': 'off', 'outlet': 0}, {'startup': 'off', 'outlet': 1}, {'startup': 'off', 'outlet': 2}, {'startup': 'off', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'lock': 0, 'sledOnline': 'on', 'rssi': -69, 'fwVersion': '3.7.1', 'senMode': {'enabled': 0, 'dTime': 10, 'tUnit': 's', 'outlets': [0, 1, 2, 3], 'effectives': [{'outlet': 0, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}, {'outlet': 1, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}, {'outlet': 2, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}, {'outlet': 3, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}]}, 'savMode': {'enabled': 0, 'dTime': 1, 'tUnit': 'm', 'outlets': [0, 1, 2, 3]}, 'alertMode': {'enabled': 0}, 'selfApikey': 'xxxxxxxx'}
[... removed Hue stuff ...]
2023-02-06 22:04:01 [D] AUTO mode start
2023-02-06 22:04:01 [D] 1000bfaed8 <= Local3 | {'switch': 'off', 'startup': 'off', 'pulse': 'off', 'pulseWidth': 500, 'rssi': -70} | 26
2023-02-06 22:04:01 [D] 1000cabd0f <= Local3 | {'switch': 'off', 'startup': 'off', 'pulse': 'off', 'sledOnline': 'on', 'pulseWidth': 500, 'rssi': -58} | 4
2023-02-06 22:04:01 [D] CLOUD None => True
2023-02-06 22:04:01 [D] Add 35 entities
As I've just tested, there appears to be no error in the HA log nor in the integration's debug log now... :-/ It all just fails silently.
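Since the failure is silent, a cheaper variant of my blind 2-minute reload hack would be to reload only when a sensor looks stale, i.e. when its last update is older than some threshold. A tiny sketch of that check (the timestamps and the 5-minute threshold are illustrative assumptions, not values from the integration):

```python
# Staleness check behind a "reload only when needed" watchdog idea.
# The threshold is an assumption; tune it to your sensors' update rate.
from datetime import datetime, timedelta, timezone


def needs_reload(last_updated: datetime, now: datetime,
                 max_age: timedelta = timedelta(minutes=5)) -> bool:
    """True when an entity has not updated within max_age and looks stale."""
    return now - last_updated > max_age


now = datetime(2023, 2, 6, 22, 30, tzinfo=timezone.utc)
print(needs_reload(now - timedelta(minutes=12), now))  # → True (stale, reload)
print(needs_reload(now - timedelta(minutes=1), now))   # → False (still fresh)
```

In HA terms this would compare an entity's `last_updated` against now before firing the reload shell command, instead of reloading unconditionally every 2 minutes.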
Cloud mode
Same symptoms as auto mode.
Local mode
I guess the first thing that seems weird to me is that, upon enabling this mode, it still loads from the cloud anyway, before "LOCAL mode start".
Outside of that, the devices show as "unavailable".
2023-02-06 22:29:13 [D] CLOUD True => None
2023-02-06 22:29:14 [D] 18 devices loaded from Cloud
2023-02-06 22:29:14 [D] a480045bf0 UIID 1770 | {'subDevId': 'b4e4ce22004b12001770', 'parentid': '100155ebc5', 'battery': 100, 'supportPowConfig': 1, 'trigTime': '1675718815574', 'ASYNC': 2, 'temperature': '2202', 'humidity': '3148', 'switch': 'off', 'timeZone': 1}
2023-02-06 22:29:14 [D] a480045b42 UIID 2026 | {'subDevId': '8de8b423004b12002026', 'parentid': '100155ebc5', 'motion': 0, 'trigTime': '1675716235570', 'battery': 61, 'supportPowConfig': 1}
2023-02-06 22:29:14 [D] 100139e6ba UIID 0137 | {'version': 8, 'init': 1, 'fwVersion': '1000.2.1050', 'switch': 'off', 'colorR': 255, 'colorG': 0, 'colorB': 0, 'mode': 1, 'bright': 1, 'light_type': 1, 'timeZone': 1, 'lineSequence': {'1': 'b', '2': 'g', '3': 'r'}, 'speed27': 50, 'bright27': 50, 'speed28': 50, 'bright28': 50, 'speed29': 50, 'bright29': 50, 'speed30': 50, 'bright30': 50}
2023-02-06 22:29:14 [D] 100139ef32 UIID 0137 | {'version': 8, 'init': 1, 'fwVersion': '1000.2.1050', 'switch': 'off', 'colorR': 0, 'colorG': 0, 'colorB': 255, 'mode': 1, 'bright': 30, 'light_type': 1, 'speed27': 50, 'bright27': 50, 'speed28': 50, 'bright28': 50, 'speed29': 50, 'bright29': 50, 'speed30': 50, 'bright30': 50, 'timeZone': 1, 'lineSequence': {'1': 'b', '2': 'g', '3': 'r'}, 'NO_SEND_TO_APP': 0, 'speed14': 40, 'bright14': 50, 'speed07': 30, 'bright07': 70, 'speed18': 50, 'bright18': 50, 'speed20': 60, 'bright20': 85}
2023-02-06 22:29:14 [D] 1000cabd0f UIID 0001 | {'version': 8, 'sledOnline': 'on', 'switch': 'off', 'fwVersion': '2.9.0', 'rssi': -58, 'startup': 'off', 'init': 1, 'pulse': 'off', 'pulseWidth': 500}
2023-02-06 22:29:14 [D] 1001100395 UIID 0059 | {'version': 8, 'only_device': {'ota': 'success'}, 'sledOnline': 'on', 'rssi': -70, 'fwVersion': '3.4.3', 'switch': 'on', 'light_type': 1, 'colorR': 255, 'colorG': 96, 'colorB': 175, 'bright': 82, 'mode': 1, 'speed': 100, 'sensitive': 10}
2023-02-06 22:29:14 [D] a480037c1a UIID 2026 | {'subDevId': 'bdf81b25004b12002026', 'parentid': '100155ebc5', 'motion': 1, 'trigTime': '1675718892117', 'battery': 100, 'supportPowConfig': 1}
2023-02-06 22:29:14 [D] 1000bfaed8 UIID 0001 | {'version': 8, 'sledOnline': 'on', 'switch': 'off', 'fwVersion': '3.5.1', 'rssi': -70, 'startup': 'off', 'init': 1, 'pulse': 'off', 'pulseWidth': 500, 'only_device': {'ota': 'success'}}
2023-02-06 22:29:14 [D] 100155ebc5 UIID 0066 | {'version': 8, 'subDevNum': 3, 'subDevMaxNum': 32, 'sledOnline': 'on', 'zled': 'off', 'fwVersion': '1.7.0', 'subDevices': [{'subDevId': 'bdf81b25004b12002026', 'deviceid': 'a480037c1a', 'uiid': '2026', 'index': 0}, {'subDevId': '8de8b423004b12002026', 'deviceid': 'a480045b42', 'uiid': '2026', 'index': 0}, {'subDevId': 'b4e4ce22004b12001770', 'deviceid': 'a480045bf0', 'uiid': '1770', 'index': 0}], 'rssi': -75, 'only_device': {'ota': 'success'}, 'addSubDevState': 'off', 'addDevTime': 180, 'timeZone': 1}
2023-02-06 22:29:14 [D] 1000dc0dbb UIID 0077 | {'version': 8, 'switches': [{'switch': 'off', 'outlet': 0}, {'switch': 'off', 'outlet': 1}, {'switch': 'off', 'outlet': 2}, {'switch': 'off', 'outlet': 3}], 'configure': [{'startup': 'off', 'outlet': 0}, {'startup': 'off', 'outlet': 1}, {'startup': 'off', 'outlet': 2}, {'startup': 'off', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'lock': 0, 'sledOnline': 'on', 'rssi': -69, 'fwVersion': '3.7.1', 'senMode': {'enabled': 0, 'dTime': 10, 'tUnit': 's', 'outlets': [0, 1, 2, 3], 'effectives': [{'outlet': 0, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}, {'outlet': 1, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}, {'outlet': 2, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}, {'outlet': 3, 'enabled': 0, 'days': [0, 1, 2, 3, 4, 5, 6], 'effIndex': [{'index': 0, 'enabled': 1, 'from': '00:00', 'to': '23:59'}, {'index': 1, 'enabled': 0, 'from': '00:00', 'to': '23:59'}]}]}, 'savMode': {'enabled': 0, 'dTime': 1, 'tUnit': 'm', 'outlets': [0, 1, 2, 3]}, 'alertMode': {'enabled': 0}, 'selfApikey': 'xxxxx'}
[ ... hue stuff ... ]
2023-02-06 22:29:14 [D] LOCAL mode start
2023-02-06 22:29:14 [D] 1000cabd0f <= Local3 | {'switch': 'off', 'startup': 'off', 'pulse': 'off', 'sledOnline': 'on', 'pulseWidth': 500, 'rssi': -58} | 4
2023-02-06 22:29:14 [D] 1000cabd0f <= Local3 | {'switch': 'off', 'startup': 'off', 'pulse': 'off', 'sledOnline': 'on', 'pulseWidth': 500, 'rssi': -58} | 4
2023-02-06 22:29:14 [D] 1000bfaed8 <= Local3 | {'switch': 'off', 'startup': 'off', 'pulse': 'off', 'pulseWidth': 500, 'rssi': -70} | 26
2023-02-06 22:29:14 [D] 1000bfaed8 <= Local3 | {'switch': 'off', 'startup': 'off', 'pulse': 'off', 'pulseWidth': 500, 'rssi': -70} | 26
2023-02-06 22:29:17 [D] Add 35 entities
If there is anything more I can do to provide information, please let me know how; I'll definitely get to it.
Cheers.
Same here with 3 ZIGBEE_DOOR_AND_WINDOW_SENSOR devices.
I have the same issue with a temperature sensor connected to a TH10, in local mode.
I hadn't noticed this until today, when I put my Home Assistant Raspberry Pi on a mini UPS that keeps it powered on all the time. I have multiple power outages per day in my country, and I put the Pi on the UPS so that I didn't have to wait each time for the Pi to boot up to regain control of my Sonoff lights and devices when power is restored. Now the problem is that when power is restored and my Sonoff devices get power and go online, the integration does not inform Home Assistant that their state changed from unavailable to on. Previously, when I wasn't using a UPS, the Pi would power on alongside the Sonoff devices and presumably reload the integration on boot, so everything would refresh on each Pi boot. Now that my Pi stays on 24/7, the fact that the Sonoff reload isn't working is obvious.
After 2 days I can say that my problem is actually quite different, apart from the error, which is the same:
"exception": "Traceback (most recent call last):\n File \"/config/custom_components/sonoff/core/ewelink/cloud.py\", line 318, in connect\n raise Exception(resp)\nException: {'error': 406, 'apikey': 'b1552c13-f6d1-4993-b7b8-fe81f07407a0', 'sequence': '1675840624863', 'actionName': 'userOnline'}\n",
"count": 3,
"first_occurred": 1675840624.8961823
}
],
"device": {
"uiid": 3026,
"params": {
"bindInfos": "***",
"subDevId": "87ae9028004b12003026",
"parentid": "1001703423",
"lock": 0,
"trigTime": "1675837586829",
"battery": 100,
"supportPowConfig": 1
},
"model": "ZIGBEE_DOOR_AND_WINDOW_SENSOR",
"online": true,
"localtype": null,
"host": null,
"deviceid": "a48004640f"
}
The issue occurs only after a restart; restarting again, it works fine.
This is actually something I encounter too. Many power outages caused a lot of issues, so I added a UPS as well, and I am seeing the sensor unavailable until I reload the integration.
ZBBridge works only via cloud. There may be a problem with automatically reconnecting to the cloud if the cloud returns errors. I will check this point.
I think I have had the same problem since the upgrade to v3.4.0 I did yesterday. Approximately 1 hour after restarting the integration, the entities suddenly become unavailable (I have an RF Bridge 433 and a remote), and I found this in the debug log (the device goes offline at the end):
2023-02-08 08:51:59 [D] SysInfo: {'installation_type': 'Home Assistant OS', 'version': '2023.2.3', 'dev': False, 'hassio': True, 'virtualenv': False, 'python_version': '3.10.7', 'docker': True, 'arch': 'x86_64', 'timezone': 'Europe/Paris', 'os_name': 'Linux', 'os_version': '5.15.90', 'user': 'root', 'supervisor': '2023.01.1', 'host_os': 'Home Assistant OS 9.5', 'docker_version': '20.10.22', 'chassis': 'vm', 'sonoff_version': '3.4.0 (5406fa7)'}
2023-02-08 08:51:59 [D] 1 devices loaded from Cloud
2023-02-08 08:51:59 [D] 1000f22239 UIID 0028 | {'version': 8, 'sledOnline': 'on', 'init': 1, 'fwVersion': '3.5.2', 'rssi': -70, 'setState': 'arm', 'rfList': [{'rfChl': 0, 'rfVal': '1E0000FA0302929148'}, {'rfChl': 1, 'rfVal': '1DF6010E02F892914C'}, {'rfChl': 2, 'rfVal': '1E0A00FA0302929144'}, {'rfChl': 3, 'rfVal': '1E1401040302929149'}, {'rfChl': 4, 'rfVal': '1DEC011802EE929142'}, {'rfChl': 5, 'rfVal': '1E1E01040302929145'}, {'rfChl': 6, 'rfVal': '1E6401040302929141'}, {'rfChl': 7, 'rfVal': '1E50010E02F8929143'}, {'rfChl': 8, 'rfVal': '1E32010E02F8DF1518'}, {'rfChl': 9, 'rfVal': '1E5A010E02EEDF151C'}, {'rfChl': 10, 'rfVal': '1E2801040302DF1511'}, {'rfChl': 11, 'rfVal': '1E8C010E02F8DF1513'}, {'rfChl': 12, 'rfVal': '1E64010E02F8DF1514'}, {'rfChl': 13, 'rfVal': '1E78010E02F8DF1519'}], 'only_device': {'ota': 'success'}, 'cmd': 'trigger', 'rfChl': 8, 'rfTrig0': '2022-05-04T18:37:05.000Z', 'rfTrig4': '2023-01-27T14:06:10.000Z', 'rfTrig1': '2022-09-07T20:07:32.000Z', 'rfTrig3': '2022-11-06T14:19:06.000Z', 'rfTrig2': '2022-12-02T18:56:51.000Z', 'rfTrig5': '2023-01-25T18:38:13.000Z', 'rfTrig7': '2023-01-23T08:07:48.000Z', 'rfTrig6': '2021-09-24T16:39:56.000Z', 'rfTrig8': '2023-02-07T21:11:34.000Z', 'rfTrig9': '2023-02-07T21:11:27.000Z', 'rfTrig10': '2022-10-11T05:52:41.000Z', 'rfTrig11': '2022-04-03T12:40:42.000Z', 'rfTrig12': '2023-02-07T23:31:06.000Z', 'rfTrig13': '2023-02-07T23:31:08.000Z'}
2023-02-08 08:51:59 [D] LOCAL mode start
2023-02-08 08:51:59 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig13': '2023-02-07T23:31:08.000Z'} | 21
2023-02-08 08:52:02 [D] Add 3 entities
2023-02-08 08:52:24 [D] CLOUD None => None
2023-02-08 08:52:24 [D] 1 devices loaded from Cloud
2023-02-08 08:52:24 [D] 1000f22239 UIID 0028 | {'version': 8, 'sledOnline': 'on', 'init': 1, 'fwVersion': '3.5.2', 'rssi': -70, 'setState': 'arm', 'rfList': [{'rfChl': 0, 'rfVal': '1E0000FA0302929148'}, {'rfChl': 1, 'rfVal': '1DF6010E02F892914C'}, {'rfChl': 2, 'rfVal': '1E0A00FA0302929144'}, {'rfChl': 3, 'rfVal': '1E1401040302929149'}, {'rfChl': 4, 'rfVal': '1DEC011802EE929142'}, {'rfChl': 5, 'rfVal': '1E1E01040302929145'}, {'rfChl': 6, 'rfVal': '1E6401040302929141'}, {'rfChl': 7, 'rfVal': '1E50010E02F8929143'}, {'rfChl': 8, 'rfVal': '1E32010E02F8DF1518'}, {'rfChl': 9, 'rfVal': '1E5A010E02EEDF151C'}, {'rfChl': 10, 'rfVal': '1E2801040302DF1511'}, {'rfChl': 11, 'rfVal': '1E8C010E02F8DF1513'}, {'rfChl': 12, 'rfVal': '1E64010E02F8DF1514'}, {'rfChl': 13, 'rfVal': '1E78010E02F8DF1519'}], 'only_device': {'ota': 'success'}, 'cmd': 'trigger', 'rfChl': 8, 'rfTrig0': '2022-05-04T18:37:05.000Z', 'rfTrig4': '2023-01-27T14:06:10.000Z', 'rfTrig1': '2022-09-07T20:07:32.000Z', 'rfTrig3': '2022-11-06T14:19:06.000Z', 'rfTrig2': '2022-12-02T18:56:51.000Z', 'rfTrig5': '2023-01-25T18:38:13.000Z', 'rfTrig7': '2023-01-23T08:07:48.000Z', 'rfTrig6': '2021-09-24T16:39:56.000Z', 'rfTrig8': '2023-02-07T21:11:34.000Z', 'rfTrig9': '2023-02-07T21:11:27.000Z', 'rfTrig10': '2022-10-11T05:52:41.000Z', 'rfTrig11': '2022-04-03T12:40:42.000Z', 'rfTrig12': '2023-02-07T23:31:06.000Z', 'rfTrig13': '2023-02-07T23:31:08.000Z'}
2023-02-08 08:52:24 [D] LOCAL mode start
2023-02-08 08:52:24 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig13': '2023-02-07T23:31:08.000Z'} | 21
2023-02-08 08:52:27 [D] Add 3 entities
2023-02-08 08:53:29 [D] 1000f22239 => Local4 | {'cmd': 'transmit', 'rfChl': 0} <= {'seq': 22, 'sequence': '1675842809000', 'error': 0}
2023-02-08 09:08:31 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:32.000Z'} | 22
2023-02-08 09:08:34 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 09:23:31 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 09:37:35 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 09:51:39 [D] 1000f22239 <= Local3 | {'sledOnline': 'on', 'arming': True, 'rfTrig8': '2023-02-08T08:08:34.000Z'} | 23
2023-02-08 10:10:32 [D] 1000f22239 <= Local0 | {'online': False} |
2023-02-08 10:10:43 [D] 1000f22239 => Local4 | {'cmd': 'info'} !! Timeout 10
2023-02-08 10:10:43 [D] 1000f22239 !! Local4 | Device offline
and in the config-entry file, the IP in the host field is replaced by "null" (the IP is correct right after a restart):
"model": "RFBridge433",
"online": true,
"localtype": "rf",
"host": null
I have to reload the integration, but 1 hour later the problem occurs again.
(I did also a firmware upgrade of the bridge yesterday, but with no impact on the problem)
@AlexxIT if you tell me what debug statements to put in cloud.py or elsewhere, I'll update the scripts manually to help out.
I have the same issue. Before the latest upgrade the Sonoff integration was rock solid, but now I have to restart it all the time to reconnect to all my devices. I only run in local mode.
I also noticed the problem when my ventilation stopped turning on. A restart helps, but not for long. I tried both Local and Cloud mode. It keeps dropping out!
I have it configured in local mode, and the devices always go unresponsive after 10-15 minutes until I toggle a device in the app and reload. It also seems to matter whether you recently used the eWeLink phone app.
It's hella weird.
Hi @AlexxIT, do you have any news about this strange problem? Or did you have a chance to check the errors? Just to know whether the problem affects everyone and you suggest waiting before installing the latest release, or whether we can go ahead because only some installations are affected. wbr
I have the same issue after updating to 3.4.0. I roll back to 3.3.1, and it works fine.
After the update, my RE5V1C relay started dropping off, and spontaneously at that. I tried both auto mode and local mode. The time until it drops ranges from 10 minutes to 1.5 hours.
After reinstalling the integration, the relay disappeared entirely in every mode...
2023-02-12 10:36:17 [D] SysInfo: {'installation_type': 'Home Assistant Supervised', 'version': '2023.2.3', 'dev': False, 'hassio': True, 'virtualenv': False, 'python_version': '3.10.7', 'docker': True, 'arch': 'x86_64', 'timezone': 'Europe/Moscow', 'os_name': 'Linux', 'os_version': '5.10.0-19-amd64', 'user': 'root', 'supervisor': '2023.01.1', 'host_os': 'Debian GNU/Linux 11 (bullseye)', 'docker_version': '20.10.21', 'chassis': 'desktop', 'sonoff_version': '3.4.0 (5406fa7)'}
2023-02-12 10:36:17 [D] 1 devices loaded from Cloud
2023-02-12 10:36:17 [D] LOCAL mode start
2023-02-12 10:36:17 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:36:33 [D] CLOUD None => None
2023-02-12 10:36:33 [D] 1 devices loaded from Cloud
2023-02-12 10:36:33 [D] LOCAL mode start
2023-02-12 10:36:33 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:37:00 [D] CLOUD None => None
2023-02-12 10:37:00 [D] 1 devices loaded from Cloud
2023-02-12 10:37:00 [D] AUTO mode start
2023-02-12 10:37:00 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:37:01 [D] CLOUD None => True
2023-02-12 10:37:08 [D] CLOUD True => None
2023-02-12 10:37:09 [D] 1 devices loaded from Cloud
2023-02-12 10:37:09 [D] AUTO mode start
2023-02-12 10:37:09 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:37:09 [D] CLOUD None => True
2023-02-12 10:39:54 [D] CLOUD True => None
2023-02-12 10:39:54 [D] 1 devices loaded from Cloud
2023-02-12 10:39:54 [D] AUTO mode start
2023-02-12 10:39:54 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:39:55 [D] CLOUD None => True
2023-02-12 10:41:20 [D] CLOUD True => None
2023-02-12 10:41:21 [D] 1 devices loaded from Cloud
2023-02-12 10:41:21 [D] AUTO mode start
2023-02-12 10:41:21 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:41:21 [D] CLOUD None => True
2023-02-12 10:44:04 [D] CLOUD True => None
2023-02-12 10:44:04 [D] 1 devices loaded from Cloud
2023-02-12 10:44:04 [D] CLOUD mode start
2023-02-12 10:44:05 [D] CLOUD None => True
2023-02-12 10:44:15 [D] CLOUD True => None
2023-02-12 10:44:16 [D] 1 devices loaded from Cloud
2023-02-12 10:44:16 [D] CLOUD mode start
2023-02-12 10:44:16 [D] CLOUD None => True
2023-02-12 10:46:15 [D] CLOUD True => None
2023-02-12 10:46:53 [D] 1 devices loaded from Cloud
2023-02-12 10:46:53 [D] AUTO mode start
2023-02-12 10:46:53 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:46:54 [D] CLOUD None => True
2023-02-12 10:47:19 [D] CLOUD True => None
2023-02-12 10:47:20 [D] 1 devices loaded from Cloud
2023-02-12 10:47:20 [D] AUTO mode start
2023-02-12 10:47:20 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:47:21 [D] CLOUD None => True
2023-02-12 10:47:30 [D] CLOUD True => None
2023-02-12 10:47:31 [D] 1 devices loaded from Cloud
2023-02-12 10:47:31 [D] AUTO mode start
2023-02-12 10:47:31 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:47:31 [D] CLOUD None => True
2023-02-12 10:48:19 [D] CLOUD True => None
2023-02-12 10:48:19 [D] 1 devices loaded from Cloud
2023-02-12 10:48:19 [D] AUTO mode start
2023-02-12 10:48:19 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:48:20 [D] CLOUD None => True
2023-02-12 10:48:44 [D] CLOUD True => None
2023-02-12 10:48:44 [D] 1 devices loaded from Cloud
2023-02-12 10:48:44 [D] CLOUD mode start
2023-02-12 10:48:45 [D] CLOUD None => True
2023-02-12 10:50:04 [D] CLOUD True => None
2023-02-12 10:50:05 [D] 1 devices loaded from Cloud
2023-02-12 10:50:05 [D] CLOUD mode start
2023-02-12 10:50:06 [D] CLOUD None => True
2023-02-12 10:50:23 [D] CLOUD True => None
2023-02-12 10:50:23 [D] 1 devices loaded from Cloud
2023-02-12 10:50:23 [D] LOCAL mode start
2023-02-12 10:50:23 [D] 10012ae408 !! skip setup for encrypted device
2023-02-12 10:50:29 [D] CLOUD None => None
2023-02-12 10:50:29 [D] 1 devices loaded from Cloud
2023-02-12 10:50:29 [D] LOCAL mode start
2023-02-12 10:50:29 [D] 10012ae408 !! skip setup for encrypted device
I have the same issue after updating to 3.4.0. I roll back to 3.3.1, and it works fine.
How do I roll back correctly so as not to break anything?
I didn't have this issue with the previous version, at least not in the last few months (when I started using the whole Sonoff stack, I experienced it a few times). It is unusable at the moment; the heating fails at least once every day.
I use one 4CH and one 4CH Pro, in local or auto mode.
Here are a few debug logs of such an unfortunate event:
2023-02-11 19:59:20 [D] 10015bc32f => Local4 | {'switches': [{'outlet': 0, 'switch': 'on'}]} <= {'seq': 5924, 'sequence': '1676141960001', 'error': 0}
2023-02-11 19:59:20 [D] 10015bc32f <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 5924
2023-02-11 19:59:20 [D] 10015aae9b => Local4 | {'switches': [{'outlet': 1, 'switch': 'on'}, {'outlet': 3, 'switch': 'on'}, {'outlet': 2, 'switch': 'on'}]} <= {'seq': 2606, 'sequence': '1676141960000', 'error': 0}
2023-02-11 19:59:20 [D] 10015aae9b <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 2606
2023-02-11 20:18:12 [D] 10015bc32f <= Local0 | {'online': False} |
2023-02-11 20:18:12 [D] 10015aae9b <= Local0 | {'online': False} |
2023-02-11 20:18:23 [D] 10015bc32f => Local4 | {'cmd': 'info'} !! Timeout 10
2023-02-11 20:18:23 [D] 10015bc32f !! Local4 | Device offline
2023-02-11 20:18:23 [D] 10015aae9b => Local4 | {'cmd': 'info'} !! Timeout 10
2023-02-11 20:18:23 [D] 10015aae9b !! Local4 | Device offline
2023-02-11 20:24:07 [D] CLOUD None => None
2023-02-11 20:24:07 [D] 2 devices loaded from Cache
2023-02-11 20:24:07 [D] 10015aae9b UIID 0004 | {'version': 8, 'only_device': {'ota': 'success'}, 'sledOnline': 'on', 'fwVersion': '3.5.1', 'rssi': -56, 'init': 1, 'lock': 0, 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'off', 'outlet': 0}, {'switch': 'off', 'outlet': 1}, {'switch': 'off', 'outlet': 2}, {'switch': 'off', 'outlet': 3}]}
2023-02-11 20:24:07 [D] 10015bc32f UIID 0004 | {'version': 8, 'only_device': {'ota': 'success'}, 'sledOnline': 'on', 'fwVersion': '3.5.1', 'rssi': -57, 'init': 1, 'lock': 0, 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'off', 'outlet': 0}, {'switch': 'off', 'outlet': 1}, {'switch': 'off', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]}
2023-02-11 20:24:07 [D] AUTO mode start
2023-02-11 20:24:10 [D] 10015bc32f <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 5924
2023-02-11 20:24:10 [D] 10015aae9b <= Local3 | {'sledOnline': 'on', 'configure': [{'startup': 'stay', 'outlet': 0}, {'startup': 'stay', 'outlet': 1}, {'startup': 'stay', 'outlet': 2}, {'startup': 'stay', 'outlet': 3}], 'pulses': [{'pulse': 'off', 'width': 1000, 'outlet': 0}, {'pulse': 'off', 'width': 1000, 'outlet': 1}, {'pulse': 'off', 'width': 1000, 'outlet': 2}, {'pulse': 'off', 'width': 1000, 'outlet': 3}], 'switches': [{'switch': 'on', 'outlet': 0}, {'switch': 'on', 'outlet': 1}, {'switch': 'on', 'outlet': 2}, {'switch': 'on', 'outlet': 3}]} | 2606
2023-02-11 20:24:10 [D] Add 12 entities
Considering rolling back one minor version at least.
OMG! Issue is still with me after downgrading to 3.3.1
My devices run factory firmware. It was upgraded to 3.6 the other day, but the issue first appeared with SonoffLAN 3.4 while the devices were still on the same firmware version (3.5.1).
It is getting worse, as I see...
:(
Just a small question: how many of the affected users use fixed IP address for the Sonoff devices?
I just fixed the IPs for the 4ch-r3 devices in the router, and no unavailability issue has occurred since. It hasn't been long, so it's way too early to say this was the solution, but I'm interested in your configurations.
I don't fix my IP addresses
I got disconnects on version 3.3.1 with fixed IP addresses too. The eWeLink app doesn't complain at all, but this doesn't look like an integration version issue to me.
Manually downgrading to 3.3.0 seems to have fixed most issues for me so far (auto mode with mDNS working).
Hi,
Using HACS to downgrade to 3.3.1 fixed the issue here for more than 48 hours, but now devices are unavailable again.
Cheers,
Simone
Mine are becoming unavailable every 2 hours since Home Assistant 2023.2.5 update.
Hi @AlexxIT, do you have any news about this strange problem? Did you have a chance to check the errors? Just to know whether the problem affects everyone and you suggest waiting before installing the latest release, or whether we can go ahead since only some installations are affected.
@AlexxIT sorry for the second question, but just to know whether it is better to wait for a fix :-) Thanks a lot for your effort on this. :-)
After I deleted the integration (deletion didn't actually remove it, just the configuration), the device data got refreshed, and now I get the home-selection step. After I selected the home, I have had no going-offline errors.
Devices don't always work in auto mode and go out of sync with the cloud state (uncontrollable from HA while still working in the eWeLink app...), but they work fine in LAN mode, so I think this solved the current issue for me (that was yesterday afternoon, so fingers crossed).
I removed the integration and added it again. After a few hours, all devices seem to be working fine. Even the missing option to select the home appeared. So far so good...
I had the same issue with my Zigbee thermometer. Using the homeassistant.reload_config_entry service, I am able to force an update every 10 minutes. I need to check in a couple of days whether it fixed things; usually mine goes down after 48 hours.
alias: "Update Office Thermometer"
description: Every 10 mins
trigger:
- platform: time_pattern
  minutes: "/10"
condition: []
action:
- service: homeassistant.reload_config_entry
  data: {}
  target:
    entity_id: sensor.sonoff_a48004e00b_temperature
mode: single
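For anyone copying this automation: note that Home Assistant's time_pattern trigger treats a bare number as an exact match (fires once per hour, at that minute) and a "/N" prefix as a step (fires every N minutes). A minimal sketch of that matching logic, to illustrate the difference:

```python
def time_pattern_matches(pattern: str, value: int) -> bool:
    """Simplified version of time_pattern matching:
    "/N" matches every N units; a bare number matches only that exact value."""
    if pattern.startswith("/"):
        return value % int(pattern[1:]) == 0
    return value == int(pattern)

# minutes: "/10" fires at :00, :10, :20, ...
print(time_pattern_matches("/10", 20))  # True
# minutes: "10" fires only at minute 10 of each hour
print(time_pattern_matches("10", 20))   # False
```

So `minutes: "10"` would reload only once per hour; `minutes: "/10"` is what actually gives a reload every 10 minutes.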
I'm desperate now... This bug makes the whole system unusable. It is out of sync right now: it shows one switch on while it's actually off... The eWeLink status is fine. Sometimes it just doesn't perform the requested operation, leaves the heater on/off, and doesn't show operations in the log...
I reload the config on a schedule, but I have no idea what to try next.
Just downgrade to v3.3.0; this will fix the problems for now, until a proper fix is released. Mine is working OK now.
That's what fixed it for me