hass-kumo
Heat Mode Missing or Unit Unavailable
Hey, this plugin has been rock solid since I set it up in January. I know Kumo Cloud is flaky, but things have become very unreliable. I see the following errors, so I'm not sure whether they're part of the issue.
The problem is that the head units either go "unavailable" or the heat mode disappears completely from HA, so when the units are set to heat, heating isn't working.
Logger: homeassistant.helpers.frame Source: helpers/frame.py:77 First occurred: 7:34:59 AM (1 occurrences) Last logged: 7:34:59 AM
Detected integration that called async_setup_platforms instead of awaiting async_forward_entry_setups; this will fail in version 2023.3. Please report issue to the custom integration author for kumo using this method at custom_components/kumo/__init__.py, line 102: hass.config_entries.async_setup_platforms(entry, PLATFORMS)
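For reference, that deprecation warning is about the old, synchronous platform-setup call. A minimal sketch of the migration Home Assistant is asking for (the real `hass` and `entry` objects come from Home Assistant; the platform list here is illustrative, not the integration's actual one):

```python
PLATFORMS = ["climate", "sensor"]  # illustrative; the real list lives in the integration

async def async_setup_entry(hass, entry) -> bool:
    """Set up the integration from a config entry."""
    # Old (warns now, fails in HA 2023.3):
    #     hass.config_entries.async_setup_platforms(entry, PLATFORMS)
    # New: the forwarding call is a coroutine and must be awaited.
    await hass.config_entries.async_forward_entry_setups(entry, PLATFORMS)
    return True
```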
Logger: homeassistant.components.homekit.type_thermostats Source: components/homekit/type_thermostats.py:599 Integration: HomeKit (documentation, issues) First occurred: 7:35:06 AM (12 occurrences) Last logged: 7:51:08 AM
Cannot map hvac target mode: heat to homekit as only {0: <HVACMode.OFF: 'off'>, 2: <HVACMode.COOL: 'cool'>} modes are supported
I deleted the integration and set it up fresh with cache enabled, entering all of the IP addresses. All 3 units are connected and show "heating", but two of them are still missing the heat mode icon, so something is still off.
I have been seeing this issue myself for the past couple of days, so it will get fixed once I have time to dig into it. Anyone who's seeing this and wants to help out, use the Interactive Use instructions to capture good & bad status dictionaries from an indoor unit, so we can see what's going on.
OK, I did a little exploration this evening. The errant indoor units are returning:
{'_api_error': 'serializer_error'}
for requests they used to process just fine. This is the error you get when the response would be too big.
Some background: the Kumo local API is one big JSON dictionary. You send the part of the dictionary you want as a 'c' command and get back an 'r' response with everything under that point filled in. For example, the sensors query is {"c":{"sensors":{}}}.
If you ask for {"c":{}} it always returns that serializer error. Which is too bad, because otherwise it would tell us everything the indoor unit has to offer, and we would not have had to deduce the existence of (say) the mhk2 portion of the dictionary.
But now, asking for all the sensors is (sometimes) too much. This one's not too bad, because I've seen in the past that there are 4 possible sensors (only one of my indoor units has any), so I can just request each of {"c":{"sensors":{"0":{}}}}, then "1", "2", and "3", and get back all the info.
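Spelled out, the per-slot workaround just issues one small 'c' command per sensor index instead of a single big one. A sketch (the transport layer that actually delivers these commands to the adapter is handled by pykumo and omitted here):

```python
import json

def sensor_queries(slot_count: int = 4) -> list[str]:
    """Build one {"c":{"sensors":{"<i>":{}}}} command per sensor slot,
    instead of the single {"c":{"sensors":{}}} query that can trip the
    adapter's serializer_error when the response is large."""
    return [
        json.dumps({"c": {"sensors": {str(i): {}}}}, separators=(",", ":"))
        for i in range(slot_count)
    ]
```

Merging the four small 'r' responses then yields the same information as the one big query would have.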
But I'm also seeing this issue right now on the "profile" query, which is just a list of attributes. So it seems we'll need to fetch the attributes we care about individually, in separate API calls. That's doable, but not on a weeknight, so it'll have to wait for the weekend.
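The same splitting idea generalizes to any node of the dictionary, including "profile". A sketch, assuming you already know which keys live under the node (the attribute names in the usage example are hypothetical, for illustration only):

```python
import json

def split_query(node: str, keys: list[str]) -> list[str]:
    """Build one small 'c' command per key under the given node,
    e.g. per-attribute profile fetches instead of {"c":{"profile":{}}}."""
    return [
        json.dumps({"c": {node: {k: {}}}}, separators=(",", ":"))
        for k in keys
    ]

# Hypothetical attribute names, purely illustrative:
queries = split_query("profile", ["fanSpeedStages", "hasVaneDir"])
```

The trade-off is request count: one query per attribute instead of one per node, which is why refreshes get slower even as they get more reliable.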
What's weird is that sometimes it's fine. Right now one of my 3 indoor units is showing the problem, but the other 2 are fine; yesterday one of the other units was having trouble.
FYI this has nothing to do with Home Assistant version. This issue is actually in the pykumo library and seems to be caused by something Mitsubishi changed. So it's possible redoing some of the network snooping between indoor unit and the app would (a) reveal if the app is now doing something different, and (b) possibly reveal some new functionality they've added that we could exploit.
Yes, I'm seeing the exact same thing. Right now heat mode is back on all 3 of my units; it's completely random when it changes. I don't go into the Kumo Cloud app that often on my iPhone, but it definitely looks like they added things to it; there seem to be more menus and options than I remember. Thanks for maintaining this plugin, it's been excellent up to this point.
I've published a new beta, v0.3.5-beta, which should resolve this issue. Given the severity I wanted to get a fix out quickly, so I've only done basic testing. Please test and reply here with results. If all goes well I'll make a new production release soon.
Thanks, I have the 0.3.5 beta installed. I'll monitor and let you know how it's looking. Appreciate the quick turnaround.
Things look overall somewhat better but I'm still seeing issues. I also started seeing a lot of '__no_memory' responses from 2 of my 3 indoor units. I power-cycled the system and that seems to have stopped that from happening.
I am going to try always querying the individual attributes and see if that helps. I'll let that soak on my own system for a while and push another update if it seems improved.
So, indications are this is something Mitsubishi broke. Let's hope they fix it, and let's hope my workarounds are good enough to improve reliability.
v0.3.5-beta2 is doing better for me overnight.
I just installed 0.3.5-beta2. The plugin seems to take longer than usual to initialize after an HA restart, but it's running and all 3 units look good. I'll let you know if I run into any issues.
I'm running the new beta and my problematic unit still seems to go up and down. I'm seeing the same issue from the kumo app. Not sure there is much to do until Mitsubishi fixes it from their end.
Power cycling (at the breaker) seems to have helped my system, at least for today. Before I was getting a lot of "__no_memory" errors in the logs, and I haven't seen one of those today.
My theory (based on nothing but having been writing software since 1984 :-) ) is that some issue is occurring that results in the "serializer_error", and that when this happens there's also a memory leak on the adapter, which later triggers the "__no_memory" condition. That's why I switched pykumo to querying individual values only, since whatever triggers the serializer error happens with some regularity when fetching the larger responses. Power cycling the adapter would, of course, restore any leaked memory.
So I'd also advise staying out of the KumoCloud app, which probably still uses the larger requests.
If local cache is enabled does the plugin talk directly to the units and doesn't need anything from Kumo Cloud for operation?
> If local cache is enabled does the plugin talk directly to the units and doesn't need anything from Kumo Cloud for operation?
Yes.
> I'm running the new beta and my problematic unit still seems to go up and down. I'm seeing the same issue from the kumo app. Not sure there is much to do until Mitsubishi fixes it from their end.
I would remove the configuration and re-configure it with cache mode enabled, if it's not set up that way already. Just make sure the Kumo WiFi adapter(s) have DHCP reservations beforehand.
Thanks for the troubleshooting help. I can confirm that turning the breaker off seems to have fixed the connectivity issue (albeit I did it in conjunction with removing and reinstalling the kumo custom component). Connectivity seems really slow now compared to what it used to be: I'm seeing relatively large delays between sending a command, it being executed, and the state updating in Kumo. Are you guys seeing this too?
I've been toying with the idea of trying to firewall the Kumo WiFi adapters from the WAN. Do you guys think that would help, or even be worthwhile?
Slower is expected. It's doing ~47 requests per refresh, where before it did ~5. If your WiFi is marginal it will only magnify the effect.
I think firewalling would totally prevent you from using the Kumo Cloud app. It would also prevent you from getting any hypothetical fix for this issue. It's not going to help the current situation, though.
1 out of my 5 units went offline this morning. I'd like to think I don't have a WiFi issue... UniFi reports everything is solid and I have no issues with other devices.
I should revise my earlier comment: I had toyed with the idea of taking my adapters off the WAN because I couldn't think of any good that keeping them connected to the internet would do. I don't use the Kumo app at all, as I found the Home Assistant integration much more responsive. It sounds like if I had done it, it would have saved me these headaches.
Which firmware version are folks seeing this issue with? (Kumo app: Settings > System Setup > Installer Settings > (site) > (zone) > firmware version shows in the lower right corner of the page)
I'm currently on 02.06.05 and will be on the lookout for the behavior change. Currently not yet seeing the issue.
I suspect this is highly variable with what equipment you have. Mine is 00.04.21.
I'm still amazed that the indoor unit model is not available through the API (nor WiFi adapter model).
I think exposing the firmware version as an attribute (it is available via the local API) would be a good idea.
Interesting. The only reference I've found to my firmware rev is a Mitsubishi FAQ about a fix introduced to support commissioning on Android 12 devices, apparently dated April 2022, so I've likely had version 02.06.05 since the system was installed in December.
https://help.mitsubishicomfort.com/kumocloud/connectivity#what-if-my-mobile-device-is-running-android-12
I'm using the PAC-USWHS002-WF-2 interfaces on all 4 of my units, which are a mixture of the MLZ-KP ceiling cassettes, an SEZ ducted, and an MSZ-GL wall mount.
Mine is 00.04.21 for what that's worth.
Interesting, so it seems like it's possible the older PAC-USWHS002-WF-1 interfaces are showing this issue and the WF-2 ones are not. Maybe I should put in a config option -- I doubt I'll have time before the weekend, though.
- Things have been stable since beta2 was applied over the weekend.
- I have 3 PAC-USWHS002-WF-2 adapters.
I have the older PAC-USWHS002-WF-1 adapter with firmware 00.04.21. It briefly worked again for about an hour after updating to beta2 a few days ago, but has been unavailable since.
> It briefly worked again for about an hour
Did you try power-cycling your system (i.e. throw the breaker, wait 10 seconds, turn it back on)?
The theory is that there's a memory leak on the adapter that's triggered in certain error conditions, which the beta2 tries to avoid.
I did, yes. I did it again just now for good measure and it's still unavailable.
You may have something else going on. What are the error messages in your logs?
Never mind, I'm just an idiot. I got a new router the other day and did DHCP reservations for all my smart devices shortly after installing beta2. I noticed in the logs that it was trying to poll the old IP address.
I reinstalled the integration and it's working again now. Sorry about that!