async_add_entities will stop working in Home Assistant 2025.12.0
The HA error log repeats entries like the one below and itself suggests opening a bug report. So here we go:
Logger: homeassistant.helpers.frame Source: helpers/frame.py:324 First occurred: March 25, 2025 at 23:32:28 (1 occurrence) Last logged: March 25, 2025 at 23:32:28
Detected that custom integration 'solaredgeoptimizers' calls device_registry.async_get_or_create referencing a non existing via_device ('solaredgeoptimizers', XXXXXXXXX'), with device info: {'hw_version': 'NNNNNNNNNNN', 'identifiers': {('solaredgeoptimizers', 'NNNNNNNNNNN')}, 'manufacturer': 'JA solar', 'model': 'JAM54S30-440/LR (1000V)', 'name': '1.1.1', 'via_device': ('solaredgeoptimizers', 'NNNNNNNNNNNNNNNNNN')} at custom_components/solaredgeoptimizers/sensor.py, line 103: async_add_entities(. This will stop working in Home Assistant 2025.12.0, please create a bug report at https://github.com/ProudElm/solaredgeoptimizers/issues
I've been polling the SolarEdge monitoring portal with Home Assistant, including the "SolarEdge Optimizers Data" integration, for many months without any issue.
It seems to me that these error logs started after SolarEdge's recent changes to the web monitoring portal around March 24th/25th, 2025.
Additionally, the optimizer data has intermittent drops / missing values (see screenshot). Reboots of HA have had no effect so far.
Hi, I've been struggling with the same issue for a few days. A workaround is to remove the via_device field in sensor.py as follows:
self._attr_device_info = DeviceInfo(
    identifiers={(DOMAIN, self._paneelobject.serialnumber)},
    name=self._optimizerobject.displayName,
    manufacturer=self._paneelobject.manufacturer,
    model=self._paneelobject.model,
    hw_version=self._paneelobject.serialnumber,
    # via_device=(DOMAIN, self._entry.entry_id),  # ❌ REMOVED
)
With this fix the "unavailable" status no longer appears, but I'm experiencing long update times for the SolarEdge panel power (15-20 min). Of course a robust fix is still needed 😞
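For context on why the warning fires: the optimizers' via_device tuple points at identifiers that were never registered as a device of their own. Here is a minimal, HA-free sketch that mimics that registry lookup (all names and serials are made up; this is not Home Assistant's actual code):

```python
# Illustrative sketch of the registry lookup behind the via_device warning.
# NOT Home Assistant code; identifiers and names are hypothetical.

registry = {}  # maps identifier tuple -> device name

def register_device(identifiers, name, via_device=None):
    """Register a device; return False if via_device is not yet known."""
    ok = via_device is None or via_device in registry
    for ident in identifiers:
        registry[ident] = name
    return ok

# Child optimizer registered while its parent is still unknown: warning case.
first_ok = register_device(
    {("solaredgeoptimizers", "SERIAL_A")}, "Optimizer 1.1.1",
    via_device=("solaredgeoptimizers", "ENTRY_ID"))  # False -> warning

# Registering the parent (hub/inverter) device first makes the reference valid.
register_device({("solaredgeoptimizers", "ENTRY_ID")}, "SolarEdge Inverter")
second_ok = register_device(
    {("solaredgeoptimizers", "SERIAL_B")}, "Optimizer 1.1.2",
    via_device=("solaredgeoptimizers", "ENTRY_ID"))  # True -> no warning
```

So removing via_device silences the warning, but the cleaner fix would presumably be for the integration to register the inverter/hub device first so the reference resolves.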
@clanic70 thanks for looping me in. My sensor.py looks a bit different, like this:
@property
def device_info(self):
    return {
        "identifiers": {
            # Serial numbers are unique identifiers within a specific domain
            (DOMAIN, self._paneelobject.serialnumber)
        },
        "name": self._optimizerobject.displayName,
        "manufacturer": self._paneelobject.manufacturer,
        "model": self._paneelobject.model,
        "hw_version": self._paneelobject.serialnumber,
        "via_device": (DOMAIN, self._entry.entry_id),
    }
Is it safe to comment out the last line with a "#" and restart the integration? Does this also fix the intermittently missing data values? The update time is expected, since SE's monitoring portal doesn't provide anything finer than these 15-minute values. Or do you mean the polling request itself?
Hi, yes it's safe. With a leading "#" that line will no longer be used, and you can restore it simply by removing the symbol. In my case I no longer saw the sensor go "unavailable". As for the update time, I remember around 5-7 min, not 15-20. I wonder if SolarEdge has limited the API requests on their side. Best regards
OK, I commented out the line with the "via_device" code and reloaded the integration. Unfortunately, data values are missing as before. Recently these kinds of error messages have shown up in the logs:
--snip-- Logger: custom_components.solaredgeoptimizers.sensor Source: helpers/update_coordinator.py:412 Integration: SolarEdge Optimizers Data (documentation, issues) First occurred: 16:03:17 (7 occurrences) Last logged: 23:08:41 Error fetching SolarEdgeOptimizer data:
--snip-- Logger: custom_components.solaredgeoptimizers.sensor Source: custom_components/solaredgeoptimizers/sensor.py:177 Integration: SolarEdge Optimizers Data (documentation, issues) First occurred: 16:03:17 (12 occurrences) Last logged: 23:23:43 Error in updating updater
I see, it happened to me too after an HA shutdown/reboot. Try reinitialising the integration a couple of times until all the data comes up; then it should be stable. On my side this issue started on March 22nd, when SolarEdge updated the inverter firmware to version 0004.0017.0221.
Sensor.py.docx
Attached is the full modified sensor.py:
a) Keep the last value available to avoid missing data and "unavailable" status
b) Force all solar panel power values to zero when the SolarEdge inverter status switches to "sleeping mode" (no production)
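The two behaviors above could be sketched roughly like this in plain Python. This is not the attached sensor.py; class and parameter names are illustrative only:

```python
# Illustrative sketch of the two patch behaviors described above:
# (a) keep the last known value when the portal returns nothing,
# (b) force power to 0 W while the inverter reports "sleeping".
# Names are hypothetical; this is not the attached sensor.py.

class OptimizerPowerSensor:
    def __init__(self):
        self._last_power = 0.0  # last known value, in watts

    def update(self, portal_value, inverter_status):
        if inverter_status == "sleeping":
            # (b) no production at night: report 0 instead of stale data
            self._last_power = 0.0
        elif portal_value is not None:
            self._last_power = portal_value
        # (a) portal_value is None -> keep previous value, stay "available"
        return self._last_power

sensor = OptimizerPowerSensor()
sensor.update(120.5, "producing")  # normal update -> 120.5
sensor.update(None, "producing")   # portal gap -> keeps 120.5
sensor.update(None, "sleeping")    # night -> forced to 0.0
```

The point of (b) is that without it, (a) alone would hold the last daytime power value all night long.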
Cheers
Short update: commenting out the "via_device" code was successful in terms of the corresponding error logs.
Concerning the "data drops": I don't know what changed on April 1st at 09:30 MEST, but since then, all collected data values are complete with no further drops - not even one! I suspect that SolarEdge has optimized their websocket performance.
So I'm pleased with the situation for now as everything works quite well for me without any of the former error logs.
The "gaps" in the data have magically disappeared for a week... but now, at the end of the day (around 6:45/7:00 PM), everything stops!
Same here, around April 11th, 18:30 CEST, but fortunately data was available again starting with the sunshine this morning (April 12th). So this is not an issue with this integration but with SE's monitoring platform.
I've ended up here, as I think I'm experiencing the same issue. Around 17:35 each day, all data just stops. Is this something to do with the plugin, the monitoring portal, or have I misconfigured something?
Happy to provide more info if helpful.
That the data just "drops" is expected, I'm afraid. The portal only updates the data every so often, and once there is no "new" update, there is no more data to get from the portal.
This integration is a very bad way to get the data from our own optimizers, compared to getting it directly from them (or the inverter).
I still dream of the day when this data is just plainly available via Modbus. In a few days I will go on holiday and will finally have some time to work on most issues.
The portal still seems to show data after 17:30, however (the red line is where it drops off in HA).
Makes me wonder if, on their side, they're terminating scraping sessions via some daily batch job. I mean, this is probably far exceeding the 300-request daily limit they impose on API usage.
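Back-of-the-envelope arithmetic on that limit (assuming the 15-minute portal interval mentioned above, the 300-requests/day API cap, and that each optimizer needs its own request, which is an assumption on my part since the integration scrapes the web portal, not the official API):

```python
# Rough request-volume estimate. Assumptions: one poll per 15-minute
# portal slot, one request per optimizer, a 300-requests/day API cap.
polls_per_day = 24 * 60 // 15           # 96 polling slots per day
optimizers = 10                          # example plant size
requests_per_day = polls_per_day * optimizers  # 960, already > 300
```

So even a small plant would blow past an official-API quota, which would be consistent with sessions being cut off server-side.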
I can confirm: the portal records data from each individual optimizer 24/7.