ebus component not showing any errors
The problem
I configured the ebus integration. ebusd is running, is connected properly, and shows values. I verified that I can connect to the ebusd ports. I have added the ebus component in configuration.yaml; however, the integration does not show up, and neither do any errors. There might be something obvious that I am missing, but if not, I think there should at least be an error when something specified in configuration.yaml is misconfigured or not working.
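For reference, this is the kind of minimal check I mean by "I can connect to the ebusd ports" (the host and port below are the ones from my configuration; adjust as needed):

```python
import socket


def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# ebusd port from my configuration.yaml; prints True when ebusd is reachable.
print(can_connect("127.0.0.1", 8889))
```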
What version of Home Assistant Core has the issue?
core-2024.5.5
What was the last working version of Home Assistant Core?
core-2024.5.5
What type of installation are you running?
Home Assistant Container
Integration causing the issue
ebusd
Link to integration documentation on our website
https://www.home-assistant.io/integrations/ebusd
Diagnostics information
The integration is not showing up and there are no errors in the log, so no diagnostic data can be provided.
Example YAML snippet
This is the beginning of my configuration.yaml, with the component included.
```yaml
# Set your HOME-ASSISTANT configuration here!
default_config:

# Text to speech
tts:
  - platform: google_translate

http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.0.0.0/8
    - 172.16.0.0/12
    - 192.168.0.0/16
    - 127.0.0.1
    - ::1

ebusd:
  host: 127.0.0.1
  circuit: "bai"
  port: 8889

template:
  - sensor:
      - name: Verbunden
```
Anything in the logs that might be useful for us?
no
Additional information
No response
Thanks for the PR. Could you please add a non-regression test based on the reproducer provided in #29416 and make it run on all solvers with a small value of tol, such as tol=1e-12? It's possible that strict equivalence may be hard to reach for stochastic solvers such as SAG/SAGA, but I am not sure.
Note: in test_logistic_regression_sample_weight, sag and saga are left out as they systematically fail.
@snath-xoc could you please push a commit that triggers running the updated test in this PR for all admissible values of global_random_seed? Instructions are available in this information issue: #28959.
EDIT: before testing on the CI, you should test locally with a command like the following:

```shell
SKLEARN_TESTS_GLOBAL_RANDOM_SEED="all" pytest -k test_logistic_regression_sample_weights sklearn/linear_model/tests/test_logistic.py -vlx
```

At the moment, the test fails for many seeds, but this can be fixed as explained in the review comments below.
I find this test too long to follow. You might want to decouple the part that tests equivalence between integer sample weights and their repeated counterpart from the part that checks equivalence between class weights and sample weights, splitting it into two independent tests.
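A rough sketch of what the decoupling could look like (names, data, and tolerances here are illustrative, not the actual test in this PR; lbfgs only, whereas the real test should be parametrized over all solvers):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
y = rng.integers(0, 2, size=30)


def test_integer_sample_weight_matches_repetition():
    # Property 1: fitting with integer sample_weight should match
    # fitting on data where each row is repeated that many times.
    sw = rng.integers(1, 4, size=30)
    clf_w = LogisticRegression(tol=1e-12, max_iter=10_000).fit(
        X, y, sample_weight=sw
    )
    clf_r = LogisticRegression(tol=1e-12, max_iter=10_000).fit(
        np.repeat(X, sw, axis=0), np.repeat(y, sw)
    )
    np.testing.assert_allclose(clf_w.coef_, clf_r.coef_, rtol=1e-4, atol=1e-6)


def test_class_weight_matches_sample_weight():
    # Property 2: class_weight {c: w_c} should act exactly like a
    # per-sample weight of w_{y_i}.
    cw = {0: 2.0, 1: 0.5}
    clf_c = LogisticRegression(tol=1e-12, max_iter=10_000, class_weight=cw).fit(X, y)
    clf_s = LogisticRegression(tol=1e-12, max_iter=10_000).fit(
        X, y, sample_weight=np.asarray([cw[label] for label in y])
    )
    np.testing.assert_allclose(clf_c.coef_, clf_s.coef_, rtol=1e-4, atol=1e-6)


test_integer_sample_weight_matches_repetition()
test_class_weight_matches_sample_weight()
```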
To actually trigger the [all random seeds] tests, you need to include the list of test function names after the [all random seeds] flag in the commit message, as explained in #28959.
Note for later: the LogisticRegressionCV class has the following tag to skip the common test:
```python
def _more_tags(self):
    return {
        "_xfail_checks": {
            "check_sample_weights_invariance": (
                "zero sample_weight is not equivalent to removing samples"
            ),
        }
    }
```
However check_sample_weights_invariance currently does not work well with kind="zeros" in the presence of a cv constructor parameter. This common test should be updated to drop samples without moving the CV boundaries for the remaining training points.
This should better be done in a dedicated PR though.
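To illustrate the problem: zeroing a sample's weight leaves the fold boundaries where they are, while physically dropping the sample re-chops the remaining indices into different folds. A minimal sketch with a deterministic (unshuffled) 5-fold split:

```python
import numpy as np
from sklearn.model_selection import KFold

kf = KFold(n_splits=5)

# Test folds over 10 samples: [0,1], [2,3], [4,5], [6,7], [8,9].
full = [list(test) for _, test in kf.split(np.zeros((10, 1)))]

# Drop one sample and split the remaining 9: the fold boundaries
# shift, and the last fold shrinks to [8].
dropped = [list(test) for _, test in kf.split(np.zeros((9, 1)))]

print(full)     # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
print(dropped)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8]]
```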
Merged! Thanks very much @snath-xoc!