ebus component not showing any errors

Open step21 opened this issue 1 year ago • 16 comments

The problem

I configured the ebusd integration. ebusd is running, connected properly, and shows values, and I verified that I can connect to the ebusd ports. I added the ebusd component to configuration.yaml, but the integration does not show up and no errors are logged. There might be something obvious that I am missing, but if not, I think there should at least be an error when something specified in configuration.yaml is misconfigured or not working.

What version of Home Assistant Core has the issue?

core-2024.5.5

What was the last working version of Home Assistant Core?

core-2024.5.5

What type of installation are you running?

Home Assistant Container

Integration causing the issue

ebusd

Link to integration documentation on our website

https://www.home-assistant.io/integrations/ebusd

Diagnostics information

The integration is not showing up and there are no errors in the log, so no diagnostic data can be provided.

Example YAML snippet

This is the beginning of my configuration.yaml, with the component included.

# Set your HOME-ASSISTANT configuration here!
default_config:
# Text to speech
tts:
  - platform: google_translate
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 10.0.0.0/8
    - 172.16.0.0/12
    - 192.168.0.0/16
    - 127.0.0.1
    - ::1
ebusd:
  host: 127.0.0.1
  circuit: "bai"
  port: 8889
template:
  - sensor:
      - name: Verbunden
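
For what it's worth, one way to surface setup problems might be to raise the log level for the component via the logger integration. A minimal sketch (the logger name homeassistant.components.ebusd is an assumption on my part, not something I have verified):

logger:
  default: warning
  logs:
    # assumed logger name for the ebusd component; raises its verbosity so
    # setup errors show up in the Home Assistant log
    homeassistant.components.ebusd: debug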

Anything in the logs that might be useful for us?

no

Additional information

No response

step21 avatar Aug 27 '24 21:08 step21

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: cb2f28b. Link to the linter CI: here

github-actions[bot] avatar Jul 05 '24 10:07 github-actions[bot]

Thanks for the PR. Could you please add a non-regression test based on the reproducer provided in #29416 and make it run on all solvers with a small value of tol such as tol=1e-12? It's possible that strict equivalence is hard to reach for stochastic solvers such as SAG/SAGA, but I am not sure.
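
For concreteness, a rough sketch of what such an equivalence test could look like (the dataset, weights, test name, and tolerances below are placeholders rather than the reproducer from #29416, and the test actually added in this PR may differ):

    import numpy as np
    import pytest
    from numpy.testing import assert_allclose
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression


    @pytest.mark.parametrize(
        "solver", ["lbfgs", "liblinear", "newton-cg", "newton-cholesky", "sag", "saga"]
    )
    def test_sample_weight_vs_repeated_samples(solver, global_random_seed):
        # Fitting with integer sample weights should be equivalent to fitting
        # on a dataset where each sample is repeated sample_weight[i] times,
        # for every solver, with a very strict tolerance.
        X, y = make_classification(n_samples=30, random_state=global_random_seed)
        rng = np.random.RandomState(global_random_seed)
        sw = rng.randint(0, 3, size=X.shape[0])

        clf_sw = LogisticRegression(solver=solver, tol=1e-12, max_iter=100_000).fit(
            X, y, sample_weight=sw
        )

        X_rep, y_rep = np.repeat(X, sw, axis=0), np.repeat(y, sw)
        clf_rep = LogisticRegression(solver=solver, tol=1e-12, max_iter=100_000).fit(
            X_rep, y_rep
        )

        assert_allclose(clf_sw.coef_, clf_rep.coef_, rtol=1e-5)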

ogrisel avatar Jul 05 '24 12:07 ogrisel

Note: in the test_logistic_regression_sample_weight test, sag and saga are left out, as they systematically fail.

snath-xoc avatar Jul 09 '24 21:07 snath-xoc

@snath-xoc could you please push a commit that triggers running the updated test in this PR for all admissible values of global_random_seed? Instructions are available in this information issue: #28959.

EDIT: before testing on the CI, you should test locally with commands like the following:

SKLEARN_TESTS_GLOBAL_RANDOM_SEED="all" pytest -k test_logistic_regression_sample_weights sklearn/linear_model/tests/test_logistic.py -vlx

At the moment, the test fails for many seeds, but this can be fixed as explained in the review comments below.

ogrisel avatar Aug 07 '24 15:08 ogrisel

I find this test too long to follow. You might want to split it into two independent tests: one that checks equivalence between integer sample weights and their repeated counterpart, and one that checks equivalence between class weights and sample weights.
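
For the class-weight half, a rough sketch of what the decoupled test could look like (the names and data below are purely illustrative):

    import numpy as np
    from numpy.testing import assert_allclose
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression


    def test_class_weight_vs_sample_weight():
        # Passing class_weight={0: 1, 1: 3} should match passing an explicit
        # per-sample weight vector with weight 3 for every class-1 sample.
        X, y = make_classification(n_samples=40, random_state=0)

        clf_cw = LogisticRegression(
            class_weight={0: 1.0, 1: 3.0}, tol=1e-12, max_iter=1_000
        ).fit(X, y)

        sw = np.where(y == 1, 3.0, 1.0)
        clf_sw = LogisticRegression(tol=1e-12, max_iter=1_000).fit(
            X, y, sample_weight=sw
        )

        assert_allclose(clf_cw.coef_, clf_sw.coef_, rtol=1e-5)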

ogrisel avatar Aug 07 '24 16:08 ogrisel

To actually trigger the [all random seeds] tests, you need to include the list of test function names after the [all random seeds] flag in the commit message, as explained in #28959.
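
Roughly, the commit message would look like the following (the subject line is a placeholder; see #28959 for the exact convention):

    MNT trigger all-seed run of the updated test

    [all random seeds]
    test_logistic_regression_sample_weights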

ogrisel avatar Aug 09 '24 14:08 ogrisel

Note for later: the LogisticRegressionCV class has the following tag to skip the common test:

    def _more_tags(self):
        return {
            "_xfail_checks": {
                "check_sample_weights_invariance": (
                    "zero sample_weight is not equivalent to removing samples"
                ),
            }
        }

However check_sample_weights_invariance currently does not work well with kind="zeros" in the presence of a cv constructor parameter. This common test should be updated to drop samples without moving the CV boundaries for the remaining training points.

This is better done in a dedicated PR, though.
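
For illustration only, a rough sketch of the idea (not the actual common-test change): keep the original fold assignment for the remaining samples when the zero-weighted ones are dropped.

    import numpy as np
    from numpy.testing import assert_allclose
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.model_selection import KFold

    X, y = make_classification(n_samples=40, random_state=0)
    sw = np.ones(X.shape[0])
    sw[::4] = 0.0  # samples that should behave as if they were removed

    # Reference fit: full data with zero weights and explicit folds.
    folds = list(KFold(n_splits=3).split(X))
    clf_zero = LogisticRegressionCV(cv=folds, tol=1e-12).fit(X, y, sample_weight=sw)

    # Dropped-sample fit: restrict each fold to the kept indices and remap
    # them, so the CV boundaries do not move for the remaining points.
    kept = np.flatnonzero(sw != 0)
    remap = {old: new for new, old in enumerate(kept)}
    folds_kept = [
        (
            np.asarray([remap[i] for i in train if i in remap]),
            np.asarray([remap[i] for i in test if i in remap]),
        )
        for train, test in folds
    ]
    clf_drop = LogisticRegressionCV(cv=folds_kept, tol=1e-12).fit(X[kept], y[kept])

    assert_allclose(clf_zero.coef_, clf_drop.coef_, rtol=1e-5)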

ogrisel avatar Sep 05 '24 09:09 ogrisel

Merged! Thanks very much @snath-xoc!

ogrisel avatar Sep 06 '24 12:09 ogrisel