pytest: slow collection for `car/tests/`
can/tests/ is 20x faster to collect than car/tests/.
batman:opendbc$ pytest --co can/tests/
============ 22 tests collected in 0.05s ============
batman:opendbc$ pytest --co car/tests/
============ 1627 tests collected in 1.04s ============
My first suspicion is the use of parameterized. Parametrization is super important for running all the tests across different car models, so it's worth confirming that this is the cause and then making it faster. Is it faster to just use a for loop?
For bounty, need a good root cause and get it <= 0.1s unless there's some reason we can't.
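The for-loop alternative mentioned above could look like this with stdlib unittest's subTest (a sketch; the model list is illustrative, the real one would come from the car values modules). The trade-off: everything collects as one test item, so collection is cheap, but you lose per-model selection via -k:

```python
import unittest

# Illustrative model list; in opendbc this would come from the car values modules.
ALL_CAR_MODELS = ["TOYOTA_COROLLA", "HONDA_CIVIC", "VW_GOLF"]

class TestLateralLimits(unittest.TestCase):
    def test_all_models(self):
        # One collected test item instead of one per model, so collection is cheap;
        # subTest still reports failures per model.
        for car_model in ALL_CAR_MODELS:
            with self.subTest(car_model=car_model):
                self.assertIsInstance(car_model, str)
```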
It's all the imports.
E.g.
pyinstrument -m pytest --cache-clear --collect-only opendbc/car/tests/test_lateral_limits.py
================================================================================ 430 tests collected in 0.29s =================================================================================
_ ._ __/__ _ _ _ _ _/_ Recorded: 22:02:12 Samples: 358
/_//_/// /_\ / //_// / //_'/ // Duration: 0.401 CPU time: 0.398
/ _/ v5.0.0
Program: pytest --cache-clear --collect-only opendbc/car/tests/test_lateral_limits.py
0.354 <module> pytest/__main__.py:1
└─ 0.354 console_main _pytest/config/__init__.py:194
[108 frames hidden] _pytest, pluggy, inspect, importlib, ...
0.201 import_module importlib/__init__.py:73
├─ 0.141 AssertionRewritingHook.exec_module _pytest/assertion/rewrite.py:144
│ └─ 0.139 <module> opendbc/car/tests/test_lateral_limits.py:1
│ └─ 0.136 <module> opendbc/car/car_helpers.py:1
│ ├─ 0.064 <module> opendbc/car/fingerprints.py:1
│ │ ├─ 0.039 <module> opendbc/car/interfaces.py:1
│ │ │ └─ 0.039 <module> opendbc/car/values.py:1
│ │ │ ├─ 0.014 compile <built-in>
│ │ │ └─ 0.004 <module> opendbc/car/volkswagen/values.py:1
│ │ └─ 0.022 get_interface_attr opendbc/car/interfaces.py:378
│ │ └─ 0.015 compile <built-in>
│ ├─ 0.059 load_interfaces opendbc/car/car_helpers.py:16
│ │ ├─ 0.022 <module> opendbc/car/toyota/carcontroller.py:1
│ │ │ └─ 0.022 <module> opendbc/car/secoc.py:1
│ │ │ ├─ 0.013 <module> Crypto/Cipher/__init__.py:1
│ │ │ └─ 0.007 <module> Crypto/Hash/CMAC.py:1
│ │ │ [7 frames hidden] Crypto, platform, subprocess
│ │ └─ 0.020 compile <built-in>
│ └─ 0.008 <module> opendbc/car/fw_versions.py:1
│ └─ 0.005 get_interface_attr opendbc/car/interfaces.py:378
└─ 0.056 <module> opendbc/car/__init__.py:1
├─ 0.037 <module> opendbc/car/docs_definitions.py:1
│ └─ 0.034 <module> opendbc/car/common/conversions.py:1
│ └─ 0.034 <module> numpy/__init__.py:1
│ [11 frames hidden] numpy
├─ 0.006 <module> opendbc/car/structs.py:1
└─ 0.006 <module> panda/__init__.py:1
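A quick way to sanity-check import cost without pyinstrument is python -X importtime, or a small timer like this sketch (stdlib modules stand in for the opendbc ones):

```python
import importlib
import sys
import time

def time_import(name: str) -> float:
    """Time importing `name`. Pops the top-level cached entry first, so repeat
    calls are closer to (but not exactly) a cold import, since submodules
    may stay cached."""
    sys.modules.pop(name, None)
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

# Stdlib stand-ins; point this at the modules from the trace above instead.
for mod in ("json", "decimal"):
    print(f"{mod}: {time_import(mod) * 1000:.2f} ms")
```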
Specifically, it's functions such as this one and this one that import stuff using __import__.
Parametrizing could be done using @pytest.mark.parametrize.
Nice find, I'm supportive of removing the dynamic imports. In fact, I think that's preferable regardless of this issue.
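A sketch of one direction this could take (the brand-to-module mapping and names here are made up; opendbc's real interface loading is structured differently): instead of eager __import__ calls at module import time, resolve an explicit mapping only when an interface is actually requested:

```python
import importlib

# Illustrative brand -> module-path mapping; stdlib modules stand in
# for the per-brand carcontroller/carstate modules.
BRAND_MODULES = {
    "json_brand": "json",
    "math_brand": "math",
}

def load_interface(brand: str):
    """Resolve a brand's module on demand instead of at import time."""
    try:
        module_path = BRAND_MODULES[brand]
    except KeyError:
        raise ValueError(f"unknown brand: {brand}") from None
    # importlib.import_module is the idiomatic replacement for raw __import__
    return importlib.import_module(module_path)
```

Nothing is imported until load_interface is first called, so test collection never pays for it.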
The first import of the compiled cpp packages is slow. Not sure if anything can be done there.
parser_pyx in opendbc/can/can_define.py
packer_pyx in opendbc/can/packer.py
pyinstrument -m pytest --cache-clear --collect-only opendbc/car/tests
1.161 <module> pytest/__main__.py:1
└─ 1.161 console_main _pytest/config/__init__.py:194
[79 frames hidden] _pytest, pluggy, ast, importlib, pyte...
0.891 import_module importlib/__init__.py:73
├─ 0.833 AssertionRewritingHook.exec_module _pytest/assertion/rewrite.py:144
│ ├─ 0.665 <module> opendbc/car/tests/test_can_fingerprint.py:1
│ │ └─ 0.663 <module> opendbc/car/car_helpers.py:1
│ │ ├─ 0.442 <module> opendbc/car/fingerprints.py:1
│ │ │ ├─ 0.403 <module> opendbc/car/interfaces.py:1
│ │ │ │ └─ 0.401 <module> opendbc/car/values.py:1
│ │ │ │ └─ 0.368 <module> opendbc/car/volkswagen/values.py:1
│ │ │ │ └─ 0.354 <module> opendbc/can/can_define.py:1 <----------------------------
│ │ │ └─ 0.038 get_interface_attr opendbc/car/interfaces.py:378
│ │ │ └─ 0.029 compile <built-in>
│ │ └─ 0.202 load_interfaces opendbc/car/car_helpers.py:16
│ │ ├─ 0.116 <module> opendbc/car/body/carcontroller.py:1
│ │ │ └─ 0.116 <module> opendbc/can/packer.py:1 <--------------------------------------
│ │ ├─ 0.037 compile <built-in>
│ │ └─ 0.029 <module> opendbc/car/toyota/carcontroller.py:1
│ │ └─ 0.028 <module> opendbc/car/secoc.py:1
│ │ └─ 0.018 <module> Crypto/Cipher/__init__.py:1
│ ├─ 0.069 <module> opendbc/car/tests/test_car_interfaces.py:1
│ │ └─ 0.067 AssertionRewritingHook.exec_module _pytest/assertion/rewrite.py:144
│ │ [9 frames hidden] hypothesis, _pytest
│ └─ 0.035 <module> opendbc/car/tests/test_docs.py:1
│ └─ 0.035 <module> opendbc/car/docs.py:1
│ ├─ 0.018 <module> jinja2/__init__.py:1
│ │ └─ 0.017 <module> jinja2/environment.py:1
│ └─ 0.013 <module> natsort/__init__.py:1
│ └─ 0.013 <module> natsort/natsort.py:1
└─ 0.054 <module> opendbc/car/__init__.py:1
└─ 0.036 <module> opendbc/car/docs_definitions.py:1
└─ 0.033 <module> opendbc/car/common/conversions.py:1
└─ 0.033 <module> numpy/__init__.py:1
[3 frames hidden] numpy
The following flame graph, from running pyinstrument --show-all -r html -m pytest --cache-clear --collect-only opendbc/car/tests on a VM (3.954s total on my machine), showed a fair bit of time spent in _is_coroutine_or_asyncgen.
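That hotspot comes from pytest-asyncio deciding, for every collected item, whether it is async. A minimal illustration of the kind of per-item introspection involved (not pytest-asyncio's actual code) — cheap individually, but it adds up across ~1600 collected tests:

```python
import inspect

def sync_test():
    pass

async def async_test():
    pass

# pytest-asyncio must classify every collected item as sync or async.
for fn in (sync_test, async_test):
    print(fn.__name__, inspect.iscoroutinefunction(fn))
```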
Found the related github issues: https://github.com/pytest-dev/pytest-asyncio/issues/720 https://github.com/pytest-dev/pytest-asyncio/issues/997
The latter suggests a performance regression in pytest-asyncio==0.18.1. Downgrading to pytest-asyncio==0.18.0:
(openpilot) root@ubuntu-dev-vm:/home/vagrant/openpilot# python3 -m pip install --force-reinstall -v "pytest-asyncio==0.18.0"
Additionally, I modified pyproject.toml to account for the configuration changes of the newer version:
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "--ignore=openpilot/ --ignore=opendbc/ --ignore=panda/ --ignore=rednose_repo/ --ignore=tinygrad_repo/ --ignore=teleoprtc_repo/ --ignore=msgq/ -Werror --strict-config --strict-markers --durations=10 -n auto --dist=loadgroup"
asyncio_mode = "strict"  # made explicit
cpp_files = "test_*"
cpp_harness = "selfdrive/test/cpp_harness.py"
python_files = "test_*.py"
# asyncio_default_fixture_loop_scope = "function"  # commented out, as 0.18.0 doesn't understand this option
With this, pyinstrument reports collection runtime going from 3.954s to 1.639s, a ~2.4x improvement in collection time. Total test runtime goes from ~23.480s to ~22.712s.
If downgrading is unacceptable, working with upstream pytest-asyncio may be useful for this portion of the time.
pytest-asyncio isn't an opendbc dependency.
In case the pytest-asyncio performance regression is still an issue, I wanted to point out that you should see improvements to test collection speed when upgrading to pytest-asyncio v0.25.1 and v1.0.0.
This is my first time ever working on anything open source on GitHub, and I don't even know where to start. Any tips?
Looks solved by #1932. After rebasing on main, car/tests/ shows == 1769 tests collected in 0.10s ==
Hello @adeebshihadeh @sshane, #2751 is ready for review.
@adeebshihadeh it appears most of the proposed solutions just move the time taken from the collection step to the test run step; so no net time loss. We should make the bounty more clear if that wasn't your intention
I believe my proposed solution does not, both test collection and total execution time become faster.
On master, collection time is ~0.7s and total test time is 25–30s. Locally and on CI, the total test time does decrease on average, but only slightly: the ~0.5s reduction is minimal compared to 25–30s.
@sshane Please can you review the changes I proposed in my pull request? Criticism or review of any form is welcome; I am here to learn. Thank you.