opentelemetry-python-contrib
Add support for Python 3.13
Description
Add support for Python 3.13 to all instrumentations where the instrumented library already supports Python 3.13.
Updated tox.ini and the workflow generation script, so for every instrumentation that already supports Python 3.13 the tests will also run in CI.
To make the tests pass I had to update some test requirements, but only helper libraries, not the libraries under test themselves.
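For illustration, the tox.ini side of this amounts to extending the Python factor list of each affected environment; a hypothetical excerpt (the env name is illustrative, not the repo's exact contents):

```ini
# Hypothetical excerpt: appending the py313 factor makes tox (and the
# generated CI workflow) run this suite under Python 3.13 as well.
[tox]
envlist =
    py3{8,9,10,11,12,13}-test-instrumentation-requests
```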
Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] This change requires a documentation update
How Has This Been Tested?
I ran all the instrumentation tests locally under Python 3.13; the ones below are for libraries that do not yet support Python 3.13. Their CI workflows have not been updated to run under Python 3.13:
py313-test-instrumentation-aiopg: FAIL code 1 (39.86=setup[0.02]+cmd[6.38,6.33,5.05,6.13,15.95] seconds)
py313-test-instrumentation-asyncpg: FAIL code 1 (22.23=setup[0.02]+cmd[4.22,4.03,3.63,3.52,6.81] seconds)
py313-test-instrumentation-celery: FAIL code 1 (36.26=setup[0.02]+cmd[3.83,4.71,4.54,4.21,4.12,14.82] seconds)
py313-test-instrumentation-confluent-kafka: FAIL code 1 (21.60=setup[0.02]+cmd[4.11,4.00,4.11,3.55,5.81] seconds)
py313-test-instrumentation-django-1: FAIL code 2 (34.17=setup[0.02]+cmd[5.97,5.85,8.85,4.00,9.17,0.30] seconds)
py313-test-instrumentation-falcon-1: FAIL code 2 (26.33=setup[0.03]+cmd[5.14,4.49,4.64,4.71,7.04,0.29] seconds)
py313-test-instrumentation-falcon-2: FAIL code 1 (26.67=setup[0.02]+cmd[4.80,4.51,3.93,4.47,8.95] seconds)
py313-test-instrumentation-grpc-0: FAIL code 1 (145.68=setup[0.02]+cmd[6.33,3.66,3.89,4.21,127.58] seconds)
py313-test-instrumentation-grpc-1: FAIL code 1 (149.32=setup[0.02]+cmd[5.35,4.05,3.97,4.06,131.86] seconds)
py313-test-instrumentation-httpx-0: FAIL code 1 (22.01=setup[0.02]+cmd[4.44,4.08,4.09,3.84,5.27,0.26] seconds)
py313-test-instrumentation-psycopg2: FAIL code 1 (29.28=setup[0.02]+cmd[4.66,4.23,3.92,4.89,11.56] seconds)
py313-test-instrumentation-pyramid: FAIL code 2 (22.67=setup[0.02]+cmd[3.76,3.68,3.73,4.36,6.79,0.31] seconds)
py313-test-instrumentation-sqlalchemy-1: FAIL code 1 (22.50=setup[0.02]+cmd[3.91,3.75,3.97,4.03,6.82] seconds)
py313-test-instrumentation-system-metrics: FAIL code 1 (27.84=setup[0.02]+cmd[4.25,8.01,5.68,5.47,3.67,0.73] seconds) Note: ONLY tests failed
Note on py313-test-instrumentation-system-metrics: I think this one can be made compatible with Python 3.13, because only the tests fail (the tox environment builds and the tests do run), but I don't have enough knowledge yet to fix those two failing tests.
Does This PR Require a Core Repo Change?
- [x] Yes. - Link to PR: https://github.com/open-telemetry/opentelemetry-python/pull/4067
- [ ] No.
Checklist:
See CONTRIBUTING.md for the style guide, changelog guidelines, and more.
- [ ] Followed the style guidelines of this project
- [ ] Changelogs have been updated
- [x] Unit tests have been added
- [ ] Documentation has been updated
Do they work locally? I think it's premature to add them to CI, since we depend on other packages' support and wheels (some C-API symbols were removed) more than on anything on our side.
Makes sense.
@xrmx Would it be desirable to have a separate, non-mandatory job that runs the tests only against the latest Python pre-release, so we get a heads-up if we will break on upcoming releases of Python?
> Makes sense.
>
> @xrmx Would it be desirable to have a separate, non-mandatory job that runs the tests only against the latest Python pre-release, so we get a heads-up if we will break on upcoming releases of Python?
I think CI is already slow enough :)
> I think CI is already slow enough :)
I mean, the tests will have to be added eventually, so we're probably not getting around that. :)
Is there anything in particular that is slow? Is it a concurrency issue, i.e., not enough runners? Would clustering tests somehow help? I can take a look if there's anything that I see that could be improved (appreciate any pointers about current pain points there!).
I think getting tests to run against 3.13 RCs would be great for uncovering issues early. I of course get that that's not always possible, especially if there are dependencies. Could we start by testing some core part of the codebase that isn't blocked? I.e., not the instrumentations?
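Something along these lines is what I have in mind — a rough sketch only (job layout and versions are illustrative, not an actual workflow in this repo):

```yaml
# Sketch of an optional CI job that tests against the latest Python
# pre-release without blocking the build (names are illustrative).
jobs:
  test-python-prerelease:
    runs-on: ubuntu-latest
    continue-on-error: true  # advisory: a failure here doesn't fail CI
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.13"
          allow-prereleases: true  # falls back to betas/RCs until 3.13 final
      - run: pip install tox
      - run: tox -f py313
```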
> I think CI is already slow enough :)
>
> I mean, the tests will have to be added eventually, so we're probably not getting around that. :)
>
> Is there anything in particular that is slow? Is it a concurrency issue, i.e., not enough runners? Would clustering tests somehow help? I can take a look if there's anything that I see that could be improved (appreciate any pointers about current pain points there!).
The current pain point, I think, is that the checkout of the core libraries from git is slow. There is a PR switching to uv that should help: https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2667
> I think getting tests to run against 3.13 RCs would be great for uncovering issues early. I of course get that that's not always possible, especially if there are dependencies. Could we start by testing some core part of the codebase that isn't blocked? I.e., not the instrumentations?
I'm not doubting it's useful. They are less useful if our instrumented libraries do not yet work with Python 3.13, though. Some comments ago I asked if you had run them locally; if everything is fine (modulo needing the same exclusions we have for 3.12 in the workflows), then it's fine to add them. If we need to add temporary workarounds because packages don't have wheels yet, then I would prefer to wait at least for a final 3.13.
Looks like we are reaching some kind of limit and some tests won't run? Anyway, there is at least pydantic to bump.
I do not know about the limits, sorry.
I have added the exclusions for boto and kafka-python.
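An exclusion here just means not adding the py313 factor for those environments; roughly, as a sketch (not the exact lines in tox.ini):

```ini
# Sketch: environments for libraries without 3.13 support simply keep
# their previous factor range, so no py313 env is generated for them.
[tox]
envlist =
    py3{8,9,10,11,12}-test-instrumentation-boto
    py3{8,9,10,11,12}-test-instrumentation-kafka-python
```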
I ran all the test suites using Python 3.13.0b1, and these are the ones that are failing:
py313-test-instrumentation-aiopg: FAIL code 1 (63.75=setup[0.08]+cmd[13.91,12.72,7.29,9.46,20.29] seconds)
py313-test-instrumentation-botocore: FAIL code 1 (112.76=setup[1.06]+cmd[17.77,11.54,8.03,9.44,64.91] seconds)
py313-test-instrumentation-django-1: FAIL code 1 (28.75=setup[0.98]+cmd[10.57,17.20] seconds)
py313-test-instrumentation-falcon-1: FAIL code 2 (71.02=setup[0.26]+cmd[18.80,6.74,7.39,18.26,19.13,0.44] seconds)
py313-test-instrumentation-falcon-2: FAIL code 1 (78.57=setup[0.28]+cmd[20.51,9.25,17.03,10.84,20.66] seconds)
py313-test-instrumentation-fastapi: FAIL code 1 (111.77=setup[0.22]+cmd[11.47,7.51,17.36,15.19,60.01] seconds)
py313-test-instrumentation-fastapi-slim: FAIL code 1 (102.10=setup[0.53]+cmd[13.85,17.04,30.62,9.05,31.01] seconds)
py313-test-instrumentation-flask-0: FAIL code 1 (46.91=setup[0.02]+cmd[25.25,21.65] seconds)
py313-test-instrumentation-urllib3-1: FAIL code 1 (24.29=setup[0.65]+cmd[23.64] seconds)
py313-test-instrumentation-psycopg2: FAIL code 1 (77.35=setup[1.23]+cmd[9.88,5.79,7.86,7.42,45.17] seconds)
py313-test-instrumentation-pyramid: FAIL code 2 (66.54=setup[1.03]+cmd[10.71,6.82,6.29,12.09,28.69,0.91] seconds)
py313-test-instrumentation-asyncpg: FAIL code 1 (52.69=setup[0.03]+cmd[7.12,7.29,12.24,8.79,17.23] seconds)
py313-test-instrumentation-grpc: FAIL code 1 (280.81=setup[1.06]+cmd[9.51,10.82,6.56,19.25,233.61] seconds)
py313-test-instrumentation-sqlalchemy-1: FAIL code 1 (87.16=setup[0.62]+cmd[9.28,12.58,14.33,8.82,41.54] seconds)
py313-test-instrumentation-remoulade: FAIL code 2 (78.88=setup[0.25]+cmd[12.77,16.87,7.68,16.37,23.28,1.66] seconds)
py313-test-instrumentation-celery: FAIL code 1 (121.81=setup[1.14]+cmd[17.80,8.39,15.38,42.73,22.45,13.93] seconds)
py313-test-instrumentation-system-metrics: FAIL code 1 (114.00=setup[0.64]+cmd[21.56,13.99,12.82,44.14,17.84,3.01] seconds)
py313-test-instrumentation-tortoiseorm: FAIL code 1 (149.58=setup[0.91]+cmd[11.70,10.05,33.60,41.12,52.20] seconds)
py313-test-instrumentation-httpx-0: FAIL code 1 (141.74=setup[1.12]+cmd[15.88,25.46,54.34,7.77,35.94,1.24] seconds)
py313-test-instrumentation-confluent-kafka: FAIL code 1 (59.27=setup[1.12]+cmd[11.14,17.07,13.74,9.01,7.19] seconds)
py313-test-instrumentation-cassandra: FAIL code 2 (67.08=setup[1.17]+cmd[11.27,17.76,18.86,4.79,12.56,0.66] seconds)
py313-test-processor-baggage: FAIL code 1 (49.19 seconds)
So a question, @xrmx: will otel only support Python 3.13 once all the instrumentations work with 3.13? (It could take a while for all those libraries to add support for 3.13.)
I know there will be a lot of work needed to make opentelemetry-python-contrib compatible with Python 3.13. I am also aware that this will probably not happen before Python 3.13 final is out. But at least we now have something in place to run the test suites against Python 3.13.
@xrmx You can close this PR if it is cluttering the PRs. We can reopen it at a later time.
I guess the way to approach this would be:
- Make everything in `opentelemetry-python` compatible with Python 3.13
- Enable 3.13 tests for all instrumentations in this repo that "just work" out of the box with 3.13
- Migrate the rest of the instrumentations one by one and make them compatible. (This could take some time; e.g., Celery is not the fastest at adopting new Python versions.)
> So a question, @xrmx: will otel only support Python 3.13 once all the instrumentations work with 3.13? (It could take a while for all those libraries to add support for 3.13.)
I think this is the wrong question to ask :) Also, it's not that I decide here; I'll try to help :)
> I know there will be a lot of work needed to make opentelemetry-python-contrib compatible with Python 3.13. I am also aware that this will probably not happen before Python 3.13 final is out. But at least we now have something in place to run the test suites against Python 3.13.
>
> @xrmx You can close this PR if it is cluttering the PRs. We can reopen it at a later time.
>
> I guess the way to approach this would be:
>
> - Make everything in `opentelemetry-python` compatible with Python 3.13
> - Enable 3.13 tests for all instrumentations in this repo that "just work" out of the box with 3.13
> - Migrate the rest of the instrumentations one by one and make them compatible. (This could take some time; e.g., Celery is not the fastest at adopting new Python versions.)
The correct question, in my opinion, would be: how can I help to have 3.13 supported when it is released? Your list looks fine to me; the actual timeline depends on people doing the work. So the first thing would be to understand why things are failing. For 3.12 we did one big "add support for 3.12" PR, but there were a few things that could be fixed separately, and we did so. For -contrib, at least 3 people opened a PR to add 3.12 support. The list of things to do should probably also include being able to run 3.13 tests in CI :sweat_smile: Maybe https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2687 will do the trick.
To elaborate a bit on the possible failures in tests: for 3.12 there were places where the Python interpreter became more picky, like warning about wrong assert methods, and those can be fixed right now. Another cause could be missing wheels / language-compatibility issues, and as I said already, I would avoid adding temporary workarounds for these and wait for updated packages instead.
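For example, the wrong assert methods are an easy, version-independent fix (a minimal illustration; Python 3.12 removed long-deprecated unittest aliases such as assertEquals):

```python
import unittest


class ExampleTest(unittest.TestCase):
    def test_addition(self):
        # self.assertEquals(1 + 1, 2)  # deprecated alias, removed in Python 3.12
        self.assertEqual(1 + 1, 2)  # supported spelling on all versions


if __name__ == "__main__":
    unittest.main()
```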
Updated the branch to use the new workflow generation script.
When running the tests locally with `tox -f py313`, everything is green. It should be the same in CI now.
@xrmx could you trigger the CI run, so we can check?
All GH checks succeeded except the changelog one (I fixed this in the meantime).
So I guess this is good to review!
Shouldn't we support Python 3.13 in the api before we add support for the instrumentations?
@antonpirker please clean up your base branch so you don't have other people's commits, thanks!
Then it would be nice to open a new PR with the following commits so we can reduce this PR to just the enablement:
- https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2724/commits/40c98a84dedfeb2efbc79232b3ad4d4ce92f8eee
- https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2724/commits/d898ac4b2ef61e06dff8ee1c2cc893fc27857d91
- https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2724/commits/c053eef343b28d6a29f1f02e9358ad4971fa4576
- https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2724/commits/6014c59cafd2722331ba7dcb73c873467083a057
@xrmx moved those commits into a new PR: https://github.com/open-telemetry/opentelemetry-python-contrib/pull/2887/
- Replaced by https://github.com/open-telemetry/opentelemetry-python-contrib/pull/3134