Use Pydantic v1 from v2 backdoor
Shim until #192 is diagnosed
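The "backdoor" here is presumably the `pydantic.v1` compatibility namespace that pydantic 2 ships; a minimal sketch of the shim (not necessarily the exact imports this codebase uses) might look like:

```python
# Compatibility shim: pydantic >= 2 re-exports the full v1 API under
# pydantic.v1, so v1-style code can run against either major version.
try:
    from pydantic.v1 import BaseModel, Field, validator  # pydantic >= 2
except ImportError:
    from pydantic import BaseModel, Field, validator  # pydantic < 2
```

Either branch yields the v1-style `BaseModel` and `validator`, so downstream code written for pydantic 1 keeps working unchanged.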
Are there normally tracebacks in failing tests? I'm just seeing failures and a crash; `pytest -v` usually gives me enough info.
Not sure why the v2 tests are timing out. My next ideas are tracking test durations to see if something is slow, and failing fast on the first failure.
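Both ideas map onto standard pytest flags (`--durations=N` for the N slowest tests, `-x` to stop at the first failure). A self-contained sketch, assuming pytest is installed and using a throwaway test file for illustration:

```shell
# Hypothetical sanity test just so the invocation below has something to run.
cat > test_sanity.py <<'EOF'
def test_passes():
    assert True
EOF

# --durations=5 reports the 5 slowest tests; -x stops at the first failure,
# so a hang or a slow test is easier to pinpoint.
pytest -v -x --durations=5 test_sanity.py
```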
I should note that `micromamba list` isn't reporting that v1 is installed (on top of the v2 from conda-forge) in those matrix runs, but based on the behavior (tests passing in that half of the matrix) I think it is being installed as expected. I think this is a bug/limitation in how micromamba reports what's installed: it just ignores packages installed from PyPI.
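One way to double-check (a sketch, assuming `python` here is the interpreter micromamba manages): compare micromamba's report against pip's, and ask the interpreter directly which version actually imports.

```shell
# micromamba only tracks conda-managed packages, so a pip-installed
# pydantic may be missing from its report:
command -v micromamba >/dev/null && micromamba list | grep -i pydantic

# pip's own listing does include packages installed from PyPI:
python -m pip list | grep -i pydantic

# The decisive check: which pydantic actually imports in this environment?
# (pydantic.version.VERSION exists in both v1 and v2.)
python -c "import pydantic; print(pydantic.version.VERSION)" 2>/dev/null \
    || echo "pydantic not importable"
```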
Okay, `-x` surfaces some real errors related to changes in how validators work. These take a while to bubble up, at least in the first test that fails. (A simpler unit test might get to the point more directly; I haven't tried.)
I'm just noting this now and not looking into it further, but this should be a good starting point when it's picked up.
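For whoever picks this up, a sketch of the kind of validator change involved (illustrative only, not Interchange's actual code): pydantic 2 replaces the v1 `@validator` decorator with `@field_validator`, and v1-style validators only keep working through the `pydantic.v1` namespace.

```python
from pydantic import BaseModel, field_validator  # pydantic >= 2 API


class Settings(BaseModel):
    name: str

    # v1 code would write @validator("name"); in v2 the decorator is
    # @field_validator, conventionally stacked with @classmethod.
    @field_validator("name")
    @classmethod
    def strip_name(cls, value: str) -> str:
        return value.strip()
```

With this model, `Settings(name="  foo  ").name` comes back stripped; under pydantic 1 the same effect required the old `@validator` decorator.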
@dotsdl and @ianmkenney - It looks like the simple "can we get away with doing nothing" tests here indicate that deeper changes may be needed for the pydantic 2 release. Interchange will become pydantic 2-only in version 0.4 (ETA ~late September). You can pin to pydantic 1 for a bit, but our support window for bugfixes in the pydantic 1-compatible Interchange versions will be just a few months. So we're happy to advise here, but it looks like some code changes in Alchemiscale will be needed for the pydantic 2 transition.
Closing due to #192. Thanks again for your efforts here @mattwthompson!