jaimergp
                                    Wow @dholth, that looks incredible!!!
At first look it might have been a sporadic issue triggered by a temporary network condition. Are you experiencing this problem often, or continuously? One more note: `conda info` reports you...
There are different JSONs involved:

* Repodatas: these are loaded by libmamba.
* `conda-meta/*.json`: these are loaded by conda, so it can be a bottleneck for REALLY large environments, but...
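For reference, a quick sketch to get a feel for how long plain `json` spends on `conda-meta/*.json` for a given environment (point it at whatever prefix you care about):

```python
import json
import sys
import time
from pathlib import Path

# Environment prefix to inspect; defaults to the running interpreter's prefix.
prefix = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(sys.prefix)

start = time.perf_counter()
records = [
    json.loads(path.read_text())
    for path in sorted((prefix / "conda-meta").glob("*.json"))
]
elapsed = time.perf_counter() - start

print(f"Parsed {len(records)} conda-meta records in {elapsed * 1000:.1f} ms")
```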
Big like "this is where I have installed all my packages ever for every project ever, I don't know what environments are". Like hundreds of packages.
This repo (https://github.com/tktech/json_benchmark) might be useful if we ever decide to change JSON parsers. I am looking into this as part of other work and it might not be negligible...
Orrr maybe not? I patched conda so it imports `simdjson` instead of the built-in `json` and ran some `conda install numpy --dry-run` on a base Miniconda environment with conda-libmamba-solver enabled....
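In case anyone wants to reproduce the raw parsing side of that comparison outside of conda, here is a minimal sketch (it assumes `pysimdjson` is installed and you point it at any cached `repodata.json`, e.g. one of the files under `$CONDA_PREFIX/pkgs/cache/`):

```python
import json
import sys
import time

import simdjson  # pip install pysimdjson

# Read the raw repodata.json bytes once so both parsers see the same input.
with open(sys.argv[1], "rb") as f:
    data = f.read()

for label, loads in (("stdlib json", json.loads), ("simdjson", simdjson.loads)):
    start = time.perf_counter()
    loads(data)
    print(f"{label}: {time.perf_counter() - start:.3f} s")
```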
> @jaimergp do you have a repo with your changes in it, or a patch file?

Not really, I only batch replaced the `json` imports with the `simdjson` equivalent ones....
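For the record, the swap amounts to something like this (illustrative only; simdjson is a parser, so any `json.dumps` call sites would still need the stdlib):

```python
# Before:
#   import json
#   data = json.loads(text)

# After: simdjson.loads is a drop-in replacement for json.loads.
import simdjson  # pip install pysimdjson

text = '{"packages": {}, "packages.conda": {}}'
data = simdjson.loads(text)
print(len(data["packages"]))
```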
That's great! I wonder if we can ship something like that in `conda` itself, where the accessor functions can provide which fields are needed... In practice this might mean all...
I have taken Jim's example and added more (almost all) repodata fields, and using the full repodata for noarch we get:

Modified Struct

```python
def query_msgspec(data: bytes) -> list[tuple[int, str]]:
    ...
```
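Since the snippet above renders truncated here, this is roughly what the approach looks like as a self-contained sketch — the field list and the query shape are my reconstruction, not the exact code above. The point is that msgspec only decodes the fields declared on the Struct and skips everything else in the JSON, which is where the speedup comes from:

```python
import msgspec


class PackageRecord(msgspec.Struct):
    # Only declared fields get decoded; unknown keys in the JSON are skipped.
    # The real experiment declared almost all repodata fields.
    name: str
    version: str
    build: str
    build_number: int
    depends: list[str] = []


class Repodata(msgspec.Struct):
    packages: dict[str, PackageRecord] = {}
    # "packages.conda" is not a valid Python identifier, so map it explicitly.
    packages_conda: dict[str, PackageRecord] = msgspec.field(
        default_factory=dict, name="packages.conda"
    )


def query_msgspec(data: bytes) -> list[tuple[int, str]]:
    # Decode a raw repodata.json payload and return (build_number, name)
    # pairs, roughly the shape of the queries in the json_benchmark repo.
    repodata = msgspec.json.decode(data, type=Repodata)
    return [
        (record.build_number, record.name)
        for record in (*repodata.packages.values(), *repodata.packages_conda.values())
    ]
```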
I fiddled a bit more with this example just to reach the same conclusion: JSON loading is not the limiting factor, but the massaging that `conda` does after the fact....
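For anyone who wants to double-check where the time goes, a rough way to do it (run with the Python that has conda installed, and it assumes `python -m conda` works for your install) is to profile a dry-run and sort by cumulative time:

```python
import cProfile
import pstats
import runpy
import sys

# Profile a dry-run install to see how the time splits between JSON parsing
# and the post-processing conda does afterwards. conda exits via SystemExit,
# which cProfile.run tolerates, so the stats still get written.
sys.argv = ["conda", "install", "numpy", "--dry-run"]
cProfile.run('runpy.run_module("conda", run_name="__main__")', "conda.prof")
pstats.Stats("conda.prof").sort_stats("cumulative").print_stats(30)
```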