Brains should be dynamically registered
Current problem
There is an Astroid function for detecting Numpy members (attribute_looks_like_numpy_member). I'm sure it is very useful for code that uses Numpy, but for code that doesn't, it is a big waste of time. For example, when running Pylint against Pylint itself, this function is called 1,441,536 times and never returns true, because Pylint doesn't use Numpy.
This makes a measurable time difference.
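To illustrate why the predicate fires so often: a check like attribute_looks_like_numpy_member runs on every attribute-access node in a module, whether or not the module touches Numpy. A minimal stdlib-only sketch (using Python's ast module as a stand-in for Astroid's node tree; the node classes are analogous but not identical):

```python
import ast

SOURCE = """
import collections
c = collections.Counter()
c.update("abc")
print(c.most_common(1))
"""

# Count attribute-access nodes: each one would trigger a
# "does this look like a numpy member?" predicate, even though
# this snippet never imports numpy.
attribute_count = sum(
    isinstance(node, ast.Attribute) for node in ast.walk(ast.parse(SOURCE))
)
print(attribute_count)  # 3 — one per attribute access above
```

Scaled to a large codebase, those per-node predicate calls add up to the millions of invocations mentioned above.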
Desired solution
For a given Astroid brain, some effort should be made to detect whether that brain is actually needed. This might entail doing a pass over imports, or something like that. For a codebase that does not use Numpy, the Numpy brain should not be registered and none of its checks should be run, and similar for other brains.
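The "pass over imports" idea could be sketched as follows. This is a hypothetical helper (the function name and the single-file scope are my assumptions, and a real implementation would need to handle transitive imports and work on Astroid's own tree); it uses the stdlib ast module to decide whether a module ever imports numpy:

```python
import ast


def module_uses_numpy(source: str) -> bool:
    """Return True if the module imports numpy in any form.

    A minimal sketch of the proposed pre-pass that would gate
    registration of the numpy brain. Handles "import numpy",
    "import numpy.linalg", "import numpy as np", and
    "from numpy import ...". Does not follow transitive imports.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # alias.name is the dotted module path, e.g. "numpy.linalg"
            if any(alias.name.split(".")[0] == "numpy" for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            # node.module is None for relative imports like "from . import x"
            if node.module and node.module.split(".")[0] == "numpy":
                return True
    return False


print(module_uses_numpy("import numpy as np"))        # True
print(module_uses_numpy("from pathlib import Path"))  # False
```

The same gate could be applied per brain: only register the Numpy brain (and run its checks) when this returns True for some module in the codebase.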
Additional context
No response
I agree with not calling code that is never useful; however, this won't improve performance much. I looked into this before: since the numpy brain is the first one registered, we do a lot of inference work in it. Not loading it would just move that work to a different brain without noticeably improving performance.
I think this would be a good thing, but it's not easy to do. I think we'd have to go from a "batteries included" pylint that "just works" for numpy to a lot of plugins with optional installs like pylint[numpy,spelling], or at least an option in the configuration — all solutions that require user action during an upgrade. (It's not only numpy; we have brains for dozens of libs.) I don't know how much numpy 2.0 will break the numerous numpy brains we have for 1.0, so a user action might be necessary anyway (or do we autodetect the numpy version?).
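For context, the optional-install approach described above would look something like the following pyproject.toml fragment. This is purely illustrative: the numpy extra and its brain package do not exist today (though pylint does already ship a spelling extra that pulls in pyenchant):

```toml
# Hypothetical packaging sketch — extra names and the brain package
# name are assumptions, not real published packages.
[project.optional-dependencies]
spelling = ["pyenchant"]
numpy = ["pylint-brain-numpy"]
```

Users would then opt in with pip install "pylint[numpy,spelling]", which is exactly the upgrade-time user action the comment is concerned about.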
https://github.com/pylint-dev/astroid/pull/2550 reduces the cost of numpy checks on codebases that don't use numpy