bug: eval iterates over `model_config_map` but only takes checkpoint path for single model
this only works by accident, because we usually train only one model; it will become a problem when we try to compare multiple models
note this bug affects learncurve too, since it iterates over model_config_map
https://github.com/NickleDave/vak/blob/538c3085c8151eaef1a5bab6813c16af09e2a7cd/src/vak/core/learncurve/learncurve.py#L280
but then passes the whole model_config_map to eval, along with the checkpoint path for just one model
https://github.com/NickleDave/vak/blob/538c3085c8151eaef1a5bab6813c16af09e2a7cd/src/vak/core/learncurve/learncurve.py#L329
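to make the mismatch concrete, here's a minimal self-contained sketch of the current flow -- the function bodies and signatures are simplified stand-ins, not the actual vak code:

```python
def eval(model_config_map, checkpoint_path):
    """stand-in for core.eval: loops over every model in the map,
    but only receives a single checkpoint path"""
    for model_name in model_config_map:
        # with more than one model in the map, every model would be
        # restored from the same checkpoint -- only correct when the
        # map holds exactly one model
        print(f"restoring {model_name} from {checkpoint_path}")


def learncurve(model_config_map):
    """stand-in for core.learncurve: trains each model separately,
    producing one checkpoint per model..."""
    for model_name in model_config_map:
        checkpoint_path = f"results/{model_name}/checkpoint.pt"
        # ...but then hands the WHOLE map to eval with that one checkpoint
        eval(model_config_map, checkpoint_path)


learncurve({"TweetyNet": {}, "OtherNet": {}})
```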
a simple fix for now would be to have core.eval accept a single model, and have core.learncurve build models_map from model_config_map and iterate over that models_map instead (see the sketch below)
probably core.predict should behave the same way -- just accept a single model
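a sketch of that shape -- names and signatures are simplified stand-ins, not a patch, and this glosses over the input-shape wrinkle noted below:

```python
def models_from_config_map(model_config_map):
    """hypothetical helper standing in for however models_map
    would get built from model_config_map"""
    return {model_name: object() for model_name in model_config_map}


def eval(model, checkpoint_path):
    """core.eval would accept exactly one model plus its checkpoint"""
    print(f"evaluating one model from {checkpoint_path}")


def learncurve(model_config_map):
    """core.learncurve builds models_map itself, then calls eval
    once per (model, checkpoint) pair"""
    models_map = models_from_config_map(model_config_map)
    for model_name, model in models_map.items():
        checkpoint_path = f"results/{model_name}/checkpoint.pt"
        eval(model, checkpoint_path)


learncurve({"TweetyNet": {}, "OtherNet": {}})
```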
one reason for the current design is that models take an input shape when they are instantiated, and we can't know that shape until we instantiate a dataset inside predict or eval. So we can't just build models_map outside those functions
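concretely, the constraint looks something like this (all names are simplified stand-ins); it suggests a per-model eval would accept a model's name and config rather than an instantiated model, and only build the model after the dataset exists:

```python
class FakeDataset:
    """stand-in for the dataset instantiated inside eval/predict"""
    shape = (1, 128, 88)  # made-up input shape


def build_model(model_name, model_config, input_shape):
    """hypothetical stand-in for instantiating one model from its config"""
    print(f"instantiated {model_name} with input_shape={input_shape}")


def eval(model_name, model_config, checkpoint_path):
    """per-model eval: dataset first, then model"""
    dataset = FakeDataset()      # the dataset has to exist first...
    input_shape = dataset.shape  # ...because only now is the shape known
    # so the model can only be instantiated here, inside eval,
    # not up in learncurve before the call
    build_model(model_name, model_config, input_shape)
    # weights would then be restored from checkpoint_path


eval("TweetyNet", {}, "results/TweetyNet/checkpoint.pt")
```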