Results visualization dashboard
Composer has a brilliant dashboard (https://app.mosaicml.com/explorer/imagenet) which summarises all of their experiment results. It lets you inspect results against a set of hyperparameters, spot simple trends, and see which methods work well together and which work poorly (the library is for fast deep learning training). This is a significant innovation in open-source documentation.
The main metrics would be runtime and some measure of performance (e.g. AUC at a few fixed fractions of the labelled dataset, or something like that).
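To make that concrete, one dashboard row could look something like the sketch below. Every field name here is a hypothetical suggestion, not an existing BaaL structure:

```python
from dataclasses import dataclass

@dataclass
class ALResult:
    """One dashboard row. All field names are hypothetical suggestions."""
    method: str             # acquisition function, e.g. "BALD" or "random"
    dataset: str            # e.g. "CIFAR10"
    label_fraction: float   # fraction of the pool labelled, e.g. 0.1
    auc: float              # test AUC at that labelling budget
    runtime_seconds: float  # wall-clock time for the active-learning loop
    seed: int               # so repeated runs can be aggregated
```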
I've been working with Superset a lot, and I'm pretty sure their SaaS offering, Preset, has a very generous free tier that would cover this use case. I'd be happy to set this up. Let me know your thoughts.
Hi George,
Oh that looks very cool!!
So what would the project look like?
Would we run many experiments on many datasets, with BaaL's website linking to these dashboards hosted on mosaicml?
If your experiments have been run with MLflow or Weights and Biases so far, we'd be able to import them across, but we could also log to a DB going forward.
I was just using mosaicml's tool as an example. We'd host on https://preset.io
This is very blue-sky thinking, but I think they have the right idea (a dashboard as a demo of techniques that work).
We could either create a logger that writes to a DB that Preset reads from, or export logs from whatever you already use (MLflow / WandB etc.); see the sketch below.
We could also have an option for any user to log their results directly to the dashboard when they're working on an open-source dataset (probably coloured / displayed differently to indicate that the results are unverified).
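To make the first option concrete, here's a minimal sketch of such a logger, assuming a SQLite file that Superset/Preset can be pointed at (a real deployment would more likely use Postgres). Table and column names are placeholders:

```python
import sqlite3

class DashboardLogger:
    """Sketch of a results logger writing to a table the dashboard reads.

    Table/column names are placeholders; swap SQLite for Postgres in production.
    """

    def __init__(self, db_path: str = "al_results.db") -> None:
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            """CREATE TABLE IF NOT EXISTS al_results (
                   method TEXT, dataset TEXT, label_fraction REAL,
                   auc REAL, runtime_seconds REAL, seed INTEGER,
                   verified INTEGER  -- 0 = user-submitted, 1 = maintainer-run
               )"""
        )

    def log(self, method: str, dataset: str, label_fraction: float,
            auc: float, runtime_seconds: float, seed: int,
            verified: bool = False) -> None:
        self.conn.execute(
            "INSERT INTO al_results VALUES (?, ?, ?, ?, ?, ?, ?)",
            (method, dataset, label_fraction, auc, runtime_seconds,
             seed, int(verified)),
        )
        self.conn.commit()
```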
Oh, I like that! I think we should do this. There is a lack of leaderboards in active learning, which makes research more difficult.
And websites such as paperswithcode do not show the information we need.
What would be the first step? Gathering the logs we currently have?
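If the existing logs are in MLflow, that first step could be as small as the sketch below (assuming a reasonably recent MLflow; the experiment name is made up):

```python
import sqlite3

import mlflow

# Pull every run from a (hypothetical) benchmark experiment into a pandas
# DataFrame: search_runs returns one row per run, with params and metrics
# flattened into columns.
runs = mlflow.search_runs(experiment_names=["baal-benchmarks"])

# Bulk-load the raw runs into the same SQLite file the dashboard reads,
# to be cleaned up into the al_results schema afterwards.
conn = sqlite3.connect("al_results.db")
runs.to_sql("mlflow_runs_raw", conn, if_exists="replace", index=False)
```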