We'd like your feedback on the new MLflow experiment page
In MLflow 2.2.0, we introduced several improvements to the MLflow experiment page, including a configurable chart view providing visual model performance insights, a revamped parallel coordinates experience for tuning, and a streamlined table view with enhancements for search and filtering. We believe that these improvements will greatly speed up model comparison for data scientists and give them more time to focus on the thing they love doing most: building awesome models.
If you have feedback on the new experience or requests for additional features, please let us know by commenting below.

Hi, thanks for the nice job!
In the Compare runs tab, it would be much more readable to transpose the tables in order to:
1 - Be able to see all runs at once without scrolling horizontally
2 - Be consistent with the standard Table view
At the very least, it would help to provide an option to transpose all tables. In practice, when dealing with 50+ runs, the MLflow 1.28 "Only show diff" switch was very efficient. Think of the case where we want to identify the run in which a parameter marginally differs, in order to extract information.
As a researcher, it is extremely useful to be able to see the marginal differences in parameters w.r.t. all runs at once. Moreover, the introduction of inter-experiment analysis is really useful.
Note all the blank space left in the table by 1-digit parameter values below:
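As a stopgap outside the UI, the old "show diff" behavior can be approximated on the frame returned by `mlflow.search_runs` by dropping the parameter columns that are constant across runs. A minimal sketch with pandas, using a hand-built frame in place of a real tracking server:

```python
import pandas as pd

def only_diff(params: pd.DataFrame) -> pd.DataFrame:
    """Keep only the columns whose values differ across runs,
    mimicking the old 'Only show diff' switch."""
    varying = [col for col in params.columns
               if params[col].nunique(dropna=False) > 1]
    return params[varying]

# Stand-in for mlflow.search_runs(...) output; in practice the
# parameter columns are prefixed with "params.".
runs = pd.DataFrame({
    "params.lr": ["0.01", "0.01", "0.001"],
    "params.batch_size": ["32", "32", "32"],
    "params.seed": ["0", "1", "2"],
})

print(only_diff(runs).columns.tolist())  # ['params.lr', 'params.seed']
```

`params.batch_size` is identical everywhere, so it disappears, which is exactly the view that makes 50+ runs scannable.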
Thank you for your consideration,
@ReHoss Thank you for your feedback. Have you tried the new chart view on the experiment page? Our goal is for the chart view to be a replacement for this run comparison view.
I am discovering the new version, the chart view looks very promising :).
Moreover, I think a useful MLflow UI defines two separate fundamental tools:
(1) - An experiment tracking (logging) tool, such as the table view.
(2) - An experiment results analysis tool, such as the chart view.
In order to make an accurate analysis of results through (2), the user should be able to observe what marginally differs between runs; that's why the "show diff" switch is extremely useful both for tracking and analysis:
First, it allows the user to ensure the experiments are identical except for a small subset of hyperparameters.
Second, it helps identify which parameters impact the metrics.
So please take into account my comment on the readability of the "Show only diff" table.
Example from something like MLflow 1.18:
@BenWilson2 @dbczumar @harupy @WeichenXu123 Please assign a maintainer and start triaging this issue.
The new chart view is indeed looking really good! Excited for this. I'm already using this in a few collaborative projects on DagsHub. I discovered that due to the way the state of the chart view is saved to local storage in the browser, I can't share the chart view that I created with my collaborators. Each one sees what they configured, which is a shame since I think this feature could be much more powerful if it's shareable.
To reproduce this: configure some custom charts in the chart view, then send the link to someone else; when they open the MLflow UI, they will not see any of the configured charts.
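One way the state could be made shareable (an assumption about a possible fix, not how MLflow works today) is to serialize the chart configuration into a URL fragment instead of, or in addition to, local storage, so that the link itself carries the view. A sketch of the round trip:

```python
import base64
import json

def encode_chart_state(state: dict) -> str:
    """Serialize a chart configuration into a URL-safe string
    that could live in a shareable link's fragment."""
    raw = json.dumps(state, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def decode_chart_state(fragment: str) -> dict:
    """Recover the chart configuration from the shared fragment."""
    return json.loads(base64.urlsafe_b64decode(fragment.encode("ascii")))

# Hypothetical chart config, not MLflow's real schema.
state = {"charts": [{"metric": "val_loss", "type": "line"}]}
fragment = encode_chart_state(state)
assert decode_chart_state(fragment) == state
```

Anyone opening a link containing the fragment would then reconstruct the same charts, rather than each collaborator seeing only their own local-storage state.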
Hope this is useful feedback! Thanks for the awesome feature.
Hello! Great work on this tool! In the Table View, the metrics displayed are the ones from the last epoch. It would be nice to have the minimum/maximum achieved for that model instead of the last value logged.
Is it possible to provide a button to select/deselect all experiments? Currently I have to click the experiments one by one if I want to compare all the runs across different experiments. I've been looking forward to this for a long time.
Great work!
@dbczumar @harupy, please bring back the show diff button.
I was at first very enthusiastic about the chart view. Then I noticed that it does not use the min or max of the metric that I select, which makes it useless for comparing the quality of different runs (unless I'm missing something, of course).
Is it possible to add this? If I'm mistaken, please correct me.
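As a workaround outside the UI, the best value of a metric over training can be computed from each run's full metric history. In a real setup the history would come from `MlflowClient().get_metric_history(run_id, key)`; a plain list of (step, value) pairs stands in below so the sketch is self-contained:

```python
def best_metric(history, mode="min"):
    """Return the best value over a [(step, value), ...] metric history,
    rather than the last logged value shown in the table view."""
    values = [value for _, value in history]
    return min(values) if mode == "min" else max(values)

# Hypothetical validation-loss history for one run.
val_loss_history = [(0, 0.92), (1, 0.54), (2, 0.61), (3, 0.58)]
print(best_metric(val_loss_history, mode="min"))  # 0.54, not the last value 0.58
```

Ranking runs on this per-run best (or worst) value is closer to how one actually compares model quality than ranking on whatever happened to be logged last.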
In the chart view, in the pane on the left, does the new version allow you to see more characters of a run name? Is the run name expandable or scrollable?
In version 2.11.3 I've seen run names getting truncated so that it's hard to see the rest of the name. This is even worse with nested runs: basically, the UI makes it so that a parent run name can be at most around 21 characters and a nested run name around 14 characters.