[WIP] Web UI dataset
Adds functionality for creating & managing datasets
Dataset submission
- [x] "add new dataset" button on datasets list
- [ ] "choose benchmark" step
- [x] PoC (dropdown list)
- [ ]
p1list benchmarks as cards with detailed info (as on "benchmarks list" page), + highlight chosen one - [ ]
p3allow to choose dataprep mlcube instead
- "Fill text fields" step
- [x] DS name
- [x] description
- [ ]
p2check length beforehand? DS submission would fail if description is > 20 characters ?
- [ ]
- [x] location
- [ ]
p2check length beforehand?
- [ ]
- [ ]
p3"Submit as prepared" flag- needs to be explained somewhere (like, in a tooltip)
- Q: if checked, is dataset automatically created as operational?
- [ ]
p1check how errors are displayed & handled
- "Paths" step
- [ ] Q: combine with previous step?
- [x] data path
- [x] labels path
- [ ]
p2metadata path (what is it?) (is required if dataset is already prepared) - [ ]
p2redesign path picking panel- [ ] folders / files difference
- [ ] one click - chosing folder, double click - go inside?
- [ ]
p2"go back" button / navigation
- [x] "Verify entered data" step
- [x] dataset submission
- [ ]
p1check how errors are displayed & handled
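
A minimal sketch of the pre-submission length check mentioned above, assuming the server rejects descriptions over 20 characters as noted; the field names and the location limit are illustrative, not the actual medperf schema:

```python
# Hypothetical client-side validation run before the "Verify entered data"
# step. MAX_DESCRIPTION_LEN reflects the 20-character server limit noted
# above; MAX_LOCATION_LEN is an assumed placeholder to verify separately.
MAX_DESCRIPTION_LEN = 20
MAX_LOCATION_LEN = 20  # assumption: confirm against the server model

def validate_submission_fields(name: str, description: str, location: str) -> list[str]:
    """Return user-facing errors; an empty list means the form may be submitted."""
    errors = []
    if not name.strip():
        errors.append("Dataset name is required.")
    if len(description) > MAX_DESCRIPTION_LEN:
        errors.append(f"Description must be at most {MAX_DESCRIPTION_LEN} characters.")
    if len(location) > MAX_LOCATION_LEN:
        errors.append(f"Location must be at most {MAX_LOCATION_LEN} characters.")
    return errors
```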
Displaying a submitted dataset (dataset details page)
- [ ] p2: display paths (data path, labels path)
Dataset preparation
- [x] "Prepare" button on dataset detail page
- [x] Preparation run
  - [ ] p1: clean magic bytes out of log messages (see the log-cleaning sketch after this section)
  - [ ] p1: distinguish header messages from regular lines in the medperf code
  - [ ] p1: display header messages properly
  - [ ] p1: display log messages without raw JSON
  - [ ] p2: log line highlighting?
  - [ ] p1: spinner next to text headers to show the process is running
  - [ ] p1: check how errors are displayed & handled
  - [ ] p1: rename the "back to the dataset" button
  - [ ] link to the report / display the report if it exists
- [ ] p1: check how errors are displayed & handled (display exceptions in the log)
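
A sketch of how the preparation log lines could be cleaned and classified for the points above. Stripping ANSI escape sequences covers the usual source of "magic bytes"; the `==` header marker and the JSON `message` field are assumptions, since the real convention depends on how the medperf code tags its output:

```python
import json
import re

# CSI-style ANSI escape sequences -- the usual source of "magic bytes"
# in raw preparation logs.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def clean_line(raw: str) -> str:
    """Strip ANSI escapes and non-printable control characters."""
    line = ANSI_RE.sub("", raw)
    return "".join(ch for ch in line if ch.isprintable() or ch == "\t")

def classify_line(line: str) -> tuple[str, str]:
    """Split a cleaned line into ("header" | "message", display_text).

    The "==" header marker is hypothetical; the real marker would be
    whatever the medperf code uses to tag stage headers.
    """
    if line.startswith("=="):
        return "header", line.strip("= ").strip()
    # JSON payloads are shown by their human-readable message only.
    try:
        payload = json.loads(line)
        if isinstance(payload, dict) and "message" in payload:
            return "message", str(payload["message"])
    except json.JSONDecodeError:
        pass
    return "message", line
```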
Displaying a prepared dataset (dataset details page)
- [ ] p2: if a report exists, an "Allowed automatic report submission" flag
- [ ] p2: if a report exists, a link / path to the report
- [ ] p1: if the dataset is prepared, unlock the next button, "set operational" (locked if not prepared)
Set operational
- [ ] p0: "Set operational" button on the dataset detail page
- [ ] p0: set operational
- [ ] p0: disable the button if already operational (a gating sketch follows this list)
- [ ] p1: check how errors are displayed & handled
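
A minimal sketch of the button gating described in this section and the previous one, assuming a dataset object with `is_prepared` and `state` attributes; those attribute names, and the "OPERATION" state value, are illustrative rather than the actual medperf model:

```python
def dataset_action_buttons(dataset) -> dict[str, bool]:
    """Decide which action buttons are enabled on the dataset detail page.

    Assumes `dataset.is_prepared` (bool) and `dataset.state` (e.g.
    "DEVELOPMENT" / "OPERATION"); both names are assumptions.
    """
    is_operational = dataset.state == "OPERATION"
    return {
        "prepare": not dataset.is_prepared,
        # Locked until prepared, disabled again once operational.
        "set_operational": dataset.is_prepared and not is_operational,
        # Association only makes sense for an operational dataset.
        "associate": is_operational,
    }
```

Keeping the gating in one helper would also make the "hide the buttons panel for non-owners" idea (see "General dataset UI" below) a single extra condition.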
Associate
- [ ] p1: "Associate with the benchmark" button on the dataset detail page (no choice of benchmarks)
- [ ] p3: choice of benchmarks if the dataset was created with a dataprep mlcube
- [ ] p1: associate
- [ ] p1: check how errors are displayed & handled
Run benchmark
- [ ] p1: "Run benchmark" button on the dataset detail page
- [ ] p1: running-benchmark page with logs (see the streaming sketch after this list)
- [ ] Q: runs history?
- [ ] Q: displaying the result?
- [ ] p1: check how errors are displayed & handled
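
For the live-log page, a generic sketch of streaming run output line by line; the subprocess invocation is a stand-in, since the real page would hook into the medperf execution flow and push lines over SSE or a WebSocket:

```python
import subprocess
from typing import Iterator

def stream_run_logs(cmd: list[str]) -> Iterator[str]:
    """Yield a run's output line by line, suitable for a live log view.

    `cmd` is a placeholder; the actual UI would invoke the medperf
    benchmark-execution flow rather than an arbitrary subprocess.
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    assert proc.stdout is not None  # guaranteed by stdout=PIPE
    for line in proc.stdout:
        yield line.rstrip("\n")
    proc.wait()
    yield f"[process exited with code {proc.returncode}]"
```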
Submit result
- [ ] p1: "Submit result" button on the dataset detail page
- [ ] p1: submit the result
- [ ] p1: check how errors are displayed & handled
General dataset UI
- [ ] p2: redesign the prepare / set-operational / ... buttons into a single navigation line?
- [ ] p1: hide the buttons panel if you're not the dataset owner?
- [ ] p3: redesign state display (we'd have dev/op floating blocks in the header + a set-operational button in the footer)
Technical refactoring
- [ ] p1: split routes into separate files: `dataset/submission.py`, `dataset/preparation.py` (see the router sketch below)
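
One possible shape for the split, assuming the web UI uses FastAPI-style routers; the file layout, prefixes, and route names are illustrative:

```python
# dataset/submission.py -- submission-related routes
from fastapi import APIRouter

submission_router = APIRouter(prefix="/datasets", tags=["dataset-submission"])

@submission_router.get("/new")
def new_dataset_form():
    """Render the multi-step 'add new dataset' form."""
    ...

# dataset/preparation.py -- preparation-related routes
preparation_router = APIRouter(prefix="/datasets", tags=["dataset-preparation"])

@preparation_router.post("/{dataset_id}/prepare")
def prepare_dataset(dataset_id: int):
    """Kick off the preparation run and stream its logs."""
    ...

# app entry point -- register both routers
from fastapi import FastAPI

app = FastAPI()
app.include_router(submission_router)
app.include_router(preparation_router)
```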