
[WIP] Web UI dataset

Open VukW opened this issue 1 year ago • 1 comment

Adds functionality for creating & managing datasets

Dataset submission

  • [x] "add new dataset" button on datasets list
  • [ ] "choose benchmark" step
    • [x] PoC (dropdown list)
    • [ ] p1 list benchmarks as cards with detailed info (as on the "benchmarks list" page) and highlight the chosen one
    • [ ] p3 allow choosing a dataprep MLCube instead
  • "Fill text fields" step
    • [x] DS name
    • [x] description
      • [ ] p2 check the length beforehand? (DS submission would fail if the description exceeds 20 characters)
    • [x] location
      • [ ] p2 check length beforehand?
    • [ ] p3 "Submit as prepared" flag
      • needs to be explained somewhere (e.g., in a tooltip)
      • Q: if checked, is the dataset automatically created as operational?
    • [ ] p1 check how errors are displayed & handled
  • "Paths" step
    • [ ] Q: combine with previous step?
    • [x] data path
    • [x] labels path
    • [ ] p2 metadata path (what is it?) (required if the dataset is already prepared)
    • [ ] p2 redesign path picking panel
      • [ ] folders / files distinction
      • [ ] one click selects a folder, double click goes inside?
    • [ ] p2 "go back" button / navigation
  • [x] "Verify entered data" step
  • [x] dataset submission
  • [ ] p1 check how errors are displayed & handled
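
The length checks flagged above (p2) could run client-side before submission so the wizard shows all field errors at once instead of failing server-side. A minimal sketch; the limits below are assumptions and should be read from the MedPerf server's actual model constraints:

```python
# Hypothetical client-side validation for the "fill text fields" step.
# The limits are ASSUMED; look up the real ones in the MedPerf server models.
MAX_NAME_LEN = 20   # assumed
MAX_DESC_LEN = 20   # the issue suggests submission fails past 20 characters
MAX_LOC_LEN = 100   # assumed

def validate_dataset_fields(name: str, description: str, location: str) -> list[str]:
    """Return human-readable errors; an empty list means the form is valid."""
    errors = []
    if not name.strip():
        errors.append("Dataset name must not be empty.")
    elif len(name) > MAX_NAME_LEN:
        errors.append(f"Dataset name exceeds {MAX_NAME_LEN} characters.")
    if len(description) > MAX_DESC_LEN:
        errors.append(f"Description exceeds {MAX_DESC_LEN} characters.")
    if len(location) > MAX_LOC_LEN:
        errors.append(f"Location exceeds {MAX_LOC_LEN} characters.")
    return errors
```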

Submitted dataset displaying (dataset details page)

  • [ ] p2 display paths (data path, labels path)

Dataset preparation

  • [x] "Prepare" button on dataset detail page
  • [x] Preparation run
  • [ ] p1 strip magic bytes from log messages
  • [ ] p1 distinguish header messages from regular lines in the medperf code
  • [ ] p1 display header messages properly
  • [ ] p1 display log messages without raw json
  • [ ] p2 log line highlighting?
  • [ ] p1 spinner next to text headers to show the process is running
  • [ ] p1 check how errors are displayed & handled
  • [ ] p1 rename the "back to the dataset" button
  • [ ] link to the report / display report if it exists
  • [ ] p1 check how errors are displayed & handled (display exceptions in the log)
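
For the "strip magic bytes" item: if the stray bytes turn out to be ANSI terminal escape sequences from the tools running inside the container (an assumption, to be verified against real preparation logs), a small per-line filter would be enough:

```python
import re

# ASSUMPTION: the "magic bytes" are ANSI CSI escape sequences (colors,
# cursor movement) emitted by the tools running inside the container.
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]")

def clean_log_line(line: str) -> str:
    """Drop ANSI escape codes so the line renders cleanly in the web UI."""
    return ANSI_ESCAPE.sub("", line)
```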

Prepared dataset displaying (dataset details page)

  • [ ] p2 if report exists, "Allowed automatic report submission" flag
  • [ ] p2 if report exists, link / path to the report
  • [ ] p1 if the dataset is prepared, unlock the next-step button "set operational" (locked otherwise)

Set operational

  • [ ] p0 "Set operational" button on dataset detail page
  • [ ] p0 Set operational
  • [ ] p0 Disable button if already operational
  • [ ] p1 check how errors are displayed & handled
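
The unlock/disable items above reduce to one gating predicate on the dataset's state. A sketch; the field names (`state`, `is_prepared`) and the state values are assumptions about the UI's dataset view model:

```python
from dataclasses import dataclass

@dataclass
class DatasetView:
    state: str          # e.g. "DEVELOPMENT" or "OPERATION" (assumed values)
    is_prepared: bool

def can_set_operational(ds: DatasetView) -> bool:
    """Enable "Set operational" only for prepared datasets still in development."""
    return ds.is_prepared and ds.state == "DEVELOPMENT"
```

The same predicate can drive both the button's disabled attribute and a server-side check, so the two never disagree.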

Associate

  • [ ] p1 "Associate with the benchmark" button on the dataset detail page (no choice of benchmarks)
  • [ ] p3 allow choosing a benchmark if the dataset was created with a dataprep MLCube
  • [ ] p1 associate
  • [ ] p1 check how errors are displayed & handled

Run benchmark

  • [ ] p1 "Run benchmark" button on the dataset detail page
  • [ ] p1 running benchmark page with logs
  • [ ] Q: runs history?
  • [ ] Q: display the result?
  • [ ] p1 check how errors are displayed & handled

Submit result

  • [ ] p1 "Submit result" button on the dataset detail page
  • [ ] p1 Submit result
  • [ ] p1 check how errors are displayed & handled

General dataset UI

  • [ ] p2 redesign the prepare / set-operational / … buttons into a single navigation line?
  • [ ] p1 hide the buttons panel if the viewer is not the dataset owner?
  • [ ] p3 redesign the state display (we'd have dev/op floating blocks in the header plus a set-op button in the footer)

Technical refactoring

  • [ ] p1 split routes into separate files: dataset/submission.py, dataset/preparation.py
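
One way the split could look, assuming a package per entity with one aggregating module (the layout below is hypothetical; only the two file names come from the item above):

```
routes/dataset/
├── __init__.py      # builds the combined dataset router from the sub-modules
├── submission.py    # "add new dataset" wizard endpoints
└── preparation.py   # preparation run + log streaming endpoints
```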

VukW avatar Sep 16 '24 16:09 VukW
