Add 'Export data' button which delivers JSON as downloadable file
The human-readable data is embedded as JSON within the HTML, whereas the data dump file written to the server is binary. Users may want to view this data, so let's make it easy for them. Since the data is already in the HTML file, this shouldn't increase file weight and should be very easy to implement.
I'd say JSON-only is fine. A CSV option might be a nice-to-have, but I'd say it's not a priority: the kind of users who will be exporting data files are probably also the kind of users who can convert JSON to CSV themselves if that's what they need (and maybe it makes more sense server-side as a CLI option so it can be used with --collect-only?).
Regarding where in the UI: I'm also drawing up suggestions for a feature to import data, to allow different Doctor benchmarks to be compared side by side. It'd make sense for the import/export buttons to sit together, and the most intuitive place for them would be the very top header, aligned right opposite the Doctor logo, since exporting is a step outside the context of the current analysis and recommendations.
Maybe a dropdown button would be best with (initially) two options:
- Save JSON file (default)
- Log to console (simply console.logs the data, for people who prefer to explore data objects in dev tools; saves them having to hunt for it)
It'd then be easy to add alternatives if there was any demand.
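As a rough sketch of what those two dropdown actions might look like (the function names, the `data` argument, and the file name are all illustrative assumptions, not the actual clinic-doctor internals):

```javascript
// Hypothetical sketch of the two dropdown actions.
function toJsonString (data) {
  // Pretty-print so the exported file is readable in a text editor
  return JSON.stringify(data, null, 2)
}

function saveJsonFile (data, filename = 'doctor-data.json') {
  // Wrap the JSON in a Blob so the browser can offer it as a file download
  const blob = new Blob([toJsonString(data)], { type: 'application/json' })
  const url = URL.createObjectURL(blob)
  const anchor = document.createElement('a')
  anchor.href = url
  anchor.download = filename
  anchor.click() // triggers the browser's save dialog / download
  URL.revokeObjectURL(url)
}

function logToConsole (data) {
  // For users who prefer to explore the data object in dev tools
  console.log('Doctor data:', data)
}
```

Since the data is already embedded in the page, both options are a few lines of client-side code with no extra payload.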
I've had the same thoughts :)
> (and maybe makes more sense server-side as a CLI option so it can be used with --collect-only?)
Yes. I don't think this should be a UI feature. A user who would want the JSON/CSV data is an advanced user and would find using the UI to get the data an unnecessary step.
My thinking has been to separate the data formatting submodule into its own module (clinic-doctor-format) so that any user can read the data programmatically, the same way we do it internally.
This would also make version management of the output format easier, as the format version would be the clinic-doctor-format version. Currently it is undefined, and a semver-major update may still be format-compatible with the previous version.
Makes sense. There's only one other case I can think of, which is if colleague A emails an HTML output file to a more specialist colleague B, and B has some specialist reason for wanting to see the data but isn't in a position to re-run the exact same benchmark with different CLI arguments.
I think this could be satisfied much more easily, without running into versioning issues, with something like what we do on Bubbleprof: a simple console.log telling the user how to access the data object (I know it's intended as a temporary dev aid, but it's a useful one).
> I think this could be satisfied much more easily without running into versioning issues with something like what we do on Bubbleprof, a simple console.log telling the user how to access the data object
I would be okay with that. But it feels very hypothetical.
Do we need this to develop? Or can we live without?
I do not think this helps the primary function of Doctor, but it would be good to have in the future.
It's certainly not something we need for the Doctor V1 launch. Assuming no one objects, I'll make it match Bubbleprof in exposing the data on window.data (it's also a nice convenience for development), then close this issue.
Then some time after launch we can revisit the question of data output formats and data-file versioning policies more thoroughly for Doctor V2? I'll also save my thoughts on data import and multiple-benchmark comparisons until then.
👍 on this approach.
@AlanSl since we released Bubbleprof and there have been a lot of changes since January: do you have an update on this one?
@BridgeAR Bubbleprof now only exposes window.data in debug mode, which is currently activated by flipping a property to true in the code (not ideal). I plan to try adding a --debug flag to the clinic CLI, which would then activate the tools' various debugging features (sourcemaps in browserify, data exposure in the UI). This would then be the same in Doctor and Bubbleprof (and, in the near future, Flame).
So I'd say the status of this is to-do, pending some sort of consistent debug activation in clinic generally (probably a --debug flag in the CLI, unless anyone has a better idea).
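To make the intent concrete, here's a minimal sketch of how a single debug flag could gate these features (the --debug flag and the option names here are assumptions about a future clinic CLI, not shipped behaviour):

```javascript
// Hypothetical sketch: one shared flag activating each tool's debug features.
function parseDebugFlag (argv) {
  return argv.includes('--debug')
}

function debugFeatures (debug) {
  return {
    browserify: { debug },    // browserify's `debug` option enables sourcemaps
    exposeWindowData: debug   // whether the generated UI assigns window.data
  }
}
```

Each tool would then read `exposeWindowData` when generating its HTML, so Doctor, Bubbleprof and (later) Flame behave consistently.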
This is something I'd like as well, so I can more easily compare stats across various result sets.