
Compare against sum of bests

Open moorecp opened this issue 3 years ago • 6 comments

I wanted the ability to easily analyze my runs against my best splits. I'm not sure if this is something desired in the app itself or not, though. My feelings won't be hurt if the added complexity isn't desired.

Things of note:

  • I disabled the sum of best run from showing anything in the charts. It didn't make sense to have that there since that data is already part of your normal charts.
  • I hid the gold_timeline from the sum of best split timeline. It was just a big empty bar since every split is gold and the base run would never show anything there.
  • I removed the Swap Comparison button because, at least as of right now, you can't view a SumOfBestRun as a Run, so you can't make that the base run for the #show action. I could add support for that, but it seems unnecessary.
  • A SumOfBestRun won't display a video, because it doesn't make sense.
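The shape of such an object could look something like this minimal sketch (all names here are illustrative, not the actual PR code): a PORO that presents a run's gold segments as a run-like object while opting out of features that don't apply.

```ruby
# Hypothetical sketch of a SumOfBestRun-style PORO. Segments are plain
# hashes here for self-containment; the real app uses ActiveRecord models.
class SumOfBestRun
  def initialize(run)
    @run = run
  end

  # Every segment's duration is that segment's personal best ("gold").
  def segments
    @run.segments.map { |seg| seg.merge(duration_ms: seg[:best_duration_ms]) }
  end

  def duration_ms
    segments.sum { |seg| seg[:duration_ms] }
  end

  # Features that don't make sense for a sum of best are disabled.
  def video?
    false
  end

  def show_in_charts?
    false
  end

  def gold_timeline?
    false
  end
end
```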

moorecp avatar Jul 10 '20 19:07 moorecp

Kudos, SonarCloud Quality Gate passed!

  • 0 Bugs
  • 0 Vulnerabilities (0 Security Hotspots to review)
  • 0 Code Smells
  • No coverage information
  • 0.0% duplication

sonarcloud[bot] avatar Jul 10 '20 20:07 sonarcloud[bot]

> I am totally for the ability to compare to sums of bests. However, doing it in this way, while super interesting, I think works against us in maintenance cost.

I agree it introduces some issues with regard to adding new functionality and the like going forward. You'd have to try to think about "Can/Should this apply to a SumOfBestRun, too?", etc. even when it doesn't appear to be related at all. So that's a clear negative with this implementation.

I'm trying to envision how to simplify comparisons in the way you're talking while also still allowing for comparing parts of Runs you do want to compare. For instance, when comparing conventional Runs currently, you do want to compare things like videos, histories, the right side charts, etc. So, there is still plenty of stuff you care about from a traditional Run in the majority of cases. Unless your thought was something as simple as, if the user requests a Run comparison, a Run is added to the compare list while also adding its Segments to a separate collection. We then look at the Run collection where we care about Run things and the Segment collection where we only actually need Segment things.
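The two-collection idea could be sketched roughly like this (`add_comparison` is an illustrative helper, not existing splits-io code):

```ruby
# Hypothetical sketch: a requested comparison feeds both a Run collection
# (for run-level features) and a separate Segment collection.
def add_comparison(run, runs, segments)
  runs << run                    # Run-level things: video, history, charts
  segments.concat(run.segments)  # Segment-level things: timing comparisons
end
```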

moorecp avatar Jul 13 '20 14:07 moorecp

To throw my 2 cents in, what about a "Comparable" model concern? Then all the things related to comparing anything run-like would be in one place, which would make future comparisons easier. Some of them could then just be PORO (plain old Ruby object) classes in the models folder.
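A minimal sketch of that concern idea, assuming shared comparison logic lives in one module that both ActiveRecord models and POROs include (`RunComparable` is an illustrative name, since Ruby already ships a built-in `Comparable`; a Rails app would likely use `ActiveSupport::Concern`, plain Ruby is shown for self-containment):

```ruby
# Hypothetical sketch of the proposed "comparable" concern.
module RunComparable
  # Shared comparison logic lives in one place.
  def ahead_of?(other)
    duration_ms < other.duration_ms
  end

  def delta_ms(other)
    duration_ms - other.duration_ms
  end
end

# A PORO in the models folder can include it just like a model would.
class PlainRun
  include RunComparable
  attr_reader :duration_ms

  def initialize(duration_ms)
    @duration_ms = duration_ms
  end
end
```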

BatedUrGonnaDie avatar Jul 13 '20 16:07 BatedUrGonnaDie

🤔 Good points all around. I do like the idea of extracting common stuff to a concern, but idk if it would solve this completely -- like @moorecp says there are a lot of things we care about when comparing Run-to-Run like videos and the "swap comparison" button, that we can't deal with when comparing Run-to-Segments. Unless I'm misunderstanding how you want to use concerns.

> if the user requests a Run comparison, a Run is added to the compare list while also adding its Segments to a separate collection.

Hmm... I like this line of thinking, but with a twist. What if we made it so comparing to your own sum-of-best actually is a Run-to-Run comparison (comparing the run against itself), and we have a parameter that tells us which segments to compare against (PB by default, but you can specify to compare against sum-of-best).

So where our URLs now look like

/abc?compare=123

We support a param like

/abc?compare=123&to=sob

So that comparing to your own sum-of-best is just a matter of specifying

/abc?compare=abc&to=sob

Would that work? We still need some custom code, but a lot of it seems like it would melt away since we are still dealing with a real run on the other end. And if some of the UI isn't perfect for the first version because of that, it's not a big deal.

In the future we could add support for also specifying a from=sob parameter, to compare your sum-of-bests against someone else's.
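The `to=sob` param could resolve to a segment set with something as simple as this sketch (`comparison_segments_for` and the hash keys are hypothetical names, not existing splits-io code):

```ruby
# Hypothetical sketch: pick which segment durations to compare against,
# defaulting to PB, based on a "to" query parameter.
def comparison_segments_for(run, to: "pb")
  case to
  when "sob"
    run.segments.map { |seg| seg[:best_duration_ms] }
  else
    run.segments.map { |seg| seg[:pb_duration_ms] }
  end
end
```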

glacials avatar Jul 15 '20 15:07 glacials

I think I hear what you're saying, but I feel like that gets us into a similar situation as we have with SumOfBestRun. If I'm comparing from a run to a set of segments like a sum of best, I still won't have a video and all of those things that would be part of that secondary run. Unless we're thinking that if we see we're comparing against ourselves, we just don't do certain things across the board.

This might fall apart slightly if you want to implement something like a comparison against a community sum of best like you mentioned, as there is no run there.

In thinking about @BatedUrGonnaDie's proposal, in theory you could just add fields that say whether each feature is supported for the thing we're using to compare. It might be okay, or it might end up super messy. I haven't put much thought/research into it.

moorecp avatar Jul 15 '20 16:07 moorecp

If you wanted to go the concern route, then it would almost definitely still end up with a SumOfBestRun. The idea was more to make the approach scalable to future comparisons we may add, such that any model/object that includes it would be comparable instantly (or near instantly, by stubbing out some known methods for its limitations).
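Combining the two ideas, "stubbing out known methods" could look like this sketch (all names here are hypothetical): objects declare which features they support, and the view checks the flags rather than the object's class.

```ruby
# Hypothetical sketch: comparable objects advertise feature support,
# so views never need to know whether they hold a full Run or not.
module ComparisonFeatures
  # Defaults: a full Run supports everything.
  def supports_video?
    true
  end

  def supports_charts?
    true
  end
end

class FullRun
  include ComparisonFeatures
end

class SumOfBest
  include ComparisonFeatures

  # Stub out what doesn't apply to a segment-only comparison.
  def supports_video?
    false
  end

  def supports_charts?
    false
  end
end
```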

BatedUrGonnaDie avatar Jul 15 '20 19:07 BatedUrGonnaDie