dowel
Add dots to alternating tabular lines
Tables with keys of varying lengths are hard to read, since some of the key names end up far from their values. This change adds a sequence of dots on all odd lines, so that lines are easier to match up with their keys.
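A rough sketch of the idea (a hypothetical helper with invented names, not the actual code in `src/dowel/tabular_input.py`): pad every key out to the width of the longest key, using dots instead of spaces on every other row.

```python
def format_rows(pairs, key_width=None):
    """Render key/value rows, filling every other row with dots.

    Hypothetical sketch of the approach in this PR -- not the
    actual dowel implementation.
    """
    if key_width is None:
        key_width = max(len(key) for key, _ in pairs)
    lines = []
    for i, (key, value) in enumerate(pairs):
        # Odd rows get dots so the eye can follow a key to its value.
        fill_char = '.' if i % 2 == 1 else ' '
        filler = fill_char * (key_width - len(key))
        lines.append('{} {} {}'.format(key, filler, value))
    return '\n'.join(lines)
```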
Codecov Report
Merging #37 into master will increase coverage by 0.26%. The diff coverage is 100%.
```diff
@@            Coverage Diff             @@
##           master      #37      +/-   ##
==========================================
+ Coverage    94.2%   94.47%   +0.26%
==========================================
  Files           7        7
  Lines         328      344      +16
  Branches       48       53       +5
==========================================
+ Hits          309      325      +16
  Misses         12       12
  Partials        7        7
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/dowel/tabular_input.py | 100% <100%> (ø) | :arrow_up: |
I definitely think that alternating lines are easier to scan than adding dots on every line, especially when text is scrolling quickly. I don't have a very strong opinion on sparse vs. dense dots, but sparse dots look a little less distracting to me. A realistic example with the current implementation looks like this:
```
------------------------------------- ------------
AverageDiscountedReturn                -1.83889
AverageReturn . . . . . . . . . . .    -6.90437
Entropy                                4.18283
EnvExecTime . . . . . . . . . . . .    1.5736
Extras/EpisodeRewardMean               -6.96981
GaussianMLPBaseline/ExplainedVariance  0.326456
GaussianMLPBaseline/LossAfter          1.19292
GaussianMLPBaseline/LossBefore . . .   1.26723
GaussianMLPBaseline/MeanKL             0.00447475
GaussianMLPBaseline/dLoss . . . . .    0.074309
GaussianMLPPolicy/Entropy              4.17182
GaussianMLPPolicy/KL . . . . . . . .   0.00811469
GaussianMLPPolicy/KLBefore             0
GaussianMLPPolicy/LossAfter . . . .    -0.011946
GaussianMLPPolicy/LossBefore           5.96046e-09
GaussianMLPPolicy/dLoss . . . . . .    0.0119461
Iteration                              3
MaxReturn . . . . . . . . . . . . .    -6.56137
MinReturn                              -7.28088
NumTrajs . . . . . . . . . . . . . .   12
Perplexity                             65.5508
PolicyExecTime . . . . . . . . . . .   0.703327
ProcessExecTime                        0.056324
StdReturn . . . . . . . . . . . . .    0.230479
------------------------------------- ------------
```
With denser dots, it looks like this:
```
------------------------------------- ------------
AverageDiscountedReturn                -1.83889
AverageReturn ....................... -6.90437
Entropy                                4.18283
EnvExecTime ......................... 1.5736
Extras/EpisodeRewardMean               -6.96981
GaussianMLPBaseline/ExplainedVariance  0.326456
GaussianMLPBaseline/LossAfter          1.19292
GaussianMLPBaseline/LossBefore ...... 1.26723
GaussianMLPBaseline/MeanKL             0.00447475
GaussianMLPBaseline/dLoss ........... 0.074309
GaussianMLPPolicy/Entropy              4.17182
GaussianMLPPolicy/KL ................ 0.00811469
GaussianMLPPolicy/KLBefore             0
GaussianMLPPolicy/LossAfter ......... -0.011946
GaussianMLPPolicy/LossBefore           5.96046e-09
GaussianMLPPolicy/dLoss ............. 0.0119461
Iteration                              3
MaxReturn ........................... -6.56137
MinReturn                              -7.28088
NumTrajs ............................ 12
Perplexity                             65.5508
PolicyExecTime ...................... 0.703327
ProcessExecTime                        0.056324
StdReturn ........................... 0.230479
------------------------------------- ------------
```
With even denser dots, it looks like this (which I think isn't really easier to read than no dots):
```
------------------------------------- ------------
AverageDiscountedReturn ............. -1.83889
AverageReturn ....................... -6.90437
Entropy ............................. 4.18283
EnvExecTime ......................... 1.5736
Extras/EpisodeRewardMean ............ -6.96981
GaussianMLPBaseline/ExplainedVariance 0.326456
GaussianMLPBaseline/LossAfter ....... 1.19292
GaussianMLPBaseline/LossBefore ...... 1.26723
GaussianMLPBaseline/MeanKL .......... 0.00447475
GaussianMLPBaseline/dLoss ........... 0.074309
GaussianMLPPolicy/Entropy ........... 4.17182
GaussianMLPPolicy/KL ................ 0.00811469
GaussianMLPPolicy/KLBefore .......... 0
GaussianMLPPolicy/LossAfter ......... -0.011946
GaussianMLPPolicy/LossBefore ........ 5.96046e-09
GaussianMLPPolicy/dLoss ............. 0.0119461
Iteration ........................... 3
MaxReturn ........................... -6.56137
MinReturn ........................... -7.28088
NumTrajs ............................ 12
Perplexity .......................... 65.5508
PolicyExecTime ...................... 0.703327
ProcessExecTime ..................... 0.056324
StdReturn ........................... 0.230479
------------------------------------- ------------
```
full density
```
------------------------------------- ---
a.................................... 100
bbbbbbb.............................. 55
ccccc................................ 55
d.................................... 55
ThisIsAnnoylinglyLong/LikeActuallyWTF 23
ee................................... 55
ff................................... 55
------------------------------------- ---
```
half vertical
```
------------------------------------- ---
a                                     100
bbbbbbb.............................. 55
ccccc                                 55
d.................................... 55
ThisIsAnnoylinglyLong/LikeActuallyWTF 23
ee................................... 55
ff                                    55
------------------------------------- ---
```
half horizontal
```
------------------------------------- ---
a . . . . . . . . . . . . . . . . . . 100
bbbbbbb . . . . . . . . . . . . . . . 55
ccccc . . . . . . . . . . . . . . . . 55
d . . . . . . . . . . . . . . . . . . 55
ThisIsAnnoylinglyLong/LikeActuallyWTF 23
ee . . . . . . . . . . . . . . . . .  55
ff . . . . . . . . . . . . . . . . .  55
------------------------------------- ---
```
half both
```
------------------------------------- ---
a                                     100
bbbbbbb . . . . . . . . . . . . . . . 55
ccccc                                 55
d . . . . . . . . . . . . . . . . . . 55
ThisIsAnnoylinglyLong/LikeActuallyWTF 23
ee . . . . . . . . . . . . . . . . .  55
ff                                    55
------------------------------------- ---
```
To my eyes, full-density and half-horizontal are much more readable than the vertical-skipping options. Keys with wildly varying lengths make the vertical-skipping options hard to read if you get unlucky with which lines are skipped (i.e. if the dots end up skipped on a bunch of short lines but kept on the longer ones).
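The four variants above really differ in just two knobs: horizontal dot spacing and how many rows get dots. A hypothetical parameterization of both (invented names, not the dowel API):

```python
def dot_fill(key, width, h_step=1):
    """Dot filler for one row; h_step=2 yields the sparse '. . .' style."""
    gap = width - len(key)
    return ''.join('.' if i % h_step == 0 else ' ' for i in range(gap))

def render_table(pairs, h_step=1, v_step=2):
    """Render rows; v_step=1 dots every row, v_step=2 every other row."""
    width = max(len(key) for key, _ in pairs)
    lines = []
    for i, (key, value) in enumerate(pairs):
        if i % v_step == v_step - 1:
            filler = dot_fill(key, width, h_step)
        else:
            filler = ' ' * (width - len(key))
        lines.append('{} {} {}'.format(key, filler, value))
    return '\n'.join(lines)
```

"Full density" is then `h_step=1, v_step=1`, "half both" is `h_step=2, v_step=2`, and so on.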
I don't want to flame -- how about we put this to a poll on the RESL Slack channel? I generated some GIFs.
Poll results from RESL (n=12):
I realized I should have done a ranking poll instead...
| H-density | V-density | % | Votes |
|---|---|---|---|
| 100% | 100% | 50% | 6 |
| 100% | 50% | 33% | 4 |
| 50% | 100% | 17% | 2 |
| 50% | 50% | 0% | 0 |
Here's how Louise ranked them (best to worst, H-density / V-density):
1. 100% / 100%
2. 50% / 100%
3. 50% / 50%
4. 100% / 50%
People seem to overwhelmingly prefer dense dots (83%). They are more split on vertical density (67% for dotting every line).
You've spent the most time staring at this so I'll let you pick what to do and won't comment any more. I hope the surveys were helpful.