Clarify whether 'unused' bits are ignored
For example, short-form timestamps can have unused bits: the high bit in a year-precision timestamp, the high five bits in a month-precision timestamp, and so on.
A writer could reasonably decide to encode all of the time unit fields up front and then emit only the relevant bytes afterward, eliminating a lot of branching. In that case, the unused bits could contain 1s; the spec needs to clarify whether that is allowed. Allowing it enables this optimization, but it also complicates direct byte-for-byte comparisons in (e.g.) unit tests.
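To make the scenario concrete, here is a minimal sketch of that write-then-truncate strategy. The field widths and byte counts (7-bit biased year, 4-bit month, 5-bit day) and the `encode_short_timestamp` function are hypothetical illustrations, not the normative Ion 1.1 layout; the point is only that truncating a fully packed buffer can leave leftover bits set in the last emitted byte.

```rust
// Hypothetical field layout for illustration only (not the normative Ion 1.1 layout):
//   bits 0..7   biased year (year - 1970; assumes year >= 1970 for this sketch)
//   bits 7..11  month
//   bits 11..16 day
enum Precision {
    Year,  // emit 1 byte
    Month, // emit 2 bytes
    Day,   // emit 2 bytes
}

fn encode_short_timestamp(year: u32, month: u32, day: u32, precision: Precision) -> Vec<u8> {
    // Pack every field unconditionally -- no branching on precision here.
    let packed: u32 = ((year - 1970) & 0x7F)
        | ((month & 0x0F) << 7)
        | ((day & 0x1F) << 11);
    let bytes = packed.to_le_bytes();

    // Only now decide how many bytes the requested precision needs.
    let len = match precision {
        Precision::Year => 1,
        Precision::Month | Precision::Day => 2,
    };
    // With Month precision, the day bits written above remain in the high
    // five bits of the second emitted byte -- the "unused" bits in question.
    bytes[..len].to_vec()
}
```

For instance, `encode_short_timestamp(2024, 6, 15, Precision::Month)` emits a second byte whose high five bits still carry the day value, so two writers encoding the same month-precision timestamp can produce different bytes; that is where the byte-for-byte comparison concern comes from.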
I'm in favor of it, but want additional input.
@popematt raised a related question in this PR:

> [...in the encoding for long-form timestamp...] If there's "day" precision, then the day will already be non-zero. If anything, shouldn't we be zero-ing the day if it's month precision?