timeflake
Is there any spec documentation in the readme?
Something similar to the ULID spec (https://github.com/ulid/spec#specification) would be appreciated.
I'd also be interested in how the conversion between a Timeflake and a UUID works in both directions (e.g. what extra field is needed for a lossless conversion from a UUID back to a Timeflake).
Also, is any NULL or MAX value available?
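For context, since a Timeflake is a 128-bit value just like a UUID, the natural sentinels would be the all-zero and all-one values. A hypothetical sketch (the names `NIL_HEX`/`MAX_HEX` are mine for illustration, not part of the timeflake API):

```python
# Hypothetical sentinel values for a 128-bit ID, analogous to the nil UUID.
# Not exposed by the timeflake library; shown only to illustrate what a
# NULL/MAX value could look like.
NIL_HEX = "0" * 32  # all 128 bits zero
MAX_HEX = "f" * 32  # all 128 bits one

assert int(NIL_HEX, 16) == 0
assert int(MAX_HEX, 16) == (1 << 128) - 1
```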
On a side note... I wonder why Timeflake and ULID use a fixed timestamp precision and a fixed random-component length. Would it have made sense to let developers adjust the timestamp precision or the number of random bits (or even shrink them if they feel like it)? Speculating on what this might look like, here's what I'd come up with:
<48b Timestamp><Ext Timestamp>...<X Random bits><Ext Random>...
Extended Timestamp:
0bXXXX_XXX0 - timestamp extension complete (X = random bits)
0bTTTT_TTT1 - next byte is extended sub-millisecond-precision timestamp
Extended Randomness:
0b0XXX_XXXX - no change
0b1RRR_RRRR - next byte is extended randomness
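To make the idea concrete, here is a loose sketch of what a decoder for such timestamp-extension bytes might look like. Everything here (the function name, the exact bit handling, the demo bytes) is my own speculation about the proposed layout, not an existing format; the terminating byte's leftover random bits are glossed over for simplicity.

```python
def parse_var_id(data: bytes):
    """Decode a hypothetical variable-precision ID (speculative layout)."""
    # Fixed 48-bit millisecond timestamp, as in Timeflake/ULID.
    ts_ms = int.from_bytes(data[:6], "big")
    i = 6
    ts_ext = 0
    # Extension bytes: a set low bit means the top 7 bits add extra
    # sub-millisecond precision and another extension byte may follow;
    # a clear low bit means the timestamp is complete.
    while i < len(data) and data[i] & 0x01:
        ts_ext = (ts_ext << 7) | (data[i] >> 1)
        i += 1
    # Simplification: treat everything remaining as the random component
    # (the proposal would also pack 7 random bits into the terminating byte).
    rand = int.from_bytes(data[i:], "big")
    return ts_ms, ts_ext, rand

# 6 timestamp bytes, one extension byte (0b0000_0011 -> adds 1 extra unit
# of sub-millisecond precision), then two random bytes.
demo = (1_234_567).to_bytes(6, "big") + bytes([0b0000_0011, 0x00, 0xAB])
print(parse_var_id(demo))  # (1234567, 1, 171)
```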
Thanks for the feedback!
I updated the readme with a basic spec.
Regarding lossless conversion to a UUID, you can do a roundtrip like this:
flake = timeflake.parse(from_hex='016fb4209023b444fd07590f81b7b0eb')
flake.uuid
>> UUID('016fb420-9023-b444-fd07-590f81b7b0eb')
timeflake.parse(from_bytes=flake.uuid.bytes)
>> Timeflake('016fb4209023b444fd07590f81b7b0eb')
You can also start from a UUID, convert it to a Timeflake and back, but while it's a Timeflake the bits won't have the meaning the spec assigns them, if that makes sense.
For example, a UUIDv4 is almost entirely random, so if you turn one into a Timeflake, the timestamp component will be random rather than ordered (you may get dates far in the past or future). But the conversion itself works:
foo = uuid.uuid4()
bar = timeflake.parse(from_bytes=foo.bytes)
foo.bytes == bar.bytes
>> True
bar.timestamp
>> 5533-11-07T03:07:25.026001 UTC
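For what it's worth, the far-future date above falls out directly from how the timestamp is stored: it's simply the top 48 bits of the 128-bit value, read as milliseconds since the Unix epoch. A quick sketch extracting it by hand (using the timeflake hex from earlier in this thread; the variable names are mine):

```python
import uuid
from datetime import datetime, timezone

# The timestamp component is the top 48 bits of the 128-bit value,
# interpreted as a millisecond Unix timestamp.
flake_uuid = uuid.UUID('016fb420-9023-b444-fd07-590f81b7b0eb')
ms = int.from_bytes(flake_uuid.bytes[:6], "big")
print(datetime.fromtimestamp(ms / 1000, tz=timezone.utc))  # a date in January 2020

# For a random UUIDv4, those same 48 bits are random, hence the arbitrary
# (possibly far-future) date when the bytes are reinterpreted as a timeflake.
rand_ms = int.from_bytes(uuid.uuid4().bytes[:6], "big")
```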
Regarding configurable precision: I think it'd certainly be useful, and I appreciate that you put thought into this, but I'd prefer to keep Timeflake fixed in scope.
But this feature could definitely make for a separate library / unique ID spec with its own characteristics. Worth exploring if you're up for it.
Thanks! Just wondering if there are any obvious reasons why it shouldn't be explored. E.g. could a varying ID size slow down the database?