Improve IANAZone.offset performance
This is more than 2.5x faster in my testing. Example benchmark script:
```js
import { setTimeout } from 'timers/promises';
import { DateTime } from './src/luxon.js';

// Warm up
for (let i = 0; i < 10; i++) {
  DateTime.fromJSDate(new Date(), { zone: 'America/New_York' });
}

await setTimeout(200);

console.time('run');
for (let i = 0; i < 10000; i++) {
  DateTime.fromJSDate(new Date(), { zone: 'America/New_York' });
}
console.timeEnd('run');
```
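For context, the approach here is to format with `timeZoneName: 'longOffset'` and read the offset out of the trailing `GMT±hh:mm` token, rather than reassembling it from individual date/time parts. A rough sketch of that calculation (illustrative only, not the actual diff; `parseLongOffset` and the per-zone cache are hypothetical):

```js
// Illustrative sketch of a longOffset-based zone offset calculation.
// Not Luxon's actual code; parseLongOffset is a made-up helper name.
const dtfCache = new Map();

function getLongOffsetDtf(zone) {
  // Intl.DateTimeFormat construction is expensive, so cache one per zone.
  let dtf = dtfCache.get(zone);
  if (!dtf) {
    dtf = new Intl.DateTimeFormat("en-US", {
      timeZone: zone,
      timeZoneName: "longOffset",
    });
    dtfCache.set(zone, dtf);
  }
  return dtf;
}

// Parse a trailing "GMT±hh:mm" (or bare "GMT") into an offset in minutes.
function parseLongOffset(zone, ts) {
  // e.g. "6/30/2021, GMT-04:00" on current V8/SpiderMonkey; not spec-guaranteed.
  const formatted = getLongOffsetDtf(zone).format(new Date(ts));
  const match = formatted.match(/GMT(?:([+-]\d{2}):(\d{2}))?$/);
  if (!match) return null; // unexpected format; a real implementation would fall back
  if (!match[1]) return 0;  // plain "GMT" means UTC
  const hours = parseInt(match[1], 10);
  const minutes = parseInt(match[2], 10);
  return hours * 60 + Math.sign(hours) * minutes;
}

console.log(parseLongOffset("America/New_York", Date.UTC(2021, 6, 1))); // -240 (EDT)
```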
The problem with this is that the output format of `longOffset` is not described in the specification, so there is no guarantee that it will work like this in all engines, or even that it will keep working in the same way.
@diesieben07 This problem exists for `hackyOffset` and `partsOffset` as well. Luxon generally makes certain assumptions about the engine, such as it having `Intl.DateTimeFormat`, an `en-US` locale, and a format that looks a certain way. Engines are recommended to use the CLDR data, and in the case of long offset formatting that data is fairly rigid.
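For illustration, the parts-based path relies on output shaped roughly like this (a sketch with options paraphrased from Luxon's DTF setup, not copied from it):

```js
// Rough illustration of the en-US, CLDR-shaped output that partsOffset-style
// code relies on: each field arrives as a separate part, so only numbers
// (and the era) need to be interpreted.
const dtf = new Intl.DateTimeFormat("en-US", {
  hour12: false,
  timeZone: "America/New_York",
  era: "short",
  year: "numeric",
  month: "2-digit",
  day: "2-digit",
  hour: "2-digit",
  minute: "2-digit",
  second: "2-digit",
});

for (const { type, value } of dtf.formatToParts(new Date(Date.UTC(2021, 0, 1)))) {
  console.log(type, value);
  // e.g. month "12", day "31", year "2020", era "AD", hour "19", ...
}
```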
`hackyOffset` is only used if `formatToParts` is not supported, which is pretty much only in ancient runtimes. I don't see how `partsOffset` is susceptible to the same issue, as it looks at each part individually and only parses numbers.
At the very least we should check that the expected format is returned by `longOffset` and only set `hasLongOffset` if that is so.
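Something along these lines, as a sketch (the helper name and the probe values are illustrative, not the actual implementation):

```js
// Sketch of the suggested check: only trust the longOffset fast path if the
// engine both accepts the option and produces a parseable "GMT±hh:mm" token.
function supportsLongOffsetFormat() {
  try {
    const dtf = new Intl.DateTimeFormat("en-US", {
      timeZone: "America/New_York",
      timeZoneName: "longOffset",
    });
    // Expect something like "1/1/2021, GMT-05:00" on current engines.
    return /GMT[+-]\d{2}:\d{2}$/.test(dtf.format(new Date(Date.UTC(2021, 0, 1))));
  } catch {
    // Older engines throw a RangeError for the unknown timeZoneName value.
    return false;
  }
}

const hasLongOffset = supportsLongOffsetFormat();
```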
> `hackyOffset` is only used if `formatToParts` is not supported, which is pretty much only in ancient runtimes. I don't see how `partsOffset` is susceptible to the same issue, as it looks at each part individually and only parses numbers.
`partsOffset` is certainly less susceptible; perhaps the only part where it might fail is the `BC` check.
> At the very least we should check that the expected format is returned by `longOffset` and only set `hasLongOffset` if that is so.
I've added this. I also extracted the offset calculations out of the method and made each function get its own DTF, since they rely on different formats. The check is expensive, which is why I had to do it inside the `offset` method; otherwise importing the library is slower (the `Intl.DateTimeFormat` constructor is painfully slow).
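Roughly this shape, with hypothetical names, building on the probe sketched above:

```js
// Hypothetical sketch of running the probe lazily, on first use, so that
// importing the library does not pay the Intl.DateTimeFormat construction
// cost up front. supportsLongOffsetFormat is the probe sketched above.
let hasLongOffset; // undefined = not probed yet

function canUseLongOffset() {
  if (hasLongOffset === undefined) {
    hasLongOffset = supportsLongOffsetFormat(); // expensive, so done only once
  }
  return hasLongOffset;
}
// offset() would then call canUseLongOffset() and pick either the longOffset
// path or the existing formatToParts-based calculation.
```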
In general, I like this kind of optimization, as long as the feature detection works well. For `partsOffset`, and even Intl support in general, we went through years of feature detection and updates to the support matrix as support stabilized and we bumped the minimum browser version up to support them. And then we left `hackyOffset` in there as a fallback to `formatToParts` anyway. My point isn't that we shouldn't do this -- 2.x is nothing to sneeze at -- but that we need to do it with care.

I'm also somewhat hoping there's a better way to detect support for `longOffset`, but I don't have any bright ideas there.
I found this issue while investigating a performance problem in `offset` that impacted a use case of parsing a few hundred thousand dates at a time. I noticed that `hackyOffset` was substantially faster than `partsOffset` (about 2.5x on Node 18). I wrote a benchmark to compare (sketched below the table) and got even more dramatic results in the browser:
| Test case | Result |
| --- | --- |
| hackyOffset | 718,239 ops/sec ±0.67% (68 runs sampled) |
| partsOffset | 208,390 ops/sec ±0.21% (69 runs sampled) |
| dtf.formatToParts | 298,653 ops/sec ±0.52% (68 runs sampled) |
| dtf.format | 822,318 ops/sec ±0.18% (60 runs sampled) |
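For reference, a Benchmark.js suite along these lines produces output in that shape (a sketch; the `hackyOffset`/`partsOffset` rows would additionally need Luxon's internal helpers, which aren't exported):

```js
// Sketch of a Benchmark.js comparison in the shape of the table above.
// Only the raw Intl cases are shown; benchmarking hackyOffset/partsOffset
// directly would require importing them from Luxon's source.
import Benchmark from "benchmark";

const dtf = new Intl.DateTimeFormat("en-US", {
  hour12: false,
  timeZone: "America/New_York",
  year: "numeric",
  month: "2-digit",
  day: "2-digit",
  hour: "2-digit",
  minute: "2-digit",
  second: "2-digit",
});
const date = new Date();

new Benchmark.Suite()
  .add("dtf.format", () => dtf.format(date))
  .add("dtf.formatToParts", () => dtf.formatToParts(date))
  .on("cycle", (event) => console.log(String(event.target)))
  .run();
```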
It seems like there might be good reasons not to enable this in the browser, but I am very tempted to use this approach in Node, where I control the environment...
Any update on this? I've been using it in production since then and it seems to be working well. Note that this is even faster than `hackyOffset` (1.4x as fast in my testing).