openstack4j
Timestamps are incorrectly parsed in Ceilometer statistics
Running openstack4j 3.0.3, I see some odd parsing behavior when query results from Ceilometer are deserialized.
The following code:
List<? extends Statistics> statistics =
        client.telemetry().meters().statistics(meter, criteria, downsamplingPeriod);
for (Statistics stat : statistics) {
    this.logger.debug("got statistic: {}", stat);
}
yields the following result (with request logging turned on):
Jan 27, 2017 1:41:06 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Client response received on thread pool-1-thread-2
1 < 200
1 < Content-Length: 763
1 < Content-Type: application/json; charset=UTF-8
1 < Date: Fri, 27 Jan 2017 12:41:06 GMT
1 < Server: Werkzeug/0.10.4 Python/2.7.12
1 < x-openstack-request-id: req-2eb2f3d4-e518-4f67-8aa7-62a1012a76b9
[{"count": 30, "duration_start": "2017-01-27T12:35:57.736000", "min": 14.89424786133495, "duration_end": "2017-01-27T12:40:47.764000", "max": 15.131249975411718, "sum": 449.9349666470593, "period": 300, "period_end": "2017-01-27T12:40:55.426000", "duration": 290.028, "period_start": "2017-01-27T12:35:55.426000", "avg": 14.997832221568643, "groupby": null, "unit": "connections/s"}, {"count": 1, "duration_start": "2017-01-27T12:40:57.772000", "min": 14.988076984758623, "duration_end": "2017-01-27T12:40:57.772000", "max": 14.988076984758623, "sum": 14.988076984758623, "period": 300, "period_end": "2017-01-27T12:45:55.426000", "duration": 0.0, "period_start": "2017-01-27T12:40:55.426000", "avg": 14.988076984758623, "groupby": null, "unit": "connections/s"}]
13:41:06.444 [pool-1-thread-2] DEBUG [QueryCall:fetchStatistics:135] - got statistic: CeilometerStatistics{avg=14.997832221568643, count=30, duration=290.028, durationStart=Fri Jan 27 13:48:13 CET 2017, durationEnd=Fri Jan 27 13:53:31 CET 2017, min=14.89424786133495, max=15.131249975411718, sum=449.9349666470593, period=300, periodStart=Fri Jan 27 13:43:01 CET 2017, periodEnd=Fri Jan 27 13:48:01 CET 2017, unit=connections/s}
13:41:06.454 [pool-1-thread-2] DEBUG [QueryCall:fetchStatistics:135] - got statistic: CeilometerStatistics{avg=14.988076984758623, count=1, duration=0.0, durationStart=Fri Jan 27 13:53:49 CET 2017, durationEnd=Fri Jan 27 13:53:49 CET 2017, min=14.988076984758623, max=14.988076984758623, sum=14.988076984758623, period=300, periodStart=Fri Jan 27 13:48:01 CET 2017, periodEnd=Fri Jan 27 13:53:01 CET 2017, unit=connections/s}
As can be seen, the periodStart and periodEnd fields in the returned CeilometerStatistics objects are incorrect when compared to the period_start and period_end fields in the raw JSON response. For example, periodStart should be Fri Jan 27 13:35:55 CET 2017 in the first case, but is Fri Jan 27 13:43:01 CET 2017.
I haven't been able to dig into the exact cause of the problem, but I've found that it is likely due to using Parser.asDate to parse these timestamps, since running:

Parser.asDate("2017-01-27T12:35:55.426000")

results in Fri Jan 27 13:43:01 CET 2017.
AFAIK, the parser expects milliseconds after the dot, but in this response (as is common for OpenStack and Python) 426000 is microseconds. It is therefore treated as 426000 ms = 426 s = 7 min 6 s, which is exactly the amount by which your result is too large.
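The same greedy-digit behavior can be reproduced with plain java.text.SimpleDateFormat; a minimal sketch (that Parser.asDate uses this exact pattern internally is an assumption here):

import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class MicrosecondMisparseDemo {
    public static void main(String[] args) throws Exception {
        // SimpleDateFormat reads numeric fields greedily, so all six
        // fractional digits end up in the SSS (milliseconds) field.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));

        // 426000 "milliseconds" = 426 s = 7 min 6 s, so this prints an
        // instant 7 min 6 s after 12:35:55 UTC, i.e. 12:43:01 UTC.
        System.out.println(fmt.parse("2017-01-27T12:35:55.426000"));
    }
}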
The java.util.Date class knows nothing about microseconds, and common Java parsers, even com.fasterxml.jackson.databind.util.StdDateFormat (which is used here to parse dates), don't take this into account. Probably another DateFormat could be written and used as the default for parsing dates, or jackson-databind could be fixed to address this issue.
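A minimal sketch of what such a parser could do, truncating the fractional-second field to millisecond precision before delegating to SimpleDateFormat (the class name and pattern are illustrative, not actual openstack4j code; UTC is assumed, as is usual for OpenStack timestamps):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class MicroTolerantParser {

    // Capture at most three fractional digits; any further digits are dropped.
    private static final Pattern FRACTION = Pattern.compile("\\.(\\d{1,3})\\d*");

    public static Date parse(String timestamp) throws ParseException {
        Matcher m = FRACTION.matcher(timestamp);
        // Ceilometer timestamps always carry a fractional part; pad the
        // captured digits to exactly three so ".4" means 400 ms, not 4 ms.
        String normalized = m.find()
                ? m.replaceFirst("." + (m.group(1) + "00").substring(0, 3))
                : timestamp;
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.parse(normalized);
    }
}

With this, "2017-01-27T12:35:55.426000" is first normalized to "2017-01-27T12:35:55.426" and parses to 12:35:55.426 UTC (13:35:55 CET), as expected.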
Since you already dug into the details, would you like to submit a fix, @ZsergUA? It would be much appreciated.
Hey guys, I created a PR to address this issue; could someone check it out? Thanks
FYI, this issue affects authentication as well (seen in 3.0.3). For example, the server returned "expires_at": "2017-08-30T13:46:51.480655Z", which resulted in a java.util.Date of "Wed Aug 30 14:54:51 BST 2017", i.e. 13:46:51 + 480655 milliseconds.
When combined with an OpenStack service that always zeroes the sub-second part of issued_at (my server returned "issued_at": "2017-08-30T12:46:51.000000Z" in this case), this results in the expected login duration being 8 minutes (480655 ms ~= 8 minutes) longer than the server permitted, meaning that my code (the Jenkins openstack-cloud plugin) tried to use the token past its expiry time.
I think what's needed is some form of intelligent parsing so that the first digit of the sub-second field (if present) is treated as hundreds of milliseconds, the second as tens of milliseconds, the third as milliseconds, and any extra digits are ignored; i.e. the field is read to millisecond precision regardless of the precision it was given in.
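For what it's worth, the java.time API (Java 8+) already behaves exactly that way; a minimal sketch using the expires_at value from above:

import java.time.Instant;
import java.util.Date;

public class InstantParsingDemo {
    public static void main(String[] args) {
        // Instant.parse accepts up to nine fractional digits and keeps
        // nanosecond precision internally.
        Instant expires = Instant.parse("2017-08-30T13:46:51.480655Z");

        // Converting to java.util.Date truncates to millisecond precision,
        // giving 13:46:51.480 UTC rather than 13:54:51 UTC.
        Date asDate = Date.from(expires);
        System.out.println(asDate);
    }
}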
This specific issue and the attached PR refer to the Ceilometer endpoint specifically. Like @pjdarton, we observe the same on Keystone tokens. Should that be tracked separately, or will the work continue here?
Looks like this is fixed in new Jackson versions, starting from 2.9.0.
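If that's the case, bumping the jackson-databind dependency and re-running a quick check along these lines should confirm it (the expected output is an assumption to verify against the actual Jackson version in use):

import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Date;

public class JacksonDateCheck {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // With a fixed StdDateFormat, the six fractional digits should be
        // reduced to millisecond precision instead of being read as
        // 480655 ms; the expected instant is 13:46:51.480 UTC.
        Date d = mapper.readValue("\"2017-08-30T13:46:51.480655Z\"", Date.class);
        System.out.println(d);
    }
}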