
[cloud/v3] Casting BIGINT to TIMESTAMP doesn't seem to work anymore

Open Notilarie opened this issue 1 year ago • 3 comments

Steps to reproduce:

  1. Open Data Explorer on InfluxDB Cloud
  2. Execute the following query: SELECT 1705590600000000000::TIMESTAMP

Expected behaviour: I would expect a timestamp

Actual behaviour: An exception is thrown: Error while planning query: Optimizer rule 'simplify_expressions' failed caused by Arrow error: Compute error: Overflow happened on: 1705590600000000000 * 1000000000: rpc error: code = InvalidArgument desc = Error while planning query: Optimizer rule 'simplify_expressions' failed caused by Arrow error: Compute error: Overflow happened on: 1705590600000000000 * 1000000000

Environment info:

  • InfluxDB Cloud

As a workaround I use the following query: SELECT 1705590600000000000::BIGINT UNSIGNED::TIMESTAMP
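For reference, the workaround as a runnable query (a sketch assuming the Data Explorer SQL editor; the literal is the example value from this report, and the alias is illustrative):

```sql
-- Workaround: cast through BIGINT UNSIGNED first; the unsigned value is
-- interpreted as nanoseconds since the epoch rather than seconds.
SELECT 1705590600000000000::BIGINT UNSIGNED::TIMESTAMP AS ts;
-- Expected result, assuming the nanosecond interpretation: 2024-01-18T15:10:00Z
```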

On older versions of InfluxDB, casting unsigned big ints to timestamp was not allowed.

Notilarie avatar Jan 18 '24 16:01 Notilarie

@btasker - this is reported as an issue in Cloud Serverless.

philjb avatar Feb 06 '24 16:02 philjb

This looks like a SQL issue (it's being run via the Cloud UI, which doesn't accept InfluxQL), so it doesn't fall under my purview.

I've flagged it up internally.

btasker avatar Feb 06 '24 17:02 btasker

The Cloud UI does accept InfluxQL as well these days, but either way I think the problem is coming from InfluxQL-IOx: when directly casting an integer using ::timestamp, IOx defaults to treating it as seconds (we will work on fixing this).

@Notilarie A workaround in the meantime would be to use to_timestamp_seconds, to_timestamp_millis, etc. to explicitly specify the precision to use.
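A minimal sketch of that suggestion using the explicit-precision functions; the literals are the seconds and milliseconds equivalents of the value from this report, and the aliases are illustrative:

```sql
-- State the precision of the integer explicitly instead of relying on a cast;
-- each function interprets its argument at the named precision.
SELECT
  to_timestamp_seconds(1705590600)    AS from_seconds,
  to_timestamp_millis(1705590600000)  AS from_millis;
```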

helenosheaa avatar Feb 12 '24 17:02 helenosheaa

This was changed in DataFusion (the query engine used by InfluxDB v3) in November. The stated intention was to make timestamp conversions behave consistently, and to be consistent with the same queries run in PostgreSQL.

If you need subsecond accuracy for your timestamps then the functions to_timestamp_millis, to_timestamp_micros, or to_timestamp_nanos can be used to perform explicit conversions.
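For the original value, which is in nanoseconds, the equivalent explicit conversion would be something like the following (a sketch; the alias is illustrative):

```sql
-- Nanosecond-precision input: use to_timestamp_nanos instead of ::TIMESTAMP,
-- which avoids the overflowing seconds-to-nanoseconds multiplication.
SELECT to_timestamp_nanos(1705590600000000000) AS ts;
```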

mhilton avatar Feb 21 '24 10:02 mhilton

Thanks. Both to_timestamp and to_timestamp_nanos work fine and return the timestamp with nanosecond precision.

BTW, funny thing is that SELECT 1705590600000000000::BIGINT UNSIGNED::TIMESTAMP does work even with negative numbers.

Notilarie avatar Feb 21 '24 15:02 Notilarie

> BTW, funny thing is that SELECT 1705590600000000000::BIGINT UNSIGNED::TIMESTAMP does work even with negative numbers.

Thanks for letting us know, I'd avoid using that just because it feels like something that might be fixed in the future.

mhilton avatar Feb 21 '24 15:02 mhilton

Thanks for the report. As there's a workaround and this was a change in DataFusion, I'm closing this as there's no further action needed.

helenosheaa avatar Feb 21 '24 16:02 helenosheaa