genedavis
Hi, any update on this issue? Thx
Thank you @mkindahl - I haven't yet found a simple reproduction case. I'll work harder at that now. In the meantime I was thinking of trying possible workarounds, and I...
One update on the above: I decompressed the chunk (identified in the explain plan) and this time I got the correct results. I will now attempt to recompress the chunk to...
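For anyone following along, the decompress/recompress cycle looks roughly like this - the hypertable and chunk names below are placeholders, not the real ones (the actual chunk name came out of the EXPLAIN output):

```
-- List the chunks backing the hypertable ('readings' is a placeholder name)
SELECT show_chunks('readings');

-- Decompress the suspect chunk; after this, the query returned correct rows
SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');

-- Recompress the same chunk
SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
```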
Interestingly, recompressing the chunk brought the problem back again. This might suggest there are steps I can take to reproduce the problem. Side question: Are there any tools to query...
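To confirm a chunk's state before and after the recompress, something like this should work (assuming TimescaleDB 2.x; 'readings' is again a placeholder hypertable name):

```
-- Which chunks of the hypertable are currently compressed?
SELECT chunk_name, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'readings';
```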
OK, with a small amount of identical data - even after converting to a hypertable and compressing - I can't reproduce. This leads me to think it needs more data to make a more...
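In case anyone wants to try scaling up a synthetic repro, here's a rough data generator. The column names are the ones from my query; the table name, tag cardinality, and value distribution are made up for illustration:

```
-- Generate ~1.1M rows (one per second over ~13 days) across 100 fake tags
INSERT INTO readings (resample_time, tag_name, value)
SELECT ts, 'tag_' || (n % 100), random()
FROM generate_series('2020-11-01'::timestamptz,
                     '2020-11-14'::timestamptz,
                     interval '1 second') WITH ORDINALITY AS t(ts, n);
```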
No rows were inserted into a compressed chunk. Here's what we did:

1. create table
2. define table as hypertable
3. insert about 18B rows (edit: it's only 6B rows)...
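Roughly, as SQL - the schema and compression settings here are an illustrative sketch (assuming the TimescaleDB 2.x compression API), not our production DDL:

```
-- 1. create table (schema is a placeholder)
CREATE TABLE readings (
    resample_time  timestamptz NOT NULL,
    tag_name       text        NOT NULL,
    value          double precision
);

-- 2. define table as hypertable
SELECT create_hypertable('readings', 'resample_time');

-- 3. bulk insert (~6B rows in our case) happens here

-- compression was enabled afterwards; exact settings below are assumed
ALTER TABLE readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'tag_name'
);
SELECT add_compression_policy('readings', INTERVAL '7 days');
```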
@gayyappan that was the first explain above. I was also requested to run an explain (analyze), so here it is:

```
tsdb=> EXPLAIN (ANALYZE) SELECT tag_name, time_bucket_gapfill('1 minute', resample_time, '2020-11-14...
```
@svenklemm @gayyappan is there anything else I can do to help debug? Is there a way to hint a slightly different plan (avoiding the index, for example)? Thx
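One generic approach I could try, if it's sane here - the standard PostgreSQL planner knobs, scoped to the session:

```
-- Discourage index scans so the planner falls back to a seq scan
SET enable_indexscan = off;
SET enable_indexonlyscan = off;
-- ... run the EXPLAIN (ANALYZE) query here ...
RESET enable_indexscan;
RESET enable_indexonlyscan;
```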
Thanks @svenklemm - that version ALSO gives duplicate records (at least the compressed version; uncompressed does not). There are reasons we do have the nested processing (and there might be simpler...
@svenklemm this looks promising. We are in the throes of an initial release; we'll let you know once we can test.