Alexander Kuzmenkov

Results: 32 issues by Alexander Kuzmenkov

```
create function test1() returns int as $$
select decompress_chunk(x) from show_chunks('uk_price_paid') x limit 1;
select compress_chunk(x) from show_chunks('uk_price_paid') x limit 1;
select count(*) from uk_price_paid;
$$ language sql;
...
```

bug

Currently its scan targetlist is always the uncompressed tuple, but in some cases we can make the scan targetlist the same as the required output targetlist, thus avoiding the projection....
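To illustrate the targetlist distinction above, here is a minimal sketch against a hypothetical uncompressed table (the table and column names are assumptions, not from the issue): when the query outputs exactly the scanned columns, the scan can emit its tuples as-is; when it outputs a computed expression, a projection step is needed above the scan.

```
create table metrics(t timestamptz, device int, value float8);

-- Output targetlist equals the scan targetlist: the scan node can
-- return its tuples directly, with no projection on top.
select t, device, value from metrics;

-- Output targetlist contains a computed expression, so the planner
-- places a projection above the scan to evaluate value * 2.
select t, device, value * 2 from metrics;
```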

Parts:
- [x] https://github.com/timescale/timescaledb/pull/6806
- [x] https://github.com/timescale/timescaledb/pull/6784
- [x] https://github.com/timescale/timescaledb/pull/6817
- [x] https://github.com/timescale/timescaledb/pull/6859
- [x] https://github.com/timescale/timescaledb/pull/6893
- [x] https://github.com/timescale/timescaledb/pull/6891
- [x] https://github.com/timescale/timescaledb/pull/6892
- [x] https://github.com/timescale/timescaledb/pull/7050
- [ ] https://github.com/timescale/timescaledb/pull/7049
- ...

```
create table pvagg(s int, a int);
select create_hypertable('pvagg', 'a', chunk_time_interval => 1000);
insert into pvagg select 1, generate_series(1, 999);
insert into pvagg select 2, generate_series(1001, 1999);
alter table pvagg...
```

bug
planner
compression

The unsorted paths are better for hash aggregation, but currently if we're doing aggregation and we can push down the sort, we are only going to add sorted paths. Fixes...
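A sketch of the affected query shape, reusing the pvagg table from the snippet above (the actual repro is truncated here):

```
-- For a grouping query, a HashAggregate over unsorted chunk scans can
-- beat Sort + GroupAggregate, but the planner can only choose it if
-- unsorted paths are added alongside the sorted ones.
explain (costs off)
select s, sum(a) from pvagg group by s;
```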

We would add extra Sort nodes when adjusting the children of a space-partitioning MergeAppend under ChunkAppend. This is not needed, because MergeAppend plans add the required Sort themselves, and in...
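A hedged sketch of a setup that produces this plan shape (the table name and dimension layout are assumptions):

```
-- A hypertable with an extra space dimension, so that several chunks
-- cover the same time range.
create table spaced(t int, s int, v float8);
select create_hypertable('spaced', 't', 's', 2, chunk_time_interval => 1000);
insert into spaced select g, g % 4, g from generate_series(1, 2000) g;

-- An ordered scan merges the space partitions with a MergeAppend under
-- ChunkAppend; each MergeAppend plan already adds the Sort its
-- children require, so no extra Sort is needed above them.
explain (costs off)
select * from spaced order by t;
```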

Add ANALYZE. To keep the desired MergeAppend plans, we also have to add a LIMIT everywhere so that the MergeAppend is chosen based on its lower startup cost. Otherwise the...
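For example, on the assumed spaced table from the previous sketch:

```
-- After ANALYZE the row estimates are accurate; the LIMIT then makes
-- the planner pick MergeAppend for its lower startup cost rather than
-- a full Sort, whose total cost could otherwise look comparable.
analyze spaced;
explain (costs off)
select * from spaced order by t limit 10;
```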

Add ANALYZE after compression. The plan changes are expected: SeqScans are preferred over IndexScans, and Sort over MergeAppend, for small tables.

We don't have to decompress anything more when we re-look up the chunk insert state on a COPY buffer flush. Moreover, `ChunkInsertState.slots[0]` is the wrong slot type for `decompress_batches_for_insert()`, because it is a...
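A rough sketch of the triggering scenario (the table name, sizes, and CSV path are assumptions; the real repro, including any constraint setup that forces decompression on insert, is in the issue):

```
create table readings(t int, v int);
select create_hypertable('readings', 't', chunk_time_interval => 1000);
alter table readings set (timescaledb.compress);
insert into readings select generate_series(1, 999), 0;
select compress_chunk(x) from show_chunks('readings') x;

-- A COPY large enough to flush its buffer more than once re-looks up
-- the chunk insert state on each flush; no further decompression
-- should happen at that point.
copy readings from '/tmp/readings.csv' with (format csv);  -- assumed path
```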

This is important for the common case of grouping by time_bucket(). In this case, under AggPath there is a ProjectionPath above the Append node for all the chunks. When we...
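A sketch of that plan shape, reusing the assumed spaced table from above:

```
-- time_bucket() is a computed expression, so under the AggPath the
-- planner puts a ProjectionPath above the Append over all the chunks
-- to evaluate the bucket before aggregation.
explain (costs off)
select time_bucket(100, t), count(*) from spaced group by 1;
```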