mkaruza

11 issues by mkaruza

* To cache a remote object, the user needs to explicitly call the `duckdb.cache(path, type)` function. `path` is a remote HTTPFS/S3/GCS/R2 object path and `type` is either `parquet` or `csv`, indicating the remote object type....
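
A minimal sketch of how this could be invoked from a client, assuming a Postgres instance with the extension installed; the DSN and the S3 path below are placeholders, not values from the issue:

```python
# Hypothetical client-side call of duckdb.cache() via psycopg2.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
conn.autocommit = True

with conn.cursor() as cur:
    # Cache a remote Parquet object so later scans read the local copy.
    cur.execute(
        "SELECT duckdb.cache(%s, %s)",
        ("s3://my-bucket/data.parquet", "parquet"),  # placeholder path
    )
    print(cur.fetchone())

conn.close()
```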

If an index exists on a table, research and explore the possibility of using it for scans. It would be interesting to look into BRIN indexes, which can be used...
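
For context on why BRIN is attractive here: a BRIN index stores small per-block-range summaries (such as min/max values), so range predicates over physically correlated columns can skip whole block ranges. A minimal sketch, assuming a hypothetical `events` table with a `created_at` column:

```python
# Hypothetical BRIN example; table and column names are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres")  # placeholder DSN
conn.autocommit = True

with conn.cursor() as cur:
    # BRIN keeps per-block-range min/max, cheap for naturally ordered data.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS events_ts_brin "
        "ON events USING brin (created_at)"
    )
    # Inspect whether the planner picks a bitmap scan over the BRIN index.
    cur.execute(
        "EXPLAIN SELECT count(*) FROM events "
        "WHERE created_at >= now() - interval '1 day'"
    )
    for (line,) in cur.fetchall():
        print(line)

conn.close()
```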

* We can now start multiple worker processes that read relation blocks and write buffers to shared memory, to be consumed by duckdb reader threads. The problem can be viewed...
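
A toy model of this producer/consumer shape, using Python's `multiprocessing.shared_memory` purely as an illustration; the block size, slot layout, and readiness queue are assumptions, not the actual design:

```python
# Workers fill fixed-size "relation block" slots in shared memory;
# a reader consumes them as they are announced. All names are illustrative.
from multiprocessing import Process, Queue
from multiprocessing.shared_memory import SharedMemory

BLOCK_SIZE = 8192  # Postgres heap blocks are 8 KiB by default
NUM_BLOCKS = 4

def worker(shm_name: str, slot: int, ready: Queue) -> None:
    shm = SharedMemory(name=shm_name)
    offset = slot * BLOCK_SIZE
    # Stand-in for reading a relation block from disk.
    shm.buf[offset:offset + BLOCK_SIZE] = bytes([slot]) * BLOCK_SIZE
    ready.put(slot)  # tell the reader this slot is filled
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=BLOCK_SIZE * NUM_BLOCKS)
    ready: Queue = Queue()
    procs = [Process(target=worker, args=(shm.name, i, ready))
             for i in range(NUM_BLOCKS)]
    for p in procs:
        p.start()
    # The "reader thread" side: consume blocks as workers announce them.
    for _ in range(NUM_BLOCKS):
        slot = ready.get()
        block = bytes(shm.buf[slot * BLOCK_SIZE:(slot + 1) * BLOCK_SIZE])
        print(f"consumed block from slot {slot}: first byte {block[0]}")
    for p in procs:
        p.join()
    shm.close()
    shm.unlink()
```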

performance

We should enable the COPY command to write to remote S3 storage. Copying TO remote storage should be possible by passing the query directly to duckdb execution while in...
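
On the DuckDB side, such a delegated query could look like the following sketch, using the `duckdb` Python client with the `httpfs` extension; the bucket path is a placeholder and credentials are omitted:

```python
# Sketch of a COPY ... TO s3:// statement executed by DuckDB directly.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
# Credentials would normally come from DuckDB secrets or the environment.
con.execute("""
    COPY (SELECT 42 AS answer)
    TO 's3://my-bucket/out/answer.parquet' (FORMAT parquet)
""")
```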

enhancement

The first improvement is to implement the replication loop to run from a periodic fiber.

For each streamer object, create a periodic fiber that performs replication on an interval basis. If there is an influx of data, we would increase the replication frequency until the pending data...
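
A language-neutral sketch of the adaptive-interval idea, with Python `asyncio` standing in for fibers; the `Streamer` class and all numbers are hypothetical, not the project's actual internals:

```python
# One periodic task per streamer: shorten the interval while a backlog
# remains, relax back toward the base interval once it drains.
import asyncio
import random

MIN_INTERVAL, MAX_INTERVAL = 0.001, 0.100  # seconds
BATCH = 64                                 # entries replicated per wakeup

class Streamer:
    """Stand-in for a per-replica streamer holding pending journal entries."""
    def __init__(self) -> None:
        self.pending = 0

    async def replicate_batch(self) -> int:
        sent = min(self.pending, BATCH)
        self.pending -= sent  # pretend we flushed these to the replica
        return sent

async def replication_loop(streamer: Streamer, rounds: int = 50) -> None:
    interval = MAX_INTERVAL
    for _ in range(rounds):
        streamer.pending += random.randrange(0, 128)  # simulated write influx
        await streamer.replicate_batch()
        if streamer.pending:  # backlog left: replicate more often
            interval = max(MIN_INTERVAL, interval / 2)
        else:                 # drained: back off to the base rate
            interval = min(MAX_INTERVAL, interval * 2)
        await asyncio.sleep(interval)

asyncio.run(replication_loop(Streamer()))
```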

An initial implementation (#5225) shows that we can get better performance while doing replication from the master node. Noticeable improvements were lower p99 and average latency, and lower `MainLoop` CPU...

enhancement
replication

```
    info = await good_client.info()
    # Gradually release pipeline.
>   assert old_pipeline_cache_bytes > info["pipeline_cache_bytes"]
E   assert 378 > 378
```

https://github.com/dragonflydb/dragonfly/actions/runs/15376451220/job/43261747284

bug
failing-test
iouring

https://github.com/dragonflydb/dragonfly/actions/runs/19884092364
https://github.com/dragonflydb/dragonfly/actions/runs/19912988985

```
    if heartbeat_rss_eviction:
        # We should see used memory decrease and some number of evicted keys
>       assert memory_info_after["used_memory"] < memory_info_before["used_memory"]
E       assert 772468480 < 772468480...
```

bug
failing-test
epoll