Roman Shanin
> Executing the transactions in the execution_payload updates the global state. All clients re-execute the transactions in the execution_payload to ensure the new state matches that in the new block...
After some thought and a discussion with @SamHSmith, I think it may be too risky to introduce this change. Things I am worried about: 1. If at some point block...
Setting such a default leads to a really painful debugging experience. I've noticed a significant performance difference between catalog implementations doing a simple table scan (about a 10x difference). It turned out to be exclusively due...
It is, but it took some time to discover it exists in the first place :)
If you try to add Parquet files which already have field IDs, you will get [this](https://github.com/apache/iceberg-python/blob/ca7044216d00df1ec6863937ca1abd656ce8ff4e/pyiceberg/io/pyarrow.py#L2569-L2571) error.
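A minimal repro sketch of what I mean (the catalog name, table name, and file path here are made up; `add_files` itself is the pyiceberg API in question):

```python
from pyiceberg.catalog import load_catalog

# Assumes a catalog named "default" is configured in .pyiceberg.yaml
catalog = load_catalog("default")
tbl = catalog.load_table("db.events")  # hypothetical table

# If this Parquet file was written with field IDs already embedded in its
# schema, the linked check in pyarrow.py rejects it: add_files only accepts
# files without field IDs and assigns them from the table schema itself.
tbl.add_files(file_paths=["s3://bucket/part-0000.parquet"])
# -> raises NotImplementedError for files that already carry field IDs
```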
I like the interface. However, I find it a bit hard to track the logic of `Supervisor` itself; maybe we can simplify its implementation? My idea is to handle all tasks...
> Anyway, I found a middle ground and made the implementation simpler.

Indeed, the implementation became much simpler and easier to grasp; I like it.
> What I can suggest at this point is that we can implement some sort of backoff mechanism to reduce the amount of transactions in a block in such cases....
From a discussion we had with @mversic before:

> As I see the problem right now, we have two groups of parameters which are fighting with each other. Like f(tx_in_block, max_isi_in_tx, wasm_limit)...
Another idea was to have a backoff mechanism for `commit_time_limit` so that it increases with every view change index. Eventually it would be high enough to commit the block. But this might...
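A minimal sketch of the arithmetic I have in mind (all names here are hypothetical, and Iroha itself is written in Rust; this only illustrates the grow-with-cap idea, not an actual implementation):

```python
def commit_time_limit(base_limit_ms: int, view_change_index: int,
                      factor: float = 2.0, cap_ms: int = 60_000) -> int:
    """Grow the commit time limit exponentially with each view change,
    capped so it cannot grow without bound."""
    return min(int(base_limit_ms * factor ** view_change_index), cap_ms)

# e.g. with a 4000 ms base: 4000, 8000, 16000, 32000, 60000 (capped)
for i in range(5):
    print(commit_time_limit(4000, i))
```

The cap is the part I am least sure about: without it the limit grows forever, but picking it too low just reintroduces the original problem at a higher height.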