tikkanz

5 comments by tikkanz

> > > Hi @XiangkunYe @JeroenSchmidt, did anyone figure out what the max execution time was in the end? Struggling to find actual documentation.

Spent a bit of time sorting...

([replying to this comment](https://github.com/boto/boto3/issues/654#issuecomment-968237421)) Although the `allowedPattern` specified will enable you to specify a longer `executionTimeout` than the default, the actual max time that a command can run is [hard-coded](https://github.com/aws/amazon-ssm-agent/blob/mainline/agent/plugins/pluginutil/pluginutil.go) at 172800...
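A minimal sketch of passing an `executionTimeout` override with Run Command. The helper name and the instance ID are hypothetical; the `AWS-RunShellScript` document and its `executionTimeout` parameter are real, and the agent-side cap of 172800 seconds (48 hours) applies regardless of the value you send.

```python
# Sketch: build the Parameters payload for an SSM SendCommand call that
# overrides the default executionTimeout. SSM expects string values in
# seconds; the amazon-ssm-agent hard-codes a 172800 s (48 h) ceiling.
MAX_AGENT_TIMEOUT_S = 172800  # hard-coded cap in amazon-ssm-agent

def build_run_command_params(commands, execution_timeout=MAX_AGENT_TIMEOUT_S):
    """Hypothetical helper: Parameters dict for AWS-RunShellScript."""
    return {
        "commands": list(commands),
        "executionTimeout": [str(execution_timeout)],
    }

# Actual send (requires boto3 and AWS credentials; shown for context):
# import boto3
# ssm = boto3.client("ssm")
# ssm.send_command(
#     InstanceIds=["i-0123456789abcdef0"],   # hypothetical instance
#     DocumentName="AWS-RunShellScript",
#     Parameters=build_run_command_params(["./long_job.sh"]),
# )
```

Requesting more than 172800 will either be rejected by the document's `allowedPattern` or silently capped by the agent, which is the behaviour discussed above.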

I see a similar issue with gzip compression. The same dataframe (33 million rows, 13 columns) takes 56.9 s to write to parquet using gzip compression, but only 6.57 s using lz4....

My data also has significant `null` content. Testing showed that, if anything, it took longer to write to parquet with compression after filling nulls with `0`.

- `lz4`: 6 ->...

While `read_csv` now supports reading 1- & 2-byte integers, both `read_csv_batched` and `scan_csv` report that it is not supported: `Unsupported data type Int8 when reading a csv`. Scanning as 4-byte...