Kyle Bendickson

Results: 34 comments by Kyle Bendickson

When will a release be created that allows us to use `any` and `all`? I'm trying to migrate my repos to this workflow from the deprecated probot autolabeler (the apache...

I'm also running into this issue. I'm unable to use any configuration that has `any` in it on 2.2.0.

If I understand correctly, you have lost your catalog data (e.g. the data in HMS or in your DynamoDB table). Is that correct? There's a [`RegisterTableProcedure`](https://github.com/apache/iceberg/blob/master/spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/procedures/RegisterTableProcedure.java) that can be used...
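
In case it's useful, here's a rough PySpark sketch of calling that procedure. The catalog name, table identifier, and metadata file path below are all placeholders for your own values:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# register_table points an Iceberg catalog at an existing metadata.json,
# so the table can be re-registered without rewriting any data files.
# 'my_catalog', 'db.tbl', and the metadata path below are placeholders.
spark.sql("""
    CALL my_catalog.system.register_table(
        table => 'db.tbl',
        metadata_file => 's3://my-bucket/warehouse/db/tbl/metadata/00002-example.metadata.json'
    )
""").show()
```

You'd point `metadata_file` at the most recent metadata JSON still present under the table's location.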

I think you're supposed to use `DROP PARTITION FIELD days(ts)`, or to drop the column directly. Here are some examples from the tests: https://github.com/apache/iceberg/blob/84f40cff9b98ee15b706289e551078355bb8a7a5/spark/v3.3/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestAlterTablePartitionFields.java#L397-L422 I'm wondering if that's a correct usage of...
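
For reference, a minimal sketch of the first option, assuming a table `my_catalog.db.sample` partitioned by `days(ts)` (names are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Drops the partition field by its transform. Requires the Iceberg Spark SQL
# extensions to be enabled; the catalog and table names are placeholders.
spark.sql("ALTER TABLE my_catalog.db.sample DROP PARTITION FIELD days(ts)")
```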

All of the examples of `REPLACE PARTITION FIELD` seem to use a transform of the same type. Here's the actual SQL definition:

```
spark/v3.3/spark-extensions/src/main/antlr/org.apache.spark.sql.catalyst.parser.extensions/IcebergSqlExtensions.g4
72: | ALTER TABLE multipartIdentifier REPLACE...
```
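
For illustration, a same-type replacement (one time transform swapped for another on the same column) would look roughly like this; the catalog, table, and column names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Replaces one partition transform with another from the same family (days -> hours).
# Requires the Iceberg Spark SQL extensions; names are placeholders.
spark.sql("ALTER TABLE my_catalog.db.sample REPLACE PARTITION FIELD days(ts) WITH hours(ts)")
```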

I think it still needs to use `DROP PARTITION FIELD days_of_ts` in order to properly clean up the data, as that column is still a partition field in older snapshots....
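
i.e. dropping by the partition field's name rather than its transform, roughly like this (the table name is a placeholder, and `days_of_ts` stands in for whatever the field was actually named when it was added):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Drops the partition field by the name it was given when it was added,
# rather than by its source transform. Names are placeholders.
spark.sql("ALTER TABLE my_catalog.db.sample DROP PARTITION FIELD days_of_ts")
```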

The `Number of reduce tasks is set to 0 since there's no reduce operator` message is something you can ignore. That's just saying that this is a map-only job. What...

To be more precise, the 0.12.1 Iceberg docs on Hive are here: https://iceberg.apache.org/docs/0.12.1/hive/ If there's any more of the error stack trace, that would be helpful, as well as the configuration and set...

The stack trace reads like there's an S3 request timeout. Can you provide the following information? 1. The exact Iceberg runtime jar dependency used (ensure that you're using the spark...

To get around the issue, assuming you're using `S3FileIO` (which it seems like you are), you might consider increasing the number of multipart upload threads if the issue is indeed...
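
As a rough sketch of what that might look like when configuring the catalog: the catalog name and thread count are placeholders, the `s3.multipart.num-threads` property should be double-checked against the Iceberg AWS docs for your version, and the catalog's type/warehouse settings (plus the AWS bundle jars) are omitted here:

```python
from pyspark.sql import SparkSession

# Configure the Iceberg catalog to use S3FileIO and bump the shared
# multipart-upload thread pool. 'my_catalog' and '32' are placeholders,
# and other required catalog properties (type, warehouse, etc.) are omitted.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.my_catalog.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.my_catalog.s3.multipart.num-threads", "32")
    .getOrCreate()
)
```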