Danny Chan
> Because I will clean up the historical Hive partition data to ensure that there is a stable amount of partition data in Hive instead of it growing all the time...
> Do we have some pull requests or plans to automatically clean up old archive files? I think we could put it in the upgrade handler, maybe.
@KnightChess maybe you can give some insights here; also cc @nsivabalan for visibility.
Just enabling the Hive sync would work, but because you enabled the Hive sync all of a sudden, there might be too much metadata to sync, which puts pressure...
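For reference, a minimal sketch of turning on Hive sync, assuming writes go through the Spark datasource in Java; the option keys are standard Hudi Hive-sync configs, while the table name, database, partition field, and base path are placeholders:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class HiveSyncExample {
  // Write a Hudi table with Hive sync turned on, so partitions and metadata
  // are registered in the metastore as part of each commit.
  static void writeWithHiveSync(Dataset<Row> df) {
    df.write().format("hudi")
        .option("hoodie.table.name", "my_table")                       // placeholder
        .option("hoodie.datasource.write.recordkey.field", "id")       // placeholder
        .option("hoodie.datasource.write.partitionpath.field", "dt")   // placeholder
        .option("hoodie.datasource.hive_sync.enable", "true")          // turn on Hive sync
        .option("hoodie.datasource.hive_sync.mode", "hms")             // sync via the Hive metastore
        .option("hoodie.datasource.hive_sync.database", "default")     // placeholder
        .option("hoodie.datasource.hive_sync.table", "my_table")       // placeholder
        .option("hoodie.datasource.hive_sync.partition_fields", "dt")  // placeholder
        .mode(SaveMode.Append)
        .save("/tmp/hudi/my_table");                                   // placeholder base path
  }
}
```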
From the source code it looks like there are multiple fields named "array" in the schema. Can you share with us the table creation schema after the `ALTER TABLE` operation?...
You may have hit a schema mismatch:

```java
// Avro's GenericData.Record#get: looking up a field name that is not in the schema throws.
public Object get(String key) {
  Field field = schema.getField(key);
  if (field == null) {
    throw new AvroRuntimeException("Not a valid schema field: " + key);
  }
  return values[field.pos()];
}
```
But it looks like the three fields are concatenated into one as far as the Avro schema is concerned, which is why the field cannot be found.
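To make the failure mode concrete, here is a small sketch against the Avro API; the field names `col_a`/`col_b`/`col_c` are the ones from the example below, and recent Avro versions throw on the bad lookup (older ones returned null):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;

public class FieldLookupDemo {
  public static void main(String[] args) {
    // Three separate string fields, as the table schema should define them.
    Schema schema = SchemaBuilder.record("Row").fields()
        .requiredString("col_a")
        .requiredString("col_b")
        .requiredString("col_c")
        .endRecord();
    GenericData.Record record = new GenericData.Record(schema);

    record.get("col_a"); // OK: the field exists (returns null since nothing was put)
    // Looking up the concatenated name fails because no such single field exists:
    record.get("col_a,col_b,col_c"); // AvroRuntimeException: Not a valid schema field
  }
}
```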
Then you just need to define the record keys concatenated with a comma:

```python
'hoodie.datasource.write.recordkey.field': 'col_a,col_b,col_c',
```
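Note that with multiple record key fields you usually also need a key generator that handles composite keys, e.g. setting `hoodie.datasource.write.keygenerator.class` to `org.apache.hudi.keygen.ComplexKeyGenerator` (assuming you write through the Spark datasource).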
I saw this in the options: `'hoodie.datasource.write.recordkey.field': recordkey,`
Looks great, how can we enable this, @yihua?