Results 108 comments of Marius Grama

Stacktrace of the issue I'm dealing with as well:

```
22/05/11 14:00:08 ERROR SparkExecuteStatementOperation: Error executing query with 98f04258-59d4-43cd-b4a2-373a6684f3d9, currentState RUNNING,
spark | org.apache.spark.sql.AnalysisException: SHOW CREATE TABLE is not supported...
```

> Gains for JDBC connectors are "obvious".

Out of curiosity: how would we be able to test the gains?

> Spark already supports the iceberg's $data_files and $all_data_files metadata tables. Trino is already supporting $files.

Spark actually has `$files` and `$all_data_files`. I would argue that (for consistency's sake and...

Build is red: https://github.com/trinodb/trino/runs/6319061876?check_suite_focus=true#step:7:3183

```
Error: COMPILATION ERROR :
[INFO] -------------------------------------------------------------
Error: /home/runner/work/trino/trino/plugin/trino-iceberg/src/main/java/io/trino/plugin/iceberg/AbstractFilesTable.java:[51,37] [MissingOverride] getDistribution implements method in SystemTable; expected @Override
```
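For context, Error Prone's `MissingOverride` check fails the build when a method implements an interface method without carrying the `@Override` annotation. A minimal stand-alone illustration of the pattern (the `SystemTable` interface and `Distribution` enum here are simplified stand-ins for the Trino SPI types, not the real ones):

```java
public class MissingOverrideSketch
{
    // Simplified stand-in for the distribution enum on Trino's SystemTable SPI
    enum Distribution { SINGLE_COORDINATOR, ALL_NODES }

    // Simplified stand-in for io.trino.spi.connector.SystemTable
    interface SystemTable
    {
        Distribution getDistribution();
    }

    static class AbstractFilesTable implements SystemTable
    {
        @Override // omitting this annotation is exactly what MissingOverride flags
        public Distribution getDistribution()
        {
            return Distribution.SINGLE_COORDINATOR;
        }
    }

    public static void main(String[] args)
    {
        System.out.println(new AbstractFilesTable().getDistribution()); // prints SINGLE_COORDINATOR
    }
}
```

The fix in the failing file is simply adding `@Override` above the implementing method.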

### Technical analysis

#### Hive metastore

```
mysql> select VIEW_ORIGINAL_TEXT from TBLS where TBL_TYPE='VIRTUAL_VIEW';
+--------------------------------------------------------------------------------------------------------------+
| VIEW_ORIGINAL_TEXT                                                                                           |
+--------------------------------------------------------------------------------------------------------------+
| /* Presto View: eyJvcmlnaW5hbFNxbCI6IlNFTEVDVCB4XG5GUk9NXG4gIGljZWJlcmcuZGVmYXVsdC5teV90YWJsZVxuIiwiY2F0YWxvZyI6InRwY2RzIiwic2NoZW1hIjoic2YxIiwiY29sdW1ucyI6W3sibmFtZSI6IngiLCJ0eXBlIjoiaW50ZWdlciJ9XSwiY29tbWVudCI6Im15X3ZpZXcgZGVzY3JpcHRpb24iLCJvd25lciI6Im1hcml1cyIsInJ1bkFzSW52b2tlciI6ZmFsc2V9 */ |
+--------------------------------------------------------------------------------------------------------------+
1 row in...
```
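The `VIEW_ORIGINAL_TEXT` above is the view definition JSON, base64-encoded and wrapped in a `/* Presto View: ... */` comment marker. A minimal sketch of encoding/decoding that representation (the `PrestoViewCodec` class and method names are illustrative, not the actual Trino helper):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PrestoViewCodec
{
    private static final String PREFIX = "/* Presto View: ";
    private static final String SUFFIX = " */";

    // Strip the comment wrapper and decode the base64 payload back into JSON
    public static String decode(String viewOriginalText)
    {
        String base64 = viewOriginalText.substring(PREFIX.length(), viewOriginalText.length() - SUFFIX.length());
        return new String(Base64.getDecoder().decode(base64), StandardCharsets.UTF_8);
    }

    // Inverse operation: wrap a JSON view definition into the stored form
    public static String encode(String viewJson)
    {
        return PREFIX + Base64.getEncoder().encodeToString(viewJson.getBytes(StandardCharsets.UTF_8)) + SUFFIX;
    }

    public static void main(String[] args)
    {
        String json = "{\"originalSql\":\"SELECT x\",\"catalog\":\"tpcds\"}";
        String stored = encode(json);
        System.out.println(decode(stored)); // round-trips back to the original JSON
    }
}
```

Decoding the payload shown in the mysql output yields the view's `originalSql`, `catalog`, `schema`, `columns`, `comment`, `owner`, and `runAsInvoker` fields.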

https://github.com/trinodb/trino/runs/6222742010?check_suite_focus=true

Overall this PR brings a more elegant approach to dealing with the file system. The only thing I'm not sure is an improvement is the use of `String` instead of `Path`...

I investigated the hadoop-connectors project code and opted to use reflection in order to get access to `com.google.api.services.storage.Storage` from the `GoogleHadoopFileSystem`:

```
GoogleCloudStorage googleCloudStorage = ghfs.getGcsFs().getGcs();
Field gcsField = googleCloudStorage.getClass().getDeclaredField("gcs");
...
```
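Since the snippet above is cut off, here is a self-contained sketch of the same reflection technique on a toy class (`Holder` and its private `gcs` field are stand-ins, not the real hadoop-connectors `GoogleCloudStorage` class):

```java
import java.lang.reflect.Field;

public class ReflectionSketch
{
    // Stand-in for a class whose internal client is held in a private field
    static class Holder
    {
        private final String gcs = "storage-client";
    }

    // Read a private field by name via reflection
    public static Object readPrivateField(Object target, String fieldName)
            throws ReflectiveOperationException
    {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true); // bypass the private access modifier
        return field.get(target);
    }

    public static void main(String[] args) throws ReflectiveOperationException
    {
        System.out.println(readPrivateField(new Holder(), "gcs")); // prints storage-client
    }
}
```

The trade-off of this approach is fragility: the field name is an implementation detail of the library, so an upgrade of hadoop-connectors can silently break the lookup at runtime.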

Related code: https://github.com/linkedin/coral/blob/c96456329efc5eab15393e5c7bfb7e4e009f2245/coral-trino/src/main/java/com/linkedin/coral/trino/rel2trino/UDFTransformer.java#L147-L149

From #115:

> Who will this benefit?
> Tools that use dbt metadata.

Which tools actually use dbt metadata? Can you please describe this in more detail so that...