Martin Andersson

Results: 16 comments by Martin Andersson

> Doing it piece by piece is probably a good idea, but it would be nice to have a clear end goal. Spatial partitioning and the Kryo serializer are not intertwined...

I think this would be a great addition to both Trino and Sedona. The current geospatial support in Trino is very limited compared to Sedona. Features in Sedona that I...

In Sedona 1.4 we've added a new raster type. It's basically a serialized GridCoverage2D (from GeoTools). You can use the new API to load the TIFF files: https://sedona.apache.org/latest-snapshot/api/sql/Raster-loader/#rs_fromgeotiff Sedona has...
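
A minimal sketch of that loader, following the Raster-loader API page linked above: read the GeoTIFFs as binary files, then build rasters with RS_FromGeoTiff. It assumes a Sedona-registered SparkSession named `spark`; the path and view name are illustrative.

```python
# Minimal sketch, assuming Sedona >= 1.4 is registered on `spark`.
df = spark.read.format("binaryFile").load("/data/geotiffs/*.tiff")
df.createOrReplaceTempView("binary_tiffs")

# RS_FromGeoTiff turns the raw bytes into Sedona's raster type
# (a serialized GridCoverage2D under the hood).
rasters = spark.sql("SELECT RS_FromGeoTiff(content) AS raster FROM binary_tiffs")
rasters.printSchema()
```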

I did some more research. GDAL is able to create COGs, and rasterio is a Python wrapper around GDAL. You should be able to convert GeoTIFFs to COGs in rasterio....
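
A hedged sketch of that conversion: GDAL 3.1+ ships a COG driver, and rasterio can invoke it through `rasterio.shutil.copy`. File names and the compression option are illustrative, and it assumes rasterio is built against a GDAL new enough to have the COG driver.

```python
# Sketch: GeoTIFF -> Cloud Optimized GeoTIFF via rasterio's GDAL COG driver.
# Assumes rasterio is built against GDAL >= 3.1; file names are illustrative.
import rasterio
from rasterio.shutil import copy as raster_copy

with rasterio.open("input.tif") as src:
    raster_copy(src, "output_cog.tif", driver="COG", compress="deflate")
```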

@jiayuasu I think that would be a good idea. It’s not ideal to depend on a native LGPL library, but I think it’s our only option right now. I guess...

@desruisseaux That's really exciting news that you're considering implementing a COG writer! I'm curious, would this also mean that Apache SIS would support writing "regular" GeoTIFFs as well? If the...

The WKB reader in JTS supports both formats. Adding support for EWKB in Trino is just a matter of replacing the ESRI implementation with JTS. https://github.com/trinodb/trino/compare/master...umartin:trino:ewkb If you are happy...
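
JTS itself is Java, but the point that one reader handles both encodings is easy to demonstrate; here is a small shapely sketch (a stand-in for JTS's WKBReader, not the Trino patch itself). EWKB is plain WKB plus an SRID flag and field.

```python
# Sketch with shapely standing in for JTS's WKBReader: the same
# loader accepts both plain WKB and PostGIS-style EWKB.
from shapely import wkb
from shapely.geometry import Point

plain = wkb.dumps(Point(1, 2), hex=True)            # standard WKB
ewkb = wkb.dumps(Point(1, 2), hex=True, srid=4326)  # PostGIS-style EWKB

print(wkb.loads(plain, hex=True))  # POINT (1 2)
print(wkb.loads(ewkb, hex=True))   # POINT (1 2), SRID carried along
```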

Since ST_SubDivideExplode splits on vertices, it doesn't work on long lines with few vertices. If ST_SubDivideExplode is combined with ST_Segmentize, it works better. ST_Segmentize introduces more vertices that ST_SubDivideExplode can...
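
A hedged sketch of the combination, assuming a Sedona-enabled SparkSession `spark` where both functions are available (ST_Segmentize availability depends on your version; the table/column names, the 1000-unit spacing, and the vertex limit are all illustrative):

```python
# Sketch: densify first so ST_SubDivideExplode has vertices to split on.
# Names, the segment length, and the max-vertex count are illustrative.
parts = spark.sql("""
    SELECT ST_SubDivideExplode(ST_Segmentize(geom, 1000.0), 8) AS piece
    FROM long_lines
""")
parts.show()
```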

Full stack trace:

```
Exception in thread "main" java.lang.NoSuchMethodError: 'org.apache.spark.sql.catalyst.expressions.ExpressionSet org.apache.spark.sql.catalyst.expressions.ExpressionSet.$plus$plus(scala.collection.GenTraversableOnce)'
    at org.apache.spark.sql.delta.stats.DeltaScan.filtersUsedForSkipping$lzycompute(DeltaScan.scala:92)
    at org.apache.spark.sql.delta.stats.DeltaScan.filtersUsedForSkipping(DeltaScan.scala:92)
    at org.apache.spark.sql.delta.stats.DeltaScan.allFilters$lzycompute(DeltaScan.scala:93)
    at org.apache.spark.sql.delta.stats.DeltaScan.allFilters(DeltaScan.scala:93)
    at org.apache.spark.sql.delta.stats.PreparedDeltaFileIndex.matchingFiles(PrepareDeltaScan.scala:355)
    at org.apache.spark.sql.delta.files.TahoeFileIndex.listAddFiles(TahoeFileIndex.scala:111)
    at org.apache.spark.sql.delta.files.TahoeFileIndex.listFiles(TahoeFileIndex.scala:103)
    at org.apache.spark.sql.execution.FileSourceScanLike.selectedPartitions(DataSourceScanExec.scala:256)
    at org.apache.spark.sql.execution.FileSourceScanLike.selectedPartitions$(DataSourceScanExec.scala:251)
    at ...
```

Thank you for working on this! Spark will, by default, use the local time zone. If your environment is set to UTC, you might not get a second offset. Does...
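
One way to make such a test deterministic is to pin the session time zone explicitly; `spark.sql.session.timeZone` is the standard Spark setting for this. A minimal sketch (app name and the chosen zone are illustrative):

```python
# Sketch: pin the session time zone so timestamp results don't depend
# on the local environment; "UTC" here is illustrative.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tz-demo")
    .config("spark.sql.session.timeZone", "UTC")
    .getOrCreate()
)

spark.sql("SELECT current_timestamp() AS now").show(truncate=False)
```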