Implement HistogramTransformation
After analyzing the efficiency of distribution functions for indexing (see issue #336), we can start implementing the HistogramTransformation.
The idea is to build it as another type of transformation, and eventually turn it into the default.
The API can be something like:
df.write.format("qbeast").option("columnsToIndex", "id:histogram").save("/tmp/test-histogram")
And under the hood:
case class HistogramTransformation(hist: IndexedSeq[String]) extends Transformation {

  override def transform(value: Any): Double = ???

  /**
   * Determines whether the new data will cause the creation of a new revision.
   *
   * @param newTransformation
   *   the new transformation created with statistics over the new data
   * @return
   *   true if the domain of the newTransformation is not fully contained in this one
   */
  override def isSupersededBy(newTransformation: Transformation): Boolean = ???

  /**
   * Merges two transformations. The domain of the resulting transformation is the union of the
   * domains of this and other.
   *
   * @param other
   *   the transformation to merge with this one
   * @return
   *   a new Transformation that contains both this and other
   */
  override def merge(other: Transformation): Transformation = ???
}

object HistogramTransformation {

  def apply(hist: IndexedSeq[String]): HistogramTransformation =
    new HistogramTransformation(hist)
}
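
As a reference for how the ???s could be filled in, the sketch below works against a simplified stand-in for the Transformation trait (the real one has more members) and assumes hist is a non-empty, sorted sequence of bin boundaries: transform binary-searches the value's bin and normalizes the bin index to [0, 1], isSupersededBy checks whether the new histogram falls outside the current boundaries, and merge takes the union of both boundary sets. This is only an illustrative assumption, not the agreed design.

import scala.collection.Searching._

// Simplified stand-in for the qbeast Transformation trait, only for this sketch.
trait Transformation {
  def transform(value: Any): Double
  def isSupersededBy(newTransformation: Transformation): Boolean
  def merge(other: Transformation): Transformation
}

// hist is assumed to be a non-empty, sorted sequence of bin boundaries
// (for example, approximate quantiles of the column).
case class HistogramTransformation(hist: IndexedSeq[String]) extends Transformation {

  // Locate the value's bin with a binary search and normalize the bin index to [0, 1].
  override def transform(value: Any): Double = {
    val v = value.toString
    val bin = hist.search(v) match {
      case Found(i)          => i
      case InsertionPoint(i) => math.max(i - 1, 0)
    }
    bin.toDouble / math.max(hist.length - 1, 1)
  }

  // The new histogram supersedes this one if it covers values outside the current boundaries.
  override def isSupersededBy(newTransformation: Transformation): Boolean =
    newTransformation match {
      case HistogramTransformation(newHist) =>
        newHist.head.compareTo(hist.head) < 0 || newHist.last.compareTo(hist.last) > 0
      case _ => false
    }

  // Merge by taking the sorted, de-duplicated union of the boundaries of both histograms.
  override def merge(other: Transformation): Transformation = other match {
    case HistogramTransformation(otherHist) =>
      HistogramTransformation((hist ++ otherHist).distinct.sorted)
    case _ => this
  }
}

If the boundaries are equi-depth quantiles, transform maps roughly equal amounts of data to equal-sized intervals of [0, 1], which is the property that motivates a histogram-based transformation over a plain min-max one.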
We would take advantage of the first step of OTreeDataAnalyzer to compute an approximate histogram or quartiles of the specified columns:
/**
 * Analyzes a specific group of columns of the dataframe and extracts valuable statistics.
 *
 * @param data
 *   the data to analyze
 * @param columnTransformers
 *   the columns to analyze
 * @return
 *   a Row containing the computed statistics for each column plus the total row count
 */
private[index] def getDataFrameStats(
    data: DataFrame,
    columnTransformers: IISeq[Transformer]): Row = {
  val columnStats = columnTransformers.map(_.stats)
  val columnsExpr = columnStats.flatMap(_.statsSqlPredicates)
  data.selectExpr(columnsExpr ++ Seq("count(1) AS count"): _*).first()
}
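
For illustration, and assuming a numeric column for simplicity (the String-based histogram above would need its own statistic), the SQL expressions a histogram Transformer could contribute through statsSqlPredicates might look like the following, using Spark's built-in approx_percentile aggregate (percentile_approx on older Spark versions). HistogramStatsExample and histogramStatsPredicates are hypothetical names; the sketch only shows the shape of the expressions that getDataFrameStats would evaluate in a single pass.

import org.apache.spark.sql.{DataFrame, Row, SparkSession}

object HistogramStatsExample {

  // SQL expressions a histogram Transformer could contribute:
  // approximate quartiles plus the min and max of the indexed column.
  def histogramStatsPredicates(columnName: String): Seq[String] = Seq(
    s"approx_percentile($columnName, array(0.25, 0.5, 0.75)) AS ${columnName}_quartiles",
    s"min($columnName) AS ${columnName}_min",
    s"max($columnName) AS ${columnName}_max")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("histogram-stats").getOrCreate()

    val df: DataFrame = spark.range(1, 1001).toDF("id")

    // Same shape as getDataFrameStats: a single pass that returns one Row of statistics.
    val statsRow: Row =
      df.selectExpr(histogramStatsPredicates("id") ++ Seq("count(1) AS count"): _*).first()

    // The quartiles come back as an array that could seed a histogram transformation.
    println(statsRow)

    spark.stop()
  }
}

The resulting Row would then be used to build the HistogramTransformation for the new revision, presumably in the same way the existing transformers consume their statistics.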