cobrix
Is it possible to flatten a nested schema so all values are the root?
Background
We have a requirement to flatten our schema in a downstream consumer. I wanted to see whether this feature is supported by Cobrix, so we can both convert mainframe data and flatten the schema.
Question
Does Cobrix support flattening a schema so that all values are at the root and the empty parent columns are removed? I.e., given:

Now:
root
 |-- parent1
 |    |-- column1
 |    |-- column2
 |-- column2
 |-- column3

Desired:
root
 |-- parent1_column1
 |-- parent1_column2
 |-- column2
 |-- column3
Yes, you can use
SparkUtils.flattenSchema(df)
https://github.com/AbsaOSS/cobrix/blob/7462ecf01b6a8ba35250bc4db44a44849643d2cf/spark-cobol/src/main/scala/za/co/absa/cobrix/spark/cobol/utils/SparkUtils.scala#L61
Can we use this if our code base is in Python? I believe we could use something like Py4J, but is there an easier way?
We haven't tried this, but you might be able to invoke the method through the JVM-to-Python gateway (sc._gateway.jvm...), the same way PySpark interacts with the Scala codebase.
Alternatively, the source code for the flattening is not too big, so you can convert it to Python for your use. If you want, you can also contribute the Python version of the flattening code to Cobrix, and we can include it as one of the examples.
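For reference, the core of the flattening logic is a recursive walk over the schema that joins parent and child names with an underscore. Here is a minimal pure-Python sketch of that idea, using nested dicts to stand in for Spark's StructType; the helper name `flatten_fields` is hypothetical and this is not the actual Cobrix implementation:

```python
def flatten_fields(schema, prefix=""):
    """Recursively flatten a nested schema into a flat dict.

    `schema` maps field names either to a leaf type (a string) or to a
    nested dict of sub-fields. Parent names are joined with '_', and the
    parent entries themselves disappear from the result.
    """
    flat = {}
    for name, field_type in schema.items():
        full_name = f"{prefix}{name}"
        if isinstance(field_type, dict):
            # Struct field: recurse, extending the prefix with the parent name
            flat.update(flatten_fields(field_type, prefix=f"{full_name}_"))
        else:
            # Leaf field: keep it under the flattened name
            flat[full_name] = field_type
    return flat


nested = {
    "parent1": {"column1": "string", "column2": "int"},
    "column2": "string",
    "column3": "int",
}
print(flatten_fields(nested))
# {'parent1_column1': 'string', 'parent1_column2': 'int',
#  'column2': 'string', 'column3': 'int'}
```

A real PySpark port would apply the same recursion to `df.schema.fields`, collecting `F.col("parent1.column1").alias("parent1_column1")` expressions and finishing with a single `df.select(...)`.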
Sorry for the late reply. Can you show a snippet of how the call from the JVM to Python gateway would work? Regarding contribution, I do not have the bandwidth at the moment.
I actually haven't tried this specifically with the flattening code. I might check it out and let you know if it worked.
I expect something like:
from pyspark.sql import DataFrame

sc = spark.sparkContext
# Call the Scala method through the Py4J gateway, passing the underlying Java DataFrame
flat_java_df = sc._gateway.jvm.za.co.absa.cobrix.spark.cobol.utils.SparkUtils.flattenSchema(df._jdf)
# Wrap the returned Java DataFrame back into a PySpark DataFrame
flat_df = DataFrame(flat_java_df, spark)
ensuring the Cobrix libraries are on the classpath.
The schema flattening is possible in PySpark. Please see https://github.com/AbsaOSS/cobrix?tab=readme-ov-file#flattening-schema-with-groups-and-occurs