Ruslan Yushchenko
I see now, thanks. Yes, unfortunately, compressed files are not supported. If the file is '.gz', you need to uncompress it first. It is a Spark limitation for binary files.
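A minimal sketch of the decompress-first workflow (the file name and contents here are placeholders created just for the demonstration):

```shell
# Spark's binary record sources cannot read gzip input directly,
# so decompress before pointing Cobrix at the file.
printf 'SAMPLE-RECORD-BYTES' > data.dat
gzip -f data.dat                 # produces data.dat.gz (simulating the input)
gunzip -k -f data.dat.gz         # restores data.dat; -k keeps the .gz copy
cat data.dat                     # the uncompressed file is ready for Cobrix
```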
A follow-up question. The hexdump that you posted does not seem to be compressed. Which command did you use (the full command, including the file name) to get the hex dump?
Cobrix uses `sbt` as the primary build tool, and artifacts on Maven are built using JDK 1.8. Maven is more for compatibility and convenience. But it's a good spot and...
This sounds great. The demand for the feature seems to exist already, but the feature requires a lot of effort. This could be a good collaboration. As soon as the...
Good. We can start looking into requirements in about 2 weeks. Actually, generating our own copybook from a Spark dataframe is easier since we can choose output data types. Conforming...
What about data formats? Do you need support for F, V, VB (RDW, no RDW, BDW+RDW), or can we just start with basic V (RDW)?
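For context, these formats correspond to the reader's `record_format` option (names as documented in the Cobrix README; the paths below are placeholders, so this is a sketch rather than a runnable job):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("cobrix-demo").getOrCreate()

// "F"  = fixed-length records
// "V"  = variable-length records with an RDW header
// "VB" = variable blocked records (BDW + RDW headers)
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .option("record_format", "V")
  .load("/path/to/data")
```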
Thanks for the kind words, Mark! Enjoy the holiday season and the best of luck at the new role! @milehighhokie , looking forward to future collaboration.
Hi, sorry, the writer would require a lot of effort, and we have neither the capacity nor the internal demand for it at the moment. But it is in long term...
Thanks for offering help on the parser. But this issue (#208) is more related to #33. The reconstruction of hierarchical records feature is not currently used much so it is...
Hi, it is possible to define a custom code page by providing an EBCDIC to Unicode conversion table. For example, the Cobrix option: `.option("ebcdic_code_page_class", "za.co.absa.cobrix.spark.cobol.source.utils.CustomCodePage")` Source code: [link](http://github.com/AbsaOSS/cobrix/blob/ab9ab1492e9d55aaa9003304c0ff2632f9dba332/spark-cobol/src/test/scala/za/co/absa/cobrix/spark/cobol/source/utils/CustomCodePage.scala#L21-L21) But adding support...
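To illustrate the conversion-table idea without the Cobrix dependency, here is a library-free sketch: a 256-entry array mapping each EBCDIC byte to a character. The actual class Cobrix plugs in is the one linked above; the few code points filled in here (digits and letters from EBCDIC cp037) are just an example.

```scala
object CustomEbcdicTable {
  // 256-entry EBCDIC -> Unicode table; unmapped bytes decode to a space.
  val ebcdicToUnicode: Array[Char] = {
    val table = Array.fill(256)(' ')
    // A handful of cp037 code points, for demonstration only.
    table(0xF0) = '0'; table(0xF1) = '1'; table(0xF2) = '2'
    table(0xC1) = 'A'; table(0xC2) = 'B'; table(0xC3) = 'C'
    table
  }

  // Decode a raw EBCDIC byte array using the table above.
  def decode(bytes: Array[Byte]): String =
    bytes.map(b => ebcdicToUnicode(b & 0xFF)).mkString
}
```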