Mark Weghorst
I have a similar need, and here is how I got it to work:

```
/* Read only the header */
val headerData = spark.read
  .format("cobol")
  .option("schema_retention_policy", "collapse_root")
  .option("segment_field", "RECORD-ID")
  .option("segment_filter", ...
```
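For readers landing here, a fuller version of that read might look like the sketch below. The copybook path, data path, and the "H" segment id are placeholders I have assumed; the original comment is truncated before the `segment_filter` value.

```
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("header-only").getOrCreate()

// Read only the header records of a multi-segment file: Cobrix keeps just
// the records whose RECORD-ID field matches the segment_filter value.
val headerData = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")          // placeholder path
  .option("schema_retention_policy", "collapse_root")   // flatten the root group
  .option("segment_field", "RECORD-ID")                 // field that identifies the segment
  .option("segment_filter", "H")                        // assumed header segment id
  .load("/path/to/data")                                // placeholder path
```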
I had a meeting to discuss the first draft of these requirements and one of my peers suggested that while dynamically creating a copybook from a Spark schema and declarative...
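To make the "copybook from a Spark schema" idea concrete, here is a minimal sketch of such a generator. The type mappings and field widths are my assumptions for illustration; the comment itself does not specify them.

```
import org.apache.spark.sql.types._

// Derive a simple copybook from a Spark StructType. Widths and COMP usages
// below are assumed defaults, not values from the original discussion.
def toCopybook(schema: StructType, recordName: String = "TEST-RECORD"): String = {
  val fields = schema.fields.map { f =>
    val pic = f.dataType match {
      case StringType  => "PIC X(50)"      // assumed default string width
      case IntegerType => "PIC 9(9) COMP"  // assumed binary integer
      case LongType    => "PIC 9(18) COMP"
      case _           => "PIC X(50)"      // fallback for types not handled here
    }
    s"           05  ${f.name.toUpperCase.replace('_', '-')}  $pic."
  }
  (s"        01  $recordName." +: fields).mkString("\n")
}
```

A real implementation would also need to handle decimals, nesting, and OCCURS clauses; this only shows the flat-record case.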
I have a colleague researching this now, but the preliminary answer is that we need FB and VB formats. In a day or two I'll have a final answer and...
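For context, current spark-cobol versions expose these layouts through the `record_format` option. A sketch, with placeholder paths and copybook:

```
// FB: fixed-length records grouped into blocks
val fixedBlocked = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .option("record_format", "FB")
  .load("/data/fb_file")

// VB: variable-length records with BDW block headers and RDW record headers
val variableBlocked = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")
  .option("record_format", "VB")
  .load("/data/vb_file")
```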
@yruslan, as @milehighhokie indicated, I have accepted a new position at another company, and Bill will be taking over this issue for my former employer. We had a turnover meeting...
I really like this idea. For most of my files I can use the segment_filter feature as suggested, and so far 100% of my outlier use-cases have been solvable using...
Here is the challenge as I see it: when we have a 1:1 relationship between blocks and records, I can work around this quite easily by just treating the BDW...
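A minimal sketch of that 1:1 workaround, assuming every block holds exactly one record so the 4-byte BDW can stand in for the record header. The option names are from the Cobrix README, but the adjustment value is an assumption and would need to be verified against the actual file layout.

```
val df = spark.read
  .format("cobol")
  .option("copybook", "/path/to/copybook.cpy")   // placeholder path
  .option("record_format", "V")                  // read as RDW-delimited: the BDW plays the RDW's role
  .option("rdw_adjustment", "0")                 // assumed; tune if the BDW length includes its own 4 bytes
  .load("/data/vb_file")                         // placeholder path
```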
@yruslan I can build you some test data, but can you be more specific about the field types you need in your sample? Would something composed of just X(N) work...
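As an illustration of the kind of all-`X(N)` sample the question asks about, here is a hypothetical minimal copybook as a Scala string; the field names and widths are invented.

```
val sampleCopybook: String =
  """        01  TEST-RECORD.
    |           05  RECORD-ID    PIC X(2).
    |           05  FIELD-A      PIC X(10).
    |           05  FIELD-B      PIC X(25).
    |""".stripMargin
```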
@yruslan I don't know if you have written any code yet for this feature, but here is what I use in my own Cobrix projects:

```
def copybookLoader(path: String): String ...
```
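The body of that loader is truncated above, so here is a sketch of what a function with that signature might look like. The Hadoop-based reading is my assumption, chosen because copybooks often live on HDFS alongside the data; it is not necessarily the author's implementation.

```
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import scala.io.Source

// Load the full text of a copybook from HDFS (or any Hadoop-supported FS).
def copybookLoader(path: String): String = {
  val fs = FileSystem.get(new Configuration())
  val in = fs.open(new Path(path))
  try Source.fromInputStream(in, "UTF-8").mkString
  finally in.close()
}
```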