cobrix
A COBOL parser and Mainframe/EBCDIC data source for Apache Spark
Hi team, does Cobrix support reading from Azure Blob Storage? I'm having some issues doing the following:

```python
copybook_path = "abfss://[email protected]/example_data/companies_copybook.cpy"
file_path = "abfss://[email protected]/example_data/companies_data/COMP.DETAILS.SEP30.DATA.dat"
df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", copybook_path)
    .load(file_path)...
```
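For reference, a minimal sketch of what a working ADLS Gen2 read might look like. The storage account, container, and key below are placeholders (the obfuscated paths above are left as-is), and the cluster needs the hadoop-azure (ABFS) connector plus the spark-cobol package on its classpath:

```python
# Minimal sketch: reading an EBCDIC file from ADLS Gen2 with Cobrix.
# "myaccount", "mycontainer", and the key are placeholders, not real values.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("cobrix-abfss-example")
    # Standard ABFS account-key auth config; other mechanisms (OAuth, SAS) work too.
    .config("fs.azure.account.key.myaccount.dfs.core.windows.net", "<storage-account-key>")
    .getOrCreate()
)

copybook_path = "abfss://mycontainer@myaccount.dfs.core.windows.net/example_data/companies_copybook.cpy"
file_path = "abfss://mycontainer@myaccount.dfs.core.windows.net/example_data/companies_data/COMP.DETAILS.SEP30.DATA.dat"

df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", copybook_path)
    .load(file_path)
)
df.show(truncate=False)
```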
## Background [Optional] A clear explanation of the reason for raising the question. This gives us a better understanding of your use cases and how we might accommodate them. ##...
Hi guys, I have a file that I receive via FTP; it is in EBCDIC format and does not have an RDW. In the copybook there is a field MODEL-DECISION PIC...
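Since the file has no RDW, it would be read as fixed-length records. A minimal sketch under that assumption (paths are placeholders):

```python
# Sketch: reading a fixed-length EBCDIC file (no RDW) with Cobrix.
# record_format "F" means records carry no length words and are sliced
# by the record size derived from the copybook. Paths are placeholders.
df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", "/path/to/model.cpy")
    .option("record_format", "F")
    .load("/path/to/MODEL.DATA.dat")
)
```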
## Background Currently, Cobrix can be used only for reading EBCDIC data. Sometimes it is useful to be able to write EBCDIC records as well, for instance, to send EBCDIC-encoded...
## Background It is quite a common pattern in mainframe files that large records are split between several segments. Cobrix can combine hierarchical records where one parent record can have...
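For context, the existing parent/child combining is configured with segment options along these lines; the segment names, ID values, and paths here are invented for illustration, and the exact option spellings should be checked against the Cobrix README for your version:

```python
# Sketch: combining hierarchical (parent/child) segments with Cobrix.
# Segment names, ID values, and paths are illustrative only.
df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", "/path/to/multiseg.cpy")
    .option("record_format", "V")                         # RDW-delimited records
    .option("segment_field", "SEGMENT_ID")                # field identifying each segment
    .option("redefine-segment-id-map:0", "COMPANY => C")  # parent segment mapping
    .option("redefine-segment-id-map:1", "CONTACT => P")  # child segment mapping
    .option("segment-children:1", "COMPANY => CONTACT")   # parent -> child relation
    .load("/path/to/multiseg.dat")
)
```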
## Background Determining the record type is critical for reading mainframe files. Yet most of the issues raised stem from confusion about how to specify the type of records in a mainframe file....
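For readers running into the same confusion: the record type is selected with the record_format option. A sketch enumerating the main values (the path is a placeholder):

```python
# Sketch: record_format controls how Cobrix finds record boundaries.
#   "F"  - fixed-length records (no length words)
#   "FB" - fixed-length, blocked
#   "V"  - variable-length records, each preceded by an RDW
#   "VB" - variable-length records grouped in blocks (BDW + RDW)
#   "D"  - delimited text records (e.g. line-separated ASCII)
df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", "/path/to/copybook.cpy")
    .option("record_format", "V")  # pick the value that matches the file
    .load("/path/to/data.dat")
)
```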
I am trying to parse a file that is defined by a set of copybooks, so I set the copybooks option to a comma-separated list of file locations...
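Assuming the plural copybooks option behaves as the issue describes (a comma-separated list of locations; this is the reporter's usage, not verified here), the setup would look roughly like:

```python
# Sketch: passing several copybook locations as one comma-separated value.
# The "copybooks" option follows the issue's description, and all paths
# are placeholders.
copybook_paths = ",".join([
    "/copybooks/header.cpy",
    "/copybooks/company.cpy",
    "/copybooks/trailer.cpy",
])

df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybooks", copybook_paths)
    .load("/path/to/data.dat")
)
```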
## Question In 2019 (https://github.com/AbsaOSS/cobrix/issues/109) @yruslan stated that BDW+RDW was not supported at the time. Is this still the case? I have a use case with variable-length blocked records, and I wanted...
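If the installed Cobrix version does support blocked variable-length records, the read would presumably use the VB record format; a sketch under that assumption (paths are placeholders):

```python
# Sketch: reading blocked variable-length records (BDW + RDW), assuming a
# Cobrix version that supports record_format "VB". Paths are placeholders.
df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", "/path/to/copybook.cpy")
    .option("record_format", "VB")  # BDW-delimited blocks of RDW-delimited records
    .load("/path/to/blocked_file.dat")
)
```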
## Background [Optional] Hi, I am new to Spark and Cobrix. We have 10 EBCDIC files to process and load to a database. Each file contains 1MM records, and we have an...
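As a rough sketch of such a pipeline using plain Spark conventions (all paths, the glob pattern, and the JDBC details are placeholders, and it assumes the source accepts globs like other Spark file sources):

```python
# Sketch: reading a directory of EBCDIC files and loading the result into a
# database via Spark's JDBC writer. All names below are placeholders.
df = (
    spark.read.format("za.co.absa.cobrix.spark.cobol.source")
    .option("copybook", "/copybooks/record_layout.cpy")
    .load("/landing/ebcdic/*.dat")  # all ten files in one read
)

(
    df.write.format("jdbc")
    .option("url", "jdbc:postgresql://dbhost:5432/warehouse")
    .option("dbtable", "staging.mainframe_records")
    .option("user", "etl_user")
    .option("password", "<password>")
    .mode("append")
    .save()
)
```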
## Background There are several refactoring tasks in the backlog. ## Feature Make sure the coverage of the parser is good enough before starting the refactoring.