asn1tools
Support for multiple records in the same file
I can successfully decode one file with one record inside, with BER. Now I'm trying to decode one large file which has hundreds of records, also with BER, but I'm only getting the result of the first record in the file.
with open(localFile, 'rb') as file_t:
    blob_data = bytearray(file_t.read())

print(len(blob_data))  # ========> 5799936, correct byte length of the file
decoded = datastruct_ber.decode_with_length(rootelement, blob_data)
print(decoded[1])  # decoded[0] length of data ======> 4910, length of only the first record
Is this supported, or am I calling the wrong functions?
FWIW: I used another library (pyasn1) and it works with multiple records in one file:
with open(outputFile, 'w') as f:
    cdr, rest_of_cdr_file = ber_decoder(blob_data, asn1Spec)
    print(cdr, file=f)
    while rest_of_cdr_file:
        cdr, rest_of_cdr_file = ber_decoder(rest_of_cdr_file, asn1Spec)
        print(cdr, file=f)
Cons: the ASN.1 spec needs to be compiled beforehand, and pyasn1 doesn't support JER output like asn1tools does; it can only print something similar to YAML.
Can you share the full code?
Managed to resolve this.
# get all the content of the file
with open(file_path, mode='rb') as reader:
    all_content = reader.read()

# setup
records = []
index = 0
content_length = len(all_content)

# decode one record at a time, advancing by each record's encoded length
while index < content_length:
    decoded_record, record_length = asn_decoder.decode_with_length(schema_name, all_content[index:])
    struct_name, struct_record = decoded_record
    index += record_length
    # collect the decoded record
    records.append(decoded_record)
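The loop above relies on decode_with_length() reporting how many bytes each record consumed. For what it's worth, with definite-length BER you can also compute each record's total size directly from its tag and length octets, without decoding the contents. A minimal standalone sketch (not part of the asn1tools API; indefinite-length encodings are rejected):

```python
def ber_tlv_length(buf, offset=0):
    """Total encoded size (header + contents) of the definite-length
    BER TLV starting at `offset`."""
    start = offset
    # Identifier octets: high-tag-number form (low 5 bits all set)
    # is followed by continuation octets with the high bit set.
    if (buf[offset] & 0x1F) == 0x1F:
        offset += 1
        while buf[offset] & 0x80:
            offset += 1
    offset += 1
    # Length octets.
    first = buf[offset]
    offset += 1
    if first == 0x80:
        raise ValueError('indefinite-length encoding not supported')
    if first < 0x80:            # short form: length fits in one octet
        content_length = first
    else:                       # long form: next (first & 0x7F) octets hold the length
        n = first & 0x7F
        content_length = int.from_bytes(buf[offset:offset + n], 'big')
        offset += n
    return (offset - start) + content_length


def split_ber_records(blob):
    """Yield each back-to-back BER record in `blob` as its own slice."""
    index = 0
    while index < len(blob):
        size = ber_tlv_length(blob, index)
        yield blob[index:index + size]
        index += size
```

This lets you split the file into raw records first and decode (or skip) them individually.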
The decode_with_length() function is not supported for the PER codec. Please provide an alternative solution.
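One thing worth noting: PER does not carry an outer length, so a decoder cannot tell where one record ends and the next begins without external framing. If you control the file format, a common workaround is to prefix each record with its byte length and decode each slice individually. A sketch assuming a hypothetical 2-byte big-endian length prefix (the function name and framing are illustrative, not part of asn1tools):

```python
import struct


def read_length_prefixed_records(path):
    """Yield raw record blobs from a file where every record is framed
    by a hypothetical 2-byte big-endian length prefix (PER itself
    carries no outer length, so some external framing is required)."""
    with open(path, 'rb') as f:
        while True:
            header = f.read(2)
            if len(header) < 2:
                break  # end of file (or truncated header)
            (length,) = struct.unpack('>H', header)
            yield f.read(length)
```

Each yielded blob could then be passed to the compiled spec's decode() on its own.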