ValueError: Unknown field type: '\x00'
I have this error opening a DBF file:
Traceback (most recent call last):
  File "hwmnucheck.py", line 486, in main
    checker.loaddb()
  File "hwmnucheck.py", line 301, in loaddb
    self._loaddb_turni()
  File "hwmnucheck.py", line 236, in _loaddb_turni
    table = dbfread.DBF(self._db_turnimensa_f)
  File "\WinPython-32bit-3.4.3.7\python-3.4.3\lib\site-packages\dbfread\dbf.py", line 123, in __init__
    self._check_headers()
  File "\WinPython-32bit-3.4.3.7\python-3.4.3\lib\site-packages\dbfread\dbf.py", line 265, in _check_headers
    raise ValueError('Unknown field type: {!r}'.format(field.type))
ValueError: Unknown field type: '\x00'
This is either a corrupt file or a field type I haven't come across before.
If it's a new field type, the first step is to find out what's in the fields. You can do this by adding a custom field parser for the \x00 field type:
from dbfread import DBF, FieldParser

class TestFieldParser(FieldParser):
    def parse00(self, field, data):
        print(field.name, data)
        return data

dbf = DBF('test.dbf', parserclass=TestFieldParser)
for rec in dbf:
    pass
This will print out the data in every \x00 field, for example:

NAME b'Bob '
NAME b'Alice '
The next step is to figure out how to parse the data (or just leave it as a binary string).
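For example, if the printed bytes turn out to be padded text, a parser along these lines could decode them (a hypothetical sketch; the codec and the stripping depend entirely on what the printed data actually looks like):

from dbfread import DBF, FieldParser

class NullFieldParser(FieldParser):
    def parse00(self, field, data):
        # Hypothetical: assume the \x00 fields hold space/NUL-padded text;
        # adjust the codec and stripping to match the data printed above.
        return data.decode('latin-1').rstrip('\x00 ')

dbf = DBF('test.dbf', parserclass=NullFieldParser)
records = list(dbf)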
Hi,
I'm having the same issue with a .dbf file, and I ran the code above to identify the problematic field, but I get the following exception:
Traceback (most recent call last):
  File "test_dbf_field.py", line 14, in <module>
    main(sys.argv[1])
  File "test_dbf_field.py", line 9, in main
    dbf = DBF(fname, parserclass=TestFieldParser, encoding='iso-8859-1')
  File "/usr/local/lib/python3.6/dist-packages/dbfread/dbf.py", line 123, in __init__
    self._check_headers()
  File "/usr/local/lib/python3.6/dist-packages/dbfread/dbf.py", line 265, in _check_headers
    raise ValueError('Unknown field type: {!r}'.format(field.type))
ValueError: Unknown field type: 'ÿ'
LibreOffice Calc is able to open this file without any problem.
I investigated a little further to see if the dbf had been altered by being opened and re-saved in a spreadsheet program such as Excel (I've seen this happen before). I inspected the dbf with the file command, and here is the output:
problematic.dbf: Composite Document File V2 Document, Little Endian, Os: Windows, Version 1.0, Code page: -535, Revision Number: 2, Total Editing Time: 08:58, Last Saved Time/Date: Tue Dec 3 13:04:50 2019
For comparison, here is the file command's output on a valid dbf:
DENGON2015.dbf: FoxBase+/dBase III DBF, 105972 records * 1375, update-date 117-10-2, at offset 5089 1st record "00002672A90 201503302015132015232307001518 256387820150325201512JANE DOE "
The "Composite Document File V2" signature is the OLE2 container format used by legacy Microsoft Office files, so this tells me that this dbf has been tampered with and re-saved, most likely by Excel.
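A quick way to confirm this from Python is to check the magic number (a sketch; the eight-byte signature below is the standard OLE2 one):

# OLE2/Compound File containers (used by legacy Excel formats)
# begin with this well-known eight-byte magic number.
with open('problematic.dbf', 'rb') as f:
    magic = f.read(8)
print(magic == b'\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1')  # True means it is not a real DBF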
Do you suggest any way to handle this without having the original file recreated?
It's hard to say what has happened to the file. You could try opening it in a hex editor and see what it looks like. A normal DBF file should look something like this (people.dbf from the examples folder):
00000000: 0372 0802 0300 0000 6100 1900 0000 0000  .r......a.......
00000010: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00000020: 4e41 4d45 0000 0000 0000 0043 0100 0000  NAME.......C....
00000030: 1000 0000 0000 0000 0000 0000 0000 0000  ................
00000040: 4249 5254 4844 4154 4500 0044 1100 0000  BIRTHDATE..D....
00000050: 0800 0000 0000 0000 0000 0000 0000 0000  ................
00000060: 0d20 416c 6963 6520 2020 2020 2020 2020  . Alice
00000070: 2020 3139 3837 3033 3031 2042 6f62 2020    19870301 Bob
00000080: 2020 2020 2020 2020 2020 2031 3938 3031             19801
00000090: 3131 322a 4465 6c65 7465 6420 4775 7920  112*Deleted Guy
000000a0: 2020 2020 3139 3739 3132 3232 1a             19791222.
The file has:
- a short header (32 bytes, first two lines here)
- field headers (NAME C and BIRTHDATE D)
- records starting with either a ' ' (Alice here) or a '*' (a deleted record, like Deleted Guy here).
If you see something that doesn't look like this, the file is likely corrupted.
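For a quick automated version of that eyeball check, a rough sketch like this (not part of dbfread; the offsets follow the standard DBF header layout) decodes the fixed 32-byte header and reports the same numbers the file command printed above:

import struct

def inspect_dbf_header(path):
    # The fixed part of a DBF header is 32 bytes:
    # byte 0: version, bytes 1-3: last update (YY MM DD, year since 1900),
    # bytes 4-7: record count, 8-9: header length, 10-11: record length.
    with open(path, 'rb') as f:
        header = f.read(32)
    if len(header) < 32:
        raise ValueError('too short to be a DBF file')
    numrecords, headerlen, recordlen = struct.unpack('<IHH', header[4:12])
    print('version byte : 0x{:02x}'.format(header[0]))
    print('last update  : {}-{:02}-{:02}'.format(1900 + header[1], header[2], header[3]))
    print('records      : {} * {} bytes'.format(numrecords, recordlen))
    print('1st record at: offset {}'.format(headerlen))

inspect_dbf_header('people.dbf')

For the people.dbf dump above this reports version byte 0x03 and 3 records of 25 bytes each, with the first record at offset 97; a file that has been re-saved as something else will show nonsense here.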
I've been wanting to write some debugging tools for these sorts of situations, and I've decided to go ahead and do it as described in issue #50.