fparser failing to parse some subroutine formats
The Socrates project appears to have some subroutines which cause fparser to fall over.
A .f file with this subroutine declaration fails:
SUBROUTINE make_block_6_1(ierr
& , n_band, wave_length_short, wave_length_long
& , l_exclude, n_band_exclude, index_exclude
& , n_deg_fit, t_ref_thermal, thermal_coefficient
& , theta_planck_tbl, l_present_6, l_planck_tbl
& )
This is the first code after the initial comments in the file, and parsing it fails like this:
>>> from fparser.api import parse
>>> from fparser.two.parser import ParserFactory
>>> from fparser.common.readfortran import FortranFileReader
>>> parser = ParserFactory().create(std="f2008")
>>> reader = FortranFileReader("make_block_6_1.f", include_dirs=".")
>>> parser(reader)
Traceback (most recent call last):
File "/home/achalk/LFRIC/LFRIC_env/lib64/python3.6/site-packages/fparser/two/Fortran2003.py", line 266, in __new__
return Base.__new__(cls, string)
File "/home/achalk/LFRIC/LFRIC_env/lib64/python3.6/site-packages/fparser/two/utils.py", line 487, in __new__
raise NoMatchError(errmsg)
fparser.two.utils.NoMatchError: at line 20
>>> SUBROUTINE make_block_6_1(ierr
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/achalk/LFRIC/LFRIC_env/lib64/python3.6/site-packages/fparser/two/Fortran2003.py", line 270, in __new__
raise FortranSyntaxError(string, "")
fparser.two.utils.FortranSyntaxError: at line 20
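In case it is useful, here is a self-contained sketch of the same experiment, writing the declaration to a temporary fixed-format file and parsing it as above. The END statement and the exact column layout are my additions, so it may not hit exactly the same error as the original Socrates file:

import os
import tempfile
from fparser.two.parser import ParserFactory
from fparser.two.utils import FortranSyntaxError
from fparser.common.readfortran import FortranFileReader

# Fixed format: the statement starts in column 7 and the '&' continuation
# marker sits in column 6 of each continued line.
code = (
    "      SUBROUTINE make_block_6_1(ierr\n"
    "     &  , n_band, wave_length_short, wave_length_long\n"
    "     &  , l_exclude, n_band_exclude, index_exclude\n"
    "     &  , n_deg_fit, t_ref_thermal, thermal_coefficient\n"
    "     &  , theta_planck_tbl, l_present_6, l_planck_tbl\n"
    "     &  )\n"
    "      END\n"  # my addition, so the program unit is complete
)
with tempfile.NamedTemporaryFile("w", suffix=".f", delete=False) as handle:
    handle.write(code)
try:
    parser = ParserFactory().create(std="f2008")
    print(parser(FortranFileReader(handle.name)))
except FortranSyntaxError as err:
    print("parse failed:", err)
finally:
    os.remove(handle.name)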
I don't believe this is a fixed-format-only problem, as there is a free-format .f90 file with this subroutine declaration:
SUBROUTINE read_schema_spectrum(ierr
, n_band, n_absorb, type_absorb
, n_aerosol, type_aerosol
, wave_length_short, wave_length_long
, n_band_absorb, index_absorb
, n_band_continuum, index_continuum
, l_exclude, n_band_exclude, index_exclude
)
Again - the first non-comment line in the file.
This also fails:
>>> from fparser.two.parser import ParserFactory
>>> from fparser.common.readfortran import FortranFileReader
>>> parser = ParserFactory().create(std="f2008")
>>> reader = FortranFileReader("read_schema_spectrum_90.f90", include_dirs=".")
>>> parser(reader)
Traceback (most recent call last):
File "/home/achalk/LFRIC/LFRIC_env/lib64/python3.6/site-packages/fparser/two/Fortran2003.py", line 266, in __new__
return Base.__new__(cls, string)
File "/home/achalk/LFRIC/LFRIC_env/lib64/python3.6/site-packages/fparser/two/utils.py", line 487, in __new__
raise NoMatchError(errmsg)
fparser.two.utils.NoMatchError: at line 10
>>>SUBROUTINE read_schema_spectrum(ierr
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/achalk/LFRIC/LFRIC_env/lib64/python3.6/site-packages/fparser/two/Fortran2003.py", line 270, in __new__
raise FortranSyntaxError(string, "")
fparser.two.utils.FortranSyntaxError: at line 10
>>>SUBROUTINE read_schema_spectrum(ierr
Is that second code fragment right? There are no line-continuation symbols, so unless they've been stripped out by the cut-and-paste (somehow) I don't think it's valid Fortran?
I double-checked and that is what I found. I think you might be correct that this is not technically valid Fortran, but compilers seem to be OK with it.
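For comparison, with the trailing '&' continuation characters that free form requires, the declaration is standard-conforming. A minimal sketch, assuming FortranStringReader treats this string as free form (the END statement is my addition):

from fparser.two.parser import ParserFactory
from fparser.common.readfortran import FortranStringReader

# The same declaration, with the trailing '&' continuations restored.
code = (
    "SUBROUTINE read_schema_spectrum(ierr &\n"
    "  , n_band, n_absorb, type_absorb &\n"
    "  , n_aerosol, type_aerosol &\n"
    "  , wave_length_short, wave_length_long &\n"
    "  , n_band_absorb, index_absorb &\n"
    "  , n_band_continuum, index_continuum &\n"
    "  , l_exclude, n_band_exclude, index_exclude &\n"
    "  )\n"
    "END SUBROUTINE read_schema_spectrum\n"
)
parser = ParserFactory().create(std="f2008")
print(parser(FortranStringReader(code)))  # prints the parse tree if the code is accepted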
That's really strange. I've never come across that, and a Google search doesn't reveal anything either. Does the Socrates build system do some sort of preprocessing?
Hmm, perhaps this file isn't used. I explored a little, and running make read_schema_spectrum_90.o does cause the compiler to fall over on this file. Do you know who might be best to ask? Probably we should ignore that problem for now and focus on the first error/file.
Yes, I suggest we ignore it. (In NEMO we keep a list of files to ignore.) I don't know who we should ask, I'm afraid. Perhaps it's Norman?
@rupertford I have had a quick look, and I think perhaps it might be to do with INCLUDE statements - I will check whether the other failing files all contain them and go from there (this is ignoring the files that fail due to the other fparser issue I created).
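A rough sketch of the check I have in mind - just scanning the failing files for INCLUDE lines (the file list and paths are placeholders):

import re
from pathlib import Path

# Placeholder list of failing files; point these at the Socrates checkout.
failing = ["read_schema_spectrum_90.f90", "make_block_6_1.f",
           "make_block_6_2.f", "seaalbedo_driver.f"]
include_re = re.compile(r"^\s*INCLUDE\b", re.IGNORECASE)

for name in failing:
    hits = [line.strip() for line in Path(name).read_text().splitlines()
            if include_re.match(line)]
    print(name, "->", hits if hits else "no INCLUDE statements")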
Files, with the failing code and where each falls over in fparser:
read_schema_spectrum_90.f90
Failing code:
SUBROUTINE read_schema_spectrum(ierr
, n_band, n_absorb, type_absorb
, n_aerosol, type_aerosol
, wave_length_short, wave_length_long
, n_band_absorb, index_absorb
, n_band_continuum, index_continuum
, l_exclude, n_band_exclude, index_exclude
)
Where it falls over in fparser:
fparser/src/fparser/one/block_statements.py
if line.startswith("("):
    i = line.find(")")
    assert i != -1, repr(line)
Estimated cause - the original source is missing line-continuation characters - not an fparser bug.
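To spell out the estimated cause (my reading of the snippet above, not a confirmed diagnosis): without the continuation characters the statement ends after the first line, so the text following the subroutine name has an opening "(" but no closing ")", and the assert fires:

# Hypothetical illustration of the assert above, not actual fparser internals.
line = "(ierr"               # all that follows the subroutine name on the truncated statement
if line.startswith("("):
    i = line.find(")")       # no closing parenthesis, so this returns -1
    assert i != -1, repr(line)   # raises AssertionError: '(ierr'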
disort_interface.f
Fails when running PSyclone, but works fine for my test. I think this may actually be a PSyclone issue in algorithm.py - or I'm using PSyclone incorrectly on this file.
make_block_6_1.f
Failing code:
SUBROUTINE make_block_6_1(ierr
& , n_band, wave_length_short, wave_length_long
& , l_exclude, n_band_exclude, index_exclude
& , n_deg_fit, t_ref_thermal, thermal_coefficient
& , theta_planck_tbl, l_present_6, l_planck_tbl
& )
Where it falls over in fparser:
fparser/src/fparser/one/block_statements.py
if line.startswith("("):
    i = line.find(")")
    assert i != -1, repr(line)
Estimated cause - the original source is missing line-continuation characters - not an fparser bug.
make_block_6_2.f
Failing code:
SUBROUTINE MAKE_BLOCK_6_2(IERR
& , N_BAND, WAVE_LENGTH_SHORT, WAVE_LENGTH_LONG
& , L_EXCLUDE, N_BAND_EXCLUDE, INDEX_EXCLUDE
& , N_DEG_FIT, T_REF_THERMAL, THERMAL_COEFFICIENT
& , THETA_PLANCK_TBL, L_PRESENT_6, L_PLANCK_TBL
& )
Where it falls over in fparser:
fparser/src/fparser/one/block_statements.py
if line.startswith("("):
    i = line.find(")")
    assert i != -1, repr(line)
Estimated cause - the original source is missing line-continuation characters - not an fparser bug.
seaalbedo_driver.f
I think this fails due to a subroutine containing a function inside the included file? I think this file (and its included seaalbedo.f) are generally a mess and we should just ignore it for now unless it becomes an issue for performance.
The failing stack trace is:
Traceback (most recent call last):
File "parse.py", line 12, in <module>
tree = parse(code, include_dirs=["/home/achalk/socrates_2304/bin"])
File "/home/achalk/fparser/fparser/src/fparser/api.py", line 215, in parse
parser.analyze()
File "/home/achalk/fparser/fparser/src/fparser/one/parsefortran.py", line 160, in analyze
self.block.analyze()
File "/home/achalk/fparser/fparser/src/fparser/common/utils.py", line 330, in new_func
func(self)
File "/home/achalk/fparser/fparser/src/fparser/one/block_statements.py", line 372, in analyze
stmt.analyze()
File "/home/achalk/fparser/fparser/src/fparser/common/utils.py", line 330, in new_func
func(self)
File "/home/achalk/fparser/fparser/src/fparser/one/typedecl_statements.py", line 404, in analyze
variables = self.parent.a.variables
File "/home/achalk/fparser/fparser/src/fparser/common/base_classes.py", line 109, in __getattr__
raise AttributeError(message % (self.__class__.__name__, name, attributes))
AttributeError: AttributeHolder instance has no attribute 'variables', expected attributes: module, external_subprogram, blockdata
Overall then, perhaps there are no fparser bugs here (one could argue the last one is? But I couldn't really narrow it down).