
Failing tests

Open · P-EB opened this issue on Dec 19, 2020 · 0 comments

Hi,

While running the test suite shipped with your pytest plugin, I got many failures.

I don't know how to investigate most of them, but at least the last one is caused by a trivial whitespace mismatch between the plugin code and its own test code: the assertion message in dataframe_regression.py and the pattern the test passes to pytest.raises(match=...) differ by two leading spaces on the second line (see the note after the output below).

============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0 -- /usr/bin/python3.9
cachedir: .pytest_cache
rootdir: /tmp/autopkgtest.2XwkFn/autopkgtest_tmp
plugins: regressions-2.1.1, datadir-1.3.1+ds
collecting ... collected 39 items

tests/test_data_regression.py::test_example PASSED
tests/test_data_regression.py::test_basename PASSED
tests/test_data_regression.py::test_custom_object PASSED
tests/test_data_regression.py::test_usage_workflow ============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

data_regression = <pytest_regressions.data_regression.DataRegressionFixture object at 0x7f490431d970>

    def test_1(data_regression):
        contents = sys.testing_get_data()
>       data_regression.check(contents)
E       Failed: File not found in data directory, created:
E       - /tmp/pytest-of-becue/pytest-0/test_usage_workflow0/test_file/test_1.yml

test_file.py:4: Failed
=========================== 1 failed in 0.04 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py .                                                           [100%]

=========================== 1 passed in 0.02 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

data_regression = <pytest_regressions.data_regression.DataRegressionFixture object at 0x7f490428b100>

    def test_1(data_regression):
        contents = sys.testing_get_data()
>       data_regression.check(contents)
E       AssertionError: FILES DIFFER:
E       /tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-2/test_10/test_file/test_1.yml
E       /tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-2/test_10/test_file/test_1.obtained.yml
E       HTML DIFF: /tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-2/test_10/test_file/test_1.obtained.diff.html
E       --- 
E       +++ 
E       @@ -1,2 +1,2 @@
E       -contents: Foo
E       -value: 10
E       +contents: Bar
E       +value: 20

test_file.py:4: AssertionError
=========================== 1 failed in 0.01 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

datadir = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file')
original_datadir = PosixPath('/tmp/pytest-of-becue/pytest-0/test_usage_workflow0/test_file')
request = <SubRequest 'data_regression' for <Function test_1>>
check_fn = functools.partial(<function check_text_files at 0x7f4904507310>, encoding='UTF-8')
dump_fn = <function DataRegressionFixture.check.<locals>.dump at 0x7f4904215c10>
extension = '.yml', basename = 'test_1', fullpath = None, force_regen = True
obtained_filename = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file/test_1.obtained.yml')
dump_aux_fn = <function <lambda> at 0x7f49045073a0>

    def perform_regression_check(
        datadir,
        original_datadir,
        request,
        check_fn,
        dump_fn,
        extension,
        basename=None,
        fullpath=None,
        force_regen=False,
        obtained_filename=None,
        dump_aux_fn=lambda filename: [],
    ):
        """
        First run of this check will generate a expected file. Following attempts will always try to
        match obtained files with that expected file.
    
        If expected file needs to be updated, just enable `force_regen` argument.
    
        :param Path datadir: Fixture embed_data.
        :param Path original_datadir: Fixture embed_data.
        :param SubRequest request: Pytest request object.
        :param callable check_fn: A function that receives as arguments, respectively, absolute path to
            obtained file and absolute path to expected file. It must assert if contents of file match.
            Function can safely assume that obtained file is already dumped and only care about
            comparison.
        :param callable dump_fn: A function that receive an absolute file path as argument. Implementor
            must dump file in this path.
        :param callable dump_aux_fn: A function that receives the same file path as ``dump_fn``, but may
            dump additional files to help diagnose this regression later (for example dumping image of
            3d views and plots to compare later). Must return the list of file names written (used to display).
        :param str extension: Extension of files compared by this check.
        :param bool force_regen: if true it will regenerate expected file.
        :param str obtained_filename: complete path to use to write the obtained file. By
            default will prepend `.obtained` before the file extension.
        ..see: `data_regression.Check` for `basename` and `fullpath` arguments.
        """
        import re
    
        assert not (basename and fullpath), "pass either basename or fullpath, but not both"
    
        __tracebackhide__ = True
    
        if basename is None:
            basename = re.sub(r"[\W]", "_", request.node.name)
    
        if fullpath:
            filename = source_filename = Path(fullpath)
        else:
            filename = datadir / (basename + extension)
            source_filename = original_datadir / (basename + extension)
    
        def make_location_message(banner, filename, aux_files):
            msg = [banner, f"- {filename}"]
            if aux_files:
                msg.append("Auxiliary:")
                msg += [f"- {x}" for x in aux_files]
            return "\n".join(msg)
    
        force_regen = force_regen or request.config.getoption("force_regen")
        if not filename.is_file():
            source_filename.parent.mkdir(parents=True, exist_ok=True)
            dump_fn(source_filename)
            aux_created = dump_aux_fn(source_filename)
    
            msg = make_location_message(
                "File not found in data directory, created:", source_filename, aux_created
            )
            pytest.fail(msg)
        else:
            if obtained_filename is None:
                if fullpath:
                    obtained_filename = (datadir / basename).with_suffix(
                        ".obtained" + extension
                    )
                else:
                    obtained_filename = filename.with_suffix(".obtained" + extension)
    
            dump_fn(obtained_filename)
    
            try:
>               check_fn(obtained_filename, filename)

/usr/lib/python3/dist-packages/pytest_regressions/common.py:153: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

obtained_fn = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file/test_1.obtained.yml')
expected_fn = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file/test_1.yml')
fix_callback = <function <lambda> at 0x7f4904507280>, encoding = 'UTF-8'

    def check_text_files(obtained_fn, expected_fn, fix_callback=lambda x: x, encoding=None):
        """
        Compare two files contents. If the files differ, show the diff and write a nice HTML
        diff file into the data directory.
    
        :param Path obtained_fn: path to obtained file during current testing.
    
        :param Path expected_fn: path to the expected file, obtained from previous testing.
    
        :param str encoding: encoding used to open the files.
    
        :param callable fix_callback:
            A callback to "fix" the contents of the obtained (first) file.
            This callback receives a list of strings (lines) and must also return a list of lines,
            changed as needed.
            The resulting lines will be used to compare with the contents of expected_fn.
        """
        __tracebackhide__ = True
    
        obtained_fn = Path(obtained_fn)
        expected_fn = Path(expected_fn)
        obtained_lines = fix_callback(obtained_fn.read_text(encoding=encoding).splitlines())
        expected_lines = expected_fn.read_text(encoding=encoding).splitlines()
    
        if obtained_lines != expected_lines:
            diff_lines = list(
                difflib.unified_diff(expected_lines, obtained_lines, lineterm="")
            )
            if len(diff_lines) <= 500:
                html_fn = obtained_fn.with_suffix(".diff.html")
                try:
                    differ = difflib.HtmlDiff()
                    html_diff = differ.make_file(
                        fromlines=expected_lines,
                        fromdesc=expected_fn,
                        tolines=obtained_lines,
                        todesc=obtained_fn,
                    )
                except Exception as e:
                    html_fn = "(failed to generate html diff: %s)" % e
                else:
                    html_fn.write_text(html_diff, encoding="UTF-8")
    
                diff = ["FILES DIFFER:", str(expected_fn), str(obtained_fn)]
                diff += ["HTML DIFF: %s" % html_fn]
                diff += diff_lines
>               raise AssertionError("\n".join(diff))
E               AssertionError: FILES DIFFER:
E               /tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file/test_1.yml
E               /tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file/test_1.obtained.yml
E               HTML DIFF: /tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow0/pytest-of-becue/pytest-3/test_10/test_file/test_1.obtained.diff.html
E               --- 
E               +++ 
E               @@ -1,2 +1,2 @@
E               -contents: Foo
E               -value: 10
E               +contents: Bar
E               +value: 20

/usr/lib/python3/dist-packages/pytest_regressions/common.py:58: AssertionError

During handling of the above exception, another exception occurred:

data_regression = <pytest_regressions.data_regression.DataRegressionFixture object at 0x7f49041d5bb0>

    def test_1(data_regression):
        contents = sys.testing_get_data()
>       data_regression.check(contents)
E       Failed: Files differ and --force-regen set, regenerating file at:
E       - /tmp/pytest-of-becue/pytest-0/test_usage_workflow0/test_file/test_1.yml

test_file.py:4: Failed
=========================== 1 failed in 0.03 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py .                                                           [100%]

=========================== 1 passed in 0.01 seconds ===========================
PASSED
tests/test_data_regression.py::test_data_regression_full_path ============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_data_regression_full_path0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_foo.py F                                                            [100%]

=================================== FAILURES ===================================
_____________________________________ test _____________________________________

data_regression = <pytest_regressions.data_regression.DataRegressionFixture object at 0x7f49041767f0>

    def test(data_regression):
        contents = {'data': [1, 2]}
>       data_regression.check(contents, fullpath='/tmp/pytest-of-becue/pytest-0/test_data_regression_full_path1/full/path/to/contents.yaml')
E       Failed: File not found in data directory, created:
E       - /tmp/pytest-of-becue/pytest-0/test_data_regression_full_path1/full/path/to/contents.yaml

test_foo.py:3: Failed
=========================== 1 failed in 0.01 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_data_regression_full_path0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_foo.py .                                                            [100%]

=========================== 1 passed in 0.01 seconds ===========================
PASSED
tests/test_data_regression.py::test_data_regression_no_aliases ============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_data_regression_no_aliases0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
_____________________________________ test _____________________________________

data_regression = <pytest_regressions.data_regression.DataRegressionFixture object at 0x7f49040b12b0>

    def test(data_regression):
        red = (255, 0, 0)
        green = (0, 255, 0)
        blue = (0, 0, 255)
    
        contents = {
            'color1': red,
            'color2': green,
            'color3': blue,
            'color4': red,
            'color5': green,
            'color6': blue,
        }
>       data_regression.Check(contents)
E       Failed: File not found in data directory, created:
E       - /tmp/pytest-of-becue/pytest-0/test_data_regression_no_aliases0/test_file/test.yml

test_file.py:14: Failed
=========================== 1 failed in 0.01 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_data_regression_no_aliases0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py .                                                           [100%]

=========================== 1 passed in 0.01 seconds ===========================
PASSED
tests/test_data_regression.py::test_not_create_file_on_error ============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_not_create_file_on_error0
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
_____________________________________ test _____________________________________

data_regression = <pytest_regressions.data_regression.DataRegressionFixture object at 0x7f4904048ac0>

    def test(data_regression):
        class Scalar:
            def __init__(self, value, unit):
                self.value = value
                self.unit = unit
    
        contents = {"scalar": Scalar(10, "m")}
>       data_regression.Check(contents)

test_file.py:8: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3/dist-packages/pytest_regressions/data_regression.py:46: in dump
    dumped_str = yaml.dump_all(
/usr/lib/python3/dist-packages/yaml/__init__.py:278: in dump_all
    dumper.represent(data)
/usr/lib/python3/dist-packages/yaml/representer.py:27: in represent
    node = self.represent_data(data)
/usr/lib/python3/dist-packages/yaml/representer.py:48: in represent_data
    node = self.yaml_representers[data_types[0]](self, data)
/usr/lib/python3/dist-packages/yaml/representer.py:207: in represent_dict
    return self.represent_mapping('tag:yaml.org,2002:map', data)
/usr/lib/python3/dist-packages/yaml/representer.py:118: in represent_mapping
    node_value = self.represent_data(item_value)
/usr/lib/python3/dist-packages/yaml/representer.py:58: in represent_data
    node = self.yaml_representers[None](self, data)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_regressions.data_regression.RegressionYamlDumper object at 0x7f4904048cd0>
data = <test_file.test.<locals>.Scalar object at 0x7f4904048c40>

    def represent_undefined(self, data):
>       raise RepresenterError("cannot represent an object", data)
E       yaml.representer.RepresenterError: ('cannot represent an object', <test_file.test.<locals>.Scalar object at 0x7f4904048c40>)

/usr/lib/python3/dist-packages/yaml/representer.py:231: RepresenterError
=========================== 1 failed in 0.04 seconds ===========================
PASSED
tests/test_dataframe_regression.py::test_usage_workflow ============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow1
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

dataframe_regression = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903fc0f10>

    def test_1(dataframe_regression):
        contents = sys.testing_get_data()
>       dataframe_regression.check(pd.DataFrame.from_dict(contents))
E       Failed: File not found in data directory, created:
E       - /tmp/pytest-of-becue/pytest-0/test_usage_workflow1/test_file/test_1.csv

test_file.py:5: Failed
=========================== 1 failed in 0.01 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow1
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py .                                                           [100%]

=========================== 1 passed in 0.01 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow1
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

dataframe_regression = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903e9f970>

    def test_1(dataframe_regression):
        contents = sys.testing_get_data()
>       dataframe_regression.check(pd.DataFrame.from_dict(contents))
E       AssertionError: Values are not sufficiently close.
E       To update values, use --force-regen option.
E       
E       data:
E                 obtained_data        expected_data                 diff
E       0   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       1   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       2   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       3   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       4   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       5   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       6   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       7   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       8   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       9   1.19999999999999996  1.10000000000000009  0.09999999999999987
E       10  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       11  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       12  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       13  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       14  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       15  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       16  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       17  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       18  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       19  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       20  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       21  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       22  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       23  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       24  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       25  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       26  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       27  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       28  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       29  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       30  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       31  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       32  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       33  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       34  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       35  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       36  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       37  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       38  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       39  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       40  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       41  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       42  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       43  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       44  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       45  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       46  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       47  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       48  1.19999999999999996  1.10000000000000009  0.09999999999999987
E       49  1.19999999999999996  1.10000000000000009  0.09999999999999987

test_file.py:5: AssertionError
=========================== 1 failed in 0.02 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow1
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py F                                                           [100%]

=================================== FAILURES ===================================
____________________________________ test_1 ____________________________________

datadir = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow1/pytest-of-becue/pytest-3/test_10/test_file')
original_datadir = PosixPath('/tmp/pytest-of-becue/pytest-0/test_usage_workflow1/test_file')
request = <SubRequest 'dataframe_regression' for <Function test_1>>
check_fn = <bound method DataFrameRegressionFixture._check_fn of <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903e8c970>>
dump_fn = functools.partial(<bound method DataFrameRegressionFixture._dump_fn of <pytest_regressions.dataframe_regression.DataFr...2
37   1.2
38   1.2
39   1.2
40   1.2
41   1.2
42   1.2
43   1.2
44   1.2
45   1.2
46   1.2
47   1.2
48   1.2
49   1.2)
extension = '.csv', basename = 'test_1', fullpath = None, force_regen = True
obtained_filename = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow1/pytest-of-becue/pytest-3/test_10/test_file/test_1.obtained.csv')
dump_aux_fn = <function <lambda> at 0x7f49045073a0>

    def perform_regression_check(
        datadir,
        original_datadir,
        request,
        check_fn,
        dump_fn,
        extension,
        basename=None,
        fullpath=None,
        force_regen=False,
        obtained_filename=None,
        dump_aux_fn=lambda filename: [],
    ):
        """
        First run of this check will generate a expected file. Following attempts will always try to
        match obtained files with that expected file.
    
        If expected file needs to be updated, just enable `force_regen` argument.
    
        :param Path datadir: Fixture embed_data.
        :param Path original_datadir: Fixture embed_data.
        :param SubRequest request: Pytest request object.
        :param callable check_fn: A function that receives as arguments, respectively, absolute path to
            obtained file and absolute path to expected file. It must assert if contents of file match.
            Function can safely assume that obtained file is already dumped and only care about
            comparison.
        :param callable dump_fn: A function that receive an absolute file path as argument. Implementor
            must dump file in this path.
        :param callable dump_aux_fn: A function that receives the same file path as ``dump_fn``, but may
            dump additional files to help diagnose this regression later (for example dumping image of
            3d views and plots to compare later). Must return the list of file names written (used to display).
        :param str extension: Extension of files compared by this check.
        :param bool force_regen: if true it will regenerate expected file.
        :param str obtained_filename: complete path to use to write the obtained file. By
            default will prepend `.obtained` before the file extension.
        ..see: `data_regression.Check` for `basename` and `fullpath` arguments.
        """
        import re
    
        assert not (basename and fullpath), "pass either basename or fullpath, but not both"
    
        __tracebackhide__ = True
    
        if basename is None:
            basename = re.sub(r"[\W]", "_", request.node.name)
    
        if fullpath:
            filename = source_filename = Path(fullpath)
        else:
            filename = datadir / (basename + extension)
            source_filename = original_datadir / (basename + extension)
    
        def make_location_message(banner, filename, aux_files):
            msg = [banner, f"- {filename}"]
            if aux_files:
                msg.append("Auxiliary:")
                msg += [f"- {x}" for x in aux_files]
            return "\n".join(msg)
    
        force_regen = force_regen or request.config.getoption("force_regen")
        if not filename.is_file():
            source_filename.parent.mkdir(parents=True, exist_ok=True)
            dump_fn(source_filename)
            aux_created = dump_aux_fn(source_filename)
    
            msg = make_location_message(
                "File not found in data directory, created:", source_filename, aux_created
            )
            pytest.fail(msg)
        else:
            if obtained_filename is None:
                if fullpath:
                    obtained_filename = (datadir / basename).with_suffix(
                        ".obtained" + extension
                    )
                else:
                    obtained_filename = filename.with_suffix(".obtained" + extension)
    
            dump_fn(obtained_filename)
    
            try:
>               check_fn(obtained_filename, filename)

/usr/lib/python3/dist-packages/pytest_regressions/common.py:153: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903e8c970>
obtained_filename = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow1/pytest-of-becue/pytest-3/test_10/test_file/test_1.obtained.csv')
expected_filename = PosixPath('/tmp/pytest-of-becue/pytest-0/tmp-test_usage_workflow1/pytest-of-becue/pytest-3/test_10/test_file/test_1.csv')

    def _check_fn(self, obtained_filename, expected_filename):
        """
        Check if dict contents dumped to a file match the contents in expected file.
    
        :param str obtained_filename:
        :param str expected_filename:
        """
        try:
            import numpy as np
        except ModuleNotFoundError:
            raise ModuleNotFoundError(import_error_message("Numpy"))
        try:
            import pandas as pd
        except ModuleNotFoundError:
            raise ModuleNotFoundError(import_error_message("Pandas"))
    
        __tracebackhide__ = True
    
        obtained_data = pd.read_csv(str(obtained_filename))
        expected_data = pd.read_csv(str(expected_filename))
    
        comparison_tables_dict = {}
        for k in obtained_data.keys():
            obtained_column = obtained_data[k]
            expected_column = expected_data.get(k)
    
            if expected_column is None:
                error_msg = f"Could not find key '{k}' in the expected results.\n"
                error_msg += "Keys in the obtained data table: ["
                for k in obtained_data.keys():
                    error_msg += f"'{k}', "
                error_msg += "]\n"
                error_msg += "Keys in the expected data table: ["
                for k in expected_data.keys():
                    error_msg += f"'{k}', "
                error_msg += "]\n"
                error_msg += "To update values, use --force-regen option.\n\n"
                raise AssertionError(error_msg)
    
            tolerance_args = self._tolerances_dict.get(k, self._default_tolerance)
    
            self._check_data_types(k, obtained_column, expected_column)
            self._check_data_shapes(obtained_column, expected_column)
    
            data_type = obtained_column.values.dtype
            if data_type in [float, np.float, np.float16, np.float32, np.float64]:
                not_close_mask = ~np.isclose(
                    obtained_column.values,
                    expected_column.values,
                    equal_nan=True,
                    **tolerance_args,
                )
            else:
                not_close_mask = obtained_column.values != expected_column.values
    
            if np.any(not_close_mask):
                diff_ids = np.where(not_close_mask)[0]
                diff_obtained_data = obtained_column[diff_ids]
                diff_expected_data = expected_column[diff_ids]
                if data_type == np.bool:
                    diffs = np.logical_xor(obtained_column, expected_column)[diff_ids]
                else:
                    diffs = np.abs(obtained_column - expected_column)[diff_ids]
    
                comparison_table = pd.concat(
                    [diff_obtained_data, diff_expected_data, diffs], axis=1
                )
                comparison_table.columns = [f"obtained_{k}", f"expected_{k}", "diff"]
                comparison_tables_dict[k] = comparison_table
    
        if len(comparison_tables_dict) > 0:
            error_msg = "Values are not sufficiently close.\n"
            error_msg += "To update values, use --force-regen option.\n\n"
            for k, comparison_table in comparison_tables_dict.items():
                error_msg += f"{k}:\n{comparison_table}\n\n"
>           raise AssertionError(error_msg)
E           AssertionError: Values are not sufficiently close.
E           To update values, use --force-regen option.
E           
E           data:
E                     obtained_data        expected_data                 diff
E           0   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           1   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           2   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           3   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           4   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           5   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           6   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           7   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           8   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           9   1.19999999999999996  1.10000000000000009  0.09999999999999987
E           10  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           11  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           12  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           13  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           14  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           15  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           16  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           17  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           18  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           19  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           20  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           21  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           22  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           23  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           24  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           25  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           26  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           27  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           28  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           29  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           30  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           31  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           32  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           33  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           34  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           35  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           36  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           37  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           38  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           39  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           40  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           41  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           42  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           43  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           44  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           45  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           46  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           47  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           48  1.19999999999999996  1.10000000000000009  0.09999999999999987
E           49  1.19999999999999996  1.10000000000000009  0.09999999999999987

/usr/lib/python3/dist-packages/pytest_regressions/dataframe_regression.py:156: AssertionError

During handling of the above exception, another exception occurred:

dataframe_regression = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903e8c970>

    def test_1(dataframe_regression):
        contents = sys.testing_get_data()
>       dataframe_regression.check(pd.DataFrame.from_dict(contents))
E       Failed: Files differ and --force-regen set, regenerating file at:
E       - /tmp/pytest-of-becue/pytest-0/test_usage_workflow1/test_file/test_1.csv

test_file.py:5: Failed
=========================== 1 failed in 0.04 seconds ===========================
============================= test session starts ==============================
platform linux -- Python 3.9.1, pytest-4.6.11, py-1.9.0, pluggy-0.13.0
rootdir: /tmp/pytest-of-becue/pytest-0/test_usage_workflow1
plugins: regressions-2.1.1, datadir-1.3.1+ds
collected 1 item

test_file.py .                                                           [100%]

=========================== 1 passed in 0.01 seconds ===========================
PASSED
tests/test_dataframe_regression.py::test_common_cases PASSED
tests/test_dataframe_regression.py::test_different_data_types PASSED
tests/test_dataframe_regression.py::test_non_numeric_data[array0] FAILED

=================================== FAILURES ===================================
________________________ test_non_numeric_data[array0] _________________________

dataframe_regression = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903dc21c0>
array = [array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19])]
no_regen = None

    @pytest.mark.parametrize(
        "array", [[np.random.randint(10, 99, 6)] * 6, [Foo(i) for i in range(4)]]
    )
    def test_non_numeric_data(dataframe_regression, array, no_regen):
        data1 = pd.DataFrame()
        data1["data1"] = array
        with pytest.raises(
            AssertionError,
            match="Only numeric data is supported on dataframe_regression fixture.\n"
            "  Array with type '%s' was given." % (str(data1["data1"].dtype),),
        ):
>           dataframe_regression.check(data1)

tests/test_dataframe_regression.py:184: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903dc21c0>
data_frame =                       data1
0  [39, 98, 98, 92, 58, 19]
1  [39, 98, 98, 92, 58, 19]
2  [39, 98, 98, 92, 58, 19]
3  [39, 98, 98, 92, 58, 19]
4  [39, 98, 98, 92, 58, 19]
5  [39, 98, 98, 92, 58, 19]
basename = None, fullpath = None, tolerances = None, default_tolerance = None

    def check(
        self,
        data_frame,
        basename=None,
        fullpath=None,
        tolerances=None,
        default_tolerance=None,
    ):
        """
        Checks the given pandas dataframe against a previously recorded version, or generate a new file.
    
        Example::
    
            data_frame = pandas.DataFrame.from_dict({
                'U_gas': U[0][positions],
                'U_liquid': U[1][positions],
                'gas_vol_frac [-]': vol_frac[0][positions],
                'liquid_vol_frac [-]': vol_frac[1][positions],
                'P': Pa_to_bar(P)[positions],
            })
            dataframe_regression.check(data_frame)
    
        :param pandas.DataFrame data_frame: pandas DataFrame containing data for regression check.
    
        :param str basename: basename of the file to test/record. If not given the name
            of the test is used.
    
        :param str fullpath: complete path to use as a reference file. This option
            will ignore embed_data completely, being useful if a reference file is located
            in the session data dir for example.
    
        :param dict tolerances: dict mapping keys from the data_dict to tolerance settings for the
            given data. Example::
    
                tolerances={'U': Tolerance(atol=1e-2)}
    
        :param dict default_tolerance: dict mapping the default tolerance for the current check
            call. Example::
    
                default_tolerance=dict(atol=1e-7, rtol=1e-18).
    
            If not provided, will use defaults from numpy's ``isclose`` function.
    
        ``basename`` and ``fullpath`` are exclusive.
        """
        try:
            import pandas as pd
        except ModuleNotFoundError:
            raise ModuleNotFoundError(import_error_message("Pandas"))
    
        import functools
    
        __tracebackhide__ = True
    
        assert type(data_frame) is pd.DataFrame, (
            "Only pandas DataFrames are supported on on dataframe_regression fixture.\n"
            "Object with type '%s' was given." % (str(type(data_frame)),)
        )
    
        for column in data_frame.columns:
            array = data_frame[column]
            # Skip assertion if an array of strings
            if (array.dtype == "O") and (type(array[0]) is str):
                continue
            # Rejected: timedelta, datetime, objects, zero-terminated bytes, unicode strings and raw data
>           assert array.dtype not in ["m", "M", "O", "S", "a", "U", "V"], (
                "Only numeric data is supported on dataframe_regression fixture.\n"
                "Array with type '%s' was given.\n" % (str(array.dtype),)
            )
E           AssertionError: Only numeric data is supported on dataframe_regression fixture.
E           Array with type 'object' was given.

/usr/lib/python3/dist-packages/pytest_regressions/dataframe_regression.py:235: AssertionError

During handling of the above exception, another exception occurred:

dataframe_regression = <pytest_regressions.dataframe_regression.DataFrameRegressionFixture object at 0x7f4903dc21c0>
array = [array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19]), array([39, 98, 98, 92, 58, 19])]
no_regen = None

    @pytest.mark.parametrize(
        "array", [[np.random.randint(10, 99, 6)] * 6, [Foo(i) for i in range(4)]]
    )
    def test_non_numeric_data(dataframe_regression, array, no_regen):
        data1 = pd.DataFrame()
        data1["data1"] = array
        with pytest.raises(
            AssertionError,
            match="Only numeric data is supported on dataframe_regression fixture.\n"
            "  Array with type '%s' was given." % (str(data1["data1"].dtype),),
        ):
>           dataframe_regression.check(data1)
E           AssertionError: Pattern "Only numeric data is supported on dataframe_regression fixture.\n  Array with type 'object' was given." not found in "Only numeric data is supported on dataframe_regression fixture.\nArray with type 'object' was given.\n"

tests/test_dataframe_regression.py:184: AssertionError
===================== 1 failed, 10 passed in 1.39 seconds ======================
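
To illustrate the last failure: the assertion message produced by dataframe_regression.check() starts its second line at column 0, while the pattern that test_non_numeric_data passes to pytest.raises(match=...) indents that line by two spaces, so the pattern can never be found. A minimal sketch of the mismatch, with both strings copied from the output above (the re.search call mirrors what pytest.raises(match=...) does internally, i.e. a regex search against the exception text):

import re

# Message actually raised by the fixture (no indentation before "Array"):
raised = (
    "Only numeric data is supported on dataframe_regression fixture.\n"
    "Array with type 'object' was given.\n"
)

# Pattern the test expects, with two extra spaces after the newline:
pattern = (
    "Only numeric data is supported on dataframe_regression fixture.\n"
    "  Array with type 'object' was given."
)

print(re.search(pattern, raised))                        # None -> "Pattern ... not found"
print(re.search(pattern.replace("\n  ", "\n"), raised))  # matches once the extra spaces are dropped

So removing the two spaces from the match pattern in tests/test_dataframe_regression.py (or adding them to the assertion message in dataframe_regression.py) should make this test pass.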
