
Improving Test coverage

Open apurvabanka opened this issue 1 year ago • 16 comments

Name Stmts Miss Branch BrPart Cover
fastkml/__init__.py 27 0 0 0 100%
fastkml/about.py 1 0 0 0 100%
fastkml/atom.py 65 0 0 0 100%
fastkml/base.py 75 0 14 0 100%
fastkml/config.py 23 0 4 0 100%
fastkml/containers.py 60 0 4 0 100%
fastkml/data.py 110 4 2 0 96%
fastkml/enums.py 81 0 30 0 100%
fastkml/exceptions.py 5 0 0 0 100%
fastkml/features.py 141 3 4 0 98%
fastkml/geometry.py 268 1 68 0 99%
fastkml/gx.py 139 2 52 0 99%
fastkml/helpers.py 184 0 72 0 100%
fastkml/kml.py 69 0 10 1 99%
fastkml/kml_base.py 22 0 0 0 100%
fastkml/links.py 46 0 0 0 100%
fastkml/mixins.py 18 0 0 0 100%
fastkml/overlays.py 158 6 0 0 96%
fastkml/registry.py 45 0 8 2 96%
fastkml/styles.py 196 4 4 0 98%
fastkml/times.py 103 3 30 6 93%
fastkml/types.py 13 0 0 0 100%
fastkml/views.py 125 5 0 0 96%

@cleder Coverage Report

Related Issue #352

Summary by Sourcery

Enhance test coverage by adding new test cases across multiple modules, focusing on error handling and utility functions.

Tests:

  • Add new test cases to improve coverage for various modules, including containers, kml, geometries, gx, styles, overlays, and views.
  • Introduce tests for handling GeometryError exceptions in multiple geometry-related modules.
  • Add tests for helper functions in a new helper_test.py file to ensure proper functionality of utility methods.
  • Add tests for geometry functions in a new functions_test.py file to validate error handling and coordinate processing.
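The GeometryError tests described above follow a simple pattern: constructors that accept either a geometry object or raw KML coordinates must reject receiving both. A stdlib-only sketch of that pattern, using a simplified stand-in rather than fastkml's real `Point` class:

```python
# Stdlib-only sketch of the pattern behind the new GeometryError tests.
# GeometryError mirrors the name in fastkml.exceptions; this Point is a
# simplified stand-in, not fastkml's implementation.

class GeometryError(Exception):
    """Raised when conflicting geometry arguments are supplied."""

class Point:
    def __init__(self, geometry=None, kml_coordinates=None):
        # A point may be built from a geometry object OR raw KML
        # coordinates, never both.
        if geometry is not None and kml_coordinates is not None:
            raise GeometryError("geometry and kml_coordinates are mutually exclusive")
        self.geometry = geometry
        self.kml_coordinates = kml_coordinates

def raises_geometry_error(**kwargs):
    """Return True if constructing a Point with kwargs raises GeometryError."""
    try:
        Point(**kwargs)
    except GeometryError:
        return True
    return False
```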

Summary by CodeRabbit

  • New Features

    • Enhanced test coverage for the PhotoOverlay class, ensuring correct initialization and attribute validation.
    • Introduced new tests for KML container functionalities, including feature appending and style URL retrieval.
    • Added tests for error handling in boundary geometry classes.
    • Established a comprehensive testing framework for geometry-related functions, addressing invalid geometries.
    • Introduced a new test for the Track class to validate the etree_element method.
    • Improved assertions in the MultiTrack and Region tests for better validation of object states.
  • Bug Fixes

    • Corrected assertions in various tests to ensure accurate validation of attributes.
  • Documentation

    • Updated comments for clarity in test methods related to the Region class.

apurvabanka avatar Oct 12 '24 21:10 apurvabanka

Review changes with SemanticDiff.

Analyzed 13 of 13 files.

Overall, the semantic diff is 1% smaller than the GitHub diff.

Filename Status
✔️ tests/containers_test.py Analyzed
✔️ tests/gx_test.py Analyzed
✔️ tests/helper_test.py Analyzed
✔️ tests/kml_test.py Analyzed
✔️ tests/overlays_test.py Analyzed
✔️ tests/styles_test.py Analyzed
✔️ tests/views_test.py Analyzed
✔️ tests/geometries/boundaries_test.py Analyzed
✔️ tests/geometries/functions_test.py Analyzed
✔️ tests/geometries/linestring_test.py 0.56% smaller
✔️ tests/geometries/multigeometry_test.py 0.22% smaller
✔️ tests/geometries/point_test.py 0.23% smaller
✔️ tests/geometries/polygon_test.py Analyzed

semanticdiff-com[bot] avatar Oct 12 '24 21:10 semanticdiff-com[bot]

Reviewer's Guide by Sourcery

This pull request focuses on improving test coverage for the fastkml library. The changes include adding new test cases, expanding existing tests, and introducing error handling scenarios across various modules. The modifications aim to increase the overall test coverage and robustness of the library.

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Change Details Files
Added new test cases for container and document classes
  • Implemented tests for container creation and feature appending
  • Added a test for document container style URL retrieval
  • Introduced error handling test for appending a container to itself
tests/containers_test.py
Enhanced KML parsing and element creation tests
  • Added test for KML etree element creation
  • Implemented test for KML append method with error handling
  • Created a new test class for parsing KML with None namespace
tests/kml_test.py
Expanded geometry-related tests
  • Added tests for geometry error handling in boundaries, point, linestring, multigeometry, and polygon classes
  • Implemented new test cases for Track and MultiTrack classes
  • Enhanced existing tests with additional assertions
tests/geometries/boundaries_test.py
tests/geometries/point_test.py
tests/gx_test.py
tests/geometries/linestring_test.py
tests/geometries/multigeometry_test.py
tests/geometries/polygon_test.py
Added tests for styles, overlays, and views
  • Implemented test for StyleMap none case
  • Added assertions for PhotoOverlay and Region boolean checks
  • Enhanced existing tests with additional assertions
tests/styles_test.py
tests/overlays_test.py
tests/views_test.py
Created new test files for helpers and geometry functions
  • Implemented tests for various helper functions including attribute and subelement operations
  • Added tests for geometry-related functions such as error handling and coordinate processing
tests/helper_test.py
tests/geometries/functions_test.py

Possibly linked issues

  • #351: The PR addresses the issue by adding tests to improve coverage in various files.
  • #1: The PR improves test coverage, addressing the issue's need for more tests.


sourcery-ai[bot] avatar Oct 12 '24 21:10 sourcery-ai[bot]

Walkthrough

This pull request introduces new test methods and assertions in the fastkml library's test suite. It enhances the testing capabilities for KML container functionalities by adding tests for container creation, feature appending, and error handling in geometry classes. Additionally, existing tests in various classes are improved with new assertions to verify the correct initialization and behavior of various attributes, thereby improving overall test coverage.

Changes

File Change Summary
tests/containers_test.py Added three methods to TestStdLibrary: test_container_creation, test_container_feature_append, and test_document_container_get_style_url.
tests/geometries/boundaries_test.py Added two methods to TestBoundaries: test_outer_boundary_geometry_error and test_inner_boundary_geometry_error; corrected spelling in method names.
tests/geometries/functions_test.py Introduced new test suite TestGeometryFunctions with methods for handling invalid geometries and validating coordinate subelements.
tests/gx_test.py Added test_track_etree_element to validate etree_element method; updated assertions in test_from_multilinestring.
tests/overlays_test.py Enhanced test_create_photo_overlay_with_all_optional_parameters with assertions for view_volume and image_pyramid; updated test_read_photo_overlay for comprehensive attribute validation.
tests/views_test.py Added assertion in test_region_with_all_optional_parameters to check truthiness of region; corrected indentation in assertions.

Possibly related PRs

  • #356: The changes in this PR involve modifications to the InnerBoundaryIs and Polygon classes, which are related to the handling of geometries. The main PR enhances tests for KML container functionalities, which may include interactions with these geometry classes.
  • #360: This PR introduces verbosity control for XML serialization and modifies various geometry classes, including Polygon. The main PR's focus on testing KML container functionalities may intersect with the changes made in this PR regarding geometry handling and serialization.

Suggested labels

Review effort [1-5]: 4

Poem

🐇 Hopping through the code so bright,
With tests that shine and errors in sight.
New methods and checks, all put to the test,
Ensuring our KML functions are truly the best!
So let’s celebrate with a joyful cheer,
For robust coverage is finally here! 🎉


coderabbitai[bot] avatar Oct 12 '24 21:10 coderabbitai[bot]

Hello @apurvabanka! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 36:13: E124 closing bracket does not match visual indentation

Comment last updated at 2024-10-13 18:46:43 UTC

pep8speaks avatar Oct 12 '24 21:10 pep8speaks

PR Summary

  • Enhanced testing for container functionalities
    • New tests verifying container creation and feature appending.
  • Additions to boundary testing
    • Tests covering error handling for outer and inner geometry boundaries.
  • New test file for geometry functions
    • Tests exercising invalid-geometry error handling and coordinate subelement processing.
  • Geometry error testing across files
    • Tests added in multiple geometry-related files to validate handling of GeometryError.
  • Broader test coverage for KML parsing
    • New tests for creating and validating KML elements, including empty or incorrect references.
  • Expanded helper function testing
    • Tests for helper methods, including attribute and subelement checks.
  • Assertions for new attributes
    • Verification of newly added attributes on overlay and region objects.
  • Improved StyleMap testing
    • A scenario covering the case where the normal and highlight styles are absent.

what-the-diff[bot] avatar Oct 12 '24 21:10 what-the-diff[bot]

PR Reviewer Guide 🔍

(Review updated until commit https://github.com/cleder/fastkml/commit/589a00f4682ca14774c80cd0bc816ab517d11cd5)

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Possible Bug
The test_container_feature_append function is appending a feature to the container but not verifying if it was actually added. Consider adding an assertion to check if the feature was successfully appended.
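The missing check could look like the following; this `_Container` is a minimal stdlib-only stand-in for fastkml's `containers._Container`, not the real implementation:

```python
# Minimal stand-in illustrating the reviewer's point: after append(),
# assert the feature is actually present, and that self-append fails.
# Not fastkml's containers._Container, just a sketch of the checks.

class _Container:
    def __init__(self, name=None):
        self.name = name
        self.features = []

    def append(self, feature):
        if feature is self:
            raise ValueError("Cannot append self")
        self.features.append(feature)

container = _Container(name="container")
feature = _Container(name="feature")
container.append(feature)
assert feature in container.features  # the assertion the test was missing

# self-append must raise ValueError
try:
    container.append(container)
except ValueError:
    self_append_raised = True
else:
    self_append_raised = False
```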

Code Smell
The test_coordinates_subelement_getattr function is not actually testing the getattr functionality. It's calling the coordinates_subelement function with obj=None, which doesn't test the getattr behavior. Consider modifying this test to actually test the getattr functionality.

Incomplete Test
The test_attribute_enum_kwarg function is not fully testing the attribute_enum_kwarg function. It only tests the case where the attribute is None. Consider adding more test cases to cover different scenarios, such as when a valid enum value is provided.
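Covering both branches the reviewer asks for might look like the sketch below; `Color` and this `attribute_enum_kwarg` are simplified stand-ins for fastkml's helpers, not the real signatures:

```python
# Sketch of testing both cases: attribute absent vs. a valid enum value.
# attribute_enum_kwarg here is a simplified stand-in, not fastkml's helper.
from enum import Enum

class Color(Enum):
    NORMAL = "normal"
    HIGHLIGHT = "highlight"

def attribute_enum_kwarg(attrs, name, enum_cls):
    # Return an empty kwarg dict when the attribute is missing,
    # otherwise coerce the raw string into the enum.
    value = attrs.get(name)
    return {} if value is None else {name: enum_cls(value)}

missing = attribute_enum_kwarg({}, "key", Color)            # attribute absent
valid = attribute_enum_kwarg({"key": "normal"}, "key", Color)  # valid enum value
```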

qodo-code-review[bot] avatar Oct 12 '24 21:10 qodo-code-review[bot]

Persistent review updated to latest commit https://github.com/cleder/fastkml/commit/589a00f4682ca14774c80cd0bc816ab517d11cd5

Preparing review...

github-actions[bot] avatar Oct 12 '24 21:10 github-actions[bot]

PR Code Suggestions ✨

Latest suggestions up to 589a00f. Explore these optional code suggestions:

Category | Suggestion | Score
Enhancement
Use pytest.mark.parametrize to test multiple scenarios in a single test function

Use pytest.mark.parametrize to test handle_invalid_geometry_error with both True and
False strict values in a single test function, reducing code duplication.

tests/geometries/functions_test.py [14-22]

[email protected]("strict,expected", [
+    (True, pytest.raises(KMLParseError)),
+    (False, None)
+])
 @patch('fastkml.config.etree.tostring')
-def test_handle_invalid_geometry_error_true(self, mock_to_string) -> None:
+def test_handle_invalid_geometry_error(self, mock_to_string, strict, expected) -> None:
     mock_element = Mock()
-    with pytest.raises(KMLParseError):
-        handle_invalid_geometry_error(error=ValueError, element=mock_element, strict=True)
+    if expected is None:
+        assert handle_invalid_geometry_error(error=ValueError, element=mock_element, strict=strict) is None
+    else:
+        with expected:
+            handle_invalid_geometry_error(error=ValueError, element=mock_element, strict=strict)
 
-@patch('fastkml.config.etree.tostring')
-def test_handle_invalid_geometry_error_false(self, mock_to_string) -> None:
-    mock_element = Mock()
-    assert handle_invalid_geometry_error(error=ValueError, element=mock_element, strict=False) is None
-
Suggestion importance[1-10]: 8

Why: Using pytest.mark.parametrize to combine tests for different strict values into a single function is an effective way to reduce duplication and improve test clarity. This suggestion is well-aligned with best practices in testing.

Use parametrized tests to reduce duplication in similar test functions

Use pytest.mark.parametrize to test multiple helper functions with similar
structures in a single parametrized test, reducing code duplication and improving
test coverage.

tests/helper_test.py [57-87]

-def test_subelement_int_kwarg(self):
[email protected]("func", [
+    subelement_int_kwarg,
+    subelement_float_kwarg
+])
+def test_subelement_kwarg(self, func):
     node = Node()
     node.text = None
     element = Mock()
     element.find.return_value = node
-    res = subelement_int_kwarg(
-            element=element,
-            ns="ns",
-            name_spaces="name",
-            node_name="node",
-            kwarg="kwarg",
-            classes=None,
-            strict=False
-        )
+    res = func(
+        element=element,
+        ns="ns",
+        name_spaces="name",
+        node_name="node",
+        kwarg="kwarg",
+        classes=None,
+        strict=False
+    )
     assert res == {}
 
-def test_subelement_float_kwarg(self):
-    node = Node()
-    node.text = None
-    element = Mock()
-    element.find.return_value = node
-    res = subelement_float_kwarg(
-            element=element,
-            ns="ns",
-            name_spaces="name",
-            node_name="node",
-            kwarg="kwarg",
-            classes=None,
-            strict=False
-        )
-    assert res == {}
-
Suggestion importance[1-10]: 8

Why: The suggestion to use pytest.mark.parametrize for similar helper function tests is beneficial as it reduces code duplication and enhances test coverage. This approach is efficient and maintains clarity in testing.

Use pytest fixtures to reduce code duplication in tests

Use pytest.fixture to create a reusable _Container object for multiple tests,
reducing code duplication and improving test maintainability.

tests/containers_test.py [69-88]

-def test_container_creation(self)->None:
-    container = containers._Container(
+@pytest.fixture
+def container():
+    return containers._Container(
         ns="ns",
         id="id",
         target_id="target_id",
         name="name"
     )
+
+def test_container_creation(self, container)->None:
     assert container.ns == "ns"
     assert container.name == "name"
-def test_container_feature_append(self)->None:
-    container = containers._Container(
-        ns="ns",
-        id="id",
-        target_id="target_id",
-        name="name"
-    )
+
+def test_container_feature_append(self, container)->None:
     feature = features._Feature(name="new_feature")
     assert container.append(feature) is None
     with pytest.raises(ValueError):
         container.append(container)
Suggestion importance[1-10]: 7

Why: The suggestion to use pytest fixtures is valid as it reduces code duplication and improves maintainability by creating a reusable _Container object. This change enhances the test structure without affecting functionality.

Best practice
Improve exception testing by using context manager and checking exception message

Use a context manager with pytest.raises to ensure that the exception is raised in
the correct part of the code, providing better test isolation and error reporting.

tests/kml_test.py [189-195]

 def test_kml_append(self) -> None:
     empty_placemark = KMLFILEDIR / "emptyPlacemarkWithoutId.xml"
 
     doc = kml.KML.parse(empty_placemark)
 
-    with pytest.raises(ValueError):
+    with pytest.raises(ValueError) as excinfo:
         doc.append(doc)
+    assert "Cannot append" in str(excinfo.value)
Suggestion importance[1-10]: 7

Why: The suggestion to use a context manager with pytest.raises and check the exception message provides better test isolation and error reporting. It is a good practice to ensure the exception is raised as expected.




Previous suggestions

✅ Suggestions up to commit 589a00f
Category | Suggestion | Score
Enhancement
Refactor similar test functions using parametrization to reduce code duplication and improve maintainability

Use parametrized tests to reduce code duplication and improve test coverage for
similar functions.

tests/helper_test.py [57-87]

-def test_subelement_int_kwarg(self):
[email protected]("func", [subelement_int_kwarg, subelement_float_kwarg])
+def test_subelement_kwarg(self, func):
     node = Node()
     node.text = None
     element = Mock()
     element.find.return_value = node
-    res = subelement_int_kwarg(
-            element=element,
-            ns="ns",
-            name_spaces="name",
-            node_name="node",
-            kwarg="kwarg",
-            classes=None,
-            strict=False
-        )
+    res = func(
+        element=element,
+        ns="ns",
+        name_spaces="name",
+        node_name="node",
+        kwarg="kwarg",
+        classes=None,
+        strict=False
+    )
     assert res == {}
 
-def test_subelement_float_kwarg(self):
-    node = Node()
-    node.text = None
-    element = Mock()
-    element.find.return_value = node
-    res = subelement_float_kwarg(
-            element=element,
-            ns="ns",
-            name_spaces="name",
-            node_name="node",
-            kwarg="kwarg",
-            classes=None,
-            strict=False
-        )
-    assert res == {}
-
Suggestion importance[1-10]: 8

Why: This suggestion effectively reduces code duplication and enhances maintainability by using parameterized tests, which is a good practice for testing similar functionalities.

Improve the assertion for comparing etree elements by using a more specific comparison method

Use a more specific assertion to check the equality of the etree elements instead of
using the default equality check.

tests/kml_test.py [187]

-assert doc.etree_element() == config.etree.Element( f"{doc.ns}{doc.get_tag_name()}", nsmap={None: doc.ns[1:-1]},)
+from lxml.etree import tostring
+assert tostring(doc.etree_element()) == tostring(config.etree.Element(f"{doc.ns}{doc.get_tag_name()}", nsmap={None: doc.ns[1:-1]}))
Suggestion importance[1-10]: 6

Why: Using tostring for comparing etree elements is more precise than the default equality check, improving the accuracy of the test assertions.

Best practice
✅ Improve test readability and specificity by using a context manager with an expected error message
Suggestion Impact: The commit added an expected error message to the pytest.raises context manager, improving test readability and specificity.

code diff:

+        container.append(feature)
+        assert feature in container.features
+        with pytest.raises(ValueError, match="Cannot append self"):
             container.append(container)

Use a context manager for the pytest.raises() assertion to make the test more
explicit and easier to read.

tests/containers_test.py [87-88]

-with pytest.raises(ValueError):
+with pytest.raises(ValueError, match="Cannot append a container to itself"):
     container.append(container)
Suggestion importance[1-10]: 7

Why: The suggestion improves test readability and specificity by adding an expected error message to the pytest.raises context manager, which makes the test more explicit and informative.

Use more specific assertion methods for comparing with None to improve test readability and error reporting

Use assertIsNone() instead of comparing directly to None for better readability and
more informative error messages.

tests/styles_test.py [616-617]

-assert sm.normal is None
-assert sm.highlight is None
+self.assertIsNone(sm.normal)
+self.assertIsNone(sm.highlight)
Suggestion importance[1-10]: 5

Why: The suggestion to use assertIsNone improves readability and provides more informative error messages, although the improvement is minor in this context.


qodo-code-review[bot] avatar Oct 12 '24 21:10 qodo-code-review[bot]


Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.92%. Comparing base (04ec3d7) to head (200bcb5). Report is 19 commits behind head on 352-improve-test-coverage.

Additional details and impacted files
@@                      Coverage Diff                      @@
##           352-improve-test-coverage     #365      +/-   ##
=============================================================
+ Coverage                      98.12%   98.92%   +0.80%     
=============================================================
  Files                             50       52       +2     
  Lines                           4848     5027     +179     
  Branches                         148      148              
=============================================================
+ Hits                            4757     4973     +216     
+ Misses                            63       44      -19     
+ Partials                          28       10      -18     


codecov[bot] avatar Oct 13 '24 09:10 codecov[bot]

Hey, nice work 👍 I left some comments and 👍 or 👎 on the AI-generated reviews.

@cleder Thanks for the review. Just to clarify, does 👍 mean we are good with the changes or should I go ahead and make the change suggested by the AI bot?

apurvabanka avatar Oct 13 '24 17:10 apurvabanka

Sorry @apurvabanka, a :-1: on a bot review means the suggestion is not needed; a :+1: means it is a good suggestion.

cleder avatar Oct 13 '24 17:10 cleder

@cleder Are there any pending items for this PR?

apurvabanka avatar Oct 18 '24 08:10 apurvabanka

you did not see that mypy failed?

tests/kml_test.py:180: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/kml_test.py:200: error: Argument 1 to "append" of "KML" has incompatible type "KML"; expected "Folder | Document | Placemark | GroundOverlay | PhotoOverlay"  [arg-type]
tests/helper_test.py:24: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/helper_test.py:26: error: "node_text" does not return a value (it only ever returns None)  [func-returns-value]
tests/helper_test.py:27: error: Argument "obj" to "node_text" has incompatible type "None"; expected "_XMLObject"  [arg-type]
tests/helper_test.py:28: error: Argument "element" to "node_text" has incompatible type "None"; expected "Element"  [arg-type]
tests/helper_test.py:32: error: Argument "verbosity" to "node_text" has incompatible type "int"; expected "Verbosity"  [arg-type]
tests/helper_test.py:38: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/helper_test.py:40: error: "float_attribute" does not return a value (it only ever returns None)  [func-returns-value]
tests/helper_test.py:41: error: Argument "obj" to "float_attribute" has incompatible type "None"; expected "_XMLObject"  [arg-type]
tests/helper_test.py:42: error: Argument "element" to "float_attribute" has incompatible type "str"; expected "Element"  [arg-type]
tests/helper_test.py:46: error: Argument "verbosity" to "float_attribute" has incompatible type "int"; expected "Verbosity"  [arg-type]
tests/helper_test.py:47: error: Argument "default" to "float_attribute" has incompatible type "str"; expected "float | None"  [arg-type]
tests/helper_test.py:52: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/helper_test.py:54: error: "enum_attribute" does not return a value (it only ever returns None)  [func-returns-value]
tests/helper_test.py:55: error: Argument "obj" to "enum_attribute" has incompatible type "None"; expected "_XMLObject"  [arg-type]
tests/helper_test.py:56: error: Argument "element" to "enum_attribute" has incompatible type "str"; expected "Element"  [arg-type]
tests/helper_test.py:60: error: Argument "verbosity" to "enum_attribute" has incompatible type "int"; expected "Verbosity"  [arg-type]
tests/helper_test.py:61: error: Argument "default" to "enum_attribute" has incompatible type "str"; expected "Enum | None"  [arg-type]
tests/helper_test.py:65: error: Function is missing a return type annotation  [no-untyped-def]
tests/helper_test.py:65: note: Use "-> None" if function does not return a value
tests/helper_test.py:81: error: Function is missing a return type annotation  [no-untyped-def]
tests/helper_test.py:81: note: Use "-> None" if function does not return a value
tests/helper_test.py:98: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/helper_test.py:105: error: Argument "name_spaces" to "attribute_float_kwarg" has incompatible type "str"; expected "dict[str, str]"  [arg-type]
tests/helper_test.py:108: error: Argument "classes" to "attribute_float_kwarg" has incompatible type "None"; expected "tuple[type[_XMLObject] | type[Enum] | type[bool] | type[int] | type[str] | type[float], ...]"  [arg-type]
tests/helper_test.py:115: error: Incompatible types in assignment (expression has type "None", variable has type "str")  [assignment]
tests/helper_test.py:121: error: Argument "name_spaces" to "subelement_enum_kwarg" has incompatible type "str"; expected "dict[str, str]"  [arg-type]
tests/helper_test.py:124: error: Argument "classes" to "subelement_enum_kwarg" has incompatible type "list[type[Color]]"; expected "tuple[type[_XMLObject] | type[Enum] | type[bool] | type[int] | type[str] | type[float], ...]"  [arg-type]
tests/helper_test.py:135: error: Argument "name_spaces" to "attribute_enum_kwarg" has incompatible type "str"; expected "dict[str, str]"  [arg-type]
tests/helper_test.py:138: error: Argument "classes" to "attribute_enum_kwarg" has incompatible type "list[type[Color]]"; expected "tuple[type[_XMLObject] | type[Enum] | type[bool] | type[int] | type[str] | type[float], ...]"  [arg-type]
tests/containers_test.py:96: error: Argument "style_url" to "Document" has incompatible type "str"; expected "StyleUrl | None"  [arg-type]
tests/geometries/multigeometry_test.py:306: error: Argument "geometry" to "MultiGeometry" has incompatible type "Point"; expected "MultiPoint | MultiLineString | MultiPolygon | GeometryCollection | None"  [arg-type]
tests/geometries/multigeometry_test.py:306: error: Argument "kml_geometries" to "MultiGeometry" has incompatible type "Coordinates"; expected "Iterable[Point | LineString | Polygon | LinearRing | MultiGeometry] | None"  [arg-type]
tests/geometries/linestring_test.py:47: error: Argument "geometry" to "LineString" has incompatible type "Point"; expected "LineString | None"  [arg-type]
tests/geometries/functions_test.py:13: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/geometries/functions_test.py:17: error: Argument "error" to "handle_invalid_geometry_error" has incompatible type "type[ValueError]"; expected "Exception"  [arg-type]
tests/geometries/functions_test.py:23: error: Function is missing a type annotation for one or more arguments  [no-untyped-def]
tests/geometries/functions_test.py:25: error: "handle_invalid_geometry_error" does not return a value (it only ever returns None)  [func-returns-value]
tests/geometries/functions_test.py:26: error: Argument "error" to "handle_invalid_geometry_error" has incompatible type "type[ValueError]"; expected "Exception"  [arg-type]
tests/geometries/functions_test.py:47: error: Argument "node_name" to "coordinates_subelement" has incompatible type "None"; expected "str"  [arg-type]
tests/geometries/functions_test.py:63: error: "coordinates_subelement" does not return a value (it only ever returns None)  [func-returns-value]
tests/geometries/functions_test.py:64: error: Argument "obj" to "coordinates_subelement" has incompatible type "None"; expected "_XMLObject"  [arg-type]
tests/geometries/functions_test.py:66: error: Argument "node_name" to "coordinates_subelement" has incompatible type "None"; expected "str"  [arg-type]
tests/geometries/boundaries_test.py:78: error: Function is missing a type annotation  [no-untyped-def]
tests/geometries/boundaries_test.py:88: error: Function is missing a return type annotation  [no-untyped-def]
tests/geometries/boundaries_test.py:88: note: Use "-> None" if function does not return a value
tests/geometries/boundaries_test.py:92: error: Function is missing a return type annotation  [no-untyped-def]
tests/geometries/boundaries_test.py:92: note: Use "-> None" if function does not return a value
Found 45 errors in 7 files (checked 56 source files)

It took me a couple of hours to sort this all out: https://github.com/cleder/fastkml/pull/367/commits/18e6b5b8a5a55b9978a2ffaf6755453afce14887

cleder avatar Oct 19 '24 13:10 cleder

I also had to improve some test, so they were asserting something non-trivial, and removed tests that were not of value

cleder avatar Oct 19 '24 14:10 cleder

@cleder For my PR I just ran the test coverage with pytest. I might have missed some setup steps. How do you run mypy?

apurvabanka avatar Oct 19 '24 17:10 apurvabanka

See contributing.rst

mypy fastkml tests

or

pre-commit run --all

cleder avatar Oct 20 '24 08:10 cleder