
Extend test suite for pydata-sphinx-theme

Open choldgraf opened this issue 3 years ago • 4 comments

Now that #227 is merged we have a basic testing infrastructure set up for the theme. This will let us test the HTML outputs that are generated by the theme, using BeautifulSoup.

We should add more tests for the following things:

Make sure that...

  • sidebars behave as expected for nested toctree pages
  • the navbar / sidebar behaves as expected for things like external links in toctrees
  • buttons show up when activated (e.g. the GitHub button, the "edit this page" button, etc.)
  • the logo and similar elements behave as expected whether or not they are added
  • numbering of sidebar sections works with the :numbered: flag
  • special characters in the navbar / sidebar work (e.g. equations or formatted text)
  • the header has the structure we'd expect (e.g. JS and CSS imports, social media metadata, etc.)

For each of these, I think the easiest thing to do is:

  • Use BeautifulSoup to select the subset of a page we want to test
  • Use the file_regression fixture (from pytest-regressions) to compare its structure against a reference file committed to the tests folder (see the sketch after this list)
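A minimal sketch of what one such test could look like, assuming a helper fixture (here called `sphinx_build_factory`) that builds a small test site with the theme enabled. The fixture name, the `"base"` site name, and the CSS selector are illustrative assumptions; `BeautifulSoup` and the `file_regression` fixture from pytest-regressions are the real tools mentioned above:

```python
from bs4 import BeautifulSoup


def test_navbar_structure(sphinx_build_factory, file_regression):
    # Build a small test site with the theme enabled
    # (sphinx_build_factory is an assumed helper fixture, not a real API).
    sphinx_build = sphinx_build_factory("base").build()

    # Parse the generated HTML with BeautifulSoup.
    index_html = sphinx_build.outdir / "index.html"
    soup = BeautifulSoup(index_html.read_text(encoding="utf-8"), "html.parser")

    # Select just the subset of the page we want to test...
    navbar = soup.select_one("nav.navbar")

    # ...and compare its markup against a reference file committed
    # alongside the tests (pytest-regressions writes it on the first run).
    file_regression.check(navbar.prettify(), extension=".html")
```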

See the contributing documentation for some guidelines on how this works!

Note that I'm also opening this issue because I'll likely go on paternity leave sometime in the next several weeks, so I probably won't be able to do this personally, but I am happy to help advise and guide others!

choldgraf avatar Jul 20 '20 17:07 choldgraf

What you describe in this issue is the exact structure of the current tests, so my guess is that you implemented it without closing the issue.

Let me know if anything else needs to be taken care of before closing.

12rambau avatar Jan 03 '24 19:01 12rambau

I'm going to ~~hijack~~ update this issue to reflect the current state of testing, and what I see as needing improvement. Others please feel free to add to this list:

  1. systematic testing of components. I think this could be something like mocking each component as a short, unique text string (or a little square of color?) and testing for the presence of that element in the rendered doc (as well as correct placement, breakpoint behavior, etc.). TBD how thorough we need to be here; testing all possibilities may be prohibitively time-consuming and hard to keep current. But, for example, the "fix" in #1583 for breadcrumb truncation is stalled because it will (probably!) only work if the breadcrumb is in the article header (not in the footer, content footer, sidebar, etc.). I say "probably" because it isn't tested yet; if we had thorough component tests in place, it would be easy to know for sure.
  2. systematic testing of config values. Several of our config values interact in complicated ways: e.g. `sidebar_includehidden`, `navigation_depth`, `collapse_navigation`, and `show_nav_level` (and `navigation_startdepth`, if #1241 ever gets sorted out). Bug reports like #1551 show that we're not testing those interactions well enough (see the sketch after this list for one way such tests could be parametrized).
  3. accessibility testing. Work here is already underway (cf. #1428); mentioning it here just to note that it's being tracked and worked on.
  4. version switcher testing. I don't think we can test all possible cases, but we should at least try to test more of them, so that changes to the version switcher don't cause regressions. We can look to the various bug reports / feature requests from the community for inspiration on which configurations to test.
  5. clean up the doc build warnings. Every time I do a local doc build, I panic a bit when all those "expected" warnings fly by. It would be nice/helpful if the default state of our doc build showed no warnings, so that we didn't have to run a separate test to know whether the build was "clean".
  6. eliminate the webpack warnings. Similar motivation as (5): fewer warnings during "normal" operation make real problems easier to spot.
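For item 2, here is a hedged sketch of how interacting config values could be covered with a parametrized regression test. The theme option names are real, but the `sphinx_build_factory` fixture, the dotted `confoverrides` keys, and the sidebar selector are assumptions for illustration, not the theme's actual test API:

```python
import pytest
from bs4 import BeautifulSoup


@pytest.mark.parametrize("collapse_navigation", [True, False])
@pytest.mark.parametrize("show_nav_level", [0, 1, 2])
def test_sidebar_nav_config(
    sphinx_build_factory, file_regression, show_nav_level, collapse_navigation
):
    # Override interacting theme options for this build
    # (dotted keys for dict-valued config are an assumption here).
    confoverrides = {
        "html_theme_options.show_nav_level": show_nav_level,
        "html_theme_options.collapse_navigation": collapse_navigation,
    }
    sphinx_build = sphinx_build_factory("base", confoverrides=confoverrides).build()

    # Extract only the sidebar navigation from the built page.
    soup = BeautifulSoup(
        (sphinx_build.outdir / "index.html").read_text(encoding="utf-8"),
        "html.parser",
    )
    sidebar = soup.select_one("nav.bd-docs-nav")

    # One reference file per parameter combination keeps a regression in
    # any specific combination easy to spot.
    file_regression.check(
        sidebar.prettify(),
        basename=f"sidebar_navlevel{show_nav_level}_collapse{collapse_navigation}",
        extension=".html",
    )
```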

drammock avatar Jan 16 '24 17:01 drammock