helm-unittest
Feature: Implement Test Coverage
Summary
Test coverage would be an excellent addition to this library. Automatically ensuring that we test all parts of our Helm charts and having a threshold for contributions would be a great value add.
Implementation Challenges
The challenge with test coverage for this tool is, of course, that Helm is merely a templating language, so we can't just run the Go test framework with the -cover flag. Most test coverage tools instrument the source code in order to detect which statements, branches, and functions were covered.
Let me know what you think -- I'd guess you've probably thought about this before! I can potentially make some time to implement this depending on the complexity of the approach as well.
Naive Implementations
I can think of two very naive, incomplete implementations of statement coverage:
- Comparing the number of lines of rendered YAML to the number of lines of the template, possibly ignoring any lines that are purely template and no YAML.
- Performing a diff of rendered YAML to the template, possibly ignoring partial line matches (i.e. where only part of the line is changed in the template, and not the whole thing)
Branch coverage and function coverage would be more difficult.
Potential Simpler Implementations?
If there's a way to have Go instrument templates and get coverage data natively, that would be ideal and much simpler; however, I don't know of such an approach off the top of my head, even after a good bit of searching.
Misc
Also wanted to say big thanks for keeping this library alive with the fork! I had looked at using helm-unittest years ago and put it off because it was unmaintained -- glad to see a great tool kept in maintenance, and I hope I can help!
Was thinking about this a bit and if it's possible to hook into Helm's rendering logic line-by-line, that would effectively act as instrumentation, which could then be counted and totaled.
Another approach, which would actually be very similar to what existing language coverage tools do, is to do a source-to-source transform and add Helm helper functions as wrappers. Those functions would purely do counting, incrementing internal helper variables. Those variables could then be output and calculated at the end.
As a very simple pseudo-code example: __coverage.statement.increment
could be one type of such helper.
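To make the wrapper idea concrete, here is a minimal sketch (the names `__coverStmt` and `renderWithCoverage` are hypothetical, not part of helm-unittest) using `text/template`'s `FuncMap`: the helper only increments a counter and renders nothing, so the YAML output is unchanged.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// renderWithCoverage executes a template that a (hypothetical)
// source-to-source transform has instrumented with `__coverStmt` calls.
// It returns the hit counts per statement id alongside the output.
func renderWithCoverage(src string, data any) (map[int]int, string) {
	hits := map[int]int{}
	funcs := template.FuncMap{
		// __coverStmt records that statement `id` was reached and emits nothing.
		"__coverStmt": func(id int) string {
			hits[id]++
			return ""
		},
	}
	t := template.Must(template.New("t").Funcs(funcs).Parse(src))
	var sb strings.Builder
	if err := t.Execute(&sb, data); err != nil {
		panic(err)
	}
	return hits, sb.String()
}

func main() {
	// Instrumented template: the transform inserted the {{ __coverStmt N }} calls.
	src := `{{ __coverStmt 0 }}name: demo
{{- if .Enabled }}
{{ __coverStmt 1 }}enabled: "true"
{{- end }}`
	hits, _ := renderWithCoverage(src, map[string]any{"Enabled": false})
	fmt.Printf("statements hit: %d of 2\n", len(hits))
}
```

The hard part this sketch skips is the transform itself: deciding where to insert the calls, and doing it without disturbing whitespace-sensitive YAML.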
A similar approach would be to render "control" comments and count how many times those appeared compared to how many is expected (e.g. once per branch). Those would similarly have to be inserted into the existing template as a source-to-source transform.
Otherwise, the naive approach I listed above is basically "reverse-engineering" of sorts, trying to approximate coverage by comparing input/output instead of via instrumentation. There's likely still a way of implementing it naively, however, with different trade-offs.
I would strongly appreciate the addition of code coverage support. It would be invaluable for anyone writing unit tests, providing clear insight into which parts of the chart are covered by tests and which are not.
TL;DR
I'm proposing the addition of code coverage support to benefit unit testing. The main challenge lies in the methodology, as Go Templates were not designed for this. However, a solution involving the generation of an Abstract Syntax Tree (AST) seems promising.
Context: Helm-Unittest & Go Templates
I've been considering the implementation of unit tests using helm-unittest and have pondered this idea multiple times. The main challenge lies in the methodology. It's clear that Go Templates were not originally designed to support this feature, which calls for the creation of a new mechanism from scratch.
Proposed Solution: Leveraging AST
While I'm not a Go expert, an intriguing solution has occurred to me. Go Templates share similarities with Go itself, borrowing certain constructs. When processed via the text/template package, an Abstract Syntax Tree (AST) is generated.
The key question for Go experts is:
Is it feasible to generate Go code from this AST, produce coverage conventionally, and then transform this output into a final coverage format for Go Templates?
Possible Limitations
One caveat: I initially thought the AST machinery in text/template was internal, but the text/template/parse package is actually public and documented, and a parsed template exposes its Tree. So it should be possible to build on the existing parser rather than writing a custom one from scratch.
I'd love to hear your thoughts on this proposal. 🙂
We have implemented something like this in our company's fork of this repository; our implementation is very close to @agilgur5's idea. But there's a limitation when it comes to .tpl
files. Maybe we can start off with a simpler coverage feature and extend it from there?
Happy to work on this @quintush.
+1 for the feature request.
@laureanray This would be a great addition to our CI. Do you have a PR that can be shared? I am happy to test and provide feedback.
Hello, it seems that the work of @laureanray is here: https://github.com/helm-unittest/helm-unittest/compare/main...laureanray:helm-unittest:feat/coverage-report. Would it be possible to get it? I also need a coverage report.
our implementation is very close to @agilgur5's idea.
it seems that the work of @laureanray is here: main...laureanray:helm-unittest:feat/coverage-report
really interesting to see -- the instrumenter seems to be a source-to-source transform specifically, like I mentioned in my second comment, which is a more conventional & resilient approach than the naive one I first mentioned. surprised to see the implementation as concise as it is too 👀
great work on that first take @laureanray!
+1