hercules
Measure code complexity & use it to estimate `score` per developer
Hi, two points:
- measure code complexity per file/class/function & build a distribution of complexities -> the long tail will show files that could be considered for refactoring.
- average complexity per developer based on their history of contributions -> it will show who tends to write complex code and who does not.
@vmarkovtsev Any update on this analysis? I am interested in contributing, but I have never worked on Go. If you can guide me, I can put some time into figuring this out.
I haven't even started this. You are welcome to contribute! Don't worry, the killer feature of Go is that you can start writing functional idiomatic code in 1 day or even less.
Here is what I expect:
- We need to decide which code parsing library to use. Babelfish used to be a great option, but it is no longer maintained. The neighbor analysis that collects imports/includes uses Tree-sitter - that could be a fair replacement. Otherwise, maybe there are ready-to-use complexity measurement libraries out there. The choice here will define the details of the following items.
- `internal/plumbing/complexity.go` - depends on the line diff. We find which functions were modified, which places inside those functions changed, and try to estimate the complexity of those places. There is a ready analysis that already tells you which functions are new or changed based on Babelfish/UAST, but I am not sure we should use it now.
- `leaves/complexity_score.go` - this one depends on the new complexity calculator and records the complexity scores per developer. It depends on the Identity analysis.