missing precompiled x86 Windows binary
Please check in a precompiled x86 Windows binary. I can't manage to compile Clang and ccsm myself, and a prebuilt binary would let everyone try it out easily. Thank you! Looks very promising!
I've got a couple more changes I'd like to incorporate over the next day or so, then I'll upload a pre-compiled binary snapshot. I'd be interested in any feedback you've got when you try it out!
Just added. Please let me know if you run into any problems.
Hm, this seems to be a 64-bit exe; try compiling with -m32 for compatibility. I'll need a few days to get hold of a 64-bit PC.
It should be a 32-bit binary, but you may need the Visual C++ Runtime installed (I had real problems trying to build Clang with MinGW's gcc when I last tried, hence the use of Visual C++)
I couldn't find any "get started" info in the readme, so I tried a simple empty main() function.
I ran it like this:

C:\Users\ext\Desktop\ccsm>ccsm.exe main.c
LLVM ERROR: Could not auto-detect compilation database for file "main.c"
No compilation database found in C:\Users\ext\Desktop\ccsm or any parent directory
json-compilation-database: Error while opening JSON database: no such file or directory
I have clang.exe and gcc.exe in PATH: clang version 3.6.0, gcc (tdm64-1) 4.9.2.
Do I need to pass compiler switches like this? ccsm.exe -Wall -DHELLO=1 main1.c main2.c
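For reference, ccsm is built on Clang's LibTooling, which expects a compilation database. LibTooling-based tools generally accept compiler flags after a double dash (a "fixed" compilation database), e.g. ccsm.exe main.c -- -Wall -DHELLO=1. Alternatively, a minimal compile_commands.json placed next to the sources should satisfy the auto-detection. A sketch of generating one (the clang flags in it are purely illustrative):

```python
# Generate a minimal compile_commands.json so Clang-tooling-based tools
# (such as ccsm) can find a compilation database. The compiler command
# shown here is only an example; adjust flags to match your build.
import json
import os

entries = [{
    "directory": os.getcwd(),                       # where the command runs
    "command": "clang -Wall -DHELLO=1 -c main.c",   # illustrative flags
    "file": "main.c",                               # source file described
}]

with open("compile_commands.json", "w") as f:
    json.dump(entries, f, indent=2)
```

With that file in place, ccsm.exe main.c should no longer report the json-compilation-database error, though I haven't verified this against this particular ccsm build.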
It would be nice if -help included credits, a one-line program description, a website, and a contact email.
Cheers,
Works great, here is my experience with it:
It's hard to do metrics on one C file at a time: with "ccsm.exe -disable-global -disable-function -disable-method main.c --" I would need to exclude each .h header file with -exclude-file="", which is impractical in a big project. Better solutions: a new switch, e.g. -do-file-only=main.c, or -exclude-file=.h,.hpp, or only processing the files listed in the call: main.c --
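Until a -do-file-only style switch exists, one stopgap would be to post-filter the report outside ccsm. A hypothetical sketch, assuming a "file: metric: value" line format (which is an illustration, not ccsm's actual output):

```python
# Keep only report lines that belong to a given source file.
# Naive split on the first colon: this would mis-handle Windows paths
# with drive letters (C:\...), so treat it as a sketch only.
def filter_report(lines, wanted="main.c"):
    return [ln for ln in lines if ln.split(":", 1)[0].strip() == wanted]

report = [
    "main.c: Comment density: 0.5",
    "dummy.h: 'volatile' keyword count (raw source): 1",
    "main.c: HIS_CALLING: 15",
]
for line in filter_report(report):
    print(line)   # only the main.c lines survive
```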
The trailing -- is missing from the end of the usage line: USAGE: ccsm.exe [options]
Because there are tons of metrics, it might be better to use a config file instead of overloading the command line with -output-metrics="" (the maximum command-line length is around 4000 characters): -config-file=../xxx.cfg
To avoid external post-processing, it would be great to have a max threshold value for each metric in the config file, e.g. v(G)_max = 11, and in a later iteration min values too, e.g. a minimum comment density. If a threshold is exceeded, only that metric should be printed, one metric per line, e.g.:
main.c: Comment density: 0.5 violated (min: 0.1, max: 0.4)
main.c:func1(): HIS_CALLING: 15 violated (min: 1, max: 10)
dummy.h: 'volatile' keyword count (raw source): 1 violated (min: 0, max: 0)
Much like compiler warning output, this would be far simpler to integrate into a toolchain. Please note the filenames at the beginning of each line (simpler to search, and maybe to display on a CI server).
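The min/max threshold idea above could be prototyped as a small post-processing script over ccsm's output. A minimal sketch, where the limits table, metric names, and output format are illustrative assumptions rather than anything ccsm provides:

```python
# Sketch of a threshold checker producing compiler-style violation
# lines. The LIMITS config and metric names are hypothetical examples.
LIMITS = {
    "Comment density": (0.1, 0.4),
    "HIS_CALLING": (1, 10),
}

def check(location, metric, value):
    """Return a 'file: metric: value violated (min, max)' line,
    or None when the value is within limits (or the metric has none)."""
    lo, hi = LIMITS.get(metric, (float("-inf"), float("inf")))
    if not (lo <= value <= hi):
        return f"{location}: {metric}: {value} violated (min: {lo}, max: {hi})"
    return None

print(check("main.c", "Comment density", 0.5))
# -> main.c: Comment density: 0.5 violated (min: 0.1, max: 0.4)
print(check("main.c:func1()", "HIS_CALLING", 15))
# -> main.c:func1(): HIS_CALLING: 15 violated (min: 1, max: 10)
```

Keeping the check outside the tool, as sketched here, matches one of the options the maintainer mentions for #56 below; the trade-off is that build integration then needs two steps instead of one.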
One more useful metric would be the coupling of a .c file, i.e. the collection of all external functions and global variables it references (?).
Missing: an -output-file="" switch to generate a report would be nice. One weird thing: if I do "... > report.txt", every second line is an empty line.
Are Halstead metrics only reported for functions?
Function-specific metrics should come after the generic metrics in the report, so they are easier to find (not mixed in with the generic ones).
I hope that non-selected metrics are not calculated in the background, only the selected ones. :)
Great work!
Thanks for the feedback - it's really useful to have someone else's perspective.
- Getting metrics for a specific file (or even function). Good point, that should be added.
- Missing dash-dash in the usage. Agreed, I'll need to look into that, as it's managed by LLVM's support library
- Allow configuration file rather than specifying everything on a command line. Agreed, it would be a good feature to add - see #86.
- Supporting limits/warnings. This is something I plan to add in some form - see #56 - my original thinking was to make it a script that would process the output of ccsm rather than building it into ccsm itself, but I'm not decided on that yet.
- Coupling metrics. These are in the backlog - see #23
- Direct output to file. No reason not to add this. Added #73
- Halstead metrics. Halstead metrics are only currently output per function simply because there's some extra work needed to calculate them at a file level which I've not got round to yet. Should be done before #60 is closed.
- Grouping function-specific metrics after generic metrics. Makes sense, yes.
Non-selected metrics shouldn't be calculated directly (though you still get some overhead, as the processing of the C is the same regardless of what metrics are enabled). The metric calculation phase could be more efficient by intelligently caching values, but that's a low priority improvement at the moment, as I think the main overhead is in the parsing.
If you use ccsm any further and have more feedback, please let me have it. In due course I'll open some new issues to cover some of the above items.
Yeah, I've integrated a lot of software into toolchains and have tried out most of the free metric tools. My suggestion is to set priorities between tasks; it is really easy to get lost in details for weeks.
On Thu, Jan 7, 2016 at 11:32 PM, bright-tools [email protected] wrote:
For sure. Most recently I've been spending time writing test cases for the tool to try and ensure that the metrics generated are robust (and to provide a set of regression tests for when updates are made).