simplecov
Simplecov does not cover sub-process entry point
I'm attempting to measure integration coverage for Ruby services in a multi-layer product. To this end I've initialized coverage at the bottom of /usr/share/rubygems/rubygems.rb using:
require 'simplecov'
SimpleCov.start do
  filters.clear
  root Dir.pwd
  command_name 'subprocess coverage'
  coverage_dir '/coverage'
end
Then I created two files:

/application/a.rb:

puts "at #{__FILE__}"
require_relative 'b.rb'

/application/b.rb:

puts "at #{__FILE__}"
Now when I run ruby /application/a.rb, the generated /coverage/.resultset.json file is:
{
  "subprocess coverage": {
    "coverage": {
      "/application/a.rb": [
        null,
        null
      ],
      "/application/b.rb": [
        1
      ]
    },
    "timestamp": 1500283332
  }
}
As you can see, the entry-point file is not being covered (verified with a larger require chain).
System Info
# uname -a
Linux 786eab7389d4 4.6.7-200.fc23.x86_64 SMP Wed Aug 17 14:24:53 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# gem which simplecov
/usr/local/share/gems/gems/simplecov-0.14.1/lib/simplecov.rb
# ruby -e "puts RUBY_DESCRIPTION"
ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]
:wave:
huh, I'm not sure about entry-level coverage actually - that's interesting. Wouldn't a worthwhile workaround be to have a top_level.rb that then does require_relative "a"?
I think the stdlib coverage mechanism hooks mostly into require to know what to do, but I'd have to investigate.
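For context, a minimal sketch of the stdlib Coverage mechanism that SimpleCov builds on (the file name lib.rb is a stand-in; only files loaded via require/load after Coverage.start are instrumented, which is why an already-compiled entry script can come up empty):

```ruby
require "coverage"
require "tmpdir"

dir = Dir.mktmpdir
lib = File.join(dir, "lib.rb")
File.write(lib, "x = 2 + 2\n")

Coverage.start
load lib                  # loaded after Coverage.start, so it is tracked
result = Coverage.result  # stops measurement, returns { path => [line hits] }

# Only lib.rb shows up; this very script was compiled before
# Coverage.start, so it is absent -- the same effect the issue
# reports for the entry-point file a.rb.
covered = result.find { |path, _| path.end_with?("lib.rb") }
puts covered[1].inspect   # => [1]
```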
@PragTob I am under the assumption that I cannot modify the code, because the products under test are large-scale (>10M LOC) and maintained by several independent teams.
Just got back to pushing this issue; any ideas where we could start this investigation?
:wave:
Imo there's not much we can do, as this is how the stdlib coverage that we rely on behaves. Even if the code is maintained by lots of people, just creating a file a_for_simplecov.rb that then just does require "a" and running that one instead should be possible imo :)
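A self-contained sketch of that workaround, using the stdlib Coverage API directly to keep it runnable (the a.rb contents and the wrapper name are stand-ins for the real application files):

```ruby
require "coverage"
require "tmpdir"

dir = Dir.mktmpdir
# Stand-ins for the real files: a.rb is the original entry point,
# a_for_simplecov.rb only re-enters it through require_relative.
File.write(File.join(dir, "a.rb"), "puts 40 + 2\n")
File.write(File.join(dir, "a_for_simplecov.rb"), "require_relative 'a'\n")

Coverage.start
load File.join(dir, "a_for_simplecov.rb")
result = Coverage.result

# a.rb is now tracked because it was reached via require_relative
# instead of being the script Ruby was launched with.
a_cov = result.find { |path, _| path.end_with?("/a.rb") }
puts a_cov[1].inspect   # => [1]
```

The wrapper adds no logic of its own, so its coverage is irrelevant; it exists only to push the real entry point through the require hook.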
:wave:
Hi, since so much time has passed I'm not sure if you're still dealing with this issue. If you are, an example repo would be cool :D
If not, running all the tests separately and merging their results (with hopefully forthcoming better support) might be a good idea.
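If the runs do end up split per process, a hedged sketch of what each process's setup could look like, based on the config from the issue (the unique command_name per run is an assumption; SimpleCov merges resultsets that land in the same coverage_dir):

```ruby
# Sketch only -- assumes the simplecov gem is available in each process.
require 'simplecov'
SimpleCov.start do
  filters.clear
  root Dir.pwd
  command_name "subprocess-#{Process.pid}"  # unique name per run
  coverage_dir '/coverage'                  # shared dir; results merge here
end
```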