6/30/2009

Coverage with lcov, and so what?


A while back we ran an experimental line coverage analysis on our acceptance test suite. The result was 68% for the code on the main control board. I got the number from the nightly build, mentioned it in the Daily Scrum, and prompted: "So what do we think about it, should we track it?" Everyone on the team had a blank stare until finally one team member came forward: "Yeah, that's a good question. So, what?"

Coverage is information. It is just that, an additional piece of information, not by any means the final truth. I don't remember who taught me this, but:


"If you take all your assert's away you still have the same coverage. You just ain't testing anything at all."


This has been explained here and of course in Brian Marick's classic How to Misuse Code Coverage (pdf).

Well, maybe good coverage cannot say anything about the quality of your tests, but poor coverage can certainly say a thing or two of the opposite nature. If your coverage is 20%, we can say quite confidently that you ain't there yet.

I started with acceptance test line coverage, but the rest of this post is about unit test line coverage. Some embedded teams use gcov, and I have heard of people fiddling with the raw data to generate fancier reports. Being lazy as I am, I didn't do that myself. I did what I'm good at and searched for what others have already done. I found lcov, a Perl tool that collects gcov data and formats it into HTML reports.

We run lcov under Cygwin. You can get lcov for example from here, extract it, and execute "make install". Next, compile and link your unit tests with gcc using the flags "-fprofile-arcs" and "-ftest-coverage". We have a special build target for instrumenting the unit test executables with debug and coverage information, so that we don't unnecessarily slow down the bulk of the builds. Then execute your full suite just like you normally would.
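To make that concrete, the instrumented build and run might look roughly like the lines below. This is only a sketch, not our actual build target; the file names and the extra -g -O0 flags are my assumptions.

# Compile with coverage instrumentation; each object gets a matching .gcno notes file.
gcc -g -O0 -fprofile-arcs -ftest-coverage -c src/utils.c -o bin/utils.o
gcc -g -O0 -fprofile-arcs -ftest-coverage -c test/utils_tests.c -o bin/utils_tests.o

# Link with the same flag so the gcov runtime gets pulled in.
gcc -fprofile-arcs -o bin/utils_tests bin/utils.o bin/utils_tests.o

# Running the instrumented tests writes the .gcda data files next to the object files.
./bin/utils_tests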

In our case all the .obj files from the test build are in the ./bin directory, and that's where the coverage data files end up as well. Our unit test script moves them into a ./bin/code_coverage directory, away from the .obj files (a sketch of that step follows the analysis script below), and we want the final HTML report to land in ./build/test/code_coverage. Now we have everything necessary to write a shell script that does the actual analysis and reporting of the coverage data:

# Resolve the directory this script was started from.
path=`dirname $0`
# Capture the gcov data into an lcov tracefile (-b gives the base directory for source paths).
lcov --directory $path/bin/code_coverage -b $path --capture --output-file $path/ic.info
# Turn the tracefile into a browsable HTML report.
genhtml -o $path/build/test/code_coverage $path/ic.info
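
For completeness, the move step mentioned above is nothing fancy. Something like this, run by the unit test script before the capture above; the *.gcda/*.gcno patterns are an assumption based on gcc's default naming:

# Move the coverage data files away from the .obj files before running lcov.
mkdir -p $path/bin/code_coverage
mv $path/bin/*.gcda $path/bin/*.gcno $path/bin/code_coverage/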

Voilà, your disappointing(?) results are ready to be browsed, like so:

[screenshot: the lcov HTML coverage report, all files showing green]

What the heck, it's all green, while you only have tests for simple utils files? That is the limitation of this approach - you only get coverage information for the files that are actually exercised by your test suite; files with no tests at all simply don't appear in the report. With a huge legacy codebase this yields far too promising a picture early on. Again, you need to think for yourself.
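
One way around that, if you want untested files to show up as 0% instead of being missing, is lcov's baseline capture. I haven't wired this into our build, so take it as a sketch that assumes your lcov version supports the --initial and -a options:

# 1) Right after the build, capture a baseline from the .gcno files:
#    every instrumented file shows up at zero coverage.
lcov --directory $path/bin -b $path --capture --initial --output-file $path/base.info

# 2) Run the test suite and capture the real results as before.
lcov --directory $path/bin/code_coverage -b $path --capture --output-file $path/test.info

# 3) Merge the two tracefiles; files with no tests now appear as 0%.
lcov -a $path/base.info -a $path/test.info --output-file $path/total.info
genhtml -o $path/build/test/code_coverage $path/total.info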

Experiment with coverage in your team; I think it's worth every penny. But even when you start closing in on 100%, remember to keep analyzing: "so what?"