Code coverage is a bad metric
100% test coverage… sounds safe, right? Wrong. Here’s why your coverage report might be lying to you:
Unfortunately, test coverage does not accurately reflect how well the code is tested. All it shows is the number of lines of code executed. It says nothing about whether the functionality under test is actually correct.
In fact, you can achieve 100% test coverage without actually testing anything. How? By invoking the methods in your tests without making any assertions.
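Here is a minimal sketch of what that looks like (Python, pytest-style; the function and its bug are hypothetical):

```python
# A hypothetical example: every line runs, nothing is verified.

def apply_discount(price, percent):
    # Bug: should be price * percent / 100
    return price - price * percent

def test_apply_discount_runs():
    # Calling the function executes all of its lines, so a coverage
    # tool reports 100% for apply_discount, yet with no assertion
    # the wrong result goes completely unnoticed.
    apply_discount(100, 10)
```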
Test coverage is a good negative metric, but a bad positive one. It's useful for highlighting gaps (e.g., 0% coverage signals untested code), but it can be misleading when used to claim thorough testing.
An improvement over line coverage is branch coverage. The branch report indicates the number of conditional branches traversed, not just the lines executed. This metric is harder to “trick” than line coverage.
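As an illustration, here is a hypothetical function where a single test executes every line, yet a branch-aware tool (for example, coverage.py in branch mode) would still flag the untested path:

```python
# A hypothetical function with a single if statement.

def describe(n):
    label = "number"
    if n < 0:
        label = "negative " + label
    return label

def test_describe_negative():
    # This one test executes every line, so line coverage is 100%,
    # but only the "condition true" path of the if is exercised.
    # The case where n >= 0 is never run, so branch coverage
    # stays below 100% and exposes the gap.
    assert describe(-1) == "negative number"
```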
However, even branch coverage does not guarantee the quality of your tests. The only way to do that is to write proper assertions that verify the relevant outcomes.
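Reworking the earlier hypothetical example with a real assertion shows the difference:

```python
# Same buggy implementation as before, now with an assertion.

def apply_discount(price, percent):
    # Bug: should be price * percent / 100
    return price - price * percent

def test_apply_discount_value():
    # Asserting on the expected outcome makes the test fail
    # (the function returns -900.0 instead of 90.0), exposing
    # the bug that the assertion-free test happily ignored.
    assert apply_discount(100, 10) == 90
```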
🚨 So, don’t be fooled by high test coverage: it does not guarantee in any way that the code is safe. Write tests that verify the needed outcomes and run them often, preferably in your CI/CD pipeline.
Focus on writing meaningful tests with solid assertions. High coverage is nice, but it’s quality testing that keeps your code safe.