
Rethinking Code Quality Metrics

Programming languages, methodologies, and tools evolve and offer new insights. Knowing how to evaluate our code in more than one way gives us a mental framework to conduct code reviews methodically and to make better decisions: this is where code quality metrics can help.

Software engineering manuals tell us about the importance of code metrics as a way to keep quality under control. Although well advised, we usually understand their importance only later, once we have completely messed up our codebase. Despite academic efforts, the software industry tends to neglect metrics of any sort: the tools are hard to find, they are mostly language dependent, and, last but not least, metrics are often incomplete and difficult to interpret. As a result, code quality metrics and monitoring tools are not widely adopted.

Nor are they the most trending topic of the moment; interest seems to have reached an all-time low: few people really care about metrics!


Code metrics are usually offered as a means to monitor software quality, with the purpose of evaluating technical debt and reducing the costs of new feature development, bugs, and the resulting maintenance. Although the importance of the matter is not in question, code-related metrics may not always be the best fit. In those cases, agile metrics are a concrete and widely adopted example of metrics that focus not on the code, but on the process of delivering a product.

Are code quality metrics useless, then? In my opinion, they are surprisingly effective when evaluating different design solutions, even without an actual computation, just a reasonable estimation. This becomes especially true in library design, where we not only have to think about our current code but also need to forecast how code built on top of ours will evolve and be maintained. Following good programming principles, like the OOP ones, is always the way to go; however, sometimes it is hard to spot their violation. In such cases, taking a step back from code abstractions and using metrics can help.

Classic metrics are easy to understand and compute, but they are often incomplete. Moreover, they fail to provide insight into how the codebase has evolved and will evolve. It is therefore convenient to take less usual perspectives. If we stick to reasoning only with classic metrics, we might miss considerations about the development process over its entire lifetime. Using git repositories and thinking about how we interact with files can give us insight into the hidden, human part of coding. Admittedly, these newer metrics cannot be computed numerically without historical data; nevertheless, I have found them crucial when deciding whether to prefer one solution over another.

With this idea in mind, I want to share five of my favorite essential metrics, which I recurrently use as a personal checklist when evaluating my own code and reviewing others'.

1. Cyclomatic complexity

Cyclomatic complexity measures the number of independent paths through the control flow of a program. Roughly speaking, it measures how much you have messed up a piece of code. To get an estimate, just count the number of if, while, for, or switch case statements. Since it is a procedural metric, it is harder to apply properly to functional programming, where control statements disappear in favor of recursion and higher-level control abstractions. In that case, I use alternatives based on the number of lines of code if branch coverage is not of interest.
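
To make the counting rule concrete, here is a minimal, purely hypothetical Python sketch (the Order, Item, and shipping_cost names are invented for illustration) with its decision points annotated:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    fragile: bool = False

@dataclass
class Order:
    weight: float = 0.0
    express: bool = False
    items: list = field(default_factory=list)

def shipping_cost(order: Order) -> float:
    cost = 5.0
    if order.weight > 10:        # +1
        cost += 2.0
    if order.express:            # +1
        cost *= 2
    for item in order.items:     # +1
        if item.fragile:         # +1
            cost += 0.5
    return cost

# One straight-line path plus 4 decision points: cyclomatic complexity of 5.
print(shipping_cost(Order(weight=12, items=[Item(fragile=True)])))
```

Counting decision points this way is only an approximation of the formal graph-based definition, but it is usually enough to compare two designs.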

2. Source Lines Of Code

Keeping the number of lines of code in a single module under control is a must to ensure readability. No more thousand-line files or hundred-line methods, please. That's crap.
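
When a proper tool is not at hand, even a rough count is enough to flag offenders. A minimal sketch (the file path is a placeholder, and it assumes '#' full-line comments):

```python
from pathlib import Path

def sloc(path: str) -> int:
    """Count non-blank lines that are not full-line '#' comments."""
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return sum(1 for line in lines
               if line.strip() and not line.strip().startswith("#"))

print(sloc("my_module.py"))  # placeholder path
```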

3. Lack of Cohesion of Methods

A high lack of cohesion of methods (LCOM) means that our class can be split into two or more independent components: a violation of the single responsibility principle (SRP).
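
A hypothetical example of what the metric points at: the two groups of methods below touch disjoint sets of attributes, so the class is really two classes in disguise.

```python
class Report:
    """Low cohesion: the data methods and the rendering methods share nothing."""

    def __init__(self, rows, template):
        self.rows = rows          # used only by the data methods
        self.template = template  # used only by the rendering methods

    # Group 1: operates on self.rows only.
    def add_row(self, row):
        self.rows.append(row)

    def total(self):
        return sum(self.rows)

    # Group 2: operates on self.template only.
    def set_template(self, template):
        self.template = template

    def render_header(self):
        return self.template.upper()
```

Splitting it into, say, a ReportData and a ReportLayout class (names invented here) removes the violation.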

4. Number of commits

How many times will a file change? How many times has it already been modified? According to the Open Closed Principle, a class (and, in general, a file) should not change once it is completed, but simply be reused. Writing a class that we already know will change over and over is a clear violation of the Open Closed Principle. In my opinion, it is easier to reason about file modifications than about an abstract OOP principle.
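
The count itself is cheap to obtain from version control. A sketch, assuming the script runs inside a git working tree and using an invented file path:

```python
import subprocess

def commit_count(path: str) -> int:
    """Number of commits on the current branch that touched `path`."""
    out = subprocess.run(
        ["git", "rev-list", "--count", "HEAD", "--", path],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

print(commit_count("src/payment_service.py"))  # placeholder path
```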

5. File coupling

Reasoning about which files changed together, or will change together, can help us spot explicit dependencies as well as implicit ones, which usually cannot be discovered with static analysis. The root cause of such coupling is often a producer-consumer relationship, inadequate information hiding, or a violation of the interface segregation or dependency inversion principles. Even though in some cases our expertise may lead us to tolerate a given coupling between modules, implicit dependencies should always be avoided, since they become unexpected, rather than implicit, just a few weeks after the commit.
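
Co-change data is already sitting in the repository history. A sketch that ranks pairs of files by how often they appear in the same commit (again assuming it runs inside a git working tree):

```python
import subprocess
from collections import Counter
from itertools import combinations

def co_changed_pairs(top: int = 10):
    """Rank file pairs by the number of commits in which both were modified."""
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:--commit--"],
        capture_output=True, text=True, check=True,
    ).stdout
    pairs = Counter()
    for commit in log.split("--commit--"):
        files = sorted({line.strip() for line in commit.splitlines() if line.strip()})
        for a, b in combinations(files, 2):
            pairs[(a, b)] += 1
    return pairs.most_common(top)

for (a, b), count in co_changed_pairs():
    print(f"{count:4d}  {a}  <->  {b}")
```

Pairs that score high without any explicit import between them are exactly the implicit dependencies this metric is about.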