A misleading maintainability rating.

Recently I was approached by one of our engineers with the question: “Here I have a piece of code with a maintainability rating of A in SonarQube. But when I look at the code, I think it is complex code which is not easy to maintain at all! How is it possible that SonarQube gives an A-score for ‘maintainability’?”


To answer the question, we first need to understand what is meant by maintainability. Maintainability is one of the quality characteristics of ISO-25010[1], the ISO Software Product Quality standard. In ISO-25010, maintainability is defined as:

“This characteristic represents the degree of effectiveness and efficiency with which a product or system can be modified to improve it, correct it or adapt it to changes in environment, and in requirements.”

Additionally, the standard goes into more detail on five sub-characteristics, which I will not elaborate on in this article.


Second, we need to understand how SonarQube determines the maintainability rating of a piece of software. Luckily, SonarQube publishes how it determines its metrics[2]. Its website gives the following definition of the maintainability rating:

“Maintainability rating: The rating given to your project related to the value of your Technical debt ratio.”

This definition is followed by ranges of percentages for the scores A (high) through E (low). Apparently, we need to understand the technical debt ratio as well:

“Technical debt ratio: The ratio between the cost to develop the software and the cost to fix it. The Technical Debt Ratio formula is: Remediation cost / Development Cost.
Which can be restated as: Remediation cost / (Cost to develop 1 line of code * Number of lines of code).
The value of the cost to develop a line of code is 0.06 days.”
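To make the formula above concrete, here is a minimal sketch in Python. The rating thresholds in `maintainability_rating` are my assumption based on SonarQube’s documented defaults (A ≤ 5%, B ≤ 10%, C ≤ 20%, D ≤ 50%); the exact ranges are configurable per instance.

```python
def technical_debt_ratio(remediation_cost_days: float,
                         lines_of_code: int,
                         cost_per_line_days: float = 0.06) -> float:
    """Remediation cost / (cost to develop 1 line of code * number of lines)."""
    development_cost = cost_per_line_days * lines_of_code
    return remediation_cost_days / development_cost

def maintainability_rating(ratio: float) -> str:
    """Map the debt ratio to a letter score, assuming SonarQube's default ranges."""
    if ratio <= 0.05:
        return "A"
    if ratio <= 0.10:
        return "B"
    if ratio <= 0.20:
        return "C"
    if ratio <= 0.50:
        return "D"
    return "E"

# Example: 10,000 lines of code represent 600 days of development effort
# (10,000 * 0.06). An estimated 25 days of remediation work gives a ratio
# of roughly 4.2%, which lands in the "A" range.
ratio = technical_debt_ratio(25, 10_000)
print(round(ratio, 3), maintainability_rating(ratio))  # 0.042 A
```

Note how forgiving the formula is: a large codebase inflates the denominator, so even a substantial amount of remediation work can still yield an A.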

The next question in our quest is: how is the remediation cost defined? Unfortunately, it is not defined as such on SonarQube’s website. However, reliability remediation effort and security remediation effort are defined as the effort to fix all so-called bug issues and vulnerability issues, respectively. As far as I could find, I assume these two are the only components of the remediation cost. Both bugs and vulnerabilities, as detected by SonarQube, are warnings produced by static code analysis.

To summarize: the maintainability rating in SonarQube is based on the estimated effort needed to solve warnings produced by static code analysis. Static code analysis refers to analyzing the source code without actually executing it[3], which results in warnings to be considered and solved. Warnings that may lead to actual bugs or actual vulnerabilities in the code are classified as bugs and vulnerabilities in SonarQube and are in scope of the maintainability rating.


Referring back to ISO-25010, maintainability is about how easily a product or piece of code can be modified, for whatever reason, and that is certainly heavily determined by the complexity of the code.

Two important aspects of complexity are dependencies and obscurity.


It is obvious that complexity rises when there are many dependencies between different software entities on different abstraction levels. Therefore, one should focus on reducing dependencies in the software as much as possible. It is no coincidence that design paradigms like ‘low coupling & high cohesion’ underlie the SOLID design principles, which aim to reduce dependencies in the software such that engineers can change one entity without having to change others. Applying these design principles properly mitigates the complexity of the software.
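As a minimal sketch of the idea, consider a generator that depends only on an abstraction rather than on a concrete output mechanism. All names here (`Sink`, `ReportGenerator`, and so on) are hypothetical, purely for illustration:

```python
from typing import Protocol

class Sink(Protocol):
    """The abstraction the generator depends on."""
    def write(self, text: str) -> None: ...

class ListSink:
    """One concrete sink; new sinks can be added without touching the generator."""
    def __init__(self) -> None:
        self.lines: list[str] = []

    def write(self, text: str) -> None:
        self.lines.append(text)

class ReportGenerator:
    # Depends only on the Sink abstraction, so changing how output is
    # written never forces a change here: low coupling between entities.
    def __init__(self, sink: Sink) -> None:
        self.sink = sink

    def run(self, items: list[str]) -> None:
        self.sink.write("Report: " + ", ".join(items))

sink = ListSink()
ReportGenerator(sink).run(["orders", "invoices"])
print(sink.lines)  # ['Report: orders, invoices']
```

Whether the dependency graph looks like this, or like a tangle in which every class reaches into every other, is invisible to a rating built from bug and vulnerability warnings.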

The question is: do the bugs and vulnerabilities registered by SonarQube reflect the dependencies in your code? No, they do not.


Not understanding the intention of the software, or more specifically of the code, increases complexity as well. This is exactly what should be addressed by writing so-called ‘Clean Code’. Clean Code is code that works and is understandable by other human beings. Code that is hard or nearly impossible for other human beings to understand is called Bad Code. In many cases Bad Code is associated with big, complex functions containing deeply nested constructions and a high cyclomatic complexity. However, one should take into account that seemingly simple, small pieces of code can be obscure as well. Examples are misleading variable names and non-obvious constructions for simple operations.
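A contrived illustration of that last point, with deliberately hypothetical names: both functions below compute the same thing, and a static analyzer has no bug or vulnerability to report about either, yet only one of them tells the reader what is going on.

```python
# Obscure: the name is misleading and the construction needlessly clever,
# yet static code analysis raises no bug or vulnerability warning here.
def get_user_name(n):
    return n * (n + 1) >> 1

# Clean: identical behavior, but the intent is obvious to a human reader.
def sum_of_first_n_integers(n: int) -> int:
    """Sum of 1 + 2 + ... + n via the closed-form triangular-number formula."""
    return n * (n + 1) // 2

print(get_user_name(100), sum_of_first_n_integers(100))  # 5050 5050
```

The obscurity lives entirely in naming and intent, which is exactly the dimension a warning-based rating does not see.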

The question is: do the bugs and vulnerabilities registered by SonarQube reflect the obscurity of your code? Partly, I would say. Some warnings produced by static code analysis certainly concern the understanding and interpretation of language constructs, and addressing those will decrease obscurity. Interestingly, SonarQube apparently did not include the measured cyclomatic complexity in the maintainability rating, even though cyclomatic complexity is clearly related to maintainability.
Additionally, there are other aspects that contribute to obscurity but cannot be measured by static code analysis as performed by SonarQube, such as the use of meaningful names, to mention an obvious one.
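Cyclomatic complexity is, notably, a metric that static analysis can compute without running the code. The sketch below is a deliberately rough estimate (McCabe’s metric as "1 + branch points"); real tools count more cases, such as every `and`/`or` operand, ternaries, and comprehensions:

```python
import ast

# Node types counted as branch points in this rough estimate; an assumption
# on my part, not how any particular tool defines the metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe-style estimate: 1 + number of branch points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

snippet = """
def sign(x):
    if x > 0:
        return 1
    elif x < 0:
        return -1
    return 0
"""
# The elif is parsed as a nested If, so this counts 1 + 2 branch points.
print(cyclomatic_complexity(snippet))  # 3
```

The point is not the exact number but that this measurement is perfectly feasible for a static analyzer, which makes its absence from the maintainability rating all the more striking.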

Is the Maintainability-rating misleading?

To summarize: is a piece of code with a maintainability rating of ‘A’ from SonarQube maintainable? You cannot tell, simply because a high maintainability rating in SonarQube only tells you that the majority of the reported static code analysis warnings classified as bugs and vulnerabilities have been solved. It does not provide sufficient information about the dependencies and obscurity of the code. As such, I think the maintainability rating of SonarQube is misleading, because its name does not reflect what it actually measures.

[1] https://iso25000.com/index.php/en/iso-25000-standards/iso-25010

[2] https://docs.sonarsource.com/sonarqube/latest/user-guide/metric-definitions/

[3] “What is Software Quality?” – page 159
