Quality Report
The quality report shows how the changes in a pull request have affected code quality in the files they touch.
The first section shows the quality score, complexity and method length averaged over all of the code in these files, along with the change in the number of lines.
The second section then breaks this down per file, while the last section shows up to five individual methods that have the lowest quality score.
These metrics are shown in the quality report and also in the IDE when you hover over a method name. They can be switched off via the Sourcery settings in the IDE.
The quality score is a percentage score for code quality. It is a calibrated and scaled blend of the complexity, length and working memory metrics.
The complexity metric is a measure of how difficult your code is to read and understand. It is based on these principles:
- Each break in the linear flow of the code makes it harder to understand
- When these flow-breaking structures are nested they are even harder to understand
Flow-breaking structures are things like conditionals, loops and complex series of logical operators.
Full details of how this metric is calculated are given in the white paper *Cognitive Complexity: A New Way of Measuring Understandability* by G. Ann Campbell of SonarSource.
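For instance, the two hypothetical functions below perform similar checks, but the nested version scores higher because each extra level of nesting adds to the increment for a flow-breaking structure (the `order` objects and their attributes are invented for illustration; the annotated increments follow the scheme in the white paper):

```python
# Sequential conditions: each flow break costs +1, with no nesting penalty.
def order_total(order):
    if order.total == 0:       # +1
        return None
    if order.is_cancelled:     # +1
        return None
    return order.total         # total complexity: 2


# Nested conditions: each level of nesting raises the cost of the
# structure inside it.
def combined_total(orders):
    total = 0
    for order in orders:                 # +1
        if order is not None:            # +2 (1 + 1 level of nesting)
            if not order.is_cancelled:   # +3 (1 + 2 levels of nesting)
                total += order.total
    return total                         # total complexity: 6
```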
An important way to reduce the complexity of your code is to reduce the amount of nesting.
- Merging nested if conditions is a quick win.
- Introducing guard clauses can also help when conditions are checked at the beginning of your method.
- Extracting chunks of related code into their own methods also keeps nesting to a minimum (see the sketch after this list).
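As a sketch of the first two techniques, here is a hypothetical `process_order` function before and after flattening (the `order` attributes are invented for illustration):

```python
# Before: three levels of nesting.
def process_order(order):
    if order is not None:
        if order.items:
            if not order.is_cancelled:
                return sum(item.price for item in order.items)
    return 0


# After: the nested conditions are merged into a single guard clause,
# so the main logic sits at the top level of the function.
def process_order_flat(order):
    if order is None or not order.items or order.is_cancelled:
        return 0
    return sum(item.price for item in order.items)
```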
The method length metric is a measure of how long each method is on average. It is based on the number of nodes in the method's Abstract Syntax Tree.
It is better for methods to be concise and to do one thing, so this metric penalises overly long methods.
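As a rough sketch of the idea (not Sourcery's exact implementation, which may count or weight nodes differently), Python's built-in `ast` module can count the nodes in a piece of code's syntax tree:

```python
import ast


def count_ast_nodes(source: str) -> int:
    """Approximate a method-length metric by counting AST nodes."""
    tree = ast.parse(source)
    return sum(1 for _ in ast.walk(tree))


# A short method produces a small tree...
print(count_ast_nodes("def add(a, b):\n    return a + b"))
# ...while longer methods accumulate many more nodes.
```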
The working memory metric is a measure of the number of variables that need to be kept in your working memory as you read through the code.
See here for a full description of the metric.
The primary way to improve code that scores poorly on this metric is to break up large functions into smaller ones. You can take a look at our blog post here for a practical example of how to do this.
Another approach is to extract complex conditional tests into well-named variables or functions, as in the sketch below.
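For example (using invented `order` and `inventory` objects), a compound condition can be split into named pieces that the reader can absorb one at a time:

```python
# Before: the condition forces the reader to hold several
# details in working memory at once.
def can_ship(order, inventory):
    return (
        order.paid
        and not order.is_cancelled
        and all(
            inventory.get(item.sku, 0) >= item.quantity
            for item in order.items
        )
    )


# After: each named piece can be read and understood on its own.
def can_ship_named(order, inventory):
    is_active = order.paid and not order.is_cancelled
    in_stock = all(
        inventory.get(item.sku, 0) >= item.quantity
        for item in order.items
    )
    return is_active and in_stock
```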
Please visit our newer docs at https://docs.sourcery.ai