Analysis Mistakes

When I started my professional career at Fraunhofer in 2009, software quality and static code analysis were among the first things I fell in love with and focused on. Having a bunch of tools at my disposal that showed me where my code (and the code of my colleagues) was flawed seemed like the perfect setting to learn and improve. We used Checkstyle, SpotBugs (back then called FindBugs), and PMD, both locally and on our Jenkins server.

One of the biggest mistakes (if not the biggest) I made back then was to impose the rules on my team without explaining the reasons for each rule, or for using these tools in general. This quickly led to heated discussions, most of them emotional and unproductive. I was able to convince them in the end, but that approach took a huge toll on me and on the team as a whole.

So, here’s my first word of advice: Never introduce static analysis as a top-down decree. Instead, discuss the rules and agree (!) on your ruleset as a team. Doing this will be (mostly) stress-free and will improve the whole team, not just you. Many developers aren’t familiar with such tools or rules, and simply explaining the “why” behind them can work wonders.

It’s a Team Effort

When I joined a new team and employer, I was tasked with pretty much the same scenario: introduce analysis tools and improve the codebase. But this time, I did two things differently.

First, I was not alone this time. I was working closely with two colleagues on the matter of software quality, which was very important: I wasn’t the strange new guy single-handedly proposing changes to the development process and introducing code quality tools. Having a dedicated sub-team made all the difference.

Second, when we chose our tool (ReSharper for C# in this case), we analysed all the existing rules, then preselected a subset of rules that we wanted to start with, and discussed them with the team. We agreed on a set of checks that everyone was on board with (fun fact: the team wanted even more rules than we proposed!). Over the course of 1.5 years, we gradually tightened the rules and introduced more and more of them, always trying not to overwhelm our fellow devs. By taking it step by step, we ensured that static analysis became a natural part of our workflow rather than an obstacle.

Maybe you can already guess my second word of advice: Start small. Do not begin with too many rules. Most tools enable a massive set of default checks — far too many to be useful right away. Instead, pick a few key rules, disable the rest, and then slowly increase the number or strictness of checks over time. Your team will thank you, because you’re not overloading them with new obligations on top of their day-to-day work.
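As a concrete illustration, here’s what “start small” can look like with PMD, one of the tools mentioned earlier. Instead of enabling whole rule categories, a minimal custom ruleset references only a handful of individual rules (the specific rules below are just examples, not a recommendation):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ruleset name="starter-rules"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>A deliberately small starter ruleset; extend it step by step.</description>

    <!-- Reference individual rules instead of entire categories. -->
    <rule ref="category/java/bestpractices.xml/UnusedLocalVariable"/>
    <rule ref="category/java/errorprone.xml/EmptyCatchBlock"/>
    <rule ref="category/java/codestyle.xml/UnnecessaryImport"/>
</ruleset>
```

Tightening the rules over time then simply means adding more `<rule>` references (or eventually whole categories) once the team is comfortable with the current set.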

Absolute Numbers vs. Trend Analysis

Static code analysis is great. It can tell you about the quality of your code at any given time. If your code is free from findings, you can use your analysis tools to keep it that way. But most of the time, you’ll use those tools on a project that’s been alive for quite a while, with thousands or even tens of thousands of findings. And that’s where the trouble begins.

I’ve seen teams completely demoralised by the sheer number of findings in their codebase. Not because the code suddenly got worse, but because they could finally see the full extent of its issues — and that can be paralysing. Sometimes, though, it’s not even the code that’s the problem, but a misconfigured analysis profile burying the team in unnecessary warnings.

Now, don’t get me wrong: having tools that show you findings in your code is great. But what exactly do you think will happen when you tell someone outside the team “There are 217,323 findings in our codebase”? They do not have the same context that you have and can’t interpret the number properly. What they will remember is that scary figure: 217,323. And suddenly, your project has a ‘quality problem’, even if it’s actually improving.

Shall we turn the tools off, then? No, of course not. We can still use them, but shift the focus to something that helps everyone. Usually, we use code analysis for two reasons:

  1. Find out about the quality of our code and where it violates our rules.
  2. Monitor the quality continuously.

And (2) is far more important than (1). We want to see how the quality of our code base develops over time. We want to see the trend. And that’s what we should focus on.

My third word of advice is this: Forget absolute numbers. Focus on trends. There are several tools out there that can help you monitor them.
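You don’t even need a dedicated tool to get a feel for the idea. A minimal sketch in Python (with made-up findings counts): record the total number of findings per build and report the direction of change rather than the absolute value:

```python
def trend_report(history):
    """Summarise how findings counts developed across builds.

    history: findings totals from oldest to newest build.
    Returns a short, human-readable trend summary.
    """
    if len(history) < 2:
        return "not enough data for a trend"
    # Net change from the first to the last recorded build.
    net = history[-1] - history[0]
    direction = "improving" if net < 0 else "worsening" if net > 0 else "stable"
    return f"{direction}: {net:+d} findings over {len(history) - 1} builds"

# Made-up example numbers: the absolute count may be scary,
# but the trend is what actually matters.
print(trend_report([4974, 4913, 4890, 4902, 4871]))
```

A report like “improving: −103 findings over 4 builds” tells management far more than the raw total ever could.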

I want to introduce you to two of the aforementioned tools that I’ve encountered in projects over my career. (Yes, I know there’s also SonarQube, but I’ve never really worked with it.)

NDepend

If you’re in a .NET-only environment, you can use NDepend.

Screenshot of the NDepend dashboard showing code quality metrics for a test project, including 59,381 lines of code, 3.7% debt with an A rating, and 4,974 issues.
Figure 1: Default dashboard of an NDepend web report

NDepend is a tool that you usually use either as a plugin for Visual Studio or as a standalone application, but those are Windows-only. However, there’s also a headless version available that runs on Windows, Linux, and macOS. You can use it to create web-based reports about the quality of your code; figure 1 shows the default dashboard that appears when you open such a report. It’s very easy to integrate this headless version into your build pipeline and publish the results to GitHub Pages or GitLab Pages, for example. Bonus: if you’re using ReSharper, you can even integrate its findings into NDepend ((1) in figure 2). There’s also a separate trend view that shows the development of your code coverage, your technical debt, violated rules, and some other metrics ((2) in figure 2).
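To make the pipeline idea a bit more tangible, here is a rough sketch of a GitHub Actions job that runs the headless analysis and publishes the report. Treat everything here as an assumption: solution and project names, paths, and the exact NDepend console invocation all depend on your setup and NDepend version, so check NDepend’s own documentation before copying anything.

```yaml
# Illustrative sketch only — names, paths, and the NDepend invocation
# are assumptions, not a verified configuration.
name: code-quality
on: [push]
jobs:
  ndepend-report:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build solution
        run: dotnet build MySolution.sln   # hypothetical solution name
      - name: Run headless NDepend analysis
        # The cross-platform console runner; path and project file are placeholders.
        run: dotnet "$env:NDEPEND_HOME/NDepend.Console.MultiOS.dll" MyProject.ndproj
      - name: Upload the generated web report
        uses: actions/upload-pages-artifact@v3
        with:
          path: NDependOut   # NDepend's default output folder, per its docs
```

A follow-up job (or `actions/deploy-pages`) would then publish the uploaded artifact to GitHub Pages, so the latest report is always one click away for the whole team.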

Screenshot of the NDepend 'Rules' dashboard for a test project, highlighting '52 R# Code Inspections,' violated rules, and issue metrics per rule as well as the tab for 'Trends'.
Figure 2: Rules tab of the report, including ReSharper inspections

Teamscale

A second tool that offers continuous monitoring, trend analysis, and more is Teamscale. It supports a bunch of different technologies (.NET included) out of the box, and you can also upload the results of analysis tools that you’re already using.

Teamscale dashboard showing 'Lines of Code' at 260.5K, 26.5% clone coverage, 33.5K findings, pie charts for method length and nesting depth, bar chart summarizing findings, and a blue treemap for clone coverage analysis.
Figure 3: Default dashboard of Teamscale

Figure 3 above shows Teamscale’s default dashboard. You can focus on absolute numbers if needed, but the real power lies in its ability to show trends: perfect for presenting results to management without overwhelming them with raw data, or for keeping the trend visible to the team at all times. Figure 4 shows two exemplary widgets that focus on trends.

Teamscale dashboard depicting a Metrics Table with progress indicators and a Metrics Change Table showing numerical changes in code quality metrics.
Figure 4: Two trend-oriented widgets in Teamscale

In the End, it’s Humans After All

Having tools to analyse your code is great; having them integrated into your development process is even better. But unfortunately, that’s still not enough.

At the end of the day, code quality is not just about tools — it’s about people. You need someone who takes care of the continuous improvement of your software and, in turn, your team: a quality engineer (ideally, not just one but at least two).

A quality engineer is not just the person who installs and configures static analysis tools. They are the ones who drive a culture of quality — helping developers understand and embrace the benefits of code analysis instead of seeing it as a bureaucratic nuisance. They ensure that rules evolve alongside the team’s needs, keeping static analysis a living process rather than a rigid, outdated checklist.

No matter how good your tools are, you will always face resistance from developers at some point. Some will see static analysis as a burden, others as a direct challenge to their expertise. This is why a quality engineer must also be a diplomat — not just enforcing rules, but educating, discussing, and adapting. If a rule is constantly ignored, it might not be because developers are lazy — it might be because the rule doesn’t make sense in the project’s context.

A good quality engineer is part technologist, part advocate, and part diplomat.

My final word of advice: Appoint a quality engineer. And if no one volunteers, step up and take the job yourself. If you care about clean code, you don’t have to wait for someone else to drive the change. Start small. Set up discussions. Advocate for better practices. And most importantly, focus on people, not just numbers. Because in the end, code quality is about making life easier for humans — not just for the machines analysing it.