I’m a big fan of pipelines that fail on warnings as well as errors. Such a policy keeps a repo clean, current, and less buggy, and it even makes teams more agile. It works best when warnings are still allowed in local builds for fast prototyping, with the developer on the hook to resolve them all by the time they share their code.
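For a .NET repo, one common way to wire this up is a conditional property in a file like Directory.Build.props. A minimal sketch, assuming Azure Pipelines as the cloud orchestrator (it sets the TF_BUILD variable):

```xml
<!-- Directory.Build.props (sketch): warnings fail the cloud build,
     but local builds stay fast and forgiving. -->
<Project>
  <PropertyGroup Condition="'$(TF_BUILD)' == 'true'">
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>
</Project>
```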
But this preference only holds for small to medium repos that comprise just one or very few tightly knit teams.
In very large repos shared by many teams, where we also have this policy, I see it as a net loss. The reason? It doesn’t achieve a clean build, and it doesn’t keep us modern or bug-free. While it can help prevent a developer from introducing warnings into their own team’s code, it quite often has the opposite effect: the engineer sweeps the problem even further under the rug by suppressing the warning instead of fixing it. Now no one will ever see the warning again, and the bug never gets fixed.
Not that I blame that developer. Such a repo is a huge place. When a small change generates hundreds or thousands of new warnings, this policy puts the developer on the hook to either fix them all or suppress them all. Most or all of those warnings appear in code the developer isn’t familiar with and that their team may not even own. The safest course of action (in the short term) is to just suppress the warnings and whistle as you walk away. The alternative, fixing all of them and getting appropriate validation from each owning team, is daunting even for the most experienced developers among us; for some of us, the effort would simply cost more than the value it delivered.
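In MSBuild terms, “suppress them all” can be a one-line change. A sketch, with stand-in warning IDs:

```xml
<!-- Directory.Build.props (illustrative): CS0618 and CS8602 stand in for
     whichever warnings the change surfaced; this hides them repo-wide. -->
<PropertyGroup>
  <NoWarn>$(NoWarn);CS0618;CS8602</NoWarn>
</PropertyGroup>
```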
So warnings get suppressed. Bugs get buried instead of exposed and fixed.
As if that wasn’t enough, the ‘goodness’ of a warning-free build isn’t even achieved, because people get clever and adjust MSBuild properties based on whether cloud build is orchestrating the build, so that warnings still appear on dev boxes but not in cloud build. How does that help anyone? Sure, we still see warnings now, so bugs are seen and fixed, right? Wrong. These warnings aren’t caught by PR/CI builds, and they proliferate in local dev box builds to the point where the code owners ignore them entirely, so the bugs still don’t get fixed. But now everyone is annoyed by the warnings.
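The ‘clever’ authoring looks something like this sketch (again assuming Azure Pipelines’ TF_BUILD as the cloud-build signal, with stand-in warning IDs):

```xml
<!-- Suppress only when cloud build is orchestrating: the official build
     stays green while dev boxes keep drowning in the same warnings. -->
<PropertyGroup Condition="'$(TF_BUILD)' == 'true'">
  <NoWarn>$(NoWarn);CS0618;CS8602</NoWarn>
</PropertyGroup>
```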
Most recently, I was blown away to see the opposite problem: warnings that appeared in cloud build but not on the local box. These warnings were breaking the official builds, yet they couldn’t even be reproduced locally. I ended up reverse-engineering the build-authoring-from-hell to learn that there were 2-3 MSBuild properties I had to set manually before the warnings would even be emitted locally, so that I could repro and fix the problem and unblock the official build.
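The shape of what I found, reduced to a sketch; the property names below are placeholders, not the real ones from that repo:

```xml
<!-- Elsewhere the repo defaulted RunAnalyzers to false; a group like this
     re-enabled the analyzers only when several properties lined up, which
     the cloud build set and a dev box, by default, did not. -->
<PropertyGroup Condition="'$(OfficialBuild)' == 'true' and '$(EnableExtraChecks)' == 'true'">
  <RunAnalyzers>true</RunAnalyzers>
</PropertyGroup>
```

Once I knew the real property names, passing them on the command line (msbuild /p:OfficialBuild=true /p:EnableExtraChecks=true, with the real names substituted) was enough to finally reproduce the warnings locally.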
Most commonly, I’m personally involved in inserting or updating the analyzer packages consumed by one of these large repos; those analyzers exist to find and prevent bugs. But with every new or updated analyzer, some (often large) number of projects whose bugs are now caught suppress the new rules instead of fixing the bugs, because I simply cannot go fix everyone else’s code.
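The suppression itself is usually a single .editorconfig line per rule; CA2000 below is purely illustrative, standing in for whatever rule the new analyzer just started enforcing:

```ini
[*.cs]
# One line per rule, and the analyzer goes quiet for the whole directory tree.
dotnet_diagnostic.CA2000.severity = none
```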
All this leaves me wondering whether different policies are more appropriate for large repos vs. small ones. What are your thoughts? Please leave your comments below.