How can you grade and track the ‘goodness’ (or ‘badness’) of your code?
Let’s talk about code violations: those tiny little itches in every developer’s back. Violations show us how badly we have coded according to a quality model that somebody defined a long time ago.
But, just like life, things move fast. Code changes, mutates, vanishes, is versioned or grows. One day you have a class that computes your wage and tomorrow you develop a whole module to do so. Moreover, the quality manager can change the rules your organization applies, and suddenly your barely-modified code has 1000 violations that were not there yesterday. How can we keep consistent track of the “grade” the quality model gives to a software system?
Well, you can take different approaches:
- Design a confidence factor (the ‘grade’) that fulfills a set of desirable requirements or
- Forget about the grade altogether. Code violations are the stuff that matters, and the key here is to know whether they can be fixed to improve the quality of your software. Anything else is disposable.
Both strategies have advantages and drawbacks. With a good confidence factor algorithm you can follow the evolution of the portfolio you are analyzing: have I improved the quality of my code compared to the previous month? However, it can be difficult to know which pieces of code should be fixed first in order to maximize the growth of such a confidence factor.
A good confidence factor algorithm should have the following characteristics:
- It should be a monotonically decreasing function of the number of violations. More violations always mean a lower grade.
- It should take into account at least the size of the code and the relevance of each violation. (A big piece of code may have more violations than a smaller one, but both can get a similar grade if they share similar types and densities of defects; see the sketch after this list.)
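To make those two properties concrete, here is a minimal sketch in Java. The severity names and weights are purely illustrative (they are not taken from any real quality tool): the weighted violation count is normalized by code size and mapped to a 0-100 grade that strictly decreases as violations accumulate.

```java
import java.util.Map;

/**
 * A minimal sketch of a confidence factor, assuming a simple model:
 * each violation carries a severity weight, and the weighted total is
 * normalized by code size (lines of code) before being mapped to a
 * 0-100 grade. Names and weights are illustrative only.
 */
public class ConfidenceFactor {

    /** Hypothetical severity weights per rule category. */
    private static final Map<String, Double> SEVERITY_WEIGHT = Map.of(
            "critical", 10.0,
            "major", 3.0,
            "minor", 1.0
    );

    /**
     * @param violationsBySeverity number of violations per severity
     * @param linesOfCode          size of the analyzed code
     * @return a grade in (0, 100]; more violations always mean a lower grade,
     *         and bigger code bodies tolerate proportionally more violations
     */
    public static double grade(Map<String, Integer> violationsBySeverity, int linesOfCode) {
        double weightedViolations = 0.0;
        for (Map.Entry<String, Integer> e : violationsBySeverity.entrySet()) {
            weightedViolations += SEVERITY_WEIGHT.getOrDefault(e.getKey(), 1.0) * e.getValue();
        }
        // Defect density per line keeps big and small code bases comparable.
        double density = weightedViolations / Math.max(linesOfCode, 1);
        // exp(-density) is strictly decreasing: every extra violation lowers the grade.
        return 100.0 * Math.exp(-density);
    }

    public static void main(String[] args) {
        // A small module and a ten-times-bigger one with the same defect density
        // get the same grade, as the second requirement above asks for.
        System.out.println(grade(Map.of("critical", 2, "minor", 15), 5_000));
        System.out.println(grade(Map.of("critical", 20, "minor", 150), 50_000));
    }
}
```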
But this may not be easy. The “relevance” of a violation is very subjective. You can give every quality rule a ‘weight’, or group rules into categories and tag them with different priorities. Then combining those results into a final grade (or sub-grades) may not be straightforward.
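One possible way to handle that aggregation, again only as a sketch with made-up characteristic names and weights, is to compute a sub-grade per category of rules and then combine them with a weighted average. Where you put the weights is exactly where the subjective “relevance” decisions end up living.

```java
import java.util.Map;

/**
 * A hedged sketch of the aggregation problem: rules are grouped into
 * characteristics (illustrative names below), each characteristic gets a
 * sub-grade, and the final grade is a weighted average of the sub-grades.
 */
public class GradeAggregation {

    /** Hypothetical relative importance of each characteristic. */
    private static final Map<String, Double> CHARACTERISTIC_WEIGHT = Map.of(
            "security", 0.40,
            "reliability", 0.35,
            "maintainability", 0.25
    );

    /** Combines per-characteristic sub-grades (0-100) into one final grade. */
    public static double finalGrade(Map<String, Double> subGrades) {
        double weighted = 0.0;
        double totalWeight = 0.0;
        for (Map.Entry<String, Double> e : subGrades.entrySet()) {
            double w = CHARACTERISTIC_WEIGHT.getOrDefault(e.getKey(), 0.0);
            weighted += w * e.getValue();
            totalWeight += w;
        }
        return totalWeight == 0.0 ? 0.0 : weighted / totalWeight;
    }

    public static void main(String[] args) {
        // A system strong on maintainability but weak on security still scores
        // modestly, because the weights say security matters more.
        System.out.println(finalGrade(Map.of(
                "security", 55.0,
                "reliability", 80.0,
                "maintainability", 95.0
        )));
    }
}
```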
What if the quality rules that were applied change? Or if their relevance is modified? Well, then your quality assurance tool has to account for the change somehow: for instance, by versioning the rule set that was used, or by building “milestones” that record each change of the quality model. Then you will be able to watch the evolution of the grades in their proper context.
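A sketch of what that record keeping could look like, with purely illustrative types, dates and version labels: each analysis is tagged with the rule-set version it was graded against, so a sudden jump in violations can be traced to a rule-set change rather than to the code itself.

```java
import java.time.LocalDate;
import java.util.List;

/**
 * A minimal sketch of the "milestone" idea: every analysis run records which
 * version of the quality model it was graded against, so grades are only
 * compared within the same version (or across an explicit milestone).
 */
public class AnalysisHistory {

    /** One analysis run, tagged with the rule-set version used to grade it. */
    record Snapshot(LocalDate date, String ruleSetVersion, int violations, double grade) {}

    public static void main(String[] args) {
        List<Snapshot> history = List.of(
                new Snapshot(LocalDate.of(2013, 5, 1), "quality-model-1.0", 120, 91.5),
                new Snapshot(LocalDate.of(2013, 6, 1), "quality-model-1.0", 110, 92.3),
                // The quality manager tightened the rules here: the jump in
                // violations reflects the model change, not worse code.
                new Snapshot(LocalDate.of(2013, 7, 1), "quality-model-2.0", 1_050, 74.0)
        );

        for (int i = 1; i < history.size(); i++) {
            Snapshot prev = history.get(i - 1);
            Snapshot curr = history.get(i);
            boolean sameModel = prev.ruleSetVersion().equals(curr.ruleSetVersion());
            System.out.printf("%s -> %s: grade %.1f -> %.1f (%s)%n",
                    prev.date(), curr.date(), prev.grade(), curr.grade(),
                    sameModel ? "comparable" : "quality model changed, new milestone");
        }
    }
}
```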
To sum up, Optimyth invites you to try the confidence factors that our tools calculate, and recommends two things:
- Fix the most critical code violations quickly (to keep the technical debt under control) and
- Track the quality of your software knowing exactly what you analyzed in the past.
Thanks for reading. See you!
Meet the author Eduardo Aguado
Computer engineer in search of excellence in software architecture. I like languages (I like to think I am fluent in Spanish and English, but I am also interested in Dutch and Chinese), computers, technology and anything you might associate with a geek. But most of all I like learning, and I will never stop doing it ;) I feel comfortable working with J2EE and writing formal language processors. If you want a parser guy, just contact me!