Static Analysis Tools: Are You Ready for Big Data?
The principle of static analysis is simple: give me your source code and I will evaluate it. The technology has improved over the years, and today's static analysis tools are remarkable: some ship their own meta-rules and execution frameworks, others are full translation systems, many build global models to search for defects, and some can even query complex information systems for data. Most have developed advanced lexing/parsing algorithms and perform decent semantic analysis. And the evolution has not stopped: tools now focus on reducing the rate of false positives when looking for quality issues, performing more complex checks on source files, refining existing rules, and so on.
Everything evolves and static analysis is more alive than ever, but none of this really matters if QA tools cannot handle large volumes of files efficiently. When choosing a static analysis engine you have to ask yourself: is this tool prepared for huge amounts of data? That's an important point!
Nowadays companies have thousands of applications on their servers: legacy systems from 'the good old days', cutting-edge projects, prototypes, new developments that arrive every day, and so on. Mainframes and modern servers have their own schedules to balance load and computation time, and many background tasks must run at night in specific "resource slots". QA teams want to analyze these software assets as fast as possible, so that most of them can be audited within the time frame they have been assigned.
In a world of economic recession, time is money, and computing resources are money too. If a single server can analyze all the software that belongs to an organization in a short time frame (typically 4 to 8 hours at night, when the server has a lower workload), those resources are being used to the fullest. QA teams can then track all their applications every day, run analyses more frequently, and fix bugs and defects much faster. Productivity rises and benefits increase.
Optimyth's static analysis engine has been designed to handle large amounts of data. This challenge has been met by working along two lines:
- Architectural vision: parse once, analyze once and apply QA rules only one time per analyzed file. A global analysis then collects the information produced for every single file to perform whatever cross-file checks the user has defined.
- CPU usage: the engine uses all the cores available in the processor to find QA violations, through multi-threaded execution. You can customize the number of threads you want to use (or even run with no extra threads at all!). A sketch of how these two ideas fit together follows below.
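To make the idea concrete, here is a minimal sketch in Java of how such a pipeline could be wired up. This is not Optimyth's actual implementation: AnalysisEngine, Violation, parseFile, applyRules and runGlobalChecks are hypothetical names used purely for illustration.

```java
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a 'parse once, analyze once' pipeline with a
// configurable thread pool. Not Optimyth's real API.
public class AnalysisEngine {

    record Violation(Path file, int line, String rule) {}

    public List<Violation> analyze(List<Path> sources, int threads) throws Exception {
        // threads <= 0 means "no extra threads": analyze sequentially on the caller.
        if (threads <= 0) {
            List<Violation> all = new ArrayList<>();
            for (Path file : sources) all.addAll(analyzeOne(file));
            all.addAll(runGlobalChecks(all));
            return all;
        }

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // Each file is submitted exactly once; workers run in parallel on all cores.
            List<Future<List<Violation>>> futures = new ArrayList<>();
            for (Path file : sources) {
                futures.add(pool.submit(() -> analyzeOne(file)));
            }
            List<Violation> all = new ArrayList<>();
            for (Future<List<Violation>> f : futures) all.addAll(f.get());
            // Global analysis is a second pass over the per-file results only,
            // so no source file is ever parsed twice.
            all.addAll(runGlobalChecks(all));
            return all;
        } finally {
            pool.shutdown();
        }
    }

    private List<Violation> analyzeOne(Path file) {
        Object ast = parseFile(file);  // parse once
        return applyRules(ast, file);  // apply every QA rule to that single AST
    }

    // Placeholders standing in for a real parser, rule engine and global checker.
    private Object parseFile(Path file) { return new Object(); }
    private List<Violation> applyRules(Object ast, Path file) { return List.of(); }
    private List<Violation> runGlobalChecks(List<Violation> perFile) { return List.of(); }
}
```

A natural default for the thread count would be Runtime.getRuntime().availableProcessors(), which saturates every core; passing 0 keeps everything on the calling thread, which can be handy for debugging or on shared servers with strict resource slots.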
To sum up, no matter how powerful these tools are, they are useless if they take too long to analyze all the data you have. This matters even more in times of economic instability, which tend to make CIOs consider "QA analysis" disposable. Prove them wrong!
Meet the author Eduardo Aguado
Computer engineer in search of excellence in software architecture. I like languages (I like to think I am fluent in Spanish and English, but I am also interested in Dutch and Chinese), computers, technology and anything you might associate with a geek. But most of all I like learning, and I will never stop doing it ;) I feel comfortable working with J2EE and writing formal language processors. If you want a parser guy, just contact me!