In modern, continuous software development life cycle (SDLC) processes, code is run through testing, which may include unit testing, regression testing or static application security testing (SAST), after it is written and before it is committed to the repository. The benefit of SAST for DevSecOps is the real-time feedback it gives developers as they create and change source code, before they submit it. In this kind of workflow, feedback needs to arrive within 15 to 30 minutes at most, with some allowance for the size of the change. Fast feedback means quicker turnaround and more progress on features. The business value lies in those features, and testing makes sure they work.
Depth Versus Breadth
Checkers used by SAST are not all created equal, at least in terms of the computation power they need. Analysis time is directly related to the complexity of the checkers used to detect certain types of vulnerabilities. Coding-standard enforcement rules are typically cheaper to compute than the complex tainted-data analysis used to detect command or SQL injection, for example. There is therefore a trade-off in the types of bugs and vulnerabilities being detected. While developers are coding, it makes sense to optimize SAST compute time to reduce delays. During software builds, more time and compute power are available, so depth and breadth can be increased to catch complex vulnerabilities.
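As a sketch of this trade-off, a pipeline could select a cheap checker set for real-time developer feedback and the full set for CI builds. The checker names and stage labels below are hypothetical illustrations, not from any particular SAST tool:

```python
# Stage-based checker selection (hypothetical checker and stage names).
# Cheap coding-standard checkers while developers edit; the full set,
# including expensive tainted-data analysis, during builds.

FAST_CHECKERS = {"naming-conventions", "unused-variable", "magic-number"}
DEEP_CHECKERS = FAST_CHECKERS | {
    "sql-injection", "command-injection", "tainted-data-flow",
}

def checkers_for_stage(stage: str) -> set[str]:
    """Return the checker set to run for a given pipeline stage."""
    if stage == "developer":
        # Real-time feedback: keep the analysis cheap.
        return FAST_CHECKERS
    if stage in ("build", "merge-request"):
        # More time and compute available: go deep and broad.
        return DEEP_CHECKERS
    raise ValueError(f"unknown stage: {stage}")
```

The same idea applies whatever the tool: the developer stage trades depth for latency, and the build stage buys the depth back.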
SAST analysis time scales well with compute power. Additional computing resources, such as CPU cores and memory, can significantly shorten analysis time. Bigger is better in this case, and investing in additional hardware for SAST can pay off in productivity.
SAST analysis time is also dependent on the amount of source code analyzed. Depending on the application, the code base is a determining factor in terms of what depth and breadth of analysis is optimal.
There is some correlation between small code base size and strict requirements for SAST such as in safety-critical software. It’s expensive to build and test safety-critical code, but code size is typically smaller than in other types of applications. However, these products need to conform to industrial coding standards and thus require rigorous levels of testing and analysis to detect a wide variety of defects.
Larger and extremely large code bases require a trade-off in terms of depth and breadth of analysis. Prioritizing the types of defects and vulnerabilities to concentrate on is critical.
Speeding up SAST means reducing the amount of work. The most intensive operation is a full analysis of the entire source code base. Just as full compilation from scratch takes a long time, the same is true of SAST analysis.
To accelerate SAST processing times, follow the same three techniques used to avoid compilation in large C/C++ projects: incremental analysis, component analysis and developer analysis.
1. Incremental Analysis
Incremental builds are a major factor in reducing developer build times for C and C++ projects. Small changes don't require the entire code base to be recompiled; that would be impractical and unnecessary. When a developer modifies a file, the build infrastructure automatically recompiles only what the change requires.
SAST works in a similar manner. As source code is being compiled, SAST parses the same source code and creates a program model. The second phase is analyzing that program model against the configured checkers and generating warnings. The warnings typically go into a database for later analysis.
During the build of the software, SAST is run in parallel. Although similar to a compiler in operation, SAST must do more work than the compiler and takes longer to finish and provide results. The amount of code needed to recompile for each change directly impacts analysis time.
However, SAST of an incremental build requires considerably more work than just analyzing the changes. SAST uses abstract execution of code flow through multiple compilation units, so even though a single file is modified, the calls it makes into other files need to be analyzed as well. Incremental analysis doesn't increase the build and parse workload, but the analysis scope will be much larger than the change itself: whenever you do an incremental build, SAST needs to reanalyze the changed units plus their dependencies.
Nevertheless, it remains a good option for reducing SAST analysis times.
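The widened scope can be pictured as a reverse-dependency walk: starting from the changed files, everything that transitively calls into them must also be reanalyzed. A minimal sketch, using a made-up four-file project as the dependency graph:

```python
from collections import deque

# Reverse dependency graph for a hypothetical project:
# file -> the files that call into it.
DEPENDENTS = {
    "util.c":   {"parser.c", "net.c"},
    "parser.c": {"main.c"},
    "net.c":    {"main.c"},
    "main.c":   set(),
}

def analysis_scope(changed: set[str]) -> set[str]:
    """Everything SAST must reanalyze: the changed files plus all files
    that (transitively) depend on them, since tainted-data paths can
    cross compilation-unit boundaries."""
    scope = set(changed)
    queue = deque(changed)
    while queue:
        current = queue.popleft()
        for dependent in DEPENDENTS.get(current, ()):
            if dependent not in scope:
                scope.add(dependent)
                queue.append(dependent)
    return scope
```

Changing only `util.c` here pulls `parser.c`, `net.c` and `main.c` back into the analysis, which is why an incremental run still costs more than the size of the edit suggests.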
2. Component Analysis
If the software being developed has a loosely coupled, component-based architecture with well-defined interfaces, it’s possible to isolate both the testing and the SAST analysis to the component where changes were made. Just as this type of architecture simplifies many aspects of development, it pays off with SAST as well. It’s possible to analyze the component in isolation and get good results.
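One way to route a change set to the right analysis scope is to map changed file paths onto component directories. A sketch, assuming a hypothetical repository layout with one directory per component:

```python
# Hypothetical component layout: component name -> source path prefix.
COMPONENTS = {
    "auth":    "src/auth/",
    "billing": "src/billing/",
    "ui":      "src/ui/",
}

def affected_components(changed_files: list[str]) -> set[str]:
    """Return the components whose sources were touched, so SAST can be
    run on just those components instead of the whole code base."""
    hit = set()
    for path in changed_files:
        for name, prefix in COMPONENTS.items():
            if path.startswith(prefix):
                hit.add(name)
    return hit
```

This only pays off when the interfaces between components are genuinely well defined; if calls routinely cross component boundaries, the isolated analysis loses the cross-boundary defects, as the full merge-request analysis discussed later is meant to catch.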
3. Developer Analysis
To further reduce the SAST workload, run the first build without SAST, which removes the burden of the extra computation. Subsequent builds are much smaller, so they complete much faster, and developers can run a fast local analysis on just the files they changed.
There is an increase in false negatives, however, due to the smaller scope of analysis: only the files that changed are analyzed, so possible dependencies fall outside the program model and are not considered. Even so, this rapid analysis works well for coding standard enforcement, for example.
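Coding-standard checks of this kind need only the file in hand, which is why they stay fast. The two rules below, a line-length limit and a ban on classic unsafe C calls, are illustrative placeholders rather than any real standard:

```python
import re

# Illustrative file-local rules (placeholders, not a real standard).
BANNED = re.compile(r"\b(strcpy|sprintf|gets)\s*\(")
MAX_LINE = 100

def quick_check(filename: str, source: str) -> list[str]:
    """Fast, file-local coding-standard checks. There is no cross-file
    program model here, so cross-file issues such as tainted-data flows
    and injections are out of scope (the false negatives noted above)."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            warnings.append(f"{filename}:{lineno}: line exceeds {MAX_LINE} chars")
        if BANNED.search(line):
            warnings.append(f"{filename}:{lineno}: banned unsafe function call")
    return warnings
```

Because each file is checked independently, this scales with the size of the change, not the size of the code base.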
Where to Start?
Begin with a full build and analysis. This sets a baseline for detected bugs and vulnerabilities and establishes a performance target for reducing build time. This step will determine the exact number of lines of code, how long the analysis takes and its impact on build server load. These metrics should drive the analysis optimization effort.
Given the options, the first step should be a component-based analysis. If this works well, it’s still possible to further reduce analysis with incremental and developer analysis within components. This approach can be integrated into typical developer workflows.
When creating a developer branch for implementing changes, quick developer local analysis will catch local errors. These tend to be the largest set since they represent the largest part of the code changes. When enforcing coding standards, this is the point where violations are identified. A full analysis is recommended on a merge request to detect remaining defects related to cross-component-boundary behavior.
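Put together, the workflow amounts to choosing an analysis depth per repository event. A sketch with hypothetical event names:

```python
def select_analysis(event: str) -> str:
    """Map a repository event to an analysis depth (workflow sketch;
    the event names are assumptions, not any particular CI system's)."""
    if event == "developer-branch-push":
        # Quick local feedback: file-local checks on the changes.
        return "developer"
    if event == "component-change":
        # Isolate analysis to the affected component.
        return "component"
    if event == "merge-request":
        # Full analysis to catch cross-component defects before merging.
        return "full"
    # Default to the safest (deepest) option for anything unrecognized.
    return "full"
```

The exact trigger points will differ per CI system, but the shape is the same: cheap analysis early and often, full analysis at the merge gate.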
Speed Up SAST
To speed up SAST, apply these techniques at the appropriate development stage to reduce completion time while still producing excellent results. Keep in mind that:
● Complete analysis yields the best and most precise results,
● Incremental analysis analyzes only the modified code and its dependencies,
● Component analysis works well in software that follows a well-defined component architecture and
● Developer (local) analysis provides a good compromise of precision and analysis time.