
Evaluating and Improving Static Analysis Tools Via Differential Mutation Analysis


Abstract:

Static analysis tools attempt to detect faults in code without executing it. Understanding the strengths and weaknesses of such tools, and performing direct comparisons of their effectiveness, is difficult, involving either manual examination of differing warnings on real code, or the bias-prone construction of artificial test cases. This paper proposes a novel automated approach to comparing static analysis tools, based on producing mutants of real code and comparing detection rates over these mutants. In addition to making tool differences quantitatively observable without extensive manual effort, this approach offers a new way to detect and fix omissions in a static analysis tool's set of detectors. We present an extensive comparison of three smart contract static analysis tools, and show how our approach allowed us to add three effective new detectors to the best of these. We also evaluate popular Java and Python static analysis tools and discuss their strengths and weaknesses.
Date of Conference: 06-10 December 2021
Date Added to IEEE Xplore: 10 March 2022
Conference Location: Hainan, China


I. Introduction

Static analysis of code is one of the most effective ways to avoid defects in software, and, when security is a concern, it is essential. Static analysis can find problems that are extremely hard to detect by testing, particularly when the inputs triggering a bug are hard to find. Static analysis is also often more efficient than testing: a bug that takes a fuzzer days to find may be identified immediately. Users of static analysis tools often wonder which of the multiple tools available for a language are most effective, and how much tools overlap in their results. Tools often find substantially different bugs, making it important to use multiple tools [32]. However, given the high cost of examining results, if a tool provides only marginal novelty, it may not be worth using, especially if it has a high false-positive rate. Developers of static analysis tools also want to be able to compare their tools to other tools, in order to see what detection patterns or precision/soundness trade-offs they might want to imitate. Unfortunately, comparing static analysis tools in these ways is hard, and would seem to require vast manual effort to inspect findings and determine ground truth on a scale that would provide statistical confidence.
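The core idea of the approach described in the abstract — generate mutants of real code and compare which mutants each tool flags — can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the mutant generator and the two "tools" are hypothetical stand-ins (a real pipeline would use an actual mutation tool and real static analyzers), but the scoring logic mirrors the stated method of comparing detection rates over mutants.

```python
import re

def make_mutants(source):
    """Toy mutant generator: swap one operator per mutant.
    (Stand-in for a real source-level mutation tool.)"""
    swaps = [("==", "!="), ("<", ">="), ("+", "-")]
    mutants = []
    for old, new in swaps:
        for m in re.finditer(re.escape(old), source):
            mutants.append(source[:m.start()] + new + source[m.end():])
    return mutants

# Hypothetical static analyzers: each returns True if it would emit
# any warning on the given code. Purely illustrative detectors.
def tool_a(code):
    return "!=" in code or "-" in code

def tool_b(code):
    return "-" in code

def differential_scores(source, tools):
    """Per-tool detection rate over mutants, counting only mutants the
    tool warns about when it did not warn on the original code."""
    mutants = make_mutants(source)
    rates = {}
    for name, tool in tools.items():
        baseline = tool(source)
        killed = sum(1 for m in mutants if tool(m) and not baseline)
        rates[name] = killed / len(mutants) if mutants else 0.0
    return rates

scores = differential_scores("if a == b:\n    c = a + b\n",
                             {"A": tool_a, "B": tool_b})
# Tool A flags both toy mutants; tool B flags only one, making the
# difference between the tools quantitatively observable.
```

Comparing rates computed this way requires no manually curated ground truth: each mutant is very likely a fault, so a tool that detects more of them (relative to its baseline warnings) demonstrates broader coverage.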

