Analyzing the State of Static Analysis: A Large-Scale Evaluation in Open Source Software

Moritz Beller, Radjino Bholanath, Shane McIntosh, Andy Zaidman

Research output: Conference contribution (chapter in conference proceedings), scientific, peer-reviewed


Abstract

The use of automated static analysis has been a software engineering best practice for decades. However, little is known about its use in real-world software projects: How prevalent is the use of Automated Static Analysis Tools (ASATs) such as FindBugs and JSHint? How do developers use these tools, and how does their use evolve over time? We investigate these questions in two studies on nine different ASATs for Java, JavaScript, Ruby, and Python, with populations of 122 and 168,214 open-source projects, respectively. To compare warnings across the ASATs, we introduce the General Defect Classification (GDC) and provide a grounded-theory-derived mapping of 1,825 ASAT-specific warnings to 16 top-level GDC classes. Our results show that ASAT use is widespread, but not ubiquitous, and that projects typically do not enforce a strict policy on ASAT use. Most ASAT configurations deviate slightly from the default, but hardly any introduce new custom analyses. Only a very small set of default ASAT analyses is widely changed. Finally, most ASAT configurations, once introduced, never change. If they do, the changes are small and tend to occur within one day of the configuration's initial introduction.
Original language: English
Title of host publication: Proceedings of the 23rd IEEE International Conference on Software Analysis, Evolution, and Reengineering
Place of publication: Piscataway, NJ
Publisher: IEEE
Pages: 470-481
Number of pages: 12
ISBN (Electronic): 978-1-5090-1855-0
DOIs
Publication status: Published - Mar 2016

Keywords

  • General Defect Classification
  • Automated Static Analysis Tools
  • ASATs
  • GitHub
  • Open-Source Software
