Revisiting Test Smells in Automatically Generated Tests: Limitations, Pitfalls, and Opportunities

A. Panichella, Sebastiano Panichella, Gordon Fraser, Anand Ashok Sawant, Vincent J. Hellendoorn

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

1 Citation (Scopus)
70 Downloads (Pure)

Abstract

Test smells attempt to capture design issues in test code that reduce its maintainability. Previous work found such smells to be highly common in automatically generated test cases, but based this result on specific static detection rules; although these rules follow the original definition of “test smells”, a recent empirical study showed that developers perceive them as overly strict and not representative of the maintainability and quality of test suites. This led us to investigate how effective such test smell detection tools are on automatically generated test suites. In this paper, we build a dataset of 2,340 test cases automatically generated by EVOSUITE for 100 Java classes. We performed a multi-stage, cross-validated manual analysis to identify six types of test smells and label their instances. We benchmark the performance of two test smell detection tools: one widely used in prior work, and one recently introduced with the express goal of matching developer perceptions of test smells. Our results show that these test smell detection strategies poorly characterized the issues in automatically generated test suites; the older tool’s detection strategies, especially, misclassified over 70% of test smells, both missing real instances (false negatives) and marking many smell-free tests as smelly (false positives). We identify common patterns in these tests that can be used to improve the tools, refine and update the definition of certain test smells, and highlight as yet uncharacterized issues. Our findings suggest the need for (i) more appropriate metrics that match development practice; and (ii) more accurate detection strategies, to be evaluated primarily in industrial contexts.
Original language: English
Title of host publication: Proceedings - 2020 IEEE International Conference on Software Maintenance and Evolution, ICSME 2020
Place of Publication: Adelaide, Australia
Publisher: IEEE
Pages: 523-533
Number of pages: 11
ISBN (Electronic): 978-1-7281-5619-4
ISBN (Print): 978-1-7281-5620-0
Publication status: Published - 2020
Event: ICSME 2020: International Conference on Software Maintenance and Evolution - Virtual/online event due to COVID-19, Adelaide, Australia
Duration: 28 Sep 2020 - 2 Oct 2020

Publication series

Name: Proceedings - 2020 IEEE International Conference on Software Maintenance and Evolution, ICSME 2020

Conference

Conference: ICSME 2020: International Conference on Software Maintenance and Evolution
Abbreviated title: ICSME 2020
Country: Australia
City: Adelaide
Period: 28/09/20 - 2/10/20

Keywords

  • Software Quality
  • Test Generation
  • Test Smells
