Abstract
We report on the results of the seventh edition of the JUnit tool competition. This year, four tools were executed on a benchmark with (i) new classes, selected from real-world software projects, and (ii) challenging classes from the previous edition. We use Randoop and the manually written test suites of the projects as baselines. Given the interesting findings of last year, we also analyzed the effectiveness of the test suites obtained by combining the outputs of all competing tools, and compared the results with the manual test suites of the projects as well as with the suites generated by each tool individually. This paper describes our methodology and the results, and highlights the challenges faced during the contest.
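For readers unfamiliar with the setting: the competing tools, the Randoop baseline, and the manual suites all produce ordinary JUnit test classes, which are then scored (e.g., via mutation testing). Below is a minimal sketch of such a test; the `Calculator` class and its `add` method are hypothetical illustrations, not classes from the competition benchmark.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    // Hypothetical class under test; in the actual competition, the
    // benchmark classes are drawn from real-world software projects.
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // A generated or hand-written suite is a collection of tests like
    // this one; its quality is judged by coverage and mutants killed.
    @Test
    public void additionIsCommutative() {
        Calculator c = new Calculator();
        assertEquals(c.add(2, 3), c.add(3, 2));
    }
}
```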
| Original language | English |
| --- | --- |
| Title of host publication | 2019 IEEE/ACM 12th International Workshop on Search-Based Software Testing (SBST) |
| Subtitle of host publication | Proceedings |
| Place of Publication | Piscataway |
| Publisher | IEEE |
| Pages | 15-20 |
| Number of pages | 6 |
| ISBN (Electronic) | 978-1-7281-2233-5 |
| ISBN (Print) | 978-1-7281-2234-2 |
| Publication status | Published - 2019 |
| Event | SBST '19: 12th International Workshop on Search-Based Software Testing, Montreal, Canada. Duration: 27 May 2019 → 27 May 2019. Conference number: 12 |
Conference

| Conference | SBST '19: 12th International Workshop on Search-Based Software Testing |
| --- | --- |
| Abbreviated title | SBST '19 |
| Country | Canada |
| City | Montreal |
| Period | 27/05/19 → 27/05/19 |
Keywords
- Java
- automation
- benchmark
- combined performance
- mutation testing
- statistical analysis
- tool competition
- unit testing