Assessing Robustness of ML-Based Program Analysis Tools using Metamorphic Program Transformations

Research output: Contribution to conference › Paper › peer-review


Abstract

Metamorphic testing is a well-established testing technique that has been successfully applied in various domains, including testing deep learning models to assess their robustness against data noise or malicious input. Current metamorphic testing approaches for machine learning (ML) models focus on image processing and object recognition tasks, and hence cannot be applied to ML models targeting program analysis tasks. In this paper, we extend metamorphic testing approaches to ML models targeting software programs. We present Lampion, a novel testing framework that applies semantics-preserving metamorphic transformations to the test datasets. Lampion produces new code snippets that are equivalent to the original test set but differ in their identifiers or syntactic structure. We evaluate Lampion against CodeBERT, a state-of-the-art ML model for Code-To-Text tasks that creates Javadoc summaries for given Java methods. Our results show that simple transformations significantly impact the target model's behavior, providing additional insight into the model's reasoning beyond the classic performance metrics.
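
As an illustration of the transformations described in the abstract, the sketch below shows a pair of semantically equivalent Java methods: the second variant renames identifiers and wraps the body in a trivially true branch. The class name, method names, and the specific transformations chosen here are hypothetical examples rather than Lampion's actual implementation; they only demonstrate the kind of metamorphic relation the framework relies on, namely that both variants return the same result for every input while the token sequence seen by a Code-To-Text model differs.

    public class MetamorphicExample {

        // Original method as it might appear in a Code-To-Text test set.
        public static int sum(int[] values) {
            int total = 0;
            for (int v : values) {
                total += v;
            }
            return total;
        }

        // Hypothetical transformed variant: identifiers renamed and the body
        // wrapped in an always-true branch. Behavior is unchanged, but the
        // tokens a model such as CodeBERT reads are different.
        public static int sumTransformed(int[] arg0) {
            if (true) {
                int var0 = 0;
                for (int var1 : arg0) {
                    var0 += var1;
                }
                return var0;
            }
            return 0; // never taken at run time; kept so javac sees a return on every path
        }

        public static void main(String[] args) {
            int[] sample = {1, 2, 3, 4};
            // The metamorphic relation: both variants must agree on every input.
            System.out.println(sum(sample) == sumTransformed(sample)); // prints true
        }
    }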
Original language: English
Number of pages: 6
Publication status: Published - 2021
Event: IEEE/ACM International Conference on Automated Software Engineering - virtual event
Duration: 14 Nov 2021 - 20 Nov 2021

Conference

Conference: IEEE/ACM International Conference on Automated Software Engineering
Abbreviated title: ASE 2021
Period: 14/11/21 - 20/11/21

Keywords

  • Metamorphic Testing
  • Machine Learning
  • Documentation Generation
  • Code-To-Text
  • Deep learning
