Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries

Ali Al-Kaswan, Toufique Ahmed, Maliheh Izadi, Anand Ashok Sawant, Premkumar Devanbu, Arie van Deursen

Research output: Conference contribution in Book/Conference proceedings/Edited volume (scientific, peer-reviewed)


Abstract

Reverse engineering binaries is required to understand and analyse programs for which the source code is unavailable. Decompilers can transform largely unreadable binaries into a more readable, source code-like representation. However, reverse engineering is time-consuming, and much of that time is spent labelling functions with semantic information.
While the automated summarisation of decompiled code can help reverse engineers understand and analyse binaries, current work mainly focuses on summarising source code, and no suitable dataset exists for this task.
In this work, we extend large pre-trained language models of source code to summarise decompiled binary functions. Furthermore, we investigate the impact of input and data properties on the performance of such models. Our approach consists of two main components: the data and the model.
We first build CAPYBARA, a dataset of 214K decompiled function-documentation pairs across various compiler optimisations. We extend CAPYBARA further by generating synthetic datasets and deduplicating the data.
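As a rough, hypothetical illustration of the deduplication step (not the authors' actual pipeline), the Python sketch below drops exact duplicates from a list of function-documentation pairs by hashing a whitespace-normalised copy of each function body; the "code" and "docstring" field names are assumptions, not the CAPYBARA schema.

    import hashlib

    def dedup_pairs(pairs):
        # Keep only the first occurrence of each function body, comparing
        # after whitespace normalisation so reformatted copies also collide.
        seen = set()
        unique = []
        for pair in pairs:
            key = " ".join(pair["code"].split())  # "code" is a hypothetical field name
            digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
            if digest not in seen:
                seen.add(digest)
                unique.append(pair)
        return unique

    # Two trivially reformatted copies of the same function collapse to one pair.
    pairs = [
        {"code": "int add(int a,int b){return a+b;}", "docstring": "Add two ints."},
        {"code": "int add(int a, int b) { return a + b; }", "docstring": "Add two ints."},
    ]
    print(len(dedup_pairs(pairs)))  # -> 1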
Next, we fine-tune the CodeT5 base model on CAPYBARA to create BinT5. BinT5 achieves state-of-the-art BLEU-4 scores of 60.83, 58.82, and 44.21 for summarising source, decompiled, and synthetically stripped decompiled code, respectively. This indicates that such models can be successfully extended to decompiled binaries.
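As a minimal sketch of the inference and scoring plumbing (assuming the Hugging Face Transformers and NLTK libraries), the snippet below generates a summary for a decompiler-style function and computes a smoothed BLEU-4 score against a reference. The public Salesforce/codet5-base checkpoint stands in for BinT5, which would first need fine-tuning on CAPYBARA; the input function and reference summary are made up for illustration.

    from transformers import AutoTokenizer, T5ForConditionalGeneration
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    # Base CodeT5 checkpoint; the fine-tuned BinT5 weights are not assumed here.
    tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
    model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

    # A toy decompiler-style function (hypothetical input).
    decompiled = 'undefined4 main(void) { puts("hello"); return 0; }'
    inputs = tokenizer(decompiled, return_tensors="pt",
                       truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_length=48, num_beams=4)
    summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)

    # Smoothed BLEU-4 against a single reference summary.
    reference = "prints hello and returns zero".split()
    score = sentence_bleu([reference], summary.split(),
                          weights=(0.25, 0.25, 0.25, 0.25),
                          smoothing_function=SmoothingFunction().method2)
    print(summary, score)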
Finally, we found that the performance of BinT5 is not heavily dependent on the dataset size or the compiler optimisation level. We recommend that future research further investigate knowledge transfer when working with less expressive input formats, such as stripped binaries.
Original language: English
Title of host publication: Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
Editors: Cristina Ceballos
Place of Publication: Piscataway
Publisher: IEEE
Pages: 260-271
Number of pages: 12
ISBN (Electronic): 978-1-6654-5278-6
ISBN (Print): 978-1-6654-5279-3
Publication status: Published - 2023
Event: 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Taipa, Macao
Duration: 21 Mar 2023 - 24 Mar 2023

Conference

Conference: 2023 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)
Country/Territory: Macao
City: Taipa
Period: 21/03/23 - 24/03/23

Bibliographical note

Accepted author manuscript

Keywords

  • Decompilation
  • Binary
  • Reverse Engineering
  • Summarization
  • Deep Learning
  • Pre-trained Language Models
  • CodeT5
  • Transformers
