Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

1 Citation (Scopus)
27 Downloads (Pure)

Abstract

Transformer-based pre-trained models have recently achieved great results on many software engineering tasks, including automatic code completion, a staple in a developer’s toolkit. While many have striven to improve the code-understanding abilities of such models, the opposite direction – making the code itself easier to understand – has not been properly investigated. In this study, we aim to answer whether making code easier to understand through contextual data improves the performance of pre-trained code language models on the code completion task. We consider type annotations and comments as two common forms of additional contextual information that often help developers understand code better. For the experiments, we study code completion at two granularity levels, token and line completion, and use three recent large-scale language models for source code (UniXcoder, CodeGPT, and InCoder) with five evaluation metrics. Finally, we perform the Wilcoxon signed-rank test to gauge significance and measure the effect size. Contrary to our expectations, all models perform better if type annotations are removed (albeit with small effect sizes). For comments, we find that the models perform better in the presence of multi-line comments (again with small effect sizes). Based on these observations, we recommend making deliberate design choices when training, fine-tuning, or simply selecting such models for the intended data and application. Better evaluations and multimodal techniques can also be investigated further to improve the practicality and accuracy of auto-completions.
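The "type annotations removed" condition that the abstract describes can be sketched with Python's standard-library `ast` module. This is a hypothetical illustration of how such a transformation might be done, not the authors' actual tooling; the class and variable names below are invented for the example.

```python
import ast


class StripAnnotations(ast.NodeTransformer):
    """Remove type annotations from functions and assignments (illustrative sketch)."""

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.returns = None  # drop the return-type annotation
        for arg in node.args.args + node.args.posonlyargs + node.args.kwonlyargs:
            arg.annotation = None  # drop parameter annotations
        return node

    def visit_AnnAssign(self, node):
        # 'total: int = a + b'  ->  'total = a + b'
        if node.value is None:
            return None  # a bare declaration like 'x: int' is removed entirely
        return ast.copy_location(
            ast.Assign(targets=[node.target], value=node.value), node
        )


src = (
    "def add(a: int, b: int) -> int:\n"
    "    total: int = a + b\n"
    "    return total\n"
)
tree = StripAnnotations().visit(ast.parse(src))
stripped = ast.unparse(ast.fix_missing_locations(tree))
print(stripped)
```

The significance analysis would then compare paired model scores on the original and stripped corpora, e.g. with a Wilcoxon signed-rank test.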
Original language: English
Title of host publication: Proceedings of the 2023 IEEE/ACM 20th International Conference on Mining Software Repositories (MSR)
Editors: L. O'Conner
Place of Publication: Piscataway
Publisher: IEEE
Pages: 170-182
Number of pages: 13
ISBN (Electronic): 979-8-3503-1184-6
ISBN (Print): 979-8-3503-1185-3
DOIs
Publication status: Published - 2023
Event: 2023 IEEE/ACM 20th International Conference on Mining Software Repositories (MSR) - Melbourne, Australia
Duration: 15 May 2023 - 16 May 2023
Conference number: 20
https://conf.researchr.org/home/msr-2023

Publication series

Name: Proceedings - 2023 IEEE/ACM 20th International Conference on Mining Software Repositories, MSR 2023

Conference

Conference: 2023 IEEE/ACM 20th International Conference on Mining Software Repositories (MSR)
Abbreviated title: MSR
Country/Territory: Australia
City: Melbourne
Period: 15/05/23 - 16/05/23

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author relies on Dutch legislation to make this work publicly available.

Keywords

  • Code Completion
  • Pre-trained Language Models
  • Context
  • Empirical Software Engineering

