TY - CONF
T1 - Semantic source code models using identifier embeddings
AU - Efstathiou, Vasiliki
AU - Spinellis, Diomidis
PY - 2019/5
Y1 - 2019/5
AB - The emergence of online open source repositories in recent years has led to an explosion in the volume of openly available source code, coupled with metadata that relate to a variety of software development activities. As a result, in line with recent advances in machine learning research, software maintenance activities are shifting from symbolic formal methods to data-driven methods. In this context, the rich semantics hidden in source code identifiers provide opportunities for building semantic representations of code that can assist tasks of code search and reuse. To this end, we deliver, in the form of pretrained vector space models, distributed code representations for six popular programming languages, namely Java, Python, PHP, C, C++, and C#. The models are produced using fastText, a state-of-the-art library for learning word representations. Each model is trained on data from a single programming language; the code mined to produce all models amounts to over 13,000 repositories. We highlight dissimilarities between natural language and source code, as well as variations in coding conventions among the different programming languages we processed. We describe how these heterogeneities guided our data preprocessing decisions and the selection of the training parameters for the released models. Finally, we propose potential applications of the models and discuss their limitations.
KW - Code Semantics
KW - fastText
KW - Semantic Similarity
KW - Vector Space Models
UR - http://www.scopus.com/inward/record.url?scp=85072315217&partnerID=8YFLogxK
U2 - 10.1109/MSR.2019.00015
DO - 10.1109/MSR.2019.00015
M3 - Conference contribution
AN - SCOPUS:85072315217
T3 - IEEE International Working Conference on Mining Software Repositories
SP - 29
EP - 33
BT - Proceedings - 2019 IEEE/ACM 16th International Conference on Mining Software Repositories, MSR 2019
PB - IEEE
T2 - 16th IEEE/ACM International Conference on Mining Software Repositories, MSR 2019
Y2 - 26 May 2019 through 27 May 2019
ER -