The Prevalence of Code Smells in Machine Learning Projects

Research output: Contribution to conference › Paper › peer-review


Abstract

Artificial Intelligence (AI) and Machine Learning (ML) are pervasive in the current computer science landscape. Yet, the field still lacks software engineering experience and established best practices. One such best practice, static code analysis, can be used to find code smells, i.e., (potential) defects in the source code, refactoring opportunities, and violations of common coding standards. Our research set out to discover the most prevalent code smells in ML projects. We gathered a dataset of 74 open-source ML projects, installed their dependencies, and ran Pylint on them. This resulted in a top-20 list of the detected code smells per category. Manual analysis of these smells mainly showed that code duplication is widespread and that the PEP8 convention for identifier naming style may not always be applicable to ML code due to its resemblance to mathematical notation. More interestingly, however, we found several major obstructions to the maintainability and reproducibility of ML projects, primarily related to the dependency management of Python projects. We also found that Pylint cannot reliably check for correct usage of imported dependencies, including prominent ML libraries such as PyTorch.
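As a hedged illustration of the naming clash the abstract describes (the snippet below is hypothetical and not taken from the studied projects): PEP8 prescribes lowercase snake_case for variable names, so Pylint's default checks report math-style identifiers such as an uppercase `X` or `W` as `invalid-name` (C0103), even though they mirror standard ML notation.

```python
import numpy as np

# Math-style identifiers: Pylint's default PEP8-based naming check
# flags uppercase variables like X and W as invalid-name (C0103),
# even though they mirror the notation y = XW + b used in ML papers.
X = np.array([[1.0, 2.0], [3.0, 4.0]])  # design matrix
W = np.array([0.5, -0.5])               # weight vector
b = 1.0                                 # bias term

y = X @ W + b  # linear model prediction
print(y)  # [0.5 0.5]
```

Projects that want to keep such notation can whitelist individual identifiers via Pylint's `good-names` option (in the `[BASIC]` section of a `.pylintrc`), rather than disabling the naming check entirely.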
Original language: English
Pages: 35-42
DOIs
Publication status: Published - 2021
Event: WAIN'21 - 1st Workshop on AI Engineering – Software Engineering for AI - Virtual, Madrid, Spain
Duration: 30 May 2021 – 31 May 2021
https://conf.researchr.org/home/icse-2021/wain-2021

Conference

Conference: WAIN'21 - 1st Workshop on AI Engineering – Software Engineering for AI
Abbreviated title: WAIN'21
Country: Spain
City: Madrid
Period: 30/05/21 – 31/05/21

Keywords

  • AI Engineering
  • MLOps
  • Code smells

