Abstract
In light of the growing interest in type inference research for Python, both researchers and practitioners require a standardized process to assess the performance of various type inference techniques. This paper introduces TypeEvalPy, a comprehensive microbenchmarking framework for evaluating type inference tools. TypeEvalPy contains 154 code snippets with 845 type annotations across 18 categories that target various Python features. The framework manages the execution of containerized tools, transforms inferred types into a standardized format, and produces meaningful metrics for assessment. Through our analysis, we compare the performance of six type inference tools, highlighting their strengths and limitations. Our findings provide a foundation for further research and optimization in the domain of Python type inference.
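To make the pipeline described above concrete, the sketch below illustrates what a micro-benchmark snippet and its standardized expected-type record could look like. The snippet, the file name, and the record fields (`file`, `line_number`, `function`, `variable`, `type`) are illustrative assumptions for this page, not the framework's verbatim schema.

```python
# Hypothetical micro-benchmark snippet, e.g. from a "functions" category.
def add(a, b):
    return a + b

result = add(1, 2)

# A standardized annotation format might pair each program element with the
# type(s) a tool is expected to infer, so that outputs from different
# containerized tools can be compared uniformly. Field names here are
# assumptions, not the exact TypeEvalPy ground-truth schema.
expected_annotations = [
    {"file": "main.py", "line_number": 2, "function": "add", "type": ["int"]},
    {"file": "main.py", "line_number": 5, "variable": "result", "type": ["int"]},
]
```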
Original language | English |
---|---|
Title of host publication | Proceedings - 2024 ACM/IEEE 46th International Conference on Software Engineering |
Subtitle of host publication | Companion, ICSE-Companion 2024 |
Publisher | IEEE |
Pages | 49-53 |
Number of pages | 5 |
ISBN (Electronic) | 9798400705021 |
DOIs | |
Publication status | Published - 2024 |
Event | ACM/IEEE 46th International Conference on Software Engineering, Lisbon, Portugal. Duration: 14 Apr 2024 → 20 Apr 2024. Conference number: 46. https://conf.researchr.org/home/icse-2024 |
Publication series
Name | Proceedings - International Conference on Software Engineering |
---|---|
ISSN (Print) | 0270-5257 |
Conference
Conference | ACM/IEEE 46th International Conference on Software Engineering |
---|---|
Abbreviated title | ICSE '24 |
Country/Territory | Portugal |
City | Lisbon |
Period | 14/04/24 → 20/04/24 |
Internet address | https://conf.researchr.org/home/icse-2024 |