Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible Evaluation and Fair Comparison

Zhu Sun, Di Yu, Hui Fang, Jie Yang, Xinghua Qu, Jie Zhang, Cong Geng

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

88 Citations (Scopus)

Abstract

With a tremendous number of recommendation algorithms proposed every year, one critical issue has attracted considerable attention: there are no effective benchmarks for evaluation, which leads to two major concerns, namely unreproducible evaluation and unfair comparison. This paper aims to conduct rigorous (i.e., reproducible and fair) evaluation for implicit-feedback based top-N recommendation algorithms. We first systematically review 85 recommendation papers published at eight top-tier conferences (e.g., RecSys, SIGIR) to summarize important evaluation factors, such as data splitting and parameter tuning strategies. Through a holistic empirical study, the impacts of different factors on recommendation performance are then analyzed in depth. Following that, we create benchmarks with standardized procedures and report the performance of seven well-tuned state-of-the-art algorithms across six metrics on six widely used datasets as a reference for later studies. Additionally, we release a user-friendly Python toolkit, which differs from existing ones in addressing the broad scope of rigorous evaluation for recommendation. Overall, our work sheds light on the issues in recommendation evaluation and lays the foundation for further investigation. Our code and datasets are available on GitHub (https://github.com/AmazingDD/daisyRec).
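
As a rough illustration of two evaluation factors the abstract highlights (data splitting and top-N ranking metrics), the sketch below shows a fixed-seed random split plus HR@N and NDCG@N computation in plain Python. The function names and signatures are illustrative assumptions for this page only and are not the daisyRec toolkit's API.

    # Illustrative sketch (not the daisyRec API): a reproducible, fixed-seed
    # train/test split and two common top-N metrics used in the paper's setting.
    import numpy as np

    def split_by_ratio(interactions, test_ratio=0.2, seed=2020):
        """Randomly split (user, item) interactions; the fixed seed makes the split reproducible."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(interactions))
        cut = int(len(interactions) * (1 - test_ratio))
        return [interactions[i] for i in idx[:cut]], [interactions[i] for i in idx[cut:]]

    def hr_at_n(ranked_items, target, n=10):
        """Hit Ratio@N: 1 if the held-out item appears in the top-N ranking, else 0."""
        return int(target in ranked_items[:n])

    def ndcg_at_n(ranked_items, target, n=10):
        """NDCG@N for a single held-out item with binary relevance."""
        if target in ranked_items[:n]:
            rank = ranked_items[:n].index(target)
            return 1.0 / np.log2(rank + 2)
        return 0.0

    # Example usage (hypothetical data):
    # train, test = split_by_ratio([(u, i) for u in range(100) for i in range(20)])
    # hr_at_n(["i3", "i7", "i1"], "i7", n=2)    -> 1
    # ndcg_at_n(["i3", "i7", "i1"], "i7", n=2)  -> 1/log2(3) ≈ 0.6309

Fixing the random seed in the splitting step is one concrete way the paper's concern about unreproducible evaluation can be addressed in practice.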

Original language: English
Title of host publication: RecSys 2020 - 14th ACM Conference on Recommender Systems
Publisher: Association for Computing Machinery (ACM)
Pages: 23-32
Number of pages: 10
ISBN (Electronic): 9781450375832
DOIs
Publication status: Published - 2020
Event: 14th ACM Conference on Recommender Systems, RecSys 2020 - Virtual, Online, Brazil
Duration: 22 Sept 2020 - 26 Sept 2020

Publication series

Name: RecSys 2020 - 14th ACM Conference on Recommender Systems

Conference

Conference: 14th ACM Conference on Recommender Systems, RecSys 2020
Country/Territory: Brazil
City: Virtual, Online
Period: 22/09/20 - 26/09/20

Keywords

  • Benchmarks
  • Recommender Systems
  • Reproducible Evaluation
