TY - GEN
T1 - Leveraging Large Language Models for Sequential Recommendation
AU - Harte, Jesse
AU - Zorgdrager, Wouter
AU - Louridas, Panos
AU - Katsifodimos, Asterios
AU - Jannach, Dietmar
AU - Fragkoulis, Marios
PY - 2023
Y1 - 2023
N2 - Sequential recommendation problems have received increasing attention in research during the past few years, leading to the inception of a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches to leverage the power of LLMs in different ways. Our results from experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that leverages LLM embeddings for producing recommendations can provide competitive performance by highlighting semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.
KW - Evaluation
KW - Large Language Models
KW - Recommender Systems
KW - Sequential Recommendation
UR - http://www.scopus.com/inward/record.url?scp=85174488877&partnerID=8YFLogxK
U2 - 10.1145/3604915.3610639
DO - 10.1145/3604915.3610639
M3 - Conference contribution
AN - SCOPUS:85174488877
T3 - Proceedings of the 17th ACM Conference on Recommender Systems, RecSys 2023
SP - 1096
EP - 1102
BT - Proceedings of the 17th ACM Conference on Recommender Systems, RecSys 2023
PB - ACM
T2 - 17th ACM Conference on Recommender Systems, RecSys 2023
Y2 - 18 September 2023 through 22 September 2023
ER -