What does BERT know about books, movies and music? Probing BERT for Conversational Recommendation

Gustavo Penha, Claudia Hauff

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

47 Citations (Scopus)

Abstract

Heavily pre-trained transformer models such as BERT have recently been shown to be remarkably powerful at language modelling, achieving impressive results on numerous downstream tasks. It has also been shown that they implicitly store factual knowledge in their parameters after pre-training. Understanding what the pre-training procedure of LMs actually learns is a crucial step for using and improving them for Conversational Recommender Systems (CRS). We first study how much off-the-shelf pre-trained BERT "knows" about recommendation items such as books, movies and music. In order to analyze the knowledge stored in BERT's parameters, we use different probes (i.e., tasks to examine a trained model regarding certain properties) that require different types of knowledge to solve, namely content-based and collaborative-based. Content-based knowledge is knowledge that requires the model to match the titles of items with their content information, such as textual descriptions and genres. In contrast, collaborative-based knowledge requires the model to match items with similar ones, according to community interactions such as ratings. We resort to BERT's Masked Language Modelling (MLM) head to probe its knowledge about the genre of items, with cloze-style prompts. In addition, we employ BERT's Next Sentence Prediction (NSP) head and representations' similarity (SIM) to compare relevant and non-relevant search and recommendation query-document inputs, exploring whether BERT can, without any fine-tuning, rank relevant items first. Finally, we study how BERT performs in a conversational recommendation downstream task. To this end, we fine-tune BERT to act as a retrieval-based CRS. Overall, our experiments show that: (i) BERT has knowledge stored in its parameters about the content of books, movies and music; (ii) it has more content-based knowledge than collaborative-based knowledge; and (iii) it fails at conversational recommendation when faced with adversarial data.
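The cloze-style MLM genre probe described in the abstract can be sketched as follows. This is a minimal, self-contained illustration of the probing setup only: the prompt template and the scoring function (`toy_score`) are hypothetical stand-ins, not the paper's exact prompts. In the actual probe, BERT's MLM head would supply the score for each candidate genre token at the `[MASK]` position.

```python
# Sketch of a cloze-style MLM probe: build a prompt with a [MASK] slot for
# the genre of an item, then rank candidate genres by a model score at that
# slot. The template and scorer below are illustrative assumptions.

CLOZE_TEMPLATE = "{title} is a [MASK] {item_type}."


def build_prompt(title, item_type):
    """Build a cloze-style prompt with a [MASK] slot for the genre."""
    return CLOZE_TEMPLATE.format(title=title, item_type=item_type)


def rank_genres(title, item_type, candidate_genres, score_fn):
    """Rank candidate genres by the model's score at the [MASK] slot.

    With a real BERT model, score_fn would return the MLM-head logit of
    the genre token at the masked position; here it is pluggable.
    """
    prompt = build_prompt(title, item_type)
    scored = [(genre, score_fn(prompt, genre)) for genre in candidate_genres]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Toy scorer (assumption, not BERT): favours genres whose words also
# appear in the prompt, just to make the ranking runnable end to end.
def toy_score(prompt, genre):
    return sum(prompt.lower().count(word) for word in genre.lower().split())


if __name__ == "__main__":
    ranking = rank_genres(
        "The Horror of Dracula", "movie",
        ["horror", "comedy", "romance"], toy_score)
    print(ranking[0][0])  # "horror" ranks first under this toy scorer
```

With the `transformers` library installed, `toy_score` would be replaced by a function that tokenizes the prompt, runs it through a pre-trained BERT with its MLM head, and reads off the logit of each candidate genre at the `[MASK]` index, which is how the probe measures content-based knowledge without any fine-tuning.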

Original language: English
Title of host publication: RecSys 2020 - 14th ACM Conference on Recommender Systems
Publisher: Association for Computing Machinery (ACM)
Pages: 388-397
Number of pages: 10
ISBN (Electronic): 9781450375832
DOIs
Publication status: Published - 2020
Event: 14th ACM Conference on Recommender Systems, RecSys 2020 - Virtual, Online, Brazil
Duration: 22 Sept 2020 - 26 Sept 2020

Publication series

Name: RecSys 2020 - 14th ACM Conference on Recommender Systems

Conference

Conference: 14th ACM Conference on Recommender Systems, RecSys 2020
Country/Territory: Brazil
City: Virtual, Online
Period: 22/09/20 - 26/09/20

Keywords

  • conversational recommendation
  • conversational search
  • probing

