Can ChatGPT be used to predict citation counts, readership, and social media interaction? An exploration among 2222 scientific abstracts

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

This study explores the potential of ChatGPT, a large language model, in scientometrics by assessing its ability to predict citation counts, Mendeley readers, and social media engagement. A total of 2222 abstracts from PLOS ONE articles published during the initial months of 2022 were analyzed with ChatGPT-4, which assessed each abstract against a set of 60 criteria. A principal component analysis of these assessments identified three components: Quality and Reliability, Accessibility and Understandability, and Novelty and Engagement. Accessibility and Understandability correlated with higher Mendeley readership, while Novelty and Engagement and Accessibility and Understandability were linked to citation counts (Dimensions, Scopus, Google Scholar) and social media attention. Quality and Reliability showed minimal correlation with citation and altmetric outcomes. Finally, the predictive correlations of ChatGPT-based assessments surpassed those of traditional readability metrics. The findings highlight the potential of large language models in scientometrics and possibly pave the way for AI-assisted peer review.
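The analysis pipeline described in the abstract (GPT-4 criterion scores per abstract, dimensionality reduction with PCA, then correlation of the components with citation and altmetric outcomes) can be illustrated with a minimal sketch. The data below are hypothetical, and the choice of a 1-10 scoring scale and Spearman correlation are assumptions, not details taken from the paper.

```python
# Illustrative sketch only (not the authors' code): reduce hypothetical GPT-4
# criterion scores with PCA and correlate the components with outcomes.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical inputs: one row per abstract, one column per GPT-4 criterion score,
# plus outcome columns such as citation counts and Mendeley readers.
n_abstracts, n_criteria = 2222, 60
scores = pd.DataFrame(
    rng.integers(1, 11, size=(n_abstracts, n_criteria)),
    columns=[f"criterion_{i + 1}" for i in range(n_criteria)],
)
outcomes = pd.DataFrame({
    "citations": rng.poisson(3, n_abstracts),
    "mendeley_readers": rng.poisson(10, n_abstracts),
})

# Reduce the 60 criterion scores to three components (the paper labels its
# components Quality and Reliability, Accessibility and Understandability,
# and Novelty and Engagement).
pca = PCA(n_components=3)
components = pca.fit_transform(StandardScaler().fit_transform(scores))

# Correlate each component with each outcome (Spearman used here as an assumption).
for i in range(3):
    for name, outcome in outcomes.items():
        rho, p = spearmanr(components[:, i], outcome)
        print(f"component {i + 1} vs {name}: rho={rho:.3f}, p={p:.3f}")
```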
Original language: English
Number of pages: 19
Journal: Scientometrics
DOIs
Publication status: Published - 2024

Keywords

  • Citation prediction
  • Scientometrics
  • Altmetrics
  • ChatGPT
  • GPT-4
  • Scientific abstracts
  • Artificial intelligence

