Empirical assessment of ChatGPT’s answering capabilities in natural science and engineering

Lukas Schulze Balhorn, Jana M. Weber, Stefan Buijsman, Julian R. Hildebrandt, Martina Ziefle, Artur M. Schweidtmann*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

ChatGPT is a powerful language model from OpenAI that is arguably able to comprehend and generate text. ChatGPT is expected to greatly impact society, research, and education. An essential step toward understanding ChatGPT’s expected impact is to study its domain-specific answering capabilities. Here, we perform a systematic empirical assessment of its ability to answer questions across the natural science and engineering domains. We collected 594 questions on natural science and engineering topics from 198 faculty members across five faculties at Delft University of Technology. After collecting ChatGPT’s answers, the participants assessed their quality using a systematic scheme. Our results show that the answers from ChatGPT are, on average, perceived as “mostly correct”. Two major trends emerge: the rating of ChatGPT’s answers decreases significantly (i) as the educational level of the question increases and (ii) as the evaluated skills move beyond scientific knowledge, e.g., to a critical attitude.

Original language: English
Article number: 4998
Number of pages: 11
Journal: Scientific Reports
Volume: 14
Issue number: 1
DOIs
Publication status: Published - 2024

