Rights and Wrongs in Talk of Mind-Reading Technology

Stephen Rainey*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

This article examines the idea of mind-reading technology by focusing on an interesting case of applying a large language model (LLM) to brain data. On the face of it, experimental results appear to show that mental contents can be reconstructed directly from brain data by processing it with a ChatGPT-like LLM. The author argues, however, that this conclusion is not warranted. An examination of how LLMs work shows that they differ from natural language in an important way: the former operate through nonrational data transformations learned from a large textual corpus, whereas the latter has a rational dimension, being based on reasons. On this basis, it is argued that brain data does not directly reveal mental content but can be processed to ground indirect predictions about mental content. The author concludes that this is impressive but different in principle from technology-mediated mind reading. LLM-based brain data processing nevertheless holds promise for speech rehabilitation and novel communication methods.
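To make the "indirect prediction" point concrete, the following is a minimal toy sketch, not the system discussed in the article. All names (embed, brain_response, decode) and the synthetic data are invented for illustration, assuming the general shape of published fMRI decoders: an encoding model maps language-model features to predicted brain responses, and decoding works by ranking model-proposed candidate words against the observed response. Nothing is ever read off the brain directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins (not from the article) --------------------
# embed() maps a word sequence to a semantic feature vector; in a real
# system this would come from an LLM's hidden states. Here we use fixed
# random per-word vectors so the example runs without any model.
VOCAB = ["the", "dog", "ran", "home", "cat", "sat", "quietly"]
WORD_VECS = {w: rng.normal(size=16) for w in VOCAB}

def embed(words):
    # Average the word vectors into one semantic feature vector.
    return np.mean([WORD_VECS[w] for w in words], axis=0)

# Encoding model: a map from semantic features to brain features.
# Published decoders fit this with regression on fMRI recordings;
# here we simply invent a ground-truth linear map plus noise.
TRUE_MAP = rng.normal(size=(16, 32))

def brain_response(words):
    # Simulated noisy brain activity evoked by hearing `words`.
    return embed(words) @ TRUE_MAP + rng.normal(scale=0.1, size=32)

# --- The "mind reading" step, which is really candidate scoring -------
def decode(observed_brain, prefix, candidates, learned_map):
    """Pick the candidate word whose *predicted* brain response best
    matches the observed one. The decoder only ranks hypotheses that
    the language model proposes; content is inferred, not extracted."""
    def score(word):
        predicted = embed(prefix + [word]) @ learned_map
        return -np.linalg.norm(predicted - observed_brain)
    return max(candidates, key=score)

# Pretend the subject heard "the dog ran home" and we recorded activity.
observed = brain_response(["the", "dog", "ran", "home"])

# Given the prefix and LLM-style candidate continuations, the decoder
# infers the likeliest next word indirectly via the encoding model.
guess = decode(observed, ["the", "dog", "ran"],
               ["home", "cat", "quietly"], TRUE_MAP)
print("decoded continuation:", guess)  # -> "home" (with high probability)
```

The design makes the article's distinction visible: the LLM side supplies candidate text through nonrational statistical transformations, and the brain data only arbitrates between those candidates, which is why the output is a prediction about mental content rather than a readout of it.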

Original language: English
Pages (from-to): 1-11
Journal: Cambridge Quarterly of Healthcare Ethics
DOIs
Publication status: Published - 2024

Keywords

  • brain data
  • ChatGPT
  • fMRI
  • large language models
  • mind reading
  • reasons
