Validity Claims in Children-AI Discourse: Experiment with ChatGPT


Reference

Schneider, J., Chandra Kruse, L., & Seeber, I. (2024). Validity Claims in Children-AI Discourse: Experiment with ChatGPT. Paper presented at the 16th International Conference on Computer Supported Education.

Publication type

Paper in Conference Proceedings

Abstract

Large language models like ChatGPT are increasingly used by people from all age groups. They have already started to transform education and research. However, these models are also known to have a number of shortcomings, e.g., they can hallucinate or provide biased responses. While adults might be able to assess such shortcomings, the most vulnerable group of our society – children – might not be able to do so. Thus, in this paper, we analyze responses by OpenAI’s ChatGPT to commonly asked questions tailored to different age groups. Our assessment uses Habermas’ validity claims. We operationalize them using computational measures, such as established reading scores, and interpretative analysis. Our results indicate that responses were mostly (but not always) truthful, legitimate, and comprehensible, and aligned with the developmental phases, with one important exception: responses for two-year-olds.
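The abstract mentions operationalizing comprehensibility through established reading scores. As a hedged illustration only (the paper's exact metrics and tooling are not specified on this page), the Python sketch below computes the Flesch-Kincaid grade level of a sample response using a crude vowel-group syllable heuristic; the sample text and helper names are hypothetical.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: groups of consecutive vowels, minus a silent trailing 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)

if __name__ == "__main__":
    # Hypothetical ChatGPT answer to a child's question, used only to demo the score.
    answer = ("The sun is a big, bright star. It gives us light and keeps us warm "
              "so plants can grow and we can play outside.")
    print(f"Estimated grade level: {flesch_kincaid_grade(answer):.1f}")
```

A lower grade level suggests text that younger readers can follow; comparing such scores across age-tailored responses is one way a comprehensibility claim could be checked computationally.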

Persons

  • Schneider, J.
  • Chandra Kruse, L.
  • Seeber, I.

Organizational Units

  • Liechtenstein Business School


DOI

http://dx.doi.org/10.5220/0012552300003693