
Can a large language model judge a child’s statement?: a comparative analysis of ChatGPT and human experts in credibility assessment

View/Open

Full Text (771.5 KB)

Access

info:eu-repo/semantics/closedAccess

Date

2025

Author

Karataş, Zeki

Citation

Karataş, Z. (2025). Can a Large Language Model Judge a Child’s Statement?: A Comparative Analysis of ChatGPT and Human Experts in Credibility Assessment. Journal of Evidence-Based Social Work, 1–16. https://doi.org/10.1080/26408066.2025.2547211

Abstract

Purpose: This study investigates inter-rater reliability between human experts (a forensic psychologist and a social worker) and a large language model (LLM) in the assessment of child sexual abuse statements. The research explores the potential, limitations, and consistency of this class of AI as an evaluation tool within the framework of Criteria-Based Content Analysis (CBCA), a widely used method for assessing statement credibility.

Materials and methods: Sixty-five anonymized transcripts of forensic interviews with child sexual abuse victims (N = 65) were independently evaluated by three raters: a forensic psychologist, a social worker, and a large language model (ChatGPT, GPT-4o Plus). Each statement was coded using the 19-item CBCA framework. Inter-rater reliability was analyzed with the intraclass correlation coefficient (ICC), Cohen's kappa (κ), and other agreement statistics to compare the human-human pairing with the human-AI pairings.

Results: Inter-rater reliability between the two human experts was high, with most criteria showing "good" to "excellent" agreement (15 of 19 criteria with ICC > .75). In contrast, reliability dropped sharply and significantly when the AI model's evaluations were compared with those of the human experts. The AI disagreed systematically on criteria requiring nuanced, contextual judgment, with reliability coefficients frequently falling into "poor" or negative ranges (e.g., ICC = -.106 for "Logical structure"), indicating that its evaluation logic differs fundamentally from expert reasoning.

Discussion: The findings reveal a profound gap between the nuanced, contextual reasoning of human experts and the pattern-recognition capabilities of the LLM tested. The study concludes that this type of AI, in its current, prompt-engineered form, cannot reliably replicate expert judgment in the complex task of credibility assessment. While not a viable autonomous evaluator, it may hold potential as a "cognitive assistant" supporting expert workflows. Assessing the credibility of child testimony remains a task that requires deep professional judgment and lies beyond the current capabilities of such generative AI models.
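
For readers unfamiliar with the two agreement statistics named above, the following is a minimal Python sketch of how Cohen's κ and a two-way ICC are typically computed for a pair of raters. The rating data are invented for illustration and are not the study's data; the sketch assumes the third-party scikit-learn and pingouin packages, and the abstract does not specify which ICC form the authors used.

    # Sketch of the agreement statistics named in the abstract:
    # Cohen's kappa and the intraclass correlation coefficient (ICC).
    # All ratings below are invented; they are NOT the study's data.
    import numpy as np
    import pandas as pd
    import pingouin as pg
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)

    # Hypothetical CBCA codings for one criterion: 65 statements, 0-2 scale.
    expert_a = rng.integers(0, 3, size=65)  # e.g., forensic psychologist
    expert_b = np.clip(expert_a + rng.integers(-1, 2, size=65), 0, 2)  # mostly agrees

    # Cohen's kappa: chance-corrected categorical agreement for one rater pair.
    kappa = cohen_kappa_score(expert_a, expert_b)
    print(f"Cohen's kappa = {kappa:.3f}")

    # The ICC expects long format: one row per (statement, rater) observation.
    long = pd.DataFrame({
        "statement": np.tile(np.arange(65), 2),
        "rater": ["A"] * 65 + ["B"] * 65,
        "score": np.concatenate([expert_a, expert_b]),
    })
    icc = pg.intraclass_corr(data=long, targets="statement",
                             raters="rater", ratings="score")
    # ICC2 = two-way random effects, absolute agreement, single rater.
    print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])

Which ICC form maps onto the .75 "good"/"excellent" threshold cited above depends on the study's design; ICC2 is shown here only as a common choice for two-way rater designs.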

Source

Journal of Evidence-Based Social Work (United States)

URI

https://doi.org/10.1080/26408066.2025.2547211
https://hdl.handle.net/11436/10967

Collections

  • Scopus İndeksli Yayınlar Koleksiyonu [6292]
  • Sosyal Hizmet Bölümü Koleksiyonu [6]


