
Can a large language model judge a child’s statement?: a comparative analysis of ChatGPT and human experts in credibility assessment

View/Open

Full Text (771.5 KB)

Access

info:eu-repo/semantics/closedAccess

Date

2025

Author

Karataş, Zeki


Citation

Karataş, Z. (2025). Can a Large Language Model Judge a Child’s Statement?: A Comparative Analysis of ChatGPT and Human Experts in Credibility Assessment. Journal of Evidence-Based Social Work, 1–16. https://doi.org/10.1080/26408066.2025.2547211

Abstract

Purpose: This study investigates the inter-rater reliability between human experts (a forensic psychologist and a social worker) and a large language model (LLM) in the assessment of child sexual abuse statements. The research aims to explore the potential, limitations, and consistency of this class of AI as an evaluation tool within the framework of Criteria-Based Content Analysis (CBCA), a widely used method for assessing statement credibility.

Materials and methods: Sixty-five anonymized transcripts of forensic interviews with child sexual abuse victims (N = 65) were independently evaluated by three raters: a forensic psychologist, a social worker, and a large language model (ChatGPT, GPT-4o Plus). Each statement was coded using the 19-item CBCA framework. Inter-rater reliability was analyzed using the Intraclass Correlation Coefficient (ICC), Cohen's Kappa (κ), and other agreement statistics to compare judgments between the human-human pairing and the human-AI pairings.

Results: A high degree of inter-rater reliability was found between the two human experts, with the majority of criteria showing "good" to "excellent" agreement (15 of 19 criteria with ICC > .75). In stark contrast, reliability decreased dramatically and significantly when the AI model's evaluations were compared with those of the human experts. The AI showed systematic disagreement on criteria requiring nuanced, contextual judgment, with reliability coefficients frequently falling into "poor" or negative ranges (e.g., ICC = -.106 for "Logical structure"), indicating that its evaluation logic differs fundamentally from expert reasoning.

Discussion: The findings reveal a profound gap between the nuanced, contextual reasoning of human experts and the pattern-recognition capabilities of the LLM tested. The study concludes that this type of AI, in its current, prompt-engineered form, cannot reliably replicate expert judgment in the complex task of credibility assessment. While not a viable autonomous evaluator, it may hold potential as a "cognitive assistant" supporting expert workflows. Assessing the credibility of child testimony remains a task that deeply requires professional judgment and lies far beyond the current capabilities of such generative AI models.
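The agreement statistics named in the methods can be illustrated with a minimal sketch. Cohen's kappa, for example, corrects the observed rate of agreement between two raters for the agreement expected by chance. The ratings below are hypothetical binary codings (criterion present/absent) invented for illustration; they are not data from the study.

```python
def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each rater's marginal label rates.
    """
    n = len(r1)
    labels = set(r1) | set(r2)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of the raters' marginal frequencies per label.
    p_e = sum((r1.count(lab) / n) * (r2.count(lab) / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of one CBCA criterion across ten statements.
human = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
model = [1, 0, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohen_kappa(human, model), 3))  # → 0.348
```

Note that seven of ten raw agreements (70%) shrink to κ ≈ .35 once chance agreement is removed, which is why the study reports chance-corrected coefficients rather than raw percent agreement. The ICC analyses reported in the paper would typically use a dedicated statistics package rather than a hand-rolled formula.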

Source

Journal of Evidence-Based Social Work (United States)

Links

https://doi.org/10.1080/26408066.2025.2547211
https://hdl.handle.net/11436/10967

Collections

  • Scopus İndeksli Yayınlar Koleksiyonu [6292]
  • Sosyal Hizmet Bölümü Koleksiyonu [6]



DSpace software copyright © 2002-2015 DuraSpace
Theme by @mire NV
 

 





Recep Tayyip Erdoğan Üniversitesi, Rize, Türkiye
Recep Tayyip Erdoğan Üniversitesi Institutional Repository is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 Unported License.

DSpace@RTEÜ runs on DSpace 6.2, installed and customized as part of İdeal DSpace services.