Authors: Roman Johannes Gertz, MD • Thomas Dratsch, MD • Alexander Christian Bunck, MD • Simon Lennartz, MD • Andra-Iza Iuga, MD • Martin Gunnar Hellmich, PhD • Thorsten Persigehl, MD • Lenhard Pennig, MD • Carsten Herbert Gietzen, MD • Philipp Fervers, MD • David Maintz, MD • Robert Hahnfeldt, MD • Jonathan Kottlors, MD


ABSTRACT:
Background: Errors in radiology reports may occur because of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload. Large language models, such as GPT-4 (ChatGPT; OpenAI), may assist in generating reports.
Purpose: To assess effectiveness of GPT-4 in identifying common errors in radiology reports, focusing on performance, time, and cost-efficiency.
Materials and Methods: In this retrospective study, 200 radiology reports (radiography and cross-sectional imaging [CT and MRI]) were compiled between June 2023 and December 2023 at one institution. There were 150 errors from five common error categories (omission, insertion, spelling, side confusion, and other) intentionally inserted into 100 of the reports and used as the reference standard. Six radiologists (two senior radiologists, two attending physicians, and two residents) and GPT-4 were tasked with detecting these errors. Overall error detection performance, error detection in the five error categories, and reading time were assessed using Wald χ2 tests and paired-sample t tests.
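
The abstract does not describe how the reports were submitted to GPT-4. As a rough illustration only, the sketch below shows how a single report could be passed to the model for proofreading through the OpenAI chat completions API in Python; the prompt wording, model string, and temperature are assumptions for this example and are not taken from the study.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical instruction; the study's actual prompt is not given in the abstract.
SYSTEM_PROMPT = (
    "You are proofreading radiology reports. List every error you find "
    "(omission, insertion, spelling, side confusion, or other) and quote "
    "the affected passage. If the report contains no errors, say so."
)

def detect_errors(report_text: str) -> str:
    """Ask GPT-4 to flag errors in one radiology report."""
    response = client.chat.completions.create(
        model="gpt-4",   # model family used in the study; exact snapshot unknown
        temperature=0,   # deterministic output is preferable for proofreading
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content

print(detect_errors("Chest radiograph: no pleural effusion on the rigth side."))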
Results: GPT-4 (detection rate, 82.7%; 124 of 150; 95% CI: 75.8, 87.9) matched the average detection performance of radiologists independent of their experience (senior radiologists, 89.3% [134 of 150; 95% CI: 83.4, 93.3]; attending physicians, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; residents, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; P value range, .522–.99). One senior radiologist outperformed GPT-4 (detection rate, 94.7%; 142 of 150; 95% CI: 89.8, 97.3; P = .006). GPT-4 required less processing time per radiology report than the fastest human reader in the study (mean reading time, 3.5 seconds ± 0.5 [SD] vs 25.1 seconds ± 20.1, respectively; P < .001; Cohen d = −1.08). The use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist ($0.03 ± 0.01 vs $0.42 ± 0.41; P < .001; Cohen d = −1.12).
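
As a consistency check on the reported statistics, the 95% CI given for GPT-4 (124 of 150 errors detected) is reproduced by a Wilson score interval. The abstract does not state which interval method or software the authors used, so the snippet below is only a sketch using statsmodels.

from statsmodels.stats.proportion import proportion_confint

detected, total = 124, 150  # errors GPT-4 detected out of the 150 inserted errors

# Wilson score interval; an assumption, since the abstract does not name the method.
low, high = proportion_confint(detected, total, alpha=0.05, method="wilson")
print(f"detection rate: {detected / total:.1%}")   # 82.7%
print(f"95% CI: {low:.1%} to {high:.1%}")          # ~75.8% to 87.9%, matching the abstract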
Conclusion: The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentially reducing work hours and cost.
