CIR

Revista Virtual Individual

Authors: Ali S. Tejani, MD • Yee Seng Ng, MD • Yin Xi, PhD • Jesse C. Rayan, MD


TEACHING POINTS
*Understanding bias in AI requires awareness of coexisting definitions of the term bias framed within the context of AI development and deployment.
*Algorithm fairness is a growing area of research in ML aimed at minimizing differences in model outcomes and potential discrimination involving protected groups, as defined by shared sensitive attributes (eg, age, race, sex).
*Data distribution shift should be anticipated after clinical AI deployment, and practices must be proactive in monitoring AI to prevent clinical action based on erroneous AI results owing to data shift.
*Implementing a formal governance structure to supervise model performance can aid efforts for prospective detection of AI bias.
*Attempting to generalize models developed on specific populations to other groups, especially in the setting of known training dataset bias or discriminatory predictions, introduces inequitable bias and risks augmentation of health disparities.
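One of the teaching points above recommends proactive monitoring for data distribution shift after clinical deployment. As a minimal illustrative sketch (not a method from the article), the sort of check a governance team might automate can be written as a two-sample Kolmogorov-Smirnov comparison between a model input feature at training time and the same feature in post-deployment data; the function names and the alert threshold here are hypothetical:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    combined = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, combined, side="right") / len(a)
    cdf_b = np.searchsorted(b, combined, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def shift_detected(train_values, deployed_values, threshold=0.1):
    """Flag a distribution shift when the KS statistic exceeds a
    (hypothetical) threshold chosen by the monitoring team."""
    return ks_statistic(train_values, deployed_values) > threshold

# Simulated example: a single intensity-like feature before and after
# a hypothetical scanner protocol change that shifts its mean.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)
shifted = rng.normal(0.5, 1.0, 5000)

print(shift_detected(train, train[:2500]))  # same distribution
print(shift_detected(train, shifted))       # shifted distribution
```

In practice such a check would run periodically on each monitored input feature (and on model output rates across protected groups), with the threshold calibrated to an acceptable false-alarm rate rather than the illustrative value used here.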
