
30 June 2024

Towards equitable AI in oncology

(Nature Reviews Clinical Oncology, IF: 81.1)

  • Vidya Sankar Viswanathan, Vani Parmar & Anant Madabhushi

  • Correspondence to: anantm@emory.edu

Abstract

Artificial intelligence (AI) stands at the threshold of revolutionizing clinical oncology, with considerable potential to improve early cancer detection and risk assessment, and to enable more accurate personalized treatment recommendations. However, a notable imbalance exists in the distribution of the benefits of AI, which disproportionately favour those living in specific geographical locations and in specific populations. In this Perspective, we discuss the need to foster the development of equitable AI tools that are both accurate in and accessible to a diverse range of patient populations, including those in low-income to middle-income countries. We also discuss some of the challenges and potential solutions in attaining equitable AI, including addressing the historically limited representation of diverse populations in existing clinical datasets and the use of inadequate clinical validation methods. Additionally, we focus on extant sources of inequity, including the type of model approach (such as deep learning and feature engineering-based methods), the implications of dataset curation strategies, the need for rigorous validation across a variety of populations and settings, and the risk of introducing contextual bias that comes with developing tools predominantly in high-income countries.