- Major U.S., Canadian and European radiology societies called for additional guidelines on the ethical use of artificial intelligence in imaging in a joint statement issued this week in the Journal of the American College of Radiology.
- The technology has pros and cons, but without a clear code of ethics, widespread use of AI-based intelligent and autonomous systems in radiology could increase errors and worsen health disparities, the groups warned.
- “AI developers ultimately need to be held to the same ‘do no harm’ standard as physicians,” the international coalition wrote. “The radiology community should start now to develop codes of ethics and practice for AI.”
The healthcare industry has been adopting AI tech across myriad use cases, including predictive analytics, administration and diagnostics and imaging.
This new statement, although weighty, is unlikely to put a significant damper on the fertile R&D environment. Data-intensive specialties like radiology and dermatology could benefit greatly from AI algorithms that comb through images, flagging subtle discrepancies or patterns a clinician might miss.
The FDA has approved more than 30 AI algorithms for use in healthcare to date, including Imagen's OsteoDetect, which identifies wrist fractures in bone images; IDx-DR, which detects diabetic retinopathy in eye scans; and GE Healthcare's AI-embedded X-ray system, which can flag a potential collapsed lung on chest images.
Of the more than 100 medical imaging AI startups operating in 2018, the majority focused on image analysis, according to Frost & Sullivan, a healthcare consultancy. And a recent study published in The Lancet Digital Health suggested AI could detect diseases from medical imaging with the same accuracy as a clinician, though it did not outperform them.
AI's adoption as a diagnostic tool is challenged by lingering questions around accuracy, the technology's role as a clinical aid versus a stand-in for a medical professional, and the importance of the human factor in a practice dogged by variability.
“AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases,” the statement said. “Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues.”
The American College of Radiology, European Society of Radiology, Radiological Society of North America, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists and American Association of Physicists in Medicine published the call to action. It raises concerns about how established data ethics would translate to AI, and about how human bias could surface in machine learning models, since algorithms pick up any prejudices present in the datasets on which they're trained.