Is AI as good as the professional eye in diagnosing from medical imaging?


Will deep learning algorithms eventually provide more reliable analyses of diagnostic tests than medical professionals? When it comes to medical imaging, a new meta-analysis cautiously concludes that the accuracy of AI diagnosis is now equivalent to that of trained doctors.

The exploratory study prompting the debate was published on September 25 as an open-access paper in The Lancet. Carried out by an international team of researchers led by UK ophthalmology specialists, the study—to the authors' knowledge, the first of its kind—evaluated the diagnostic accuracy of deep learning algorithms against that of health-care professionals in classifying diseases from medical imaging. It covered studies comparing the diagnostic performance of deep learning models and health-care professionals on medical imaging, for any disease, published between January 1, 2012 and June 6, 2019.

And it found that, for a subset of the studied comparisons, deep learning algorithms correctly spotted disease in 87% of cases, compared with 86% for professionals. In identifying healthy images, the rates were very similar, with AI and health-care professionals scoring 93% and 91% accuracy, respectively.
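In diagnostic-accuracy studies, these two figures correspond to the standard metrics of sensitivity (spotting disease) and specificity (clearing healthy images). A minimal sketch of how they are computed from a confusion matrix is below; the counts are invented for illustration and are not taken from the study.

```python
def sensitivity(true_pos, false_neg):
    """Fraction of diseased cases correctly flagged as diseased."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy cases correctly identified as healthy."""
    return true_neg / (true_neg + false_pos)

# Hypothetical confusion-matrix counts for a deep learning model,
# chosen only to mirror the percentages reported in the article:
tp, fn = 87, 13   # diseased images: 87 flagged, 13 missed
tn, fp = 93, 7    # healthy images: 93 cleared, 7 falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # sensitivity = 87%
print(f"specificity = {specificity(tn, fp):.0%}")  # specificity = 93%
```

Note that a model can trade one metric against the other by shifting its decision threshold, which is one reason meta-analyses report both figures rather than a single accuracy number.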

The authors, while listing a number of important caveats that may influence the findings, interpret "the diagnostic performance of deep learning models to be equivalent to that of health-care professionals". They add, however, that "poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology".

The study focuses on ways to improve the quality of studies that evaluate the performance of deep learning AI diagnostics, and it stresses the need for more such studies, but with the algorithms being used in "real-world" settings.

Source: The Lancet

P.W.