John Evans

Software spots 'ugly duckling' lesions in wide-field photos

A group of researchers has developed a new system that addresses a key limitation of many existing computer-aided diagnosis (CAD) systems for detecting skin cancer from images: rather than examining lesions in isolation, it compares all of a patient’s lesions to flag anomalous ones, the so-called 'ugly ducklings.'


In a press release from Harvard University in Cambridge, Mass., the researchers say this inter-lesion comparison approach more closely resembles the technique dermatologists use to identify suspicious lesions. As a result, they say, their digital tool is more accurate than its predecessors.


The researchers used their deep learning neural network to assign an “ugly duckling score” to each lesion based on how much it differed from other lesions on the same patient’s skin, identifying those most likely to be cancerous. Photo by: Wyss Institute at Harvard University


“We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination,” said the study’s first author Luis Soenksen, PhD, in the release. “This innovation allows photos of patients’ skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level.”


Dr. Soenksen is a postdoctoral fellow at the Wyss Institute for Biologically Inspired Engineering at Harvard.


He said the project was motivated in part by the recognition that primary care physicians lack effective tools, which delays care for some patients.


“It amazed me that people can die from melanoma simply because primary care doctors and patients currently don’t have the tools to find the ‘odd’ ones efficiently. I decided to take on that problem by leveraging many of the techniques I learned from my work in artificial intelligence at the Wyss and MIT [Massachusetts Institute of Technology],” he said.


According to the release, existing CAD systems for identifying suspicious pigmented lesions (SPLs) analyze lesions only individually, omitting the ugly duckling criterion that dermatologists apply when comparing several of a patient’s moles during an exam.


To ensure their system could be used by people without specialized dermatology training, the team created a database of more than 33,000 “wide field” images of patients’ skin that included backgrounds and other non-skin objects. That enabled the artificial intelligence system, a convolutional deep neural network (CDNN), to make diagnoses from photos taken with consumer-grade cameras.
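The release does not say which framework the team used. As a rough sketch only, fine-tuning a pretrained convolutional network for this kind of crop classification (the framework, architecture, class names, and hyperparameters below are illustrative assumptions, not the published implementation) might look like this in PyTorch:

# Hedged sketch: fine-tune a pretrained CNN to classify image crops.
# Framework (PyTorch/torchvision), architecture, and hyperparameters
# are illustrative assumptions, not the authors' published code.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # hypothetical labels: background, skin edge, pigmented lesion

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new dense head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of labelled crops."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()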


The system data flow is shown here. First, a photo is taken of a patient’s skin and fed into the algorithm. Then, all blob-like regions are detected, and images of those regions are fed into a deep classifier developed using a pretrained convolutional neural network (CNN) architecture and fine-tuned on a dataset of 33,980 images of skin, skin edges, and backgrounds. Detected pigmented lesions are classified as suspicious or nonsuspicious using probabilities generated by the network’s dense layer, together with an “ugly duckling” score calculated from the CNN’s deep features. Photo by: Wyss Institute at Harvard University
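Continuing the sketch above, the detect-then-classify flow the caption describes could be approximated with an off-the-shelf blob detector. The library choice (scikit-image) and all thresholds here are placeholders, not the study’s method:

# Sketch of the captioned flow: find blob-like regions in a wide-field
# photo, crop them, and score each crop with the fine-tuned network.
# Library choice (scikit-image) and thresholds are assumptions.
import torch
from skimage import io, color
from skimage.feature import blob_log

def detect_and_classify(photo_path: str, model) -> list:
    image = io.imread(photo_path)  # assumes an RGB photo
    gray = color.rgb2gray(image)
    blobs = blob_log(gray, max_sigma=30, threshold=0.1)  # rows of (y, x, sigma)
    results = []
    for y, x, sigma in blobs:
        r = max(int(3 * sigma), 32)
        y0, x0 = max(int(y) - r, 0), max(int(x) - r, 0)
        crop = image[y0:int(y) + r, x0:int(x) + r]
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        tensor = torch.nn.functional.interpolate(tensor, size=(224, 224))
        with torch.no_grad():
            probs = torch.softmax(model(tensor), dim=1)  # dense-layer probabilities
        results.append(((int(x), int(y)), probs.squeeze(0).tolist()))
    return results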

The images they used contained both SPLs and non-suspicious skin lesions labelled and confirmed by a consensus of three board-certified dermatologists. After training on the database and subsequent refinement and testing, the system could distinguish suspicious from non-suspicious lesions with 90.3% sensitivity and 89.9% specificity, improving upon previously published systems.
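For reference, sensitivity is the share of truly suspicious lesions the system flags, and specificity is the share of non-suspicious lesions it correctly clears. A quick worked example (the counts below are invented to land near the reported rates; they are not the study’s data):

# Sensitivity and specificity from confusion-matrix counts.
# Counts are illustrative only, chosen to reproduce ~90.3%/89.9%.
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)  # suspicious lesions correctly flagged
    specificity = tn / (tn + fp)  # non-suspicious lesions correctly cleared
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=93, fn=10, tn=89, fp=10)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # 90.3%, 89.9%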


At this stage, the system was still looking at individual lesions. Next, to capture the ugly duckling criterion, the team created a 3D “map” of all the lesions in a given image and calculated how far each lesion’s features were from “typical.”


The less “typical” a given lesion was compared with the others in an image, the further it sat from the centre of the 3D space. Quantifying the ugly duckling factor in this way, they say, lets deep learning networks take over the challenging and time-consuming task of identifying and scrutinizing the differences between all the pigmented lesions on a single patient.
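One plausible way to turn that geometric picture into a score (an illustration under our own assumptions, not the published formulation) is to measure each lesion’s distance from the centroid of the patient’s lesion embeddings:

# Sketch of an "ugly duckling score": how far each lesion's deep-feature
# embedding sits from the centroid of all lesions on the same patient.
# This is one plausible formulation, not necessarily the published one.
import numpy as np

def ugly_duckling_scores(features: np.ndarray) -> np.ndarray:
    """features: (n_lesions, n_features) array of CNN embeddings for one
    patient's lesions. Returns a score per lesion, normalized to [0, 1]."""
    centroid = features.mean(axis=0)  # the patient's "typical" lesion
    distances = np.linalg.norm(features - centroid, axis=1)
    return distances / (distances.max() + 1e-9)

# Example: 5 lesions with 4-dimensional embeddings (random stand-ins).
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))
emb[2] += 3.0  # make one lesion an outlier
print(ugly_duckling_scores(emb))  # lesion 2 gets the highest score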


The digital tool's accuracy was then compared against evaluations by three dermatologists. Each human evaluator examined 135 wide-field photos from 68 patients and assigned each lesion an "oddness" score indicating how concerning it looked. The algorithm then analyzed and scored the same images. When the assessments were compared, the researchers found the algorithm agreed with the dermatologists' consensus 88% of the time, and with the individual dermatologists 86% of the time.
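Percent agreement here is simply the fraction of lesions on which two raters assign the same label. A toy illustration (the labels below are invented; only the 88% figure comes from the study):

# Percent agreement between the algorithm and a dermatologist consensus
# on suspicious (1) / non-suspicious (0) calls. Labels are invented.
def percent_agreement(labels_a, labels_b):
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

algorithm = [1, 0, 1, 1, 0, 0, 1, 0]
consensus = [1, 0, 1, 0, 0, 0, 1, 0]
print(f"{percent_agreement(algorithm, consensus):.0%}")  # 88% on this toy set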


“This high level of consensus between artificial intelligence and human clinicians is an important advance in this field, because dermatologists’ agreement with each other is typically very high, around 90 per cent,” said co-author Jim Collins, PhD, in the release.

“Essentially, we’ve been able to achieve dermatologist-level accuracy in diagnosing potential skin cancer lesions from images that can be taken by anybody with a smartphone, which opens up huge potential for finding and treating melanoma earlier.”


Dr. Collins is a core faculty member of the Wyss Institute and co-leader of its Predictive BioAnalytics Initiative. He is also the Termeer Professor of Medical Engineering and Science at MIT.


To facilitate collaboration, the research team has made the algorithm freely available on the software repository site GitHub.


