OCT ‘virtual biopsies’ could reduce need for invasive procedures

Researchers at Stanford Medicine in Stanford, Calif., have developed a trained neural network that uses micro-registered optical coherence tomography (OCT) to penetrate tissue and create a high-resolution, three-dimensional reconstruction of cells. Testing has shown the images created can be cross-sectioned to mimic those generated by a standard biopsy and hematoxylin and eosin (H&E) staining.

While deep learning has shown promise in the virtual staining of unstained tissue slides, a true virtual biopsy requires staining images taken from intact tissue. In this study, the researchers developed a neural network-based OCT method that can take a two-dimensional (2D) H&E slide and find the corresponding section in a three-dimensional (3D) OCT image of the original fresh tissue. This technique can provide clinicians with information more quickly than a conventional biopsy, in which a specimen must be sent to a pathologist for staining and assessment, they report.

Published in the Apr. 10 issue of Science Advances, the study outlines how the authors developed new hardware to collect data and new processing methods for OCT scans, which are typically used by ophthalmologists to examine the back of the eye.

In an article on the Stanford website, Adam de la Zerda, PhD, an associate professor of structural biology and the senior author of the study, said, “We kept improving and improving the quality of the image, letting us see smaller and smaller details of a tissue. And we realized the OCT images we were creating were really getting very similar to the H&Es in terms of what they could show.”

“We’ve not only created something that can replace the current gold-standard pathology slides for diagnosing many conditions, but we actually improved the resolution of these scans so much that we start to pick up information that would be extremely hard to see otherwise,” he added.

The higher resolution of the OCT images meant H&Es weren’t needed to diagnose disease, but Dr. de la Zerda and his colleagues thought clinicians would be more likely to adopt OCT if the images looked familiar.

“Every physician in a hospital is very much used to reading H&Es, and it was important to us that we translate OCT images into something that physicians were already comfortable with—rather than an entirely new type of image,” he said on the Stanford website.

The method was developed by Yonatan Winetraub, PhD, a former graduate student in the de la Zerda lab who now leads his own research lab at Stanford focusing in part on virtual biopsies.

“The uniqueness of this work lies in the method we developed to align OCT and H&E image pairs, letting machine-learning algorithms train on real tissue sections and providing clinicians with more accurate virtual biopsies,” Dr. Winetraub said in the news article.

According to co-author Kavita Sarin, MD, PhD, an associate professor of dermatology at Stanford Medicine, “This [development] has the potential to transform how we diagnose and monitor concerning skin lesions and diseases in the clinic.”
