Artificial Intelligence Can Map 2 Retinal Fluid Types

Optical coherence tomography (OCT) scan of a patient with retinal edema. (Photo by: BSIP/Universal Images Group via Getty Images)
Novel deep learning architecture trained with labeled and unlabeled OCT image data is able to segment intraretinal and subretinal fluid, a report shows.

A deep learning system can automate retinal fluid segmentation on optical coherence tomography (OCT) images with fast, objective, and accurate results, according to research published in the British Journal of Ophthalmology. Many prior deep learning models could assess either intraretinal or subretinal fluid, but the system reviewed in this study analyzed both at once.

The architecture employs a DenseNet backbone and consists of 2 major parts. First, a retinal fluid segmentation network roughly predicts regions of fluid using a decoder with embedded atrous spatial pyramid pooling (ASPP) to help build a global feature map; reverse attention (RA) modules then refine the edges of fluid-filled areas. The second part is a semi-supervised deep learning framework that boosts the network's capability by introducing randomly sampled unlabeled OCT images into its training, a strategy the investigators call "randomly selected propagation."
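The semi-supervised sampling idea can be illustrated with a minimal Python sketch. This is not the authors' code: the function name, the batch layout, and the stand-in pseudo-label generator are all hypothetical. It only shows the general pattern of mixing a random sample of pseudo-labeled unlabeled images into each labeled training batch.

```python
import random

def make_batches(labeled, unlabeled, pseudo_label_fn, n_unlabeled, seed=0):
    """Build mixed training batches: each labeled (image, mask) pair is
    combined with a fresh random sample of unlabeled images, which get
    provisional masks from pseudo_label_fn (a stand-in for predictions
    made by the partially trained segmentation network)."""
    rng = random.Random(seed)
    batches = []
    for image, mask in labeled:
        # Randomly select unlabeled images for this batch.
        extra = rng.sample(unlabeled, k=min(n_unlabeled, len(unlabeled)))
        # Attach pseudo-labels so they can be trained on alongside
        # the expert-annotated pair.
        pseudo = [(img, pseudo_label_fn(img)) for img in extra]
        batches.append([(image, mask)] + pseudo)
    return batches
```

In a real training loop, `pseudo_label_fn` would be the current model's own prediction, and the random sample would be redrawn each epoch so the network gradually sees the whole unlabeled pool.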

This semi-supervised retinal fluid segmentation deep network (Ref-Net) was trained on an in-house series of 2814 OCT images from 141 patients, a dataset built in partnership with Shanghai General Hospital and Shanghai Eye Disease Prevention and Treatment Center. De-identified scans containing intraretinal fluid, subretinal fluid, or both were retrospectively gathered from July 2018 to June 2020.1

Semi-supervised Ref-Net outperformed most of the other current segmentation models tested: Attention-UNet, CE-Net, U-Net, U-Net++, UTNet, and TransUNet. Investigators used the Dice similarity coefficient to measure overlap between predicted and expert-drawn fluid regions, and also analyzed sensitivity, specificity, and mean absolute error (MAE). For subretinal fluid, semi-supervised Ref-Net scored higher than the other models with an 81.2% Dice value and 87.3% sensitivity, placed in the top 4 with 98.8% specificity, and tied for best MAE at 1.1%. For intraretinal fluid, Ref-Net led the field in sensitivity at 84.5%, while semi-supervised Ref-Net achieved the best Dice value (78.0%) and MAE (0.5%), along with one of the top specificity scores (99.3%).
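For reference, the 4 benchmarks above have standard definitions for binary segmentation masks. A minimal NumPy sketch (not the study's implementation) computes them from a predicted mask and an expert-drawn ground-truth mask, assuming both masks contain at least some fluid and some background pixels:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, sensitivity, specificity, and MAE for binary fluid masks.
    pred and truth are same-shape arrays of 0s and 1s."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)    # fluid pixels correctly flagged
    tn = np.sum(~pred & ~truth)  # background correctly left alone
    fp = np.sum(pred & ~truth)   # background wrongly flagged as fluid
    fn = np.sum(~pred & truth)   # fluid pixels missed
    dice = 2 * tp / (2 * tp + fp + fn)   # overlap between the 2 masks
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    mae = np.mean(np.abs(pred.astype(float) - truth.astype(float)))
    return dice, sensitivity, specificity, mae
```

Dice rewards overlap between the predicted and reference fluid regions, while MAE penalizes every mismatched pixel equally, which is why a model can post a very high specificity yet a middling Dice score on small fluid pockets.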

The study also evaluated how the number of labeled OCT images affects the accuracy of semi-supervised Ref-Net. As the number of labeled images grew from 40 to 100, performance increased; by 80 images the model matched the accuracy of a human ophthalmologist, 1 of the 3 experts who manually segmented retinal fluids in this investigation. With 160 labeled images, the model segmented fluids as precisely as 2 of the 3 experienced specialists.

Participants with age-related macular degeneration (AMD) displayed more intraretinal fluid in the nasal, temporal, and inferior quadrants than in the superior quadrant. Individuals with diabetic macular edema (DME) showed more intraretinal fluid in the inferior quadrant than in the nasal quadrant, and more subretinal fluid in the nasal, temporal, and inferior quadrants than superiorly. In eyes with DME, subretinal fluid also appeared less often and in smaller volumes than in eyes with AMD or retinal vein occlusion (RVO).

A limitation of this study was the relatively small number of labeled OCT scans used to train Ref-Net. In addition, only 2 fluid types were analyzed; adding a pigment epithelial detachment class label would allow a more comprehensive fluid picture. When tested on the independent, publicly available Kermany dataset, the model did not perform as well as on the in-house images, possibly because of different scanners and image resolution ratios, the investigators speculate.


1. Li F, Pan W, Xiang W, Zou H. Automatic segmentation of multitype retinal fluid from optical coherence tomography images using semisupervised deep learning network. Br J Ophthalmol. Published online June 13, 2022. doi:10.1136/bjophthalmol-2022-321348