
中华眼科医学杂志(电子版) ›› 2024, Vol. 14 ›› Issue (05): 262-268. doi: 10.3877/cma.j.issn.2095-2007.2024.05.002

论著

深度学习在后发性白内障混浊分析中的应用研究
胡师尧1,2, 陈媛媛2, 李辰3, 严宏2
  1. 710049 西安交通大学电信学部计算机科学与技术学院2020级博士研究生
    2. 710004 西安市人民医院(西安市第四医院)眼科 陕西省眼科医院
    3. 710049 西安交通大学电信学部计算机科学与技术学院
  • 收稿日期:2024-07-30 出版日期:2024-10-28
  • 通信作者: 严宏
  • 基金资助:
    国家自然科学基金(81873674)、陕西省重点研发计划项目(2021ZDLSF02-08)、西安市创新能力强基计划(21YXYJ0005)

The application of deep learning in the analysis of posterior capsule opacity after cataract

Shiyao Hu1,2, Yuanyuan Chen2, Chen Li3, Hong Yan2

  1. Doctoral Candidate, Class of 2020, School of Computer Science and Technology, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
    2. Department of Ophthalmology, Xi'an People's Hospital (Xi'an Fourth Hospital), Shaanxi Eye Hospital, Affiliated People's Hospital of Northwest University, Xi'an 710004, China
    3. School of Computer Science and Technology, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
  • Received:2024-07-30 Published:2024-10-28
  • Corresponding author: Hong Yan
引用本文:

胡师尧, 陈媛媛, 李辰, 严宏. 深度学习在后发性白内障混浊分析中的应用研究[J/OL]. 中华眼科医学杂志(电子版), 2024, 14(05): 262-268.

Shiyao Hu, Yuanyuan Chen, Chen Li, Hong Yan. The application of deep learning in the analysis of posterior capsule opacity after cataract[J/OL]. Chinese Journal of Ophthalmologic Medicine (Electronic Edition), 2024, 14(05): 262-268.

目的

探索深度学习在人工晶状体(IOL)后囊混浊分析中的应用。

方法

收集2020年9月至2023年7月于陕西省眼科医院行白内障摘除联合IOL植入术的62例(100只眼)患者术后数月至数年的裂隙灯显微镜后照法图像100张。其中,男性24例(34只眼),女性38例(66只眼);年龄41~78岁,平均年龄(58.7±9.82)岁。晶状体后囊混浊分析框架包含IOL区域分割、IOL中心定位、混浊区域分割及混浊表征提取四个主要模块。其中,IOL区域分割和混浊区域分割均采用U-Net作为分割模型,所使用的数据集包含100张白内障术后IOL眼后囊膜的裂隙灯后照法图像;IOL中心定位使用几何矩算法计算;混浊表征提取使用以ResNet为主干的模型预测患者的3个视觉质量分析系统(OQAS)指标。以测试集人工标注标签为参照,评估模型IOL区域分割的结果,使用Python软件计算交并比(IoU)、Dice系数及召回率。以测试集人工标注标签为参照,评估模型混浊区域分割的结果,使用Python软件计算准确率、精确率、召回率及f1-score。以测试集OQAS指标真实值为参照,评估模型的图像回归预测结果,使用Python软件计算平均绝对误差。
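The Methods above state that IoU, Dice coefficient, recall, accuracy, precision, F1-score and mean absolute error were computed in Python against the manually annotated test labels. As a minimal illustrative sketch (assuming binary NumPy masks of identical shape; this is not the authors' published code), these metrics can all be derived from confusion-matrix counts:

```python
# Minimal sketch (assumption: binary NumPy masks of identical shape) of the
# evaluation metrics used in this study: IoU, Dice, accuracy, precision,
# recall, F1-score and mean absolute error (MAE).
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    iou = tp / (tp + fp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    return {"IoU": iou, "Dice": dice, "accuracy": accuracy,
            "precision": precision, "recall": recall, "f1": f1}

def mean_absolute_error(pred, target) -> float:
    # MAE between predicted and measured OQAS metric values.
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(target))))
```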

结果

在20张测试集图像上,IOL区域分割的IoU为0.9117,Dice系数为0.9527,召回率为0.9524,f1-score为0.9527;混浊区域分割的平均准确率为0.9690,平均精确率为0.9329,平均召回率为0.9264,平均f1-score为0.9191。训练完成的模型在7张测试集图像上进行消融实验,模型预测调制传递函数、斯特列尔比及客观散射指数的平均绝对误差分别为:原图+热力图+混浊区域指导时2.4319、0.0154、3.4032;原图+热力图指导时4.3300、0.0166、2.9997;原图+混浊区域指导时10.5013、0.0161、2.8775;仅原图指导时3.8151、0.0195、3.7067。在输入中分别单独加入热力图或混浊区域时,模型预测客观散射指数的平均绝对误差小于其他模型变体。

结论

深度学习模型在晶状体后囊混浊分析中采用IOL区域分割、IOL中心定位、混浊区域分割及混浊表征提取四个主要模块,该模型能够实现自动化预处理,得到精准的混浊分割结果,并能从混浊图像中提取与视觉质量指标相关的表征。

Objective

The aim of this study was to explore the application of deep learning in the analysis of posterior capsule opacification (PCO) following intraocular lens (IOL) implantation.

Methods

Slit-lamp retroillumination images of 100 eyes from 62 patients who underwent cataract extraction combined with IOL implantation at the Shaanxi Eye Hospital between September 2020 and July 2023 were collected. The cohort included 24 males (34 eyes) and 38 females (66 eyes), with an average age of (58.7±9.82) years (range, 41 to 78 years). The PCO analysis framework consisted of four main modules: IOL region segmentation, IOL center localization, opacification region segmentation, and extraction of opacification features. Both the IOL region segmentation and opacification region segmentation modules employed the U-Net model, trained on a dataset of 100 slit-lamp retroillumination images of posterior capsules with implanted IOLs. The IOL center was localized using the geometric moment algorithm, while opacification feature extraction used a ResNet-based model to predict three visual quality metrics for each patient. The performance of the IOL region segmentation model was evaluated against manually labeled test-set annotations; Intersection-over-Union (IoU), Dice coefficient, and recall were calculated using Python software. The opacification region segmentation results were similarly evaluated with accuracy, precision, recall, and F1-score. For the image regression task predicting OQAS (Optical Quality Analysis System) metrics, the mean absolute error (MAE) against the measured test-set values was computed using Python software.
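The IOL center localization step relies on the geometric (image) moment algorithm. Below is a minimal sketch of how a centroid can be obtained from a binary IOL mask with OpenCV; the function name and the synthetic example are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): locate the IOL centre from a binary
# segmentation mask using zeroth- and first-order image moments.
import cv2
import numpy as np

def iol_center(mask: np.ndarray) -> tuple[float, float]:
    """mask: binary IOL mask of shape (H, W), values in {0, 1} or {0, 255}."""
    m = cv2.moments((mask > 0).astype(np.uint8), binaryImage=True)
    if m["m00"] == 0:
        raise ValueError("Empty mask: no IOL region found")
    cx = m["m10"] / m["m00"]   # x coordinate of the centroid
    cy = m["m01"] / m["m00"]   # y coordinate of the centroid
    return cx, cy

# Example with a synthetic circular mask standing in for a U-Net output
mask = np.zeros((512, 512), dtype=np.uint8)
cv2.circle(mask, (260, 250), 120, 1, -1)
print(iol_center(mask))  # approximately (260.0, 250.0)
```

Because the centroid follows directly from the segmentation mask, this step requires no additional training.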

Results

In the IOL region segmentation task, the 20-image test set achieved an IoU of 0.9117, a Dice coefficient of 0.9527, a recall of 0.9524, and an F1-score of 0.9527. In the opacification region segmentation task, the test set achieved an average accuracy of 0.9690, an average precision of 0.9329, an average recall of 0.9264, and an average F1-score of 0.9191. In the visual quality prediction task, an ablation on 7 test images compared four input variants; the mean absolute errors for the modulation transfer function, Strehl ratio, and objective scatter index were 2.4319, 0.0154, and 3.4032 with original image + heatmap + opacification region, 4.3300, 0.0166, and 2.9997 with original image + heatmap, 10.5013, 0.0161, and 2.8775 with original image + opacification region, and 3.8151, 0.0195, and 3.7067 with the original image alone. Adding the heatmap or the opacification region individually to the input yielded a lower mean absolute error for the objective scatter index than the other model variants.
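The ablation above varies which guidance maps accompany the original retroillumination image. One plausible way to realise such input variants (a hedged sketch only; the channel layout, the ResNet-18 backbone choice and the function names are assumptions, not the published model) is to stack the image, heatmap and opacification mask as input channels of a ResNet whose stem and head are adapted for three-value regression:

```python
# Hedged sketch (not the authors' code): a ResNet-18 regressor that accepts a
# grayscale retroillumination image optionally stacked with a guidance heatmap
# and an opacification mask, and predicts the three OQAS metrics
# (MTF cutoff, Strehl ratio, objective scatter index).
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_regressor(in_channels: int) -> nn.Module:
    model = resnet18(weights=None)
    # Adapt the stem to the number of stacked input channels.
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                            padding=3, bias=False)
    # Replace the classification head with a 3-output regression head.
    model.fc = nn.Linear(model.fc.in_features, 3)
    return model

# Ablation variants: image only; image + heatmap; image + opacity mask;
# image + heatmap + opacity mask (1, 2, 2 and 3 channels respectively).
image   = torch.rand(4, 1, 224, 224)   # original retroillumination image
heatmap = torch.rand(4, 1, 224, 224)   # hypothetical guidance heatmap
opacity = torch.rand(4, 1, 224, 224)   # opacification segmentation mask

x = torch.cat([image, heatmap, opacity], dim=1)   # full-guidance variant
model = build_regressor(in_channels=x.shape[1])
print(model(x).shape)  # torch.Size([4, 3]) -> MTF, Strehl ratio, OSI
```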

Conclusions

The deep learning model for PCO analysis incorporates four key modules: IOL region segmentation, IOL center localization, opacification region segmentation, and opacification feature extraction. This model enables automated preprocessing, precise segmentation of opacified regions, and the extraction of features related to visual quality metrics from opacified images.
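To make the four-module organisation concrete, the sketch below chains the stages in the order described; every function body is a hypothetical placeholder rather than the released implementation:

```python
# Schematic sketch of the four-module PCO analysis pipeline described above;
# all functions are hypothetical placeholders, not the authors' released code.
import numpy as np

def segment_iol_region(image: np.ndarray) -> np.ndarray:
    """U-Net based IOL region segmentation (placeholder)."""
    ...

def locate_iol_center(iol_mask: np.ndarray) -> tuple[float, float]:
    """Geometric-moment centre localization (see the centroid sketch above)."""
    ...

def segment_opacity(image: np.ndarray, iol_mask: np.ndarray) -> np.ndarray:
    """U-Net based opacification region segmentation (placeholder)."""
    ...

def predict_oqas_metrics(image: np.ndarray, opacity_mask: np.ndarray) -> np.ndarray:
    """ResNet-based regression of MTF, Strehl ratio and OSI (placeholder)."""
    ...

def analyse(image: np.ndarray) -> np.ndarray:
    iol_mask = segment_iol_region(image)              # 1. IOL region segmentation
    center = locate_iol_center(iol_mask)              # 2. IOL centre localization
    # The centre is shown here only to illustrate ordering; its exact
    # downstream use (e.g. cropping/alignment) is not detailed in the abstract.
    opacity_mask = segment_opacity(image, iol_mask)   # 3. opacity segmentation
    return predict_oqas_metrics(image, opacity_mask)  # 4. feature extraction
```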

图1 晶状体后囊混浊分析框架 整个框架依次可分为人工晶状体区域分割、人工晶状体中心定位、混浊区域分割及混浊表征提取四个模块
图2 测试集人工晶状体分割的结果 图2A~图2D示输入模型的测试集图片,图2E~图2H示白色区域为模型对人工晶状体区域的分割结果
图3 混浊区域分割的结果 图3A~图3D为输入图像,图3E~图3H为人工标注的标签,图3I~图3L为模型输出的分割结果,每一列分别对应一个测试样本。其中紫色区域为混浊,蓝色区域为非混浊,白色区域为反光,灰色区域为不确定
表1 视觉质量分析指标预测实验结果