Abstract
2D/3D medical image registration is a key technology for 3D real-time navigation in orthopedic surgery. However, traditional 2D/3D registration methods based on iterative optimization require many iterations and cannot meet surgeons' demands for real-time registration during operations. To address this problem, a pose regression network based on an autoencoder was proposed. The network captures geometric pose information through latent-space decoding, quickly regresses the 3D pose of the preoperative spine corresponding to the intraoperative X-ray image, and generates the final registration image through reprojection. By introducing new loss functions, the model was constrained in a coarse-to-fine registration manner, ensuring the accuracy of the pose regression. 100 sets of CT scans were extracted from the CTSpine1K spine dataset for 10-fold cross-validation. Experimental results show that the registration image generated by the proposed model achieves a Mean Absolute Error (MAE) of 0.04 with respect to the X-ray image, a mean Target Registration Error (mTRE) of 1.16 mm, and a per-frame time of 1.7 s. Compared with traditional optimization-based methods, the proposed model shortens the registration time substantially; compared with learning-based methods, it maintains high registration accuracy while registering quickly. Therefore, the proposed model can meet the requirements of intraoperative real-time, high-precision registration.
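The abstract describes the pipeline only at a high level. As a rough illustration of the described idea, the PyTorch sketch below shows one plausible reading: an encoder maps the intraoperative X-ray to a latent code, a decoding branch regresses a 6-DoF spine pose, and a coarse-to-fine loss combines a pose-space term with an image-space MAE term between the reprojected image (DRR) and the real X-ray; an mTRE helper follows the metric's conventional definition. All names, layer sizes, and loss weights here are hypothetical, and the differentiable reprojector that would render the DRR from the CT volume and the predicted pose is assumed rather than shown; this is not the authors' implementation.

```python
# Illustrative sketch only (assumed, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseRegressor(nn.Module):
    """Encoder + pose head: X-ray image -> 6-DoF pose (rx, ry, rz, tx, ty, tz)."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # Encoder of an autoencoder-style backbone: 1x256x256 X-ray -> latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 32x128x128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 64x64x64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 128x32x32
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # -> 256x16x16
            nn.Flatten(),
            nn.Linear(256 * 16 * 16, latent_dim),
        )
        # Decoding branch that regresses the geometric pose from the latent code.
        self.pose_head = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )

    def forward(self, xray: torch.Tensor) -> torch.Tensor:
        return self.pose_head(self.encoder(xray))

def coarse_to_fine_loss(pred_pose, gt_pose, reprojection, xray, w_fine=1.0):
    """Coarse pose-space MSE plus fine image-space MAE between the reprojection
    (from an assumed differentiable DRR renderer, e.g.
    reprojection = render_drr(ct_volume, pred_pose)) and the real X-ray."""
    coarse = F.mse_loss(pred_pose, gt_pose)   # supervises the 6-DoF pose directly
    fine = F.l1_loss(reprojection, xray)      # image-space MAE term
    return coarse + w_fine * fine

def mtre(points: torch.Tensor, T_gt: torch.Tensor, T_pred: torch.Tensor):
    """mean Target Registration Error (mm): mean Euclidean distance between
    anatomical target points (Nx3, in mm) mapped by the ground-truth and the
    predicted 4x4 rigid transforms."""
    homog = torch.cat([points, torch.ones_like(points[:, :1])], dim=1)  # Nx4
    p_gt = (homog @ T_gt.T)[:, :3]
    p_pred = (homog @ T_pred.T)[:, :3]
    return (p_gt - p_pred).norm(dim=1).mean()
```

A single training step under this reading would regress the pose from the X-ray, render the DRR at the predicted pose, and backpropagate the combined loss; at test time only one forward pass and one reprojection are needed, which is consistent with the per-frame time reported in the abstract.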
Authors
XU Shaokang; ZHANG Zhancheng; YAO Haonan; ZOU Zhiwei; ZHANG Baocheng
(School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, Jiangsu 215009, China; Suzhou Key Laboratory of Virtual Reality Intelligent Interaction and Application Technology, Suzhou University of Science and Technology, Suzhou, Jiangsu 215009, China; Department of Orthopaedics, General Hospital of Central Theater Command, Wuhan, Hubei 430070, China)
Source
Journal of Computer Applications (《计算机应用》)
Indexed in CSCD and the Peking University Core Journals list (北大核心)
2023, No. 2, pp. 589-594 (6 pages)
Funding
Supported by the National Natural Science Foundation of China (61772237).