Funding: Supported by research grants from the National Key Research and Development Program of China (No. 2020YFE0204400), the National Natural Science Foundation of China (No. 82271042, No. 52203191), and the Zhejiang Province Key Research and Development Program (No. 2023C03090).
Abstract: ●AIM: To determine the teaching effects of a real-time three-dimensional (3D) visualization system in the operating room for early-stage phacoemulsification training. ●METHODS: A total of 10 first-year postgraduate ophthalmology residents were included. All the residents were novices to cataract surgery. Real-time cataract surgical observations were performed using a custom-built 3D visualization system. The training lasted 4 weeks (32 hours) in all. A modified International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric (ICO-OSCAR) covering 4 specific steps of cataract surgery was applied. Self-assessment (self) and expert-assessment (expert) were performed on microsurgical attempts in the wet lab for each participant. ●RESULTS: Compared with pre-training assessments (self 3.2±0.8, expert 2.5±0.6), the overall mean post-training scores (self 5.2±0.4, expert 4.7±0.6) were significantly improved after real-time observation training with the 3D visualization system (P<0.05). Scores on all 4 surgical items were significantly improved in both self- and expert-assessment after training (P<0.05). ●CONCLUSION: The 3D observation training provides novice ophthalmic residents with a better understanding of intraocular microsurgical techniques. It is a useful tool for improving the efficiency of surgical education.
Abstract: Facial wound segmentation plays a crucial role in preoperative planning and optimizing patient outcomes in various medical applications. In this paper, we propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR dataset and addresses the challenge of data imbalance through extensive experimentation with different loss functions. To achieve accurate segmentation, we conducted thorough experiments and selected a high-performing model from the trained models. The selected model demonstrates exceptional segmentation performance for complex 3D facial wounds. Furthermore, based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers and compare it to the results of the previous study. Our method achieved a remarkable accuracy of 0.9999993% on the test set, surpassing the performance of the previous method. Based on this result, we use 3D printing technology to illustrate the shape of the wound filling. The outcomes of this study have significant implications for physicians involved in preoperative planning and intervention design. By automating facial wound segmentation and improving the accuracy of wound-filling extraction, our approach can assist in carefully assessing and optimizing interventions, leading to enhanced patient outcomes. Additionally, it contributes to advancing facial reconstruction techniques by utilizing machine learning and 3D bioprinting for printing skin tissue implants. Our source code is available at https://github.com/SIMOGroup/WoundFilling3D.
Funding: Funded in part by the Key Project of Natural Science Research for Universities of Anhui Province of China (No. 2022AH051720); in part by the Science and Technology Development Fund, Macao SAR (Grant Nos. 0093/2022/A2, 0076/2022/A2, and 0008/2022/AGJ); and in part by the China University Industry-University-Research Collaborative Innovation Fund (No. 2021FNA04017).
Abstract: This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through a collection of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds, but there has been a lack of focus on making the most of the many existing techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each sample, so that every point cloud in the data set is augmented with either PolarMix or Mix3D. The experimental results validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
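RandomFusion's core operation, as described above, is a per-sample random draw between two existing augmentations. The following is a minimal sketch of that selection logic; the two augmentation functions are hypothetical simplified stand-ins for PolarMix and Mix3D (the real methods mix two lidar scans in more elaborate ways), not the paper's implementation:

```python
import random

import numpy as np


def polarmix_stub(points: np.ndarray, other: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: exchange azimuth sectors between two scans,
    # loosely inspired by PolarMix's polar-sector swapping.
    angles = np.arctan2(points[:, 1], points[:, 0])
    other_angles = np.arctan2(other[:, 1], other[:, 0])
    keep = points[angles < 0]
    borrow = other[other_angles >= 0]
    return np.concatenate([keep, borrow], axis=0)


def mix3d_stub(points: np.ndarray, other: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: out-of-context concatenation of two scenes,
    # the basic idea behind Mix3D.
    return np.concatenate([points, other], axis=0)


def random_fusion(points: np.ndarray, other: np.ndarray, rng: random.Random) -> np.ndarray:
    """For each sample, randomly pick exactly one augmentation and apply it."""
    aug = rng.choice([polarmix_stub, mix3d_stub])
    return aug(points, other)


rng = random.Random(0)
scan_a = np.random.default_rng(1).normal(size=(100, 3))
scan_b = np.random.default_rng(2).normal(size=(100, 3))
augmented = random_fusion(scan_a, scan_b, rng)
```

The design point is that the random choice happens once per sample, so over a training epoch the model sees a mixture of both augmentation styles at no extra cost per sample.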
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. U20A20197 and 62306187, and by the Foundation of Ministry of Industry and Information Technology TC220H05X-04.
Abstract: In recent years, semantic segmentation of 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly over the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. It is therefore very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods focus either on local feature aggregation or on long-range context dependency, but fail to directly establish a global-local feature extractor for point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further weight crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that SGT-Net improves the performance of point cloud semantic segmentation.
Abstract: Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the Industry Foundation Classes (IFC) building information modeling standard, the City Geography Markup Language (CityGML), the indoor modeling language IndoorGML, and other models are compared and analyzed. The CityGML and IndoorGML models face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. It is proposed to combine the semantic information of the model objects to effectively partition and organize indoor and outdoor spatial 3D model data and to construct an indoor and outdoor data organization mechanism of "chunk-layer-subobject-entrance-area-detail object." This method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
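The proposed "chunk-layer-subobject-entrance-area-detail object" organization is essentially a containment hierarchy. The sketch below shows how such a hierarchy might be modeled in code; the class and field names, and the exact nesting of entrances versus areas, are hypothetical illustrations rather than the paper's schema:

```python
from dataclasses import dataclass, field


@dataclass
class DetailObject:
    name: str  # e.g. a door or a piece of furniture


@dataclass
class Area:
    name: str
    details: list = field(default_factory=list)  # DetailObject items


@dataclass
class Entrance:
    name: str  # links indoor and outdoor space


@dataclass
class SubObject:
    name: str  # e.g. a functional unit of a building
    entrances: list = field(default_factory=list)  # Entrance items
    areas: list = field(default_factory=list)      # Area items


@dataclass
class Layer:
    level: int  # e.g. a building storey
    sub_objects: list = field(default_factory=list)  # SubObject items


@dataclass
class Chunk:
    name: str  # a tile of the outdoor scene containing buildings
    layers: list = field(default_factory=list)  # Layer items


# Build a tiny indoor/outdoor scene
chunk = Chunk("block_A", layers=[
    Layer(1, sub_objects=[
        SubObject("lobby",
                  entrances=[Entrance("main_gate")],
                  areas=[Area("reception", [DetailObject("desk")])])
    ])
])
```

Partitioning the scene this way lets a viewer load outdoor chunks coarsely and descend into layers and areas only when the camera moves indoors.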
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2018YFE0206900, the National Natural Science Foundation of China under Grant No. 61871440, and the CAAI-Huawei MindSpore Open Fund.
Abstract: Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, the occurrence of multiple tumours, and variable tumour shapes and sizes. Popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and the Transformer. The former cannot extract global image features effectively, while the latter lacks inductive bias and involves complicated computation for 3D volume data. Existing hybrid CNN-Transformer networks provide only limited performance improvement, or even poorer segmentation performance than a pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlation among region features at different levels extracted by the CNN. Then, the memory structure filters and combines this information to exclude similar regions and detect multiple tumours. Finally, multi-layer reconstruction blocks predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in terms of both subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method achieves Dice of 82.4% vs. 76.6% and 95% Hausdorff distance (HD95) of 10.66 vs. 11.54 mm on KiTS19, as well as Dice of 80.2% vs. 78.4% and HD95 of 9.632 vs. 12.17 mm on LiTS.
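The Dice scores reported above can be computed for any pair of binary masks with a few lines of array code. This is simply the standard Dice definition, 2|A∩B| / (|A|+|B|), not code from the paper:

```python
import numpy as np


def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect overlap
    return 2.0 * np.logical_and(pred, target).sum() / denom


# Two 4x4x4 volumes overlapping on one 16-voxel slab
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[:2] = True   # 32 voxels
b[1:3] = True  # 32 voxels, 16 of them shared with a
score = dice(a, b)  # 2*16 / (32+32) = 0.5
```

HD95 is complementary: Dice measures volumetric overlap, while HD95 measures the 95th percentile of boundary-to-boundary distances, so the two metrics together capture both region and contour quality.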
Abstract: Currently, deep learning is widely used in medical image segmentation and has achieved good results. However, 3D medical image segmentation tasks with diverse lesion characteristics, blurred edges, and unstable positions require complex networks with a large number of parameters. This is computationally expensive and places high demands on equipment, making such networks hard to deploy in hospitals. In this work, we propose a method for network lightweighting and apply it to a 3D CNN-based network, experimenting on a COVID-19 lesion segmentation dataset. Specifically, we use three cascaded one-dimensional convolutions to replace each 3D convolution, and integrate instance normalization with the preceding layer of one-dimensional convolutions to accelerate network inference. In addition, we simplify the network's test-time augmentation and deep supervision. Experiments show that, compared with the original network, the lightweight network reduces per-sample prediction time and memory usage by 50% and the number of parameters by 60%. The training time of one epoch is also reduced by 50%, with the segmentation accuracy dropping only within an acceptable range.
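The parameter savings from replacing a k×k×k 3D convolution with three cascaded one-dimensional convolutions can be checked with simple arithmetic. The kernel size and channel counts below are illustrative assumptions, not the paper's configuration, and biases are ignored for simplicity:

```python
def conv3d_params(c_in: int, c_out: int, k: int) -> int:
    # A full k*k*k 3D convolution has c_in * c_out * k^3 weights.
    return c_in * c_out * k ** 3


def cascaded_1d_params(c_in: int, c_out: int, k: int) -> int:
    # Three 1D convolutions along depth, height, and width.
    # Assume the first changes the channel count and the other two keep c_out.
    return c_in * c_out * k + 2 * (c_out * c_out * k)


k, c_in, c_out = 3, 64, 64
full = conv3d_params(c_in, c_out, k)          # 64 * 64 * 27 = 110592
cascade = cascaded_1d_params(c_in, c_out, k)  # 3 * (64 * 64 * 3) = 36864
reduction = 1 - cascade / full                # 2/3 fewer weights for this layer
```

For equal channel counts the per-layer saving is 1 − 3k/k³ = 1 − 3/k², i.e. about two thirds at k = 3; the abstract's network-wide 60% figure is consistent with this, since not every layer is replaced.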