Funding: supported by the National Natural Science Foundation of China (No. 62176034), the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJZD-M202300604), and the Natural Science Foundation of Chongqing (Nos. cstc2021jcyj-msxmX0518 and 2023NSCQ-MSX1781).
Abstract: Automatic crack detection on cement pavement chiefly benefits from the rapid development of deep learning, with convolutional neural networks (CNNs) playing an important role in this field. However, as the performance of crack detection on cement pavement improves, the depth and width of network structures increase significantly, which demands more computing power and storage space. This limitation hampers the practical deployment of crack detection models on various platforms, particularly portable devices such as small mobile devices. To solve these problems, we propose a dual-encoder network architecture that focuses on extracting more comprehensive crack feature information and combines cross-fusion modules with coordinate attention mechanisms for more efficient feature fusion. First, we use small-channel convolutions to construct a shallow feature extraction module (SFEM) that extracts low-level crack feature information from cement pavement images, in order to capture more crack information in the shallow features of the images. In addition, we construct a large kernel atrous convolution (LKAC) module to enhance crack information; it incorporates a coordinate attention mechanism to filter out non-crack information, and applies large kernel atrous convolutions with different kernel sizes, using different receptive fields to extract more detailed edge and context information. Finally, the three-stage feature maps output by the shallow feature extraction module are cross-fused with the two-stage feature maps output by the large kernel atrous convolution module, so that shallow features and detailed edge features are fully fused to obtain the final crack prediction map. We evaluate our method on three public crack datasets: DeepCrack, CFD, and Crack500. Experimental results on the DeepCrack dataset demonstrate the effectiveness of our proposed method compared with state-of-the-art crack detection methods, achieving a Precision (P) of 87.2%, a Recall (R) of 87.7%, and an F-score (F1) of 87.4%. Thanks to our lightweight crack detection model, the parameter count in real-world detection scenarios is reduced to fewer than 2M. This advancement also provides technical support for portable detection scenarios.
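The abstract names two building blocks, parallel large-kernel atrous convolutions and coordinate attention, without giving implementation details. Below is a minimal PyTorch sketch of both ideas under stated assumptions; the class names, kernel size, dilation rates, and reduction factor are illustrative choices, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class LKACBlock(nn.Module):
    """Sketch of a large-kernel atrous convolution block: parallel 5x5
    dilated convolutions with different receptive fields, fused by a
    1x1 convolution. Dilation rates here are assumptions."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # padding = dilation * (kernel_size - 1) // 2 keeps spatial size
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=5,
                      padding=2 * d, dilation=d, bias=False)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


class CoordinateAttention(nn.Module):
    """Minimal coordinate attention: factorizes global pooling into
    height-wise and width-wise pooling so the attention weights retain
    positional information along each axis."""

    def __init__(self, ch, reduction=4):
        super().__init__()
        mid = max(ch // reduction, 4)
        self.conv1 = nn.Conv2d(ch, mid, kernel_size=1)
        self.act = nn.ReLU()
        self.conv_h = nn.Conv2d(mid, ch, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, ch, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                  # (n, c, h, 1)
        pool_w = x.mean(dim=2, keepdim=True).transpose(2, 3)  # (n, c, w, 1)
        y = self.act(self.conv1(torch.cat([pool_h, pool_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.transpose(2, 3)))       # (n, c, 1, w)
        return x * a_h * a_w  # broadcast gating along both axes
```

Chaining the two (attention after the multi-branch fusion) mirrors the abstract's "enhance, then filter non-crack information" description, though the exact placement in the published network may differ.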
Funding: the National Key Research and Development Program of China (No. 2017YFB1302900), the National Natural Science Foundation of China (Nos. 81971709, M-0019, and 82011530141), the Foundation of Science and Technology Commission of Shanghai Municipality (Nos. 19510712200 and 20490740700), the Shanghai Jiao Tong University Foundation on Medical and Technological Joint Science Research (Nos. ZH2018ZDA15, YG2019ZDA06, and ZH2018QNA23), and the 2020 Key Research Project of Xiamen Municipal Government (No. 3502Z20201030).
Abstract: Sinus floor elevation with a lateral window approach requires bone graft (BG) to ensure sufficient bone mass, and it is necessary to measure and analyse the BG region for follow-up of postoperative patients. However, the BG region in cone-beam computed tomography (CBCT) images is connected to the margin of the maxillary sinus, and its boundary is blurred. Segmentation is usually performed manually by experienced doctors, and is complicated by challenges such as low efficiency and low precision. In this study, an auto-segmentation approach was applied to the BG region within the maxillary sinus based on an atrous spatial pyramid convolution (ASPC) network. The ASPC module uses residual connections to compose multiple atrous convolutions, which extract more features at multiple scales. Subsequently, a segmentation network of the BG region with multiple ASPC modules was established, which effectively improved segmentation performance. Although the training data were limited, our network still achieved good auto-segmentation results, with a Dice coefficient of 87.13%, an Intersection over Union (IoU) of 78.01%, and a sensitivity of 95.02%. Compared with other methods, our method achieved a better segmentation effect and effectively reduced segmentation misjudgements. Our method can thus be used to implement automatic segmentation of the BG region and improve doctors' work efficiency, which is of great importance for preliminary studies on the measurement of postoperative BG within the maxillary sinus.
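The ASPC module is described as multiple atrous convolutions composed through residual connections. A minimal PyTorch sketch of that pattern follows; the channel count, dilation rates, and normalization choices are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn


class ASPCModule(nn.Module):
    """Sketch of an atrous spatial pyramid convolution (ASPC) module:
    parallel 3x3 atrous convolutions at several rates, concatenated,
    projected back to the input width, and added to the input through
    a residual connection. Rates here are illustrative."""

    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = rate keeps the spatial size for a 3x3 kernel
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        pyramid = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.project(pyramid)  # residual connection
```

Because the output shape matches the input, several such modules can be stacked, which is consistent with the abstract's "segmentation network with multiple ASPC modules."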
Funding: support provided by the Deanship of Scientific Research at King Saud University through Research Group No. RG-1435-050.
Abstract: With the rapid spread of the coronavirus disease 2019 (COVID-19) worldwide, establishing an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test currently in use does not provide such high accuracy or speed in the screening process. Deep learning techniques are among the good choices for an accurate and fast COVID-19 screening test. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone of the proposed network, from which feature maps at different scales are extracted from the input CT scan images. In addition, atrous convolution at different rates is applied to these multi-scale feature maps to generate denser features, which facilitates obtaining COVID-19 findings in CT scan images. The proposed framework is evaluated using a public CT dataset containing 2482 CT scan images from patients of both classes (i.e., COVID-19 and non-COVID-19). To augment the dataset with additional training examples, adversarial example generation is performed. The proposed system demonstrates its superiority over state-of-the-art methods, with values exceeding 99.10% in several metrics, such as accuracy, precision, recall, and F1. The proposed system also exhibits good robustness when trained on a small portion of the data (20%), with an accuracy of 96.16%.
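The abstract mentions augmenting the training set with adversarial examples but does not specify the generation scheme. One common way to do this is the fast gradient sign method (FGSM); the sketch below shows that approach as an assumed stand-in, with a hypothetical `fgsm_example` helper and an illustrative `eps`.

```python
import torch
import torch.nn as nn


def fgsm_example(model, x, y, eps=0.01):
    """Generate adversarial copies of a batch with the fast gradient sign
    method: perturb each input by eps in the direction that increases the
    classification loss. (The paper's exact scheme is not specified; FGSM
    is shown here as one common choice.)"""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # keep the perturbed images in the valid intensity range [0, 1]
    return x_adv.clamp(0.0, 1.0).detach()
```

The perturbed batch can then simply be appended to the training set alongside the original labels, which is how adversarial augmentation is typically applied.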
Abstract: In recent years, scene understanding has gained popularity and significance due to the rapid progress in computer vision techniques and technologies. The primary focus of computer vision based scene understanding is to label each and every pixel in an image with the category of the object it belongs to, so segmentation and detection must be combined in a single framework. Recently, many successful computer vision methods have been developed to aid scene understanding for a variety of real-world applications. Scene understanding systems typically involve detection and segmentation of different natural and man-made things. A lot of research has been performed in recent years, mostly focused on things (well-defined objects that have shape, orientation and size), with less focus on stuff classes (amorphous regions that lack a definite shape, size or other characteristics). Stuff regions describe many aspects of a scene, such as its type, situation and environment, and hence can be very helpful in scene understanding. Existing methods for scene understanding still have a challenging path to cover in coping with the challenges of computational time, accuracy and robustness for varying levels of scene complexity. A robust scene understanding method has to deal effectively with imbalanced class distributions, overlapping objects, fuzzy object boundaries and poorly localized objects. The proposed method performs panoptic segmentation on the Cityscapes dataset. MobileNet-V2, pre-trained on ImageNet, is used as the backbone for feature extraction. MobileNet-V2 is combined with the state-of-the-art encoder-decoder architecture of DeepLabV3+, with some customization and optimization. Atrous convolution along with spatial pyramid pooling is also utilized in the proposed method to make it more accurate and robust. Very promising and encouraging results have been achieved, indicating the potential of the proposed method for robust scene understanding in a fast and reliable way.
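The combination of atrous convolution and spatial pyramid pooling referred to above is the ASPP head used in the DeepLabV3+ family. A minimal PyTorch sketch of that head follows; the channel widths are illustrative, and the dilation rates (6, 12, 18) follow the commonly used DeepLab defaults rather than this paper's customized settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ASPP(nn.Module):
    """Sketch of a DeepLabV3+-style ASPP head: a 1x1 branch, three 3x3
    atrous branches at different rates, and a global-pooling branch,
    concatenated and projected to a fixed width."""

    def __init__(self, in_ch, out_ch=64, rates=(6, 12, 18)):
        super().__init__()
        self.conv1x1 = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.atrous = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),              # image-level context
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch,
                                 kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [self.conv1x1(x)] + [branch(x) for branch in self.atrous]
        # upsample the pooled branch back to the feature-map size
        g = F.interpolate(self.pool(x), size=(h, w), mode="bilinear",
                          align_corners=False)
        return self.project(torch.cat(feats + [g], dim=1))
```

In a DeepLabV3+-style pipeline this head would sit on top of the MobileNet-V2 backbone's deepest feature map, with the decoder then fusing its output with a low-level backbone feature before upsampling to full resolution.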