Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters, and it is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel largely depends on the captain's maneuvering skill and experience, and the ship may get stuck if steered into ice fields off the channel. Under these circumstances, it is important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and the corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the accuracy of the method using the corner point regression network in the second stage reaches 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
In this study, the vertical components of broadband teleseismic P wave data recorded by China Earthquake Network are used to image the rupture processes of the February 6th, 2023 Turkish earthquake doublet via back projection analysis. Data in two frequency bands (0.5-2 Hz and 1-3 Hz) are used in the imaging processes. The results show that the rupture of the first event extends about 200 km to the northeast and about 150 km to the southwest, lasting ~90 s in total. The southwestern rupture is triggered by the northeastern rupture, demonstrating a sequential bidirectional unilateral rupture pattern. The rupture of the second event extends approximately 80 km in both northeast and west directions, lasting ~35 s in total, and demonstrates a typical bilateral rupture feature. The cascading ruptures on both sides also reflect the occurrence of selective rupture behaviors on bifurcated faults. In addition, we observe super-shear ruptures on certain fault sections with relatively straight fault structures and sparse aftershocks.
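For readers unfamiliar with back projection, the core step is to shift each station's P waveform by the theoretical travel time from a candidate source grid point and measure the coherence of the stack; the grid point that maximizes beam power marks the active part of the rupture at that moment. The sketch below is a minimal NumPy illustration of that stacking step, not the authors' code; the array layout and the linear (unweighted) stack are assumptions.

```python
import numpy as np

def back_projection_power(waveforms, dt, travel_times, window=4.0):
    """Beam power of stacked teleseismic P waveforms for candidate grid points.

    waveforms    : (n_stations, n_samples) band-passed vertical components,
                   all starting at a common reference time
    dt           : sampling interval in seconds
    travel_times : (n_grid, n_stations) theoretical P travel times in seconds
                   from each candidate source grid point to each station
    window       : stacking window length in seconds
    """
    n_grid, n_sta = travel_times.shape
    n_win = int(window / dt)
    power = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_win)
        for s in range(n_sta):
            i0 = int(round(travel_times[g, s] / dt))  # predicted arrival sample
            stack += waveforms[s, i0:i0 + n_win]      # assumes the window fits in the trace
        power[g] = np.sum(stack ** 2)                 # linear-stack beam power
    return power
```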
Visual semantic segmentation aims at separating a visual sample into diverse blocks with specific semantic attributes and identifying the category for each block, and it plays a crucial role in environmental perception. Conventional learning-based visual semantic segmentation approaches rely heavily on large-scale training data with dense annotations and consistently fail to estimate accurate semantic labels for unseen categories. This obstruction has spurred intense interest in studying visual semantic segmentation with the assistance of few/zero-shot learning. The emergence and rapid progress of few/zero-shot visual semantic segmentation make it possible to learn unseen categories from a few labeled or even zero labeled samples, which advances the extension to practical applications. Therefore, this paper focuses on recently published few/zero-shot visual semantic segmentation methods spanning 2D to 3D space and explores the commonalities and discrepancies of technical solutions under different segmentation circumstances. Specifically, the preliminaries of few/zero-shot visual semantic segmentation, including the problem definitions, typical datasets, and technical remedies, are briefly reviewed and discussed. Moreover, three typical instantiations are examined to uncover the interactions of few/zero-shot learning with visual semantic segmentation: image semantic segmentation, video object segmentation, and 3D segmentation. Finally, the future challenges of few/zero-shot visual semantic segmentation are discussed.
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with the ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, which consists of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA blocks) are inserted between the encoder and decoder, enabling the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of the Dice coefficient (oesophagus 0.8714, heart 0.9516, trachea 0.9286, aorta 0.9510) and Hausdorff distance (oesophagus 0.2541, heart 0.1514, trachea 0.1722, aorta 0.1114), and significantly outperforms the baseline. The current approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
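The distinguishing element of the CS-SA block is that the query-key score is a cosine similarity rather than a scaled dot product. The snippet below is a minimal NumPy sketch of that idea only; the projection shapes, the temperature tau, and the single-head form are assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cosine_similarity_attention(x, Wq, Wk, Wv, tau=0.1):
    """Self-attention where the query-key score is cosine similarity.

    x          : (n_tokens, d) feature tokens (e.g., flattened encoder features)
    Wq, Wk, Wv : (d, d_h) projection matrices
    tau        : temperature controlling the sharpness of the attention weights
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + 1e-8)   # unit-length queries
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + 1e-8)   # unit-length keys
    scores = (q @ k.T) / tau          # cosine similarities in [-1, 1], scaled by 1/tau
    return softmax(scores, axis=-1) @ v
```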
Lung cancer is a leading cause of mortality worldwide. Early detection of pulmonary tumors can significantly enhance the survival rate of patients. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to enhance the detection of pulmonary nodules with high accuracy. Nevertheless, the existing methodologies cannot obtain a high level of specificity and sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures, namely an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage involves the utilization of the improved U-Net architecture to segment candidate nodules extracted from the lung lobes. Subsequently, the improved AlexNet architecture is employed to classify lung cancer. In the first stage, the proposed model demonstrates a Dice score of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The proposed LCSC model is tested and evaluated using the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The proposed technique exhibits remarkable performance compared with existing methods across various evaluation parameters.
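The classification figures quoted above (accuracy, true positive rate, true negative rate, positive and negative predictive values) all derive from the binary confusion matrix. The small helper below is a generic sketch of those definitions, not the authors' evaluation code.

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Derive standard binary classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    ppv         = tp / (tp + fp)          # positive predictive value (precision)
    npv         = tn / (tn + fn)          # negative predictive value
    return accuracy, sensitivity, specificity, ppv, npv
```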
High-resolution remote sensing image segmentation is a challenging task. In urban remote sensing, the presence of occlusions and shadows often results in blurred or invisible object boundaries, thereby increasing the difficulty of segmentation. In this paper, an improved network with a cross-region self-attention mechanism for multi-scale features based on DeepLabv3+ is designed to address the difficulties of small object segmentation and blurred target edge segmentation. First, we use CrossFormer as the backbone feature extraction network to achieve the interaction between large- and small-scale features, and establish self-attention associations between features at both large and small scales to capture global contextual feature information. Next, an improved atrous spatial pyramid pooling module is introduced to establish multi-scale feature maps with large- and small-scale feature associations, and attention vectors are added in the channel direction to enable adaptive adjustment of multi-scale channel features. The proposed network model is validated using the Potsdam and Vaihingen datasets. The experimental results show that, compared with existing techniques, the network model designed in this paper can extract and fuse multi-scale information, more clearly extract edge information and small-scale information, and segment boundaries more smoothly. Experimental results on public datasets demonstrate the superiority of our method compared with several state-of-the-art networks.
This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in the task of 3D point cloud semantic segmentation to capture semantic integrity, which is empowered by dynamic capsule routing with a 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Second, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype enhancement version for capturing rich semantics. Besides, the shot correlation within each category is calculated via the interaction of different samples to enhance intra-category similarity. Ablation studies prove that the involved part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, with the integration of these two modules, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on the task of 3D point cloud semantic segmentation compared with some state-of-the-art methods.
Breast cancer is one of the major health issues with high mortality rates and a substantial impact on patients and healthcare systems worldwide. Various Computer-Aided Diagnosis (CAD) tools based on breast thermograms have been developed for early detection of this disease. However, accurately segmenting the Region of Interest (ROI) from thermograms remains challenging. This paper presents an approach that leverages image acquisition protocol parameters to identify the lateral breast region and estimate its bottom boundary using a second-degree polynomial. The proposed method demonstrated high efficacy, achieving an impressive Jaccard coefficient of 86% and a Dice index of 92% when evaluated against manually created ground truths. Textural features were extracted from each view's ROI, with significant features selected via Mutual Information for training Multi-Layer Perceptron (MLP) and K-Nearest Neighbors (KNN) classifiers. Our findings revealed that the MLP classifier outperformed the KNN, achieving an accuracy of 86%, a specificity of 100%, and an Area Under the Curve (AUC) of 0.85. The consistency of the method across both sides of the breast suggests its viability as an auto-segmentation tool. Furthermore, the classification results suggest that lateral views of breast thermograms harbor valuable features that can significantly aid in the early detection of breast cancer.
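Two of the quantitative ingredients above are easy to make concrete: fitting a second-degree polynomial to candidate bottom-boundary points, and scoring a predicted ROI mask against a manual ground truth with the Jaccard and Dice overlap measures. The NumPy sketch below is illustrative only; the point-selection step that feeds the fit is not shown.

```python
import numpy as np

def fit_bottom_boundary(xs, ys):
    """Fit a second-degree polynomial y = a*x^2 + b*x + c to candidate
    bottom-boundary points of the lateral breast region."""
    a, b, c = np.polyfit(xs, ys, deg=2)
    return lambda x: a * x**2 + b * x + c

def jaccard_and_dice(pred_mask, gt_mask):
    """Overlap metrics between a predicted ROI mask and the manual ground truth."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / union
    dice = 2 * inter / (pred.sum() + gt.sum())
    return jaccard, dice
```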
In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system's performance significantly deteriorates due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer ring region for local minima values to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.
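Two of the described steps, working in HSV space to find bright sclera candidates and applying adaptive thresholding to expose the outer iris ring, can be sketched with OpenCV as follows. The thresholds, block size, and blur settings are placeholders, not the values used by the authors.

```python
import cv2
import numpy as np

def sclera_candidates_and_outer_ring(bgr_image):
    # Work in HSV: sclera pixels tend to have high Value and low Saturation,
    # so a bright, unsaturated mask approximates the local-maxima search.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    sclera_mask = cv2.inRange(hsv, np.array([0, 0, 180]), np.array([180, 60, 255]))

    # Adaptive thresholding on a blurred grey image enhances local contrast
    # and helps expose the outer iris (limbus) ring.
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    grey = cv2.medianBlur(grey, 5)
    ring = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 5)
    return sclera_mask, ring
```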
Dynamic Simultaneous Localization and Mapping (SLAM) in visual scenes is currently a major research area in fields such as robot navigation and autonomous driving. However, in the face of complex real-world environments, current dynamic SLAM systems struggle to achieve precise localization and map construction. With the advancement of deep learning, there has been increasing interest in the development of deep learning-based dynamic SLAM visual odometry in recent years, and more researchers are turning to deep learning techniques to address the challenges of dynamic SLAM. Compared to dynamic SLAM systems based on deep learning methods such as object detection and semantic segmentation, dynamic SLAM systems based on instance segmentation can not only detect dynamic objects in the scene but also distinguish different instances of the same type of object, thereby reducing the impact of dynamic objects on the SLAM system's positioning. This article not only introduces traditional dynamic SLAM systems based on mathematical models but also provides a comprehensive analysis of existing instance segmentation algorithms and dynamic SLAM systems based on instance segmentation, comparing and summarizing their advantages and disadvantages. Through comparisons on datasets, it is found that instance segmentation-based methods have significant advantages in accuracy and robustness in dynamic environments. However, the real-time performance of instance segmentation algorithms hinders the widespread application of dynamic SLAM systems. In recent years, the rapid development of single-stage instance segmentation methods has brought hope for the widespread application of dynamic SLAM systems based on instance segmentation. Finally, possible future research directions and improvement measures are discussed for reference by relevant professionals.
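The practical benefit of instance segmentation for dynamic SLAM is that features landing on potentially dynamic instances can be discarded before pose estimation. Below is a generic sketch of that masking step; the class list and data layout are illustrative and not tied to any particular system surveyed above.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "car", "bus", "bicycle"}   # illustrative label set

def filter_dynamic_keypoints(keypoints, instance_masks, instance_labels):
    """Drop keypoints that land on instances of potentially dynamic classes.

    keypoints       : (N, 2) array of (x, y) pixel coordinates from the tracker
    instance_masks  : list of HxW boolean masks, one per detected instance
    instance_labels : list of class names aligned with instance_masks
    """
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    keep = np.ones(len(keypoints), dtype=bool)
    for mask, label in zip(instance_masks, instance_labels):
        if label not in DYNAMIC_CLASSES:
            continue
        keep &= ~mask[ys, xs]          # True mask pixel means "on a dynamic object"
    return keypoints[keep]
```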
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. A key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles, yet the limited bandwidth of underwater acoustic channels conflicts with the large amount of sonar image data, making it essential to compress the images before transmission. Deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information without semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an Auto-Encoder-based sonar image compression network, which is measured by a semantic segmentation network's residual. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experiment results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet loss resistance of transmission.
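The receptive-field argument for the dilated convolutions in DiCNN can be made concrete with two small formulas: a k×k kernel with dilation d covers k+(k-1)(d-1) pixels, and stacking such stride-1 layers grows the receptive field additively. The tiny sketch below is generic, not the paper's network definition.

```python
def dilated_kernel_size(k, d):
    """Effective spatial extent of a kxk kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(layers):
    """Receptive field of a stack of (kernel_size, dilation) conv layers
    with stride 1, e.g. [(3, 1), (3, 2), (3, 4)] -> 15."""
    rf = 1
    for k, d in layers:
        rf += dilated_kernel_size(k, d) - 1
    return rf
```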
The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots. Real-time identification of strawberries in an unstructured environment is a challenging task. Current instance segmentation algorithms for strawberries suffer from issues such as poor real-time performance and low accuracy. To this end, the present study proposes an Efficient YOLACT (E-YOLACT) algorithm for strawberry detection and segmentation based on the YOLACT framework. The key enhancements of the E-YOLACT encompass the development of a lightweight attention mechanism, pyramid squeeze shuffle attention (PSSA), for efficient feature extraction. Additionally, an attention-guided context-feature pyramid network (AC-FPN) is employed instead of FPN to optimize the architecture's performance. Furthermore, a feature-enhanced model (FEM) is introduced to enhance the prediction head's capabilities, while efficient fast non-maximum suppression (EF-NMS) is devised to improve non-maximum suppression. The experimental results demonstrate that the E-YOLACT achieves a Box-mAP and Mask-mAP of 77.9 and 76.6, respectively, on the custom dataset. Moreover, it exhibits an impressive category accuracy of 93.5%. Notably, the E-YOLACT also demonstrates a remarkable real-time detection capability with a speed of 34.8 FPS. The method proposed in this article presents an efficient approach for the vision system of a strawberry-picking robot.
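EF-NMS is described as a refinement of standard non-maximum suppression; for reference, the greedy IoU-based NMS it builds on can be sketched as below. This is the textbook baseline, not the EF-NMS variant itself.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes  : (N, 4) array of [x1, y1, x2, y2]
    scores : (N,) confidence scores
    Returns indices of the boxes kept, highest score first.
    """
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the current box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]   # discard boxes overlapping too much
    return keep
```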
Ophthalmology is a specialty that depends heavily on imaging examination. Artificial intelligence (AI) technology has great potential in medical imaging analysis, including image diagnosis, classification, grading, guiding treatment and evaluating prognosis. The combination of the two can enable mass screening of eye health at the grass-roots level, making it possible to seek medical treatment in the mode of "first treatment at the grass-roots level, two-way referral, emergency and slow treatment, and linkage between the upper and lower levels". Summarizing the AI work carried out by research teams worldwide in the field of ophthalmology, quite a lot of studies have confirmed that machine learning can assist in diagnosis, grading, providing optimal treatment plans and evaluating prognosis in corneal and conjunctival diseases, ametropia, lens diseases, glaucoma, iris diseases, etc. This paper systematically presents the application and progress of AI technology in common anterior segment ocular diseases, the current limitations, and prospects for the future.
The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, heavily rely on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with the acquisition and preprocessing of the image, followed by segmentation using the fine-tuned UNet architecture. Afterward, we use an annotation tool to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE (Accelerated Segmentation and Extraction) features, energy-based edge detection, frequency-based, and blob characteristics. For the classification stage, a convolution neural network (CNN) is employed. This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images. The experimental results, which include complex object datasets like MSRC-v2 and PASCAL-VOC12, have been documented. After analyzing the experimental results, it was found that the PASCAL-VOC12 dataset achieved an accuracy rate of 95%, while the MSRC-v2 dataset achieved an accuracy of 89%. The evaluation performed on these diverse datasets highlights a notably impressive level of performance.
In cornfields, factors such as the similarity between corn seedlings and weeds and the blurring of plant edge details pose challenges to corn and weed segmentation. In addition, remote areas such as farmland are usually constrained by limited computational resources and limited collected data. Therefore, it becomes necessary to lighten the model to better adapt to complex cornfield scenes and to make full use of the limited data. In this paper, we propose an improved image segmentation algorithm based on U-Net. First, the inverted residual structure is introduced into the contraction path to reduce the number of parameters in the training process and improve the feature extraction ability; second, the pyramid pooling module is introduced to enhance the network's ability to acquire contextual information as well as to deal with the small-target loss problem; finally, to further enhance the segmentation capability of the model, the squeeze-and-excitation mechanism is introduced in the expansion path. We used images of corn seedlings collected in the field and publicly available corn weed datasets to evaluate the improved model. The improved model has 3.79 M parameters in total, and its mIoU reaches 87.9%. The frame rate on a single 3050 Ti video card is about 58.9 FPS. The experimental results show that the network proposed in this paper can quickly segment corn weeds in a cornfield scenario with good segmentation accuracy.
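The headline metric, mIoU, is the per-class intersection-over-union averaged over classes. A compact NumPy sketch of the computation follows; the class layout (e.g., background / corn / weed) is illustrative.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """mIoU over dense integer label maps of identical shape."""
    p, t = pred.ravel(), target.ravel()
    conf = np.bincount(t * num_classes + p,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf)                              # correctly labelled pixels per class
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    valid = union > 0                                  # ignore classes absent from both maps
    return float((inter[valid] / union[valid]).mean())
```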
We present a method using Zernike moments for quantifying rotational and reflectional symmetries in scanning transmission electron microscopy (STEM) images, aimed at improving structural analysis of materials at the atomic scale. This technique is effective against common imaging noises and is potentially suited for low-dose imaging and identifying quantum defects. We showcase its utility in the unsupervised segmentation of polytypes in a twisted bilayer TaS₂, enabling accurate differentiation of structural phases and monitoring transitions caused by electron beam effects. This approach enhances the analysis of structural variations in crystalline materials, marking a notable advancement in the characterization of structures in materials science.
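The rotational-symmetry signal in Zernike moments comes from the fact that rotating an image patch by an angle alpha only multiplies Z_{n,m} by a phase factor e^{-im*alpha}, so a p-fold symmetric motif concentrates its moment energy in angular orders m that are multiples of p. The sketch below implements that scoring directly from the moment definition; it is an illustrative reconstruction, not the authors' pipeline, and the normalization and maximum order are assumptions.

```python
import numpy as np
from math import factorial

def zernike_moment(patch, n, m):
    """Complex Zernike moment Z_{n,m} of a square patch mapped onto the unit
    disk centred on the patch centre (requires n >= |m| and n - |m| even)."""
    h, w = patch.shape
    y, x = np.mgrid[:h, :w]
    x = (2 * x - (w - 1)) / (w - 1)            # map pixel grid to [-1, 1]
    y = (2 * y - (h - 1)) / (h - 1)
    rho = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0

    # Radial polynomial R_n^{|m|}(rho)
    am = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - am) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + am) // 2 - k)
                * factorial((n - am) // 2 - k)))
        R += c * rho ** (n - 2 * k)

    basis = R * np.exp(-1j * m * theta)        # conjugate Zernike basis function
    return (n + 1) / np.pi * np.sum(patch[inside] * basis[inside])

def pfold_symmetry_score(patch, p, n_max=8):
    """Fraction of Zernike-moment energy carried by angular orders that are
    multiples of p; close to 1 for a p-fold rotationally symmetric motif."""
    total, sym = 0.0, 0.0
    for n in range(1, n_max + 1):
        for m in range(-n, n + 1):
            if (n - abs(m)) % 2:
                continue
            energy = abs(zernike_moment(patch, n, m)) ** 2
            total += energy
            if m % p == 0:                     # m = 0 terms are rotation-invariant
                sym += energy
    return sym / total if total > 0 else 0.0
```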
Colorectal cancer, a malignant lesion of the intestines, significantly affects human health and life, emphasizing the necessity of early detection and treatment. Accurate segmentation of colorectal cancer regions directly impacts subsequent staging, treatment methods, and prognostic outcomes. While colonoscopy is an effective method for detecting colorectal cancer, its data collection approach can cause patient discomfort. To address this, current research utilizes Computed Tomography (CT) imaging; however, conventional CT images only capture transient states, lacking sufficient representational capability to precisely locate colorectal cancer. This study utilizes enhanced CT images, constructing a deep feature network from the arterial, portal venous, and delay phases to simulate the physician's diagnostic process and achieve accurate cancer segmentation. The innovations include: 1) Utilizing portal venous phase CT images to introduce a context-aware multi-scale aggregation module for preliminary shape extraction of colorectal cancer. 2) Building an image sequence based on arterial and delay phases, transforming the cancer segmentation issue into an anomaly detection problem, establishing a pixel-pairing strategy, and proposing a colorectal cancer segmentation algorithm using a Siamese network. Experiments with 84 clinical cases of colorectal cancer enhanced CT data demonstrated an Area Overlap Measure of 0.90, significantly better than Fully Convolutional Networks (FCNs) at 0.20. Future research will explore the relationship between conventional and enhanced CT to further reduce segmentation time and improve accuracy.
In actual traffic scenarios, precise recognition of traffic participants, such as vehicles and pedestrians, is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask R-CNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long short-term memory networks and a fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance its capability to process both global and local information. Additionally, to increase the network's initial operation stability, the original network activation function was replaced with the Gaussian error linear unit. Experiments were conducted using the publicly available Cityscapes dataset. Comparing the test results, it was observed that the revised algorithm outperformed the original algorithm in terms of AP50, AP75, and other metrics, by 8.7% and 9.6% for target detection and by 12.5% and 13.3% for segmentation.
Segment Anything Model (SAM) is a cutting-edge model that has shown impressive performance in general object segmentation. The birth of Segment Anything is a groundbreaking step towards creating a universal intelligent model. Due to its superior performance in general object segmentation, it quickly gained attention and interest. This makes SAM particularly attractive for industrial surface defect segmentation, especially in complex industrial scenes with limited training data. However, its segmentation ability in specific industrial scenes remains unknown. Therefore, in this work, we select three representative and complex industrial surface defect detection scenarios, namely strip steel surface defects, tile surface defects, and rail surface defects, to evaluate the segmentation performance of SAM. Our results show that although SAM has great potential in general object segmentation, it cannot achieve satisfactory performance in complex industrial scenes. Our test results are available at: https://github.com/VDT-2048/SAM-IS.
This paper proposes a new network structure, namely the ProNet network. Retinal medical image segmentation can help the clinical diagnosis of related eye diseases and is essential for subsequent rational treatment. The baseline model of the ProNet network is UPerNet (Unified Perceptual Parsing Network), and the backbone network is ConvNeXt (Convolutional Network). A network structure based on depth-separable convolution and 1×1 convolution is used, which has good performance and robustness. We further optimise ProNet mainly in two aspects. One is data augmentation using added noise and slight angle rotation, which can significantly increase the diversity of the data, help the model better learn its patterns and features, and improve the model's performance. At the same time, it can effectively expand the training data set, reduce the influence of noise and abnormal data on the model, and improve the accuracy and reliability of the model. The other concerns the loss function, for which we finally use the focal loss. The focal loss is well suited for complex tasks such as object detection: it penalises the loss carried by samples that the model misclassifies, thus enabling better training of the model to avoid these errors while addressing the class imbalance problem, which in turn improves segmentation density and accuracy. From the experimental results, the evaluation metric mIoU (mean Intersection over Union) improved by 4.47% and mDice improved by 2.92% compared with the baseline network. Better generalization and more accurate image segmentation are achieved.
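For reference, the focal loss referred to above down-weights well-classified pixels so that training concentrates on hard, often minority-class, pixels. Below is a minimal binary form using the commonly quoted defaults gamma=2 and alpha=0.25; the paper's exact settings are not stated here.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    probs   : (N,) predicted foreground probabilities
    targets : (N,) binary ground-truth labels
    The (1 - p_t)**gamma factor down-weights easy examples so training
    focuses on hard, often minority-class, pixels.
    """
    eps = 1e-7
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```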
Funding (ice channel identification): financially supported by the National Key Research and Development Program (Grant No. 2022YFE0107000), the General Projects of the National Natural Science Foundation of China (Grant No. 52171259), and the High-Tech Ship Research Project of the Ministry of Industry and Information Technology (Grant No. [2021]342).
Funding (Turkish earthquake doublet rupture imaging): supported by the National Key R&D Program of China (No. 2022YFF0800601) and the National Scientific Foundation of China (Nos. 41930103 and 41774047).
Funding (few/zero-shot visual semantic segmentation survey): supported by the National Key Research and Development Program of China (2021YFB1714300) and the National Natural Science Foundation of China (62233005); in part by the CNPC Innovation Fund (2021D002-0902), the Fundamental Research Funds for the Central Universities, and Shanghai AI Lab; sponsored by the Shanghai Gaofeng and Gaoyuan Project for University Academic Program Development.
Funding (OAR segmentation in CT): the PID2022-137451OB-I00 and PID2022-137629OA-I00 projects funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.
Funding (lung cancer segmentation and classification): supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RP23044).
Funding (high-resolution remote sensing segmentation): the National Natural Science Foundation of China (Grant Number 62066013) and the Hainan Provincial Natural Science Foundation of China (Grant Numbers 622RC674 and 2019RC182).
Funding (few-shot 3D point cloud segmentation): supported by the National Natural Science Foundation of China under Grant No. 62001341, the National Natural Science Foundation of Jiangsu Province under Grant No. BK20221379, and the Jiangsu Engineering Research Center of Digital Twinning Technology for Key Equipment in Petrochemical Process under Grant No. DTEC202104.
Funding (breast thermogram analysis): supported by the research grant (SEED-CCIS-2024-166), Prince Sultan University, Saudi Arabia.
Funding (iris segmentation): the authors extend their appreciation to the Arab Open University, Saudi Arabia, for funding this work through AOU research fund No. AOURG-2023-009.
Funding (instance segmentation for dynamic SLAM): the National Natural Science Foundation of China (No. 62063006), the Natural Science Foundation of Guangxi Province (No. 2023GXNSFAA026025), the Innovation Fund of Chinese Universities Industry-University-Research (ID: 2021RYC06005), the Research Project for Young and Middle-Aged Teachers in Guangxi Universities (ID: 2020KY15013), and the Special Research Project of Hechi University (ID: 2021GCC028); financially supported by the Project of Outstanding Thousand Young Teachers' Training in Higher Education Institutions of Guangxi and the Guangxi Colleges and Universities Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region.
Funding (underwater sonar image transmission): supported in part by the Tianjin Technology Innovation Guidance Special Fund Project under Grant No. 21YDTPJC00850, in part by the National Natural Science Foundation of China under Grant No. 41906161, and in part by the Natural Science Foundation of Tianjin under Grant No. 21JCQNJC00650.
Funding (strawberry detection and segmentation): funded by the Anhui Provincial Natural Science Foundation (No. 2208085ME128), the Anhui University-Level Special Project of Anhui University of Science and Technology (No. XCZX2021-01), the Research and Development Fund of the Institute of Environmental Friendly Materials and Occupational Health, Anhui University of Science and Technology (No. ALW2022YF06), and the Anhui Province New Era Education Quality Project (Graduate Education) (No. 2022xscx073).
Funding (AI in anterior segment ocular diseases): supported by the National Natural Science Foundation of China (No. 82101097, No. 82070937).
Funding (multi-object recognition with UNet): supported by the MSIT (Ministry of Science and ICT), Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) Program (IITP-2024-RS-2022-00156326), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation). The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program Grant Code (NU/GP/SERC/13/30). Funding was also provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through Project Number "NBU-FFR-2024-231-06".
文摘The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes.Various technologies,such as augmented reality-driven scene integration,robotic navigation,autonomous driving,and guided tour systems,heavily rely on this type of scene comprehension.This paper presents a novel segmentation approach based on the UNet network model,aimed at recognizing multiple objects within an image.The methodology begins with the acquisition and preprocessing of the image,followed by segmentation using the fine-tuned UNet architecture.Afterward,we use an annotation tool to accurately label the segmented regions.Upon labeling,significant features are extracted from these segmented objects,encompassing KAZE(Accelerated Segmentation and Extraction)features,energy-based edge detection,frequency-based,and blob characteristics.For the classification stage,a convolution neural network(CNN)is employed.This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images.The experimental results,which include complex object datasets like MSRC-v2 and PASCAL-VOC12,have been documented.After analyzing the experimental results,it was found that the PASCAL-VOC12 dataset achieved an accuracy rate of 95%,while the MSRC-v2 dataset achieved an accuracy of 89%.The evaluation performed on these diverse datasets highlights a notably impressive level of performance.
Abstract: In cornfields, factors such as the similarity between corn seedlings and weeds and the blurring of plant edge details pose challenges to corn and weed segmentation. In addition, remote areas such as farmland are usually constrained by limited computational resources and limited collected data. It therefore becomes necessary to lighten the model so that it better adapts to complex cornfield scenes and makes full use of the limited data. This paper proposes an improved image segmentation algorithm based on UNet. First, the inverted residual structure is introduced into the contraction path to reduce the number of parameters during training and improve feature extraction; second, a pyramid pooling module is introduced to enhance the network's ability to acquire contextual information and to handle the small-target loss problem; finally, to further enhance the segmentation capability of the model, the squeeze-and-excitation mechanism is introduced into the expansion path. Images of corn seedlings collected in the field and publicly available corn weed datasets were used to evaluate the improved model. The improved model has 3.79 M parameters in total and achieves an mIoU of 87.9%. The frame rate on a single 3050 Ti GPU is about 58.9 FPS. The experimental results show that the proposed network can quickly segment corn and weeds in a cornfield scenario with good segmentation accuracy.
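To illustrate the squeeze-and-excitation mechanism mentioned above, a minimal PyTorch sketch of a standard SE block follows; the reduction ratio and its placement in the expansion path are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block (sketch).

    Globally pools each channel ("squeeze"), learns per-channel weights
    through a small bottleneck MLP ("excitation"), and rescales the input.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)        # squeeze: (B, C)
        w = self.fc(w).view(b, c, 1, 1)    # excitation: channel weights
        return x * w                       # reweight feature maps

# Example: rescale a decoder feature map in the expansion path.
feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)             # torch.Size([2, 64, 32, 32])
```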
Funding: This work received funding support from the National Research Foundation (Competitive Research Program grant number NRF-CRP16-2015-05) and the National University of Singapore Early Career Research Award, and was also supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Sciences program.
Abstract: We present a method using Zernike moments for quantifying rotational and reflectional symmetries in scanning transmission electron microscopy (STEM) images, aimed at improving structural analysis of materials at the atomic scale. This technique is robust against common imaging noise and is potentially suited to low-dose imaging and the identification of quantum defects. We showcase its utility in the unsupervised segmentation of polytypes in a twisted bilayer TaS_(2), enabling accurate differentiation of structural phases and monitoring of transitions caused by electron-beam effects. This approach enhances the analysis of structural variations in crystalline materials, marking a notable advancement in the characterization of structures in materials science.
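To illustrate the idea of scoring local symmetry with Zernike moments, the sketch below computes rotation-invariant Zernike moment magnitudes on a patch centred on an atomic column, using the mahotas library; the patch size, radius, degree, and the comparison step are illustrative assumptions and may differ from the paper's actual procedure.

```python
import numpy as np
import mahotas

def zernike_signature(image, center, radius=12, degree=8):
    """Zernike moment magnitudes of a square patch around `center` (sketch).

    The magnitudes are rotation-invariant, so similar local atomic motifs
    give similar signatures regardless of in-plane orientation.
    """
    y, x = center
    patch = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return mahotas.features.zernike_moments(patch, radius, degree=degree)

# Example: compare two columns by the distance between their signatures.
img = np.random.rand(128, 128)                     # stand-in for a STEM frame
s1 = zernike_signature(img, (40, 40))
s2 = zernike_signature(img, (80, 90))
print(np.linalg.norm(s1 - s2))                     # small distance = similar motif
```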
Funding: This work is supported by the Natural Science Foundation of China (No. 82372035), National Transportation Preparedness Projects (No. ZYZZYJ), Light of West China (No. XAB2022YN10), and the China Postdoctoral Science Foundation (No. 2023M740760).
Abstract: Colorectal cancer, a malignant lesion of the intestines, significantly affects human health and life, emphasizing the necessity of early detection and treatment. Accurate segmentation of colorectal cancer regions directly impacts subsequent staging, treatment methods, and prognostic outcomes. While colonoscopy is an effective method for detecting colorectal cancer, its data collection approach can cause patient discomfort. To address this, current research utilizes Computed Tomography (CT) imaging; however, conventional CT images only capture transient states and lack sufficient representational capability to precisely locate colorectal cancer. This study utilizes contrast-enhanced CT images, constructing a deep feature network from the arterial, portal venous, and delay phases to simulate the physician's diagnostic process and achieve accurate cancer segmentation. The innovations include: 1) utilizing portal venous phase CT images and introducing a context-aware multi-scale aggregation module for preliminary shape extraction of colorectal cancer; 2) building an image sequence from the arterial and delay phases, transforming cancer segmentation into an anomaly detection problem, establishing a pixel-pairing strategy, and proposing a colorectal cancer segmentation algorithm based on a Siamese network. Experiments with 84 clinical cases of contrast-enhanced CT data yielded an Area Overlap Measure of 0.90, significantly better than the 0.20 achieved by Fully Convolutional Networks (FCNs). Future research will explore the relationship between conventional and enhanced CT to further reduce segmentation time and improve accuracy.
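To make the pixel-pairing idea more concrete, the sketch below shows a generic shared-weight Siamese encoder that scores the similarity of two CT patches (for example, corresponding locations in the arterial and delay phases); the architecture, patch size, and contrastive margin are assumptions for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Shared-weight encoder for pairs of CT patches (sketch)."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, a, b):
        # The same weights embed both patches of a pair.
        return self.net(a), self.net(b)

def contrastive_loss(za, zb, same_label, margin=1.0):
    """Pull embeddings of matching pairs together, push mismatches apart."""
    d = F.pairwise_distance(za, zb)
    return torch.mean(same_label * d.pow(2) +
                      (1 - same_label) * F.relu(margin - d).pow(2))

# Example pair batch: 8 pairs of 32x32 single-channel patches.
model = SiameseEncoder()
a, b = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)
labels = torch.randint(0, 2, (8,)).float()   # 1 = matching pair, 0 = anomalous pair
za, zb = model(a, b)
print(contrastive_loss(za, zb, labels).item())
```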
Funding: Supported by the National Natural Science Foundation of China (52175236) and the Qingdao People's Livelihood Science and Technology Plan (19-6-1-88-nsh).
Abstract: In actual traffic scenarios, precise recognition of traffic participants such as vehicles and pedestrians is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask-RCNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long short-term memory networks and a fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance its capability to process both global and local information. Additionally, to improve the network's stability in early training, the original activation function was replaced with the Gaussian error linear unit. Experiments were conducted on the publicly available Cityscapes dataset. Comparing the test results, the revised algorithm outperformed the original in terms of AP_(50), AP_(75), and other metrics, improving by 8.7% and 9.6% for target detection and by 12.5% and 13.3% for segmentation.
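As a reference point for the attention components, below is a minimal PyTorch sketch of a CBAM-style Spatial Attention Module, one of the blocks named above; the kernel size and how it is fused with GCT inside GSAM are assumptions, since the exact wiring is not described here.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention (sketch).

    Pools the channel dimension with mean and max, convolves the 2-channel
    map, and produces an H x W attention mask applied to the input.
    """
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        max_map, _ = x.max(dim=1, keepdim=True)          # (B, 1, H, W)
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                  # spatially reweighted features

feat = torch.randn(2, 256, 64, 64)
print(SpatialAttention()(feat).shape)                    # torch.Size([2, 256, 64, 64])
```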
Funding: Supported by the National Natural Science Foundation of China (51805078), the Project of the National Key Laboratory of Advanced Casting Technologies (CAT2023-002), and the 111 Project (B16009).
Abstract: The Segment Anything Model (SAM) is a cutting-edge model that has shown impressive performance in general object segmentation. Its release is a groundbreaking step towards creating a universal intelligent segmentation model, and its superior performance on general objects quickly attracted attention and interest. This makes SAM particularly attractive for industrial surface-defect segmentation, especially in complex industrial scenes with limited training data. However, its segmentation ability in specific industrial scenes remains unknown. In this work, we therefore select three representative and complex industrial surface-defect detection scenarios, namely strip steel surface defects, tile surface defects, and rail surface defects, to evaluate the segmentation performance of SAM. Our results show that although SAM has great potential in general object segmentation, it cannot achieve satisfactory performance in complex industrial scenes. Our test results are available at: https://github.com/VDT-2048/SAM-IS.
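For readers unfamiliar with how such an evaluation is typically run, the sketch below uses the public segment-anything package to generate automatic masks for one defect image; the checkpoint path and image file are placeholders, and the scoring against ground-truth masks is omitted.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Placeholder paths: substitute a real SAM checkpoint and a defect image.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("strip_steel_defect.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)   # list of dicts with 'segmentation', 'area', ...

# Each predicted mask could then be matched against ground truth to score IoU.
print(len(masks), "candidate masks")
print(masks[0]["segmentation"].shape, masks[0]["area"])
```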
Abstract: This paper proposes a new network structure, the ProNet network. Retinal medical image segmentation can aid the clinical diagnosis of related eye diseases and is essential for subsequent rational treatment. The baseline model of ProNet is UperNet (Unified Perceptual Parsing Network), with ConvNext (Convolutional Network) as the backbone. A structure based on depthwise separable convolution and 1×1 convolution is used, which offers good performance and robustness. We further optimise ProNet in two main respects. The first is data augmentation using added noise and slight angle rotations, which significantly increases the diversity of the data, helps the model learn data patterns and features, and improves performance; it also effectively expands the training set, reduces the influence of noise and abnormal samples, and improves the accuracy and reliability of the model. The second is the loss function, for which we ultimately adopt the focal loss. The focal loss is well suited to difficult tasks such as object detection: it down-weights well-classified samples and focuses training on misclassified, hard examples, thereby addressing the class imbalance problem and improving segmentation density and accuracy. In the experiments, mIoU (mean Intersection over Union) improved by 4.47% and mDice by 2.92% compared with the baseline network, yielding better generalisation and more accurate image segmentation.
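A minimal sketch of the binary focal loss described above is given below; the focusing parameter gamma and the class-balance weight alpha are the commonly used defaults, not values reported by the paper.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha: float = 0.25, gamma: float = 2.0):
    """Binary focal loss (sketch): FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    logits:  raw predictions of any shape
    targets: same-shape tensor of 0/1 labels
    """
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = targets * p + (1 - targets) * (1 - p)          # prob. of the true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()    # hard pixels dominate

# Example: per-pixel loss for a 2-image batch of 64x64 segmentation logits.
logits = torch.randn(2, 1, 64, 64)
targets = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(binary_focal_loss(logits, targets).item())
```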