Fairy circles ("精灵圈") are a form of spatial self-organization in coastal salt-marsh vegetation ecosystems and have an important influence on the productivity, stability, and resilience of salt-marsh wetlands. Unmanned aerial vehicle (UAV) imagery is a key data source for locating fairy circles with high precision and for interpreting the trends and patterns of their spatio-temporal evolution, but fairy-circle pixels differ only slightly from background pixels in color and shape, and intelligently and accurately identifying fairy-circle pixels in two-dimensional imagery and grouping the identified pixels into individual fairy circles remains a technical difficulty. This paper proposes a method for segmenting and classifying fairy circles in UAV imagery that combines the Segment Anything Model (SAM) vision segmentation model with random forest machine learning, enabling individual fairy circles to be identified and extracted. First, Sørensen-Dice coefficient (Dice) and Intersection over Union (IoU) evaluation metrics are constructed to select a pre-trained SAM model and optimize its parameters, yielding fully automatic image segmentation and attribute-free segmentation masks/classes. Next, the red, green, and blue (RGB) channel information and two-dimensional spatial coordinates are used to match the segmentation masks to the original image, feature indicators are constructed for each mask, and the features are analyzed and screened according to the reduction in out-of-bag (OOB) error and the feature distributions. Finally, the selected features are used to train a random forest model, achieving automatic identification and classification of fairy-circle vegetation, ordinary vegetation, and bare tidal flat. Experimental results show that the method achieves an average correct extraction rate of 96.1% and an average false extraction rate of 9.5% for fairy circles, providing methodological and technical support for accurately characterizing the spatio-temporal pattern of fairy circles and for coastal UAV remote sensing image processing.
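As an illustration of the two mask-overlap metrics named above, the short NumPy sketch below computes Dice and IoU for a pair of binary masks; the example masks and shapes are placeholders, not the authors' data or code.

```python
# Minimal sketch (not the authors' code) of the two mask-overlap metrics.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Sørensen-Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

# Example: compare a candidate SAM mask against a manually delineated reference (toy data).
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 12:42] = 1
print(dice_coefficient(pred, gt), iou(pred, gt))
```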
Visual semantic segmentation aims at separating a visual sample into diverse blocks with specific semantic attributes and identifying the category for each block, and it plays a crucial role in environmental perception. Conventional learning-based visual semantic segmentation approaches count heavily on large-scale training data with dense annotations and consistently fail to estimate accurate semantic labels for unseen categories. This obstruction has spurred intense interest in studying visual semantic segmentation with the assistance of few/zero-shot learning. The emergence and rapid progress of few/zero-shot visual semantic segmentation make it possible to learn unseen categories from a few labeled or even zero labeled samples, which advances the extension to practical applications. Therefore, this paper focuses on the recently published few/zero-shot visual semantic segmentation methods, varying from 2D to 3D space, and explores the commonalities and discrepancies of technical solutions under different segmentation circumstances. Specifically, the preliminaries on few/zero-shot visual semantic segmentation, including the problem definitions, typical datasets, and technical remedies, are briefly reviewed and discussed. Moreover, three typical instantiations are examined to uncover the interactions of few/zero-shot learning with visual semantic segmentation, including image semantic segmentation, video object segmentation, and 3D segmentation. Finally, the future challenges of few/zero-shot visual semantic segmentation are discussed.
The printed circuit heat exchanger (PCHE) is receiving wide attention as a new kind of compact heat exchanger and is considered a promising vaporizer in the LNG process. In this paper, a PCHE straight channel with a length of 500 mm is established, with a semicircular cross section 1.2 mm in diameter. Numerical simulation is employed to investigate the flow and heat transfer performance of supercritical methane in the channel. The pseudo-boiling theory is adopted, and liquid-like, two-phase-like, and vapor-like regimes are delineated for supercritical methane to analyze the heat transfer and flow features. The results are presented per micro segment to show the local convective heat transfer coefficient and pressure drop. The convective heat transfer coefficient in segments along the channel shows a significant peak near the pseudo-critical point and heat transfer deterioration when the average fluid temperature in the segment exceeds the pseudo-critical point. This is explained by the generation of a vapor-like film near the channel wall: the peak corresponds to a nucleate-boiling-like state and the deterioration to a film-boiling-like state. The effects of mass flow rate, pressure, and wall heat flux on flow and heat transfer were analyzed. In calculating the average heat transfer coefficient of the whole channel, the traditional method shows significant deviation, so a micro-segment weighted average method is adopted. The pressure drop is mainly affected by the mass flux and pressure and little affected by the wall heat flux. The peak of the convective heat transfer coefficient forms only at high mass flux, low wall heat flux, and near-critical pressure, conditions under which the nucleate-boiling-like state appears more readily. Moreover, heat transfer deterioration will always appear, since the supercritical flow finally develops into a film-boiling-like state, so it should be taken seriously in the design and safe operation of a vaporizer PCHE. This work clarifies the local heat transfer and flow features of supercritical methane in a microchannel and contributes to a deeper understanding of supercritical methane flow during vaporization in PCHEs.
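The abstract notes that a micro-segment weighted average replaces the traditional whole-channel average of the heat transfer coefficient. The sketch below illustrates one plausible weighting (by per-segment heat duty); the weighting choice and all numbers are assumptions, not the paper's formulation.

```python
# Illustrative sketch (assumed formulation): average the local convective
# coefficient h_i over micro segments, weighting each segment by its heat
# duty Q_i, instead of a single whole-channel average.
import numpy as np

h_local = np.array([2500.0, 4100.0, 6800.0, 5200.0, 3100.0])  # W/(m^2*K), per segment (assumed)
q_local = np.array([120.0, 150.0, 210.0, 180.0, 140.0])       # W, heat duty per segment (assumed)

h_weighted = np.sum(h_local * q_local) / np.sum(q_local)   # duty-weighted average
h_arithmetic = h_local.mean()                               # naive unweighted average
print(f"weighted {h_weighted:.0f} vs arithmetic {h_arithmetic:.0f} W/(m^2*K)")
```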
Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters and is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel largely depends on good maneuvering skills and abundant experience from the captain, and the ship may get stuck if steered into ice fields off the channel. Under this circumstance, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the method using the corner point regression network in the second stage achieves an accuracy of 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach 14.58 frames per second.
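To illustrate how predicted corner points can be turned into channel boundary lines, the sketch below fits a least-squares line to each side; the point coordinates and the fitting form are assumptions, not the paper's post-processing.

```python
# Minimal sketch (assumed post-processing, not the paper's network): fit the left
# and right ice-channel boundary lines from predicted corner/edge points.
import numpy as np

left_pts = np.array([[120, 700], [140, 520], [165, 340], [190, 160]], dtype=float)   # (x, y), assumed
right_pts = np.array([[560, 700], [540, 520], [515, 340], [495, 160]], dtype=float)

def fit_line(points: np.ndarray):
    """Least-squares fit x = a*y + b, convenient for near-vertical channel borders."""
    a, b = np.polyfit(points[:, 1], points[:, 0], deg=1)
    return a, b

print("left boundary  x = %.3f*y + %.1f" % fit_line(left_pts))
print("right boundary x = %.3f*y + %.1f" % fit_line(right_pts))
```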
Watermarks can provide reliable and secure copyright protection for optical coherence tomography (OCT) fundus images, and effective image segmentation helps promote OCT image watermarking. However, OCT images contain a large amount of low-quality data, which seriously affects the performance of segmentation methods. Therefore, this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network (RCNN). First, a rough-set-based feature discretization module is designed to preprocess the input data. Second, a dual attention mechanism for feature channels and spatial regions is added to the CNN to enable the model to adaptively select important information for fusion. Finally, a refinement module that enhances the extraction of multi-scale information is added to improve edge accuracy in segmentation. RCNN is compared with CE-Net and MultiResUNet on 83 gold-standard 3D retinal OCT data samples. The average Dice similarity coefficient (DSC) obtained by RCNN is 6% higher than that of CE-Net, and the average 95th-percentile Hausdorff distance (95HD) and average symmetric surface distance (ASD) obtained by RCNN are 32.4% and 33.3% lower than those of MultiResUNet, respectively. We also evaluate the effect of feature discretization, analyze the initial learning rate of RCNN, and conduct ablation experiments with four different models. The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images, providing strong support for its application in medical image watermarking.
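The following sketch shows how the three reported metrics (DSC, 95HD, ASD) can be computed for a pair of binary masks from their boundary point sets; it is an illustration with placeholder masks, not the paper's evaluation code.

```python
# Sketch of DSC, 95HD, and ASD for two binary masks using boundary point sets.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of foreground pixels touching the background (simple boundary proxy)."""
    m = mask.astype(bool)
    return np.argwhere(m & ~binary_erosion(m))

def dsc(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95_and_asd(a, b):
    pa, pb = surface_points(a), surface_points(b)
    d = cdist(pa, pb)                           # pairwise boundary distances
    d_ab, d_ba = d.min(axis=1), d.min(axis=0)   # directed nearest-neighbour distances
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    asd = (d_ab.mean() + d_ba.mean()) / 2.0
    return hd95, asd

pred = np.zeros((64, 64), np.uint8); pred[20:50, 20:50] = 1
gt = np.zeros((64, 64), np.uint8); gt[22:52, 18:48] = 1
print(dsc(pred, gt), *hd95_and_asd(pred, gt))
```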
Lung cancer is a disease of the lungs that gravely jeopardizes human health, so early detection and treatment are paramount for the preservation of human life. Lung computed tomography (CT) image sequences can explicitly delineate the pathological condition of the lungs. To meet physicians' need for accurate diagnosis, rapid segmentation of the region harboring lung cancer is of great significance. We use computer-aided methods to emulate the diagnostic process, in which physicians concentrate on lung cancer in a sequential manner, build an interpretable model, and attain segmentation of lung cancer. The specific advances are as follows: 1) Concentration on the lung parenchyma region: based on 16-bit CT image capture and the luminance characteristics of lung cancer, we propose an intercept histogram algorithm. 2) Focus on the specific locus of lung malignancy: utilizing the spatial interrelation of lung cancer, we propose a memory-based Unet architecture and incorporate skip connections. 3) Data imbalance: given the prevalent overabundance of negative samples and paucity of positive samples, we scrutinize the existing loss functions and suggest a mixed loss function. Experimental results on existing publicly available datasets and assembled datasets demonstrate that the segmentation efficacy, measured as the Area Overlap Measure (AOM), exceeds 0.81, a marked improvement over conventional algorithms that assists physicians in diagnosis.
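The abstract proposes a mixed loss to counter the imbalance between negative and positive pixels. The PyTorch sketch below shows one common form of such a mix (weighted binary cross-entropy plus soft Dice); the exact formulation and weights used in the paper may differ.

```python
# Hedged sketch of a "mixed" segmentation loss for heavy class imbalance
# (weighted BCE + soft Dice); not the paper's exact formulation.
import torch
import torch.nn.functional as F

def mixed_loss(logits: torch.Tensor, target: torch.Tensor,
               pos_weight: float = 10.0, alpha: float = 0.5) -> torch.Tensor:
    """logits, target: (N, 1, H, W); target in {0, 1}. alpha balances BCE vs. Dice."""
    bce = F.binary_cross_entropy_with_logits(
        logits, target.float(), pos_weight=torch.tensor(pos_weight))
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - ((2 * inter + 1.0) / (denom + 1.0)).mean()
    return alpha * bce + (1.0 - alpha) * dice

loss = mixed_loss(torch.randn(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.9).float())
print(loss.item())
```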
In this study, the vertical components of broadband teleseismic P wave data recorded by China Earthquake Network are used to image the rupture processes of the February 6th, 2023 Turkish earthquake doublet via back projection analysis. Data in two frequency bands (0.5-2 Hz and 1-3 Hz) are used in the imaging processes. The results show that the rupture of the first event extends about 200 km to the northeast and about 150 km to the southwest, lasting ~90 s in total. The southwestern rupture is triggered by the northeastern rupture, demonstrating a sequential bidirectional unilateral rupture pattern. The rupture of the second event extends approximately 80 km in both northeast and west directions, lasting ~35 s in total, and demonstrates a typical bilateral rupture feature. The cascading ruptures on both sides also reflect the occurrence of selective rupture behaviors on bifurcated faults. In addition, we observe super-shear ruptures on certain fault sections with relatively straight fault structures and sparse aftershocks.
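A minimal back-projection sketch follows to illustrate the shift-and-stack idea behind the imaging: each station's waveform is delayed by its predicted travel time to a candidate grid point and stacked. The synthetic arrays and the simple circular shift are assumptions, not the study's processing chain.

```python
# Simplified back-projection sketch: for each candidate source grid point, delay
# each station's waveform by its predicted travel time and stack; high stack
# energy marks the rupture location at that time.
import numpy as np

def back_project(waveforms: np.ndarray, travel_times: np.ndarray, dt: float) -> np.ndarray:
    """waveforms: (n_sta, n_samples); travel_times: (n_grid, n_sta) in seconds.
    Returns stacked energy of shape (n_grid, n_samples)."""
    n_sta, n_samp = waveforms.shape
    energy = np.zeros((travel_times.shape[0], n_samp))
    for g, tt in enumerate(travel_times):
        shifts = np.round(tt / dt).astype(int)
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            stack += np.roll(waveforms[s], -shifts[s])  # circular shift as a simple stand-in for windowed alignment
        energy[g] = (stack / n_sta) ** 2
    return energy

# Toy example with synthetic data (all values assumed).
wf = np.random.randn(20, 3000)
tt = np.random.uniform(400, 500, size=(50, 20))
print(back_project(wf, tt, dt=0.1).shape)
```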
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with the ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, consisting of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA blocks) are inserted between the encoder and decoder, which enables the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of the Dice coefficient (oesophagus 0.8714, heart 0.9516, trachea 0.9286, aorta 0.9510) and Hausdorff distance (oesophagus 0.2541, heart 0.1514, trachea 0.1722, aorta 0.1114), and it significantly outperforms the baseline. The approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
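The sketch below illustrates a self-attention block that uses cosine similarity as the query-key similarity function, in the spirit of the CS-SA block described above; the layer sizes, temperature, and residual placement are assumptions.

```python
# Hedged sketch of self-attention with cosine similarity as the query-key
# similarity function (not the published CS-SA block definition).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineSelfAttention(nn.Module):
    def __init__(self, dim: int, temperature: float = 0.1):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. flattened encoder feature maps
        q = F.normalize(self.q(x), dim=-1)
        k = F.normalize(self.k(x), dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.temperature, dim=-1)
        return x + attn @ self.v(x)        # residual connection

x = torch.randn(2, 196, 256)
print(CosineSelfAttention(256)(x).shape)   # torch.Size([2, 196, 256])
```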
Lung cancer is a leading cause of global mortality. Early detection of pulmonary tumors can significantly enhance patients' survival rate. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to enhance the detection of pulmonary nodules with high accuracy; nevertheless, the existing methodologies cannot obtain a high level of specificity and sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures, namely an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage uses the improved U-Net architecture to segment candidate nodules extracted from the lung lobes; subsequently, the improved AlexNet architecture is employed to classify lung cancer. In the first stage, the proposed model achieves a Dice score of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The proposed LCSC model is tested and evaluated on the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), and it exhibits remarkable performance compared with existing methods across various evaluation parameters.
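For reference, the reported classification metrics follow directly from a binary confusion matrix, as in the short sketch below (the counts are placeholders, not the paper's results).

```python
# Small sketch of the reported classification metrics from a binary confusion matrix.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "true_positive_rate": tp / (tp + fn),          # sensitivity / recall
        "true_negative_rate": tn / (tn + fp),          # specificity
        "positive_predictive_value": tp / (tp + fp),   # precision
        "negative_predictive_value": tn / (tn + fn),
    }

print(classification_metrics(tp=530, tn=440, fp=12, fn=20))
```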
Pulmonary nodules are small, round, or oval-shaped growths on the lungs. They can be benign (noncancerous) or malignant (cancerous). The size of a nodule can range from a few millimeters to a few centimeters in diameter. Nodules may be found during a chest X-ray or other imaging test for an unrelated health problem. The proposed methodology classifies pulmonary nodules in three stages. First, a 2D histogram thresholding technique is used for volume segmentation, with an ant colony optimization algorithm determining the optimal threshold value. Second, geometrical features such as lines, arcs, extended arcs, and ellipses are used to detect oval shapes. Third, Histogram Oriented Surface Normal Vector (HOSNV) feature descriptors are used to identify nodules of different sizes and shapes through a scale- and rotation-invariant texture description. Nodule classification is performed with the XGBoost classifier. The results are tested and validated using the Lung Image Consortium Database (LICD). The proposed method has a sensitivity of 98.49% for nodules sized 3–30 mm.
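The final classification step uses XGBoost on per-nodule descriptors; the sketch below shows a generic version of that step with placeholder features, not LICD data or the paper's configuration.

```python
# Hedged sketch of the XGBoost classification stage on per-nodule feature
# vectors (e.g., HOSNV descriptors); the feature matrix is random placeholder data.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))          # 200 candidate nodules x 32 descriptor values (assumed)
y = rng.integers(0, 2, size=200)        # 0 = non-nodule, 1 = nodule (assumed labels)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X[:150], y[:150])
print("held-out accuracy:", (clf.predict(X[150:]) == y[150:]).mean())
```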
The distinction and precise identification of tumor nodules are crucial for timely lung cancer diagnosis and planning intervention. This research work addresses the major issues pertaining to the field of medical image processing while focusing on lung cancer Computed Tomography (CT) images. In this context, the paper proposes an improved lung cancer segmentation technique based on the strengths of nature-inspired approaches. The better resolution of CT is exploited to distinguish healthy subjects from those who have lung cancer. In this process, the visual challenges of K-means are addressed by integrating four nature-inspired swarm intelligent techniques: K-means with Artificial Bee Colony (ABC), K-means with Cuckoo Search Algorithm (CSA), K-means with Particle Swarm Optimization (PSO), and K-means with Firefly Algorithm (FFA). Testing and evaluation are performed on the Early Lung Cancer Action Program (ELCAP) database. The simulation analysis uses lung cancer images evaluated against the metrics precision, sensitivity, specificity, F-measure, accuracy, Matthews Correlation Coefficient (MCC), Jaccard, and Dice. The detailed evaluation shows that K-means with Cuckoo Search Algorithm (CSA) significantly improved the quality of lung cancer segmentation in comparison with the other optimization approaches. The proposed approach (K-means with CSA) achieves precision, sensitivity, and F-measure of 0.942, 0.964, and 0.953, respectively, and an average accuracy of 93%. The experimental results show that K-means with ABC, K-means with PSO, K-means with FFA, and K-means with CSA achieve improvements of 10.8%, 13.38%, 13.93%, and 15.7%, respectively, in the accuracy measure compared with K-means segmentation for lung cancer images. Further, the proposed K-means with CSA achieves a significant improvement in accuracy and can therefore be utilized by researchers for improved segmentation of medical image datasets when identifying the targeted region of interest.
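The sketch below shows the plain K-means baseline on a CT slice; the swarm-intelligence tuning (ABC/CSA/PSO/FFA) that the paper wraps around K-means is not reproduced, and the slice is placeholder data.

```python
# Baseline sketch of the K-means segmentation step only.
import numpy as np
from sklearn.cluster import KMeans

slice_hu = np.random.randint(-1000, 400, size=(128, 128)).astype(float)  # placeholder CT slice
pixels = slice_hu.reshape(-1, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(slice_hu.shape)

# Pick the cluster with the highest mean intensity as a crude lesion/tissue candidate.
candidate = labels == np.argmax(kmeans.cluster_centers_.ravel())
print(candidate.sum(), "pixels in the brightest cluster")
```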
Gliomas are aggressive brain tumors known for their heterogeneity, unclear borders, and diverse locations on Magnetic Resonance Imaging (MRI) scans. These factors present significant challenges for MRI-based segmentation, a crucial step for effective treatment planning and monitoring of glioma progression. This study proposes a novel deep learning framework, ResNet Multi-Head Attention U-Net (ResMHA-Net), to address these challenges and enhance glioma segmentation accuracy. ResMHA-Net leverages the strengths of both residual blocks from the ResNet architecture and multi-head attention mechanisms. This combination empowers the network to prioritize informative regions within the 3D MRI data and capture long-range dependencies. By doing so, ResMHA-Net effectively segments intricate glioma sub-regions and reduces the impact of uncertain tumor boundaries. We rigorously trained and validated ResMHA-Net on the BraTS 2018, 2019, 2020 and 2021 datasets. Notably, ResMHA-Net achieved superior segmentation accuracy on the BraTS 2021 dataset compared with the previous years, demonstrating its adaptability and robustness across diverse datasets. Furthermore, we collected the predicted masks obtained from three datasets to enhance survival prediction, effectively augmenting the dataset size. Radiomic features were then extracted from these predicted masks and, along with clinical data, were used to train a novel ensemble learning-based machine learning model for survival prediction. This model employs a voting mechanism that aggregates predictions from multiple models, leading to significant improvements over existing methods. The ensemble approach capitalizes on the strengths of various models, resulting in more accurate and reliable predictions of patient survival. Importantly, we achieved an accuracy of 73% for overall survival (OS) prediction.
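The survival model aggregates predictions from multiple learners by voting; the sketch below shows a generic soft-voting ensemble over radiomic-plus-clinical feature vectors. The member models and data are placeholders, not the paper's ensemble.

```python
# Generic soft-voting ensemble sketch for survival-class prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))            # radiomic + clinical features (assumed)
y = rng.integers(0, 3, size=300)          # short/mid/long survivor classes (assumed)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True))],
    voting="soft")
ensemble.fit(X[:240], y[:240])
print("held-out accuracy:", ensemble.score(X[240:], y[240:]))
```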
This research focuses on addressing the challenges associated with image detection in low-light environments, particularly by applying artificial intelligence techniques to machine vision and object recognition systems. The primary goal is to tackle issues related to recognizing objects with low brightness levels. In this study, the Intel RealSense Lidar Camera L515 is used to simultaneously capture color information and 16-bit depth images. The detection scenarios are categorized into normal-brightness and low-brightness situations. When the system determines a normal-brightness environment, normal-brightness images are recognized using deep learning methods. For low-brightness situations, three methods are proposed for recognition. The first is the Segmentation with Depth image (SD) method, which involves segmenting the depth image, creating a mask from the segmented depth image, mapping the obtained mask onto the true color (RGB) image to obtain a background-reduced RGB image, and recognizing the segmented image. The second is the HDV method (hue, depth, value), which combines RGB images converted to HSV images (hue, saturation, value) with depth images D to form HDV images for recognition. The third is the HSD (hue, saturation, depth) method, which similarly combines RGB images converted to HSV images with depth images D to form HSD images for recognition. In the experimental results, the average recognition rate in normal-brightness environments using image recognition methods is 91%. For low-brightness environments, using the SD method with original images for training and segmented images for recognition achieves an average recognition rate of over 82%. The HDV method achieves an average recognition rate of over 70%, while the HSD method achieves over 84%. The HSD method enables a quick and convenient low-light object recognition system. This research outcome can be applied to nighttime surveillance systems or nighttime road safety systems.
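The HSD construction can be illustrated in a few lines of OpenCV: take hue and saturation from the HSV conversion and substitute a rescaled depth map for the value channel. The frames below are random placeholders and the 8-bit depth rescaling is an assumption.

```python
# Sketch of building an HSD image: H and S from the HSV conversion, depth as the third channel.
import cv2
import numpy as np

rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)     # stand-in for the RGB frame
depth16 = np.random.randint(0, 4000, (480, 640), dtype=np.uint16)  # stand-in for the L515 depth frame

hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
depth8 = cv2.normalize(depth16, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

hsd = hsv.copy()
hsd[:, :, 2] = depth8        # replace V with depth -> (hue, saturation, depth)
print(hsd.shape, hsd.dtype)
```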
High-resolution remote sensing image segmentation is a challenging task. In urban remote sensing, the presence of occlusions and shadows often results in blurred or invisible object boundaries, thereby increasing the difficulty of segmentation. In this paper, an improved network with a cross-region self-attention mechanism for multi-scale features, based on DeepLabv3+, is designed to address the difficulties of small object segmentation and blurred target edge segmentation. First, we use CrossFormer as the backbone feature extraction network to achieve the interaction between large- and small-scale features, and establish self-attention associations between features at both large and small scales to capture global contextual feature information. Next, an improved atrous spatial pyramid pooling module is introduced to establish multi-scale feature maps with large- and small-scale feature associations, and attention vectors are added in the channel direction to enable adaptive adjustment of multi-scale channel features. The proposed network model is validated using the Potsdam and Vaihingen datasets. The experimental results show that, compared with existing techniques, the network model designed in this paper can extract and fuse multi-scale information, more clearly extract edge information and small-scale information, and segment boundaries more smoothly. Experimental results on public datasets demonstrate the superiority of our method compared with several state-of-the-art networks.
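The channel-direction attention vectors mentioned above can be illustrated with a squeeze-and-excitation-style block that re-weights the fused multi-scale channels; this is a generic sketch, not the paper's improved ASPP module.

```python
# Hedged sketch of channel attention applied to a fused multi-scale feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x).view(x.size(0), x.size(1), 1, 1)   # per-channel weights in (0, 1)
        return x * w

fused = torch.randn(2, 256, 64, 64)        # concatenated multi-scale feature map (assumed shape)
print(ChannelAttention(256)(fused).shape)
```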
This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in the task of 3D point cloud semantic segmentation to capture semantic integrity, which is empowered by dynamic capsule routing with a 3D Capsule Networks (CapsNets) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Second, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype enhancement version for capturing rich semantics. Besides, the shot-correlation within the category is calculated via the interaction of different samples to enhance the intra-category similarity. Ablation studies prove that the involved part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, under the integration of these two modules, quantitative and qualitative experiments on two public benchmarks, including S3DIS and ScanNet, indicate the superior performance of the proposed framework on the task of 3D point cloud semantic segmentation compared with some state-of-the-art methods.
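A generic stand-in for the multi-prototype idea is sketched below: each class's support features are clustered into several prototypes and query points take the label of the nearest prototype. The feature vectors, the number of prototypes, and the distance are assumptions.

```python
# Generic multi-prototype sketch (not the paper's enhancement module).
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(support_feats: np.ndarray, labels: np.ndarray, k: int = 3):
    protos, proto_labels = [], []
    for c in np.unique(labels):
        centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
            support_feats[labels == c]).cluster_centers_
        protos.append(centres)
        proto_labels.extend([c] * k)
    return np.vstack(protos), np.array(proto_labels)

support = np.random.randn(200, 64)          # support point features (placeholder)
support_y = np.repeat([0, 1], 100)          # two classes (placeholder)
query = np.random.randn(50, 64)             # query point features (placeholder)

P, P_y = build_prototypes(support, support_y)
pred = P_y[np.argmin(((query[:, None, :] - P[None]) ** 2).sum(-1), axis=1)]  # nearest prototype
print(pred[:10])
```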
Colorectal cancer, a malignant lesion of the intestines, significantly affects human health and life, emphasizing the necessity of early detection and treatment. Accurate segmentation of colorectal cancer regions directly impacts subsequent staging, treatment methods, and prognostic outcomes. While colonoscopy is an effective method for detecting colorectal cancer, its data collection approach can cause patient discomfort. To address this, current research utilizes Computed Tomography (CT) imaging; however, conventional CT images only capture transient states, lacking sufficient representational capability to precisely locate colorectal cancer. This study utilizes enhanced CT images, constructing a deep feature network from the arterial, portal venous, and delay phases to simulate the physician's diagnostic process and achieve accurate cancer segmentation. The innovations include: 1) utilizing portal venous phase CT images to introduce a context-aware multi-scale aggregation module for preliminary shape extraction of colorectal cancer; 2) building an image sequence based on the arterial and delay phases, transforming the cancer segmentation issue into an anomaly detection problem, establishing a pixel-pairing strategy, and proposing a colorectal cancer segmentation algorithm using a Siamese network. Experiments with 84 clinical cases of colorectal cancer enhanced CT data demonstrated an Area Overlap Measure of 0.90, significantly better than Fully Convolutional Networks (FCNs) at 0.20. Future research will explore the relationship between conventional and enhanced CT to further reduce segmentation time and improve accuracy.
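The pixel-pairing/anomaly-detection idea can be sketched with a shared (Siamese) encoder that embeds co-registered arterial- and delay-phase patches and scores their embedding distance; the architecture, patch size, and cosine distance below are assumptions, not the paper's network.

```python
# Hedged Siamese sketch: a shared encoder embeds paired patches from two CT
# phases; a large embedding distance flags a "changed" (anomalous) region.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        za = F.normalize(self.net(a), dim=1)
        zb = F.normalize(self.net(b), dim=1)
        return 1.0 - (za * zb).sum(dim=1)        # cosine distance as anomaly score

arterial = torch.randn(8, 1, 32, 32)    # co-registered patches from the two phases (placeholders)
delay = torch.randn(8, 1, 32, 32)
print(SiameseEncoder()(arterial, delay))
```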
Breast cancer detection heavily relies on medical imaging, particularly ultrasound, for early diagnosis and effective treatment. This research addresses the challenges associated with computer-aided diagnosis (CAD) of breast cancer from ultrasound images. The primary challenge is accurately distinguishing between malignant and benign tumors, complicated by factors such as speckle noise, variable image quality, and the need for precise segmentation and classification. The main objective of the paper is to develop an advanced methodology for breast ultrasound image classification, focusing on speckle noise reduction, precise segmentation, feature extraction, and machine learning-based classification. A unique approach is introduced that combines Enhanced Speckle Reduced Anisotropic Diffusion (SRAD) filters for speckle noise reduction, U-NET-based segmentation, Genetic Algorithm (GA)-based feature selection, and Random Forest and Bagging Tree classifiers, resulting in a novel and efficient model. To test and validate the hybrid model, rigorous experiments were performed; the results show that the proposed hybrid model achieved an accuracy rate of 99.9%, outperforming other existing techniques and significantly reducing computational time. This enhanced accuracy, along with improved sensitivity and specificity, makes the proposed hybrid model a valuable addition to CAD systems for breast cancer diagnosis, ultimately enhancing diagnostic accuracy in clinical applications.
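The last stage trains Random Forest and Bagging Tree classifiers on the GA-selected features; the sketch below shows that comparison on placeholder data (the GA selection and SRAD/U-NET stages are not reproduced).

```python
# Sketch of the final classification stage: Random Forest vs. a bagging ensemble
# of decision trees on a placeholder feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 20))       # GA-selected tumour features (assumed)
y = rng.integers(0, 2, size=400)     # 0 = benign, 1 = malignant (assumed)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
bag = BaggingClassifier(n_estimators=100, random_state=0)   # default base learner is a decision tree
for name, model in [("random forest", rf), ("bagging tree", bag)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```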
There is no unified planning standard for unstructured roads, and the morphological structures of these roads are complex and varied. It is important to maintain a balance between accuracy and speed for unstructured road extraction models. Unstructured road extraction algorithms based on deep learning suffer from problems such as high model complexity, high computational cost, and an inability to adapt to current edge computing devices, so it is best to use lightweight network models. Considering the need for lightweight models and the characteristics of unstructured roads with different pattern shapes, such as blocks and strips, a TMB (Triple Multi-Block) feature extraction module is proposed, and the overall structure of the TMBNet network is described. The TMB module was compared with SS-nbt, Non-bottleneck-1D, and other modules via experiments. The feasibility and effectiveness of the TMB module design were proven through experiments and visualizations. A comparison experiment using multiple convolution kernel categories proved that the TMB module can improve the segmentation accuracy of the network. The comparison with different semantic segmentation networks demonstrates that the TMBNet network has advantages in unstructured road extraction.
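To illustrate the idea of mixing block- and strip-oriented kernels in one lightweight module, the sketch below builds a multi-branch block with square, horizontal-strip, and vertical-strip convolutions; it is an illustration of the idea, not the published TMB module definition.

```python
# Hedged sketch of a multi-branch block mixing square and strip convolutions
# to match block- and strip-shaped road patterns.
import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        c = channels // 4
        self.square = nn.Conv2d(channels, c, 3, padding=1)                 # block-like patterns
        self.h_strip = nn.Conv2d(channels, c, (1, 7), padding=(0, 3))      # horizontal strips
        self.v_strip = nn.Conv2d(channels, c, (7, 1), padding=(3, 0))      # vertical strips
        self.point = nn.Conv2d(channels, channels - 3 * c, 1)              # cheap 1x1 branch
        self.fuse = nn.Sequential(nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([self.square(x), self.h_strip(x), self.v_strip(x), self.point(x)], dim=1)
        return self.fuse(out) + x      # residual connection keeps the block lightweight

x = torch.randn(2, 64, 128, 128)
print(MultiBranchBlock(64)(x).shape)
```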
Funding for the few/zero-shot visual semantic segmentation survey: supported by the National Key Research and Development Program of China (2021YFB1714300) and the National Natural Science Foundation of China (62233005); in part by the CNPC Innovation Fund (2021D002-0902), the Fundamental Research Funds for the Central Universities, and Shanghai AI Lab; sponsored by the Shanghai Gaofeng and Gaoyuan Project for University Academic Program Development.
Funding for the PCHE supercritical methane study: provided by the Science and Technology Development Project of Jilin Province (No. 20230101338JC).
Funding for the ice channel identification study: financially supported by the National Key Research and Development Program (Grant No. 2022YFE0107000), the General Projects of the National Natural Science Foundation of China (Grant No. 52171259), and the High-Tech Ship Research Project of the Ministry of Industry and Information Technology (Grant No. [2021]342).
Funding for the OCT fundus image watermarking study: the China Postdoctoral Science Foundation under Grant 2021M701838, the Natural Science Foundation of Hainan Province of China under Grants 621MS042 and 622MS067, and the Hainan Medical University Teaching Achievement Award Cultivation under Grant HYjcpx202209.
Funding for the lung cancer CT segmentation study: supported by Light of West China (No. XAB2022YN10).
Funding for the Turkish earthquake doublet back-projection study: supported by the National Key R&D Program of China (No. 2022YFF0800601) and the National Scientific Foundation of China (Nos. 41930103 and 41774047).
Funding for the CS-SA U-Net OAR segmentation study: the PID2022-137451OB-I00 and PID2022-137629OA-I00 projects funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.
Funding for the LCSC lung cancer segmentation and classification study: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), Grant Number IMSIU-RP23044.
Funding for the swarm-intelligence K-means lung segmentation study: the Researchers Supporting Project (RSP2023R395), King Saud University, Riyadh, Saudi Arabia.
Funding for the ResMHA-Net glioma segmentation study: the Deanship of Research and Graduate Studies at King Khalid University, through a Large Research Project under grant number RGP2/254/45.
Funding for the low-light object recognition study: the National Science and Technology Council of Taiwan under Grant NSTC 112-2221-E-130-005.
Funding for the cross-region self-attention remote sensing segmentation study: the National Natural Science Foundation of China (Grant Number 62066013) and the Hainan Provincial Natural Science Foundation of China (Grant Numbers 622RC674 and 2019RC182).
Funding for the few-shot 3D point cloud segmentation study: supported by the National Natural Science Foundation of China under Grant No. 62001341, the National Natural Science Foundation of Jiangsu Province under Grant No. BK20221379, and the Jiangsu Engineering Research Center of Digital Twinning Technology for Key Equipment in Petrochemical Process under Grant No. DTEC202104.
Funding for the colorectal cancer enhanced-CT segmentation study: supported by the Natural Science Foundation of China (No. 82372035), National Transportation Preparedness Projects (No. ZYZZYJ), Light of West China (No. XAB2022YN10), and the China Postdoctoral Science Foundation (No. 2023M740760).
Funding for the breast ultrasound CAD study: funded through Researchers Supporting Project Number RSPD2024R996, King Saud University, Riyadh, Saudi Arabia.
Funding for the TMBNet unstructured road extraction study: supported by the National Natural Science Foundation of China (Grant Nos. 62261160575, 61991414, 61973036) and the Technical Field Foundation of the National Defense Science and Technology 173 Program of China (Grant Nos. 20220601053, 20220601030).