Segmenting vegetation in color images is a complex task, especially when the background and lighting conditions of the environment are uncontrolled. This paper proposes a vegetation segmentation algorithm that combines a supervised and an unsupervised learning method to segment healthy and diseased plant images from the background. During the training stage, a Self-Organizing Map (SOM) neural network is applied to create different color groups from a set of images containing vegetation, acquired from a tomato greenhouse. The color groups are labeled as vegetation and non-vegetation and then used to create two color histogram models corresponding to vegetation and non-vegetation. In the online mode, input images are segmented by a Bayesian classifier using the two histogram models. This algorithm has provided a qualitatively better segmentation rate of images containing plants’ foliage in uncontrolled environments than the segmentation rate obtained by a color index technique, resulting in the elimination of the background and the preservation of important color information. This segmentation method will be applied in disease diagnosis of tomato plants in greenhouses as future work.
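The online step described above reduces to a per-pixel Bayes decision between two color-histogram likelihoods. The following is a minimal NumPy sketch of that decision rule, assuming equal priors, a 16-bin quantization, and toy histograms; none of these values come from the paper.

```python
import numpy as np

def classify_pixels(pixels, hist_veg, hist_bg, prior_veg=0.5, bins=16):
    """Label each RGB pixel as vegetation (True) or background (False)
    by comparing class posteriors built from two color histograms."""
    # Quantize 8-bit RGB values into histogram bins.
    idx = (pixels // (256 // bins)).astype(int)
    p_veg = hist_veg[idx[:, 0], idx[:, 1], idx[:, 2]]
    p_bg = hist_bg[idx[:, 0], idx[:, 1], idx[:, 2]]
    # Bayes rule: choose the class with the larger posterior.
    return p_veg * prior_veg > p_bg * (1.0 - prior_veg)

# Toy histograms (illustrative): vegetation mass in a green bin,
# background mass in a gray bin.
bins = 16
hist_veg = np.full((bins, bins, bins), 1e-6)
hist_bg = np.full((bins, bins, bins), 1e-6)
hist_veg[2, 12, 2] = 0.9    # strong green
hist_bg[10, 10, 10] = 0.9   # grayish background
pixels = np.array([[40, 200, 40], [170, 170, 170]])
print(classify_pixels(pixels, hist_veg, hist_bg))  # [ True False]
```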
Acting as a pilot of the Square Kilometer Array (SKA), the Five-hundred-meter Aperture Spherical Telescope (FAST) project puts forward many innovative ideas, among which the design of the active main reflector shows fascinating potential. The main spherical reflector is to be composed of thousands of small spherical panels, which can be adjusted to fit a paraboloid of revolution in real time. For the construction and performance, the RMS error of the fit must be optimized, so appropriate dimensional limits for the panels need to be determined. This paper addresses the issue of how to divide the spherical reflector mathematically. The advantages and drawbacks of various segmenting methods are discussed and an optimum one is suggested.
Segmentation of Arabic handwriting has been a subject of research in the field of Arabic character recognition for more than 25 years. The majority of reported segmentation techniques share a critical shortcoming: over-segmentation. The aim of segmentation is to produce the letters (segments) of a handwritten word. When a resulting letter (segment) is made of more than one piece (stroke) instead of one, this is called over-segmentation. Our objective is to overcome this problem by using an Artificial Neural Network (ANN) to verify each resulting segment. We propose a set of heuristic-based rules to assemble strokes in order to report the precise segmented letters. Preprocessing phases that include normalization and feature extraction are required as a prerequisite for the ANN system for recognition and verification. In our previous work [1], we achieved a segmentation success rate of 86%, but without recognition. In this work, our experimental results confirmed a segmentation success rate of no less than 95%.
The wide availability, low radiation dose, and short acquisition time of Cone-Beam CT (CBCT) scans make them an attractive source of data for compiling databases of anatomical structures. However, CBCT has higher noise and lower contrast than helical slice CT, which makes segmentation more challenging, and the optimal methods are not yet known. This paper evaluates several methods of segmenting airway geometries (nares, nasal cavities and pharynx) from typical dental-quality head and neck CBCT data. The nasal cavity has narrow and intricate passages and is separated from the paranasal sinuses by thin walls, making it susceptible to either over- or under-segmentation. The upper airway was split into two parts: the nasal cavity and the pharyngeal region (nasopharynx to larynx). Each part was segmented using global thresholding, multi-step level-set, and region competition methods (the latter using thresholding, clustering and classification initialisation and edge attraction techniques). The segmented 3D surfaces were evaluated against a reference manual segmentation using distance-, overlap- and volume-based metrics. Global thresholding, multi-step level-set, and region competition all gave satisfactory results for the lower part of the airway (nasopharynx to larynx). Edge attraction failed completely. A semi-automatic region-growing segmentation with multi-thresholding (or classification) initialization offered the best-quality segmentation. With some minimal manual editing, it resulted in an accurate upper airway model, as judged by the similarity and volumetric indices, while being the least time-consuming of the semi-automatic methods and relying the least on the operator’s expertise.
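Global thresholding, the simplest of the methods compared above, can be sketched in a few lines. The HU cutoff of -400 and the toy slice below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def global_threshold(volume, hu_threshold=-400):
    """Binary airway mask: voxels darker (more air-like) than the
    cutoff are labeled as airway. The cutoff value is illustrative."""
    return volume < hu_threshold

# Toy CT slice: air near -1000 HU, soft tissue around 40-60 HU.
slice_ = np.array([[-1000, 40], [-950, 60]])
mask = global_threshold(slice_)
print(mask)
```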
Visual attention mechanisms allow humans to extract relevant and important information from raw input percepts. Many applications in robotics and computer vision have modeled human visual attention mechanisms using a bottom-up, data-centric approach. In contrast, recent studies in cognitive science highlight the advantages of a top-down approach to attention mechanisms, especially in applications involving goal-directed search. In this paper, we propose a top-down approach for extracting salient objects/regions of space. The top-down methodology first isolates the different objects in an unorganized point cloud and then compares each object for uniqueness. A measure of saliency based on the properties of geodesic distance on the object’s surface is defined. Our method works on 3D point cloud data and identifies salient objects of high curvature and unique silhouette. These, being the most distinctive features of a scene, are robust to clutter, occlusions, and viewpoint changes. We provide the details of the proposed method and initial experimental results.
Caenorhabditis elegans has been widely used as a model organism in developmental biology due to its invariant development. In this study, we developed a desktop software, CShaperApp, to segment fluorescence-labeled images of cell membranes and analyze cellular morphologies interactively during C. elegans embryogenesis. Based on the previously proposed framework CShaper, CShaperApp empowers biologists to automatically and efficiently extract quantitative cellular morphological data with either an existing deep learning model or one fine-tuned on their in-house dataset. Experimental results show that it takes about 30 min to process a three-dimensional time-lapse (4D) dataset, which consists of 150 image stacks at a ~1.5-min interval and covers C. elegans embryogenesis from the 4-cell to 350-cell stages. The robustness of CShaperApp is also validated with datasets from different laboratories. Furthermore, the modularized implementation increases flexibility in multi-task applications and leaves room for future enhancements. As cell morphology over development has emerged as a focus of interest in developmental biology, CShaperApp is anticipated to pave the way for such studies by accelerating the high-throughput collection of systems-level quantitative data. The software can be freely downloaded from GitHub (cao13jf/CShaperApp) and is executable on Windows, macOS, and Linux operating systems.
Fairy circles are a spatially self-organized structure in coastal salt-marsh vegetation ecosystems that strongly influences the productivity, stability, and resilience of salt-marsh wetlands. UAV imagery is an important data source for locating fairy circles with high precision and interpreting their spatiotemporal evolution, but fairy-circle pixels differ only slightly from background pixels in color and shape, so intelligently and accurately identifying fairy-circle pixels in 2D imagery and grouping the identified pixels into individual fairy circles remains a technical challenge. This paper proposes a fairy-circle segmentation and classification method for UAV imagery that combines the Segment Anything Model (SAM) vision segmentation model with random forest machine learning, enabling the identification and extraction of individual fairy circles. First, using the Sørensen-Dice coefficient (Dice) and Intersection over Union (IOU) as evaluation metrics, a pre-trained SAM model is selected and its parameters are optimized to achieve fully automatic image segmentation, yielding segmentation masks/classes without attribute information. Then, the red, green, and blue (RGB) channel information and 2D spatial coordinates are used to match the segmentation masks with the original image, construct feature indices for the masks, and analyze and select features according to the reduction of out-of-bag (OOB) error and the feature distributions. Finally, the selected features are used to train a random forest model that automatically identifies and classifies fairy-circle vegetation, ordinary vegetation, and bare mudflat. Experimental results show that the method achieves an average correct extraction rate of 96.1% and an average false extraction rate of 9.5% for fairy circles, providing methodological and technical support for accurately characterizing the spatiotemporal pattern of fairy circles and for coastal UAV remote-sensing image processing.
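The Dice and IOU scores used above to screen pre-trained SAM models are standard overlap metrics for binary masks. A minimal sketch follows; the toy masks are illustrative only.

```python
import numpy as np

def iou(a, b):
    """Intersection over Union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

pred = np.array([[1, 1], [0, 1]])
gt   = np.array([[1, 0], [0, 1]])
print(iou(pred, gt))  # 2 overlapping pixels / 3 in the union = 2/3
```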
Visual semantic segmentation aims at separating a visual sample into diverse blocks with specific semantic attributes and identifying the category for each block, and it plays a crucial role in environmental perception. Conventional learning-based visual semantic segmentation approaches rely heavily on large-scale training data with dense annotations and consistently fail to estimate accurate semantic labels for unseen categories. This obstruction has spurred intense interest in studying visual semantic segmentation with the assistance of few/zero-shot learning. The emergence and rapid progress of few/zero-shot visual semantic segmentation make it possible to learn unseen categories from a few labeled or even zero labeled samples, which advances the extension to practical applications. Therefore, this paper focuses on recently published few/zero-shot visual semantic segmentation methods varying from 2D to 3D space and explores the commonalities and discrepancies of technical settlements under different segmentation circumstances. Specifically, the preliminaries of few/zero-shot visual semantic segmentation, including the problem definitions, typical datasets, and technical remedies, are briefly reviewed and discussed. Moreover, three typical instantiations are involved to uncover the interactions of few/zero-shot learning with visual semantic segmentation: image semantic segmentation, video object segmentation, and 3D segmentation. Finally, the future challenges of few/zero-shot visual semantic segmentation are discussed.
The printed circuit heat exchanger (PCHE) is receiving wide attention as a new kind of compact heat exchanger and is considered a promising vaporizer in the LNG process. In this paper, a PCHE straight channel 500 mm in length is established, with a semicircular cross section 1.2 mm in diameter. Numerical simulation is employed to investigate the flow and heat transfer performance of supercritical methane in the channel. The pseudo-boiling theory is adopted, and liquid-like, two-phase-like, and vapor-like regimes are distinguished for supercritical methane to analyze the heat transfer and flow features. The results are presented per micro segment to show the local convective heat transfer coefficient and pressure drop. The convective heat transfer coefficient in segments along the channel shows a significant peak near the pseudo-critical point and a heat transfer deterioration when the average fluid temperature in the segment is higher than the pseudo-critical point. This is explained by the generation of a vapor-like film near the channel wall: the peak is related to a nucleate-boiling-like state and the deterioration to a film-boiling-like state. The effects of parameters including mass flow rate, pressure, and wall heat flux on flow and heat transfer were analyzed. In calculating the averaged heat transfer coefficient of the whole channel, the traditional method shows significant deviation, so a micro-segment weighted average method is adopted. The pressure drop is mainly affected by the mass flux and pressure and little affected by the wall heat flux. The peak of the convective heat transfer coefficient forms only at high mass flux, low wall heat flux, and near-critical pressure, conditions in which the nucleate-boiling-like state appears more easily. Moreover, heat transfer deterioration will always appear, since the supercritical flow finally develops into a film-boiling-like state, so heat transfer deterioration should be taken seriously in the design and safe operation of a vaporizer PCHE. This work clarifies the local heat transfer and flow features of supercritical methane in a microchannel and contributes to a deeper understanding of supercritical methane flow in the vaporization process in a PCHE.
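The contrast between the traditional (naive) average and a micro-segment weighted average can be sketched as below. The abstract does not give the exact weighting, so the heat transferred per segment is assumed as the weight here, and all numbers are illustrative.

```python
import numpy as np

# Local heat transfer coefficients h_i in channel micro segments (W/m^2/K);
# the middle segment mimics the peak near the pseudo-critical point.
h = np.array([2000.0, 5200.0, 1500.0])
# Heat transferred in each segment (W), used as the weight (assumption).
q = np.array([10.0, 30.0, 5.0])

naive_mean = h.mean()                      # unweighted channel average
weighted = np.sum(h * q) / np.sum(q)       # micro-segment weighted average
print(naive_mean, weighted)
```

The weighted value leans toward segments that transfer more heat, which is why the two averages diverge when a sharp local peak is present.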
Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters, and it is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with low ice class often navigate in channels opened up by icebreakers. Navigation in the ice channel often depends to a large extent on good maneuvering skills and the abundant experience of the captain. The ship may get stuck if steered into ice fields off the channel. Under this circumstance, it is very important to study how to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the accuracy of the method using the corner point regression network in the second stage reaches 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, and the processing speed can reach up to 14.58 frames per second.
Watermarks can provide reliable and secure copyright protection for optical coherence tomography (OCT) fundus images. Effective image segmentation is helpful for promoting OCT image watermarking. However, OCT images contain a large amount of low-quality data, which seriously affects the performance of segmentation methods. Therefore, this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network (RCNN). First, a rough-set-based feature discretization module is designed to preprocess the input data. Second, a dual attention mechanism for feature channels and spatial regions is added to the CNN to enable the model to adaptively select important information for fusion. Finally, a refinement module that enhances the extraction of multi-scale information is added to improve the edge accuracy of the segmentation. RCNN is compared with CE-Net and MultiResUNet on 83 gold-standard 3D retinal OCT data samples. The average Dice similarity coefficient (DSC) obtained by RCNN is 6% higher than that of CE-Net. The average 95th-percentile Hausdorff distance (95HD) and average symmetric surface distance (ASD) obtained by RCNN are 32.4% and 33.3% lower than those of MultiResUNet, respectively. We also evaluate the effect of feature discretization, analyze the initial learning rate of RCNN, and conduct ablation experiments with four different models. The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images, providing strong support for its application in medical image watermarking.
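The DSC reported above is a standard overlap metric between a predicted and a reference mask. A minimal sketch, with illustrative toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, gt))  # 2*2 / (3+3) = 0.666...
```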
Lung cancer is a disease of the lungs that gravely jeopardizes human health, so early detection and treatment are paramount for the preservation of human life. Lung computed tomography (CT) image sequences can explicitly delineate the pathological condition of the lungs. To meet the imperative for accurate diagnosis by physicians, expeditious segmentation of the region harboring lung cancer is of utmost significance. We use computer-aided methods to emulate the diagnostic process, in which physicians concentrate on lung cancer in a sequential manner, build an interpretable model, and attain segmentation of lung cancer. The specific advancements can be summarized as follows: 1) Concentration on the lung parenchyma region: based on 16-bit CT image capture and the luminance characteristics of lung cancer, we propose an intercept histogram algorithm. 2) Focus on the specific locus of lung malignancy: utilizing the spatial interrelation of lung cancer, we propose a memory-based U-Net architecture and incorporate skip connections. 3) Data imbalance: in accordance with the prevalent situation of an overabundance of negative samples and a paucity of positive samples, we scrutinize the existing loss functions and propose a mixed loss function. Experimental results with pre-existing publicly available datasets and assembled datasets demonstrate that the segmentation efficacy, measured as the Area Overlap Measure (AOM), is superior to 0.81, which markedly improves upon conventional algorithms, thereby facilitating physicians in diagnosis.
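The abstract does not state the exact form of the mixed loss, so the sketch below uses a common weighted combination of binary cross-entropy and soft Dice loss, which is one standard remedy for the positive/negative imbalance described; the weighting `alpha` and the toy inputs are assumptions.

```python
import numpy as np

def mixed_loss(p, y, alpha=0.5, eps=1e-7):
    """Illustrative mixed loss: alpha * binary cross-entropy
    + (1 - alpha) * soft Dice loss.
    p: predicted foreground probabilities; y: binary ground truth."""
    p = np.clip(p, eps, 1 - eps)
    bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    dice = (2 * np.sum(p * y) + eps) / (np.sum(p) + np.sum(y) + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)

y = np.array([0.0, 0.0, 1.0])   # sparse positives, as in nodule masks
p = np.array([0.1, 0.2, 0.8])
print(mixed_loss(p, y))
```

The Dice term depends only on the foreground overlap, so it keeps the gradient informative even when positives are rare and plain cross-entropy is dominated by the background.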
In this study, the vertical components of broadband teleseismic P-wave data recorded by the China Earthquake Network are used to image the rupture processes of the February 6, 2023 Turkish earthquake doublet via back projection analysis. Data in two frequency bands (0.5-2 Hz and 1-3 Hz) are used in the imaging processes. The results show that the rupture of the first event extends about 200 km to the northeast and about 150 km to the southwest, lasting ~90 s in total. The southwestern rupture is triggered by the northeastern rupture, demonstrating a sequential bidirectional unilateral rupture pattern. The rupture of the second event extends approximately 80 km in both the northeast and west directions, lasting ~35 s in total, and demonstrates a typical bilateral rupture feature. The cascading ruptures on both sides also reflect the occurrence of selective rupture behavior on bifurcated faults. In addition, we observe super-shear ruptures on certain fault sections with relatively straight fault structures and sparse aftershocks.
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OARs). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on a modified U-Net architecture with a ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, which consists of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA) are inserted between the encoder and decoder, enabling the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of the Dice coefficient (oesophagus: 0.8714, heart: 0.9516, trachea: 0.9286, aorta: 0.9510) and Hausdorff distance (oesophagus: 0.2541, heart: 0.1514, trachea: 0.1722, aorta: 0.1114), and it significantly outperforms the baseline. The approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
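The core idea of a CS-SA block is to replace the usual dot-product query-key score with cosine similarity. A minimal NumPy sketch of single-head self-attention with that substitution follows; the temperature `tau`, the random projections, and the token shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cs_self_attention(x, wq, wk, wv, tau=10.0):
    """Self-attention whose query-key score is cosine similarity
    (scaled by a temperature tau, an assumption here) instead of a dot product."""
    q, k, v = x @ wq, x @ wk, x @ wv
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    kn = k / np.linalg.norm(k, axis=-1, keepdims=True)
    scores = tau * (qn @ kn.T)   # cosine similarities lie in [-1, 1]
    return softmax(scores) @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                 # 4 tokens, 8 features
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = cs_self_attention(x, wq, wk, wv)
print(out.shape)  # (4, 8)
```

Normalizing q and k bounds the attention logits, which is one common motivation for cosine-similarity attention.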
Lung cancer is a leading cause of global mortality. Early detection of pulmonary tumors can significantly enhance the survival rate of patients. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to enhance the detection of pulmonary nodules with high accuracy. Nevertheless, the existing methodologies cannot obtain a high level of specificity and sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures, namely an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage involves the utilization of an improved U-Net architecture to segment candidate nodules extracted from the lung lobes. Subsequently, an improved AlexNet architecture is employed to classify lung cancer. During the first stage, the proposed model demonstrates a Dice accuracy of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The suggested improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The proposed LCSC model is tested and evaluated using the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The proposed technique exhibits remarkable performance compared to existing methods across various evaluation parameters.
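The five classification figures quoted above are all derived from the same confusion-matrix counts. A small sketch showing the relations, with hypothetical counts (not the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics derived from raw confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "tpr": tp / (tp + fn),   # true positive rate (sensitivity)
        "tnr": tn / (tn + fp),   # true negative rate (specificity)
        "ppv": tp / (tp + fp),   # positive predictive value (precision)
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a balanced 200-case test set.
m = classification_metrics(tp=96, fp=2, tn=98, fn=4)
print(m["accuracy"])  # 0.97
```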
Breast cancer is one of the major health issues with high mortality rates and a substantial impact on patients and healthcare systems worldwide. Various Computer-Aided Diagnosis (CAD) tools, based on breast thermograms, have been developed for early detection of this disease. However, accurately segmenting the Region of Interest (ROI) from thermograms remains challenging. This paper presents an approach that leverages image acquisition protocol parameters to identify the lateral breast region and estimate its bottom boundary using a second-degree polynomial. The proposed method demonstrated high efficacy, achieving an impressive Jaccard coefficient of 86% and a Dice index of 92% when evaluated against manually created ground truths. Textural features were extracted from each view’s ROI, with significant features selected via Mutual Information for training Multi-Layer Perceptron (MLP) and K-Nearest Neighbors (KNN) classifiers. Our findings revealed that the MLP classifier outperformed the KNN, achieving an accuracy of 86%, a specificity of 100%, and an Area Under the Curve (AUC) of 0.85. The consistency of the method across both sides of the breast suggests its viability as an auto-segmentation tool. Furthermore, the classification results suggest that lateral views of breast thermograms harbor valuable features that can significantly aid in the early detection of breast cancer.
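Estimating a bottom boundary with a second-degree polynomial amounts to a least-squares parabola fit to candidate boundary points. A minimal sketch with `numpy.polyfit`; the synthetic boundary points below are an assumption, standing in for points extracted from a thermogram.

```python
import numpy as np

# Synthetic boundary points (column x, row y) along a breast bottom contour.
x = np.array([0, 20, 40, 60, 80, 100], dtype=float)
y = 0.01 * x**2 - 1.0 * x + 120.0        # ground-truth parabola (assumption)

coeffs = np.polyfit(x, y, deg=2)         # least-squares second-degree fit
boundary = np.polyval(coeffs, x)         # estimated bottom boundary rows
print(np.round(coeffs, 3))
```

On noise-free points the fit recovers the generating coefficients; with real thermogram points the same call returns the best parabola in the least-squares sense.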
Pulmonary nodules are small, round, or oval-shaped growths on the lungs. They can be benign (noncancerous) or malignant (cancerous). The size of a nodule can range from a few millimeters to a few centimeters in diameter. Nodules may be found during a chest X-ray or other imaging test performed for an unrelated health problem. The proposed methodology classifies pulmonary nodules in three stages. First, a 2D histogram thresholding technique is used for volume segmentation, with an ant colony optimization algorithm determining the optimal threshold value. Second, geometrical features such as lines, arcs, extended arcs, and ellipses are used to detect oval shapes. Third, Histogram Oriented Surface Normal Vector (HOSNV) feature descriptors are used to identify nodules of different sizes and shapes through a scale- and rotation-invariant texture description. Nodule classification is performed with the XGBoost classifier. The results are tested and validated using the Lung Image Database Consortium (LIDC) dataset. The proposed method has a sensitivity of 98.49% for nodules sized 3–30 mm.
The distinction and precise identification of tumor nodules are crucial for timely lung cancer diagnosis and planning intervention. This research work addresses the major issues pertaining to the field of medical image processing while focusing on lung cancer Computed Tomography (CT) images. In this context, the paper proposes an improved lung cancer segmentation technique based on the strengths of nature-inspired approaches. The better resolution of CT is exploited to distinguish healthy subjects from those who have lung cancer. In this process, the visual challenges of K-means are addressed with the integration of four nature-inspired swarm intelligent techniques. The techniques experimented with in this paper are K-means with Artificial Bee Colony (ABC), K-means with Cuckoo Search Algorithm (CSA), K-means with Particle Swarm Optimization (PSO), and K-means with Firefly Algorithm (FFA). The testing and evaluation are performed on the Early Lung Cancer Action Program (ELCAP) database. The simulation analysis is performed using lung cancer image sets against the metrics: precision, sensitivity, specificity, F-measure, accuracy, Matthews Correlation Coefficient (MCC), Jaccard, and Dice. The detailed evaluation shows that K-means with the Cuckoo Search Algorithm (CSA) significantly improved the quality of lung cancer segmentation in comparison to the other optimization approaches utilized for lung cancer images. The results exhibit that the proposed approach (K-means with CSA) achieves precision, sensitivity, and F-measure of 0.942, 0.964, and 0.953, respectively, and an average accuracy of 93%. The experimental results prove that K-means with ABC, K-means with PSO, K-means with FFA, and K-means with CSA have achieved improvements of 10.8%, 13.38%, 13.93%, and 15.7%, respectively, for the accuracy measure in comparison to K-means segmentation for lung cancer images. Further, it is highlighted that the proposed K-means with CSA has achieved a significant improvement in accuracy and hence can be utilized by researchers for improved segmentation of medical image datasets to identify the targeted region of interest.
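The common baseline behind all four hybrids above is plain K-means; the swarm algorithms differ in how they choose or refine the cluster centers. A minimal Lloyd-iteration sketch on 1-D intensities follows; the random initialization here is exactly what the swarm-optimized initialization would replace, and the toy intensities are illustrative.

```python
import numpy as np

def kmeans(values, k=2, iters=20, seed=0):
    """Plain Lloyd's K-means on 1-D intensities. The swarm methods in
    the paper replace this random initialization with optimized centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each value to its nearest center, then recompute means.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

# Toy intensities: a dark (background) and a bright (lesion-like) cluster.
intensities = np.array([10, 12, 11, 200, 210, 205], dtype=float)
centers, labels = kmeans(intensities)
print(np.sort(centers))  # two cluster centers, near 11 and 205
```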
文摘Segmenting vegetation in color images is a complex task, especially when the background and lighting conditions of the environment are uncontrolled. This paper proposes a vegetation segmentation algorithm that combines a supervised and an unsupervised learning method to segment healthy and diseased plant images from the background. During the training stage, a Self-Organizing Map (SOM) neural network is applied to create different color groups from a set of images containing vegetation, acquired from a tomato greenhouse. The color groups are labeled as vegetation and non-vegetation and then used to create two color histogram models corresponding to vegetation and non-vegetation. In the online mode, input images are segmented by a Bayesian classifier using the two histogram models. This algorithm has provided a qualitatively better segmentation rate of images containing plants’ foliage in uncontrolled environments than the segmentation rate obtained by a color index technique, resulting in the elimination of the background and the preservation of important color information. This segmentation method will be applied in disease diagnosis of tomato plants in greenhouses as future work.
文摘Acting as a pilot of the Square Kilometer Array (SKA), a Five hundred meter Aperture Spherical Telescope (FAST) project puts forward many innovative ideas, among which the design of the active main reflector shows fascinating potential. The main spherical reflector is to be composed of thousands of small spherical panels, which can be adjusted to fit a paraboloid of revolution in real time. For the construction and performance, the rms of the fit must be optimized, and so appropriate dimensional limits for the panels need to be determined. The issue of how to divide the spherical reflector mathematically is addressed in this paper. The advantages and drawbacks of various segmenting methods are discussed and an optimum one is suggested.
文摘Segmenting Arabic handwritings had been one of the subjects of research in the field of Arabic character recognition for more than 25 years. The majority of reported segmentation techniques share a critical shortcoming, which is over-segmentation. The aim of segmentation is to produce the letters (segments) of a handwritten word. When a resulting letter (segment) is made of more than one piece (stroke) instead of one, this is called over-segmentation. Our objective is to overcome this problem by using an Artificial Neural Networks (ANN) to verify the resulting segment. We propose a set of heuristic-based rules to assemble strokes in order to report the precise segmented letters. Preprocessing phases that include normalization and feature extraction are required as a prerequisite step for the ANN system for recognition and verification. In our previous work [1], we did achieve a segmentation success rate of 86% but without recognition. In this work, our experimental results confirmed a segmentation success rate of no less than 95%.
文摘The wide availability, low radiation dose and short acquisition time of Cone-Beam CT (CBCT) scans make them an attractive source of data for compiling databases of anatomical structures. However CBCT has higher noise and lower contrast than helical slice CT, which makes segmentation more challenging and the optimal methods are not yet known. This paper evaluates several methods of segmenting airway geometries (nares, nasal cavities and pharynx) from typical dental quality head and neck CBCT data. The nasal cavity has narrow and intricate passages and is separated from the paranasal sinuses by thin walls, making it is susceptible to either over- or under-segmentation. The upper airway was split into two: the nasal cavity and the pharyngeal region (nasopharynx to larynx). Each part was segmented using global thresholding, multi-step level-set, and region competition methods (the latter using thresholding, clustering and classification initialisation and edge attraction techniques). The segmented 3D surfaces were evaluated against a reference manual segmentation using distance-, overlap- and volume-based metrics. Global thresholding, multi-step level-set, and region competition all gave satisfactory results for the lower part of the airway (nasopharynx to larynx). Edge attraction failed completely. A semi-automatic region-growing segmentation with multi-thresholding (or classification) initialization offered the best quality segmentation. With some minimal manual editing, it resulted in an accurate upper airway model, as judged by the similarity and volumetric indices, while being the least time consuming of the semi-automatic methods, and relying the least on the operator’s expertise.
Abstract: Visual attention mechanisms allow humans to extract relevant and important information from raw input percepts. Many applications in robotics and computer vision have modeled human visual attention using a bottom-up, data-centric approach. In contrast, recent studies in cognitive science highlight advantages of a top-down approach to attention, especially in applications involving goal-directed search. In this paper, we propose a top-down approach for extracting salient objects/regions of space. The method first isolates individual objects in an unorganized point cloud and then compares each object for uniqueness. A measure of saliency based on geodesic distance over the object's surface is defined. Our method works on 3D point cloud data and identifies salient objects of high curvature and unique silhouette. Because these are the most distinctive features of a scene, they are robust to clutter, occlusions and viewpoint changes. We provide the details of the proposed method and initial experimental results.
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 12090053, 32088101; Hong Kong Innovation and Technology Fund, Grant/Award Numbers: GHP/176/21SZ, InnoHK Project CIMDA; Hong Kong Research Grants Council, Grant/Award Numbers: 11204821, HKBU12101323, HKBU12101520, HKBU12101522, N_HKBU201/18.
Abstract: Caenorhabditis elegans has been widely used as a model organism in developmental biology due to its invariant development. In this study, we developed a desktop software tool, CShaperApp, to segment fluorescence-labeled images of cell membranes and interactively analyze cellular morphologies during C. elegans embryogenesis. Built on the previously proposed framework CShaper, CShaperApp enables biologists to automatically and efficiently extract quantitative cellular morphological data with either an existing deep learning model or one fine-tuned on their in-house dataset. Experimental results show that it takes about 30 min to process a three-dimensional time-lapse (4D) dataset consisting of 150 image stacks at a ~1.5-min interval, covering C. elegans embryogenesis from the 4-cell to 350-cell stages. The robustness of CShaperApp is also validated on datasets from different laboratories. Furthermore, its modularized implementation increases flexibility in multi-task applications and eases future enhancements. As cell morphology over development has emerged as a focus of interest in developmental biology, CShaperApp is anticipated to pave the way for such studies by accelerating the high-throughput collection of systems-level quantitative data. The software can be freely downloaded from GitHub (cao13jf/CShaperApp) and is executable on Windows, macOS, and Linux operating systems.
Abstract: "Fairy circles" are a spatially self-organized structure in coastal salt-marsh vegetation ecosystems that strongly influences the productivity, stability and resilience of salt-marsh wetlands. Unmanned aerial vehicle (UAV) imagery is an important data source for locating fairy circles with high precision and for interpreting their spatiotemporal evolution, but fairy-circle pixels differ only slightly from background pixels in color and shape, so intelligently and accurately identifying fairy-circle pixels in two-dimensional imagery and grouping the identified pixels into individual fairy circles remains a technical challenge. This paper proposes a fairy-circle segmentation and classification method for UAV imagery that combines the Segment Anything Model (SAM) vision segmentation model with random forest machine learning, enabling individual fairy circles to be identified and extracted. First, using the Sørensen-Dice coefficient (Dice) and Intersection over Union (IoU) as evaluation metrics, a pre-trained SAM model is selected and its parameters are optimized to achieve fully automatic image segmentation, yielding segmentation masks/classes without attribute information. Then, the red, green and blue (RGB) channel information and two-dimensional spatial coordinates are used to match the segmentation masks to the original image and to construct feature indices for each mask; features are analyzed and selected according to the reduction of out-of-bag (OOB) error and the distribution of the features. Finally, the selected features are used to train a random forest model that automatically identifies and classifies fairy-circle vegetation, ordinary vegetation and bare mudflat. Experimental results show that the proposed method achieves an average correct extraction rate of 96.1% and an average erroneous extraction rate of 9.5% for fairy circles, providing methodological and technical support for accurately characterizing the spatiotemporal pattern of fairy circles and for coastal UAV remote-sensing image processing.
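The Dice and IoU metrics used above to screen SAM's pre-trained models can be computed directly from the sets of pixels in the predicted and reference masks. A minimal sketch follows (the toy 2x2 masks are illustrative, not the paper's data):

```python
def iou(a, b):
    """Intersection over Union of two pixel sets."""
    union = len(a | b)
    return len(a & b) / union if union else 1.0

def dice(a, b):
    """Sørensen-Dice coefficient of two pixel sets."""
    total = len(a) + len(b)
    return 2 * len(a & b) / total if total else 1.0

pred = {(0, 0), (0, 1), (1, 1)}   # pixel coordinates in the predicted mask
truth = {(0, 1), (1, 1), (1, 0)}  # pixel coordinates in the reference mask
print(iou(pred, truth), dice(pred, truth))  # 0.5 and ~0.667
```

Both scores range over [0, 1]; Dice weights the overlap more generously than IoU (Dice = 2·IoU/(1+IoU)), which is why both are commonly reported together when ranking candidate models.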
Funding: Supported by the National Key Research and Development Program of China (2021YFB1714300) and the National Natural Science Foundation of China (62233005); in part by the CNPC Innovation Fund (2021D002-0902) and the Fundamental Research Funds for the Central Universities and Shanghai AI Lab; sponsored by the Shanghai Gaofeng and Gaoyuan Project for University Academic Program Development.
Abstract: Visual semantic segmentation aims to separate a visual sample into diverse blocks with specific semantic attributes and to identify the category of each block, and it plays a crucial role in environmental perception. Conventional learning-based visual semantic segmentation approaches rely heavily on large-scale training data with dense annotations and consistently fail to estimate accurate semantic labels for unseen categories. This obstruction has spurred intense interest in studying visual semantic segmentation with the assistance of few/zero-shot learning. The emergence and rapid progress of few/zero-shot visual semantic segmentation make it possible to learn unseen categories from a few labeled or even zero labeled samples, which advances the extension to practical applications. Therefore, this paper focuses on recently published few/zero-shot visual semantic segmentation methods, ranging from 2D to 3D space, and explores the commonalities and discrepancies of technical solutions under different segmentation circumstances. Specifically, the preliminaries of few/zero-shot visual semantic segmentation, including problem definitions, typical datasets, and technical remedies, are briefly reviewed and discussed. Moreover, three typical instantiations are examined to uncover the interactions of few/zero-shot learning with visual semantic segmentation: image semantic segmentation, video object segmentation, and 3D segmentation. Finally, the future challenges of few/zero-shot visual semantic segmentation are discussed.
Funding: Provided by the Science and Technology Development Project of Jilin Province (No. 20230101338JC).
Abstract: The printed circuit heat exchanger (PCHE) is receiving wide attention as a new kind of compact heat exchanger and is considered a promising vaporizer in the LNG process. In this paper, a 500 mm long PCHE straight channel with a semicircular cross-section 1.2 mm in diameter is established. Numerical simulation is employed to investigate the flow and heat transfer performance of supercritical methane in the channel. The pseudo-boiling theory is adopted, and liquid-like, two-phase-like, and vapor-like regimes are distinguished for supercritical methane to analyze the heat transfer and flow features. The results are presented per micro-segment to show the local convective heat transfer coefficient and pressure drop. The convective heat transfer coefficient along the channel shows a pronounced peak near the pseudo-critical point and a heat transfer deterioration when the average fluid temperature in a segment exceeds the pseudo-critical point. This is explained by the generation of a vapor-like film near the channel wall: the peak is related to a nucleate-boiling-like state and the deterioration to a film-boiling-like state. The effects of mass flow rate, pressure, and wall heat flux on flow and heat transfer were analyzed. In calculating the average heat transfer coefficient of the whole channel, the traditional method shows significant deviation, so a micro-segment weighted average method is adopted. The pressure drop is mainly affected by the mass flux and pressure and is little affected by the wall heat flux. The peak of the convective heat transfer coefficient forms only at high mass flux, low wall heat flux, and near-critical pressure, conditions under which the nucleate-boiling-like state appears more readily. Moreover, heat transfer deterioration will always appear, since the supercritical flow finally develops into a film-boiling-like state. Heat transfer deterioration should therefore be taken seriously in the design and safe operation of a vaporizer PCHE. This work clarifies the local heat transfer and flow features of supercritical methane in a microchannel and contributes to a deeper understanding of supercritical methane flow during vaporization in a PCHE.
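The micro-segment weighted average mentioned above can be illustrated with a short sketch: the channel-average coefficient is taken as a weighted mean of the local segment coefficients. Weighting by segment wall area, h_avg = Σ(h_i·A_i)/Σ(A_i), is an assumption here; the paper's exact weighting scheme may differ.

```python
def weighted_avg_h(h_local, areas):
    """Area-weighted average heat transfer coefficient over micro-segments.

    h_local : local convective coefficients per segment, W/(m^2·K)
    areas   : wall area of each segment, m^2 (the assumed weights)
    """
    return sum(h * a for h, a in zip(h_local, areas)) / sum(areas)

# Illustrative values: a sharp peak near the pseudo-critical point followed
# by deterioration downstream (film-boiling-like state).
h_local = [2500.0, 6000.0, 1800.0]   # W/(m^2·K)
areas = [0.2, 0.2, 0.6]              # m^2
print(weighted_avg_h(h_local, areas))  # → 2780.0
```

A plain arithmetic mean of the same three values would give 3433 W/(m^2·K), illustrating how an unweighted ("traditional") average can over-represent the short peak region, which is the deviation the micro-segment method corrects.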
Funding: Financially supported by the National Key Research and Development Program (Grant No. 2022YFE0107000), the General Projects of the National Natural Science Foundation of China (Grant No. 52171259), and the High-Tech Ship Research Project of the Ministry of Industry and Information Technology (Grant No. [2021]342).
Abstract: Identification of the ice channel is a basic technology for developing intelligent ships in ice-covered waters and is important for ensuring the safety and economy of navigation. In the Arctic, merchant ships with a low ice class often navigate in channels opened up by icebreakers. Navigation in an ice channel depends to a large extent on the captain's maneuvering skill and experience, and a ship may get stuck if steered into ice fields off the channel. Under these circumstances, it is very important to identify the boundary lines of ice channels with a reliable method. In this paper, a two-stage ice channel identification method is developed based on image segmentation and corner point regression. The first stage employs an image segmentation method to extract channel regions. In the second stage, an intelligent corner regression network is proposed to extract the channel boundary lines from the channel region. A non-intelligent angle-based filtering and clustering method is also proposed and compared with the corner point regression network. The training and evaluation of the segmentation method and corner regression network are carried out on synthetic and real ice channel datasets. The evaluation results show that the method using the corner point regression network in the second stage achieves an accuracy as high as 73.33% on the synthetic ice channel dataset and 70.66% on the real ice channel dataset, with a processing speed of up to 14.58 frames per second.
Funding: Supported by the China Postdoctoral Science Foundation under Grant 2021M701838, the Natural Science Foundation of Hainan Province of China under Grants 621MS042 and 622MS067, and the Hainan Medical University Teaching Achievement Award Cultivation under Grant HYjcpx202209.
Abstract: Watermarks can provide reliable and secure copyright protection for optical coherence tomography (OCT) fundus images, and effective image segmentation helps promote OCT image watermarking. However, OCT images contain a large amount of low-quality data, which seriously affects the performance of segmentation methods. Therefore, this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network (RCNN). First, a rough-set-based feature discretization module is designed to preprocess the input data. Second, a dual attention mechanism over feature channels and spatial regions is added to the CNN so that the model can adaptively select important information for fusion. Finally, a refinement module that enhances the extraction of multi-scale information is added to improve edge accuracy in segmentation. RCNN is compared with CE-Net and MultiResUNet on 83 gold-standard 3D retinal OCT data samples. The average Dice similarity coefficient (DSC) obtained by RCNN is 6% higher than that of CE-Net, and the average 95th-percentile Hausdorff distance (95HD) and average symmetric surface distance (ASD) obtained by RCNN are 32.4% and 33.3% lower, respectively, than those of MultiResUNet. We also evaluate the effect of feature discretization, analyze the initial learning rate of RCNN, and conduct ablation experiments with four different models. The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images, providing strong support for their application in medical image watermarking.
Funding: This work is supported by Light of West China (No. XAB2022YN10).
Abstract: Lung cancer is a disease that gravely endangers human health, so early detection and treatment are paramount for the preservation of life. Lung computed tomography (CT) image sequences can explicitly delineate the pathological condition of the lungs. To meet physicians' need for accurate diagnosis, rapid segmentation of the region harboring lung cancer is of utmost importance. We use computer-aided methods to emulate the diagnostic process, in which physicians concentrate on lung cancer step by step, build an interpretable model, and attain segmentation of lung cancer. The specific advances are as follows: 1) Concentration on the lung parenchyma region: based on 16-bit CT image capture and the luminance characteristics of lung cancer, we propose an intercept-histogram algorithm. 2) Focus on the specific location of the malignancy: exploiting the spatial interrelation of lung cancer, we propose a memory-based U-Net architecture and incorporate skip connections. 3) Data imbalance: given the prevalent overabundance of negative samples and scarcity of positive samples, we scrutinize existing loss functions and propose a mixed loss function. Experimental results on pre-existing publicly available datasets and assembled datasets demonstrate a segmentation efficacy, measured as Area Overlap Measure (AOM), above 0.81, a marked improvement over conventional algorithms that assists physicians in diagnosis.
Funding: Supported by the National Key R&D Program of China (No. 2022YFF0800601) and the National Natural Science Foundation of China (Nos. 41930103 and 41774047).
Abstract: In this study, the vertical components of broadband teleseismic P-wave data recorded by the China Earthquake Network are used to image the rupture processes of the February 6, 2023, Turkish earthquake doublet via back-projection analysis. Data in two frequency bands (0.5-2 Hz and 1-3 Hz) are used in the imaging. The results show that the rupture of the first event extends about 200 km to the northeast and about 150 km to the southwest, lasting ~90 s in total. The southwestern rupture is triggered by the northeastern rupture, demonstrating a sequential bidirectional unilateral rupture pattern. The rupture of the second event extends approximately 80 km in both the northeast and west directions, lasting ~35 s in total, and demonstrates a typical bilateral rupture pattern. The cascading ruptures on both sides also reflect selective rupture behavior on bifurcated faults. In addition, we observe super-shear ruptures on certain fault sections with relatively straight fault structures and sparse aftershocks.
Funding: Supported by the PID2022-137451OB-I00 and PID2022-137629OA-I00 projects funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.
Abstract: Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on a modified U-Net architecture with a ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, consisting of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA blocks) are inserted between the encoder and decoder, enabling the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance on the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset, in terms of both the Dice coefficient (oesophagus: 0.8714, heart: 0.9516, trachea: 0.9286, aorta: 0.9510) and the Hausdorff distance (oesophagus: 0.2541, heart: 0.1514, trachea: 0.1722, aorta: 0.1114), and significantly outperforms the baseline. The approach is shown to be viable for improving the quality of OAR segmentation for radiotherapy planning.
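The core idea of the CS-SA block, replacing the scaled dot product with cosine similarity as the query-key function, can be sketched in a few lines. This single-head, NumPy-only version illustrates the general mechanism, not the authors' implementation; the softmax temperature and shapes are assumptions.

```python
import numpy as np

def cosine_attention(q, k, v, eps=1e-8):
    """Attention where the query-key score is cosine similarity (in [-1, 1])
    instead of the usual scaled dot product, followed by a softmax over keys."""
    qn = q / (np.linalg.norm(q, axis=-1, keepdims=True) + eps)  # unit-norm queries
    kn = k / (np.linalg.norm(k, axis=-1, keepdims=True) + eps)  # unit-norm keys
    sim = qn @ kn.T                                 # cosine similarity matrix
    w = np.exp(sim - sim.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)              # row-wise softmax weights
    return w @ v                                    # weighted sum of values

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(3, 4)) for _ in range(3))
out = cosine_attention(q, k, v)
print(out.shape)  # (3, 4): one attended output per query
```

Because cosine similarity is bounded, the attention logits cannot blow up with feature magnitude, which is one motivation for using it as the similarity function in place of raw dot products.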
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RP23044).
Abstract: Lung cancer is a leading cause of mortality worldwide. Early detection of pulmonary tumors can significantly enhance patients' survival rates. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to detect pulmonary nodules with high accuracy. Nevertheless, existing methodologies cannot achieve both high specificity and high sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures: an improved U-Net and an improved AlexNet. The LCSC model comprises two distinct stages. The first stage uses the improved U-Net architecture to segment candidate nodules extracted from the lung lobes; the improved AlexNet architecture is then employed to classify lung cancer. In the first stage, the proposed model achieves a Dice score of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The LCSC model is tested and evaluated on the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), and exhibits remarkable performance compared with existing methods across various evaluation parameters.
Funding: Supported by research grant SEED-CCIS-2024-166, Prince Sultan University, Saudi Arabia.
Abstract: Breast cancer is one of the major health issues, with high mortality rates and a substantial impact on patients and healthcare systems worldwide. Various Computer-Aided Diagnosis (CAD) tools based on breast thermograms have been developed for early detection of this disease. However, accurately segmenting the Region of Interest (ROI) from thermograms remains challenging. This paper presents an approach that leverages image acquisition protocol parameters to identify the lateral breast region and estimate its bottom boundary using a second-degree polynomial. The proposed method demonstrated high efficacy, achieving a Jaccard coefficient of 86% and a Dice index of 92% when evaluated against manually created ground truths. Textural features were extracted from each view's ROI, with significant features selected via Mutual Information for training Multi-Layer Perceptron (MLP) and K-Nearest Neighbors (KNN) classifiers. Our findings revealed that the MLP classifier outperformed the KNN, achieving an accuracy of 86%, a specificity of 100%, and an Area Under the Curve (AUC) of 0.85. The consistency of the method across both sides of the breast suggests its viability as an auto-segmentation tool, and the classification results suggest that lateral views of breast thermograms harbor valuable features that can significantly aid early detection of breast cancer.
Abstract: Pulmonary nodules are small, round or oval-shaped growths on the lungs. They can be benign (noncancerous) or malignant (cancerous), and range from a few millimeters to a few centimeters in diameter. Nodules may be found during a chest X-ray or other imaging test performed for an unrelated health problem. The proposed methodology classifies pulmonary nodules in three stages. First, a 2D histogram thresholding technique is used for volume segmentation, with an ant colony optimization algorithm determining the optimal threshold value. Second, geometrical features such as lines, arcs, extended arcs, and ellipses are used to detect oval shapes. Third, Histogram-Oriented Surface Normal Vector (HOSNV) feature descriptors identify nodules of different sizes and shapes using a scale- and rotation-invariant texture description. Nodule classification is performed with the XGBoost classifier. The results are tested and validated on the Lung Image Consortium Database (LICD). The proposed method has a sensitivity of 98.49% for nodules sized 3-30 mm.
Funding: Supported by the Researchers Supporting Project (RSP2023R395), King Saud University, Riyadh, Saudi Arabia.
Abstract: The distinction and precise identification of tumor nodules are crucial for timely lung cancer diagnosis and planning intervention. This research addresses major issues in medical image processing for lung cancer Computed Tomography (CT) images. In this context, the paper proposes an improved lung cancer segmentation technique based on the strengths of nature-inspired approaches. The high resolution of CT is exploited to distinguish healthy subjects from those who have lung cancer. In this process, the limitations of K-means are addressed by integrating four nature-inspired swarm-intelligence techniques: K-means with Artificial Bee Colony (ABC), K-means with Cuckoo Search Algorithm (CSA), K-means with Particle Swarm Optimization (PSO), and K-means with Firefly Algorithm (FFA). Testing and evaluation are performed on the Early Lung Cancer Action Program (ELCAP) database. The simulation analysis is performed on lung cancer images against the metrics precision, sensitivity, specificity, F-measure, accuracy, Matthews Correlation Coefficient (MCC), Jaccard, and Dice. The detailed evaluation shows that K-means with CSA significantly improved the quality of lung cancer segmentation in comparison with the other optimization approaches. The proposed approach (K-means with CSA) achieves precision, sensitivity, and F-measure of 0.942, 0.964, and 0.953, respectively, and an average accuracy of 93%. The experimental results show that K-means with ABC, PSO, FFA, and CSA achieved improvements of 10.8%, 13.38%, 13.93%, and 15.7%, respectively, in accuracy compared with plain K-means segmentation for lung cancer images. Given its significant improvement in accuracy, the proposed K-means with CSA can be utilized by researchers to improve the segmentation of medical image datasets for identifying the targeted region of interest.
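As context for the comparison above, plain K-means (the baseline that the swarm-intelligence variants improve, chiefly by optimizing the initial cluster centers) reduces in one dimension to the assign-and-update loop below. The fixed initial centroids here stand in for what ABC/CSA/PSO/FFA would search for; the toy intensities are illustrative.

```python
def kmeans_1d(values, centroids, iters=10):
    """Plain 1-D K-means: alternate nearest-centroid assignment and
    centroid update (mean of each cluster) for a fixed number of passes."""
    centroids = list(centroids)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:  # assign each value to its nearest centroid
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        # update: move each centroid to its cluster mean (keep it if empty)
        centroids = [sum(c) / len(c) if c else m
                     for c, m in zip(clusters, centroids)]
    return centroids

pixels = [10, 12, 11, 90, 95, 88]               # toy CT intensities
print(kmeans_1d(pixels, centroids=[0.0, 100.0]))  # → [11.0, 91.0]
```

Because the final clusters depend on the starting centroids, a metaheuristic such as CSA can search the initialization space for the partition that optimizes a segmentation objective, which is the role the swarm algorithms play in the hybrid methods compared above.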