Journal Articles
3,659 articles found
1. Multi-Level Parallel Network for Brain Tumor Segmentation
Authors: Juhong Tie, Hui Peng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4: 741-757 (17 pages)
Accurate automatic segmentation of gliomas in various sub-regions, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core from 3D multimodal MRI images, is challenging because of its highly heterogeneous appearance and shape. Deep convolution neural networks (CNNs) have recently improved glioma segmentation performance. However, extensive down-sampling such as pooling or strided convolution in CNNs significantly decreases the initial image resolution, resulting in the loss of accurate spatial and object parts information, especially information on the small sub-region tumors, affecting segmentation performance. Hence, this paper proposes a novel multi-level parallel network comprising three different level parallel subnetworks to fully use low-level, mid-level, and high-level information and improve the performance of brain tumor segmentation. We also introduce the Combo loss function to address input class imbalance and false positives and negatives imbalance in deep learning. The proposed method is trained and validated on the BraTS 2020 training and validation dataset. On the validation dataset, our method achieved a mean Dice score of 0.907, 0.830, and 0.787 for the whole tumor, tumor core, and enhancing tumor core, respectively. Compared with state-of-the-art methods, the multi-level parallel network has achieved competitive results on the validation dataset.
Keywords: Convolution neural network, brain tumor segmentation, parallel network
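As a point of reference for the Combo loss mentioned in the abstract above, here is a minimal sketch that mixes a soft Dice term with a weighted cross-entropy term; the alpha/beta weighting and the binary-mask setting are assumptions, not the authors' exact formulation.

```python
import torch

def combo_loss(logits, target, alpha=0.5, beta=0.7, eps=1e-6):
    """alpha balances cross-entropy vs. Dice; beta > 0.5 penalises false negatives more."""
    prob = torch.sigmoid(logits)
    # Soft Dice term: measures overlap, robust to class imbalance.
    inter = (prob * target).sum()
    dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Weighted cross-entropy term: trades off false negatives against false positives.
    ce = -(beta * target * torch.log(prob + eps)
           + (1 - beta) * (1 - target) * torch.log(1 - prob + eps)).mean()
    return alpha * ce + (1 - alpha) * (1 - dice)

# Shape-only example with random tensors.
logits = torch.randn(2, 1, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
print(combo_loss(logits, target).item())
```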
2. UNet Based on Multi-Object Segmentation and Convolution Neural Network for Object Recognition
Authors: Nouf Abdullah Almujally, Bisma Riaz Chughtai, Naif Al Mudawi, Abdulwahab Alazeb, Asaad Algarni, Hamdan A. Alzahrani, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, No. 7: 1563-1580 (18 pages)
The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, heavily rely on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with the acquisition and preprocessing of the image, followed by segmentation using the fine-tuned UNet architecture. Afterward, we use an annotation tool to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE (Accelerated Segmentation and Extraction) features, energy-based edge detection, frequency-based, and blob characteristics. For the classification stage, a convolution neural network (CNN) is employed. This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images. The experimental results, which include complex object datasets like MSRC-v2 and PASCAL-VOC12, have been documented. After analyzing the experimental results, it was found that the PASCAL-VOC12 dataset achieved an accuracy rate of 95%, while the MSRC-v2 dataset achieved an accuracy of 89%. The evaluation performed on these diverse datasets highlights a notably impressive level of performance.
Keywords: UNet, segmentation, BLOB, Fourier transform, convolution neural network
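For readers unfamiliar with the KAZE descriptors cited in this abstract, the snippet below is a hedged sketch of extracting them from a single grayscale crop with OpenCV; the crop itself is a random stand-in, and the authors' full pipeline (UNet segmentation, edge/frequency/blob features, CNN classifier) is not reproduced.

```python
import cv2
import numpy as np

# Stand-in for one segmented object region converted to grayscale.
crop = (np.random.rand(128, 128) * 255).astype(np.uint8)

kaze = cv2.KAZE_create()
keypoints, descriptors = kaze.detectAndCompute(crop, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```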
3. Improved Convolutional Neural Network for Traffic Scene Segmentation
Authors: Fuliang Xu, Yong Luo, Chuanlong Sun, Hong Zhao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3: 2691-2708 (18 pages)
In actual traffic scenarios, precise recognition of traffic participants, such as vehicles and pedestrians, is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask-RCNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long and short-term memory networks and the fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance the algorithm's capability to process both global and local information. Additionally, to increase the network's initial operation stability, the original network activation function was replaced with the Gaussian error linear unit. Experiments were conducted using the publicly available Cityscapes dataset. Comparing the test results, it was observed that the revised algorithm outperformed the original algorithm in terms of AP50, AP75, and other metrics by 8.7% and 9.6% for target detection and 12.5% and 13.3% for segmentation.
Keywords: Instance segmentation, deep learning, convolutional neural network, attention mechanism
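The activation swap described above (replacing the original activation with the Gaussian error linear unit) can be illustrated with a tiny convolutional block; the channel sizes here are arbitrary assumptions.

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.GELU(),  # smoother gradient near zero than ReLU, which can steady early training
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.GELU(),
)
print(block(torch.randn(1, 3, 32, 32)).shape)
```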
4. Efficient Object Segmentation and Recognition Using Multi-Layer Perceptron Networks
Authors: Aysha Naseer, Nouf Abdullah Almujally, Saud S. Alotaibi, Abdulwahab Alazeb, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, No. 1: 1381-1398 (18 pages)
Object segmentation and recognition is an imperative area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to fragment the intended parts of the images into different regions and label them based on their characteristics. Then, two distinct kinds of features are obtained from the segmented images to help identify the objects of interest. An Artificial Neural Network (ANN) is then used to recognize the objects based on their features. Experiments were carried out with three standard datasets, MSRC, MS COCO, and Caltech 101, which are extensively used in object recognition research, to measure the productivity of the suggested approach. The findings from the experiment support the suggested system's validity, as it achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
Keywords: K-region fusion, segmentation, recognition, feature extraction, artificial neural network, computer vision
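A minimal sketch of the k-means clustering-based segmentation step described above, assuming colour values are the clustering features and k is fixed; the feature extraction and ANN recognition stages are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # stand-in RGB image
pixels = img.reshape(-1, 3).astype(np.float32)

# Cluster pixels by colour; each cluster index becomes a region label.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
segments = labels.reshape(img.shape[:2])
print(np.unique(segments))
```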
5. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7: 1103-1128 (26 pages)
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, there are some limitations in the current integration of CNN and Transformer technology in two key aspects. Firstly, most methods either overlook or fail to fully incorporate the complementary nature between local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded in methods that combine CNN and Transformer. To address these issues, we present a groundbreaking dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of Swin Transformer and CNN to generate complementary global and local features. We then designed the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) serves the purpose of aggregating multi-scale features, and the Feature Fusion Module (FFM) is employed to effectively aggregate dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) aims to emphasize the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet exhibits enhanced accuracy in segmentation performance. Compared to other state-of-the-art (SOTA) methods, our segmentation framework exhibits a superior level of competitiveness. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
Keywords: Convolutional neural networks, Swin Transformer, dual branch, medical image segmentation, feature cross fusion
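The Channel Attention Block (CAB) mentioned above is not specified in detail here; as an illustration only, the squeeze-and-excitation style gate below shows one common way to re-weight decoder channels. The reduction ratio and placement are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        gate = self.fc(x.mean(dim=(2, 3)))      # global average pool -> per-channel weight
        return x * gate[:, :, None, None]

print(ChannelGate()(torch.randn(2, 256, 32, 32)).shape)
```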
6. SGT-Net: A Transformer-Based Stratified Graph Convolutional Network for 3D Point Cloud Semantic Segmentation
Authors: Suyi Liu, Jianning Chi, Chengdong Wu, Fang Xu, Xiaosheng Yu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6: 4471-4489 (19 pages)
In recent years, semantic segmentation on 3D point cloud data has attracted much attention. Unlike 2D images where pixels distribute regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. Therefore, it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods either focus on local feature aggregation or long-range context dependency, but fail to directly establish a global-local feature extractor to complete the point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further weight augmentation for crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model can improve the performance of point cloud semantic segmentation.
Keywords: 3D point cloud semantic segmentation, long-range contexts, global-local feature, graph convolutional network, dense-sparse sampling strategy
7. Improved organs at risk segmentation based on modified U-Net with self-attention and consistency regularisation
Authors: Maksym Manko, Anton Popov, Juan Manuel Gorriz, Javier Ramirez. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 4: 850-865 (16 pages)
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with the ResNet-34 encoder, which is the baseline adopted in this work. The new two-branch CS-SA U-Net architecture is proposed, which consists of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA) are inserted between the encoder and decoder, which enabled the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of Dice coefficient (oesophagus: 0.8714, heart: 0.9516, trachea: 0.9286, aorta: 0.9510) and Hausdorff distance (oesophagus: 0.2541, heart: 0.1514, trachea: 0.1722, aorta: 0.1114), and significantly outperforms the baseline. The current approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
Keywords: 3-D computer vision, deep learning, deep neural networks, image segmentation, medical image processing, object segmentation
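To make the CS-SA idea concrete, here is a hedged single-head sketch of self-attention that scores queries against keys with cosine similarity instead of a scaled dot product; the temperature value and single-head form are assumptions, not the paper's exact block.

```python
import torch
import torch.nn.functional as F

def cosine_self_attention(x, wq, wk, wv, tau=0.1):
    """x: (N, C) token features; tau is an assumed softmax temperature."""
    q, k, v = x @ wq, x @ wk, x @ wv
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    attn = torch.softmax((q @ k.t()) / tau, dim=-1)  # cosine similarity as query-key score
    return attn @ v

x = torch.randn(196, 64)
wq, wk, wv = (torch.randn(64, 64) * 0.05 for _ in range(3))
print(cosine_self_attention(x, wq, wk, wv).shape)
```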
8. Colorectal Cancer Segmentation Algorithm Based on Deep Features from Enhanced CT Images
Authors: Shi Qiu, Hongbing Lu, Jun Shu, Ting Liang, Tao Zhou. Computers, Materials & Continua (SCIE, EI), 2024, No. 8: 2495-2510 (16 pages)
Colorectal cancer, a malignant lesion of the intestines, significantly affects human health and life, emphasizing the necessity of early detection and treatment. Accurate segmentation of colorectal cancer regions directly impacts subsequent staging, treatment methods, and prognostic outcomes. While colonoscopy is an effective method for detecting colorectal cancer, its data collection approach can cause patient discomfort. To address this, current research utilizes Computed Tomography (CT) imaging; however, conventional CT images only capture transient states, lacking sufficient representational capability to precisely locate colorectal cancer. This study utilizes enhanced CT images, constructing a deep feature network from the arterial, portal venous, and delay phases to simulate the physician's diagnostic process and achieve accurate cancer segmentation. The innovations include: 1) utilizing portal venous phase CT images to introduce a context-aware multi-scale aggregation module for preliminary shape extraction of colorectal cancer; 2) building an image sequence based on arterial and delay phases, transforming the cancer segmentation issue into an anomaly detection problem, establishing a pixel-pairing strategy, and proposing a colorectal cancer segmentation algorithm using a Siamese network. Experiments with 84 clinical cases of colorectal cancer enhanced CT data demonstrated an Area Overlap Measure of 0.90, significantly better than Fully Convolutional Networks (FCNs) at 0.20. Future research will explore the relationship between conventional and enhanced CT to further reduce segmentation time and improve accuracy.
Keywords: Colorectal cancer, enhanced CT, multi-scale, Siamese network, segmentation
9. Semantic segmentation via pixel-to-center similarity calculation
Authors: Dongyue Wu, Zilin Guo, Aoyan Li, Changqian Yu, Nong Sang, Changxin Gao. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 1: 87-100 (14 pages)
Since the fully convolutional network has achieved great success in semantic segmentation, lots of works have been proposed to extract discriminative pixel representations. However, the authors observe that existing methods still suffer from two typical challenges: (i) the intra-class feature variation between different scenes may be large, leading to difficulty in maintaining the consistency between same-class pixels from different scenes; (ii) the inter-class feature distinction in the same scene could be small, resulting in limited performance in distinguishing different classes in each scene. The authors first rethink semantic segmentation from the perspective of similarity between pixels and class centers. Each weight vector of the segmentation head represents its corresponding semantic class in the whole dataset, which can be regarded as the embedding of the class center. Thus, pixel-wise classification amounts to computing similarity in the final feature space between pixels and the class centers. Under this novel view, the authors propose a Class Center Similarity (CCS) layer to address the above-mentioned challenges by generating adaptive class centers conditioned on each scene and supervising the similarities between class centers. The CCS layer utilises the Adaptive Class Center Module to generate class centers conditioned on each scene, which adapt to the large intra-class variation between different scenes. A specially designed Class Distance Loss (CD Loss) is introduced to control both inter-class and intra-class distances based on the predicted center-to-center and pixel-to-center similarity. Finally, the CCS layer outputs the processed pixel-to-center similarity as the segmentation prediction. Extensive experiments demonstrate that our model performs favourably against the state-of-the-art methods.
Keywords: computer vision, deep neural networks, image segmentation, scene understanding
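The pixel-to-center view described above can be sketched as scoring every pixel feature against one embedding per class; the feature dimension, class count, and cosine normalisation below are illustrative assumptions rather than the paper's exact layer.

```python
import torch
import torch.nn.functional as F

feat = torch.randn(1, 256, 64, 64)   # (B, C, H, W) pixel features from the backbone
centers = torch.randn(19, 256)       # one centre embedding per semantic class

sim = torch.einsum('bchw,kc->bkhw',
                   F.normalize(feat, dim=1), F.normalize(centers, dim=1))
pred = sim.argmax(dim=1)             # per-pixel class = most similar class centre
print(pred.shape)
```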
10. A Comprehensive Systematic Review: Advancements in Skin Cancer Classification and Segmentation Using the ISIC Dataset
Authors: Madiha Hameed, Aneela Zameer, Muhammad Asif Zahoor Raja. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 9: 2131-2164 (34 pages)
The International Skin Imaging Collaboration (ISIC) datasets are pivotal resources for researchers in machine learning for medical image analysis, especially in skin cancer detection. These datasets contain tens of thousands of dermoscopic photographs, each accompanied by gold-standard lesion diagnosis metadata. Annual challenges associated with ISIC datasets have spurred significant advancements, with research papers reporting metrics surpassing those of human experts. Skin cancers are categorized into melanoma and non-melanoma types, with melanoma posing a greater threat due to its rapid potential for metastasis if left untreated. This paper aims to address challenges in skin cancer detection via visual inspection and manual examination of skin lesion images, processes historically known for their laboriousness. Despite notable advancements in machine learning and deep learning models, persistent challenges remain, largely due to the intricate nature of skin lesion images. We review research on convolutional neural networks (CNNs) in skin cancer classification and segmentation, identifying issues like data duplication and augmentation problems. We explore the efficacy of Vision Transformers (ViTs) in overcoming these challenges within ISIC dataset processing. ViTs leverage their capabilities to capture both global and local relationships within images, reducing data duplication and enhancing model generalization. Additionally, ViTs alleviate augmentation issues by effectively leveraging original data. Through a thorough examination of ViT-based methodologies, we illustrate their pivotal role in enhancing ISIC image classification and segmentation. This study offers valuable insights for researchers and practitioners looking to utilize ViTs for improved analysis of dermatological images. Furthermore, this paper emphasizes the crucial role of mathematical and computational modeling processes in advancing skin cancer detection methodologies, highlighting their significance in improving algorithmic performance and interpretability.
Keywords: Medical image, skin cancer classification, skin cancer segmentation, International Skin Imaging Collaboration, convolutional neural network, deep learning
11. Axial Assembled Correspondence Network for Few-Shot Semantic Segmentation (cited: 2)
Authors: Yu Liu, Bin Jiang, Jiaming Xu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 3: 711-721 (11 pages)
Few-shot semantic segmentation aims at training a model that can segment novel classes in a query image with only a few densely annotated support exemplars. It remains a challenge because of large intra-class variations between the support and query images. Existing approaches utilize 4D convolutions to mine semantic correspondence between the support and query images. However, they still suffer from heavy computation, sparse correspondence, and large memory. We propose axial assembled correspondence network (AACNet) to alleviate these issues. The key point of AACNet is the proposed axial assembled 4D kernel, which constructs the basic block for the semantic correspondence encoder (SCE). Furthermore, we propose the deblurring equations to provide more robust correspondence for the aforementioned SCE and design a novel fusion module to mix correspondences in a learnable manner. Experiments on PASCAL-5^i reveal that our AACNet achieves a mean intersection-over-union score of 65.9% for 1-shot segmentation and 70.6% for 5-shot segmentation, surpassing the state-of-the-art method by 5.8% and 5.0%, respectively.
Keywords: Artificial intelligence, computer vision, deep convolutional neural network, few-shot semantic segmentation
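The mean intersection-over-union figures quoted above can be reproduced conceptually with the small helper below, which averages per-class IoU over label maps; it ignores the few-shot episode protocol and is only a sketch of the metric.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 2, (64, 64))
gt = np.random.randint(0, 2, (64, 64))
print(mean_iou(pred, gt, num_classes=2))
```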
12. Dual-Branch-UNet: A Dual-Branch Convolutional Neural Network for Medical Image Segmentation (cited: 2)
Authors: Muwei Jian, Ronghua Wu, Hongyu Chen, Lanqi Fu, Chengdong Yang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 10: 705-716 (12 pages)
In intelligent perception and diagnosis of medical equipment, the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a Dual-Branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation. To be more explicit, we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Then, image features are combined at each layer to produce richer semantic data, and the model's capacity is adjusted to various input images. Meanwhile, in the lower sampling section, we give up pooling and conduct the lower sampling by a convolution operation to control the step size for information fusion. We also employ an attention module in the decoder stage to filter the image noise so as to lessen the response of irrelevant features. Experiments are verified and compared on the DRIVE and ARIA datasets for retinal vessel segmentation. The proposed Dual-Branch-UNet has proved to be superior to five other typical state-of-the-art methods.
Keywords: Convolutional neural network, medical image processing, retinal vessel segmentation
13. DuFNet: Dual Flow Network of Real-Time Semantic Segmentation for Unmanned Driving Application of Internet of Things (cited: 1)
Authors: Tao Duan, Yue Liu, Jingze Li, Zhichao Lian, Qianmu Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 7: 223-239 (17 pages)
The application of unmanned driving in the Internet of Things is one of the concrete manifestations of the application of artificial intelligence technology. Image semantic segmentation can help the unmanned driving system by achieving road accessibility analysis. Semantic segmentation is also a challenging technology for image understanding and scene parsing. We focused on the challenging task of real-time semantic segmentation in this paper. In this paper, we proposed a novel fast architecture for real-time semantic segmentation named DuFNet. Starting from the existing work of Bilateral Segmentation Network (BiSeNet), DuFNet proposes a novel Semantic Information Flow (SIF) structure for context information and a novel Fringe Information Flow (FIF) structure for spatial information. We also proposed two kinds of SIF with cascaded and paralleled structures, respectively. The SIF encodes the input stage by stage in the ResNet18 backbone and provides context information for the feature fusion module. Features from previous stages usually contain rich low-level details but high-level semantics for later stages. The multiple convolutions embedded in the Parallel SIF aggregate the corresponding features among different stages and generate a powerful global context representation with less computational cost. The FIF consists of a pooling layer and an upsampling operator followed by a projection convolution layer. The concise component provides more spatial details for the network. Compared with BiSeNet, our work achieved faster speed and comparable performance with 72.34% mIoU accuracy and 78 FPS on the Cityscapes dataset based on the ResNet18 backbone.
Keywords: Real-time semantic segmentation, convolutional neural network, feature fusion, unmanned driving, fringe information flow
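The Fringe Information Flow described above is summarised as a pooling layer and an upsampling operator followed by a projection convolution; the sketch below assembles exactly those pieces, with the channel count and pooling type as assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FringeFlow(nn.Module):
    def __init__(self, channels=128):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=2)                   # condense spatial detail
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)  # projection convolution

    def forward(self, x):
        y = self.pool(x)
        y = F.interpolate(y, size=x.shape[-2:], mode='bilinear', align_corners=False)
        return self.proj(y)

print(FringeFlow()(torch.randn(1, 128, 56, 56)).shape)
```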
14. Short-term and long-term memory self-attention network for segmentation of tumours in 3D medical images
Authors: Mingwei Wen, Quan Zhou, Bo Tao, Pavel Shcherbakov, Yang Xu, Xuming Zhang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 4: 1524-1537 (14 pages)
Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, occurrence of multiple tumours and variable tumour shapes and sizes. The popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and Transformer. The former cannot extract the global image features effectively while the latter lacks the inductive bias and involves the complicated computation for 3D volume data. The existing hybrid CNN-Transformer network can only provide the limited performance improvement or even poorer segmentation performance than the pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlation among the region features at different levels extracted by the CNN. Then, the memory structure filters and combines the above information to exclude the similar regions and detect the multiple tumours. Finally, the multi-layer reconstruction blocks will predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in terms of subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method provides Dice (82.4% vs. 76.6%) and Hausdorff distance 95% (HD95) (10.66 vs. 11.54 mm) on the KiTS19 as well as Dice (80.2% vs. 78.4%) and HD95 (9.632 vs. 12.17 mm) on the LiTS.
Keywords: 3D medical images, convolutional neural network, self-attention network, Transformer, tumor segmentation
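The HD95 values reported above refer to the 95th-percentile Hausdorff distance; the helper below is a plain NumPy/SciPy sketch over two binary masks that ignores voxel spacing, so it only illustrates the metric, not the paper's evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def surface_points(mask):
    # Boundary pixels: in the mask but not in its erosion.
    return np.argwhere(mask & ~binary_erosion(mask))

def hd95(a, b):
    pa, pb = surface_points(a), surface_points(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

a = np.zeros((32, 32), dtype=bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), dtype=bool); b[10:22, 10:22] = True
print(hd95(a, b))
```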
15. TC-Fuse: A Transformers Fusing CNNs Network for Medical Image Segmentation
Authors: Peng Geng, Ji Lu, Ying Zhang, Simin Ma, Zhanzhong Tang, Jianhua Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 11: 2001-2023 (23 pages)
In the medical image segmentation task, convolutional neural networks (CNNs) struggle to capture long-range dependencies, but transformers can model long-range dependencies effectively. However, transformers have a flexible structure and seldom assume the structural bias of input data, so it is difficult for transformers to learn positional encoding of the medical images when using fewer images for training. To solve these problems, a dual branch structure is proposed. In one branch, a Mix-Feed-Forward Network (Mix-FFN) and axial attention are adopted to capture long-range dependencies and keep the translation invariance of the model. Mix-FFN, whose depth-wise convolutions can provide position information, is better than ordinary positional encoding. In the other branch, traditional convolutional neural networks (CNNs) are used to extract different features of fewer medical images. In addition, the attention fusion module BiFusion is used to effectively integrate the information from the CNN branch and Transformer branch, and the fused features can effectively capture the global and local context of the current spatial resolution. On the public standard datasets Gland Segmentation (GlaS), Colorectal adenocarcinoma gland (CRAG) and COVID-19 CT Images Segmentation, the F1-score, Intersection over Union (IoU) and parameters of the proposed TC-Fuse are superior to those of Axial Attention U-Net, U-Net, Medical Transformer and other methods. The F1-score increased by 2.99%, 3.42% and 3.95%, respectively, compared with Medical Transformer.
Keywords: Transformers, convolutional neural networks, fusion, medical image segmentation, axial attention
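The role of Mix-FFN's depth-wise convolution as an implicit positional signal can be illustrated with the block below; the dimensions and the exact ordering of operations are assumptions in the spirit of the description above, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MixFFN(nn.Module):
    def __init__(self, dim=64, hidden=256):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # depth-wise: carries position
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x, h, w):                 # x: (B, N, dim), N = h * w tokens
        y = self.fc1(x)
        b, n, c = y.shape
        y = y.transpose(1, 2).reshape(b, c, h, w)
        y = self.dwconv(y).flatten(2).transpose(1, 2)
        return self.fc2(self.act(y))

print(MixFFN()(torch.randn(2, 49, 64), 7, 7).shape)
```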
16. Vessels Segmentation in Angiograms Using Convolutional Neural Network: A Deep Learning Based Approach
Authors: Sanjiban Sekhar Roy, Ching-Hsien Hsu, Akash Samaran, Ranjan Goyal, Arindam Pande, Valentina E. Balas. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 7: 241-255 (15 pages)
Coronary artery disease (CAD) has become a significant cause of heart attack, especially among those 40 years old or younger. There is a need to develop new technologies and methods to deal with this disease. Many researchers have proposed image processing-based solutions for CAD diagnosis, but achieving highly accurate results for angiogram segmentation is still a challenge. Several different types of angiograms are adopted for CAD diagnosis. This paper proposes an approach for image segmentation using Convolution Neural Networks (CNN) for diagnosing coronary artery disease to achieve state-of-the-art results. We have collected 2D X-ray images from the hospital, and the proposed model has been applied to them. Image augmentation has been performed in this research, as it is the most significant task required to increase the dataset's size. Also, the images have been enhanced using noise removal techniques before being fed to the CNN model for segmentation to achieve high accuracy. Different settings of the network architecture achieved different accuracies, among which the highest accuracy of the model is 97.61%. Compared with the other models, these results prove the proposed method to be superior in achieving state-of-the-art results.
Keywords: Angiogram, convolution neural network, coronary artery disease, diagnosis of CAD, image segmentation
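The preprocessing steps named above (noise removal and image augmentation before CNN segmentation) can be sketched with OpenCV as below; the median-filter kernel size and the specific flips/rotations are assumptions, not the authors' settings.

```python
import cv2
import numpy as np

frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in X-ray angiogram

denoised = cv2.medianBlur(frame, 5)                            # simple noise removal
augmented = [
    denoised,
    cv2.flip(denoised, 1),                                     # horizontal flip
    cv2.rotate(denoised, cv2.ROTATE_90_CLOCKWISE),             # rotation
]
print(len(augmented), augmented[0].shape)
```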
17. Faster Region Based Convolutional Neural Network for Skin Lesion Segmentation
Authors: G. Murugesan, J. Jeyapriya, M. Hemalatha, S. Rajeshkannan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 5: 2099-2109 (11 pages)
The diagnostic interpretation of dermoscopic images is a complex task, as it is very difficult to distinguish skin lesions from normal skin. Thus, accurate detection of potential abnormalities is required for patient monitoring and effective treatment. In this work, a Two-Tier Segmentation (TTS) system is designed, which combines unsupervised and supervised techniques for skin lesion segmentation. It comprises preprocessing by the median filter, TTS by Colour K-Means Clustering (CKMC) for initial segmentation, and a Faster Region based Convolutional Neural Network (FR-CNN) for refined segmentation. The CKMC approach is evaluated using different numbers of clusters (k = 3, 5, 7, and 9). An inception network with batch normalization is employed to segment melanoma regions effectively. Different loss functions such as Mean Absolute Error (MAE), Cross Entropy Loss (CEL), and Dice Loss (DL) are utilized for performance evaluation of the TTS system. The anchor box technique is employed to detect the melanoma region effectively. The TTS system is evaluated using 200 dermoscopic images from the PH2 database. The segmentation accuracies are analyzed in terms of Pixel Accuracy (PA) and Jaccard Index (JI). Results show that the TTS system achieves 90.19% PA with 0.8048 JI for skin lesion segmentation using DL in FR-CNN with seven clusters in CKMC, outperforming CEL and MAE.
Keywords: Skin cancer, melanoma diagnosis, clustering, convolution neural network, unsupervised segmentation, deep learning
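The two evaluation metrics quoted above, Pixel Accuracy (PA) and Jaccard Index (JI), reduce to the short helpers below for a binary lesion mask; the random arrays are placeholders for predicted and ground-truth masks.

```python
import numpy as np

def pixel_accuracy(pred, gt):
    return float((pred == gt).mean())

def jaccard_index(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

pred = np.random.randint(0, 2, (256, 256)).astype(bool)
gt = np.random.randint(0, 2, (256, 256)).astype(bool)
print(pixel_accuracy(pred, gt), jaccard_index(pred, gt))
```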
18. Deep Neural Network Based Detection and Segmentation of Ships for Maritime Surveillance
Authors: Kyamelia Roy, Sheli Sinha Chaudhuri, Sayan Pramanik, Soumen Banerjee. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 1: 647-662 (16 pages)
In recent years, computer vision finds wide applications in maritime surveillance with its sophisticated algorithms and advanced architecture. Automatic ship detection with computer vision techniques provides an efficient means to monitor as well as track ships in water bodies. Waterways, being an important medium of transport, require continuous monitoring for the protection of national security. The remote sensing satellite images of ships in harbours and water bodies are the image data that aid the neural network models to localize ships and to facilitate early identification of possible threats at sea. This paper proposes a deep learning based model capable of classifying between ships and no-ships as well as localizing ships in the original images using the bounding box technique. Furthermore, classified ships are again segmented with a deep learning based auto-encoder model. The proposed model, in terms of classification, provides successful results, generating 99.5% and 99.2% validation and training accuracy, respectively. The auto-encoder model also produces 85.1% and 84.2% validation and training accuracies. Moreover, the IoU metric of the segmented images is found to be 0.77. The experimental results reveal that the model is accurate and can be implemented for automatic ship detection in water bodies considering remote sensing satellite images as input to the computer vision system.
Keywords: Auto-encoder, computer vision, deep convolution neural network, satellite imagery, semantic segmentation, ship detection
19. A method of convolutional neural network based on frequency segmentation for monitoring the state of wind turbine blades
Authors: Weijun Zhu, Yunan Wu, Zhenye Sun, Wenzhong Shen, Guangxing Guo, Jianwei Lin. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2023, No. 6: 465-480 (16 pages)
Wind turbine blades are prone to failure due to high tip speed, rain, dust and so on. A surface condition detecting approach based on wind turbine blade aerodynamic noise is proposed. On the experimental measurement data, variational mode decomposition filtering and Mel spectrogram drawing are conducted first. The Mel spectrogram is divided into two halves based on frequency characteristics and then sent into the convolutional neural network. Gaussian white noise is superimposed on the original signal and the output results are assessed based on score coefficients, considering the complexity of the real environment. The surfaces of wind turbine blades are classified into four types: standard, attachments, polishing, and serrated trailing edge. The proposed method is evaluated and the detection accuracy in complicated background conditions is found to be 99.59%. In addition to supporting the differentiation of trained models, utilizing proper score coefficients also permits the screening of unknown types.
Keywords: Wind turbine aerodynamic noise, surface condition detection, Mel spectrogram, image segmentation, convolution neural network (CNN)
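A hedged sketch of the Mel-spectrogram preparation and frequency split described above: the recording here is synthetic, and the sample rate, Mel-band count, and half-and-half split point are assumptions rather than the paper's settings.

```python
import numpy as np
import librosa

signal = np.random.randn(16000 * 2).astype(np.float32)  # stand-in for 2 s of blade noise at 16 kHz

mel = librosa.feature.melspectrogram(y=signal, sr=16000, n_mels=128)
mel_db = librosa.power_to_db(mel)

low_half, high_half = mel_db[:64], mel_db[64:]  # split along the frequency axis for the two CNN inputs
print(low_half.shape, high_half.shape)
```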
20. Deep Belief Network for Lung Nodule Segmentation and Cancer Detection
Authors: Sindhuja Manickavasagam, Poonkuzhali Sugumaran. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 10: 135-151 (17 pages)
Cancer is one of the deadliest diseases. Artificial intelligence can help identify the disease by obtaining image features directly from patient scans. This paper presents lung nodule segmentation and disease characterization by proposing an enhancement algorithm. Most machine learning techniques fail to account for the feature dimensions, leading to inaccuracy in feature selection and classification, which in turn reduces the sensitivity and specificity rates and the identification accuracy. To resolve this problem, a Chicken Sine Cosine Algorithm based Deep Belief Network is proposed to identify the disease factor. The overall technique of the developed approach includes four stages: pre-processing, segmentation, feature extraction, and classification. At the outset, the Computed Tomography (CT) image of the lung is fed to the segmentation stage. Once segmentation is done, features are extracted through morphological factors for feature observation. The extracted features are then analysed and classification is performed with the Deep Belief Network (DBN), which is trained using the proposed Chicken-Sine Cosine Algorithm (CSCA) to distinguish the lung tumour, giving two classes, namely nodule or non-nodule. The proposed system produces high performance compared with other systems. The performance assessment of lung nodule segmentation and cancer classification based on CSCA is computed using three metrics, namely accuracy, sensitivity, and specificity.
Keywords: Chicken-sine cosine algorithm, deep belief network, lung cancer, artificial intelligence, machine learning, segmentation