With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the field of underwater navigation in recent years. However, the weak detection ability of a single vehicle limits SLAM performance over wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key to cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles, yet the limited bandwidth of underwater acoustic channels conflicts with the large volume of sonar image data, making compression before transmission essential. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information and ignore semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme, UAT-SSIC, which combines a semantic segmentation-based sonar image compression (SSIC) framework with a joint source-channel codec to improve the accuracy of the semantic information in the sonar image reconstructed at the receiver. The SSIC framework is built on an Auto-Encoder-based sonar image compression network whose quality is measured by the residual of a semantic segmentation network. Because sonar images are characterized by blurred target edges, the semantic segmentation network uses a dedicated dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the receptive field. A joint source-channel codec with unequal error protection is also proposed, which adjusts the power level of the transmitted data to cope with transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information than existing methods at the same compression ratio, and also improves the error tolerance and packet-loss resistance of transmission.
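The DiCNN described above relies on dilation to widen the receptive field without adding parameters. As a minimal illustration (not the paper's architecture), the receptive field of a stack of stride-1 convolutions grows by (k − 1)·d per layer, so geometric dilation schedules expand it exponentially:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 convolutions.

    Each layer with kernel k and dilation d adds (k - 1) * d
    to the receptive field seen by a single output position.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three ordinary 3x3 layers vs. three dilated layers (d = 1, 2, 4):
plain = receptive_field(3, [1, 1, 1])    # -> 7
dilated = receptive_field(3, [1, 2, 4])  # -> 15
```

With the same parameter count, the dilated stack more than doubles the context each output pixel sees, which is why dilation helps with blurred target edges.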
Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors such as poor lighting and overexposure, which make small objects difficult to recognize. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters to predict parameters such as exposure and hue, optimizing image quality. We adopt a novel image encoder to enhance parameter prediction accuracy by enabling Edip to handle features at different scales, while DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient architecture is particularly suitable for nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
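The abstract names exposure among Edip's predicted parameters but does not specify the filter forms. As a hedged sketch of what one such predicted-parameter filter could look like, a gamma curve is a common differentiable exposure-style adjustment on normalized pixels:

```python
def apply_gamma(pixels, gamma):
    """Apply a gamma (exposure-like) curve to pixel values normalized to [0, 1].

    A predictor network would output `gamma` per image; here it is
    a hand-picked illustrative value, not the paper's filter.
    """
    return [p ** gamma for p in pixels]

dark = [0.04, 0.16, 0.36]
brightened = apply_gamma(dark, 0.5)  # gamma < 1 lifts dark pixels -> [0.2, 0.4, 0.6]
```

Because `p ** gamma` is differentiable in `gamma`, segmentation loss can backpropagate through the filter into the parameter predictor, which is what makes the end-to-end training described above possible.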
Cross-lingual image description, the task of generating image captions in a target language from images and descriptions in a source language, is addressed in this study through a novel approach that combines neural network models and semantic matching techniques. Experiments conducted on the Flickr8k and AraImg2k benchmark datasets, featuring images and descriptions in English and Arabic, show notable performance improvements over state-of-the-art methods. Our model, equipped with an Image & Cross-Language Semantic Matching module and a Target Language Domain Evaluation module, significantly enhances the semantic relevance of the generated image descriptions. For English-to-Arabic and Arabic-to-English cross-lingual image description, our approach achieves CIDEr scores of 87.9% for English and 81.7% for Arabic, respectively, underscoring the contributions of our methodology. Comparative analyses with previous work further confirm the superior performance of our approach, and visual results show that our model generates captions that are both semantically accurate and stylistically consistent with the target language. In summary, this study advances the field of cross-lingual image description, offering an effective solution for generating image captions across languages, with the potential to improve multilingual communication and accessibility. Future research directions include expanding to more languages and incorporating diverse visual and textual data sources.
Many deep learning-based registration methods rely on a single-stream encoder-decoder network to compute deformation fields between 3D volumes. However, these methods often lack constraint information and overlook semantic consistency, limiting their performance. To address these issues, we present a novel approach for medical image registration called Dual-VoxelMorph, featuring a dual-channel cross-constraint network. The network uses both intensity and segmentation images, which share identical semantic information and feature representations. Two encoder-decoder structures calculate deformation fields for the intensity and segmentation images, as generated by the dual-channel cross-constraint network. This design enables bidirectional communication between grayscale and segmentation information, allowing the model to better learn the corresponding grayscale and segmentation details of the same anatomical structures. To ensure semantic and directional consistency, we introduce constraints and apply a cosine similarity function to enhance semantic consistency. Evaluation on four public datasets demonstrates superior performance compared to the baseline method, achieving Dice scores of 79.9%, 64.5%, 69.9%, and 63.5% on OASIS-1, OASIS-3, LPBA40, and ADNI, respectively.
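The cosine similarity function mentioned for semantic consistency can be written directly. This is a generic sketch of such a consistency term, not the paper's exact loss:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# A consistency loss can penalize 1 - cos(u, v) so that paired
# intensity and segmentation features point in the same direction:
loss = 1.0 - cosine_similarity([1.0, 0.0], [1.0, 0.0])  # identical directions -> 0.0
```

The term is scale-invariant: it compares directions of feature vectors, so the two channels need not produce features of matching magnitude to agree semantically.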
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research efforts use weakly supervised methods that aim to reduce annotation cost by leveraging sparsely annotated data such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). The network has a three-branch architecture, in which each branch is equipped with a distinct decoder module dedicated to road extraction. One branch generates edge masks using edge detection algorithms and optimizes road edge details; the other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw that pseudo-labels, once created, are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update the pseudo-labels to steer network training. Our solution operates efficiently by simultaneously exploiting edge-mask assistance and dynamic pseudo-label support. Studies are conducted on three separate road datasets, consisting primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
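The mixup-style pseudo-label update can be sketched as a convex blend of the previous pseudo-label with the current prediction; the blending coefficient `lam` below is illustrative, not the paper's schedule:

```python
def mixup_labels(old_label, new_pred, lam):
    """Blend the previous pseudo-label with the current prediction.

    `lam` close to 1 keeps the pseudo-label stable; smaller values
    let fresh predictions update it faster.
    """
    return [lam * o + (1.0 - lam) * n for o, n in zip(old_label, new_pred)]

old = [1.0, 0.0]     # stale one-hot pseudo-label for one pixel
pred = [0.6, 0.4]    # current softmax prediction
updated = mixup_labels(old, pred, 0.7)  # -> [0.88, 0.12]
```

Repeating this every epoch keeps pseudo-labels moving with the network instead of freezing them at their initial, possibly wrong, values.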
We propose a novel image segmentation algorithm to tackle the limited recognition and segmentation performance in identifying weld seam images during robotic intelligent operations. First, to enhance the capability of deep neural networks to extract geometric attributes from depth images, we developed a novel deep geometric convolution operator (DGConv), which is used to construct a deep local geometric feature extraction module that enables a more comprehensive exploration of the intrinsic geometric information within depth images. Second, we integrate the proposed deep geometric feature module with the Fully Convolutional Network (FCN8) to establish a high-performance deep neural network tailored to depth image segmentation, and we enhance the FCN8 detection head by separating the segmentation and classification processes, which significantly boosts the network's overall detection capability. Third, for a comprehensive assessment of the proposed algorithm and its applicability in real-world industrial settings, we curated a line-scan image dataset featuring weld seams, named the Standardized Linear Depth Profile (SLDP) dataset, collected from actual industrial sites where autonomous robots operate. Finally, we conducted experiments on the SLDP dataset, achieving an average accuracy of 92.7%, a remarkable improvement over the prior method on the identical dataset. Moreover, we have successfully deployed the proposed algorithm in real industrial environments, fulfilling the prerequisites of unmanned robot operation.
There are two types of methods for image segmentation. One is traditional image processing, which is sensitive to details and boundaries yet fails to recognize semantic information. The other is deep learning, which can locate and identify different objects but does not delineate boundaries accurately enough. Neither alone generates complete segmentation information. To obtain accurate edge detection together with semantic information, an Adaptive Boundary and Semantic Composite Segmentation method (ABSCS) is proposed. This method can precisely segment individual objects semantically in large aerial images under limited GPU resources. It adaptively divides and modifies the aerial images with the proposed principles and methods, uses the deep learning method to semantically segment and preprocess the small divided pieces, uses three traditional methods to segment and preprocess the original-size aerial images, adaptively selects traditional results to modify the boundaries of individual objects in the deep learning results, and combines the results for different objects. Individual-object semantic segmentation experiments are conducted on the AeroScapes dataset, and the results are analyzed qualitatively and quantitatively. The experimental results demonstrate that the proposed method achieves more accurate object boundaries than the original deep learning method. This work also demonstrates the advantages of the proposed method in applications such as point cloud semantic segmentation and image inpainting.
This paper proposes an improved high-precision 3D semantic mapping method for indoor scenes using RGB-D images. Current semantic mapping algorithms suffer from low semantic annotation accuracy and insufficient real-time performance. To address these issues, we first adopt the ElasticFusion algorithm to select keyframes from indoor environment image sequences captured by a Kinect sensor and construct the indoor environment space model. Then, an indoor RGB-D image semantic segmentation network is proposed, which uses multi-scale feature fusion to quickly and accurately obtain pixel-level object labeling for the spatial point cloud model. Finally, Bayesian updating is used to perform incremental semantic label fusion on the established spatial point cloud model, and dense conditional random fields (CRF) are employed to optimize the 3D semantic map, yielding a high-precision spatial semantic map of the indoor scene. Experimental results show that the proposed semantic mapping system can process image sequences collected by RGB-D sensors in real time and output accurate semantic segmentation results of indoor scene images together with the current local spatial semantic map, ultimately constructing a globally consistent high-precision 3D semantic map of indoor scenes.
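Incremental Bayesian label fusion for a single map point reduces to multiplying per-class probabilities and renormalizing. A minimal two-class sketch with hypothetical values (the paper's class set and probabilities will differ):

```python
def bayes_fuse(prior, likelihood):
    """Multiply per-class probabilities elementwise and renormalize."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

prior = [0.5, 0.5]              # current label belief for one 3D point
obs = [0.9, 0.1]                # new frame's softmax output for that point
fused = bayes_fuse(prior, obs)  # -> [0.9, 0.1]
```

Each new observation of the same point sharpens (or corrects) the belief, which is why repeated views from an RGB-D sequence yield more reliable labels than any single frame.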
Semantic segmentation methods based on CNNs have made great progress, but shortcomings remain in their application to remote sensing image segmentation; for example, the small receptive field cannot effectively capture global context. To solve this problem, this paper proposes a hybrid model, CFM-UNet, based on ResNet50 and the Swin Transformer to directly capture long-range dependencies, fusing features through a Cross Feature Modulation Module (CFMM). Experimental results on two publicly available datasets, Vaihingen and Potsdam, reach mIoU of 70.27% and 76.63%, respectively. Thus, CFM-UNet maintains high segmentation performance compared with other competitive networks.
Semantic segmentation of remote sensing images is one of the core tasks of remote sensing image interpretation. With the continuous development of artificial intelligence technology, the use of deep learning methods for interpreting remote sensing images has matured. However, existing neural networks disregard the spatial relationship between two targets in remote sensing images, and semantic segmentation models that combine convolutional neural networks (CNNs) and graph convolutional neural networks (GCNs) suffer from a lack of feature boundaries, which leads to unsatisfactory segmentation of various target feature boundaries. In this paper, we propose a new semantic segmentation model for remote sensing images (hereinafter called DGCN), which combines a deep semantic segmentation network (DSSN) and GCNs. In the GCN module, a loss function for boundary information is employed to optimize the learning of spatial relationship features between targets. A hierarchical fusion method is utilized for feature fusion and classification to optimize the spatial relationship information in the original features. Extensive experiments on the ISPRS 2D and DeepGlobe semantic segmentation datasets show that, compared with existing semantic segmentation models for remote sensing images, DGCN significantly improves the segmentation of feature boundaries, effectively reduces noise in the segmentation results, and improves segmentation accuracy, demonstrating the advancement of our model.
Because the pixel values of foggy images are irregularly higher than those of images captured in normal weather (clear images), it is difficult to extract and express their texture, and no previous method has directly explored the relationship between foggy images and their semantic segmentation. We investigated this relationship and propose a generative adversarial network (GAN) for foggy image semantic segmentation (FISS GAN), which contains two parts: an edge GAN and a semantic segmentation GAN. The edge GAN generates edge information from foggy images to provide auxiliary information to the semantic segmentation GAN, which in turn extracts and expresses the texture of foggy images and generates semantic segmentation images. Experiments on the Foggy Cityscapes and Foggy Driving datasets indicate that FISS GAN achieves state-of-the-art performance.
Image fusion aims to integrate complementary information from source images to synthesize a fused image that comprehensively characterizes the imaging scene. However, existing image fusion algorithms are only applicable to strictly aligned source images and cause severe artifacts when the inputs have slight shifts or deformations. In addition, the fusion results typically only have good visual quality while neglecting the semantic requirements of high-level vision tasks. This study incorporates image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposes a novel image registration and fusion method named SuperFusion. Specifically, we design a registration network that estimates bidirectional deformation fields to rectify geometric distortions of the input images under the supervision of both photometric and end-point constraints. The registration and fusion are combined in a symmetric scheme, in which mutual promotion is achieved by optimizing the naive fusion loss and is further enhanced by a mono-modal consistency constraint on the symmetric fusion outputs. In addition, the image fusion network is equipped with a global spatial attention mechanism to achieve adaptive feature integration, and a semantic constraint based on a pre-trained segmentation model and the Lovász-Softmax loss guides the fusion network to focus on the semantic requirements of high-level vision tasks. Extensive experiments on image registration, image fusion, and semantic segmentation tasks demonstrate the superiority of SuperFusion over state-of-the-art alternatives. The source code and pre-trained model are publicly available at https://github.com/Linfeng-Tang/SuperFusion.
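Global spatial attention, as generically understood (the paper's exact formulation may differ), softmaxes a score map over spatial positions and pools features with the resulting weights. A 1D toy sketch:

```python
import math

def spatial_attention(scores, features):
    """Softmax a score map over spatial positions, then pool features with the weights."""
    m = max(scores)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    pooled = sum(w * f for w, f in zip(weights, features))
    return weights, pooled

# Equal scores give uniform attention, i.e. a plain average:
weights, pooled = spatial_attention([1.0, 1.0, 1.0], [3.0, 6.0, 9.0])  # pooled -> 6.0
```

Raising one position's score shifts the pooled result toward that position's feature, which is the "adaptive" part of adaptive feature integration.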
An effective approach is proposed for 3D urban scene reconstruction in the form of a point cloud with semantic labeling. Starting from high-resolution oblique aerial images, our approach proceeds through three main stages: geographic reconstruction, geometric reconstruction, and semantic reconstruction. The absolute position and orientation of all cameras relative to the real world are recovered in the geographic reconstruction stage. Then, in the geometric reconstruction stage, an improved multi-view stereo matching method produces 3D dense points with color and normal information by taking into account prior knowledge of aerial imagery. Finally, the point cloud is classified into three classes (building, vegetation, and ground) by a rule-based hierarchical approach in the semantic reconstruction stage. Experiments on complex urban scenes show that the proposed three-stage approach generates reasonable reconstruction results robustly and efficiently. Comparing our final semantic reconstruction with manually labeled ground truth yields classification accuracies from 86.75% to 93.02%.
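A rule-based hierarchical classifier of the kind described can be sketched as a cascade of tests. The features and thresholds below are illustrative assumptions, not the paper's actual rules:

```python
def classify_point(height_above_ground, greenness):
    """Toy rule hierarchy: ground first, then vegetation, else building.

    The 0.2 m height and 0.5 greenness thresholds are hypothetical,
    chosen only to make the cascade concrete.
    """
    if height_above_ground < 0.2:
        return "ground"
    if greenness > 0.5:
        return "vegetation"
    return "building"

labels = [classify_point(h, g) for h, g in [(0.05, 0.9), (5.0, 0.8), (10.0, 0.1)]]
```

The hierarchy matters: testing ground before vegetation prevents green lawns from being misfiled, and whatever survives both tests defaults to the remaining class.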
A second-generation fast Non-dominated Sorting Genetic Algorithm with a degradation strategy (DNSGA-II) is proposed for multi-objective product shape imagery optimization, so that the product appearance optimization scheme meets users' complex emotional needs. First, the semantic differential method and K-Means cluster analysis are applied to extract users' multi-objective imagery; then, multidimensional scaling is applied to classify the research objects, reference samples are screened by the semantic differential method, and the samples are parameterized in two dimensions using elliptic Fourier analysis. Finally, with the fuzzy dynamic evaluation function as the objective function of the algorithm and the coordinates of key points of the product profile as the decision variables, the optimal set of product profile solutions is solved by DNSGA-II. The validity of the model is verified by optimizing the shape scheme of a hospital connection site. Other multi-objective optimization algorithms are also presented for comparison with DNSGA-II, and the performance evaluation index values of the five multi-objective optimization algorithms are calculated. The results show that DNSGA-II is superior in improving individual diversity and has better overall performance.
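At the core of any NSGA-II variant is non-dominated sorting. A minimal sketch of extracting the first Pareto front under minimization, independent of the DNSGA-II degradation strategy:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Return the non-dominated (rank-0) solutions."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 2), (4, 1), (3, 3)]
front = first_front(pts)  # (3, 3) is dominated by (2, 2) -> front has the other three
```

NSGA-II repeats this ranking to peel off successive fronts, then breaks ties within a front by crowding distance to preserve the solution diversity the abstract highlights.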
Automatic segmentation of early esophageal cancer (EEC) in gastrointestinal endoscopy (GIE) images is a critical and challenging task in clinical settings, where diagnosis relies primarily on labor-intensive and time-consuming routines. EEC has often been diagnosed at a late stage, since early signs of cancer are not obvious, resulting in low survival rates. This work proposes a deep learning approach based on U-Net++ to segment EEC in GIE images. A total of 2690 GIE images collected from 617 patients at the Digestive Endoscopy Center, West China Hospital of Sichuan University, China, were utilized. The experimental results show that the proposed method achieves promising performance, and a comparison was made with other U-Net-related methods on the same dataset. The mean and standard deviation (SD) of the dice similarity coefficient (DSC), intersection over union (IoU), precision (Pre), and recall (Rec) achieved by the proposed framework were DSC (%) = 94.62 ± 0.02, IoU (%) = 90.99 ± 0.04, Pre (%) = 94.61 ± 0.04, and Rec (%) = 95.00 ± 0.02, respectively, outperforming the others. The proposed method has the potential to be applied to automatic EEC diagnosis.
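The DSC and IoU metrics reported above are standard overlap measures between a predicted mask and the ground truth. A minimal sketch for flat binary masks:

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for two binary masks given as flat 0/1 lists.

    Dice = 2|P ∩ T| / (|P| + |T|),  IoU = |P ∩ T| / |P ∪ T|.
    """
    inter = sum(p & t for p, t in zip(pred, truth))
    ps, ts = sum(pred), sum(truth)
    dice = 2.0 * inter / (ps + ts)
    iou = inter / (ps + ts - inter)
    return dice, iou

d, i = dice_iou([1, 1, 0, 0], [1, 0, 0, 0])  # -> (0.666..., 0.5)
```

Dice is always at least as large as IoU for the same masks (they are monotonically related), which is why papers often report both even though they rank methods identically.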
In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision communities, especially automatic image annotation, whose purpose is to provide an efficient and effective search environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Because image features with different magnitudes yield different annotation performance, a Gaussian normalization method is utilized to normalize the features extracted from effective image regions segmented by the normalized cuts algorithm, preserving the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model significantly improves on traditional PLSA for the task of automatic image annotation.
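Gaussian normalization of a feature dimension is standard z-score scaling. A minimal sketch, assuming population (not sample) variance:

```python
import math

def gaussian_normalize(values):
    """Scale one feature dimension to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    return [(v - mean) / std for v in values]

normalized = gaussian_normalize([2.0, 4.0, 6.0])  # -> [-1.2247..., 0.0, 1.2247...]
```

After this scaling, every feature dimension contributes on a comparable scale, so no single large-magnitude feature dominates the PLSA model's similarity computations.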
With continuous calls for energy conservation and emission reduction in recent years, more and more people choose walking as their travel mode, and the quality of street space directly affects people's willingness to walk. By reviewing relevant research on street quality measurement, extracting frequently cited quality keywords as impact factors, and combining street view image data from different eras with semantic segmentation technology, factor analysis, and questionnaire surveys, this paper evaluates the street quality of Jingshan East Street, Dongcheng District, Beijing, further explores the impact of different factors on street quality, and analyzes possible ways to improve it.
Much urban block planning in China does not adequately satisfy the elderly's quality of life and activity demands. Studying the age-friendliness of travel in urban blocks helps to discover the real needs of the elderly for walking space, distinguish the age-friendliness levels of public walking spaces, and promote the construction of age-friendly cities. Based on a literature review, this paper first summarizes the needs of the elderly in pedestrian space in terms of traffic experience and visual experience, and then performs semantic segmentation on street view images. The summarized data on the elderly's walking-space experience were divided into factors, and scores were assigned according to the elements in each image. The analysis shows that assessing the development of elderly walking space in Pingguoyuan Street through street view pictures is of great significance.
Based on low-altitude remote sensing images, this paper establishes a sample set of typical river vegetation elements and proposes a river vegetation extraction solution to adaptively extract typical vegetation elements of river basins. The main work is as follows: (1) a typical vegetation extraction sample set based on low-altitude remote sensing images was established; (2) a low-altitude remote sensing image vegetation extraction model based on a focus perception module was designed to realize end-to-end automatic extraction of different types of vegetation areas, fully learning the spectral, spatial, and textural information and deep semantic information of the images; (3) compared with the baseline method, the baseline with the embedded focus perception module improved precision by 7.37% and mIoU by 49.49%. Visual interpretation and quantitative analysis show that the typical river vegetation adaptive extraction network is effective and generalizes well, consistent with the needs of practical vegetation extraction applications.
Funding (for the UAT-SSIC work): Supported in part by the Tianjin Technology Innovation Guidance Special Fund Project under Grant No. 21YDTPJC00850, in part by the National Natural Science Foundation of China under Grant No. 41906161, and in part by the Natural Science Foundation of Tianjin under Grant No. 21JCQNJC00650.
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity. This work was also financially supported by a Chongqing Municipal Education Commission Grant for a Major Science and Technology Project (Grant Number gzlcx20243175).
Abstract: Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines image processing filters to predict parameters like exposure and hue, optimizing image quality. We adopt a novel image encoder to enhance parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding parameter prediction optimization, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing challenges in nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
Abstract: Cross-lingual image description, the task of generating image captions in a target language from images and descriptions in a source language, is addressed in this study through a novel approach that combines neural network models and semantic matching techniques. Experiments conducted on the Flickr8k and AraImg2k benchmark datasets, featuring images and descriptions in English and Arabic, showcase remarkable performance improvements over state-of-the-art methods. Our model, equipped with the Image & Cross-Language Semantic Matching module and the Target Language Domain Evaluation module, significantly enhances the semantic relevance of generated image descriptions. For English-to-Arabic and Arabic-to-English cross-language image description, our approach achieves CIDEr scores of 87.9% and 81.7% for English and Arabic, respectively, emphasizing the substantial contributions of our methodology. Comparative analyses with previous works further affirm the superior performance of our approach, and visual results underscore that our model generates image captions that are both semantically accurate and stylistically consistent with the target language. In summary, this study advances the field of cross-lingual image description, offering an effective solution for generating image captions across languages, with the potential to impact multilingual communication and accessibility. Future research directions include expanding to more languages and incorporating diverse visual and textual data sources.
Funding: National Natural Science Foundation of China (Grant Nos. 62171130, 62172197, 61972093); the Natural Science Foundation of Fujian Province (Grant Nos. 2020J01573, 2022J01131257, 2022J01607); the Fujian University Industry-University-Research Joint Innovation Project (No. 2022H6006); in part by the Fund of Cloud Computing and Big Data for Smart Agriculture (Grant No. 117-612014063); the National Natural Science Foundation of China (Grant No. 62301160); and the Natural Science Foundation of Fujian Province (Grant No. 2022J01607).
Abstract: Many deep learning-based registration methods rely on a single-stream encoder-decoder network for computing deformation fields between 3D volumes. However, these methods often lack constraint information and overlook semantic consistency, limiting their performance. To address these issues, we present a novel approach for medical image registration called the Dual-VoxelMorph, featuring a dual-channel cross-constraint network. This innovative network utilizes both intensity and segmentation images, which share identical semantic information and feature representations. Two encoder-decoder structures calculate deformation fields for intensity and segmentation images, as generated by the dual-channel cross-constraint network. This design facilitates bidirectional communication between grayscale and segmentation information, enabling the model to better learn the corresponding grayscale and segmentation details of the same anatomical structures. To ensure semantic and directional consistency, we introduce constraints and apply the cosine similarity function to enhance semantic consistency. Evaluation on four public datasets demonstrates superior performance compared to the baseline method, achieving Dice scores of 79.9%, 64.5%, 69.9%, and 63.5% for OASIS-1, OASIS-3, LPBA40, and ADNI, respectively.
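The cosine-similarity consistency constraint mentioned above can be sketched in its generic form; this is a standard formulation, not necessarily the exact loss used by Dual-VoxelMorph:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two flattened feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def consistency_loss(u, v):
    """Penalty that vanishes when the two channels' features agree in direction."""
    return 1.0 - cosine_similarity(u, v)

# Parallel feature vectors incur zero penalty:
print(round(consistency_loss([1, 0, 2], [2, 0, 4]), 6))  # -> 0.0
```

In practice, such a loss would be averaged over spatial locations of the two deformation-field feature maps; the vectors here are toy inputs.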
Funding: The National Natural Science Foundation of China (42001408, 61806097).
Abstract: Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the image. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network is a three-branch network architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw of pseudo-labels that are not updated with network training, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution demonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
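The dynamic pseudo-label update via mixup described above can be illustrated with a minimal sketch; the blending weight `lam` and the soft-label representation are assumptions for illustration, not WSSE-net's exact scheme:

```python
def mixup_pseudo_labels(old_labels, predictions, lam=0.7):
    """Blend current network predictions with the previous pseudo-labels.

    new = lam * prediction + (1 - lam) * old, recomputed every round, so
    pseudo-labels track the improving network instead of staying frozen.
    """
    return [lam * p + (1.0 - lam) * o for o, p in zip(old_labels, predictions)]

old = [0.2, 0.8, 0.5]    # pseudo-label confidences from the previous round
pred = [0.9, 0.6, 0.1]   # current network predictions
print([round(x, 2) for x in mixup_pseudo_labels(old, pred, lam=0.5)])
# -> [0.55, 0.7, 0.3]
```

A larger `lam` makes the pseudo-labels follow the network more aggressively; a smaller one keeps them more stable across training rounds.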
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. U20A20197).
Abstract: We propose a novel image segmentation algorithm to tackle the challenge of limited recognition and segmentation performance in identifying welding seam images during robotic intelligent operations. First, to enhance the capability of deep neural networks in extracting geometric attributes from depth images, we developed a novel deep geometric convolution operator (DGConv). DGConv is utilized to construct a deep local geometric feature extraction module, facilitating a more comprehensive exploration of the intrinsic geometric information within depth images. Second, we integrate the newly proposed deep geometric feature module with the Fully Convolutional Network (FCN8) to establish a high-performance deep neural network algorithm tailored for depth image segmentation. Concurrently, we enhance the FCN8 detection head by separating the segmentation and classification processes. This enhancement significantly boosts the network's overall detection capability. Third, for a comprehensive assessment of our proposed algorithm and its applicability in real-world industrial settings, we curated a line-scan image dataset featuring weld seams. This dataset, named the Standardized Linear Depth Profile (SLDP) dataset, was collected from actual industrial sites where autonomous robots are in operation. Ultimately, we conducted experiments utilizing the SLDP dataset, achieving an average accuracy of 92.7%. Our proposed approach exhibited a remarkable performance improvement over the prior method on the identical dataset. Moreover, we have successfully deployed the proposed algorithm in genuine industrial environments, fulfilling the prerequisites of unmanned robot operations.
Funding: Funded in part by the Equipment Pre-Research Foundation of China (Grant No. 61400010203) and in part by the Independent Project of the State Key Laboratory of Virtual Reality Technology and Systems.
Abstract: There are two types of methods for image segmentation. One is traditional image processing methods, which are sensitive to details and boundaries yet fail to recognize semantic information. The other is deep learning methods, which can locate and identify different objects but whose boundary identifications are not accurate enough. Neither can generate complete segmentation information. In order to obtain accurate edge detection and semantic information, an Adaptive Boundary and Semantic Composite Segmentation method (ABSCS) is proposed. This method can precisely semantically segment individual objects in large-size aerial images with limited GPU performance. It includes adaptively dividing and modifying the aerial images with the proposed principles and methods, using the deep learning method to semantically segment and preprocess the small divided pieces, using three traditional methods to segment and preprocess original-size aerial images, adaptively selecting traditional results to modify the boundaries of individual objects in deep learning results, and combining the results for different objects. Individual object semantic segmentation experiments are conducted on the AeroScapes dataset, and their results are analyzed qualitatively and quantitatively. The experimental results demonstrate that the proposed method can achieve more promising object boundaries than the original deep learning method. This work also demonstrates the advantages of the proposed method in applications of point cloud semantic segmentation and image inpainting.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants U20A20225 and 61833013, and in part by the Shaanxi Provincial Key Research and Development Program under Grant 2022-GY111.
Abstract: This paper proposes an improved high-precision 3D semantic mapping method for indoor scenes using RGB-D images. Current semantic mapping algorithms suffer from low semantic annotation accuracy and insufficient real-time performance. To address these issues, we first adopt the ElasticFusion algorithm to select key frames from indoor environment image sequences captured by the Kinect sensor and construct the indoor environment space model. Then, an indoor RGB-D image semantic segmentation network is proposed, which uses multi-scale feature fusion to quickly and accurately obtain object labeling information at the pixel level of the spatial point cloud model. Finally, Bayesian updating is used to conduct incremental semantic label fusion on the established spatial point cloud model. We also employ dense conditional random fields (CRF) to optimize the 3D semantic map model, resulting in a high-precision spatial semantic map of indoor scenes. Experimental results show that the proposed semantic mapping system can process image sequences collected by RGB-D sensors in real time and output accurate semantic segmentation results of indoor scene images and the current local spatial semantic map. Finally, it constructs a globally consistent high-precision 3D semantic map of indoor scenes.
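Incremental Bayesian label fusion of the kind described above can be sketched as a per-point recursive update; the class names and confidence values below are illustrative assumptions, not the paper's data:

```python
def bayesian_label_update(prior, likelihood):
    """One incremental Bayesian update of a point's class probabilities.

    posterior(c) is proportional to prior(c) * likelihood(c),
    renormalized over all classes.
    """
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

# A point starts uncertain; two consistent observations sharpen its label.
probs = [0.5, 0.5]                       # classes: (chair, table)
for obs in ([0.8, 0.2], [0.8, 0.2]):     # per-frame segmentation confidences
    probs = bayesian_label_update(probs, obs)
print([round(p, 3) for p in probs])      # -> [0.941, 0.059]
```

This is why repeated observations from different key frames converge to a confident label even when each single frame's segmentation is noisy; the dense CRF step would then smooth these per-point posteriors spatially.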
Funding: Young Innovative Talents Project of Guangdong Ordinary Universities (No. 2022KQNCX225) and School-level Teaching and Research Project of Guangzhou City Polytechnic (No. 2022xky046).
Abstract: Semantic segmentation methods based on CNNs have made great progress, but there are still some shortcomings in their application to remote sensing image segmentation; for example, the small receptive field cannot effectively capture global context. To solve this problem, this paper proposes CFM-UNet, a hybrid model based on ResNet50 and Swin Transformer that directly captures long-range dependence and fuses features through a Cross Feature Modulation Module (CFMM). Experimental results on two publicly available datasets, Vaihingen and Potsdam, yield mIoU scores of 70.27% and 76.63%, respectively. Thus, CFM-UNet maintains a high segmentation performance compared with other competitive networks.
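The mIoU figures reported above are class-averaged intersection-over-union values; a minimal reference computation over flat label lists (toy data, not the paper's predictions):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both prediction and target
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred   = [0, 0, 1, 1, 1, 0]
target = [0, 1, 1, 1, 0, 0]
print(round(mean_iou(pred, target, 2), 4))  # -> 0.5
```

Averaging per-class IoU (rather than pooling pixels) prevents large classes such as background from dominating the score, which matters in remote sensing imagery with imbalanced land-cover classes.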
Funding: Funded by the Major Scientific and Technological Innovation Project of Shandong Province (Grant No. 2022CXGC010609).
Abstract: Semantic segmentation of remote sensing images is one of the core tasks of remote sensing image interpretation. With the continuous development of artificial intelligence technology, the use of deep learning methods for interpreting remote sensing images has matured. Existing neural networks disregard the spatial relationship between two targets in remote sensing images. Semantic segmentation models that combine convolutional neural networks (CNNs) and graph convolutional neural networks (GCNs) suffer from a lack of feature boundaries, which leads to unsatisfactory segmentation of various target feature boundaries. In this paper, we propose a new semantic segmentation model for remote sensing images (called DGCN hereinafter), which combines deep semantic segmentation networks (DSSN) and GCNs. In the GCN module, a loss function for boundary information is employed to optimize the learning of spatial relationship features between the target features and their relationships. A hierarchical fusion method is utilized for feature fusion and classification to optimize the spatial relationship information in the original feature information. Extensive experiments on the ISPRS 2D and DeepGlobe semantic segmentation datasets show that, compared with existing semantic segmentation models for remote sensing images, the DGCN significantly optimizes the segmentation of feature boundaries, effectively reduces noise in the segmentation results, and improves segmentation accuracy, which demonstrates the advancements of our model.
Funding: Supported in part by the National Key Research and Development Program of China (2018YFB1305002), the National Natural Science Foundation of China (62006256), the Postdoctoral Science Foundation of China (2020M683050), the Key Research and Development Program of Guangzhou (202007050002), and the Fundamental Research Funds for the Central Universities (67000-31610134).
Abstract: Because pixel values of foggy images are irregularly higher than those of images captured in normal weather (clear images), it is difficult to extract and express their texture. No method has previously been developed to directly explore the relationship between foggy images and semantic segmentation images. We investigated this relationship and propose a generative adversarial network (GAN) for foggy image semantic segmentation (FISS GAN), which contains two parts: an edge GAN and a semantic segmentation GAN. The edge GAN is designed to generate edge information from foggy images to provide auxiliary information to the semantic segmentation GAN. The semantic segmentation GAN is designed to extract and express the texture of foggy images and generate semantic segmentation images. Experiments on the Foggy Cityscapes and Foggy Driving datasets indicated that FISS GAN achieved state-of-the-art performance.
Funding: Supported by the National Natural Science Foundation of China (62276192, 62075169, 62061160370) and the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: Image fusion aims to integrate complementary information in source images to synthesize a fused image comprehensively characterizing the imaging scene. However, existing image fusion algorithms are only applicable to strictly aligned source images and cause severe artifacts in the fusion results when input images have slight shifts or deformations. In addition, the fusion results typically only have good visual effect but neglect the semantic requirements of high-level vision tasks. This study incorporates image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposes a novel image registration and fusion method, named SuperFusion. Specifically, we design a registration network to estimate bidirectional deformation fields to rectify geometric distortions of input images under the supervision of both photometric and end-point constraints. The registration and fusion are combined in a symmetric scheme, in which mutual promotion is achieved by optimizing the naive fusion loss and is further enhanced by the mono-modal consistent constraint on symmetric fusion outputs. In addition, the image fusion network is equipped with a global spatial attention mechanism to achieve adaptive feature integration. Moreover, a semantic constraint based on the pre-trained segmentation model and the Lovasz-Softmax loss is deployed to guide the fusion network to focus more on the semantic requirements of high-level vision tasks. Extensive experiments on image registration, image fusion, and semantic segmentation tasks demonstrate the superiority of our SuperFusion compared to state-of-the-art alternatives. The source code and pre-trained model are publicly available at https://github.com/Linfeng-Tang/SuperFusion.
基金supported in part by the National Natural Science Foundation of China (61421004,61402316,61333015,61632003)Doctoral Research Fund of Taiyuan University of Science and Technology under grant (20162009)National Key Technologies R&D Program(2016YFB0502002)
Abstract: An effective approach is proposed for 3D urban scene reconstruction in the form of a point cloud with semantic labeling. Starting from high-resolution oblique aerial images, our approach proceeds through three main stages: geographic reconstruction, geometrical reconstruction, and semantic reconstruction. The absolute position and orientation of all cameras relative to the real world are recovered in the geographic reconstruction stage. Then, in the geometrical reconstruction stage, an improved multi-view stereo matching method is employed to produce 3D dense points with color and normal information by taking into account the prior knowledge of aerial imagery. Finally, the point cloud is classified into three classes (building, vegetation, and ground) by a rule-based hierarchical approach in the semantic reconstruction step. Experiments on complex urban scenes show that our proposed 3-stage approach generates reasonable reconstruction results robustly and efficiently. By comparing our final semantic reconstruction result with the manually labeled ground truth, classification accuracies from 86.75% to 93.02% are obtained.
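A rule-based hierarchical classifier of the kind described can be sketched as a cascade of tests applied per point; the features (height above ground, planarity, greenness) and thresholds here are hypothetical illustrations, not the paper's actual rules:

```python
def classify_point(height, is_planar, greenness):
    """Hypothetical hierarchical rules: ground is tested first, then
    vegetation; remaining elevated points default to building."""
    if height < 0.5:          # near the terrain surface
        return "ground"
    if greenness > 0.4 and not is_planar:  # green, irregular geometry
        return "vegetation"
    return "building"

print(classify_point(0.2, True, 0.1))    # -> ground
print(classify_point(5.0, False, 0.6))   # -> vegetation
print(classify_point(8.0, True, 0.1))    # -> building
```

The hierarchy matters: each rule only needs to separate its class from whatever the earlier rules have not already claimed, which keeps each individual test simple.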
Funding: Supported by National Natural Science Foundation Grant 52065010, the Science and Technology Project of Guizhou Province, China (ZK[2021]341 and [2021]397), and the Transformation Project of Scientific and Technological Achievements in Guiyang, Guizhou Province, China ([2021]7-3).
Abstract: A product shape multi-objective imagery optimization model based on the second-generation fast Non-dominated Sorting Genetic Algorithm with a degradation strategy (DNSGA-II) is proposed to make the product appearance optimization scheme meet the complex emotional needs of users. First, the semantic differential method and K-Means cluster analysis are applied to extract the users' multi-objective imagery; then, multidimensional scaling analysis is applied to classify the research objects, reference samples are screened by the semantic differential method, and the samples are parameterized in two dimensions using elliptic Fourier analysis; finally, with the fuzzy dynamic evaluation function as the objective function of the algorithm and the coordinates of key points of the product profile as the decision variables, the optimal product profile solution set is solved by DNSGA-II. The validity of the model is verified by taking the optimization of the shape scheme of the hospital connection site as an example. For comparison with DNSGA-II, other multi-objective optimization algorithms are also presented. To evaluate the performance of each algorithm, the performance evaluation index values of the five multi-objective optimization algorithms are calculated in this paper. The results show that DNSGA-II is superior in improving individual diversity and has better overall performance.
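At the core of any NSGA-II variant, including DNSGA-II, is non-dominated sorting built on Pareto dominance. A minimal sketch of extracting the first front for a minimization problem (toy objective vectors, not the paper's data):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population):
    """Solutions not dominated by any other: the first Pareto front."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Each tuple is (objective-1, objective-2) for one candidate product shape.
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(first_front(pts))  # -> [(1, 5), (2, 2), (4, 1)]
```

The full algorithm repeats this peeling to rank all fronts and uses crowding distance within each front to preserve diversity; the degradation strategy in DNSGA-II modifies how the population responds to a changing objective, which is beyond this sketch.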
Funding: Supported by the National Natural Science Foundation under Grants No. 62271127, No. 61872405, and No. 81171411, the Natural Science Foundation of Sichuan Province, China under Grant No. 23NSFSC0627, and Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China and West China Hospital of Sichuan University under Grants No. ZYGX2022YGRH011 and No. HXDZ22005.
Abstract: Automatic segmentation of early esophageal cancer (EEC) in gastrointestinal endoscopy (GIE) images is a critical and challenging task in clinical settings, which relies primarily on labor-intensive and time-consuming routines. EEC has often been diagnosed at a late stage since early signs of cancer are not obvious, resulting in low survival rates. This work proposes a deep learning approach based on the U-Net++ method to segment EEC in GIE images. A total of 2690 GIE images collected from 617 patients at the Digestive Endoscopy Center, West China Hospital of Sichuan University, China, have been utilized. The experimental results show that our proposed method achieved promising results. Furthermore, a comparison has been made between the proposed and other U-Net-related methods using the same dataset. The mean and standard deviation (SD) of the dice similarity coefficient (DSC), intersection over union (IoU), precision (Pre), and recall (Rec) achieved by the proposed framework were DSC (%) = 94.62 ± 0.02, IoU (%) = 90.99 ± 0.04, Pre (%) = 94.61 ± 0.04, and Rec (%) = 95.00 ± 0.02, respectively, outperforming the others. The proposed method has the potential to be applied in EEC automatic diagnosis.
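The four reported metrics all derive from the true/false positive and negative counts of the binary lesion masks; a minimal reference implementation on toy masks:

```python
def segmentation_metrics(pred, target):
    """DSC, IoU, precision, recall for flat binary masks (1 = lesion)."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    dsc = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    return dsc, iou, pre, rec

pred   = [1, 1, 0, 1, 0]
target = [1, 1, 1, 0, 0]
print([round(m, 3) for m in segmentation_metrics(pred, target)])
# -> [0.667, 0.5, 0.667, 0.667]
```

Note that DSC is always at least as large as IoU for the same masks (DSC = 2·IoU / (1 + IoU)), consistent with the reported 94.62% versus 90.99%.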
Funding: Supported by the National Program on Key Basic Research Project (No. 2013CB329502), the National Natural Science Foundation of China (No. 61202212), the Special Research Project of the Educational Department of Shaanxi Province of China (No. 15JK1038), and the Key Research Project of Baoji University of Arts and Sciences (No. ZK16047).
Abstract: In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, different image features with different magnitudes will result in different performance for automatic image annotation. To this end, a Gaussian normalization method is utilized to normalize different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
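Gaussian (z-score) normalization, used above to put feature dimensions of different magnitudes on a common scale, can be sketched per dimension as:

```python
import math

def gaussian_normalize(features):
    """Zero-mean, unit-variance normalization of one feature dimension."""
    n = len(features)
    mean = sum(features) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in features) / n)
    return [(x - mean) / std for x in features]

norm = gaussian_normalize([10.0, 20.0, 30.0])
print([round(x, 3) for x in norm])  # -> [-1.225, 0.0, 1.225]
```

Without this step, a feature measured in the hundreds (e.g. a color histogram count) would dominate one measured in fractions (e.g. a texture coefficient) in any distance-based comparison, regardless of which is more informative.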
Abstract: With the continuous calls for energy conservation and emission reduction in recent years, more and more people choose walking as their travel mode. The improvement of street space quality will directly affect people's willingness to walk. By sorting out relevant research on street quality measurement, extracting frequently cited quality keywords as impact factors, and using street view image data from different eras together with semantic segmentation technology, factor analysis, and questionnaire surveys, this paper evaluates the street quality of Jingshan East Street, Dongcheng District, Beijing, further explores the impact of different factors on street quality, and analyzes possible ways to improve it.
Funding: Sponsored by the Beijing Municipal Social Science Foundation (22GLC062).
Abstract: Much urban block planning in China does not adequately satisfy the elderly's quality of life and activity demands. Studying the age-friendliness of travel in urban blocks helps discover the real needs of the elderly for walking space, distinguish the age-friendliness levels of public walking spaces, and promote the construction of age-friendly cities. Based on literature research, this paper first summarizes the needs of the elderly in pedestrian space in terms of traffic experience and visual experience, and then applies image semantic segmentation to street view images. The summarized data on the elderly's walking space experience are divided into factors and scored according to the elements in the images. Finally, assessing the development of elderly walking space in Pingguoyuan Street through street view pictures was found to be of great significance.
Abstract: Based on low-altitude remote sensing images, this paper establishes a sample set of typical river vegetation elements and proposes a river vegetation extraction technical solution to adaptively extract typical vegetation elements of river basins. The main work of this paper is as follows: (1) a typical vegetation extraction sample set based on low-altitude remote sensing images was established; (2) a low-altitude remote sensing image vegetation extraction model based on a focus perception module was designed to realize end-to-end automatic extraction of different types of vegetation areas and to fully learn the spectral, spatial, and texture information and deep semantic information of the images; (3) compared with the baseline method, the baseline with the embedded focus perception module improved precision by 7.37% and mIoU by 49.49%. Visual interpretation and quantitative analysis show that the typical river vegetation adaptive extraction network is effective and generalizes well, consistent with the needs of practical applications of vegetation extraction.