The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information from the data. Additionally, the model leverages the joint correlation information of labels and data in the computation of text representations, correcting semantic representation biases in the data and increasing the accuracy of semantic representation. Ultimately, the model computes the corresponding classification results by synthesizing these rich semantic representations. Experiments on seven benchmark datasets show that the proposed model achieves competitive results compared to state-of-the-art methods.
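The label-driven co-attention idea can be illustrated with a minimal numpy sketch. The scaled dot-product affinity and residual fusion below are illustrative assumptions, not the DSLD paper's exact formulation, and all names (`label_coattention`, `tokens`, `labels`) are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_coattention(tokens, labels):
    """Co-attention between token embeddings (T, d) and label embeddings (L, d).

    Each token attends over the label set; the resulting label context is
    fused back into the token representation by a residual sum, nudging the
    text representation toward label-consistent semantics.
    """
    affinity = tokens @ labels.T / np.sqrt(tokens.shape[-1])  # (T, L) scores
    t2l = softmax(affinity, axis=-1)        # token-to-label attention weights
    label_ctx = t2l @ labels                # (T, d) label context per token
    return tokens + label_ctx               # residual fusion

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 8))
labels = rng.standard_normal((3, 8))
out = label_coattention(tokens, labels)
print(out.shape)  # (5, 8)
```

The classifier would then pool these label-aware token vectors; that pooling step is omitted here.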
Semantic Communication (SC) has emerged as a novel communication paradigm that provides a receiver with meaningful information extracted from the source to maximize information transmission throughput in wireless networks, beyond the theoretical capacity limit. Despite the extensive research on SC, there is a lack of a comprehensive survey on technologies, solutions, applications, and challenges for SC. In this article, the development of SC is first reviewed and its characteristics, architecture, and advantages are summarized. Next, key technologies such as semantic extraction, semantic encoding, and semantic segmentation are discussed, and their corresponding solutions in terms of efficiency, robustness, adaptability, and reliability are summarized. Applications of SC to UAV communication, remote image sensing and fusion, intelligent transportation, and healthcare are also presented and their strategies are summarized. Finally, some challenges and future research directions are presented to provide guidance for further research on SC.
As conventional communication systems based on classic information theory have closely approached the Shannon capacity, semantic communication is emerging as a key enabling technology for the further improvement of communication performance. However, it remains unsettled how to represent semantic information and characterise the theoretical limits of semantic-oriented compression and transmission. In this paper, we consider a semantic source characterised by a set of correlated random variables whose joint probability distribution can be described by a Bayesian network. We give the information-theoretic limit on the lossless compression of the semantic source and introduce a low-complexity encoding method that exploits the conditional independence. We further characterise the limits on lossy compression of the semantic source and the upper and lower bounds of the rate-distortion function. We also investigate the lossy compression of the semantic source with two-sided information at the encoder and decoder, and obtain the corresponding rate-distortion function. We prove that the optimal code of the semantic source is the combination of the optimal codes of each conditionally independent set given the side information.
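The lossless limit for such a source can be made concrete: by the chain rule together with the conditional independences encoded by the Bayesian network, writing $\mathrm{Pa}(X_i)$ for the parents of $X_i$, the joint entropy factorizes as

```latex
H(X_1,\dots,X_n) \;=\; \sum_{i=1}^{n} H\big(X_i \,\big|\, \mathrm{Pa}(X_i)\big),
```

so encoding each variable conditioned only on its parents, rather than on the full history, can approach the joint entropy, which is the intuition behind the low-complexity encoding method the abstract mentions.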
With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication's view, not all pixels in the images are equally important for certain receivers. The existing semantic communication systems directly perform semantic encoding and decoding on the whole image, in which the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and distinguish ROI and RONI. The system also enables high-quality transmission of ROI with lower communication overheads by transmitting through different semantic communication networks with different bandwidth requirements. An improved metric θPSNR is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely, existing semantic communication approaches and the conventional approach without semantics.
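An ROI-aware quality metric in the spirit of θPSNR can be sketched as a PSNR whose MSE is a convex combination of ROI and RONI errors. The weighting scheme below is an illustrative assumption; the paper's exact θPSNR definition may differ:

```python
import numpy as np

def weighted_psnr(ref, rec, roi_mask, w_roi=0.9, max_val=255.0):
    """PSNR with the MSE split into ROI and RONI parts, weighted by w_roi.

    A larger w_roi makes errors inside the region of interest dominate
    the score, mirroring the idea that ROI fidelity matters more.
    """
    roi = roi_mask.astype(bool)
    mse_roi = np.mean((ref[roi] - rec[roi]) ** 2)
    mse_roni = np.mean((ref[~roi] - rec[~roi]) ** 2)
    mse = w_roi * mse_roi + (1 - w_roi) * mse_roni
    return 10 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 100.0)
rec = ref.copy()
rec[0, 0] += 10.0                    # introduce an error inside the ROI
mask = np.zeros((4, 4)); mask[:2, :] = 1
print(round(weighted_psnr(ref, rec, mask), 2))
```

With the error placed in the ROI, increasing `w_roi` lowers the reported quality, as intended.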
Increasing research has focused on semantic communication, the goal of which is to accurately convey the meaning rather than merely transmitting symbols from the sender to the receiver. In this paper, we design a novel encoding and decoding semantic communication framework, which adopts the semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by wrong detection is defined, and a semantic source encoding strategy is developed to minimize the average semantic loss. To further improve communication reliability, a decoding strategy that utilizes the semantic and the context information to recover messages is proposed at the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
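The notion of minimizing average semantic loss can be illustrated with a toy codeword-assignment problem over a binary symmetric channel. The channel model, the semantic distance matrix, and the brute-force search below are assumptions for illustration, not the paper's encoding strategy:

```python
import itertools
import numpy as np

def avg_semantic_loss(mapping, sem_dist, p_flip=0.1, n_bits=2):
    """Expected semantic loss when message i is sent as codeword mapping[i].

    A received word at Hamming distance h from the sent codeword occurs
    with probability p^h (1-p)^(n-h); the incurred loss is the semantic
    distance between the sent message and the message decoded from the
    received word (direct lookup, no error correction).
    """
    inv = {c: m for m, c in enumerate(mapping)}
    loss = 0.0
    for m, c in enumerate(mapping):
        for r in range(2 ** n_bits):
            h = bin(c ^ r).count("1")
            p = (p_flip ** h) * ((1 - p_flip) ** (n_bits - h))
            loss += p * sem_dist[m][inv[r]] / len(mapping)
    return loss

# four messages; semantically, 0~1 are close and 2~3 are close
sem_dist = np.array([[0, 1, 5, 5], [1, 0, 5, 5], [5, 5, 0, 1], [5, 5, 1, 0]])
best = min(itertools.permutations(range(4)),
           key=lambda m: avg_semantic_loss(m, sem_dist))
print(best)
```

The optimal assignment places semantically close messages at small Hamming distance, so the likeliest bit errors cause the least semantic damage.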
This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in the task of 3D point cloud semantic segmentation to capture semantic integrity, which is empowered by dynamic capsule routing with a 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Secondly, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype enhancement version for capturing rich semantics. Besides, the shot-correlation within the category is calculated via the interaction of different samples to enhance the intra-category similarity. Ablation studies prove that the involved part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, under the integration of these two modules, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on the task of 3D point cloud semantic segmentation, compared to some state-of-the-art methods.
High-resolution remote sensing image segmentation is a challenging task. In urban remote sensing, the presence of occlusions and shadows often results in blurred or invisible object boundaries, thereby increasing the difficulty of segmentation. In this paper, an improved network with a cross-region self-attention mechanism for multi-scale features based on DeepLabv3+ is designed to address the difficulties of small object segmentation and blurred target edge segmentation. First, we use CrossFormer as the backbone feature extraction network to achieve the interaction between large- and small-scale features, and establish self-attention associations between features at both large and small scales to capture global contextual feature information. Next, an improved atrous spatial pyramid pooling module is introduced to establish multi-scale feature maps with large- and small-scale feature associations, and attention vectors are added in the channel direction to enable adaptive adjustment of multi-scale channel features. The proposed network model is validated using the Potsdam and Vaihingen datasets. The experimental results show that, compared with existing techniques, the network model designed in this paper can extract and fuse multi-scale information, more clearly extract edge information and small-scale information, and segment boundaries more smoothly. Experimental results on public datasets demonstrate the superiority of our method compared with several state-of-the-art networks.
Context information is significant for semantic extraction and recovery of messages in semantic communication. However, context information is not fully utilized in existing semantic communication systems, since relationships between sentences are often ignored. In this paper, we propose an Extended Context-based Semantic Communication (ECSC) system for text transmission, in which context information within and between sentences is explored for semantic representation and recovery. At the encoder, self-attention and segment-level relative attention are used to extract context information within and between sentences, respectively. In addition, a gate mechanism is adopted at the encoder to incorporate context information from different ranges. At the decoder, Transformer-XL is introduced to obtain more semantic information from the historical communication processes for semantic recovery. Simulation results show the effectiveness of our proposed model in improving the semantic accuracy between transmitted and recovered messages under various channel conditions.
We consider an image semantic communication system in a time-varying fading Gaussian MIMO channel with a finite number of channel states. A deep learning-aided broadcast approach scheme is proposed to enable adaptive semantic transmission under different channel states. We combine the classic broadcast approach with the image transformer to implement this adaptive joint source and channel coding (JSCC) scheme. Specifically, we utilize a neural network (NN) to jointly optimize the hierarchical image compression and superposition code mapping within this scheme. The learned transformers and codebooks allow recovery of the image with adaptive quality and a low error rate at the receiver side in each channel state. The simulation results show that our proposed scheme can dynamically adapt the coding to the current channel state and outperforms some existing intelligent schemes with fixed coding blocks.
Video transmission requires considerable bandwidth, and current widely employed schemes prove inadequate when confronted with scenes in which talking heads feature prominently. Motivated by the strides in talking-head generative technology, this paper introduces a semantic transmission system tailored for talking-head videos. The system captures semantic information from talking-head video and faithfully reconstructs the source video at the receiver; only a one-shot reference frame and compact semantic features are required for the entire transmission. Specifically, we analyze video semantics in the pixel domain frame by frame and jointly process multi-frame semantic information to seamlessly incorporate spatial and temporal information. Variational modeling is utilized to evaluate the diversity of importance among group semantics, thereby guiding bandwidth resource allocation for semantics to enhance system efficiency. The whole end-to-end system is modeled as an optimization problem, equivalent to acquiring optimal rate-distortion performance. We evaluate our system on both reference frame and video transmission; experimental results demonstrate that our system can improve the efficiency and robustness of communications. Compared to classical approaches, our system can save over 90% of bandwidth while user-perceived quality remains comparable.
With the rapid growth of information transmission via the Internet, efforts have been made to reduce network load and promote efficiency. One such application is semantic computing, which can extract and process semantic information. Social media has enabled users to share their current emotions, opinions, and life events through their mobile devices. Notably, people suffering from mental health problems are more willing to share their feelings on social networks. Therefore, it is necessary to extract semantic information from social media (vlog data) to identify abnormal emotional states and facilitate early identification and intervention. Most studies do not consider spatio-temporal information when fusing multimodal information to identify abnormal emotional states such as depression. To solve this problem, this paper proposes a spatio-temporal squeeze transformer method for the extraction of semantic features of depression. First, a module with spatio-temporal data is embedded into the transformer encoder, which is utilized to obtain a representation of spatio-temporal features. Second, a classifier with a voting mechanism is designed to encourage the model to classify depression and non-depression effectively. Experiments are conducted on the D-Vlog dataset. The results show that the method is effective, and the accuracy can reach 70.70%. This work provides scaffolding for future work on affect recognition in semantic communication based on social media vlog data.
The concept of semantic communication provides a novel approach for applications in scenarios with limited communication resources. In this paper, we propose an end-to-end (E2E) semantic molecular communication system, aiming to enhance the efficiency of molecular communication systems by reducing the transmitted information. Specifically, following the joint source-channel coding paradigm, the network is designed to encode the task-relevant information into the concentration of the information molecules, which is robust to the degradation of the molecular communication channel. Furthermore, we propose a channel network to enable E2E learning over the non-differentiable molecular channel. Experimental results demonstrate the superior performance of the semantic molecular communication system over conventional methods in classification tasks.
With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large amount of sonar image data, making it essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural network-based sonar image compression methods usually focus on pixel-level information without semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-based sonar image compression network, which is measured by the residual of a semantic segmentation network. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is proposed that adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet loss resistance of transmission.
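Unequal error protection can be sketched as allocating transmit power in proportion to the semantic importance of each data segment. This proportional rule is a simple illustrative heuristic, not the UAT-SSIC codec's actual allocation:

```python
import numpy as np

def unequal_error_protection(importance, total_power):
    """Split a power budget across segments in proportion to importance.

    More important segments (e.g., those carrying segmentation-critical
    content) get more power and hence lower error probability.
    """
    imp = np.asarray(importance, dtype=float)
    return total_power * imp / imp.sum()

p = unequal_error_protection([0.6, 0.3, 0.1], total_power=10.0)
print(p)  # [6. 3. 1.]
```

A real codec would map importance to power via the channel's error model rather than a linear rule, but the budget-splitting structure is the same.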
Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, image, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, imposing additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. The simulation results show a significant enhancement in system performance and improved robustness.
In cornfields, factors such as the similarity between corn seedlings and weeds and the blurring of plant edge details pose challenges to corn and weed segmentation. In addition, remote areas such as farmland are usually constrained by limited computational resources and limited collected data. Therefore, it becomes necessary to lighten the model to better adapt to complex cornfield scenes and to make full use of the limited data. In this paper, we propose an improved image segmentation algorithm based on U-Net. First, the inverted residual structure is introduced into the contraction path to reduce the number of parameters in the training process and improve the feature extraction ability. Second, the pyramid pooling module is introduced to enhance the network's ability to acquire contextual information and to deal with the small-target loss problem. Finally, to further enhance the segmentation capability of the model, the squeeze-and-excitation mechanism is introduced in the expansion path. We used images of corn seedlings collected in the field and publicly available corn weed datasets to evaluate the improved model. The improved model has a total of 3.79 M parameters and achieves an mIoU of 87.9%. The FPS on a single 3050 Ti video card is about 58.9. The experimental results show that the network proposed in this paper can quickly segment corn and weeds in a cornfield scenario with good segmentation accuracy.
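The squeeze-and-excitation mechanism mentioned above can be sketched in a few lines of numpy. This is the standard SE pattern (global pooling, bottleneck, sigmoid gate), not this paper's specific implementation; the weight shapes assume a reduction ratio of 4:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.

    Squeeze: global average pool to a (C,) channel descriptor.
    Excite: bottleneck dense layer (w1: (C/r, C)) with ReLU, then an
    expanding layer (w2: (C, C/r)) with sigmoid, giving per-channel
    weights in (0, 1) used to rescale the input channels.
    """
    z = feat.mean(axis=(1, 2))                # squeeze: (C,)
    s = np.maximum(w1 @ z, 0)                 # bottleneck + ReLU
    w = 1 / (1 + np.exp(-(w2 @ s)))           # expand + sigmoid gate: (C,)
    return feat * w[:, None, None]            # channel-wise recalibration

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), each output channel is a damped copy of its input, which is what lets the network suppress uninformative channels cheaply.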
In video captioning methods based on an encoder-decoder, limited visual features are extracted by an encoder, and a natural sentence describing the video content is generated using a decoder. However, this kind of method depends on a single video input source and few visual labels, and there is a problem of semantic alignment between video contents and generated natural sentences, so it is not suitable for accurately comprehending and describing video contents. To address this issue, this paper proposes a video captioning method with semantic topic-guided generation. First, a 3D convolutional neural network is utilized to extract the spatiotemporal features of videos during encoding. Then, the semantic topics of video data are extracted using the visual labels retrieved from similar video data. In decoding, a decoder is constructed by combining a novel Enhance-TopK sampling algorithm with a Generative Pre-trained Transformer-2 deep neural network, which decreases the influence of "deviation" in the semantic mapping process between videos and texts by jointly decoding a baseline and semantic topics of video contents. During this process, the designed Enhance-TopK sampling algorithm can alleviate the long-tail problem by dynamically adjusting the probability distribution of the predicted words. Finally, experiments are conducted on two public datasets, Microsoft Research Video Description and Microsoft Research-Video to Text. The experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches. Specifically, the performance indicators Bilingual Evaluation Understudy, Metric for Evaluation of Translation with Explicit Ordering, Recall-Oriented Understudy for Gisting Evaluation-longest common subsequence, and Consensus-based Image Description Evaluation are improved by 1.2%, 0.1%, 0.3%, and 2.4% on the Microsoft Research Video Description dataset, and by 0.1%, 1.0%, 0.1%, and 2.8% on the Microsoft Research-Video to Text dataset, respectively, compared with existing video captioning methods. As a result, the proposed method can generate video captions that are more closely aligned with human natural language expression habits.
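Top-k sampling with a dynamically adjusted distribution can be sketched as follows. The flattening exponent `alpha` is an assumed smoothing parameter for illustration; the paper's actual Enhance-TopK adjustment rule is not reproduced here:

```python
import numpy as np

def enhance_topk_sample(logits, k=5, alpha=0.8, rng=None):
    """Sample a token from the top-k logits after flattening the distribution.

    Keeping only the k most probable tokens and raising their probabilities
    to a power alpha < 1 before renormalizing lifts the tail of the top-k
    set, illustrating how reshaping the predicted-word distribution can
    counter long-tail effects.
    """
    rng = rng or np.random.default_rng()
    top = np.argpartition(logits, -k)[-k:]      # indices of the k largest
    p = np.exp(logits[top] - logits[top].max()) # softmax over the top-k
    p = p ** alpha                              # flatten (alpha < 1)
    p /= p.sum()
    return int(top[rng.choice(k, p=p)])

logits = np.array([2.0, 0.1, -1.0, 3.0, 0.5, -2.0])
tok = enhance_topk_sample(logits, k=3, rng=np.random.default_rng(0))
print(tok)
```

With `alpha=1` this reduces to plain top-k sampling; smaller `alpha` gives rarer in-set tokens more chance to be drawn.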
In the future development of sixth generation (6G) mobile communication, several communication models have been proposed to face the growing challenges of the task. The rapid development of artificial intelligence (AI) foundation models provides significant support for efficient and intelligent communication interactions. In this paper, we propose an innovative semantic communication paradigm: a task-oriented semantic communication system with foundation models. First, we segment the image using task prompts based on the Segment Anything Model (SAM) and Contrastive Language-Image Pretraining (CLIP). Meanwhile, we adopt Bezier curves to refine the mask and improve the segmentation accuracy. Second, we apply differentiated semantic compression and transmission approaches to the segmented content. Third, we fuse different semantic information based on the conditional diffusion model to generate high-quality images that satisfy the users' specific task requirements. Finally, the experimental results show that the proposed system compresses the semantic information effectively and improves the robustness of semantic communication.
This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through a collection of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds; however, there has been a lack of focus on making the most of the numerous existing augmentation techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from the pool of options, either PolarMix or Mix3D, for each sample. The results of the experiments conducted validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
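The per-sample random choice at the heart of RandomFusion is simple to sketch. The two augmentation functions below are stand-ins (a z-rotation and Gaussian jitter), not the real PolarMix and Mix3D, which respectively swap polar sectors between scans and mix two whole scenes:

```python
import numpy as np

def polarmix_stub(points, rng):
    # stand-in for PolarMix: here, a random rotation about the z axis
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return points @ rot.T

def mix3d_stub(points, rng):
    # stand-in for Mix3D: here, small additive coordinate jitter
    return points + rng.normal(0, 0.01, size=points.shape)

def random_fusion(points, rng):
    """RandomFusion: for each sample, pick one augmentation from the pool
    uniformly at random and apply it."""
    aug = rng.choice([polarmix_stub, mix3d_stub])
    return aug(points, rng)

rng = np.random.default_rng(42)
cloud = rng.standard_normal((100, 3))
aug_cloud = random_fusion(cloud, rng)
print(aug_cloud.shape)  # (100, 3)
```

The design point is that no new augmentation is invented: diversity comes purely from randomizing over existing methods.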
With the rapid spread of Internet information and of fake news, the detection of fake news becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlations between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the news text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. Domain features are then used as an enhancement feature, with an attention mechanism to fully capture the finer-grained emotional features associated with each domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 score of this model is 0.958, 4.9% higher than that of the second-best model; on the English dataset FakeNewsNet, the F1 score is 0.845, 1.8% higher than that of the second-best model, demonstrating that the model is advanced and feasible.
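Attention-based fusion of the semantic, emotional, and domain feature vectors can be sketched as scoring each vector against a learned query and taking the weighted sum. This is a simplified stand-in for the paper's fusion module; the query vector `w` and the single-query form are assumptions:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_fusion(sem, emo, dom, w):
    """Fuse three feature vectors (each (d,)) by attention over a query w.

    Each feature is scored against w; softmax turns the scores into
    weights, so the fused vector emphasizes whichever feature is most
    relevant for the current input.
    """
    feats = np.stack([sem, emo, dom])          # (3, d)
    scores = feats @ w                         # (3,) relevance scores
    alpha = softmax(scores)                    # attention weights, sum to 1
    return alpha @ feats                       # (d,) fused representation

rng = np.random.default_rng(7)
d = 16
fused = attentive_fusion(rng.standard_normal(d), rng.standard_normal(d),
                         rng.standard_normal(d), rng.standard_normal(d))
print(fused.shape)  # (16,)
```

The fused vector would then feed the MLP-plus-Softmax classifier described in the abstract.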
基金the Communication University of China(CUC230A013)the Fundamental Research Funds for the Central Universities.
文摘The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms,yielding outstanding achievements across diverse domains.Nonetheless,self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures.In response,this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network(DSLD),which adopts a diffusion sampling method to capture more comprehensive semantic information of the data.Additionally,themodel leverages the joint correlation information of labels and data to introduce the computation of text representation,correcting semantic representationbiases in thedata,andincreasing the accuracyof semantic representation.Ultimately,the model computes the corresponding classification results by synthesizing these rich data semantic representations.Experiments on seven benchmark datasets show that our proposed model achieves competitive results compared to state-of-the-art methods.
基金supported by the Natural Science Foundation of China under Grants 61971084,62025105,62001073,62272075the National Natural Science Foundation of Chongqing under Grants cstc2021ycjh-bgzxm0039,cstc2021jcyj-msxmX0031+1 种基金the Science and Technology Research Program for Chongqing Municipal Education Commission KJZD-M202200601the Support Program for Overseas Students to Return to China for Entrepreneurship and Innovation under Grants cx2021003,cx2021053.
Abstract: Semantic Communication (SC) has emerged as a novel communication paradigm that provides a receiver with meaningful information extracted from the source to maximize information transmission throughput in wireless networks, beyond the theoretical capacity limit. Despite the extensive research on SC, there is a lack of a comprehensive survey on the technologies, solutions, applications, and challenges of SC. In this article, the development of SC is first reviewed, and its characteristics, architecture, and advantages are summarized. Next, key technologies such as semantic extraction, semantic encoding, and semantic segmentation are discussed, and their corresponding solutions in terms of efficiency, robustness, adaptability, and reliability are summarized. Applications of SC to UAV communication, remote image sensing and fusion, intelligent transportation, and healthcare are also presented, and their strategies are summarized. Finally, some challenges and future research directions are presented to provide guidance for further research on SC.
Funding: partly supported by NSFC under Grants No. 62293481 and No. 62201505, and partly by the SUTD-ZJU IDEA Grant (SUTD-ZJU (VP) 202102).
Abstract: As conventional communication systems based on classic information theory have closely approached the Shannon capacity, semantic communication is emerging as a key enabling technology for the further improvement of communication performance. However, it remains unsettled how to represent semantic information and characterise the theoretical limits of semantic-oriented compression and transmission. In this paper, we consider a semantic source which is characterised by a set of correlated random variables whose joint probability distribution can be described by a Bayesian network. We give the information-theoretic limit on the lossless compression of the semantic source and introduce a low-complexity encoding method that exploits the conditional independence. We further characterise the limits on lossy compression of the semantic source and the upper and lower bounds of the rate-distortion function. We also investigate the lossy compression of the semantic source with two-sided information at the encoder and decoder, and obtain the corresponding rate-distortion function. We prove that the optimal code of the semantic source is the combination of the optimal codes of each conditionally independent set given the side information.
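The lossless limit referred to above is the joint entropy of the source variables, which the Bayesian-network factorisation lets us compute as a sum of conditional entropies (the chain rule). A minimal two-node example with made-up probabilities:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Toy two-node Bayesian network X -> Y, so H(X, Y) = H(X) + H(Y|X).
# The probabilities below are illustrative, not from the paper.
p_x = [0.5, 0.5]
p_y_given_x = [[0.9, 0.1],   # P(Y | X = 0)
               [0.2, 0.8]]   # P(Y | X = 1)

h_x = entropy(p_x)
h_y_given_x = sum(px * entropy(row) for px, row in zip(p_x, p_y_given_x))
joint_limit = h_x + h_y_given_x  # lossless compression limit, bits per symbol
```

Exploiting the factorisation means each conditional distribution can be coded separately, which is the intuition behind the paper's low-complexity encoding.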
Funding: supported in part by collaborative research with Toyota Motor Corporation, in part by ROIS NII Open Collaborative Research under Grant 21S0601, and in part by JSPS KAKENHI under Grants 20H00592 and 21H03424.
Abstract: With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication perspective, not all pixels in an image are equally important for certain receivers. Existing semantic communication systems directly perform semantic encoding and decoding on the whole image, so the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and distinguish ROI from RONI. The system also enables high-quality transmission of ROI with lower communication overheads by transmitting through different semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely existing semantic communication approaches and the conventional approach without semantics.
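The ROI/RONI separation step can be sketched as splitting pixels by a binary segmentation mask and later recombining the two streams. The flat pixel lists below are a simplification; the actual system operates on full images with learned codecs.

```python
def split_by_mask(image, mask):
    """Separate ROI and RONI pixels using a binary segmentation mask
    (1 = region of interest). Illustrative sketch only."""
    roi  = [p for p, m in zip(image, mask) if m == 1]
    roni = [p for p, m in zip(image, mask) if m == 0]
    return roi, roni

def merge(roi, roni, mask):
    """Reassemble the image from the two streams, guided by the mask."""
    roi_it, roni_it = iter(roi), iter(roni)
    return [next(roi_it) if m == 1 else next(roni_it) for m in mask]
```

In the paper's system, the two streams would then be sent over separate semantic networks with different bandwidth budgets before being merged at the receiver.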
Funding: supported in part by the National Natural Science Foundation of China under Grants No. 61931020, U19B2024, 62171449, and 62001483, and in part by the Science and Technology Innovation Program of Hunan Province under Grant No. 2021JJ40690.
Abstract: Increasing research has focused on semantic communication, the goal of which is to convey accurately the meaning instead of merely transmitting symbols from the sender to the receiver. In this paper, we design a novel encoding and decoding semantic communication framework, which adopts the semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by wrong detection is defined, and a semantic source encoding strategy is developed to minimize the average semantic loss. To further improve communication reliability, a decoding strategy that utilizes the semantic and context information to recover messages is proposed at the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
Funding: This work is supported by the National Natural Science Foundation of China under Grant No. 62001341, the Natural Science Foundation of Jiangsu Province under Grant No. BK20221379, and the Jiangsu Engineering Research Center of Digital Twinning Technology for Key Equipment in Petrochemical Process under Grant No. DTEC202104.
Abstract: This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in the task of 3D point cloud semantic segmentation to capture semantic integrity, which is empowered by dynamic capsule routing with a 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Second, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype version to capture rich semantics. In addition, the shot correlation within each category is calculated via the interaction of different samples to enhance intra-category similarity. Ablation studies prove that the involved part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, with these two modules integrated, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on the task of 3D point cloud semantic segmentation compared to some state-of-the-art methods.
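The prototypes that the enhancement module operates on are, in their simplest form, per-class means of support-set embeddings. A minimal sketch of that base operation (the paper's multi-prototype enhancement and capsule routing are not reproduced here):

```python
def class_prototype(support_embeddings):
    """Compute a class prototype as the mean of its support embeddings.
    Illustrative sketch of the base operation in prototype-based
    few-shot segmentation, not the paper's enhanced version."""
    n = len(support_embeddings)
    d = len(support_embeddings[0])
    return [sum(e[j] for e in support_embeddings) / n for j in range(d)]
```

Query points are then labelled by their nearest prototype; the multi-prototype enhancement replaces this single mean with several discriminative prototypes per class.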
基金the National Natural Science Foundation of China(Grant Number 62066013)Hainan Provincial Natural Science Foundation of China(Grant Numbers 622RC674 and 2019RC182).
Abstract: High-resolution remote sensing image segmentation is a challenging task. In urban remote sensing, the presence of occlusions and shadows often results in blurred or invisible object boundaries, thereby increasing the difficulty of segmentation. In this paper, an improved network with a cross-region self-attention mechanism for multi-scale features, based on DeepLabv3+, is designed to address the difficulties of small-object segmentation and blurred target-edge segmentation. First, we use CrossFormer as the backbone feature extraction network to achieve interaction between large- and small-scale features, and establish self-attention associations between features at both scales to capture global contextual feature information. Next, an improved atrous spatial pyramid pooling module is introduced to establish multi-scale feature maps with large- and small-scale feature associations, and attention vectors are added in the channel direction to enable adaptive adjustment of multi-scale channel features. The proposed network model is validated using the Potsdam and Vaihingen datasets. The experimental results show that, compared with existing techniques, the network model designed in this paper can extract and fuse multi-scale information, more clearly extract edge and small-scale information, and segment boundaries more smoothly. Experimental results on public datasets demonstrate the superiority of our method compared with several state-of-the-art networks.
Funding: supported in part by the National Natural Science Foundation of China under Grants No. 61931020, U19B2024, 62171449, and 62001483, and in part by the Science and Technology Innovation Program of Hunan Province under Grant No. 2021JJ40690.
Abstract: Context information is significant for semantic extraction and recovery of messages in semantic communication. However, context information is not fully utilized in existing semantic communication systems, since relationships between sentences are often ignored. In this paper, we propose an Extended Context-based Semantic Communication (ECSC) system for text transmission, in which context information within and between sentences is explored for semantic representation and recovery. At the encoder, self-attention and segment-level relative attention are used to extract context information within and between sentences, respectively. In addition, a gate mechanism is adopted at the encoder to incorporate the context information from different ranges. At the decoder, Transformer-XL is introduced to obtain more semantic information from the historical communication processes for semantic recovery. Simulation results show the effectiveness of our proposed model in improving the semantic accuracy between transmitted and recovered messages under various channel conditions.
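The gate mechanism can be sketched as a sigmoid-weighted mix of the within-sentence and between-sentence context features. The scalar gate below is a deliberate simplification of ECSC's learned gate; the weight `w` and bias `b` are illustrative placeholders, not trained parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(intra, inter, w=1.0, b=0.0):
    """Mix intra-sentence and inter-sentence context features with a
    per-dimension sigmoid gate. Illustrative sketch only."""
    fused = []
    for a, c in zip(intra, inter):
        g = sigmoid(w * (a - c) + b)    # gate: how much intra context to keep
        fused.append(g * a + (1 - g) * c)
    return fused
```

With the gate saturated near 0 or 1, the output falls back to one context range; in between, the two ranges are blended.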
Funding: supported in part by the National Key R&D Project of China under Grant 2020YFA0712300, in part by the National Natural Science Foundation of China under Grants NSFC-62231022 and 12031011, and in part by the NSF of China under Grant 62125108.
Abstract: We consider an image semantic communication system in a time-varying fading Gaussian MIMO channel with a finite number of channel states. A deep learning-aided broadcast approach scheme is proposed to enable adaptive semantic transmission across different channel states. We combine the classic broadcast approach with an image transformer to implement this adaptive joint source and channel coding (JSCC) scheme. Specifically, we utilize a neural network (NN) to jointly optimize the hierarchical image compression and the superposition code mapping within this scheme. The learned transformers and codebooks allow recovery of the image with adaptive quality and a low error rate at the receiver side in each channel state. The simulation results show that our proposed scheme can dynamically adapt the coding to the current channel state and outperform some existing intelligent schemes with fixed coding blocks.
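The superposition code mapping at the heart of the broadcast approach can be sketched as a power-weighted sum of per-layer symbol streams. The paper learns this mapping with a neural network; the fixed power split below is purely illustrative.

```python
import math

def superpose(layers, powers):
    """Classic superposition coding sketch: scale each layer's symbols
    by the square root of its power share and sum. Illustrative only;
    the paper's codebooks are learned."""
    assert abs(sum(powers) - 1.0) < 1e-9, "power allocation must sum to 1"
    n = len(layers[0])
    return [sum(math.sqrt(p) * layer[i] for p, layer in zip(powers, layers))
            for i in range(n)]
```

A receiver in a good channel state can peel off more layers (successive decoding), which is what makes the quality adaptive per channel state.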
Funding: supported by the National Natural Science Foundation of China (No. 61971062) and the BUPT Excellent Ph.D. Students Foundation (CX2022153).
Abstract: Video transmission requires considerable bandwidth, and current widely employed schemes prove inadequate when confronted with scenes in which talking heads feature prominently. Motivated by the strides in talking-head generative technology, this paper introduces a semantic transmission system tailored for talking-head videos. The system captures semantic information from a talking-head video and faithfully reconstructs the source video at the receiver; only a one-shot reference frame and compact semantic features are required for the entire transmission. Specifically, we analyze video semantics in the pixel domain frame by frame and jointly process multi-frame semantic information to seamlessly incorporate spatial and temporal information. Variational modeling is utilized to evaluate the diversity of importance among group semantics, thereby guiding bandwidth resource allocation for the semantics to enhance system efficiency. The whole end-to-end system is modeled as an optimization problem, equivalent to acquiring optimal rate-distortion performance. We evaluate our system on both reference frame and video transmission; experimental results demonstrate that our system can improve the efficiency and robustness of communications. Compared to classical approaches, our system can save over 90% of bandwidth while user-perceived quality remains comparable.
Funding: supported in part by the STI 2030 Major Projects (2021ZD0202002), in part by the National Natural Science Foundation of China (Grant No. 62227807), in part by the Natural Science Foundation of Gansu Province, China (Grant No. 22JR5RA488), in part by the Fundamental Research Funds for the Central Universities (Grant No. lzujbky-2023-16), and supported by the Supercomputing Center of Lanzhou University.
Abstract: With the rapid growth of information transmission via the Internet, efforts have been made to reduce network load and promote efficiency. One such application is semantic computing, which can extract and process semantic information. Social media has enabled users to share their current emotions, opinions, and life events through their mobile devices. Notably, people suffering from mental health problems are more willing to share their feelings on social networks. Therefore, it is necessary to extract semantic information from social media (vlog data) to identify abnormal emotional states and facilitate early identification and intervention. Most studies do not consider spatio-temporal information when fusing multimodal information to identify abnormal emotional states such as depression. To solve this problem, this paper proposes a spatio-temporal squeeze transformer method for the extraction of semantic features of depression. First, a module with spatio-temporal data is embedded into the transformer encoder, which is utilized to obtain a representation of spatio-temporal features. Second, a classifier with a voting mechanism is designed to encourage the model to classify depression and non-depression effectively. Experiments are conducted on the D-Vlog dataset. The results show that the method is effective, with an accuracy of 70.70%. This work provides scaffolding for future work on affect recognition in semantic communication based on social media vlog data.
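The voting mechanism over per-segment predictions can be sketched as a simple majority vote. Only the aggregation step is shown; the per-segment classifier itself is learned in the paper.

```python
from collections import Counter

def vote(segment_predictions):
    """Majority vote over per-segment labels (e.g., 1 = depression,
    0 = non-depression). Tie-breaking follows first-seen order; a
    simplification of the paper's voting classifier."""
    counts = Counter(segment_predictions)
    label, _ = counts.most_common(1)[0]
    return label
```

Voting makes the video-level decision robust to a few mislabelled segments.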
Funding: supported by the Beijing Natural Science Foundation (L211012), the Natural Science Foundation of China (62122012, 62221001), and the Fundamental Research Funds for the Central Universities (2022JBQY004).
Abstract: The concept of semantic communication provides a novel approach for applications in scenarios with limited communication resources. In this paper, we propose an end-to-end (E2E) semantic molecular communication system, aiming to enhance the efficiency of molecular communication systems by reducing the transmitted information. Specifically, following the joint source-channel coding paradigm, the network is designed to encode the task-relevant information into the concentration of the information molecules, which is robust to the degradation of the molecular communication channel. Furthermore, we propose a channel network to enable E2E learning over the non-differentiable molecular channel. Experimental results demonstrate the superior performance of the semantic molecular communication system over conventional methods in classification tasks.
Funding: supported in part by the Tianjin Technology Innovation Guidance Special Fund Project under Grant No. 21YDTPJC00850, in part by the National Natural Science Foundation of China under Grant No. 41906161, and in part by the Natural Science Foundation of Tianjin under Grant No. 21JCQNJC00650.
Abstract: With the development of underwater sonar detection technology, the simultaneous localization and mapping (SLAM) approach has attracted much attention in the underwater navigation field in recent years. However, the weak detection ability of a single vehicle limits SLAM performance in wide areas, so cooperative SLAM using multiple vehicles has become an important research direction. The key factor in cooperative SLAM is timely and efficient sonar image transmission among underwater vehicles. However, the limited bandwidth of underwater acoustic channels conflicts with the large amount of sonar image data, so it is essential to compress the images before transmission. Recently, deep neural networks have shown great value in image compression by virtue of their powerful learning ability, but existing neural-network-based sonar image compression methods usually focus on pixel-level information without semantic-level information. In this paper, we propose a novel underwater acoustic transmission scheme called UAT-SSIC, which includes a semantic segmentation-based sonar image compression (SSIC) framework and a joint source-channel codec, to improve the accuracy of the semantic information of the reconstructed sonar image at the receiver. The SSIC framework consists of an auto-encoder-based sonar image compression network, whose quality is measured by the residual of a semantic segmentation network. Considering that sonar images have blurred target edges, the semantic segmentation network uses a special dilated convolutional neural network (DiCNN) to enhance segmentation accuracy by expanding the range of receptive fields. A joint source-channel codec with unequal error protection is proposed, which adjusts the power level of the transmitted data to deal with sonar image transmission errors caused by the harsh underwater acoustic channel. Experimental results demonstrate that our method preserves more semantic information, with advantages over existing methods at the same compression ratio. It also improves the error tolerance and packet-loss resistance of transmission.
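The receptive-field expansion behind the DiCNN comes from dilated convolution, in which kernel taps are spaced `dilation` positions apart so the field of view grows without adding parameters. A 1-D sketch (the paper's network is 2-D and learned):

```python
def dilated_conv1d(signal, kernel, dilation=2):
    """1-D dilated convolution with valid padding: each kernel tap
    samples the signal `dilation` steps apart. Illustrative sketch of
    the receptive-field idea, not the paper's DiCNN."""
    span = (len(kernel) - 1) * dilation   # receptive field minus one
    return [sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]
```

With a two-tap kernel and dilation 2, each output sees inputs three positions apart, which helps pick up blurred edges spread over a wider area.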
Funding: supported in part by the National Science Foundation of China (NSFC) under Grant No. 62271514, in part by the Science, Technology and Innovation Commission of Shenzhen Municipality under Grants No. JCYJ20210324120002007 and ZDSYS20210623091807023, and in part by the State Key Laboratory of Public Big Data under Grant No. PBD2023-01.
Abstract: Recently, deep learning-based semantic communication has garnered widespread attention, with numerous systems designed for transmitting diverse data sources, including text, images, and speech. While efforts have been directed toward improving system performance, many studies have concentrated on enhancing the structure of the encoder and decoder. However, this often overlooks the resulting increase in model complexity, which imposes additional storage and computational burdens on smart devices. Furthermore, existing work tends to prioritize explicit semantics, neglecting the potential of implicit semantics. This paper aims to easily and effectively enhance the receiver's decoding capability without modifying the encoder and decoder structures. We propose a novel semantic communication system with variational neural inference for text transmission. Specifically, we introduce a simple but effective variational neural inferer at the receiver to infer the latent semantic information within the received text. This information is then utilized to assist in the decoding process. The simulation results show a significant enhancement in system performance and improved robustness.
Abstract: In cornfields, factors such as the similarity between corn seedlings and weeds and the blurring of plant edge details pose challenges for corn and weed segmentation. In addition, remote areas such as farmland are usually constrained by limited computational resources and limited collected data. Therefore, it becomes necessary to lighten the model to better adapt to complex cornfield scenes and to make full use of the limited data. In this paper, we propose an improved image segmentation algorithm based on U-Net. First, the inverted residual structure is introduced into the contracting path to reduce the number of parameters in the training process and improve the feature extraction ability; second, a pyramid pooling module is introduced to enhance the network's ability to acquire contextual information and to deal with the small-target loss problem; finally, to further enhance the segmentation capability of the model, the squeeze-and-excitation mechanism is introduced in the expanding path. We used images of corn seedlings collected in the field and publicly available corn-weed datasets to evaluate the improved model. The improved model has 3.79 M parameters in total, and its mIoU reaches 87.9%. The FPS on a single 3050 Ti graphics card is about 58.9. The experimental results show that the network proposed in this paper can quickly segment corn and weeds in a cornfield scenario with good segmentation accuracy.
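The squeeze-and-excitation mechanism mentioned above can be sketched as channel-wise pooling followed by rescaling. The plain sigmoid below stands in for the learned bottleneck (two fully connected layers) of a real SE block.

```python
import math

def squeeze_excite(channels):
    """SE sketch: global-average-pool each channel (squeeze), map the
    pooled value to a 0-1 scale (excite; here a bare sigmoid instead of
    learned FC layers), and rescale the channel."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    squeezed = [sum(c) / len(c) for c in channels]   # one scalar per channel
    scales = [sigmoid(s) for s in squeezed]          # per-channel importance
    return [[scale * v for v in c] for c, scale in zip(channels, scales)]
```

Channels with stronger average activation are preserved at a higher scale, which is how SE lets the network emphasise informative feature maps.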
基金supported in part by the National Natural Science Foundation of China under Grant 61873277in part by the Natural Science Basic Research Plan in Shaanxi Province of China underGrant 2020JQ-758in part by the Chinese Postdoctoral Science Foundation under Grant 2020M673446.
Abstract: In video captioning methods based on an encoder-decoder, limited visual features are extracted by an encoder, and a natural sentence describing the video content is generated using a decoder. However, this kind of method depends on a single video input source and few visual labels, and there is a problem with semantic alignment between video contents and generated natural sentences, which is unsuitable for accurately comprehending and describing video contents. To address this issue, this paper proposes a video captioning method with semantic topic-guided generation. First, a 3D convolutional neural network is utilized to extract the spatiotemporal features of videos during encoding. Then, the semantic topics of video data are extracted using the visual labels retrieved from similar video data. In decoding, a decoder is constructed by combining a novel Enhance-TopK sampling algorithm with a Generative Pre-trained Transformer-2 deep neural network, which decreases the influence of "deviation" in the semantic mapping process between videos and texts by jointly decoding a baseline and semantic topics of video contents. During this process, the designed Enhance-TopK sampling algorithm can alleviate the long-tail problem by dynamically adjusting the probability distribution of the predicted words. Finally, experiments are conducted on two public datasets, Microsoft Research Video Description and Microsoft Research-Video to Text. The experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches. Specifically, the performance indicators Bilingual Evaluation Understudy, Metric for Evaluation of Translation with Explicit Ordering, Recall-Oriented Understudy for Gisting Evaluation-longest common subsequence, and Consensus-based Image Description Evaluation of the proposed method are improved by 1.2%, 0.1%, 0.3%, and 2.4% on the Microsoft Research Video Description dataset, and by 0.1%, 1.0%, 0.1%, and 2.8% on the Microsoft Research-Video to Text dataset, respectively, compared with existing video captioning methods. As a result, the proposed method can generate video captions that are more closely aligned with human natural language expression habits.
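Plain top-k sampling with temperature, sketched below, is the baseline that Enhance-TopK builds on; the paper's dynamic reshaping of the predicted-word distribution is not reproduced here, and all parameters are illustrative.

```python
import math
import random

def top_k_sample(logits, k=3, temperature=1.0, rng=random):
    """Standard top-k sampling: keep the k highest-scoring tokens,
    softmax them at the given temperature, and sample one index.
    A baseline sketch, not the paper's Enhance-TopK algorithm."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in ranked]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = rng.random(), 0.0
    for idx, p in zip(ranked, probs):
        acc += p
        if r <= acc:
            return idx
    return ranked[-1]
```

Raising the temperature flattens the kept distribution, which is one simple way to give tail words more probability mass; Enhance-TopK adjusts the distribution dynamically instead.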
Funding: supported in part by the National Natural Science Foundation of China under Grants 62001246, 62231017, 62201277, and 62071255; the Natural Science Foundation of Jiangsu Province under Grant BK20220390; the Key R&D Program of Jiangsu Province (key projects and topics) under Grants BE2021095 and BE2023035; the Natural Science Research Startup Foundation for Recruiting Talents of Nanjing University of Posts and Telecommunications (Grant No. NY221011); the National Science Foundation of Xiamen, China (No. 3502Z202372013); and the Open Project of the Key Laboratory of Underwater Acoustic Communication and Marine Information Technology (Xiamen University) of the Ministry of Education, China (No. UAC202304).
Abstract: In the future development of sixth generation (6G) mobile communication, several communication models have been proposed to face the growing challenges of diverse tasks. The rapid development of artificial intelligence (AI) foundation models provides significant support for efficient and intelligent communication interactions. In this paper, we propose an innovative semantic communication paradigm called the task-oriented semantic communication system with foundation models. First, we segment the image by using task prompts based on the Segment Anything Model (SAM) and Contrastive Language-Image Pretraining (CLIP). Meanwhile, we adopt Bezier curves to enhance the mask and improve the segmentation accuracy. Second, we apply differentiated semantic compression and transmission approaches to the segmented content. Third, we fuse the different pieces of semantic information based on a conditional diffusion model to generate high-quality images that satisfy the users' specific task requirements. Finally, the experimental results show that the proposed system compresses the semantic information effectively and improves the robustness of semantic communication.
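Smoothing a mask boundary with a Bezier curve reduces to evaluating the curve from its control points, e.g. via de Casteljau's algorithm. The control points below are illustrative; how the paper fits them to the mask boundary is not reproduced.

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm (repeated linear interpolation).
    Illustrative sketch of the mask-smoothing building block."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        # interpolate between consecutive points, reducing the count by one
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])
```

Sampling `t` densely over [0, 1] traces a smooth boundary through the control polygon, replacing the jagged pixel-level mask edge.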
Funding: funded in part by the Key Project of Natural Science Research for Universities of Anhui Province of China (No. 2022AH051720), in part by the Science and Technology Development Fund, Macao SAR (Grant Nos. 0093/2022/A2, 0076/2022/A2, and 0008/2022/AGJ), and in part by the China University Industry-University-Research Collaborative Innovation Fund (No. 2021FNA04017).
Abstract: This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through a collection of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds, but there has been a lack of focus on making the most of the numerous existing augmentation techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each instance or sample: each point cloud in the data set is augmented with either PolarMix or Mix3D, chosen at random. The results of the experiments conducted validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
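The per-sample random choice at the heart of RandomFusion can be sketched directly. The two augmentation functions below are crude placeholders for the real PolarMix and Mix3D operations, which act on point coordinates and labels.

```python
import random

def polar_mix(cloud_a, cloud_b):
    # placeholder: swap half the points between scenes (stand-in for PolarMix)
    return cloud_a[: len(cloud_a) // 2] + cloud_b[len(cloud_b) // 2 :]

def mix3d(cloud_a, cloud_b):
    # placeholder: concatenate the two scenes (stand-in for Mix3D)
    return cloud_a + cloud_b

def random_fusion(cloud_a, cloud_b, rng=random):
    """RandomFusion idea: for each training sample, pick one augmentation
    at random from the available pool and apply it."""
    aug = rng.choice([polar_mix, mix3d])
    return aug(cloud_a, cloud_b)
```

Because the choice is made per sample, the model sees both augmentation styles over the course of training without any extra scheduling logic.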
Funding: The authors are highly thankful to the National Social Science Foundation of China (20BXW101, 18XXW015), the Innovation Research Project for the Cultivation of High-Level Scientific and Technological Talents (Top-Notch Talents of the Discipline) (ZZKY2022303), the National Natural Science Foundation of China (Nos. 62102451, 62202496), and the Basic Frontier Innovation Project of Engineering University of People's Armed Police (WJX202316). This work is also supported by the National Natural Science Foundation of China (No. 62172436), Engineering University of PAP's Funding for Scientific Research Innovation Team, Engineering University of PAP's Funding for Basic Scientific Research, Engineering University of PAP's Funding for Education and Teaching, and the Natural Science Foundation of Shaanxi Province (No. 2023-JCYB-584).
Abstract: With the rapid spread of information on the Internet and the proliferation of fake news, fake news detection has become increasingly important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlation between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. The model then uses domain features as an enhancement feature, together with an attention mechanism, to fully capture more fine-grained emotional features associated with each domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the model is advanced and feasible.
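The final MLP-plus-Softmax classification stage can be sketched as follows. The weights in the example are illustrative placeholders, not the trained detector, and the tiny shapes are chosen only for readability.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def mlp_classify(fused, w_hidden, w_out):
    """Tiny MLP head: one ReLU hidden layer followed by a softmax
    output over the two classes (fake / real). Illustrative sketch of
    the classification stage, not the paper's trained model."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, fused))) for row in w_hidden]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    return softmax(logits)
```

The fused feature vector produced by the attention-based fusion would be fed in as `fused`, and the softmax output gives the class probabilities.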