Funding: Supported in part by the National Key R&D Project of China under Grant 2020YFA0712300, by the National Natural Science Foundation of China under Grants NSFC-62231022 and 12031011, and in part by the NSF of China under Grant 62125108.
Abstract: We consider an image semantic communication system over a time-varying fading Gaussian MIMO channel with a finite number of channel states. A deep learning-aided broadcast approach is proposed to enable adaptive semantic transmission across the different channel states. We combine the classic broadcast approach with an image transformer to implement this adaptive joint source and channel coding (JSCC) scheme. Specifically, we use neural networks (NNs) to jointly optimize the hierarchical image compression and the superposition code mapping within this scheme. The learned transformers and codebooks allow the receiver to recover the image with adaptive quality and a low error rate in each channel state. Simulation results show that the proposed scheme dynamically adapts its coding to the current channel state and outperforms existing intelligent schemes that use a fixed coding block.
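As a rough illustration of the superposition-coding idea behind the broadcast approach, the Python sketch below layers a coarse and a refinement codeword with a power split and checks which layer survives at two SNRs. The power split `alpha`, the unit-power Gaussian codewords, and the correlation-based check are illustrative assumptions, not the learned codebooks or channel model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def superpose(base, refine, alpha=0.8):
    """Power-weighted superposition of two unit-power code layers."""
    return np.sqrt(alpha) * base + np.sqrt(1.0 - alpha) * refine

# Toy unit-power "codewords" standing in for learned codebook outputs.
base = rng.standard_normal(1024)
refine = rng.standard_normal(1024)
x = superpose(base, refine)

# Two channel states with different SNRs: the weaker receiver can only rely on
# the base layer, the stronger one can also peel off the refinement layer.
for snr_db in (0.0, 15.0):
    noise_std = 10 ** (-snr_db / 20)
    y = x + noise_std * rng.standard_normal(x.shape)
    # Correlation with the base layer as a crude proxy for decodability.
    rho = np.corrcoef(y, base)[0, 1]
    print(f"SNR {snr_db:4.1f} dB: correlation with base layer = {rho:.2f}")
```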
Funding: The Advanced University Action Plan of the Ministry of Education of China (2004XD-03).
Abstract: An ontology and metadata scheme for online learning resource repository management is constructed. First, based on an analysis of the use-case diagram, the upper ontology, comprising a resource library ontology and a user ontology, is presented and evaluated in terms of its function and implementation; the corresponding class diagram, resource description framework (RDF) schema, and extensible markup language (XML) schema are then given. Second, metadata for online learning resource repository management is proposed based on the Dublin Core Metadata Initiative and the IEEE Learning Technologies Standards Committee Learning Object Metadata Working Group. Finally, an inference example is shown, which demonstrates the validity of the ontology and metadata for online learning resource repository management.
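For a concrete picture of the metadata layer, here is a Dublin Core-style record for one learning resource written as a plain Python mapping; the field values, and the use of a dictionary instead of the paper's RDF and XML schemas, are illustrative assumptions.

```python
# A Dublin Core-style metadata record for one learning resource; the specific
# values and the subset of elements shown are illustrative only.
learning_resource = {
    "dc:title": "Introduction to Ontologies",
    "dc:creator": "Course Team",
    "dc:subject": "knowledge representation",
    "dc:description": "Lecture slides introducing RDF and ontology basics.",
    "dc:format": "application/pdf",
    "dc:language": "en",
    "dc:identifier": "repo:resource/0001",
}

# A repository index could simply map identifiers to such records.
repository = {learning_resource["dc:identifier"]: learning_resource}
print(sorted(repository["repo:resource/0001"].keys()))
```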
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 61971366 and 61771474, in part by the Fundamental Research Funds for the Central Universities under Grant 20720200077, in part by the Major Science and Technology Innovation Projects of Shandong Province under Grant 2019JZZY020505 and the Key R&D Projects of Xuzhou City under Grant KC18171, and in part by NSF Grants EARS-1839818, CNS-1717454, CNS-1731424, and CNS-1702850.
Abstract: Location-based services (LBS) in vehicular ad hoc networks (VANETs) must protect users' privacy and address the threat of exposing sensitive locations during LBS requests. Users release not only geographical but also semantic information about the places they visit (e.g., a hospital), and this sensitive information enables an inference attacker to exploit users' preferences and life patterns. In this paper, we propose a reinforcement learning (RL)-based sensitive semantic location privacy protection scheme. The scheme uses the idea of differential privacy to randomize the released vehicle locations and adaptively selects the perturbation policy based on the sensitivity of the semantic location and the attack history. It enables a vehicle to optimize the perturbation policy in terms of privacy and quality-of-service (QoS) loss without knowing the current inference attack model in a dynamic privacy protection process. To solve the location protection problem with high-dimensional, continuous-valued perturbation policy variables, a deep deterministic policy gradient-based semantic location perturbation scheme (DSLP) is developed. The actor generates a continuous privacy budget and perturbation angle, and the critic estimates the performance of the policy. Simulations demonstrate that the DSLP-based scheme outperforms the benchmark schemes: it increases privacy, reduces the QoS loss, and increases the utility of the vehicle.
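To make the perturbation step concrete, the sketch below applies a geo-indistinguishability-style planar mechanism given a privacy budget and an angle, the two quantities the actor is said to output. The Gamma-distributed radius, the random angles, and the km scale are assumptions for illustration, not the paper's exact mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_location(x, y, epsilon, angle):
    """Planar perturbation: radius drawn from the Gamma(2, 1/epsilon)
    distribution used by the planar Laplace mechanism, direction taken
    from the actor's angle output."""
    radius = rng.gamma(shape=2.0, scale=1.0 / epsilon)
    return x + radius * np.cos(angle), y + radius * np.sin(angle)

# Pretend these budgets came from the actor network: a smaller budget for a
# highly sensitive place yields a larger expected perturbation.
for name, eps in (("hospital (sensitive)", 0.1), ("mall (less sensitive)", 1.0)):
    px, py = perturb_location(0.0, 0.0, epsilon=eps, angle=rng.uniform(0, 2 * np.pi))
    print(f"{name:>22}: released offset = ({px:+.2f}, {py:+.2f}) km")
```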
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 62172308, U1626107, 61972297, 62172144, and 62062019).
Abstract: PowerShell has been widely used in fileless malware and advanced persistent threat (APT) attacks due to its high stealthiness and live-off-the-land technique. However, existing works mainly focus on deobfuscation and malicious-script detection, lacking classification of malicious PowerShell families and behavior analysis. Moreover, state-of-the-art methods fail to capture fine-grained features and semantic relationships, resulting in low robustness and accuracy. To this end, we propose PowerDetector, a novel malicious PowerShell script detector based on multi-modal semantic fusion and deep learning. Specifically, we design four feature extraction methods to extract key features from characters, tokens, the abstract syntax tree (AST), and a semantic knowledge graph. We then design four embeddings (i.e., Char2Vec, Token2Vec, AST2Vec, and Rela2Vec) and construct a multi-modal fusion algorithm to concatenate the feature vectors from these different views. Finally, we propose a combined model based on a transformer and a CNN-BiLSTM to implement PowerShell family detection. Our experiments with five types of PowerShell attacks show that PowerDetector can accurately detect various obfuscated and stealthy PowerShell scripts, with a precision of 0.9402, a recall of 0.9358, and an F1-score of 0.9374. Furthermore, through single-modal and multi-modal comparison experiments, we demonstrate that PowerDetector's multi-modal embedding and deep learning model achieve better accuracy and can even identify more unknown attacks.
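A minimal sketch of concatenation-based multi-modal fusion is shown below in PyTorch; the embedding sizes, the projection head, and the five-way output are hypothetical placeholders, and the transformer and CNN-BiLSTM components are omitted.

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Concatenate per-view embeddings and project to a shared space.

    The four views mirror the Char2Vec / Token2Vec / AST2Vec / Rela2Vec split
    described above; the dimensions are assumptions.
    """

    def __init__(self, dims=(64, 128, 128, 64), fused_dim=256, num_classes=5):
        super().__init__()
        self.project = nn.Sequential(
            nn.Linear(sum(dims), fused_dim), nn.ReLU(), nn.Dropout(0.1)
        )
        self.classifier = nn.Linear(fused_dim, num_classes)

    def forward(self, char_v, token_v, ast_v, rela_v):
        fused = torch.cat([char_v, token_v, ast_v, rela_v], dim=-1)
        return self.classifier(self.project(fused))

# One fake script represented by four modality vectors.
model = ConcatFusion()
views = [torch.randn(1, d) for d in (64, 128, 128, 64)]
print(model(*views).shape)  # torch.Size([1, 5]) -> family logits
```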
Funding: Supported by the Simons Foundation, the National Natural Science Foundation of China (No. NSFC61405038), and the Fujian provincial fund (No. 2020J01453).
Abstract: Neurons can be abstractly represented as skeletons due to the filamentous nature of neurites. With the rapid development of imaging and image analysis techniques, an increasing amount of neuron skeleton data is being produced. In some scientific studies, it is necessary to dissect the axons and dendrites, which is typically done manually and is both tedious and time-consuming. To automate this process, we have developed a method that relies solely on neuronal skeletons, using Geometric Deep Learning (GDL). We demonstrate the effectiveness of this method on pyramidal neurons in mammalian brains, and the results are promising for its application in neuroscience studies.
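For readers unfamiliar with skeleton data, the sketch below turns a tiny SWC-style skeleton (nodes with coordinates, radius, and a parent pointer) into an edge list and a node-feature matrix of the kind a geometric deep learning model could consume; the SWC-like layout and the chosen features are assumptions, not the paper's pipeline.

```python
import numpy as np

# A tiny SWC-like skeleton: (node_id, x, y, z, radius, parent_id); parent -1 = soma.
skeleton = np.array([
    [1,  0.0,  0.0, 0.0, 5.0, -1],   # soma
    [2,  0.0, 10.0, 0.0, 1.0,  1],
    [3,  0.0, 20.0, 0.0, 0.8,  2],
    [4,  8.0, 25.0, 0.0, 0.6,  3],   # branch
    [5, -8.0, 25.0, 0.0, 0.6,  3],
])

ids = skeleton[:, 0].astype(int)
index = {nid: i for i, nid in enumerate(ids)}
edges = [(index[int(p)], index[int(n)])
         for n, p in zip(ids, skeleton[:, 5].astype(int)) if p != -1]

# Node features a graph network could consume: coordinates, radius, degree.
degree = np.zeros(len(ids))
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
node_features = np.hstack([skeleton[:, 1:5], degree[:, None]])

print("edge index:", edges)
print("node feature matrix shape:", node_features.shape)  # (5, 5)
```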
Abstract: Text classification is an essential task for many applications in the Natural Language Processing domain. It can be applied in many fields, such as Information Retrieval, Knowledge Extraction, and Knowledge Modeling. Despite the importance of this task, Arabic text classification tools still suffer from many problems and remain incapable of handling the increasing volume of Arabic content that circulates on the web or resides in large databases. This paper introduces a novel machine learning-based approach that exclusively uses hybrid (stylistic and semantic) features. First, we clean the Arabic documents and translate them into English using translation tools. The semantic features are then automatically extracted from the translated documents using an existing database of English topics. In addition, the model automatically extracts from the textual content a set of stylistic features such as word and character frequencies and punctuation. We therefore obtain three types of features: semantic, stylistic, and hybrid. Using each type of feature in turn, we performed an in-depth comparison of nine well-known machine learning models on a standard Arabic corpus to evaluate our approach. The results show that the neural network outperforms the other models and performs well with hybrid features (F1-score = 0.88).
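The stylistic features mentioned above (word and character frequencies, punctuation) can be illustrated with a few lines of Python; the exact feature set is an assumption, since the paper's full list is not reproduced here.

```python
import re
from collections import Counter

def stylistic_features(text: str) -> dict:
    """Simple stylistic descriptors: word/character counts and punctuation ratio."""
    words = re.findall(r"\w+", text.lower())
    chars = [c for c in text if not c.isspace()]
    punct = [c for c in text if c in ".,;:!?()-"]
    return {
        "num_words": len(words),
        "num_chars": len(chars),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "punct_ratio": len(punct) / max(len(chars), 1),
        "top_words": Counter(words).most_common(3),
    }

print(stylistic_features("Text classification is essential; it is applied in many fields."))
```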
Funding: Supported by the National Program on Key Basic Research Projects (No. 2013CB329502), the National Natural Science Foundation of China (No. 61202212), the Special Research Project of the Educational Department of Shaanxi Province of China (No. 15JK1038), and the Key Research Project of Baoji University of Arts and Sciences (No. ZK16047).
Abstract: In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Moreover, since image features with different magnitudes lead to different annotation performance, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model significantly improves the performance of traditional PLSA for automatic image annotation.
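A minimal sketch of the Gaussian normalization step, read here as per-dimension zero-mean, unit-variance scaling; the paper's exact variant (e.g., clipping via a 3-sigma rule) may differ.

```python
import numpy as np

def gaussian_normalize(features: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance normalization per feature dimension."""
    mean = features.mean(axis=0, keepdims=True)
    std = features.std(axis=0, keepdims=True) + 1e-8
    return (features - mean) / std

# Region features with very different magnitudes (e.g., color vs. texture).
rng = np.random.default_rng(0)
raw = np.hstack([rng.normal(100.0, 30.0, (6, 2)),    # large-scale features
                 rng.normal(0.01, 0.003, (6, 2))])   # small-scale features
norm = gaussian_normalize(raw)
print(norm.mean(axis=0).round(3), norm.std(axis=0).round(3))
```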
Abstract: Because of everyone's involvement in social networks, social networks are full of massive multimedia data, and events are released and disseminated through them in the form of multi-modal and multi-attribute heterogeneous data. There has been extensive research on social network search. Considering the spatio-temporal features of messages and the social relationships among users, we summarize an overall social network search framework from the perspective of semantics based on existing research. For social network search, the acquisition and representation of spatio-temporal data is the basis, the semantic analysis and modeling of social network cross-media big data is an important component, deep semantic learning of social networks is the key research field, and the indexing and ranking mechanism is an indispensable part. This paper reviews the current studies in these fields and then presents the main challenges of social network search. Finally, we give an outlook on the prospects and future work of social network search.
Abstract: Semantic communication, as a critical component of artificial intelligence (AI), has gained increasing attention in recent years due to its significant impact on various fields. In this paper, we focus on the applications of semantic feature extraction, a key step in semantic communication, in several areas of artificial intelligence, including natural language processing, medical imaging, remote sensing, autonomous driving, and other image-related applications. Specifically, we discuss how semantic feature extraction can enhance the accuracy and efficiency of natural language processing tasks, such as text classification, sentiment analysis, and topic modeling. In the medical imaging field, we explore how semantic feature extraction can be used for disease diagnosis, drug development, and treatment planning. In addition, we investigate the applications of semantic feature extraction in remote sensing and autonomous driving, where it can facilitate object detection, scene understanding, and other tasks. By providing an overview of the applications of semantic feature extraction in various fields, this paper aims to offer insights into the potential of this technology to advance the development of artificial intelligence.
Funding: Supported by the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012), the National Natural Science Foundation of China (Grant Nos. 62162024 and 62162022), Key Projects in Hainan Province (Grants ZDYF2021GXJS003 and ZDYF2020040), and the Graduate Innovation Project (Grant No. Qhys2021-187).
Abstract: Image semantic segmentation is an important branch of computer vision with a wide variety of practical applications such as medical image analysis, autonomous driving, and virtual or augmented reality. In recent years, owing to the remarkable performance of transformers and multilayer perceptrons (MLPs) in computer vision, comparable to that of convolutional neural networks (CNNs), a substantial amount of image semantic segmentation work has aimed at developing different types of deep learning architecture. This survey provides a comprehensive overview of deep learning methods in the field of general image semantic segmentation. First, the commonly used image segmentation datasets are listed. Next, extensive pioneering works are studied in depth from multiple perspectives (e.g., network structures, feature fusion methods, attention mechanisms) and are divided into four categories according to their network architectures: CNN-based, transformer-based, MLP-based, and others. Furthermore, this paper presents common evaluation metrics and compares the respective advantages and limitations of popular techniques, both in terms of architectural design and their experimental value on the most widely used datasets. Finally, possible future research directions and challenges are discussed for the reference of other researchers.
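As a pointer to the evaluation metrics such surveys compare, the snippet below computes per-class IoU and mean IoU from a confusion matrix; this is the generic textbook formulation rather than any single surveyed paper's code.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union from flattened label maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        conf[t, p] += 1
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    return float(iou[union > 0].mean())

pred = np.array([[0, 0, 1], [1, 2, 2]])
target = np.array([[0, 1, 1], [1, 2, 2]])
print(round(mean_iou(pred, target, num_classes=3), 3))  # 0.722
```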
Funding: This work was supported by a National Research Foundation of Korea grant funded by the Korean Government (Ministry of Science and ICT) (NRF-2020R1A2B5B02002478). There was no additional external funding received for this study.
Abstract: Early detection of the Covid-19 disease is essential due to its high rate of infection, affecting tens of millions of people, and its high mortality rate of about 7%. For that purpose, a model with several stages was developed. The first stage optimizes the images using dynamic adaptive histogram equalization, performs semantic segmentation using DeepLabv3Plus, and then augments the data by flipping it horizontally, rotating it, and flipping it vertically. The second stage builds a custom convolutional neural network model using several ImageNet pre-trained models. Finally, the model compares the pre-trained data to the new output while repeatedly trimming the best-performing models to reduce complexity and improve memory efficiency. Several experiments were carried out with different techniques and parameters. The proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 in Covid-19 detection. This paper discusses how to train a customized intelligent convolutional neural network, using various parameters, on a set of chest X-rays with an accuracy of 99.6%.
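The flip-and-rotate augmentation stage can be sketched in a few lines of numpy; operating on a raw array and using a 90-degree rotation are simplifying assumptions.

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return the augmented variants described above: horizontal flip,
    rotation (here 90 degrees as a stand-in), and vertical flip."""
    return [
        np.fliplr(image),       # horizontal flip
        np.rot90(image, k=1),   # rotation
        np.flipud(image),       # vertical flip
    ]

chest_xray = np.arange(16).reshape(4, 4)  # toy stand-in for an X-ray image
for variant in augment(chest_xray):
    print(variant.shape)
```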
Abstract: Remarkable research progress has demonstrated the efficiency of Knowledge Graphs (KGs) in extracting valuable external knowledge in various domains. A Knowledge Graph (KG) can represent high-order relations that connect two objects through one or multiple related attributes. Emerging Graph Neural Networks (GNNs) can extract both object characteristics and relations from KGs. This paper presents how Machine Learning (ML) meets the Semantic Web and how KGs are related to neural networks and deep learning. The paper also highlights important aspects of this area of research, discussing open issues such as the bias hidden in KGs at different levels of graph representation.
Abstract: The goal of zero-shot recognition is to classify classes that were never seen during training, which requires building a bridge between seen and unseen classes through a semantic embedding space. Semantic embedding space learning therefore plays an important role in zero-shot recognition. In existing works, the semantic embedding space is mainly constructed from user-defined attribute vectors. However, the discriminative information contained in user-defined attribute vectors is limited. In this paper, we propose to automatically learn an extra latent attribute space to produce a more generalized and discriminative semantic embedding space. To prevent the bias problem, both the user-defined attribute vectors and the latent attribute space are optimized by adversarial learning with auto-encoders. We also propose to reconstruct semantic patterns produced by explanatory graphs, which makes the semantic embedding space more sensitive to useful semantic information and less sensitive to useless information. The proposed method is evaluated on the AwA2 and CUB datasets, and the results show that it achieves superior performance.
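For context, attribute-based zero-shot prediction can be sketched as projecting an image feature into attribute space and picking the nearest class prototype; the random projection and prototypes below are placeholders, and the latent attribute space and adversarial training described above are not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Attribute prototypes for three unseen classes (rows), e.g. user-defined attributes.
class_attributes = rng.random((3, 6))

# A learned projection from image-feature space (dim 10) to attribute space (dim 6).
W = rng.standard_normal((10, 6))

def predict(image_feature: np.ndarray) -> int:
    """Project the image into attribute space and pick the closest class prototype."""
    z = image_feature @ W
    sims = (class_attributes @ z) / (
        np.linalg.norm(class_attributes, axis=1) * np.linalg.norm(z) + 1e-8
    )
    return int(np.argmax(sims))

print(predict(rng.standard_normal(10)))  # index of the predicted unseen class
```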
Funding: This research was supported by BB21 plus, funded by Busan Metropolitan City and the Busan Institute for Talent and Lifelong Education (BIT), and by a grant from the Tongmyong University Innovated University Research Park (I-URP), funded by Busan Metropolitan City, Republic of Korea.
Abstract: The process of segmenting point cloud data into several homogeneous regions, where points in the same region share the same attributes, is known as 3D segmentation. Segmentation is challenging with point cloud data due to substantial redundancy, fluctuating sample density, and a lack of apparent organization. The research area has a wide range of robotics applications, including intelligent vehicles, autonomous mapping, and navigation, and a number of researchers have introduced various methodologies and algorithms. Deep learning has been successfully applied to a spectrum of 2D vision domains as a prevailing AI method; however, due to the specific difficulties of processing point clouds with deep neural networks, deep learning on point clouds is still in its initial stages. This study examines the many strategies that have been proposed for 3D instance and semantic segmentation and gives a complete assessment of current developments in deep learning-based 3D segmentation. The benefits, drawbacks, and design mechanisms of these approaches are studied and addressed. The study also evaluates the competitiveness of various segmentation algorithms on publicly accessible datasets, along with the most frequently used pipelines, their advantages and limitations, insightful findings, and intriguing future research directions.
Funding: This work is partially supported by the Vice President for Research and Partnerships of the University of Oklahoma, the Data Institute for Societal Challenges, and the Stephenson Cancer Center through a DISC/SCC Seed Grant Award.
Abstract: This research addresses the challenges of training large semantic segmentation models for image analysis, focusing on expediting the annotation process and mitigating imbalanced datasets. In the context of imbalanced datasets, biases related to age and gender in clinical settings and skewed representation in natural images can affect model performance; strategies to mitigate these biases are explored to enhance the efficiency and accuracy of semantic segmentation analysis. An in-depth exploration of various reinforced active learning methodologies for image segmentation is conducted, optimizing precision and efficiency across diverse domains. The proposed framework integrates Dueling Deep Q-Networks (DQN), Prioritized Experience Replay, Noisy Networks, and Emphasizing Recent Experience. Extensive experimentation and evaluation on diverse datasets reveal both the improvements and the limitations associated with the various approaches in terms of overall accuracy and efficiency. This research contributes to the expansion of reinforced active learning methodologies for image segmentation, paving the way for more sophisticated and precise segmentation algorithms across diverse domains. The findings emphasize the need for a careful balance between exploration and exploitation strategies in reinforcement learning for effective image segmentation.
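As a reference for the dueling architecture named above, here is a minimal dueling Q-network head in PyTorch; the state dimension, layer sizes, and action count are assumptions, and the prioritized replay, noisy layers, and segmentation environment are omitted.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim=32, num_actions=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                 # state value V(s)
        self.advantage = nn.Linear(hidden, num_actions)   # advantages A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

q = DuelingQNet()
print(q(torch.randn(2, 32)).shape)  # torch.Size([2, 4]) -> Q-values per action
```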
Funding: Supported in part by collaborative research with Toyota Motor Corporation, in part by ROIS NII Open Collaborative Research under Grant 21S0601, and in part by JSPS KAKENHI under Grants 20H00592 and 21H03424.
Abstract: With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication perspective, not all pixels in an image are equally important to a given receiver. Existing semantic communication systems perform semantic encoding and decoding directly on the whole image, so the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm classifies each pixel of the image and separates ROI from RONI. The system also enables high-quality transmission of the ROI with lower communication overhead by transmitting the regions through different semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the proposed semantic transmission network. Experimental results show that our system achieves a significant performance improvement over existing approaches, namely existing semantic communication approaches and the conventional approach without semantics.
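To make the ROI/RONI split concrete, the sketch below separates an image into the two regions with a binary segmentation mask, the step that precedes routing each region through a differently provisioned network; the mask here is synthetic rather than produced by a segmentation model, and θPSNR is not computed since its exact definition is not given above.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

# Synthetic per-pixel mask standing in for a semantic segmentation output:
# 1 = region of interest (ROI), 0 = region of non-interest (RONI).
mask = np.zeros_like(image)
mask[2:6, 2:6] = 1

roi = np.where(mask == 1, image, 0)    # sent over the high-bandwidth path
roni = np.where(mask == 0, image, 0)   # sent over the low-bandwidth path

print("ROI pixels:", int(mask.sum()), "RONI pixels:", int((1 - mask).sum()))
assert np.array_equal(roi + roni, image)  # the two regions tile the image
```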
Funding: Supported by the Beijing Natural Science Foundation (L211012), the Natural Science Foundation of China (62122012, 62221001), and the Fundamental Research Funds for the Central Universities (2022JBQY004).
Abstract: The concept of semantic communication provides a novel approach for applications in scenarios with limited communication resources. In this paper, we propose an end-to-end (E2E) semantic molecular communication system, aiming to enhance the efficiency of molecular communication systems by reducing the amount of transmitted information. Specifically, following the joint source-channel coding paradigm, the network is designed to encode the task-relevant information into the concentration of the information molecules, which is robust to the degradation of the molecular communication channel. Furthermore, we propose a channel network to enable E2E learning over the non-differentiable molecular channel. Experimental results demonstrate the superior performance of the semantic molecular communication system over conventional methods on classification tasks.
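As background on why the molecular channel is non-differentiable, here is a toy diffusion channel that maps a transmitted concentration to a Poisson count of received molecules; the hit probability and Poisson reception model are illustrative assumptions, not the paper's channel model.

```python
import numpy as np

rng = np.random.default_rng(0)

def molecular_channel(concentration: float, hit_prob: float = 0.2) -> int:
    """Toy diffusion channel: released molecules reach the receiver
    independently, so the received count is Poisson distributed.
    Sampling makes the mapping non-differentiable, which is why a learned
    surrogate 'channel network' is needed for end-to-end training."""
    return int(rng.poisson(concentration * hit_prob))

# Encode two semantic classes as two concentration levels and observe the
# received molecule counts.
for label, concentration in (("class A", 50.0), ("class B", 200.0)):
    counts = [molecular_channel(concentration) for _ in range(5)]
    print(label, "->", counts)
```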