With the rapid spread of information on the Internet and the proliferation of fake news, fake news detection has become increasingly important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but such methods have limitations when dealing with news from specific domains. To address the weak feature correlation between data from different domains, a model that detects fake news by integrating domain-specific emotional and semantic features is proposed. The method makes full use of the attention mechanism to capture the correlations between different features and effectively improves feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture its contextual relevance. Senta-BiLSTM is then used to extract emotional features and predict the probabilities of positive and negative sentiment in the text. Domain features are then used as enhancement features, and the attention mechanism captures the finer-grained emotional features associated with each domain. Finally, the fused features, combined with a multi-task representation of the information, are fed to the fake news detection classifier, which uses MLP and Softmax functions for classification. Experimental results show that on the Chinese dataset Weibo21 the model achieves an F1 score of 0.958, 4.9% higher than the second-best model, and on the English dataset FakeNewsNet it achieves an F1 score of 0.845, 1.8% higher than the second-best model, demonstrating that the approach is both advanced and feasible.
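The abstract names the components but not the fusion details; the following is a minimal PyTorch sketch of attention-weighted fusion of semantic, emotional, and domain features. All layer names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Attention-weighted fusion of semantic, emotional, and domain features (sketch)."""
    def __init__(self, sem_dim=256, emo_dim=2, dom_dim=32, hidden=128, num_classes=2):
        super().__init__()
        # Project each feature type into a shared space before attention.
        self.proj = nn.ModuleDict({
            "sem": nn.Linear(sem_dim, hidden),
            "emo": nn.Linear(emo_dim, hidden),
            "dom": nn.Linear(dom_dim, hidden),
        })
        self.attn = nn.Linear(hidden, 1)       # scores one weight per feature type
        self.mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),    # Softmax is applied inside the loss
        )

    def forward(self, sem, emo, dom):
        # Stack projected features: (batch, 3, hidden)
        feats = torch.stack([self.proj["sem"](sem),
                             self.proj["emo"](emo),
                             self.proj["dom"](dom)], dim=1)
        weights = torch.softmax(self.attn(torch.tanh(feats)), dim=1)
        fused = (weights * feats).sum(dim=1)   # attention-weighted fusion
        return self.mlp(fused)
```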
Text classification is an essential task for many applications in the Natural Language Processing domain. It can be applied in many fields, such as Information Retrieval, Knowledge Extraction, and Knowledge Modeling. Despite the importance of this task, Arabic text classification tools still suffer from many problems and remain incapable of handling the increasing volume of Arabic content that circulates on the web or resides in large databases. This paper introduces a novel machine learning-based approach that exclusively uses hybrid (stylistic and semantic) features. First, we clean the Arabic documents and translate them into English using translation tools. The semantic features are then automatically extracted from the translated documents using an existing database of English topics. In addition, the model automatically extracts a set of stylistic features from the textual content, such as word and character frequencies and punctuation. We thus obtain three types of features: semantic, stylistic, and hybrid. Using each type of feature in turn, we performed an in-depth comparison of nine well-known machine learning models on a standard Arabic corpus to evaluate our approach. The results show that the neural network outperforms the other models and performs well with hybrid features (F1-score = 0.88).
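The semantic side depends on the authors' English topic database and is not reproducible from the abstract; the stylistic side, however, can be illustrated. Below is a minimal sketch of word/character frequency and punctuation features; the exact feature set and field names are assumptions.

```python
import re
from collections import Counter

def stylistic_features(text: str) -> dict:
    """Simple stylistic descriptors: word/character counts and punctuation density."""
    words = re.findall(r"\w+", text, flags=re.UNICODE)
    n_chars = len(text)
    n_words = len(words) or 1
    # Latin punctuation plus Arabic comma, semicolon, and question mark.
    punct = Counter(c for c in text if c in ".,;:!?\u060C\u061B\u061F")
    return {
        "n_words": n_words,
        "n_chars": n_chars,
        "avg_word_len": sum(len(w) for w in words) / n_words,
        "punct_ratio": sum(punct.values()) / max(n_chars, 1),
    }
```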
Current Chinese event detection methods commonly use word embeddings to capture semantic representations, but they find it difficult to capture the dependency relationships between trigger words and the other words in the same sentence. A simple evaluation shows that a dependency parser can effectively capture dependency relationships and improve the accuracy of event categorisation. This study proposes a novel architecture that models a hybrid representation to summarise semantic and structural information from both characters and words. The model can capture rich semantic features for the event detection task by incorporating the semantic representation generated by the dependency parser. The authors evaluate different models on the KBP 2017 corpus. The experimental results show that the proposed method significantly improves performance in Chinese event detection.
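As an illustration of the kind of dependency information a parser contributes, here is a small sketch that collects dependency triples touching a candidate trigger word. It assumes an installed spaCy Chinese pipeline (zh_core_web_sm) and is not the authors' parser or feature set.

```python
import spacy

# Assumption: the zh_core_web_sm pipeline is installed (pip install + spacy download).
nlp = spacy.load("zh_core_web_sm")

def dependency_features(sentence: str, trigger: str):
    """Collect (head, relation, child) triples that involve a candidate trigger word."""
    doc = nlp(sentence)
    return [(tok.head.text, tok.dep_, tok.text)
            for tok in doc
            if tok.text == trigger or tok.head.text == trigger]
```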
The article describes the semantic features of kinship terminology in the modern Chinese language. To make the analysis more complete, it compares the semantics of kinship terminology in the Kazakh, Russian, English and Chinese languages, which belong to different language groups.
“Obtaining” verbs depict a person taking temporary possession of an object. They signal an event in which one thing is transferred from its original owner to a potential possessor. Based on theories of Cognitive Semantics, this paper probes into the semantic features of English “obtaining” verbs and the different profiles and background frames entailed by different words, hoping to shed light on further study of the syntactic behaviour of this category of verbs.
With the rapid development of mobile communication and the Internet, previous web anomaly detection and identification models were built on security experts' empirical knowledge and hand-crafted attack features. Although this approach can achieve high detection performance, it requires enormous human labor and resources to maintain the feature library. In contrast, semantic feature engineering can dynamically discover new semantic features and optimize feature selection by automatically analyzing the semantic information contained in the data itself, thus reducing the dependence on prior knowledge. However, current semantic features still suffer from singularity of semantic expression, as they are extracted from a single semantic mode such as word segmentation, character segmentation, or arbitrary semantic feature extraction. This paper extracts features of web requests at dual semantic granularity and proposes a semantic feature fusion method to solve these problems. The method first preprocesses web requests, then extracts word-level and character-level semantic features of URLs via convolutional neural networks (CNNs), and constructs three loss functions to reduce the losses between features, labels, and categories. Experiments on the HTTP CSIC 2010, Malicious URLs, and HttpParams datasets verify the proposed method. The results show that, compared with machine learning and deep learning methods and the BERT model, the proposed method achieves better detection performance, reaching the best detection rate of 99.16% on the HttpParams dataset.
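The three loss functions are not specified in the abstract, but the dual-granularity idea itself is straightforward to sketch: one CNN branch over character IDs, one over word IDs, concatenated before classification. The PyTorch sketch below uses assumed vocabulary sizes and kernel widths.

```python
import torch
import torch.nn as nn

class DualGranularityCNN(nn.Module):
    """Fuses character-level and word-level CNN features of a URL (illustrative)."""
    def __init__(self, char_vocab=128, word_vocab=10000, emb=64, n_filters=100, n_classes=2):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, emb)
        self.word_emb = nn.Embedding(word_vocab, emb)
        # One 1-D convolution branch per semantic granularity.
        self.char_conv = nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
        self.word_conv = nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(2 * n_filters, n_classes)

    def branch(self, emb, conv, ids):
        x = emb(ids).transpose(1, 2)          # (batch, emb, seq_len)
        x = torch.relu(conv(x))
        return torch.max(x, dim=2).values     # global max pooling over the sequence

    def forward(self, char_ids, word_ids):
        fused = torch.cat([self.branch(self.char_emb, self.char_conv, char_ids),
                           self.branch(self.word_emb, self.word_conv, word_ids)], dim=1)
        return self.fc(fused)
```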
Semantic communication, as a critical component of artificial intelligence (AI), has gained increasing attention in recent years due to its significant impact on various fields. In this paper, we focus on the applications of semantic feature extraction, a key step in semantic communication, in several areas of artificial intelligence, including natural language processing, medical imaging, remote sensing, autonomous driving, and other image-related applications. Specifically, we discuss how semantic feature extraction can enhance the accuracy and efficiency of natural language processing tasks such as text classification, sentiment analysis, and topic modeling. In the medical imaging field, we explore how semantic feature extraction can be used for disease diagnosis, drug development, and treatment planning. In addition, we investigate the applications of semantic feature extraction in remote sensing and autonomous driving, where it can facilitate object detection, scene understanding, and other tasks. By providing an overview of the applications of semantic feature extraction in various fields, this paper aims to provide insights into the potential of this technology to advance the development of artificial intelligence.
As a branch of Business English, Exhibition English constitutes an integral part of English for Specific Purposes. This paper discusses the linguistic features of Exhibition English at the lexical, grammatical and semantic levels, which will help us analyze the textual patterns of exhibitions.
Aiming at the problem in the computer-aided design process of how to express design intent with high-level engineering terminology, a self-organized semantic feature evolution technology for the axiomatic design of mechanical products is proposed, so that the constraint relations between mechanical parts can be expressed in a semantic form that is more natural for designers. By describing evolution rules for semantic constraint information, the abstract expression of design semantics in the mechanical product evolution process is realized, and the constraint relations between parts are mapped from the semantic level to the geometric level. A semantic feature relation graph links together the abstract semantic description, the semantic relative structure, and the semantic constraint information, and the methods of semantic feature self-organized evolution are classified. Finally, a design example of a domestic high-speed elevator illustrates how to apply the theory to practical product development and verifies the validity of the method. According to the results, designers are able to represent design intent at a high semantic level in a more intuitive and natural way, and automation, recursion, and visualization of mechanical product axiomatic design are also realized.
Log anomaly detection is an important paradigm for system troubleshooting. Existing log anomaly detection based on Long Short-Term Memory (LSTM) networks is time-consuming when handling long sequences, so Transformer models have been introduced to improve efficiency. However, most existing Transformer-based log anomaly detection methods convert unstructured log messages into structured templates by log parsing, which introduces parsing errors; they extract only simple semantic features, ignoring other features; and they are generally supervised, relying on large amounts of labeled data. To overcome these limitations, this paper proposes a novel unsupervised log anomaly detection method based on multiple features (UMFLog). UMFLog includes two sub-models that consider two kinds of features: semantic features and statistical features. UMFLog uses the original log content with its detailed parameters instead of templates or template IDs, avoiding log parsing errors. In the first sub-model, UMFLog uses Bidirectional Encoder Representations from Transformers (BERT) instead of random initialization to extract effective semantic features, and an unsupervised hypersphere-based Transformer model to learn compact log sequence representations and obtain anomaly candidates. In the second sub-model, UMFLog exploits a statistical-feature-based Variational Autoencoder (VAE) over word occurrence counts to identify the final anomalies from the anomaly candidates. Extensive experiments and evaluations were conducted on three real public log datasets. The results show that UMFLog significantly improves F1-scores compared with state-of-the-art (SOTA) methods thanks to its multi-feature design.
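UMFLog's exact hypersphere objective is not given in the abstract; the sketch below shows a Deep-SVDD-style hypersphere loss, a common formulation of this idea, offered here as an assumption rather than the paper's definition. Normal log-sequence embeddings are pulled toward a fixed center; distance from the center then serves as the anomaly score.

```python
import torch

def hypersphere_loss(embeddings: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    """Deep-SVDD-style objective: minimize squared distance of normal samples to a center."""
    return torch.sum((embeddings - center) ** 2, dim=1).mean()

def anomaly_scores(embeddings: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    # Larger distance from the hypersphere center => more anomalous.
    return torch.sum((embeddings - center) ** 2, dim=1)
```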
With the continuous development and utilization of marine resources, underwater target detection has gradually become a popular research topic in the field of underwater robot operations. However, it is difficult for detection algorithms to combine environmental semantic information with the semantic information of targets at different scales in the complex underwater environment. In this paper, a cascade model based on the UGC-YOLO network structure with high detection accuracy is proposed. The YOLOv3 convolutional neural network is employed as the baseline. By fusing the global semantic information between two residual stages in the parallel structure of the feature extraction network, the perception of underwater targets is improved and the detection rate of hard-to-detect underwater objects is raised. Furthermore, deformable convolution is applied to capture long-range semantic dependencies, and PPM pooling is introduced in the highest network layer to aggregate semantic information. Finally, a multi-scale weighted fusion approach is presented for learning semantic information at different scales. Experiments conducted on an underwater test dataset demonstrate that the proposed algorithm can detect aquatic targets in complex degraded underwater images. Compared with the baseline network, the Common Objects in Context (COCO) evaluation metric is improved by 4.34%.
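The weighting scheme of the multi-scale fusion is not detailed in the abstract; one common realization is a set of learnable softmax-normalized weights over resized feature maps, sketched below under the assumption that all scales share a channel count.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedScaleFusion(nn.Module):
    """Learnable softmax weights over feature maps from different scales (sketch)."""
    def __init__(self, n_scales=3):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_scales))

    def forward(self, feats):
        # feats: list of (batch, C, H_i, W_i) maps with the same channel count C.
        size = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                 for f in feats]
        weights = torch.softmax(self.w, dim=0)
        return sum(w * f for w, f in zip(weights, feats))
```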
Human recognition technology based on biometrics has become a fundamental requirement in many aspects of life due to increased concerns about security and privacy. Biometric systems can identify or authenticate individuals based on their physiological and behavioral characteristics. Among the viable biometric modalities, the structure of the human ear offers unique and valuable discriminative characteristics for human recognition systems. In recent years, most traditional ear recognition systems have been designed using computer vision models and have achieved good results. Nevertheless, such models can be sensitive to unconstrained environmental factors, and some traits may be difficult to extract automatically yet can still be perceived semantically as soft biometrics. This research proposes a new group of semantic features to be used as soft ear biometrics, inspired by the descriptive traits humans naturally use when identifying or describing each other. The study focuses on fusing the soft ear biometric traits with traditional (hard) ear biometric features to investigate their validity and efficacy in augmenting human identification performance. The proposed framework has two subsystems: a computer vision-based subsystem that extracts traditional (hard) ear biometric traits using principal component analysis (PCA) and local binary patterns (LBP), and a crowdsourcing-based subsystem that derives semantic (soft) ear biometric traits. Several feature-level fusion experiments were conducted on the AMI database to evaluate the proposed algorithm's performance. The identification and verification results show that the proposed soft ear biometric information significantly improves the recognition performance of traditional ear biometrics, by up to 12% for LBP and 5% for PCA descriptors when all three feature sets (PCA, LBP, and soft traits) are fused using a k-nearest neighbors (KNN) classifier.
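Feature-level fusion here amounts to scaling each feature block and concatenating before classification. The scikit-learn sketch below is a minimal illustration under assumed inputs (the array names, `n_components`, and `k` are hypothetical, not the paper's settings).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical arrays: lbp_hist (LBP histograms), pixels (raw pixel vectors),
# soft (crowd-sourced semantic ratings), y (subject labels).
def fuse_and_classify(lbp_hist, pixels, soft, y, n_components=50, k=1):
    hard_pca = PCA(n_components=n_components).fit_transform(pixels)
    # Feature-level fusion: standardize each block, then concatenate.
    blocks = [StandardScaler().fit_transform(b) for b in (hard_pca, lbp_hist, soft)]
    fused = np.hstack(blocks)
    return KNeighborsClassifier(n_neighbors=k).fit(fused, y)
```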
Textual information makes up most Internet resources, which places ever higher requirements on the accuracy of text classification. In this manuscript, we first design a hybrid model, BERT_HAN_DCN (bidirectional encoder representations from transformers - hierarchical attention networks - dilated convolution networks), built on the BERT pre-trained model with its superior feature extraction ability. The advantages of the HAN and DCN models are combined to gain rich semantic information, fusing contextual semantic features with hierarchical characteristics. Second, the traditional softmax loss makes samples of the same class harder to learn and similar features harder to distinguish, so AM-Softmax is introduced to replace it. Finally, the fused model is validated: the hybrid model shows superior accuracy and F1-score on two datasets compared with single models such as HAN and DCN built on the BERT pre-trained model, and the AM-Softmax network model outperforms the standard softmax network model.
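AM-Softmax subtracts an additive margin m from the target-class cosine logit and rescales by s, which tightens intra-class features and widens inter-class gaps. A minimal PyTorch sketch (hyperparameters s and m are the commonly used defaults, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """Additive-margin softmax: use cos(theta_y) - m for the target logit, scaled by s.

    features: (batch, dim) embeddings; weights: (num_classes, dim) class weight matrix.
    """
    f = F.normalize(features, dim=1)
    w = F.normalize(weights, dim=1)
    cos = f @ w.t()                                    # cosine similarities (batch, classes)
    margin = torch.zeros_like(cos).scatter_(1, labels.unsqueeze(1), m)
    return F.cross_entropy(s * (cos - margin), labels)
```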
To resist the risk of the stego-image being maliciously altered during transmission, we propose a coverless image steganography method based on image segmentation. Most existing coverless steganography methods are based on whole-image feature mapping, which has poor robustness against geometric attacks because image content is easily lost. To solve this problem, we use ResNet to extract semantic features and segment object areas from the image with Mask R-CNN for information hiding. These selected object areas have structural integrity and are not located at the visual center of the image, reducing information loss under malicious attacks. These object areas are then binarized to generate hash sequences for information mapping. During transmission, only a set of stego-images unrelated to the secret information is transmitted, so the method can fundamentally resist steganalysis. At the same time, since both Mask R-CNN and ResNet are highly robust, pre-training the models through supervised learning achieves good performance, and the robust hash algorithm also resists attacks during transmission. Although image segmentation reduces capacity, multiple object areas can be extracted from one image to preserve capacity to a certain extent. Experimental results show that, compared with other coverless image steganography methods, our method is more robust against geometric attacks.
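The paper's robust hash is not specified in the abstract; as a hedged illustration of binarizing a segmented region into a hash sequence, the sketch below thresholds block means against the region's global mean (an average-hash-style scheme, assumed for illustration only).

```python
import numpy as np

def region_hash(region: np.ndarray, grid=(8, 8)) -> str:
    """Binarize an object region into a bit string by thresholding block means."""
    gray = region.mean(axis=2) if region.ndim == 3 else region
    h, w = gray.shape
    global_mean = gray.mean()
    bits = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = gray[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            bits.append("1" if block.mean() > global_mean else "0")
    return "".join(bits)
```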
Haze degrades the visual quality of images and makes advanced vision tasks difficult, so dehazing is an important preprocessing step. Traditional dehazing algorithms work by improving image brightness and contrast or by constructing artificial priors such as the color attenuation prior and the dark channel prior, but their effect is unstable in complex scenes. Among convolutional neural network methods, encoder-decoder dehazing networks do not consider the difference between the image before and after dehazing, and spatial information is lost in the encoding stage. To overcome these problems, this paper proposes a novel end-to-end two-stream convolutional neural network for single-image dehazing. The network model is composed of a spatial information feature stream, which retains the detailed information of the dehazed image, and a high-level semantic feature stream, which extracts its multi-scale structural features. A spatial information auxiliary module placed between the feature streams uses the attention mechanism to construct a unified expression of the different types of information, so that semantic information assists spatial information in gradually restoring the clear image. A parallel residual twicing module is also proposed, which performs dehazing on the difference information of features at different stages to improve the model's ability to discriminate haze images. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to quantitatively evaluate the similarity between each algorithm's dehazing results and the original images. The proposed method reaches an SSIM of 0.852 and a PSNR of 17.557 dB on the HazeRD dataset, higher than existing comparison algorithms, and 0.955 and 27.348 dB on the SOTS dataset, a sub-optimal result. In experiments on real haze images, the method also achieves excellent visual restoration, showing that the proposed model restores the desired haze-free visual effect and generalizes well to real haze scenes.
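Both reported metrics are standard and easy to compute; a minimal sketch (assuming 8-bit images and a recent scikit-image, whose structural_similarity takes a channel_axis argument):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(clean: np.ndarray, dehazed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between ground truth and a dehazed result, in dB."""
    mse = np.mean((clean.astype(np.float64) - dehazed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

# SSIM for color images (channel_axis=2 for HxWxC arrays):
# score = ssim(clean, dehazed, channel_axis=2)
```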
To improve the sharing and scheduling capability of English teaching resources, an improved algorithm for English text summarization based on association semantic rules is proposed. Relevant features are mined among phrases and sentences in English texts; semantic relevance analysis and feature extraction of keywords in English abstracts are performed; association-rule discrimination for English text summarization is derived from information theory; and semantic role information in English teaching texts is mined. Text similarity is taken as the maximum difference component between two semantic association rule vectors, and, combined with semantic similarity information, accurate extraction of English text abstracts is achieved. Simulation results show that the method extracts text summaries accurately, with good convergence and precision during extraction.
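The abstract does not spell out the rule computation; purely as a generic illustration of similarity-driven extractive summarization (an assumption, not the paper's algorithm), sentences can be scored by their average cosine similarity to the rest of the text:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_summary(sentences, k=3):
    """Score sentences by mean cosine similarity to all sentences; keep the top k."""
    tfidf = TfidfVectorizer().fit_transform(sentences)  # rows are L2-normalized
    sim = (tfidf @ tfidf.T).toarray()                   # cosine similarity matrix
    scores = sim.mean(axis=1)
    top = sorted(np.argsort(scores)[-k:])               # keep original sentence order
    return [sentences[i] for i in top]
```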
The performance of semi-supervised clustering on unlabeled data is often superior to that of unsupervised learning, which indicates that the semantic information attached to clusters can significantly improve feature representation capability. In a graph convolutional network (GCN), each node contains information about itself and its neighbors, which benefits both the common and the unique features among samples. Combining these findings, we propose a deep clustering method based on a GCN and semantic feature guidance (GFDC), in which a deep convolutional network serves as the feature generator and a GCN with a softmax layer performs the clustering assignment. First, the diversity and amount of input information are enhanced to generate highly useful representations for downstream tasks. Subsequently, a topological graph is constructed to express the spatial relationships among features. For a pair of datasets, feature correspondence constraints are used to regularize the clustering loss, and the clustering outputs are iteratively optimized. Three external evaluation indicators (clustering accuracy, normalized mutual information, and the adjusted Rand index) and one internal indicator (the Davies-Bouldin index, DBI) are employed to evaluate clustering performance. Experimental results on eight public datasets show that the GFDC algorithm outperforms the majority of competitive clustering methods; its clustering accuracy is 20% higher than that of the best competing method on the United States Postal Service dataset, and it also achieves the highest accuracy on the smaller Amazon and Caltech datasets. Moreover, the DBI reflects the dispersion between clusters and the compactness within clusters.
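The four indicators are standard; NMI, ARI, and DBI ship with scikit-learn, while clustering accuracy is conventionally computed via a Hungarian matching between predicted clusters and ground-truth labels, as sketched below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (adjusted_rand_score, davies_bouldin_score,
                             normalized_mutual_info_score)

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Best one-to-one mapping between predicted clusters and true labels (Hungarian)."""
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    row, col = linear_sum_assignment(-cost)   # maximize matched counts
    return cost[row, col].sum() / len(y_true)

# nmi = normalized_mutual_info_score(y_true, y_pred)
# ari = adjusted_rand_score(y_true, y_pred)
# dbi = davies_bouldin_score(features, y_pred)  # internal indicator: lower is better
```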
It is difficult to analyze semantic relations automatically, especially the semantic relations of special Chinese sentence patterns. In this paper, we apply a novel model, the feature structure, to represent Chinese semantic relations; it is formalized as a "recursive directed graph". We focus on special Chinese sentence patterns, including complex noun phrases, verb-complement structures, pivotal sentences, serial verb sentences, and subject-predicate predicate sentences. Compared with dependency structure, the feature structure facilitates richer extraction of Chinese semantic information. The results show that the recursive directed graph is more suitable for extracting complex Chinese semantic relations.
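As a purely illustrative reading of "recursive directed graph" (the paper's formalization is not reproduced in the abstract), a node may hold either a word or an embedded subgraph, so that a clause can itself serve as an argument of a relation:

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class Node:
    """A node holds a word or, recursively, an embedded graph (e.g., a clause)."""
    content: Union[str, "RecursiveDigraph"]

@dataclass
class RecursiveDigraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # (head_idx, relation, dep_idx) triples

    def add_edge(self, head: int, relation: str, dep: int):
        self.edges.append((head, relation, dep))
```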
A steady increase in consumer demands, and severe constraints from both a somewhat damaged environment and newly installed government policies, require today's product design and development to be faster and more efficient than ever before, while using even fewer resources. New holistic approaches, such as total product life cycle modeling, which embraces all aspects of a product's life cycle, are current attempts to solve these problems. Within the field of product design and modeling, feature technology has proved to be a very promising solution component. Owing to the tremendous growth of information technology, the transfer from low-level data processing towards knowledge modeling and information processing is about to change almost every computerized application. From this viewpoint, current problems of both feature frameworks and feature systems are analyzed with respect to static and dynamic consistency breakdowns. The analysis ranges from the early stages of designing (feature) concepts to final system implementation and application. For the first time, an integrated view is given on approaches, solutions, and practical experience with feature concepts and structures, providing both a feature framework and an implementation with sufficient system architecture and computational power to master a fair number of known consistency breakdowns, while providing robust contexts for feature semantics and integrated models. Given today's heavy use of information technology, these are prerequisites if the full potential of feature technology is to be successfully translated into practice.
In this paper we propose a multiple-feature approach to the normalization task, which maps each disorder mention in a text to a unique Unified Medical Language System (UMLS) concept unique identifier (CUI). We develop a two-step method that acquires a list of candidate CUIs and their associated preferred names using the UMLS API, then chooses the closest CUI by calculating the similarity between the input disorder mention and each candidate. The similarity calculation step is formulated as a classification problem, and multiple features (string features, ranking features, similarity features, and contextual features) are used to normalize the disorder mentions. The results show that the multiple-feature approach improves the accuracy of the normalization task from 32.99% to 67.08% compared with the MetaMap baseline.
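As an illustration of the string-feature family (the paper's exact feature definitions are not given in the abstract, so the fields below are assumptions), a mention/candidate pair can be compared with a few standard-library measures:

```python
from difflib import SequenceMatcher

def string_features(mention: str, candidate: str) -> dict:
    """Simple string features comparing a mention with a candidate's preferred name."""
    m, c = mention.lower(), candidate.lower()
    m_tokens, c_tokens = set(m.split()), set(c.split())
    return {
        "exact_match": float(m == c),
        "char_ratio": SequenceMatcher(None, m, c).ratio(),
        "token_jaccard": len(m_tokens & c_tokens) / (len(m_tokens | c_tokens) or 1),
        "prefix_match": float(c.startswith(m) or m.startswith(c)),
    }
```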