Chinese Clinical Named Entity Recognition (CNER) is a crucial step in extracting medical information and is of great significance in promoting medical informatization. However, CNER poses challenges due to the specificity of clinical terminology, the complexity of Chinese text semantics, and the uncertainty of Chinese entity boundaries. To address these issues, we propose an improved CNER model based on multi-feature fusion and multi-scale local context enhancement. The model fuses multi-feature representations of pinyin, radical, Part of Speech (POS), and word boundary with BERT deep contextual representations to enrich the semantic representation of the text for more effective entity recognition. Furthermore, to address the model's limitation of focusing only on global features, we incorporate Convolutional Neural Networks (CNNs) with various kernel sizes to capture multi-scale local features of the text and enhance the model's comprehension of it. Finally, we integrate the obtained global and local features and employ a multi-head attention mechanism (MHA) to strengthen the model's focus on characters associated with medical entities, thereby boosting performance. We obtained F1 scores of 92.74% and 87.80% on the two CNER benchmark datasets, CCKS2017 and CCKS2019, respectively. The results demonstrate that our model outperforms the latest CNER models, showcasing its outstanding overall performance. The proposed CNER model therefore has important application value in constructing clinical medical knowledge graphs and intelligent Q&A systems. Funding: National Natural Science Foundation of China (61911540482 and 61702324).
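As a rough illustration of the pipeline this abstract describes, the sketch below (PyTorch) fuses BERT-style contextual embeddings with auxiliary feature embeddings, applies CNNs with several kernel sizes for multi-scale local context, and finishes with multi-head attention over the concatenated global and local features. All module names, dimensions, feature vocabulary sizes, and the tag count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiFeatureCNERLayer(nn.Module):
    """Sketch: fuse contextual (BERT-style) embeddings with auxiliary feature
    embeddings, add multi-scale CNN local context, then multi-head attention."""
    def __init__(self, bert_dim=768, feat_vocab=50, feat_dim=32,
                 n_features=4, kernel_sizes=(3, 5, 7), n_heads=8, n_tags=9):
        super().__init__()
        # One embedding table per auxiliary feature (pinyin, radical, POS, boundary).
        self.feat_embs = nn.ModuleList(
            nn.Embedding(feat_vocab, feat_dim) for _ in range(n_features))
        fused_dim = bert_dim + n_features * feat_dim
        # Multi-scale local context: one Conv1d per kernel size.
        self.convs = nn.ModuleList(
            nn.Conv1d(fused_dim, fused_dim, k, padding=k // 2) for k in kernel_sizes)
        combined = fused_dim * (1 + len(kernel_sizes))
        self.attn = nn.MultiheadAttention(combined, n_heads, batch_first=True)
        self.classifier = nn.Linear(combined, n_tags)

    def forward(self, bert_out, feat_ids):
        # bert_out: (B, T, bert_dim); feat_ids: (B, T, n_features)
        feats = [emb(feat_ids[..., i]) for i, emb in enumerate(self.feat_embs)]
        fused = torch.cat([bert_out] + feats, dim=-1)          # global representation
        x = fused.transpose(1, 2)                               # (B, C, T) for Conv1d
        local = [torch.relu(conv(x)).transpose(1, 2) for conv in self.convs]
        combined = torch.cat([fused] + local, dim=-1)           # global + multi-scale local
        attended, _ = self.attn(combined, combined, combined)   # focus on entity-related chars
        return self.classifier(attended)                        # per-character tag logits

# Toy usage with random tensors standing in for real BERT output and feature ids.
model = MultiFeatureCNERLayer()
logits = model(torch.randn(2, 20, 768), torch.randint(0, 50, (2, 20, 4)))
print(logits.shape)  # (2, 20, 9)
```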
Most ground faults in distribution networks are caused by insulation deterioration of power equipment. It is difficult to detect insulation deterioration of the distribution network in time, and the development trend of an initial insulation fault is unknown, which brings difficulties to distribution inspection. To solve these problems, a situational awareness method for initial insulation faults of the distribution network based on a multi-feature index comprehensive evaluation is proposed. Firstly, the insulation situation evaluation indices are selected by analyzing the insulation fault mechanism of the distribution network, and a relational database of the distribution network is designed based on the data and numerical characteristics of the existing distribution management system. Secondly, considering all kinds of fault factors of the distribution network and the influence of the power supply region, an evaluation method for the initial insulation fault situation of the distribution network is proposed, and the development situation of distribution network insulation faults is classified according to this method. Then, principal component analysis is used to reduce the dimension of the training and test samples of the distribution network data, and a support vector machine (SVM) is trained. The optimal parameter combination of the SVM model is found by grid search, and a multi-class SVM model based on the 1-v-1 method is constructed. Finally, the trained multi-class SVM is used to predict samples of six situation levels. Simulation results show that the average prediction accuracy over the six situation levels is above 95%, and the perception accuracy for four situation levels is above 96%. In addition, an insulation maintenance decision scheme for different situation levels can be given when no fault has occurred or the insulation fault is at an early stage, which can meet the needs of power distribution and inspection for accurately sensing the insulation fault situation. The correctness and effectiveness of the method are verified. Funding: Science and Technology Project of China Southern Power Grid (YNKJXM20210175); National Natural Science Foundation of China (52177070).
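A minimal scikit-learn sketch of the classification stage described above: PCA dimension reduction followed by an RBF SVM tuned by grid search, with the SVM's built-in one-vs-one (1-v-1) multi-class voting. The synthetic data, feature dimensions, and parameter grid are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Toy stand-in for multi-feature insulation-situation indices with 6 situation levels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(600, 12)), rng.integers(0, 6, size=600)

# PCA reduces the dimension of the samples; SVC performs one-vs-one multi-class voting.
pipe = Pipeline([("pca", PCA(n_components=6)),
                 ("svm", SVC(decision_function_shape="ovo"))])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10, 100],
                           "svm__gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.score(X, y))
```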
This paper analyzes the progress of handwritten Chinese character recognition technology from two perspectives: traditional recognition methods and deep learning-based recognition methods. Firstly, the complexity of Chinese character recognition is pointed out, including its numerous categories, complex structure, and the problem of similar characters, especially the variability of handwritten Chinese characters. Subsequently, recognition methods based on feature optimization, model optimization, and fusion techniques are highlighted. Studies that combine feature optimization with model improvement are further explored; these studies further enhance the recognition effect through complementary advantages. Finally, the article summarizes the current challenges of Chinese character recognition technology, including accuracy improvement, model complexity, and real-time performance, and looks forward to future research directions.
Vehicle re-identification (ReID) aims to retrieve the target vehicle from an extensive image gallery through its appearance from various views in cross-camera scenarios. It has gradually become a core technology of intelligent transportation systems. Most existing vehicle re-identification models adopt joint learning of global and local features. However, they directly use the extracted global features, resulting in insufficient feature expression. Moreover, local features are primarily obtained through advanced annotation and complex attention mechanisms, which require additional costs. To solve these issues, a multi-feature learning model with enhanced local attention for vehicle re-identification (MFELA) is proposed in this paper. The model consists of global and local branches. The global branch utilizes both middle- and high-level semantic features of ResNet50 to enhance the global representation capability, and multi-scale pooling operations are used to obtain multi-scale information. The local branch utilizes the proposed Region Batch Dropblock (RBD), which encourages the model to learn discriminative features for different local regions by randomly dropping the same corresponding areas across a batch during training, enhancing attention to local regions. Features from both branches are then combined to provide a more comprehensive and distinctive feature representation. Extensive experiments on the VeRi-776 and VehicleID datasets prove that our method achieves excellent performance. Funding: National Natural Science Foundation of China (61502240, 61502096, 61304205, 61773219); Natural Science Foundation of Jiangsu Province (BK20201136, BK20191401); Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX21_0363); Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
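The Region Batch Dropblock idea (dropping the same region in every feature map of a batch) can be sketched roughly as follows in PyTorch; the drop ratios and the simple rectangular mask are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class RegionBatchDropBlock(nn.Module):
    """Sketch: during training, one randomly chosen rectangular region is zeroed in
    every feature map of the batch, pushing the model to describe vehicles with the
    remaining local regions. Ratios are illustrative, not the paper's settings."""
    def __init__(self, h_ratio=0.3, w_ratio=1.0):
        super().__init__()
        self.h_ratio, self.w_ratio = h_ratio, w_ratio

    def forward(self, x):                        # x: (B, C, H, W)
        if not self.training:
            return x
        _, _, h, w = x.shape
        dh, dw = max(1, int(h * self.h_ratio)), max(1, int(w * self.w_ratio))
        top = torch.randint(0, h - dh + 1, (1,)).item()
        left = torch.randint(0, w - dw + 1, (1,)).item()
        mask = torch.ones_like(x)
        mask[:, :, top:top + dh, left:left + dw] = 0   # same area for the whole batch
        return x * mask

feat = torch.randn(8, 256, 16, 16)
print(RegionBatchDropBlock().train()(feat).shape)
```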
Urban land provides a suitable location for various economic activities which affect the development of surrounding areas. With rapid industrialization and urbanization, contradictions in land use have become more noticeable. Urban administrators and decision-makers seek modern methods and technology to provide information support for urban growth. Recently, with the fast development of high-resolution sensor technology, more relevant data can be obtained, which is an advantage in studying the sustainable development of urban land use. However, these data are only information sources and are a mixture of "information" and "noise". Processing, analysis, and information extraction from remote sensing data are necessary to provide useful information. This paper extracts urban land-use information from a high-resolution image by using the multi-feature information of the image objects, adopting an object-oriented image analysis approach and multi-scale image segmentation technology. A classification and extraction model is set up based on the multiple features of the image objects, in order to provide information for reasonable planning and effective management. This new image analysis approach offers a satisfactory solution for extracting information quickly and efficiently. Funding: Research Foundation for Outstanding Young Teachers, China University of Geosciences (Wuhan) (CUGQNL0616); Research Foundation of the State Key Laboratory of Geological Processes and Mineral Resources (MGMR2002-02); Hubei Provincial Department of Education (B).
All human languages have words that can mean different things in different contexts; such words with multiple meanings are potentially "ambiguous". The process of deciding which of several meanings of a term is intended in a given context is known as word sense disambiguation (WSD). This paper presents a method of WSD that assigns a target word the sense that is most related to the senses of its neighbor words. We explore the use of measures of relatedness between word senses based on a novel hybrid approach. First, we investigate how to "literally" and "regularly" express a "concept". We apply set algebra to WordNet's synsets in cooperation with WordNet's word ontology. In this way we establish regular rules for constructing various representations (lexical notations) of a concept using Boolean operators and word forms in various synsets defined in WordNet. Then we establish a formal mechanism for quantifying and estimating the semantic relatedness between concepts: we use concept distribution statistics to determine the degree of semantic relatedness between two lexically expressed concepts. The experimental results showed good performance on SemCor, a subset of the Brown corpus. We observe that measures of semantic relatedness are useful sources of information for WSD.
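A much-simplified sketch of the "assign the sense most related to the neighbors" idea, using NLTK's WordNet interface; Wu-Palmer similarity stands in here for the paper's concept-distribution relatedness measure, so this is only a baseline illustration, not the proposed method.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def disambiguate(target, neighbors):
    """Pick the sense of `target` most related to its neighbors' senses,
    using Wu-Palmer similarity as a stand-in relatedness measure."""
    best_sense, best_score = None, -1.0
    for sense in wn.synsets(target):
        score = 0.0
        for word in neighbors:
            sims = [sense.wup_similarity(s) or 0.0 for s in wn.synsets(word)]
            score += max(sims, default=0.0)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(disambiguate("bank", ["money", "deposit", "loan"]))
```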
Knowledge of the flow regime is very important for quantifying the pressure drop and the stability and safety of two-phase flow systems. Based on image multi-feature fusion and a support vector machine, a new method to identify the flow regime in two-phase flow is presented. Firstly, gas-liquid two-phase flow images, including bubbly flow, plug flow, slug flow, stratified flow, wavy flow, annular flow, and mist flow, were captured by digital high-speed video systems in a horizontal tube. The image moment invariants and gray-level co-occurrence matrix texture features were extracted using image processing techniques. To improve the performance of the multiple classifier system, rough set theory was used to remove inessential factors. Furthermore, the support vector machine was trained using these dimension-reduced eigenvectors as flow regime samples, and intelligent flow regime identification was realized. The test results showed that the image features reduced with rough set theory could excellently reflect the differences between the seven typical flow regimes, and the successfully trained support vector machine could quickly and accurately identify the seven typical flow regimes of gas-liquid two-phase flow in the horizontal tube. The image multi-feature fusion method provides a new way to identify gas-liquid two-phase flow and achieves higher identification ability than single features. The overall identification accuracy was 100%, and the estimated image processing time was 8 ms for online flow regime identification. Funding: National Natural Science Foundation of China (50706006); Science and Technology Development Program of Jilin Province (20040513).
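A rough sketch of the feature-extraction and classification chain (Hu moment invariants plus gray-level co-occurrence matrix texture statistics fed to an SVM); the rough-set feature reduction step is omitted, and the synthetic frames and feature choices are assumptions for illustration.

```python
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops  # spelled 'greycomatrix' in older scikit-image
from sklearn.svm import SVC

def frame_features(gray):
    """Hu moment invariants plus a few GLCM texture statistics for one frame."""
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hu, texture])

# Toy data: random frames standing in for the seven flow-regime image classes.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(70, 64, 64), dtype=np.uint8)
labels = np.repeat(np.arange(7), 10)
X = np.array([frame_features(f) for f in frames])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```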
Massive open online courses (MOOCs) have recently gained worldwide attention in the field of education. MOOCs provide a new option for learning various kinds of knowledge. A mass of data mining algorithms have been proposed to analyze learners' characteristics and classify the learners into different groups. However, most current algorithms mainly focus on the final grade of the learners, which may result in an improper classification. To overcome the shortcomings of the existing algorithms, a novel multi-feature weighting based K-means (MFWK-means) algorithm is proposed in this paper. Correlations between the widely used grade feature and other features are first investigated, and then the learners are classified based on their grades and weighted features with the proposed MFWK-means algorithm. Experimental results on the Canvas Network Person-Course (CNPC) dataset demonstrate the effectiveness of our method. Moreover, a comparison between the new MFWK-means and the traditional K-means clustering algorithm shows the superiority of the proposed method.
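A minimal sketch of the multi-feature weighting idea: each feature is weighted by the strength of its correlation with the grade before ordinary K-means is run. The toy data, feature names, and use of absolute correlation as the weight are assumptions, not the paper's exact weighting scheme.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy learner records: columns = [grade, forum_posts, video_hours, quiz_attempts].
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
X[:, 1:] += 0.5 * X[:, [0]]            # make behaviour features correlate with grade

Xs = StandardScaler().fit_transform(X)
grade = Xs[:, 0]
# Weight each feature by the absolute value of its correlation with the grade,
# so grade-related behaviour contributes more to the clustering.
weights = np.array([abs(np.corrcoef(grade, Xs[:, j])[0, 1]) for j in range(Xs.shape[1])])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xs * weights)
print(np.bincount(labels))
```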
This paper presents a new approach to determining whether a personal name of interest across documents refers to the same entity. Firstly, three vectors are formed for each text: a personal name Boolean vector denoting whether a personal name occurs in the text, a biographical word Boolean vector representing title, occupation, and so forth, and a feature vector with real values. Then, by combining a heuristic strategy based on the Boolean vectors with an agglomerative clustering algorithm based on the feature vectors, the approach resolves multi-document personal name coreference. Experimental results on the "Wang Gang" corpus show that this approach achieves good performance.
It is common for different individuals to share the same name, which makes it time-consuming to search for information about a particular individual on the web. Name disambiguation is necessary to help users find the person of interest more readily. In this paper, we propose an Adaptive Resonance Theory (ART) based two-stage strategy for this problem. We obtain a first-stage clustering result with an ART1 model and then merge similar clusters in the second stage. Our strategy mimics the process of manual disambiguation and does not need to predict the number of clusters, which makes it well suited to the disambiguation task. Experimental results show that, in comparison with the agglomerative clustering method, our strategy improves the performance by 0.92% and 5.00%, respectively, on two kinds of name recognition results.
As a new type of Denial of Service (DoS) attack, Low-rate Denial of Service (LDoS) attacks render the traditional methods for detecting Distributed Denial of Service (DDoS) attacks ineffective because of their low average rate and concealment. Using features extracted from the network traffic, a new detection approach based on multi-feature fusion is proposed in this paper to solve this problem. An attack feature set containing the Acknowledgement (ACK) sequence number, the packet size, and the queue length is used to classify normal traffic and LDoS attack traffic. Each feature is digitized and preprocessed separately to fit the input of a K-Nearest Neighbor (KNN) classifier, and the decision contour matrix is obtained. The a posteriori probabilities in the matrix are then fused, and the fusion decision index D is used as the basis for detecting LDoS attacks. Experiments show that the detection rate of the multi-feature fusion algorithm is higher than those of single-feature detection methods and other algorithms. Funding: National Natural Science Foundation of China-Civil Aviation joint fund (U1933108); Fundamental Research Funds for the Central Universities of China (3122019051).
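A simplified sketch of the per-feature KNN detection with posterior fusion; plain averaging of the per-feature posteriors stands in for the paper's decision-contour fusion, and the synthetic traffic statistics are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy traffic windows: columns = ACK-sequence statistic, packet size, queue length;
# label 1 = LDoS attack, 0 = normal.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (300, 3)), rng.normal(1.5, 1, (300, 3))])
y = np.r_[np.zeros(300), np.ones(300)]

# One KNN classifier per feature; each yields a posterior P(attack | feature).
posteriors = []
for j in range(X.shape[1]):
    knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, [j]], y)
    posteriors.append(knn.predict_proba(X[:, [j]])[:, 1])

# Fuse the per-feature posteriors into a single decision index D and threshold it.
D = np.mean(posteriors, axis=0)
pred = (D > 0.5).astype(int)
print("detection rate:", (pred[y == 1] == 1).mean())
```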
Smoke detection is the most commonly used method for early warning of fire and is widely used for fire detection in forests. In most existing smoke detection methods, empty spaces and obstacles interfere with detection and cause false smoke roots to be extracted. This study developed a new smoke root search algorithm based on a multi-feature fusion dynamic extraction strategy, which determines smoke origin candidate points and regions based on a multi-frame discrete confidence level. The results show that the new method provides a more complete smoke contour with no background interference compared with the results of existing methods. Unlike video-based methods that rely on continuous frames, an adaptive threshold method was developed to build a judgment image set composed of non-consecutive frames. The smoke root origin search algorithm increased the detection rate and significantly reduced the false detection rate compared with existing methods. Funding: National Natural Science Foundation of China (32171797 and 31800549).
To address the degradation of particle filtering and the weak robustness of single-feature tracking, this paper presents a kernel particle filtering tracking method based on multi-feature integration. A new weight updating method for kernel particle filtering is first given, and robust tracking is then realized by integrating color and texture features within the kernel particle filtering framework. A spatial histogram and an integral histogram are adopted to calculate the color and texture features, respectively. These two calculation methods effectively overcome their individual shortcomings and, at the same time, improve the real-time performance of particle filtering. The algorithm also improves sampling effectiveness and reduces the redundant computation and particle degradation of particle filtering. Finally, target tracking experiments were carried out with the method under complicated backgrounds and occlusion. Experimental results show that the method can reliably and accurately track the target and handle target occlusion properly. Funding: Natural Science Foundation of Heilongjiang Province of China (QC2001C060); Science and Technology Research Project of the Office of Education of Heilongjiang Province (11531307).
Word sense disambiguation (WSD) is a fundamental but significant task in natural language processing which directly affects the performance of upper-level applications. However, WSD is very challenging due to the knowledge bottleneck problem, i.e., it is hard to acquire abundant disambiguation knowledge, especially in Chinese. To solve this problem, this paper proposes a graph-based Chinese WSD method with multi-knowledge integration. In particular, a graph model combining various Chinese and English knowledge resources through word sense mapping is designed. Firstly, the content words in a Chinese ambiguous sentence are extracted and mapped to English words with BabelNet. Then, English word similarity is computed based on English word embeddings and a knowledge base, and Chinese word similarity is evaluated with Chinese word embeddings and HowNet, respectively. The weights of the three kinds of word similarity are optimized with a simulated annealing algorithm so as to obtain their overall similarities, which are used to construct a disambiguation graph. A graph scoring algorithm evaluates the importance of each word sense node and judges the right senses of the ambiguous words. Extensive experimental results on the SemEval dataset show that our proposed WSD method significantly outperforms the baselines. Funding: National Key R&D Program of China (2018YFC0831704); National Natural Science Foundation of China (61502259); Natural Science Foundation of Shandong Province (ZR2017MF056); Taishan Scholar Program of Shandong Province (directed by Prof. Yinglong Wang).
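A toy sketch of the similarity-fusion and graph-scoring stages: three similarity sources are combined with weights (tuned by simulated annealing in the paper, fixed here), a disambiguation graph is built, and sense nodes are scored, with PageRank standing in for the paper's graph scoring algorithm. The matrices, weights, and graph size are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def fuse_similarities(sim_en, sim_zh_emb, sim_zh_hownet, weights):
    """Weighted combination of the three word-sense similarity sources."""
    w = np.asarray(weights) / np.sum(weights)
    return w[0] * sim_en + w[1] * sim_zh_emb + w[2] * sim_zh_hownet

# Toy similarity matrices over 6 candidate sense nodes (stand-ins for BabelNet /
# embedding / HowNet scores); the paper tunes the weights with simulated annealing.
rng = np.random.default_rng(3)
sims = [rng.uniform(0, 1, (6, 6)) for _ in range(3)]
fused = fuse_similarities(*sims, weights=[0.4, 0.3, 0.3])

# Build the disambiguation graph and score sense nodes; PageRank is a generic
# graph-scoring stand-in, not the paper's algorithm.
G = nx.Graph()
for i in range(6):
    for j in range(i + 1, 6):
        G.add_edge(i, j, weight=float(fused[i, j]))
scores = nx.pagerank(G, weight="weight")
print(max(scores, key=scores.get))  # highest-scoring sense node
```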
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In order to retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed, which makes use of multiple features of the image and combines the advantages of PCNN based on local entropy and on the variance of local entropy. Experimental results indicate that the proposed image fusion method better preserves image details, is more robust, and significantly improves the visual effect compared with other fusion methods, with less information distortion.
A hierarchical particle filter (HPF) framework based on multi-feature fusion is proposed. The proposed HPF effectively uses different feature information to avoid the tracking failures that single-feature methods suffer in a complicated environment. In this approach, the Harris algorithm is introduced to detect the corner points of the object, and a corner matching algorithm based on singular value decomposition is used to compute the first-order weights and concentrate particles in the high-likelihood area. Then the local binary pattern (LBP) operator is used to build the observation model of the target based on color and texture features, from which the second-order weights of the particles and the accurate location of the target are obtained. Moreover, a backstepping controller is proposed to complete the whole tracking system. Simulations and experiments are carried out, and the results show that the HPF algorithm with the backstepping controller achieves stable and accurate tracking with good robustness in complex environments. Funding: National Natural Science Foundation of China (61304097); Projects of Major International (Regional) Joint Research Program, NSFC (61120106010); Foundation for Innovation Research Groups of the National Natural Science Foundation of China (61321002).
In the smart logistics industry, unmanned forklifts that intelligently identify logistics pallets can improve work efficiency in warehousing and transportation and outperform traditional manual forklifts driven by humans. They therefore play a critical role in smart warehousing, and semantic segmentation is an effective method for realizing the intelligent identification of logistics pallets. However, most current recognition algorithms are ineffective due to the diverse types of pallets, their complex shapes, frequent occlusions in production environments, and changing lighting conditions. This paper proposes a novel multi-feature fusion-guided multiscale bidirectional attention (MFMBA) neural network for logistics pallet segmentation. To better predict the foreground category (the pallet) and the background category (the cargo) of a pallet image, our approach extracts three types of features (grayscale, texture, and Hue, Saturation, Value features) and fuses them. The proposed multiscale architecture deals with the problem that the size and shape of the pallet may appear different in the image in an actual, complex environment, which usually makes feature extraction difficult, and it can extract additional semantic features. Also, since a traditional attention mechanism only assigns attention weights from a single direction, we designed a bidirectional attention mechanism that assigns cross-attention weights to each feature from two directions, horizontally and vertically, significantly improving segmentation. Finally, comparative experimental results show that the precision of the proposed algorithm is 0.53%-8.77% better than that of the other methods we compared. Funding: Postgraduate Scientific Research Innovation Project of Hunan Province (QL20210212); Scientific Innovation Fund for Postgraduates of Central South University of Forestry and Technology (CX202102043).
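One plausible reading of the bidirectional attention idea, sketched in PyTorch: separate attention maps are computed along the horizontal and vertical directions of the fused feature map and applied to the features. The layer shapes and the way the two directions are combined are assumptions, not the MFMBA definition.

```python
import torch
import torch.nn as nn

class BidirectionalSpatialAttention(nn.Module):
    """Sketch: attention weights are assigned from two directions, a softmax over
    the horizontal axis and one over the vertical axis, then applied to the features.
    Layer sizes are illustrative only."""
    def __init__(self, channels=64):
        super().__init__()
        self.score_h = nn.Conv2d(channels, 1, kernel_size=1)
        self.score_v = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        a_h = torch.softmax(self.score_h(x), dim=-1)   # weights across width (horizontal)
        a_v = torch.softmax(self.score_v(x), dim=-2)   # weights across height (vertical)
        return x * a_h + x * a_v                       # cross-direction weighted features

fused = torch.randn(2, 64, 32, 32)   # e.g. fused grayscale/texture/HSV features
print(BidirectionalSpatialAttention()(fused).shape)
```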
Word Sense Disambiguation (WSD) is the task of deciding the sense of an ambiguous word in a particular context. Most current studies on WSD use only a few ambiguous words as test samples, which limits their practical application. In this paper, we perform a WSD study based on a large-scale real-world corpus using two unsupervised learning algorithms: a ±n-improved Bayesian model and a Dependency Grammar (DG)-improved Bayesian model. The ±n-improved classifiers reduce the context window size of ambiguous words with a close-distance feature extraction method and decrease the interference of useless features, thus clearly improving the accuracy, which reaches 83.18% (in the open test). The DG-improved classifier can more effectively overcome the noise effect existing in the Naive Bayesian classifier. Experimental results show that this approach performs well on Chinese WSD, and the open test achieved an accuracy of 86.27%. Funding: National Natural Science Foundation of China (60435020).
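A small sketch of the ±n close-distance feature idea with a Naive Bayes classifier: only the tokens within ±n positions of the ambiguous word are kept as context features. The toy English contexts and window size are assumptions; the paper works on a large-scale Chinese corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def window(tokens, target_idx, n=3):
    """Keep only the ±n tokens around the ambiguous word (close-distance features)."""
    lo, hi = max(0, target_idx - n), target_idx + n + 1
    return " ".join(tokens[lo:target_idx] + tokens[target_idx + 1:hi])

# Toy sense-labelled contexts for one ambiguous word; real systems train per word.
samples = [
    (["deposit", "the", "money", "in", "the", "bank", "today"], 5, "finance"),
    (["the", "bank", "approved", "my", "loan", "request"], 1, "finance"),
    (["we", "sat", "on", "the", "river", "bank", "fishing"], 5, "river"),
    (["the", "bank", "of", "the", "stream", "was", "muddy"], 1, "river"),
]
X = [window(toks, i) for toks, i, _ in samples]
y = [sense for _, _, sense in samples]
clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(X, y)
print(clf.predict([window(["high", "interest", "at", "the", "bank"], 4)]))
```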
A sense feature system (SFS) is first automatically constructed from the text corpora to structure the textual information. WSD rules are then extracted from the SFS according to their certainty factors and are applied to disambiguate the senses of polysemous words. The entropy of a deterministic rough prediction is used to measure the decision quality of a rule set. Finally, a back-off rule smoothing method is further designed to improve the performance of the WSD model. In the experiments, the mean correction rate achieved for WSD with rule smoothing is 0.92.
Natural language processing has a set of phases that evolve from lexical text analysis to pragmatic analysis, in which the author's intentions are revealed. The ambiguity problem appears in all of these tasks. Previous works try to perform word sense disambiguation, the process of assigning a sense to a word inside a specific context, by creating algorithms under a supervised or unsupervised approach, which means that those algorithms either do or do not use an external lexical resource. This paper presents an approach that combines unsupervised algorithms through a set of classifiers; the result is a learning algorithm based on unsupervised methods for the word sense disambiguation process. It begins with an introduction to word sense disambiguation concepts, then analyzes some unsupervised algorithms in order to extract the best of them, and combines them under a supervised approach making use of several classifiers.