Abstract: When the similar single-difference methodology (SSDM) is used to solve the deformation values of monitoring points, the resulting deformation information series is sometimes unstable. To overcome this shortcoming, a Kalman filtering algorithm for this series is established, and its correctness and validity are verified with test data obtained on a movable platform in a plane. The results show that Kalman filtering can improve the correctness, reliability and stability of the deformation information series.
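As an illustration of the filtering step, here is a minimal sketch of a Kalman filter applied to a one-dimensional deformation series, assuming a random-walk state model; the noise variances q and r are placeholder values, not the paper's tuned parameters.

```python
import numpy as np

def kalman_filter_series(z, q=1e-4, r=1e-2):
    """Filter a 1-D deformation series z with a random-walk state model.

    x_k = x_{k-1} + w_k,  w_k ~ N(0, q)   (state: true deformation)
    z_k = x_k + v_k,      v_k ~ N(0, r)   (measurement, e.g. an SSDM solution)
    """
    x, p = float(z[0]), 1.0        # initial state and variance
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        p = p + q                  # predict: variance grows by process noise
        g = p / (p + r)            # Kalman gain
        x = x + g * (zk - x)       # update with the new measurement
        p = (1.0 - g) * p
        out[k] = x
    return out

# Example: noisy deformation series around a slow trend
t = np.arange(100)
series = 0.002 * t + 0.05 * np.random.randn(100)
filtered = kalman_filter_series(series)
```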
Funding: the National Land and Resource Bureau Science and Technology Foundation (No. 20001020304).
Abstract: A new similar single-difference mathematical model (SSDM) and its corresponding algorithm are advanced to solve the deformation of a monitoring point directly in a single epoch. The method for building the SSDM is introduced in detail, the main error sources affecting the accuracy of deformation measurement are analyzed briefly, and the basic algorithm and steps for solving the deformation are discussed. To validate the correctness and accuracy of the similar single-difference model, a test with five dual-frequency receivers was carried out in Feb. 2001 on a slideway that moved in a plane; five sessions were observed. The numerical results of the test data show that the advanced model is correct.
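For readers unfamiliar with single differencing, the sketch below forms between-receiver single differences for the satellites common to a base and a monitoring (rover) receiver at one epoch; the observation values and dictionary layout are purely illustrative, not the paper's model.

```python
def single_differences(base_obs, rover_obs):
    """Between-receiver single differences, one per common satellite.

    base_obs / rover_obs: dict mapping satellite id -> carrier phase (cycles)
    at the same epoch. Differencing cancels the satellite clock error and,
    over a short baseline, most orbital and atmospheric errors, leaving the
    baseline geometry (and hence the deformation of the monitored point).
    """
    common = base_obs.keys() & rover_obs.keys()
    return {sat: rover_obs[sat] - base_obs[sat] for sat in common}

# One epoch of (invented) carrier phases from the two receivers
epoch_base  = {"G01": 1204.71, "G07": 2210.30, "G12": 998.12}
epoch_rover = {"G01": 1205.02, "G07": 2210.91, "G12": 997.85}
sd = single_differences(epoch_base, epoch_rover)
```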
Abstract: The K-means algorithm is one of the most widely used algorithms in clustering analysis. To deal with the problem caused by the random selection of initial center points in the traditional algorithm, this paper proposes an improved K-means algorithm based on a similarity matrix. The improved algorithm avoids the random selection of initial center points, so it provides effective initial points for the clustering process and reduces the fluctuation of clustering results caused by initial point selection, yielding better clustering quality. The experimental results also show that the F-measure of the improved K-means algorithm is greatly improved and the clustering results are more stable.
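The paper's exact initialization rule is not reproduced here; the following sketch shows one plausible similarity-matrix scheme, where a Gaussian-kernel similarity (an assumed choice) picks a dense first seed and then spreads the remaining seeds to mutually dissimilar points.

```python
import numpy as np

def similarity_based_centers(X, k):
    """Pick k initial centers from a similarity matrix instead of at random.

    The first seed is the point with highest total similarity (densest);
    each next seed is the point least similar to the seeds chosen so far,
    so the seeds are spread across the data.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dist
    sim = np.exp(-d2 / d2.mean())                         # similarity matrix
    centers = [int(np.argmax(sim.sum(axis=1)))]           # densest point first
    for _ in range(k - 1):
        # point whose best similarity to any chosen seed is smallest
        centers.append(int(np.argmin(sim[:, centers].max(axis=1))))
    return X[centers]

X = np.random.rand(200, 2)
init = similarity_based_centers(X, k=3)   # feed to K-means as the init centers
```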
Funding: the National Natural Science Foundation of China (Grant No. 29767001).
Abstract: The molecular similarity of 139 organic compounds was calculated by the topological index method, and the flexible super-ball algorithm was used to scan for similar molecules and structures. The results show that the properties of organic compounds estimated by this method are reliable.
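The specific topological indices used for the 139 compounds are not listed in the abstract; as an assumed example, the sketch below computes the classical Wiener index from a hydrogen-suppressed molecular graph and compares two isomers by a simple index ratio.

```python
import networkx as nx

def wiener_index(edges):
    """Wiener index: sum of shortest-path distances over all atom pairs."""
    g = nx.Graph(edges)
    return sum(d for _, lengths in nx.shortest_path_length(g)
               for d in lengths.values()) // 2   # each pair counted twice

# n-butane vs isobutane as hydrogen-suppressed carbon skeletons
n_butane   = [(0, 1), (1, 2), (2, 3)]   # chain: Wiener index 10
iso_butane = [(0, 1), (1, 2), (1, 3)]   # branch: Wiener index 9
w1, w2 = wiener_index(n_butane), wiener_index(iso_butane)
similarity = min(w1, w2) / max(w1, w2)  # one crude index-ratio similarity
```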
Abstract: The fundamental problem of similarity studies, in the frame of data mining, is to examine and detect similar items in articles, papers, and books of huge sizes. In this paper, we are interested in the probabilistic, statistical and algorithmic aspects of studies of texts. We use the approach of k-shinglings, a k-shingling being defined as a sequence of k consecutive characters extracted from a text (k ≥ 1). The main stake in this field is to find accurate and quick algorithms to compute the similarity in short times, which is achieved using approximation methods. The first approximation method is statistical and is based on the theorem of Glivenko-Cantelli. The second is the banding technique. The third concerns a modification of the algorithm proposed by Rajaraman et al. ([1]), denoted here as RUM. The Jaccard index is the one used in this paper. We finally illustrate the results of the paper on the four Gospels. The results are very conclusive.
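A minimal sketch of the k-shingling and Jaccard-index machinery described above; the sample texts and the choice k = 4 are arbitrary.

```python
def shingles(text, k=4):
    """All k-shingles: length-k runs of consecutive characters in the text."""
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two shingle sets."""
    return len(a & b) / len(a | b)

s1 = shingles("in the beginning was the word")
s2 = shingles("in the beginning god created")
print(jaccard(s1, s2))   # similarity in [0, 1]
```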
Funding: Supported by the National Natural Science Foundation of China (No. 60872018), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20070293001), and the 973 Project (No. 2007CB310607).
Abstract: In this paper, we propose an improved hybrid semantic matching algorithm that combines Input/Output (I/O) semantic matching with text lexical similarity, to overcome the inability of existing semantic matching algorithms to distinguish services with the same I/O when they perform only I/O-based service signature matching in semantic web service discovery. The improved algorithm consists of two steps: the first is logic-based I/O concept ontology matching, which yields the candidate service set; the second is service name matching by lexical similarity against the candidate set, which yields the final precise matching result. Using the Ontology Web Language for Services (OWL-S) test collection, we tested our hybrid algorithm and compared it with OWL-S Matchmaker-X (OWLS-MX). The experimental results show that the proposed algorithm can pick out the most suitable advertised service for a user's request from very similar ones and provides better matching precision and efficiency than OWLS-MX.
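A toy sketch of the two-step pipeline, assuming a hand-made subsumption table for the logic step and difflib string similarity for the lexical step; the ontology, service names and data layout are invented for illustration.

```python
from difflib import SequenceMatcher

# Assumed toy ontology: concept -> set of ancestor concepts (subsumption)
ANCESTORS = {"EBook": {"Book", "Media"}, "Book": {"Media"}, "Price": {"Quantity"}}

def io_match(request_io, service_io):
    """Step 1 (logic-based): every requested concept must equal, subsume,
    or be subsumed by some advertised concept."""
    def compatible(a, b):
        return a == b or b in ANCESTORS.get(a, ()) or a in ANCESTORS.get(b, ())
    return all(any(compatible(r, s) for s in service_io) for r in request_io)

def lexical_rank(request_name, candidates):
    """Step 2: rank the I/O-compatible candidates by service-name similarity."""
    score = lambda s: SequenceMatcher(None, request_name.lower(),
                                      s["name"].lower()).ratio()
    return sorted(candidates, key=score, reverse=True)

services = [
    {"name": "BookPriceService", "io": {"Book", "Price"}},
    {"name": "EBookPriceFinder", "io": {"EBook", "Price"}},
]
candidates = [s for s in services if io_match({"Book", "Price"}, s["io"])]
best = lexical_rank("book price", candidates)[0]   # final precise match
```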
Funding: This work was funded by the National Natural Science Foundation of China under Grants No. 61772152 and No. 61502037, the Basic Research Project (Nos. JCKY2016206B001, JCKY2014206C002 and JCKY2017604C010), and the Technical Foundation Project (No. JSQB2017206C002).
Abstract: The Borda sorting algorithm is an improved algorithm based on the weighted position sorting algorithm. It is mainly suitable for search results with high duplication; for independent search results the effect is not very good. Moreover, the relative score in the Borda sorting algorithm is computed by a linearly decreasing rule, but the position relationship cannot fully represent changes in correlation. Aimed at these drawbacks, a new sorting algorithm, named the PMS-Sorting algorithm, is proposed in this paper. Firstly, the position scores of the returned results are standardized; then the similarity between the retrieval word string and the query results is incorporated into the algorithm, and the similarity calculation method is also improved. Experiments show that the improved algorithm is superior to the traditional sorting algorithm.
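The exact PMS-Sorting formulas are not given in the abstract; the sketch below only shows the general shape of such a merge score, combining a standardized position score with query/result string similarity under an assumed weight alpha.

```python
from difflib import SequenceMatcher

def pms_style_score(results, query, alpha=0.5):
    """Rank merged results by an assumed PMS-style score:
    alpha * standardized position + (1 - alpha) * query/title similarity."""
    n = len(results)
    merged = []
    for rank, title in enumerate(results):
        pos = (n - rank) / n                                  # position score in (0, 1]
        sim = SequenceMatcher(None, query.lower(), title.lower()).ratio()
        merged.append((alpha * pos + (1 - alpha) * sim, title))
    return sorted(merged, reverse=True)

hits = ["improved borda rank merging", "position weighted sorting",
        "unrelated page about cooking"]
for score, title in pms_style_score(hits, "borda sorting"):
    print(f"{score:.3f}  {title}")
```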
Abstract: Pattern discovery from time series is of fundamental importance. Most pattern discovery algorithms for time series capture the values of the series using some kind of similarity measure. Affected by scale and baseline, value-based methods are problematic when the objective is to capture the shape. Thus, a shape-based similarity measure, the Sh measure, is proposed, and the properties of this similarity and the corresponding proofs are given. A time series shape pattern discovery algorithm based on the Sh measure is then put forward. The proposed algorithm terminates in finitely many iterations with given computational and storage complexity. Finally, experiments on synthetic datasets and sunspot datasets demonstrate that the time series shape pattern algorithm is valid.
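The Sh measure itself is not reproduced here; the sketch below only illustrates the underlying point, namely that z-normalizing two series removes the baseline (mean) and scale (standard deviation) that defeat value-based comparisons.

```python
import numpy as np

def shape_distance(x, y):
    """Compare shapes, not values: z-normalize each series, then measure
    distance. (A stand-in for the paper's Sh measure, not the measure itself.)"""
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return np.linalg.norm(zx - zy)

t = np.linspace(0, 4 * np.pi, 200)
a = np.sin(t)                  # same shape ...
b = 100 + 5 * np.sin(t)        # ... very different baseline and scale
print(shape_distance(a, b))    # ~0: the shapes match despite the value gap
```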
Funding: the National Natural Science Foundation of China (No. 50674086), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20060290508), and the Postdoctoral Scientific Program of Jiangsu Province (No. 0701045B).
Abstract: In order to mine production and security information from security supervising data and to ensure security and safety in production and decision-making, a clustering analysis algorithm for security supervising data based on a semantic description in coal mines is studied. First, the hybrid semantic- and numerical-based description method for security supervising data in coal mines is described. Secondly, similarity measurement methods for semantic and numerical data are given separately, and a weight-based hybrid similarity measurement method for the security supervising data based on a semantic description in coal mines is presented. Thirdly, taking the hybrid similarity measurement as the distance criterion and drawing on a grid methodology, an improved grid-based CURE clustering algorithm is presented. Finally, simulation results on a security supervising data set from coal mines validate the efficiency of the algorithm.
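A minimal sketch of a weight-based hybrid similarity over mixed records, assuming exact-match scoring for semantic fields and range-normalized differences for numerical fields; the field names, value ranges and weights are invented, not the paper's.

```python
# Assumed value ranges for the numerical sensor fields
RANGES = {"ch4": (0.0, 10.0), "temp": (10.0, 40.0)}

def hybrid_similarity(rec_a, rec_b, w_sem=0.4, w_num=0.6):
    """Weighted combination of a semantic and a numerical similarity.

    Semantic part: fraction of categorical attributes that match exactly.
    Numerical part: 1 - normalized absolute difference, averaged per field.
    """
    sem_keys = list(rec_a["sem"])
    sem = sum(rec_a["sem"][k] == rec_b["sem"][k] for k in sem_keys) / len(sem_keys)
    num = 0.0
    for k, (lo, hi) in RANGES.items():
        num += 1 - abs(rec_a["num"][k] - rec_b["num"][k]) / (hi - lo)
    num /= len(RANGES)
    return w_sem * sem + w_num * num

a = {"sem": {"area": "face-3", "sensor": "methane"}, "num": {"ch4": 0.8, "temp": 24.0}}
b = {"sem": {"area": "face-3", "sensor": "methane"}, "num": {"ch4": 1.1, "temp": 25.5}}
print(hybrid_similarity(a, b))   # distance criterion = 1 - similarity
```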
Abstract: This paper presents a fuzzy logic approach to efficiently perform unsupervised character classification for improvement in the robustness, correctness and speed of a character recognition system. The characters are first split into eight typographical categories. The classification scheme uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. The fuzzy unsupervised character classification, which is natural in the repre...
Funding: Supported by the National Natural Science Foundation of China (No. 21365008) and the Science Foundation of Guangxi Province of China (No. 2012GXNSFAA053230).
Abstract: Refineries often need to find similar crude oils to replace a scarce crude oil in order to stabilize the feedstock properties. We first introduce the method for calculating the properties of blended crudes, and then create a crude oil selection and blending optimization model based on crude oil property data. The model is a constrained mixed-integer nonlinear program (MINLP) whose objective is to maximize the similarity between the blended crude oil and the objective crude oil. Furthermore, the model considers the selection of crude oils and their blending ratios simultaneously, transforming the search for a similar crude oil into a crude oil selection and blending optimization problem. We apply the Improved Cuckoo Search (ICS) algorithm to solve the model. In simulations, ICS was compared with the genetic algorithm, the particle swarm optimization algorithm and the CPLEX solver; the results show that ICS has very good optimization efficiency. The blending solution can serve as a reference for refineries looking for similar crude oils, and the proposed method can also inform the selection and blending optimization of other materials.
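The sketch below shows the two building blocks named above under a simplifying assumption of linear volumetric blending: a blended-property calculation and a similarity-to-target objective that an optimizer such as ICS could maximize. All property values are illustrative, and real indices such as viscosity would need proper blend indexes rather than linear mixing.

```python
import numpy as np

def blended_properties(props, ratios):
    """Volume-weighted linear blend of crude properties (first-order assumption)."""
    ratios = np.asarray(ratios) / np.sum(ratios)   # normalize blending ratios
    return ratios @ np.asarray(props)

def similarity_to_target(props, ratios, target):
    """Objective to maximize: closeness of the blend to the scarce crude."""
    blend = blended_properties(props, ratios)
    return 1.0 / (1.0 + np.linalg.norm((blend - target) / target))

# columns (invented): density (kg/m3), sulfur (wt%), acid number
crudes = [[860.0, 1.2, 0.3], [905.0, 2.8, 1.1], [840.0, 0.4, 0.1]]
target = np.array([875.0, 1.5, 0.5])
print(similarity_to_target(crudes, [0.5, 0.3, 0.2], target))
```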
Abstract: A new genetic algorithm for community detection in complex networks is proposed. It adopts a matrix encoding that enables traditional crossover between individuals. Initial populations are generated using node similarity, which enhances the diversity of initial individuals while retaining an acceptable level of accuracy, and improves the efficiency of the search for an optimal solution. Individual crossover is based on the quality of individuals' genes; all nodes unassigned to any community are grouped into a new community, while ambiguously placed nodes are assigned to the community to which most of their neighbors belong. Individual mutation, which splits a gene into two new genes or randomly fuses it into other genes, is non-uniform. The simplicity and effectiveness of the algorithm are revealed in experimental tests on artificial random networks and real networks. The accuracy of the algorithm is superior to that of some classic algorithms and comparable to that of some recent high-precision algorithms.
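As a sketch of similarity-seeded initialization only (with Jaccard neighborhood overlap standing in for the paper's unspecified node-similarity measure), each node adopts the community label of its most similar neighbor to form one initial individual:

```python
import networkx as nx

def similarity_seeded_partition(g):
    """One initial individual for the GA: each node joins its most similar
    neighbour, similarity being Jaccard overlap of neighbourhoods (assumed)."""
    label = {v: v for v in g}                     # start: every node alone
    for v in g:
        sims = [(len(set(g[v]) & set(g[u])) / len(set(g[v]) | set(g[u])), u)
                for u in g[v]]
        if sims:
            label[v] = label[max(sims)[1]]        # adopt best neighbour's label
    return label

g = nx.karate_club_graph()
population = [similarity_seeded_partition(g) for _ in range(20)]
```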
Funding: Supported by the National Natural Science Foundation of China (No. 60472050).
Abstract: This paper describes the theory, implementation, and experimental evaluation of an Aggregation Cache Replacement (ACR) algorithm. By considering the application background, carefully choosing weight values, using a special formula to calculate the similarity, and clustering ontologies by similarity to capture deeper embedded relations, ACR combines ontology similarity with the value of an object to decide which object is to be replaced. We demonstrate the usefulness of ACR through experiments. (a) The aggregation tree is created quite differently depending on the application case; therefore, clustering can direct content adaptation more accurately according to user perception and can satisfy users with different preferences. (b) Compared with the widely used Least-Recently-Used (LRU) and First-In-First-Out (FIFO) algorithms, ACR outperforms the latter two in accuracy and usability. (c) ACR has a better semantic explanation and makes adaptation more personalized and more precise.
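A minimal sketch of the replacement decision, with Jaccard overlap of concept sets standing in for the paper's "special formula" and an invented 0.6/0.4 weighting between similarity and object value.

```python
def acr_victim(cache, request_concepts):
    """Pick the replacement victim: the cached object with the lowest combined
    score of ontology similarity to the current request and object value."""
    W_SIM, W_VAL = 0.6, 0.4                       # assumed weight values
    def score(entry):
        sim = len(entry["concepts"] & request_concepts) / \
              len(entry["concepts"] | request_concepts)
        return W_SIM * sim + W_VAL * entry["value"]
    return min(cache, key=score)                  # least useful -> evicted

cache = [
    {"id": "a", "concepts": {"news", "sports"}, "value": 0.2},
    {"id": "b", "concepts": {"video", "music"}, "value": 0.9},
]
victim = acr_victim(cache, request_concepts={"news", "politics"})
```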
Funding: This work is supported in part by the National Science Foundation of China (Nos. 61672392 and 61373038) and in part by the National Key Research and Development Program of China (No. 2016YFC1202204).
Abstract: With the continuous expansion of software scale, software update and maintenance have become more and more important. However, frequent code updates make software more likely to introduce new defects, so predicting defects quickly and accurately for a software change has become an important problem for software developers. Current defect prediction methods often cannot reflect the feature information of defects comprehensively, and the detection effect is not ideal. Therefore, we propose a novel defect prediction model named ITNB (Improved Transfer Naive Bayes), based on an improved transfer Naive Bayesian algorithm, which mainly considers the following two aspects: (1) since the edge data of the test set may affect the similarity calculation and the final prediction result, we remove the edge data of the test set when calculating the data similarity between the training set and the test set; (2) since each feature dimension has a different effect on defect prediction, we construct a formula for the training data weights based on feature dimension weights and data gravity, and then calculate the prior probability and the conditional probability of the training data from the weight information, so as to construct a weighted Bayesian classifier for software defect prediction. To evaluate the performance of the ITNB model, we use six datasets from large open source projects, namely Bugzilla, Columba, Mozilla, JDT, Platform and PostgreSQL, and compare the ITNB model with the transfer Naive Bayesian (TNB) model. The experimental results show that our ITNB model achieves better results than the TNB model in terms of accuracy, precision and pd for within-project and cross-project defect prediction.
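The sketch below illustrates aspect (2) under assumed formulas: a data-gravity-style weight that grows for training instances close to the test-set centroid, scaled by per-feature dimension weights, feeding a weighted class prior. It is not the paper's exact weighting.

```python
import numpy as np

def gravity_weights(train, test, dim_w):
    """Assumed ITNB-style weights: training instances nearer the test-set
    centroid (in a per-feature weighted metric) get heavier weights."""
    center = test.mean(axis=0)                     # test-set centroid
    d2 = ((train - center) ** 2 * dim_w).sum(axis=1)
    return 1.0 / (1.0 + d2)                        # closer -> heavier

def weighted_prior(y, w):
    """Weighted, Laplace-smoothed class prior P(c) from instance weights."""
    classes = np.unique(y)
    return {c: (w[y == c].sum() + 1) / (w.sum() + len(classes)) for c in classes}

train = np.random.rand(50, 3)                      # source-project features
test = np.random.rand(20, 3) * 0.5                 # shifted target project
y = np.random.randint(0, 2, 50)                    # defect labels
w = gravity_weights(train, test, dim_w=np.array([0.5, 0.3, 0.2]))
priors = weighted_prior(y, w)                      # feeds the weighted classifier
```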
Abstract: With the increasing variety of application software in meteorological satellite ground systems, how to provision reasonable hardware resources and improve software efficiency is receiving more and more attention. In this paper, a software classification method based on software operating characteristics is proposed. The method uses run-time resource consumption to describe the running characteristics of software. Firstly, principal component analysis (PCA) is used to reduce the dimension of the software running feature data and to interpret the software characteristic information. Then a modified K-means algorithm is used to classify the meteorological data processing software. Finally, the classification is combined with the results of the principal component analysis to explain the operating characteristics of each software class, which serves as a basis for optimizing the allocation of hardware resources and improving the efficiency of software operation.
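A minimal sketch of the two-stage pipeline with scikit-learn, on invented resource-consumption data; the component count, cluster count and metric columns are assumptions, and the paper's K-means modification is not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# rows: software runs; columns (assumed): cpu %, memory MB, disk MB/s, net MB/s
X = np.random.rand(120, 4)

pca = PCA(n_components=2)                 # stage 1: reduce and interpret
Z = pca.fit_transform(X)
print(pca.explained_variance_ratio_)      # variance explained per component

labels = KMeans(n_clusters=3, n_init=10).fit_predict(Z)   # stage 2: classify
for c in range(3):
    print(c, Z[labels == c].mean(axis=0)) # per-class profile in PC space
```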
Abstract: This paper proposes the research and implementation of a text similarity system based on power spectrum analysis. It is not difficult to imagine that brain signals are closely linked with the writing process, so we build a text model and set a pulse signal function to obtain the power spectrum of the text. Specifically, power spectra of texts from the economic field are used to build a spectral library, and a power spectrum matching algorithm then judges whether a test text belongs to the economic field. This method enables the text similarity system to perform intelligent text classification efficiently and accurately.
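A sketch of the signal construction and matching, assuming characters are mapped to code-point amplitudes and spectra are compared by histogram intersection; the paper's actual pulse-signal function and matching rule are not specified in the abstract.

```python
import numpy as np

def text_power_spectrum(text):
    """Map a text to a signal (one sample per character, amplitude = code
    point — an assumed encoding) and take its normalized power spectrum."""
    signal = np.array([ord(c) for c in text], dtype=float)
    signal -= signal.mean()                        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum / spectrum.sum()

def spectrum_match(p, q):
    """Similarity of two normalized power spectra (histogram intersection)."""
    n = min(len(p), len(q))
    return float(np.minimum(p[:n], q[:n]).sum())

p = text_power_spectrum("gross domestic product rose two percent")
q = text_power_spectrum("exports and industrial output increased")
print(spectrum_match(p, q))   # compare against a spectral-library threshold
```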
Abstract: Against the background of the rapid development of the air transport industry, the phenomenon of abnormal flights has become increasingly serious due to factors such as the gradual reduction of resources, adverse climatic conditions, problems in air traffic control, and mechanical failures. To reduce losses, using optimization algorithms to study the recovery of abnormal flights has become a major problem for airlines. By upgrading the passenger recovery engine, this paper aims to provide the optimal recovery scheme for passengers, so as to reduce the risk of transferring overseas flights and thus reduce airlines' economic losses. In this paper, an optimization model and algorithm based on network flow, combined with actual business requirements, comprehensively consider multiple optimization objectives to quickly generate passenger recovery solutions, balancing airline revenue against the acceptance rate of passenger recovery. The practicability and effectiveness of the proposed model and algorithm are demonstrated by concrete examples.
Abstract: This paper reports on the implementation of efficient burst assembly algorithms and traffic prediction. The ultimate goal is to propose a new burst assembly algorithm based on a hybrid time/burst-length threshold with traffic prediction, to reduce burst assembly delay in OBS (Optical Burst Switching) networks. Research has shown that traffic always changes over time; hence, any measure that is put in place should be able to adapt to such changes. With our implemented burst assembly algorithm, the traffic rate is predicted and the predicted rate is used to dynamically adjust the burst assembly length. This work further investigates the impact of the proposed algorithm on traffic self-similarity.
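A minimal sketch of a hybrid assembler whose length threshold scales with a predicted traffic rate; the thresholds, the EWMA predictor, and the scaling rule are all assumptions, not the paper's parameters.

```python
import time

class HybridAssembler:
    """Hybrid (timer + length) burst assembly with a rate-adapted length
    threshold: heavier predicted load allows longer bursts."""
    def __init__(self, t_max=0.005, base_len=40_000):
        self.t_max, self.base_len = t_max, base_len
        self.buf, self.t0, self.rate_est = [], None, 1.0

    def add_packet(self, size, rate_sample):
        if not self.buf:
            self.t0 = time.monotonic()             # burst assembly starts
        self.buf.append(size)
        # EWMA traffic-rate predictor (assumed smoothing factor 0.1)
        self.rate_est = 0.9 * self.rate_est + 0.1 * rate_sample
        limit = self.base_len * self.rate_est      # length threshold tracks load
        if sum(self.buf) >= limit or time.monotonic() - self.t0 >= self.t_max:
            burst, self.buf = self.buf, []
            return burst                           # burst is ready to send
        return None                                # keep assembling
```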