Large-scale multi-objective optimization algorithms (LSMOAs) based on decision-variable grouping are an advanced approach to handling high-dimensional decision variables. In practical problems, however, the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization; hence, a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups according to the interactions among them. If the size of a group exceeds a set threshold, that group undergoes weighted overlapping grouping. Specifically, interaction strength is evaluated from the interaction frequency and the number of objectives shared among decision variables; the variable with the strongest interaction in the group is identified and temporarily set aside, the remaining variables are reclassified into subgroups, and finally that strongest-interacting variable is added to each subgroup. MOEAWOD thus minimizes the interaction between groups and maximizes the interaction within groups, which guides convergence and diversity exploration in each group. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of the method; compared with the other algorithms, it remains at an advantage.
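A minimal sketch of the weighted overlapping-grouping step described above, assuming an interaction matrix is already available; the connected-component grouping and helper names are illustrative rather than the paper's exact procedure.

```python
import numpy as np

def connected_components(adj, labels):
    """Plain connected-component grouping over a boolean adjacency matrix."""
    seen, comps = set(), []
    for i in range(len(labels)):
        if i in seen:
            continue
        stack, comp = [i], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(labels[v])
            stack.extend(j for j in range(len(labels)) if adj[v, j] and j not in seen)
        comps.append(comp)
    return comps

def overlapping_grouping(interaction, threshold):
    """Group variables by interaction; if a group is too large, set aside its most
    strongly interacting variable, regroup the rest, then add that variable back
    to every subgroup (the overlap)."""
    n = interaction.shape[0]
    final = []
    for g in connected_components(interaction > 0, list(range(n))):
        if len(g) <= threshold:
            final.append(g)
            continue
        strength = {v: sum(interaction[v, u] for u in g if u != v) for v in g}
        hub = max(strength, key=strength.get)                 # strongest interactor
        rest = [v for v in g if v != hub]
        for sub in connected_components(interaction[np.ix_(rest, rest)] > 0, rest):
            final.append(sub + [hub])                         # hub joins each subgroup
    return final

# toy case: variable 0 interacts with 1, 2 and 3; removing it splits {1, 2} from {3}
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (0, 3), (1, 2), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
print(overlapping_grouping(A, threshold=3))   # e.g. [[3, 0], [2, 1, 0], [4, 5]]
```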
Discrete curves are composed of ordered discrete points distributed at the intersection of a scanning plane and the surface of an object. To accurately calculate the geometric characteristics of any point on a discrete curve, a distance-based Gaussian weighted algorithm is proposed to estimate the geometric characteristics of three-dimensional discrete space curves. Starting from the definition of discrete derivatives, the algorithm fully accounts for the relative positions of a point and its neighbouring points, introduces distance weighting, and integrates a smoothing strategy. The experiments use two spatial discrete curves with uniform and non-uniform sampling and compare the method with two commonly used estimation algorithms in terms of sampling density, neighbourhood radius, and noise resistance. The results show that the Gaussian distance-weighted algorithm is effective and provides an efficient tool for underground pipeline safety detection.
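A minimal sketch of the distance-weighted idea for one geometric quantity, the unit tangent: each neighbour's chord direction is weighted by a Gaussian of its distance to the point, which also smooths noise. The kernel width `sigma` and the neighbourhood `radius` are illustrative parameters, and the paper's discrete-derivative formulas may differ.

```python
import numpy as np

def gaussian_weighted_tangent(points, i, radius=3, sigma=1.0):
    """Estimate the unit tangent at points[i] on an ordered 3-D discrete curve.
    Each neighbour contributes an oriented chord direction weighted by a Gaussian
    of its Euclidean distance to the point."""
    p = points[i]
    acc = np.zeros(3)
    for j in range(max(0, i - radius), min(len(points), i + radius + 1)):
        if j == i:
            continue
        d = points[j] - p
        dist = np.linalg.norm(d)
        if dist == 0.0:
            continue
        w = np.exp(-dist**2 / (2.0 * sigma**2))      # distance-based Gaussian weight
        acc += np.sign(j - i) * w * d / dist          # oriented unit chord
    norm = np.linalg.norm(acc)
    return acc / norm if norm > 0 else acc

# toy helix sampled at 100 points
t = np.linspace(0, 4 * np.pi, 100)
curve = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
print(gaussian_weighted_tangent(curve, 50))
```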
Blind separation of sparse sources (BSSS) is discussed. The BSSS method based on conventional K-means clustering is very fast and easy to implement, but its accuracy is generally not satisfactory. The contribution of the observation vector x(t) is theoretically proved to be unequal for different moduli, and a weighted K-means clustering method is proposed on these grounds. The proposed algorithm is as fast as conventional K-means clustering, yet achieves considerably more accurate results, as demonstrated by numerical experiments.
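A minimal sketch of per-sample weighted K-means; weighting each sample by the modulus of x(t) is an illustrative choice, since the abstract does not give the paper's exact weighting function.

```python
import numpy as np

def weighted_kmeans(X, k, weights, n_iter=100, seed=0):
    """K-means with per-sample weights: centroids are weighted means, so samples
    with larger weight pull their cluster centre harder."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            mask = labels == c
            if mask.any():
                w = weights[mask]
                centers[c] = (w[:, None] * X[mask]).sum(axis=0) / w.sum()
    return centers, labels

# In sparse-source mixtures, samples with a larger modulus carry more direction
# information, so a natural (illustrative) choice is to weight by the norm of x(t).
X = np.random.randn(500, 2)
weights = np.linalg.norm(X, axis=1)
centers, labels = weighted_kmeans(X, 3, weights)
```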
To overcome the disadvantages of the k-means algorithm, namely that the number of clusters must be known in advance and that the result is sensitive to the choice of initial cluster centers, an improved k-means clustering algorithm is proposed. First, the silhouette coefficient is introduced, and the optimal cluster number Kopt of a data set with unknown class information is determined by computing the silhouette coefficient of the clustered objects under different values of K. Then the distribution of the data set is obtained through hierarchical clustering and the initial cluster centers are determined. Finally, clustering is completed by traditional k-means. Theoretical analysis proves that the improved k-means algorithm has acceptable computational complexity. Experimental results on the IRIS test data set show that the algorithm distinguishes different clusters reasonably, recognizes outliers efficiently, and produces clusters with lower entropy.
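A sketch of this pipeline using scikit-learn as a stand-in; scoring each candidate K with ordinary k-means labels in step 1 is a simplification, and the paper's silhouette computation may differ.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score

X = load_iris().data

# 1) pick K by the silhouette coefficient
scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
k_opt = max(scores, key=scores.get)

# 2) initial centers from hierarchical clustering (mean of each hierarchical cluster)
hier = AgglomerativeClustering(n_clusters=k_opt).fit_predict(X)
init_centers = np.array([X[hier == c].mean(axis=0) for c in range(k_opt)])

# 3) refine with standard k-means started from those centers
final = KMeans(n_clusters=k_opt, init=init_centers, n_init=1).fit(X)
print(k_opt, final.inertia_)
```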
An improved parallel weighted bit-flipping (PWBF) algorithm is presented. To accelerate the exchange of information between check nodes and variable nodes, the bit-flipping step and the check-node updating step of the original algorithm are parallelized. Simulation experiments demonstrate that the improved PWBF algorithm provides about 0.1 to 0.3 dB coding gain over the original PWBF algorithm and achieves a higher convergence rate. The choice of the threshold used to decide whether a bit should be flipped during each iteration is also discussed: an appropriate threshold ensures that most erroneous bits are flipped while correct bits are left untouched. The improvement is particularly effective for decoding quasi-cyclic low-density parity-check (QC-LDPC) codes.
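For intuition, a compact sketch of a basic parallel weighted bit-flipping decoder: check reliabilities are the smallest channel magnitudes, and every bit whose flipping metric exceeds a threshold is flipped in the same iteration. The metric and the fixed threshold follow the textbook WBF formulation, not the paper's improved update rules.

```python
import numpy as np

def pwbf_decode(H, y, delta=0.0, max_iter=50):
    """Parallel weighted bit-flipping on parity-check matrix H (m x n) and received
    BPSK values y (bit 0 -> +1, bit 1 -> -1). All bits whose flipping metric
    exceeds `delta` are flipped in the same iteration."""
    bits = (y < 0).astype(int)                                      # hard decisions
    w = np.array([np.abs(y[H[m] == 1]).min() for m in range(H.shape[0])])  # check reliabilities
    for _ in range(max_iter):
        syndrome = (H @ bits) % 2
        if not syndrome.any():
            break                                                   # all checks satisfied
        # unsatisfied checks vote +w for flipping, satisfied checks vote -w
        E = ((2 * syndrome - 1) * w) @ H
        bits = np.where(E > delta, 1 - bits, bits)                  # parallel flip
    return bits

# tiny example: 3x7 parity-check matrix, all-zeros codeword, one sign error
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.ones(7); y[2] = -0.8
# the threshold matters (as the abstract discusses): too low flips correct bits
print(pwbf_decode(H, y, delta=1.0))      # recovers the all-zeros codeword
```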
In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, so as to minimize the mean squared distance from each data point to its nearest center. In this paper, we present a simple and efficient clustering algorithm based on the k-means algorithm, which we call the enhanced k-means algorithm. The algorithm is easy to implement, requiring only a simple data structure that keeps some information from each iteration to be used in the next. Our experimental results demonstrate that the scheme can improve the computational speed of the k-means algorithm by a magnitude in the total number of distance calculations and in the overall computation time.
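The abstract does not spell out the stored information, so the sketch below shows one common realization of the idea: each point remembers its distance to its assigned centre, and the full search over all centres is repeated only for points whose centre moved away from them.

```python
import numpy as np

def enhanced_kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm with a simple reuse rule: if a point's current centre got
    closer (or no farther) after the centre update, the point keeps its assignment
    and its distances to all other centres are not recomputed."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    labels, best = d.argmin(1), d.min(1)
    for _ in range(n_iter):
        new_centers = np.array([X[labels == c].mean(0) if (labels == c).any()
                                else centers[c] for c in range(k)])
        moved = np.linalg.norm(new_centers - centers, axis=1)
        centers = new_centers
        d_own = np.linalg.norm(X - centers[labels], axis=1)
        recheck = d_own > best                     # only these need the full search
        best = np.where(recheck, best, d_own)
        if recheck.any():
            d = np.linalg.norm(X[recheck, None] - centers[None], axis=2)
            labels[recheck], best[recheck] = d.argmin(1), d.min(1)
        if not moved.any():
            break
    return centers, labels

X = np.vstack([np.random.randn(100, 2) + m for m in ([0, 0], [6, 0], [0, 6])])
centers, labels = enhanced_kmeans(X, 3)
```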
Fractional vegetation cover (FVC) is an important parameter for measuring crop growth, and in crop growth monitoring it is very important to extract FVC quickly and accurately. As the most widely used FVC extraction method, the photographic method has the advantages of simple operation and high extraction accuracy; however, when soil moisture and acquisition times vary, the extraction results become less accurate. To accommodate various conditions of FVC extraction, this study proposes a new method that extracts FVC from a normalized difference vegetation index (NDVI) greyscale image of wheat using a density peak k-means (DPK-means) algorithm. Yangfumai 4 (YF4) planted in pots and Yangmai 16 (Y16) planted in the field were used as the research materials. With a hyperspectral imaging camera mounted on a tripod, ground hyperspectral images of winter wheat under different soil conditions (dry and wet) were collected at 1 m above the potted wheat canopy, and unmanned aerial vehicle (UAV) hyperspectral images of winter wheat at various stages were collected at 50 m above the field wheat canopy by a UAV equipped with a hyperspectral camera. The pixel dichotomy method and the DPK-means algorithm were used to classify vegetation and non-vegetation pixels in the NDVI greyscale images, and the extraction effects of the two methods were compared and analysed. The results showed that extraction by pixel dichotomy was influenced by the acquisition conditions and its error distribution was relatively scattered, while the DPK-means algorithm was less affected by the acquisition conditions and its error distribution was concentrated. The absolute errors were 0.042 and 0.044, the root mean square errors (RMSE) were 0.028 and 0.030, and the fitting accuracy R2 of the FVC was 0.87 and 0.93 under dry and wet soil conditions and under various time conditions, respectively. The DPK-means algorithm thus achieved more accurate results than the pixel dichotomy method under various soil and time conditions and is an accurate and robust method for FVC extraction.
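A minimal sketch of the clustering-based extraction step: NDVI values are split into two clusters and FVC is reported as the vegetation-pixel fraction. Plain k-means stands in here for the density-peak-initialised DPK-means, and the toy image is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def fvc_from_ndvi(ndvi):
    """Split NDVI pixels into vegetation / non-vegetation by clustering the NDVI
    values into two groups, then report FVC as the vegetation pixel fraction."""
    values = ndvi.reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(values)
    # the cluster with the higher mean NDVI is taken to be vegetation
    veg = labels == np.argmax([values[labels == c].mean() for c in (0, 1)])
    return veg.mean()

# toy NDVI image: background around 0.15, square vegetation patch around 0.7
ndvi = np.full((100, 100), 0.15) + 0.02 * np.random.randn(100, 100)
ndvi[30:70, 30:70] = 0.7 + 0.05 * np.random.randn(40, 40)
print("FVC =", round(fvc_from_ndvi(ndvi), 3))   # about 0.16 for this toy image
```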
A weighted algorithm for watermarking relational databases for copyright protection is presented. The probability of watermarking an attribute is assigned according to its weight, which is decided by the owner of the database. A one-way hash function and a secret key known only to the owner of the data are used to select the tuples and bits to mark. By assigning high weights to significant attributes, the scheme ensures that important attributes have a greater chance of being marked than less important ones. Experimental results show that the proposed scheme is robust against various forms of attack and has perfect immunity to subset attacks.
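A minimal sketch of the keyed selection step, assuming numeric attributes and least-significant-bit embedding; the tuple-selection rate `gamma`, the weight table, and the HMAC construction are illustrative conventions rather than the paper's exact scheme.

```python
import hmac, hashlib

SECRET_KEY = b"owner-only-key"                        # known only to the data owner
weights = {"price": 0.8, "stock": 0.5, "note": 0.1}   # owner-assigned attribute weights

def h(key, *parts):
    """One-way keyed hash of the concatenated parts, returned as an integer."""
    msg = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big")

def mark_tuple(primary_key, row, gamma=10):
    """Mark roughly 1/gamma of the tuples; within a selected tuple an attribute is
    marked with probability proportional to its weight, by embedding one key-derived
    bit into the least significant bit of its value."""
    if h(SECRET_KEY, primary_key) % gamma != 0:
        return row                                    # tuple not selected
    marked = dict(row)
    for attr, weight in weights.items():
        rnd = (h(SECRET_KEY, primary_key, attr) % 1000) / 1000.0
        if rnd < weight:                              # heavier attributes marked more often
            bit = h(SECRET_KEY, attr, primary_key) % 2
            marked[attr] = (marked[attr] & ~1) | bit  # set the LSB
    return marked

print(mark_tuple(42, {"price": 199, "stock": 7, "note": 3}))
```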
A class of nonidentical parallel machine scheduling problems is considered in which the goal is to minimize the total weighted completion time. Models and relaxations are collected. Most of these problems are NP-hard in the strong sense or remain open, so approximation algorithms are studied. The review reveals several potential areas worthy of further research.
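As a concrete illustration of the objective, a small sketch computes the total weighted completion time of a greedy schedule on two machines; the WSPT-style priority and earliest-finish assignment are generic heuristics for illustration, not algorithms from the survey.

```python
# Schedule jobs on two nonidentical (unrelated) machines with a greedy rule and
# compute the total weighted completion time sum_j w_j * C_j.
jobs = [                     # (weight, processing time on machine 0, on machine 1)
    (3, 2, 4),
    (1, 5, 2),
    (2, 3, 3),
]
machine_time = [0, 0]
total = 0
# order by weight / best processing time (a WSPT-style priority), then assign each
# job to the machine on which it would finish earliest
for w, *p in sorted(jobs, key=lambda j: -j[0] / min(j[1:])):
    m = min(range(2), key=lambda i: machine_time[i] + p[i])
    machine_time[m] += p[m]
    total += w * machine_time[m]
print(total)                 # 17 for this instance
```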
The current Grover quantum search algorithm cannot distinguish the importance of search targets when it is applied to an unsorted quantum database: the probability of finding each target is equal. To solve this problem, a Grover search algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance, and these coefficients are used to represent the targets as a quantum superposition state. Second, a Grover search algorithm based on this weighted-target superposition is constructed. With this algorithm, the probability of obtaining each target approximates its weight coefficient, which shows the flexibility of the algorithm. Finally, the validity of the algorithm is demonstrated with a simple search example.
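A small state-vector simulation of the weighted-target idea: the start state gives each target an amplitude proportional to the square root of its weight, and amplitude amplification then boosts the target subspace while preserving those ratios. The state preparation and iteration count are illustrative, not the paper's construction.

```python
import numpy as np

N = 16                                   # database size
weights = {3: 0.8, 12: 0.2}              # target indices and their importance weights
M = len(weights)

# initial state: target amplitudes proportional to sqrt(w_i), the rest uniform,
# with the same total target probability M/N as an ordinary uniform start
psi0 = np.full(N, np.sqrt((1 - M / N) / (N - M)))
for t, w in weights.items():
    psi0[t] = np.sqrt(w * M / N)

oracle = np.ones(N); oracle[list(weights)] = -1        # sign flip on the targets
diffusion = 2 * np.outer(psi0, psi0) - np.eye(N)       # reflection about psi0

theta = np.arcsin(np.sqrt(M / N))
psi = psi0.copy()
for _ in range(int(np.pi / (4 * theta))):              # ~optimal number of iterations
    psi = diffusion @ (oracle * psi)

probs = psi ** 2
print({t: round(probs[t], 3) for t in weights})        # roughly proportional to the weights
```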
In conventional computed tomography (CT) reconstruction based on a fixed voltage, the projection data often appear overexposed or underexposed, and as a result the reconstructed images are poor. To solve this problem, variable-voltage CT reconstruction has been proposed. The effective projection sequences of a structural component are obtained by varying the voltage, and the total variation is adjusted and minimized to optimize the reconstruction on the basis of iterative imaging with the algebraic reconstruction technique (ART). During reconstruction, the image reconstructed at a low voltage is used as the initial value for the effective projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage, according to the grey-weighted algorithm; thereby the complete structural information is reconstructed. Simulation results show that the proposed algorithm can completely reflect the information of a complicated structural component, and the pixel values are more stable than those of the conventional method.
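A minimal sketch of the ART (Kaczmarz) building block with an explicit initial image, which is the piece re-used from one voltage level to the next in the scheme above; the relaxation factor and the toy system are illustrative.

```python
import numpy as np

def art_sweep(A, b, x, lam=0.5, n_sweeps=50):
    """Algebraic reconstruction technique: for each ray (row of A), correct x along
    that row so the measured value b[i] is better matched. `x` is the initial image
    estimate, e.g. the result obtained at the previous voltage level."""
    x = x.astype(float).copy()
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# toy 2x2 "image" (flattened to 4 pixels) observed through four ray sums
A = np.array([[1., 1., 0., 0.],     # ray through pixels 0, 1
              [0., 0., 1., 1.],     # ray through pixels 2, 3
              [1., 0., 1., 0.],     # ray through pixels 0, 2
              [0., 1., 0., 1.]])    # ray through pixels 1, 3
true_img = np.array([1., 2., 3., 4.])
b = A @ true_img
print(art_sweep(A, b, x=np.zeros(4)))   # converges towards [1, 2, 3, 4]
```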
The maximum weighted matching problem in bipartite graphs is one of the classic combinatorial optimization problems and arises in many different applications. Ordered binary decision diagrams (OBDDs), algebraic decision diagrams (ADDs), and variants thereof provide canonical forms to represent and manipulate Boolean and pseudo-Boolean functions efficiently. ADD- and OBDD-based symbolic algorithms give improved results for large-scale combinatorial optimization problems by searching nodes and edges implicitly. We present a novel symbolic ADD formulation and algorithm for maximum weighted matching in bipartite graphs. The symbolic algorithm implements the Hungarian algorithm in terms of ADD and OBDD formulation and manipulation. It begins by setting feasible labelings of the nodes and then iterates through a sequence of phases, each divided into two stages: building the equality bipartite graph, and finding a maximum-cardinality matching in that equality graph. The second stage iterates through the following steps: greedily searching for an initial matching, building a layered network, backward-traversing node-disjoint augmenting paths, updating the cardinality matching, and building the residual network. The symbolic algorithm does not require explicit enumeration of the nodes and edges and can therefore handle many complex executions in each step. Simulation experiments indicate that the symbolic algorithm is competitive with traditional algorithms.
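For reference, the same problem solved explicitly (non-symbolically) on a small weight matrix with SciPy's Hungarian-style assignment solver; the symbolic ADD/OBDD machinery described in the abstract is not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# explicit weight matrix: rows are left nodes, columns are right nodes
W = np.array([[4, 1, 3],
              [2, 0, 5],
              [3, 2, 2]])

rows, cols = linear_sum_assignment(W, maximize=True)   # Hungarian-style solver
matching = list(zip(rows.tolist(), cols.tolist()))
print(matching, W[rows, cols].sum())                    # e.g. [(0, 0), (1, 2), (2, 1)], weight 11
```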
As data grows in size, search engines face new challenges in extracting more relevant content for users' searches. As a result, a number of retrieval and ranking algorithms have been employed to ensure that the results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work, as well as the elements that contribute to higher ranks. The paper addresses the issue of bias by proposing a new ranking algorithm based on PageRank (PR), one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between these various measures. The weighted PageRank model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings show that using multiple criteria to rank final pages is better than using only one, and that some criteria have a greater impact on ranking results than others.
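A minimal sketch of weighted PageRank by power iteration, where each page splits its score across out-links in proportion to the edge weights; the damping factor 0.85 and the example weights are conventional illustrations, not the criteria weights studied in the paper.

```python
import numpy as np

def weighted_pagerank(W, d=0.85, tol=1e-10, max_iter=200):
    """Power iteration for PageRank on a weighted link matrix W, where W[i, j] is
    the weight of the link from page i to page j. Each page distributes its score
    to its out-links in proportion to the link weights."""
    n = W.shape[0]
    outdeg = W.sum(axis=1, keepdims=True)
    # normalise rows; dangling pages (no out-links) spread their score evenly
    P = np.divide(W, outdeg, out=np.full_like(W, 1.0 / n), where=outdeg > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# four pages; the 3 -> 0 link is considered more important and gets a higher weight
W = np.array([[0., 1., 1., 0.],
              [0., 0., 1., 0.],
              [1., 0., 0., 1.],
              [2., 0., 0., 0.]])
print(weighted_pagerank(W).round(3))
```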
With the increasing variety of application software in meteorological satellite ground systems, more and more attention is being paid to how to provision reasonable hardware resources and improve software efficiency. In this paper, a software classification method based on software operating characteristics is proposed. The method uses run-time resource consumption to describe the running characteristics of the software. First, principal component analysis (PCA) is used to reduce the dimensionality of the software running-feature data and to interpret the software characteristic information. Then a modified K-means algorithm is used to classify the meteorological data processing software. Finally, the results of the principal component analysis are combined to explain the operating characteristics of each software class, which serves as the basis for optimizing the allocation of hardware resources and improving the efficiency of software operation.
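A compact sketch of the PCA-plus-K-means pipeline on hypothetical run-time resource features; the feature names, counts, and the use of plain scikit-learn KMeans (rather than the paper's modified K-means) are assumptions for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# hypothetical run-time resource features per software package:
# [cpu_percent, memory_MB, disk_MB_per_s, network_MB_per_s, run_minutes]
rng = np.random.default_rng(0)
features = rng.random((60, 5)) * [100, 8000, 200, 50, 120]

X = StandardScaler().fit_transform(features)        # resources live on very different scales
pca = PCA(n_components=2).fit(X)
X_low = pca.transform(X)                             # reduced running-feature data

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_low)

# interpret each class with the PCA loadings and per-cluster feature means
print("explained variance:", pca.explained_variance_ratio_.round(2))
for c in range(3):
    print(f"class {c}: mean features = {features[labels == c].mean(axis=0).round(1)}")
```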
The traditional K-means clustering algorithm has difficulty determining the number of clusters, is sensitive to the initialization of the cluster centers, and easily falls into local optima. This paper proposes a clustering algorithm based on a self-organizing map network and weighted particle swarm optimization, SOM&WPSO (Self-Organizing Map and Weighted Particle Swarm Optimization). First, the algorithm uses the competitive learning mechanism of a self-organizing map network to divide the data samples into coarse clusters and obtain the cluster centers. Then the obtained cluster centers are used as the initialization parameters of the weighted particle swarm optimization algorithm: the particle positions, traditionally the cluster centers, are replaced by sample weights, the cluster centers act as the "food" of the particle swarm, and each particle moves toward its nearest cluster center. Each iteration optimizes the particle positions and velocities, and K-means and K-medoids recalculate the cluster centers and cluster partitions until the algorithm converges. Extensive experimental analysis on commonly used UCI data sets shows that the proposed approach not only overcomes the K-means algorithm's dependence on the initial cluster centers and improves clustering accuracy, but also avoids falling into local optima and exhibits good global convergence.
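A rough sketch of the two-stage pipeline using the third-party `minisom` package for the SOM stage; to keep it short, the weighted-PSO refinement is replaced here by plain k-means started from the SOM codebook vectors.

```python
import numpy as np
from minisom import MiniSom                 # third-party SOM implementation
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data.astype(float)

# stage 1: coarse clustering with a small self-organizing map
som = MiniSom(1, 3, X.shape[1], sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)
coarse_centers = som.get_weights().reshape(-1, X.shape[1])   # 3 codebook vectors

# stage 2: refinement started from the SOM centers
# (the paper refines with weighted particle swarm optimization; plain k-means is
#  used here only to keep the sketch short)
km = KMeans(n_clusters=len(coarse_centers), init=coarse_centers, n_init=1).fit(X)
print(km.cluster_centers_.round(2))
```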
This study used the Topological Weighted Centroid (TWC) method to analyze the coronavirus outbreak in Brazil. The analysis uses only the latitude and longitude information of the capitals with confirmed cases on May 24, 2020 to illustrate the usefulness of TWC, though any date could have been used. There are three types of TWC analysis, each type having five associated algorithms that produce fifteen maps: TWC-Original, TWC-Frequency and TWC-Windowing. We focus on TWC-Original to illustrate our approach. Without using any transportation information, the TWC method predicts a network for the COVID-19 outbreak that matches the main radial transportation routes in Brazil very well.
The K-means method is one of the most widely used clustering methods and has been applied in many fields of science and technology. One of the major problems of the k-means algorithm is that it may produce empty clusters, depending on the initial center vectors. Genetic algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary principles of natural selection and genetics. This paper presents a hybrid version of the k-means algorithm with GAs that efficiently eliminates this empty-cluster problem. Results of simulation experiments on several data sets support this claim.
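The abstract does not spell out the GA operators, so the sketch below is a generic hybrid for intuition: candidate centre sets are evolved with an empty-cluster penalty, uniform crossover, Gaussian mutation, and one k-means step of local refinement per child.

```python
import numpy as np

def fitness(centers, X):
    """Negative within-cluster squared error, with a heavy penalty for empty clusters."""
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    labels = d.argmin(1)
    empty = len(centers) - len(np.unique(labels))
    return -(d.min(1) ** 2).sum() - 1e6 * empty

def ga_kmeans(X, k, pop_size=20, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    pop = [X[rng.choice(len(X), k, replace=False)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c, X) for c in pop]
        order = np.argsort(scores)[::-1]
        parents = [pop[i] for i in order[: pop_size // 2]]            # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = np.where(rng.random((k, 1)) < 0.5, parents[a], parents[b])  # crossover
            child = child + rng.normal(0, 0.05, child.shape)                    # mutation
            # one Lloyd step as local refinement; empty clusters keep their centre
            labels = np.linalg.norm(X[:, None] - child[None], axis=2).argmin(1)
            child = np.array([X[labels == c].mean(0) if (labels == c).any() else child[c]
                              for c in range(k)])
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, X))

X = np.vstack([np.random.randn(50, 2) + m for m in ([0, 0], [5, 5], [0, 5])])
print(ga_kmeans(X, 3).round(2))
```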