Nowadays, achieving secure communication and protecting sensitive data from unauthorized access over public networks are the main concerns in cloud servers. Hence, to secure both data and keys and to ensure secure data storage and access, our proposed work designs a novel Quantum Key Distribution (QKD) scheme built on a non-commutative encryption framework. It makes use of a novel Quantum Key Distribution approach, which guarantees highly secure data transmission. Along with this, a shared secret is generated using Diffie-Hellman (DH) to certify secure key generation at reduced time complexity. Moreover, a non-commutative approach is used, which effectively allows users to store and access encrypted data in the cloud server. Also, to prevent data loss or corruption caused by insiders in the cloud, an Optimized Genetic Algorithm (OGA) is utilized, which effectively recovers missing data without loss. Decryption then follows whenever requested by the user. Thus, our proposed framework ensures authentication and paves the way for secure data access, with enhanced performance and reduced complexity compared with prior works.
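As a concrete illustration of the Diffie-Hellman step described above, the following minimal Python sketch derives a shared secret. The group parameters are toy-sized demonstration assumptions; the surrounding QKD and non-commutative machinery of the paper is not reproduced here.

```python
# A minimal, toy sketch of Diffie-Hellman shared-secret generation.
import secrets

p = 2**127 - 1   # Mersenne prime, toy-sized; real deployments use
                 # standardized 2048-bit+ groups
g = 3            # illustrative generator choice

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent
A = pow(g, a, p)                   # Alice's public value
B = pow(g, b, p)                   # Bob's public value

# Both parties derive the same secret: (g^a)^b == (g^b)^a (mod p)
assert pow(B, a, p) == pow(A, b, p)
shared_secret = pow(B, a, p)
```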
Task scheduling is the main problem in cloud computing that reduces system performance; it is an important way to arrange user needs and meet multiple goals. Cloud computing is the most popular technology nowadays and offers much research potential in areas such as resource allocation, task scheduling, security, and privacy. To improve system performance, an efficient task-scheduling algorithm is required. Existing task-scheduling algorithms focus on task resource requirements, CPU memory, execution time, and execution cost. In this paper, a task-scheduling algorithm based on a Genetic Algorithm (GA) is presented for assigning and executing different tasks. The proposed algorithm aims to minimize both the completion time and execution cost of tasks and to maximize resource utilization. We evaluate our algorithm's performance by applying it to two examples with different numbers of tasks and processors. The first example contains ten tasks and four processors; the computation costs are generated randomly. The second example has eight processors, and the number of tasks ranges from twenty to seventy; the computation cost of each task on different processors is generated randomly. The achieved results show that the proposed approach significantly succeeds in finding optimal solutions for the three objectives: completion time, execution cost, and resource utilization.
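To make the encoding concrete, here is a hedged sketch of how such a GA chromosome and combined fitness might look. The cost table, per-processor prices, and scalarization weights are invented for illustration and are not the paper's values.

```python
# Sketch of a GA chromosome and fitness for cloud task scheduling.
import random

N_TASKS, N_PROCS = 10, 4
# cost[t][p]: execution time of task t on processor p (random demo data)
cost = [[random.uniform(1, 10) for _ in range(N_PROCS)] for _ in range(N_TASKS)]
price = [0.5, 1.0, 1.5, 2.0]   # assumed per-time-unit price of each processor

def fitness(chrom):
    """chrom[t] = processor assigned to task t; lower score is better."""
    loads = [0.0] * N_PROCS
    money = 0.0
    for t, p in enumerate(chrom):
        loads[p] += cost[t][p]
        money += cost[t][p] * price[p]
    makespan = max(loads)
    utilization = sum(loads) / (N_PROCS * makespan)   # 1.0 = perfectly balanced
    # Weighted scalarization of the three objectives (weights assumed)
    return 0.5 * makespan + 0.3 * money - 0.2 * utilization

population = [[random.randrange(N_PROCS) for _ in range(N_TASKS)]
              for _ in range(50)]
best = min(population, key=fitness)
```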
Task scheduling in highly elastic and dynamic processing environments such as cloud computing has become the most discussed problem among researchers. Task scheduling algorithms are responsible for allocating tasks among the computing resources for execution, and an inefficient task scheduling algorithm results in under- or over-utilization of the resources, which in turn leads to degradation of the services. Therefore, in the proposed work, load balancing is considered an important criterion for task scheduling in a cloud computing environment, as it can help reduce the overhead in this critical decision-oriented process. In this paper, we propose an adaptive genetic algorithm-based load-balancing (GALB)-aware task scheduling technique that not only results in better utilization of resources but also helps optimize the values of key performance indicators such as makespan, performance improvement ratio, and degree of imbalance. The concept of adaptive crossover and mutation is used in this work, which results in better adaptation of the fittest individuals of the current generation and prevents them from being eliminated. The CloudSim simulator has been used to carry out the simulations, and the obtained results establish that the proposed GALB algorithm performs better on all the key indicators and outperforms the peer algorithms taken into consideration.
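The adaptive-operator idea can be sketched as follows. The adaptation rule and constants follow the classic Srinivas-Patnaik scheme and are assumptions, since the paper's own formulas are not reproduced here.

```python
# Sketch of adaptive crossover/mutation probabilities in the spirit of
# GALB's adaptive operators (constants k1..k4 are assumed).
def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
    """Return (crossover_prob, mutation_prob) for an individual with
    fitness f, given population max f_max and mean f_avg (maximization)."""
    if f >= f_avg:
        # Fitter-than-average individuals are disturbed less, so the
        # current best solutions survive into the next generation.
        denom = max(f_max - f_avg, 1e-12)
        pc = k1 * (f_max - f) / denom
        pm = k3 * (f_max - f) / denom
    else:
        # Below-average individuals get full disruption to explore.
        pc, pm = k2, k4
    return pc, pm
```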
Genetic Algorithm (GA) has been widely used to solve various optimization problems. As the solving process of GA requires large storage and computing resources, it is well motivated to outsource the solving process of GA to the cloud server. However, the algorithm user would never want his data to be disclosed to the cloud server. Thus, it is necessary for the user to encrypt the data before transmitting them to the server. But the user then encounters a new problem: the arithmetic operations we are familiar with cannot work directly in the ciphertext domain. In this paper, a privacy-preserving outsourced genetic algorithm is proposed. The user's data are protected by a homomorphic encryption algorithm which supports operations in the encrypted domain. GA is elaborately adapted to search for the optimal result over the encrypted data. The security analysis and experimental results demonstrate the effectiveness of the proposed scheme.
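The additive homomorphism that makes such outsourcing possible can be demonstrated with a toy Paillier implementation. Key sizes here are demonstration-only, and the paper's specific scheme and its GA adaptations on ciphertexts are not reproduced.

```python
# Toy Paillier cryptosystem: the server can add encrypted values by
# multiplying ciphertexts, without ever seeing the plaintexts.
from math import gcd

p, q = 293, 433                 # toy primes; real keys use 1024-bit+ primes
n = p * q
n2 = n * n
g = n + 1                       # standard simplified generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda = lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m, r):
    assert gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(42, 17), encrypt(58, 23)
# Homomorphic addition: multiplying ciphertexts adds plaintexts
assert decrypt((c1 * c2) % n2) == 100
```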
In order to solve the problem that the resource scheduling time of a cloud data center is too long, this paper analyzes the two-stage resource scheduling mechanism of the cloud data center. Aiming at the minimum task completion time, a mathematical model of resource scheduling in the cloud data center is established. The two-stage resource scheduling optimization simulation is realized using the conventional genetic algorithm. On the basis of the conventional genetic algorithm, an adaptive transformation operator is designed to improve the crossover and mutation of the genetic algorithm. The experimental results show that the improved genetic algorithm can significantly reduce the total completion time of the tasks, and has good convergence and global optimization ability.
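One plausible form of the minimum-completion-time objective such a model might take (an assumption, since the paper's model is not shown) is:

```latex
% Minimize the latest finishing time over all m resources, where
% x_{ij} = 1 if task i runs on resource j and t_{ij} is its
% execution time there.
\min_{x}\; C_{\max} = \min_{x}\,\max_{j=1,\dots,m} \sum_{i=1}^{n} x_{ij}\, t_{ij},
\qquad \text{s.t.}\;\; \sum_{j=1}^{m} x_{ij} = 1 \;\;(\forall i),\quad x_{ij}\in\{0,1\}.
```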
Non-linearity and parameter time-variance are inherent properties of lateral motions of a vehicle. How to effectively control intelligent vehicle (IV) lateral motions is a challenging task. Controller design can be regarded as a process of searching for the optimal structure in the controller structure space and the optimal parameters in the parameter space. Based on this view, an intelligent vehicle lateral-motion controller was designed. The controller structure was constructed by a T-S fuzzy neural network (FNN), and its parameters were searched and selected with a genetic algorithm (GA). The simulation results indicate that the designed controller has strong robustness, high precision, and good ride quality, and it can effectively resolve the non-linearity and time-varying parameter problems of IV lateral motion.
With the development of computerized business applications, the amount of data is increasing exponentially. Cloud computing provides high-performance computing resources and mass storage resources for massive data processing. In distributed cloud computing systems, data-intensive computing can lead to data scheduling between data centers. Reasonable data placement can effectively reduce data scheduling between the data centers and improve the data acquisition efficiency of users. In this paper, a mathematical model of data scheduling between data centers is built. By means of the global optimization ability of the genetic algorithm, generational evolution produces better approximate solutions and finally yields the best approximation of the data placement. The experimental results show that the genetic algorithm can effectively work out an approximately optimal data placement and minimize data scheduling between data centers.
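A possible chromosome encoding for the placement problem is sketched below; the dependency structure and scoring are illustrative assumptions rather than the paper's model.

```python
# Sketch of a placement chromosome for datasets across data centers,
# scoring the cross-center transfers that the GA minimizes.
import random

N_DATASETS, N_CENTERS = 12, 3
# Pairs of datasets that some task needs together; placing a pair in
# different centers forces a cross-center transfer.
coupled = [(random.randrange(N_DATASETS), random.randrange(N_DATASETS))
           for _ in range(20)]

def transfers(placement):
    """placement[d] = data center holding dataset d; count cross-center
    accesses between coupled datasets."""
    return sum(1 for a, b in coupled if placement[a] != placement[b])

population = [[random.randrange(N_CENTERS) for _ in range(N_DATASETS)]
              for _ in range(40)]
best = min(population, key=transfers)
```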
Computational fluid dynamics (CFD) can give a lot of potentially very useful information for the hydraulic optimization design of pumps; however, it cannot directly state what kind of modification should be made to improve hydrodynamic performance. In this paper, a more convenient and effective approach is proposed through the combined use of CFD, a multi-objective genetic algorithm (MOGA) and artificial neural networks (ANN) for a double-channel pump's impeller, with maximum head and efficiency set as optimization objectives and four key geometrical parameters, including inlet diameter, outlet diameter, exit width and midline wrap angle, chosen as optimization parameters. Firstly, a multi-fidelity fitness assignment system is established, in which the fitness of impellers serving as training and comparison samples for the ANN is evaluated by CFD while the fitness of impellers generated by the MOGA is evaluated by the ANN, which dramatically reduces the computational expense. Then, a modified MOGA optimization process is developed, in which selection is performed independently in two sub-populations according to the two optimization objectives, and crossover and mutation are performed afterward in the merged population, to ensure the global optimal solution is found. Finally, the Pareto optimal frontier is found after 500 iterations, and two optimal design schemes are chosen according to the design requirements. The preliminary and optimal design schemes are compared, and the comparison shows that the hydraulic performances of both pumps 1 and 2 are improved: the head and efficiency of pump 1 increased by 5.7% and 5.2%, respectively, in the design working conditions, while shaft power decreased in all working conditions; the head and efficiency of pump 2 increased by 11.7% and 5.9%, respectively, while shaft power increased by 5.5%. Inner flow field analyses also show that the backflow phenomenon significantly diminishes at the entrance of the optimal impellers 1 and 2, and both the area and intensity of vortices decrease in the whole flow channel. This paper provides a promising tool to solve the hydraulic optimization problem of pump impellers.
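Independent of any pump specifics, the multi-fidelity fitness idea might be sketched like this: an ANN surrogate, trained on a few expensive CFD evaluations, scores the many GA candidates cheaply. Here expensive_cfd is a hypothetical stand-in for a real solver run, and the surrogate settings are assumptions.

```python
# Surrogate-assisted fitness: CFD for training samples, ANN for the GA.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_cfd(x):                      # placeholder for a CFD run
    return -np.sum((x - 0.5) ** 2)         # toy "efficiency" landscape

rng = np.random.default_rng(0)
X_train = rng.random((30, 4))              # 4 geometric parameters
y_train = np.array([expensive_cfd(x) for x in X_train])

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

candidates = rng.random((500, 4))          # GA-generated designs
scores = surrogate.predict(candidates)     # cheap fitness for the GA
best_design = candidates[np.argmax(scores)]
```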
In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms by the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications to large-scale power systems.
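A master-slave fitness evaluation of this global single-population kind might look as follows with mpi4py; the objective is a placeholder and the population sizes are arbitrary assumptions.

```python
# Master-slave parallel fitness evaluation with mpi4py
# (run with: mpiexec -n 4 python this_file.py).
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def fitness(ind):                 # placeholder objective (assumption)
    return sum(ind)

if rank == 0:
    population = [[random.randint(0, 1) for _ in range(28)]
                  for _ in range(32)]
    chunks = [population[i::size] for i in range(size)]
else:
    chunks = None

local = comm.scatter(chunks, root=0)            # master distributes
local_scores = [fitness(ind) for ind in local]  # workers evaluate
scores = comm.gather(local_scores, root=0)      # master collects
if rank == 0:
    print("evaluated", sum(len(s) for s in scores), "individuals")
```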
A novel immune genetic algorithm with elitist selection and elitist crossover was proposed, called the immune genetic algorithm with elitism (IGAE). In IGAE, new methods for computing antibody similarity, expected reproduction probability, and clonal selection probability are given. IGAE has three features. The first is that the similarities of two antibodies in structure and quality are both defined as percentages, which helps to describe the similarity of two antibodies more accurately and to reduce the computational burden effectively. The second is that with the elitist selection and elitist crossover strategy, IGAE is able to find the globally optimal solution of a given problem. The third is that the formula for the expected reproduction probability of an antibody can be adjusted through a parameter r, which helps to balance the population diversity and the convergence speed of IGAE so that IGAE can find the globally optimal solution of a given problem more rapidly. Two different complex multi-modal functions were selected to test the validity of IGAE. The experimental results show that IGAE can find the global maximum/minimum values of the two functions rapidly. The experimental results also confirm that IGAE has better performance in convergence speed, solution variation behavior, and computational efficiency compared with the canonical genetic algorithm with elitism and the immune genetic algorithm with information entropy and elitism.
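Percentage-based similarity measures of the kind described might be sketched as follows; both formulas are assumptions standing in for the paper's definitions.

```python
# Sketch of percentage-based antibody similarity: structural similarity
# from matching genes, quality similarity from the relative fitness gap.
def structure_similarity(a, b):
    """Fraction of matching genes between two bit-string antibodies."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def quality_similarity(fa, fb):
    """1.0 when fitness values are equal, falling toward 0 as they diverge."""
    return 1.0 - abs(fa - fb) / max(abs(fa), abs(fb), 1e-12)

a, b = [1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1]
print(structure_similarity(a, b))   # 0.833... -> 83.3%
```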
A neurocomputing model for Genetic Algorithm (GA) to break the speed bottleneck of GA was proposed. With all genetic operations implemented in parallel by NN-based sub-modules, the model integrates both the strong points of parallel GA (PGA) and those of hardware GA (HGA). Moreover, a new crossover operator named universe crossover was proposed to suit the NN-based realization. The model was tested with a benchmark function set, and the experimental results validated the potential of the neurocomputing model. The significance of this model is that HGA and PGA can be integrated and the inherent parallelism of GA can be explicitly and fully realized; as a result, the optimization speed of GA will be accelerated by one or two orders of magnitude compared to a serial implementation on hardware of the same speed, and GA will be turned from an algorithm into a machine.
An improved self-calibrating algorithm for visual servoing based on an adaptive genetic algorithm is proposed in this paper. Our approach introduces an extension of Mendonca-Cipolla and G. Chesi's self-calibration for the position-based visual servo technique, which exploits the singular value property of the essential matrix. Specifically, a suitable dynamic online cost function is generated according to the property of the three singular values. The visual servo process is carried out simultaneously with the dynamic self-calibration, and the cost function is then minimized using the adaptive genetic algorithm instead of the gradient descent method in G. Chesi's approach. Moreover, this method overcomes the limitation that the initial parameters must be selected close to the true values, which is not guaranteed in many cases. It is not necessary to know the camera intrinsic parameters exactly when using our approach; instead, coarse coding bounds of the five parameters are enough for the algorithm, which can be set once and for all offline. Besides, this algorithm does not require knowledge of the 3D model of the object. Simulation experiments are carried out, and the results demonstrate that the proposed approach provides a fast convergence speed and robustness against unpredictable perturbations of camera parameters, and that it is an effective and efficient visual servo algorithm.
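The singular-value property can be turned into a cost function roughly as follows; this is one common form from the self-calibration literature, not necessarily the paper's exact dynamic cost.

```python
# Cost built on the essential-matrix property that its two nonzero
# singular values are equal (and the third is zero).
import numpy as np

def essential_cost(F, K):
    """F: fundamental matrix; K: candidate intrinsic matrix.
    E = K^T F K has singular values (s, s, 0) for the true K."""
    E = K.T @ F @ K
    s1, s2, s3 = np.linalg.svd(E, compute_uv=False)  # s1 >= s2 >= s3
    return (s1 - s2) / s2 + s3 / s2   # 0 at a perfect essential matrix

# Toy check with a synthetic essential matrix E0 = [t]_x R:
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
E0 = tx @ R
print(essential_cost(E0, np.eye(3)))   # ~0: E0 already has (s, s, 0)

# A GA would search K's five intrinsic parameters within coarse bounds,
# minimizing essential_cost over the observed fundamental matrices.
```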
There are three difficult problems in the application of genetic algorithms, namely, parameter control, premature convergence, and the deception problem. Based on a genetic algorithm with varying population size, a self-adaptive genetic algorithm called the natural genetic algorithm (NGA) is proposed. This algorithm introduces the population size threshold and immigrant concepts, and adopts dynamically changing parameters. The design and structure of NGA are discussed in this paper, and its performance is also analyzed.
This paper describes an efficient solution to parallelize software program instructions, regardless of the programming language in which they are written. We solve the problem of the optimal distribution of a set of instructions on available processors. We propose a genetic algorithm to parallelize computations, using evolution to search the solution space. The stages of our proposed genetic algorithm are: the choice of the initial population and its representation in chromosomes, the crossover, and the mutation operations customized to the problem being dealt with. In this paper, genetic algorithms are applied to the entire search space of the program-instruction parallelization problem. This problem is NP-complete, so there are no polynomial algorithms that can scan the solution space and solve the problem. The genetic algorithm-based method is general, and it is simple and efficient to implement because it can be scaled to a larger or smaller number of instructions that must be parallelized. The parallelization technique proposed in this paper was developed in the C# programming language, and our results confirm the effectiveness of our parallelization method. Experimental results obtained and presented for different working scenarios confirm the theoretical results, and they provide insight on how to improve the exploration of a search space that is too large to be searched exhaustively.
In order to effectively solve complex optimization problems involving uncertain phenomena, this paper presents a probability simulation optimization approach using an orthogonal genetic algorithm. This approach seamlessly synthesizes computer simulation technology, the orthogonal genetic algorithm and statistical testing, and can solve complex optimization problems effectively. In this paper, the author gives the relevant concepts of probability simulation optimization and describes the probability simulation optimization approach using the orthogonal genetic algorithm in detail. Theoretically speaking, applying probability methods to complex optimization problems with uncertain phenomena has strong rationality and maneuverability. In the demonstration, the optimization performance of this method is better than that of other traditional methods. Simulation results suggest that the approach presented in this paper is feasible, correct and valid.
This article presents a multiobjective approach to the design of the controller for the swing-up and handstand control of a general cart-double-pendulum system (CDPS). The designed controller, which is based on the human-simulated intelligent control (HSIC) method, builds up different control modes to monitor and control the CDPS during four kinetic phases consisting of an initial oscillation phase, a swing-up phase, a posture adjustment phase, and a balance control phase. For the approach, the original method-of-inequalities-based (MoI) multiobjective genetic algorithm (MMGA) is extended and applied to the case study, which uses a set of performance indices that includes the cart displacement over the rail boundary, the number of swings, the settling time, the overshoot of the total energy, and the control effort. The simulation results show good responses of the CDPS with the controllers obtained by the proposed approach.
Computational fluid dynamics (CFD) plays a major role in predicting the flow behavior of a ship. With the development of fast computers and robust CFD software, CFD has become an important tool for designers and engineers in the ship industry. In this paper, the hull form of a ship was optimized for total resistance using CFD as a calculation tool and a genetic algorithm as an optimization tool. CFD-based optimization consists of major steps involving automatic generation of geometry based on design parameters, automatic generation of mesh, automatic analysis of fluid flow to calculate the required objective/cost function, and finally an optimization tool to evaluate the cost for optimization. In this paper, a genetic algorithm program written in MATLAB was integrated with the geometry and meshing software GAMBIT and the CFD analysis software FLUENT. Different geometries of an additive bulbous bow were incorporated into the original hull based on design parameters. These design variables were optimized to achieve a minimum cost function of total resistance. Integration of a genetic algorithm with CFD tools proves to be effective for hull form optimization.
To reduce the resource consumption of parallel computation systems, a static task scheduling optimization method based on a hybrid genetic algorithm is proposed and validated, which can shorten the scheduling length of parallel tasks with precedence constraints. Firstly, the global optimal model and constraints are created to describe the static task scheduling problem in heterogeneous distributed computing systems (HeDCSs). Secondly, the genetic population is coded as a matrix and used to search the total available time span of the processors, and the simulated annealing algorithm is introduced to improve the convergence speed and overcome the problem of easily falling into a local minimum, which exists in the traditional genetic algorithm. Finally, extensive experiments show that, compared with existing scheduling algorithms such as dynamic level scheduling (DLS), heterogeneous earliest finish time (HEFT), and longest dynamic critical path (LDCP), the proposed approach not only decreases the task schedule length but also achieves maximal resource utilization of the parallel computation system.
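The hybrid GA-simulated-annealing step can be sketched as an acceptance test applied after variation, which is what helps the search escape local minima; the cooling schedule and constants are assumptions.

```python
# Simulated-annealing acceptance applied to GA-mutated schedules.
import math, random

def sa_accept(old_len, new_len, temperature):
    """Always accept improvements; accept worse schedules with
    probability exp(-(delta)/T), shrinking as T cools."""
    if new_len <= old_len:
        return True
    return random.random() < math.exp((old_len - new_len) / temperature)

T, alpha = 100.0, 0.95
old_len = 42.0
for _ in range(50):                              # per-generation loop
    new_len = old_len + random.uniform(-3, 3)    # candidate schedule length
    if sa_accept(old_len, new_len, T):
        old_len = new_len
    T *= alpha                                   # geometric cooling
```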
Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of a robot. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper. The motivation is to improve the accuracy of the calibration process. In this approach, stereo vision calibration is considered an optimization problem that can be solved by GA and PSO. The initial linear values are obtained in the first stage. Then, in the second stage, the two cameras' parameters are optimized separately. Finally, the integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), GA and PSO are used individually in the three stages. It is shown that every stage can correctly find a near-optimal solution which can be used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments compared with traditional calibration methods. The proposed method can fulfill the requirements of sophisticated robot visual operation.
In optimization theory, adaptive control of the optimization process is an important goal. To address this, this study introduces the idea of neutrosophic decision-making into a classical heuristic algorithm and proposes a novel neutrosophic adaptive clustering optimization approach, which is applied, as an example, in a novel neutrosophic genetic algorithm (NGA). The main feature of NGA is that it treats the crossover effect as a neutrosophic fuzzy set, the variation ratio as a structural parameter, the crossover effect as a benefit parameter and the variation effect as a cost parameter, and then creates a neutrosophic fitness function value. Finally, a high-order assignment problem in warehouse management is taken to illustrate the effectiveness of NGA.
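A neutrosophic-style fitness of the kind described might be built as follows; the (T, I, F) mapping and the score function are illustrative assumptions, not the paper's exact construction.

```python
# Sketch of a neutrosophic fitness: score each candidate by a
# (truth, indeterminacy, falsity) triple built from benefit and cost
# terms, then collapse it with a single-valued score function.
def neutrosophic_fitness(benefit, cost, uncertainty):
    """benefit, cost, uncertainty are normalized to [0, 1]."""
    T, I, F = benefit, uncertainty, cost      # truth / indeterminacy / falsity
    return (2 + T - I - F) / 3                # common single-valued score

print(neutrosophic_fitness(benefit=0.8, cost=0.3, uncertainty=0.1))  # 0.8
```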