Cloud computing has gained significant recognition for its ability to provide a broad range of online services and applications. Existing commercial cloud computing models, however, adopt a centralized design, concentrating computational assets such as storage and server infrastructure in a small number of large-scale global data centers. Optimizing the placement of virtual machines (VMs) is therefore crucial to ensure system dependability, performance, and minimal latency. A significant barrier in this setting is load distribution, particularly when striving for improved energy consumption in a grid-style computing framework, where load-balancing techniques allocate user workloads across several VMs. To address this challenge, we propose the twin-fold moth flame technique, a highly effective optimization method designed to account for multiple constraints, including energy efficiency, lifetime analysis, and resource expenditure, and to provide a thorough evaluation of total cost in the cloud computing environment. The efficacy of the proposed strategy is assessed against these same metrics. This work aims to advance cloud computing by developing a new optimization algorithm that considers multiple factors for effective virtual machine placement and load balancing. For 40 nodes, the proposed method demonstrates notable improvements of 12.15%, 10.68%, 8.70%, 13.29%, 18.46%, and 33.39% over the artificial bee colony-bat algorithm, ant colony optimization, the crow search algorithm, krill herd, the whale optimization genetic algorithm, and the improved Lévy-based whale optimization algorithm, respectively.
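The abstract names a "twin-fold" moth flame variant without specifying it; the sketch below is the standard moth flame optimization spiral (moths orbit the best solutions found so far, with the flame count shrinking over iterations), applied to a toy VM load-balancing surrogate. The cost function, population size, and spiral constant are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mfo_minimize(cost, dim, n_moths=20, iters=100, lb=0.0, ub=1.0, seed=0):
    # Standard moth-flame optimization; the paper's "twin-fold" variant
    # is not specified in the abstract, so this is a generic sketch.
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    fitness = np.array([cost(m) for m in moths])
    order = np.argsort(fitness)
    flames, flame_fit = moths[order].copy(), fitness[order].copy()
    b = 1.0                                   # spiral shape constant
    for it in range(iters):
        n_flames = round(n_moths - it * (n_moths - 1) / iters)
        a = -1.0 - it / iters                 # decreases linearly from -1 to -2
        for i in range(n_moths):
            j = min(i, n_flames - 1)          # excess moths share the last kept flame
            d = np.abs(flames[j] - moths[i])
            t = (a - 1.0) * rng.random(dim) + 1.0
            moths[i] = np.clip(d * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j], lb, ub)
        fitness = np.array([cost(m) for m in moths])
        merged = np.vstack([flames, moths])
        merged_fit = np.concatenate([flame_fit, fitness])
        order = np.argsort(merged_fit)[:n_moths]   # elitist flame update
        flames, flame_fit = merged[order].copy(), merged_fit[order].copy()
    return flames[0], flame_fit[0]

# Toy surrogate: balance load fractions across 4 VMs (low variance, low total load).
cost = lambda x: np.var(x) + 0.1 * np.sum(x)
best, best_cost = mfo_minimize(cost, dim=4)
```

Elitism in the flame update guarantees the best cost is non-increasing across iterations, which is what makes the flame list a usable memory of the search.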
As large amounts of data are increasingly generated by edge devices such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element of service quality. For highly heterogeneous edge deployment scenarios, deep learning compilation is a novel approach to this problem: it defines models in domain-specific languages (DSLs) and generates efficient code implementations for different hardware devices. However, two aspects remain under-investigated: the optimization of memory-intensive operations, and the heterogeneity of the deployment targets. To that end, we propose a system solution that optimizes memory-intensive operations, optimizes subgraph distribution, and enables compiling and deploying DNN models to multiple targets. The evaluation results demonstrate the performance of the proposed system.
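The abstract mentions distributing subgraphs across heterogeneous targets but not the algorithm used. A minimal sketch of the idea is greedy list scheduling: place each subgraph (largest first) on the device that would finish it earliest given its current load. The device names, speeds, and cost model below are illustrative assumptions.

```python
def assign_subgraphs(subgraph_costs, device_speeds):
    """Greedily place each subgraph on the device that finishes it earliest,
    tracking per-device accumulated time. Purely illustrative; the paper's
    actual subgraph-distribution algorithm is not given in the abstract."""
    loads = {d: 0.0 for d in device_speeds}
    placement = {}
    for name, flops in sorted(subgraph_costs.items(), key=lambda kv: -kv[1]):
        best = min(loads, key=lambda d: loads[d] + flops / device_speeds[d])
        loads[best] += flops / device_speeds[best]
        placement[name] = best
    return placement, max(loads.values())

# Hypothetical model split into three subgraphs, on three devices of
# different relative throughput (assumed numbers).
placement, makespan = assign_subgraphs(
    {"conv1": 80.0, "conv2": 40.0, "fc": 10.0},
    {"cpu": 1.0, "gpu": 8.0, "npu": 4.0},
)
```

With these numbers the heavy convolution lands on the fastest device and the small fully connected layer fills the otherwise idle CPU, which is the load-balancing intuition behind heterogeneous placement.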
As a major configuration of membrane elements, multi-channel porous inorganic membrane tubes were studied by theoretical analysis and simulation. The configuration of a cylindrical 37-channel porous inorganic membrane tube was optimized by increasing the membrane filtration area and the permeation efficiency of the inner channels. An optimal ratio of channel diameter to inter-channel distance was proposed to increase the total filtration area of the membrane tube. Three-dimensional computational fluid dynamics (CFD) simulations of the cross-flow permeation of pure water in the 37-channel ceramic membrane tube were conducted, applying a model that combines the Navier–Stokes equations with Darcy's law and porous-jump boundary conditions. The relationship between permeation efficiency and channel location was established, and a method for increasing the permeation efficiency of the inner channels was proposed. Novel multi-channel membrane configurations with more permeate-side channels were put forward and evaluated.
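The filtration-area part of the trade-off can be sketched with elementary geometry: for channels of diameter d and length L, the total channel-wall area is n·π·d·L, so more, smaller channels can beat fewer, larger ones for the same tube envelope. The channel counts and dimensions below are illustrative assumptions, and the packing constraint behind the paper's optimal diameter-to-spacing ratio is not reproduced.

```python
import math

def total_filtration_area(n_channels, d_channel, length):
    # Total channel-wall filtration area of a multi-channel tube: n * pi * d * L.
    return n_channels * math.pi * d_channel * length

# Hypothetical comparison: 37 channels of 4 mm vs 19 channels of 6 mm, 1 m long.
a37 = total_filtration_area(37, 0.004, 1.0)
a19 = total_filtration_area(19, 0.006, 1.0)
```

Here the 37-channel layout offers more area even though each channel is smaller, which is why the diameter-to-spacing ratio (limited by wall strength and flow resistance) is the quantity worth optimizing.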
Offloading applications to the cloud can augment a mobile device's computation capability for emerging resource-hungry mobile applications, but offloading remotely to the cloud also costs the device considerable time and energy. In this paper, we develop a new adaptive application offloading decision and transmission scheduling scheme that addresses this problem efficiently. Specifically, we first propose an adaptive application offloading model that allows multiple target clouds to coexist. Second, based on Lyapunov optimization theory, we propose a low-complexity adaptive offloading decision and transmission scheduling scheme, together with its performance analysis. Finally, simulation results show that, compared with executing all applications locally, the mobile device saves 68.557% of average execution time and 67.095% of average energy consumption.
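The core of a Lyapunov-based offloading decision is the drift-plus-penalty rule: at each step, pick the action minimizing V·(energy penalty) + (queue backlog)·(service time), so a large V favors energy saving and a large backlog favors speed. The rule below is the generic textbook form with assumed costs, not the paper's exact scheme.

```python
def drift_plus_penalty_choice(backlog, V, options):
    """Pick the action minimizing V * energy + backlog * time: the generic
    Lyapunov drift-plus-penalty trade-off. Costs and weights are assumed
    for illustration; the paper's queue model is not reproduced here."""
    return min(options, key=lambda o: V * o["energy"] + backlog * o["time"])

# Hypothetical per-task costs: local execution is fast but power-hungry,
# offloading is cheap in energy but slower end to end.
options = [
    {"name": "local", "energy": 5.0, "time": 2.0},
    {"name": "cloud", "energy": 1.0, "time": 4.0},
]
light = drift_plus_penalty_choice(backlog=1.0, V=1.0, options=options)
heavy = drift_plus_penalty_choice(backlog=10.0, V=1.0, options=options)
```

With a short queue the rule offloads to save energy; once the backlog builds up, the time term dominates and execution goes local, which is exactly the adaptivity the abstract claims.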
The article consists of two parts. Part I shows the possibility of quantum/soft computing optimizers of knowledge bases (QSCOptKB™) as a toolkit for implementing quantum deep machine learning technology in the search for solutions to intelligent cognitive control tasks, applying a cognitive helmet as a neurointerface. In particular, the aim of this part is to demonstrate the possibility of classifying the mental states of a human operator online, with knowledge extracted from electroencephalograms using the SCOptKB™ and QCOptKB™ toolkits. The application of soft computing technologies to identify objective indicators of the psychophysiological state of an examined person is described. The role and necessity of intelligent information technologies based on computational intelligence toolkits for the objective estimation of the general psychophysical state of a human operator are shown. The developed information technology is examined on examples that are difficult in diagnostic practice: estimating the emotional state of children with autism spectrum disorder (ASD) and of dementia patients, and as background for designing knowledge bases for intelligent service robots. The application of cognitive intelligent control to obstacle avoidance in autonomous robot navigation is demonstrated.
The task of designing an intelligent control system using soft and quantum computational intelligence technologies is discussed. As an example control object, a mobile robot with a redundant robotic manipulator and stereovision is introduced. Robust knowledge bases are designed using a developed computational intelligence quantum/soft computing toolkit (QC/SCOptKB™). The self-organization of the knowledge bases of fuzzy homogeneous regulators through end-to-end quantum computing IT is described, as is the coordination control between the mobile robot and the redundant manipulator with stereovision based on soft computing. The general design methodology of a generalizing control unit based on the physical laws of quantum computing (a quantum information-thermodynamic trade-off between control quality distribution and the knowledge base self-organization goal) is considered, and the modernization of the pattern recognition system based on stereovision technology is presented. The effectiveness of the proposed methodology is demonstrated, for unforeseen control situations involving the sensor system, in comparison with control system structures based on soft computing alone. The main objective of this article is to demonstrate the advantages of the quantum/soft computing approach.
Mobile cloud computing (MCC) combines the mobile Internet and cloud computing to improve the performance of mobile applications. However, MCC faces an energy efficiency problem because of randomly varying channels. A scheduling algorithm is proposed by introducing Lyapunov optimization; it dynamically chooses which users transmit data based on queue backlog and channel statistics. The Lyapunov analysis shows that the proposed algorithm makes a tradeoff between queue backlog and energy consumption in the channel-aware mobile cloud computing system, and simulation results verify its effectiveness.
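Choosing "users to transmit data based on queue backlog and channel statistics" is usually realized as a max-weight rule: serve the user maximizing backlog × achievable rate minus V × transmit power, idling if no user has positive weight. The sketch below is that generic rule with assumed units, not the paper's exact algorithm.

```python
def schedule_user(backlogs, rates, powers, V):
    """Max-weight style scheduling: transmit for the user maximizing
    queue_backlog * channel_rate - V * tx_power, or stay idle when every
    weight is non-positive. A generic Lyapunov rule sketched from the
    abstract; the paper's exact weights are not reproduced."""
    scores = [b * r - V * p for b, r, p in zip(backlogs, rates, powers)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] > 0 else None

# Two users: user 0 has a long queue but a poor channel, user 1 the reverse.
pick_low_V = schedule_user([5.0, 2.0], [1.0, 3.0], [1.0, 1.0], V=1.0)
pick_high_V = schedule_user([5.0, 2.0], [1.0, 3.0], [1.0, 1.0], V=10.0)
```

Raising V makes energy more expensive relative to backlog, so the scheduler eventually prefers to idle and let queues grow: the backlog/energy tradeoff the abstract's analysis formalizes.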
The constrained multi-objective, multi-variable optimization of fans usually requires a great number of computational fluid dynamics (CFD) calculations and is time-consuming. In this study, a new multi-model ensemble optimization algorithm is proposed to tackle such expensive optimization problems. The multi-variable, multi-objective optimization is conducted with a new flexible multi-objective infill criterion; the search direction is determined by a multi-model ensemble-assisted evolutionary algorithm; and feature extraction by principal component analysis is used to reduce the dimension of the optimization variables. First, the proposed algorithm was compared, on test functions, with two other optimization algorithms that prevail in fan optimization. With the same number of objective function evaluations, the proposed algorithm converges faster to the optimal objective function values. The algorithm was then used to optimize the rotor and stator blades of a large axial fan, with the efficiencies at three flow rates (high, design, and low) as objectives and forty-two variables in the optimization process. The results show that, compared with the prototype fan and after CFD simulations of 500 fan candidates under the design-pressure constraint, the total pressure efficiencies of the optimized fan at the high, design, and low flow rates increased by 3.35%, 3.07%, and 2.89%, respectively. These optimization results validate the effectiveness and feasibility of the proposed algorithm.
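The PCA step mentioned in the abstract can be sketched directly: center the design samples, take the SVD, and keep only the leading components that explain a target fraction of the variance. The 95% threshold and the sample data are assumptions for illustration.

```python
import numpy as np

def pca_reduce(X, var_ratio=0.95):
    """Project design samples onto the leading principal components that
    retain `var_ratio` of the variance. A sketch of the dimension-reduction
    step; the paper's retained-variance threshold is an assumption here."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(explained), var_ratio)) + 1
    return Xc @ Vt[:k].T, Vt[:k]

# Example: 20 samples of 42 design variables that all lie on one direction,
# so a single principal component suffices.
X = np.outer(np.linspace(0.0, 1.0, 20), np.arange(42.0))
Z, components = pca_reduce(X)
```

In a fan optimization like the abstract's, the evolutionary search would then run in the reduced Z-space and map candidates back through `components`, shrinking a 42-variable problem to the few directions the samples actually vary along.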
Pseudospectral (PS) computational methods for nonlinear constrained optimal control have been applied to many industrial-strength problems, notably the recent zero-propellant maneuvering of the International Space Station. In this paper, we prove a theorem on the rate of convergence of the optimal cost computed using a Legendre PS method. In addition to the high-order convergence rate, two theorems are proved for the existence and convergence of the approximate solutions. Relative to existing work on PS optimal control, as well as some other direct computational methods, the proofs do not use the necessary conditions of optimal control. Furthermore, we make no coercivity-type assumptions. As a result, the theory does not require local uniqueness of optimal solutions. In addition, a restrictive assumption on the cluster points of discrete solutions, made in existing convergence theorems, is removed.
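The building block of Legendre PS transcription is replacing integrals (the running cost) by Legendre–Gauss quadrature at the collocation nodes, which converges at a spectral rate for smooth integrands. The toy integrand below is an assumption chosen only to exhibit that rate; it is not from the paper.

```python
import numpy as np

def legendre_quadrature_cost(f, n):
    # Legendre-Gauss quadrature on [-1, 1]: the running-cost integral of a
    # PS transcription is replaced by a weighted sum at the nodes.
    x, w = np.polynomial.legendre.leggauss(n)
    return float(np.dot(w, f(x)))

# Smooth test integrand: the exact integral of exp(x) over [-1, 1] is e - 1/e.
exact = np.e - 1.0 / np.e
err8 = abs(legendre_quadrature_cost(np.exp, 8) - exact)
err16 = abs(legendre_quadrature_cost(np.exp, 16) - exact)
```

Already at 8 nodes the quadrature error for this analytic integrand is at round-off level, which is the "high-order convergence" that the paper's theorems establish for the discretized optimal cost.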
The finite element (FE) method is a powerful tool that investigators have applied to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, in both time and accuracy, of the numerical integration used to solve the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computation time for the FE numerical substructure; the reduced task execution time (TET) allows the scale of the numerical substructure model to increase. Next, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared on the computation time needed to solve the FE numerical substructure. CDM outperforms the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. The simulations show that the influence of time delay on the displacement response becomes more pronounced as the mass ratio increases, and that delay compensation methods can reduce the relative error of the peak displacement to less than 5%, even for large time steps and large time delays.
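The CDM case the study finds fastest (diagonal damping, no linear solve) can be sketched for a single degree of freedom: the explicit update needs only the previous two displacements. The oscillator parameters below are illustrative, not the study's substructure model.

```python
import math

def cdm_simulate(m, c, k, u0, v0, dt, n_steps, f=lambda t: 0.0):
    """Central difference method for m*u'' + c*u' + k*u = f(t). Explicit and
    conditionally stable (dt < 2/omega_n); a minimal single-DOF sketch of
    the integrator compared in the study, with assumed parameters."""
    # Standard CDM starting procedure for the fictitious step u_{-1}.
    u_prev = u0 - dt * v0 + dt**2 / (2.0 * m) * (f(0.0) - c * v0 - k * u0)
    u = u0
    history = [u0]
    for i in range(n_steps):
        t = i * dt
        a = m / dt**2 + c / (2.0 * dt)
        rhs = f(t) - k * u + (2.0 * m / dt**2) * u - (m / dt**2 - c / (2.0 * dt)) * u_prev
        u_prev, u = u, rhs / a
        history.append(u)
    return history

# Undamped 1 Hz oscillator released from u0 = 1: after one period (t = 1 s)
# the displacement should return close to 1.
omega = 2.0 * math.pi
u = cdm_simulate(m=1.0, c=0.0, k=omega**2, u0=1.0, v0=0.0, dt=0.001, n_steps=1000)
```

Because the update divides only by a scalar (or, in the multi-DOF case, a diagonal matrix), no factorization is needed each step, which is why CDM wins when damping is diagonal.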
The Number Theory comes back as the heart of unified Science, in a Computing Cosmos using the bases 2, 3, 5, and 7, whose two symmetric combinations explain the main lepton mass ratios. The corresponding Holic Principle induces a symmetry between the Newton and Planck constants, which confirms the Permanent Sweeping Holography Bang Cosmology, with invariant baryon density 3/10, the dark baryons being a dephased matter-antimatter oscillation. This implies the DNA bi-codon mean isotopic mass, confirming to 0.1 ppm the electron-based Topological Axis, whose terminal boson is the base-2 c-observable Universe in the base-3 Cosmos. The physical parameters involve the Euler idoneal numbers and the special Fermat primes of Wieferich (base 2) and Mirimanoff (base 3). The prime numbers and crystallographic symmetries are related to the 4-fold structure of the DNA bi-codon. The forgotten Eddington proton-tau symmetry is rehabilitated, renewing the supersymmetry quest. This excludes the concepts of Multiverse, Continuum, Infinity, Locality, and Zero-mass Particle, leading to stringent predictions in Cosmology, Particle Physics, and Biology.
The quantitative rules governing the transfer and variation of errors when the Gaussian integral functions F_n(z) are evaluated sequentially by recursion are expounded. The traditional view, which rejects the applicability and reliability of the upward recursive formula in principle, is amended. An optimal scheme of joint upward and downward recursion has been developed for sequential F_n(z) computation. No additional accuracy is needed in the fundamental term of the recursion, because the absolute error of F_n(z) always decreases along the recursive approach. The scheme can be employed to modify any existing subprogram for F_n(z) computation. For p-, d-, f-, and g-type Gaussians, combining this method with Schaad's formulas reduces additive operations by at least 40%, and multiplicative and exponential operations by 60%.
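The stability property the abstract relies on is easy to exhibit for the Boys-type integrals F_n(z) = ∫₀¹ t^{2n} e^{-z t²} dt: in the downward direction F_n(z) = (2z·F_{n+1}(z) + e^{-z})/(2n+1), any seed error is multiplied by 2z/(2n+1) per step and dies out, so even a zero seed far above the target order yields accurate low-order values. The guard-term count below is an assumption.

```python
import math

def boys_downward(n_max, z, extra=20):
    """Downward recursion F_n(z) = (2z * F_{n+1}(z) + exp(-z)) / (2n + 1).
    Seeded with 0 at order n_max + extra; the ~2z/(2n+1) contraction per
    step absorbs the seed error, illustrating why the downward direction
    is numerically safe (the property the scheme above exploits)."""
    F = [0.0] * (n_max + extra + 2)
    ez = math.exp(-z)
    for n in range(n_max + extra, -1, -1):
        F[n] = (2.0 * z * F[n + 1] + ez) / (2 * n + 1)
    return F[: n_max + 1]

# Check against the closed form F_0(z) = (1/2) * sqrt(pi/z) * erf(sqrt(z)).
z = 0.5
F = boys_downward(8, z)
exact_F0 = 0.5 * math.sqrt(math.pi / z) * math.erf(math.sqrt(z))
```

The same recursion run upward amplifies errors by the reciprocal factor (2n+1)/(2z), which is the classical objection to upward recursion that the paper refines rather than simply rejecting.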
Cloud computing provides the essential infrastructure for multi-tier Ambient Assisted Living (AAL) applications that facilitate people's lives. Resource provisioning is a critically important problem for AAL applications in cloud data centers (CDCs). This paper focuses on modeling and analyzing multi-tier AAL applications, aiming to optimize resource provisioning while meeting the requests' response time constraint. A multi-tier AAL application is modeled as a hybrid multi-tier queueing model consisting of an M/M/c queue and multiple M/M/1 queues. Virtual machine (VM) allocation in a CDC is then formulated as a constrained optimization problem and solved with the proposed heuristic VM allocation algorithm (HVMA). The results demonstrate that the proposed model and algorithm effectively achieve dynamic resource provisioning while meeting the performance constraint.
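The M/M/c front tier of such a hybrid model is evaluated with the standard Erlang C formula: the probability an arriving request waits, and from it the mean response time W = P(wait)/(cμ − λ) + 1/μ. These are textbook formulas; the chaining to the paper's per-tier M/M/1 queues is not reproduced here.

```python
import math

def erlang_c(c, lam, mu):
    """M/M/c queue: returns (P(wait), mean response time). Standard Erlang C;
    the paper's hybrid model feeds this front tier into M/M/1 back tiers."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / c
    assert rho < 1.0, "unstable queue: need lambda < c * mu"
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) / (1.0 - rho)
    p_wait = top / (s + top)
    w = p_wait / (c * mu - lam) + 1.0 / mu   # mean sojourn (response) time
    return p_wait, w

# Example: 3 VMs, arrival rate 2 req/s, service rate 1 req/s per VM.
pw, w = erlang_c(c=3, lam=2.0, mu=1.0)
```

A provisioning heuristic like HVMA can use this directly: increase c (add VMs) until W drops below the response-time constraint, then allocate the remaining budget to the bottleneck M/M/1 tiers.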
The introduction of density functional theory (DFT) and electronic structure methods has brought computational methods into the field of materials science; these theoretical calculations predominantly use quantum mechanics. Machine learning (ML) and high-throughput computing share some inherent similarities: both can extract valuable information from massive datasets, and both offer parallelism and scalability. ML techniques emulate aspects of human reasoning, with algorithms that make decisions and exhibit good scalability and strong generalization. Combining high-throughput and ML technologies leverages the standardization and high capacity of high-throughput methods, addressing the data challenges ML faces at the front end; this complementary combination is expected to further enhance the efficiency of materials screening and development. In data mining, ML methods applied to various databases uncover the interrelationships between molecular structures and properties in large amounts of data. Current uses of DFT, materials genomics, and high-throughput computing have generated a substantial amount of such data. This review provides new insights into the development of electrochemistry.
This paper puts forward a design idea for the blended wing body (BWB): the cruise point, the maximum lift-to-drag point, and the pitch-trim point should share the same flight attitude. According to this design idea, design objectives and constraints are defined. By applying low- and high-fidelity aerodynamic analysis tools, a BWB aerodynamic design methodology is established that combines optimization design and inverse design methods; it achieves a high lift-to-drag ratio, pitch trim, and an acceptable buffet margin. For a 300-passenger BWB configuration based on static stability design, compared with the initial configuration, the maximum lift-to-drag ratio and pitch trim are achieved at the cruise condition, the zero-lift pitching moment is positive, and the buffet characteristics are good. Fuel burn of the 300-passenger BWB configuration is also significantly reduced compared with conventional civil transports. Because the aerodynamic design is carried out under the constraints of the BWB design requirements, the resulting configuration fulfills the demands of the interior layout and provides a solid foundation for further work.
Although emission spectral tomography (EST) combines emission spectral measurement with optical computed tomography (OCT), it is difficult to obtain transient emission data from a large number of views; therefore, high-precision few-view OCT algorithms must be studied for EST applications. To improve reconstruction precision in the few-view case, a new computed tomography reconstruction algorithm based on a multipurpose optimality criterion and simulated annealing theory (the multi-criterion simulated annealing reconstruction technique, MCSART) is proposed. The algorithm simultaneously satisfies the criteria of least squares, maximum uniformity, and maximum smoothness, and simulated annealing allows it to reach a globally optimal solution. Simulation experiments show that this algorithm is superior to traditional algorithms under various noise conditions.
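The optimization engine behind such a multi-criterion reconstruction can be sketched as plain simulated annealing over a weighted objective (data fidelity plus regularizers), accepting uphill moves with probability exp(−Δf/T) under a geometric cooling schedule. The schedule, neighborhood, and the one-dimensional demo objective below are illustrative assumptions, not the paper's MCSART settings.

```python
import math
import random

def anneal(x0, objective, neighbor, T0=1.0, cooling=0.995, iters=2000, seed=1):
    """Generic simulated annealing on a (possibly multi-criterion, weighted)
    objective. A sketch of the optimization engine only; MCSART's actual
    criteria weighting and schedule are not given in the abstract."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = objective(y)
        # Always accept downhill; accept uphill with Boltzmann probability.
        if fy < fx or rng.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        T *= cooling
    return best, fbest

# Demo: minimize a smooth 1-D stand-in objective from a poor start.
objective = lambda x: (x - 3.0) ** 2
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
best, fbest = anneal(0.0, objective, neighbor)
```

In a reconstruction setting, `x` would be the image vector and `objective` a weighted sum such as ‖Ax − b‖² + α·(uniformity term) + β·(smoothness term), with `neighbor` perturbing a few pixels at a time.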
The hardware and software architectures of core service platforms for next-generation networks were analyzed to compute the minimum-cost hardware configuration of a core service platform. The method gives a closed-form expression for the cost-optimized hardware configuration based on the service requirements, the processing features of the computers running the core service platform software, and the processing capabilities of the common object request broker architecture (CORBA) middleware. Three simulation scenarios were used to evaluate the model, with the numbers of servers for the protocol mapping (PM), Parlay gateway (PG), application server (AS), and communication handling (CH) functions as input. The simulations show that the mean delay meets requirements when these server counts are properly selected, and becomes excessive otherwise. The results show that the model is valid and can be used to optimize investments in core service platforms.
Funding: This work was supported in part by the Natural Science Foundation of the Education Department of Henan Province (Grant 22A520025), the National Natural Science Foundation of China (Grant 61975053), and the National Key Research and Development project on quality information control technology for multi-modal grain transportation and efficient connection (2022YFD2100202).
Funding: Supported by the National Natural Science Foundation of China (U21A20519).
Funding: Supported by the National Basic Research Program of China (2012CB224806), the National Natural Science Foundation of China (21490584, 21476236), and the National High Technology Research and Development Program of China (2012AA03A606).
Funding: Supported by the National Natural Science Foundation of China (Grants 61261017, 61571143, and 61561014), the Guangxi Natural Science Foundation (2013GXNSFAA019334 and 2014GXNSFAA118387), the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (CRKL150112), the Guangxi Key Lab of Wireless Wideband Communication & Signal Processing (GXKL0614202, GXKL0614101, and GXKL061501), the Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory (ITD-U14008/KX142600015), and the Graduate Student Research Innovation Project of Guilin University of Electronic Technology (YJCXS201523).
Funding: Supported by the National Natural Science Foundation of China (61173017) and the National High Technology Research and Development Program of China (863 Program) (2014AA01A701).
Abstract: Mobile cloud computing (MCC) combines the mobile Internet and cloud computing to improve the performance of mobile applications. However, MCC faces an energy-efficiency problem because of randomly varying channels. A scheduling algorithm is proposed by introducing Lyapunov optimization; it dynamically chooses which users transmit data based on queue backlog and channel statistics. The Lyapunov analysis shows that the proposed scheduling algorithm can trade off queue backlog against energy consumption in a channel-aware mobile cloud computing system. The simulation results verify the effectiveness of the proposed algorithm.
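The abstract does not give the scheduler's internals; a minimal drift-plus-penalty sketch in Python, with hypothetical queue, rate, and energy values, illustrates how a Lyapunov-based scheduler of this kind trades queue backlog against transmit energy:

```python
def schedule(queues, rates, energy, V=10.0):
    """Drift-plus-penalty user selection (hypothetical channel model).

    queues[i]: current backlog of user i (bits)
    rates[i]:  achievable rate of user i in this slot (bits/slot)
    energy[i]: transmit energy user i would spend this slot
    V:         trade-off parameter (larger V favors energy over backlog)
    Picks the user maximizing Q_i * r_i - V * e_i; returns None (idle)
    if every candidate score is non-positive.
    """
    best, best_score = None, 0.0
    for i in range(len(queues)):
        score = queues[i] * rates[i] - V * energy[i]
        if score > best_score:
            best, best_score = i, score
    return best

# One simulated slot with made-up numbers: user 0 has the largest
# backlog-rate product, so it is scheduled to transmit
queues = [50.0, 5.0, 20.0]
rates = [2.0, 3.0, 1.0]
energy = [4.0, 4.0, 4.0]
print(schedule(queues, rates, energy))  # 0
```

Raising `V` makes the scheduler increasingly reluctant to spend energy, eventually leaving the channel idle; this is the backlog/energy trade-off the Lyapunov analysis quantifies.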
Funding: Supported by the National Science and Technology Major Project (2017-11-0007-0021).
Abstract: The constrained multi-objective, multi-variable optimization of fans usually needs a great deal of computational fluid dynamics (CFD) calculation and is time-consuming. In this study, a new multi-model ensemble optimization algorithm is proposed to tackle such an expensive optimization problem. The multi-variable, multi-objective optimization is conducted with a new flexible multi-objective infill criterion. In addition, the search direction is determined by the multi-model-ensemble-assisted evolutionary algorithm, and feature extraction by principal component analysis is used to reduce the dimension of the optimization variables. First, the proposed algorithm and two other optimization algorithms that prevail in fan optimization were compared using test functions. With the same number of objective-function evaluations, the proposed algorithm shows a fast convergence rate in finding the optimal objective-function values. Then, the algorithm was used to optimize the rotor and stator blades of a large axial fan, with the efficiencies at three flow rates (high, design, and low) as the objectives. Forty-two variables were included in the optimization process. The results show that, compared with the prototype fan, the total pressure efficiencies of the optimized fan at the high, design, and low flow rates were increased by 3.35%, 3.07%, and 2.89%, respectively, after CFD simulations of 500 fan candidates under the design-pressure constraint. The optimization results validate the effectiveness and feasibility of the proposed algorithm.
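The abstract mentions principal component analysis for reducing the dimension of the 42 design variables; a generic sketch of that step (on synthetic samples, not the actual fan geometry data) can be written with NumPy:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components.

    X: (n_samples, n_vars) array of design-variable samples.
    Returns (scores, components) so that X ≈ mean + scores @ components.
    """
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

# Synthetic stand-in for sampled design variables (not real fan data):
# 100 candidate designs, 42 variables each, compressed to 10 coordinates
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 42))
scores, comps = pca_reduce(X, k=10)
print(scores.shape, comps.shape)  # (100, 10) (10, 42)
```

The surrogate-assisted search can then operate in the 10-dimensional score space and map candidates back to the 42 variables through `comps`.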
Funding: Supported by the Air Force Office of Scientific Research (No. F1ATA0-90-4-3G001) and the Air Force Research Laboratory.
Abstract: Pseudospectral (PS) computational methods for nonlinear constrained optimal control have been applied to many industrial-strength problems, notably the recent zero-propellant maneuver of the International Space Station performed by NASA. In this paper, we prove a theorem on the rate of convergence of the optimal cost computed using a Legendre PS method. In addition to the high-order convergence rate, two theorems are proved for the existence and convergence of the approximate solutions. Relative to existing work on PS optimal control, as well as some other direct computational methods, the proofs do not use the necessary conditions of optimal control. Furthermore, we make no coercivity-type assumptions; as a result, the theory does not require local uniqueness of optimal solutions. In addition, a restrictive assumption on the cluster points of discrete solutions made in existing convergence theorems is removed.
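The core of a Legendre PS transcription is approximating the cost integral by Legendre quadrature at the collocation nodes. A toy illustration (using Gauss-Legendre nodes from NumPy rather than the Legendre-Gauss-Lobatto nodes of a full PS scheme, and a made-up running cost):

```python
import numpy as np

def quadrature_cost(L, n):
    """Approximate J = ∫_{-1}^{1} L(t) dt with n-point Gauss-Legendre quadrature."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return float(np.dot(weights, L(nodes)))

# n-point Gauss-Legendre quadrature is exact for polynomials up to
# degree 2n - 1, so a quartic running cost is integrated exactly at n = 3
J = quadrature_cost(lambda t: t**4, 3)
print(J)  # 2/5 = 0.4, the exact integral
```

In a real PS method the same weights multiply the running cost evaluated at the collocated state/control values, which is what links the convergence of the discrete cost to the polynomial approximation theory the paper's theorems build on.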
Funding: National Natural Science Foundation of China under Grant Nos. 51639006 and 51725901.
Abstract: The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure; in this way, the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes pronounced as the mass ratio increases, and that delay compensation methods can reduce the relative error of the displacement peak value to less than 5%, even under a large time step and large time delay.
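The central difference method compared in the study can be sketched for a single-degree-of-freedom system (illustrative parameters only, not the paper's structure-foundation models):

```python
import math

def cdm_free_vibration(m, c, k, u0, v0, dt, nsteps):
    """Central difference integration of m*u'' + c*u' + k*u = 0.

    Uses the standard CDM update
      (m/dt^2 + c/(2dt)) u_{n+1} = -(k - 2m/dt^2) u_n - (m/dt^2 - c/(2dt)) u_{n-1}
    with the usual fictitious starting step for u_{-1}.
    """
    a0 = -(k * u0 + c * v0) / m                 # initial acceleration
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * a0    # fictitious step u_{-1}
    u = u0
    hist = [u0]
    A = m / dt**2 + c / (2 * dt)
    B = k - 2 * m / dt**2
    C = m / dt**2 - c / (2 * dt)
    for _ in range(nsteps):
        u_next = (-B * u - C * u_prev) / A
        u_prev, u = u, u_next
        hist.append(u)
    return hist

# Undamped unit oscillator: exact solution is cos(t), period 2*pi.
# CDM is conditionally stable; dt = 0.01 is well inside dt < 2/omega.
hist = cdm_free_vibration(m=1.0, c=0.0, k=1.0, u0=1.0, v0=0.0,
                          dt=0.01, nsteps=628)
print(round(hist[-1], 2))  # close to cos(6.28) ≈ 1.0
```

Because `c = 0` here the damping term is trivially diagonal, which is exactly the case where the study found CDM fastest; the Gui-λ method targets the non-diagonal damping case.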
Abstract: Number theory comes back as the heart of unified science, in a computing cosmos using the bases 2, 3, 5, and 7, whose two symmetric combinations explain the main lepton mass ratios. The corresponding Holic Principle induces a symmetry between the Newton and Planck constants, which confirms the Permanent Sweeping Holography Bang cosmology, with invariant baryon density 3/10, the dark baryons being a dephased matter-antimatter oscillation. This implies the DNA bi-codon mean isotopic mass, confirming to 0.1 ppm the electron-based Topological Axis, whose terminal boson is the base-2 c-observable Universe in the base-3 Cosmos. The physical parameters involve the Euler idoneal numbers and the special Fermat primes of Wieferich (base 2) and Mirimanoff (base 3). The prime numbers and crystallographic symmetries are related to the 4-fold structure of the DNA bi-codon. The forgotten Eddington proton-tau symmetry is rehabilitated, renewing the supersymmetry quest. This excludes the concepts of Multiverse, Continuum, Infinity, Locality, and Zero-mass Particle, leading to stringent predictions in cosmology, particle physics, and biology.
Abstract: The quantitative rules governing the transfer and growth of errors when the Gaussian integral functions F_n(z) are evaluated sequentially by recursion are expounded. The traditional viewpoint, which rejects the applicability and reliability of the upward recursive formula in principle, is amended. An optimal scheme of joint upward and downward recursion has been developed for sequential F_n(z) computation. No additional accuracy is needed in the fundamental term of the recursion, because the absolute error of F_n(z) always decreases along the recursive approach. The scheme can be employed to modify any existing subprogram for F_n(z) computation. In the case of p-, d-, f-, and g-type Gaussians, combining this method with Schaad's formulas reduces the additive operations by at least 40%, and the multiplicative and exponential operations by 60%.
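The functions in question are the Boys-type integrals F_n(z) = ∫₀¹ t^(2n) e^(-z t²) dt, linked by the recurrence F_m(z) = (2z·F_{m+1}(z) + e^(-z)) / (2m + 1). A small sketch of the downward branch (seeded by the Taylor series and checked against the closed form for F_0); the direction choice matters because the error amplification factor 2z/(2m+1) depends on where m sits relative to z, which is the paper's point:

```python
import math

def boys_downward(n, z, n_extra=10):
    """F_n(z) = ∫_0^1 t^(2n) exp(-z t^2) dt via downward recursion.

    Seed F_{n+n_extra} with the Taylor series of exp(-z t^2), i.e.
    F_top(z) = Σ_k (-z)^k / (k! (2*top + 2k + 1)),
    then recur F_m = (2z F_{m+1} + exp(-z)) / (2m + 1), which keeps the
    seed error damped for moderate z.
    """
    top = n + n_extra
    f = sum((-z) ** k / (math.factorial(k) * (2 * top + 2 * k + 1))
            for k in range(25))          # truncated series seed for F_top
    for m in range(top - 1, n - 1, -1):
        f = (2 * z * f + math.exp(-z)) / (2 * m + 1)
    return f

# Closed-form check: F_0(z) = 0.5 * sqrt(pi/z) * erf(sqrt(z))
z = 1.0
exact = 0.5 * math.sqrt(math.pi / z) * math.erf(math.sqrt(z))
print(abs(boys_downward(0, z) - exact) < 1e-12)  # True
```

The `n_extra` padding and the series truncation length are illustrative choices, not the paper's tuned scheme; the paper's contribution is precisely the rule for switching between the upward and downward branches.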
Abstract: Cloud computing provides the essential infrastructure for multi-tier Ambient Assisted Living (AAL) applications that facilitate people's lives. Resource provisioning is a critically important problem for AAL applications in cloud data centers (CDCs). This paper focuses on the modeling and analysis of multi-tier AAL applications and aims to optimize resource provisioning while meeting requests' response-time constraint. It models a multi-tier AAL application as a hybrid multi-tier queueing model consisting of an M/M/c queueing model and multiple M/M/1 queueing models. Virtual machine (VM) allocation is then formulated as a constrained optimization problem in a CDC and is solved with the proposed heuristic VM allocation algorithm (HVMA). The results demonstrate that the proposed model and algorithm effectively achieve dynamic resource provisioning while meeting the performance constraint.
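The hybrid model combines an M/M/c tier with M/M/1 tiers; the standard closed-form mean response times for each (textbook formulas with hypothetical rates, not the paper's full HVMA formulation) give the response-time constraint its concrete shape:

```python
import math

def mm1_response(lam, mu):
    """Mean response time of an M/M/1 queue (requires lam < mu)."""
    return 1.0 / (mu - lam)

def mmc_response(lam, mu, c):
    """Mean response time of an M/M/c queue via the Erlang C formula."""
    a = lam / mu                       # offered load
    rho = a / c                        # server utilization (must be < 1)
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) * p0  # P(wait > 0)
    return erlang_c / (c * mu - lam) + 1.0 / mu

# Hypothetical two-tier chain: front tier M/M/3, back tier M/M/1;
# the end-to-end mean response time is the sum of the per-tier times
t = mmc_response(lam=4.0, mu=2.0, c=3) + mm1_response(lam=4.0, mu=6.0)
print(round(t, 3))  # 1.222
```

A VM-allocation heuristic in this setting searches over `c` (and the number of M/M/1 replicas) so that the summed response time stays under the constraint at minimum cost.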
Funding: Supported by the National Natural Science Foundation of China (grant no. U1904215), the Natural Science Foundation of Jiangsu Province, China (grant no. BK20200044), and the Changjiang Scholars Program of the Ministry of Education, China (grant no. Q2018270).
Abstract: The introduction of density functional theory (DFT) and electronic-structure calculations has brought computational methods into the field of materials science; in these theoretical calculations, quantum mechanics is predominantly used. Machine learning (ML) and high-throughput computing share some inherent similarities, as both can extract valuable information from massive datasets and possess parallelism and scalability. ML techniques simulate human thought processes, with algorithms that make decisions and that have good scalability and strong generalization ability. The combination of high-throughput and ML technologies leverages the standardization and high capacity of high-throughput technology, addressing the challenges ML faces at the front end. This complementary combination is expected to further enhance the efficiency of materials screening and development. In data mining, ML methods applied to various databases uncover the interrelationships between molecular structures and properties from large amounts of data. Current use of DFT, materials genomics, and high-throughput computing has generated a substantial amount of such data. This review provides new insights into the development of electrochemistry.
Abstract: This paper puts forward a design idea for the blended wing body (BWB): the cruise point, the maximum lift-to-drag point, and the pitch-trim point should occur at the same flight attitude. According to this design idea, design objectives and constraints are defined. By applying low- and high-fidelity aerodynamic analysis tools, a BWB aerodynamic design methodology is established through the combination of optimization design and inverse design methods. A high lift-to-drag ratio, pitch trim, and an acceptable buffet margin can be achieved with this methodology. For a 300-passenger BWB configuration based on a statically stable design, compared with the initial configuration, the maximum lift-to-drag ratio and pitch trim are achieved at the cruise condition, the zero-lift pitching moment is positive, and the buffet characteristics are good. Fuel burn of the 300-passenger BWB configuration is also significantly reduced compared with conventional civil transports. Because the aerodynamic design is carried out under the constraints of the BWB design requirements, the resulting configuration fulfills the demands of the interior layout and provides a solid foundation for further work.
Funding: This work was supported by the National Natural Science Foundation of China (No. 60577016), the Jiangxi Natural Science Foundation (No. 0512034), the Science and Technology Program of the Jiangxi Provincial Department of Education (No. 2006-164), and the Program of the Key Laboratory of Nondestructive Testing Technology, Ministry of Education (No. 2005-314).
Abstract: Although emission spectral tomography (EST) combines emission spectral measurement with optical computed tomography (OCT), it is difficult to obtain transient emission data from a large number of views; therefore, high-precision OCT algorithms for few views ought to be studied for EST applications. To improve the reconstruction precision in the case of few views, a new computed tomography reconstruction algorithm based on a multipurpose optimality criterion and simulated annealing theory (the multi-criterion simulated annealing reconstruction technique, MCSART) is proposed. This algorithm can satisfy the least-squares criterion, the most-uniformity criterion, and the most-smoothness criterion simultaneously, and a globally optimal solution can be obtained through simulated annealing. Simulation experiments show that this algorithm is superior to traditional algorithms under various noise levels.
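A minimal sketch of the annealing idea, minimizing a weighted multi-criterion objective (least squares plus a smoothness penalty) on a toy 1-D "image" rather than real tomography data; the weights, cooling schedule, and toy projection are all illustrative assumptions, not MCSART's actual settings:

```python
import math
import random

def multicriterion_sa(measured, w_fit=1.0, w_smooth=0.5, steps=20000, seed=1):
    """Simulated annealing over a 1-D vector x, minimizing
    w_fit * ||x - measured||^2 + w_smooth * Σ (x[i+1] - x[i])^2,
    a toy stand-in for combining the least-squares and smoothness criteria."""
    rng = random.Random(seed)
    n = len(measured)
    x = [0.0] * n

    def cost(v):
        fit = sum((v[i] - measured[i]) ** 2 for i in range(n))
        smooth = sum((v[i + 1] - v[i]) ** 2 for i in range(n - 1))
        return w_fit * fit + w_smooth * smooth

    c = cost(x)
    for step in range(steps):
        T = 1.0 * (0.999 ** step)          # geometric cooling schedule
        i = rng.randrange(n)
        old = x[i]
        x[i] += rng.uniform(-0.1, 0.1)     # perturb one pixel
        c_new = cost(x)
        # Metropolis rule: always accept improvements, sometimes uphill moves
        if c_new < c or rng.random() < math.exp((c - c_new) / max(T, 1e-12)):
            c = c_new
        else:
            x[i] = old                     # reject: restore the pixel
    return x, c

measured = [0.0, 0.2, 1.0, 0.2, 0.0]       # toy "projection" data
x, c = multicriterion_sa(measured)
print(round(c, 3))
```

The occasional uphill acceptance at nonzero temperature is what lets the method escape local minima of the combined criterion and approach a global optimum, which is the property the abstract claims for MCSART.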
Funding: Supported by the China Postdoctoral Science Foundation (No. 20060390463) and the Basic Research Foundation of the Tsinghua National Laboratory for Information Science and Technology.
Abstract: The hardware and software architectures of core service platforms for next-generation networks were analyzed to compute the minimum-cost hardware configuration of a core service platform. The method gives a closed-form expression for the cost-optimized hardware configuration based on the service requirements, the processing features of the computers running the core service platform software, and the processing capabilities of the common object request broker architecture (CORBA) middleware. Three simulation scenarios were used to evaluate the model. The input includes the number of servers for the protocol mapping (PM), Parlay gateway (PG), application server (AS), and communication handling (CH) functions. The simulation results show that the mean delay meets requirements when the numbers of servers for the PM, PG, AS, and CH functions are properly selected, and becomes excessive otherwise. The results show that the model is valid and can be used to optimize investments in core service platforms.
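The sizing idea behind such a configuration model can be illustrated with a simple per-function queueing sketch: find the minimum number of servers for one function so that its mean delay stays under a target. The M/M/1-per-server model and all rates below are illustrative assumptions, not the paper's CORBA-based model:

```python
def min_servers(lam, mu, target_delay):
    """Smallest n such that splitting Poisson arrivals of total rate lam
    evenly across n M/M/1 servers of service rate mu keeps the mean
    response time at or below target_delay."""
    n = 1
    while True:
        per_server = lam / n
        if per_server < mu:                    # stability: load under capacity
            delay = 1.0 / (mu - per_server)    # M/M/1 mean response time
            if delay <= target_delay:
                return n
        n += 1

# Hypothetical PM function: 900 req/s total, 100 req/s per server,
# 50 ms mean-delay target -> 12 servers are needed
print(min_servers(lam=900.0, mu=100.0, target_delay=0.05))  # 12
```

Repeating this per function (PM, PG, AS, CH) and summing hardware costs is the brute-force counterpart of the closed-form expression the paper derives.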