This paper describes an innovative adaptive algorithmic modeling approach for solving a wide class of e-business and strategic management problems under uncertainty. The proposed methodology draws on basic ideas and concepts from four interrelated fields: computing science, applied mathematics, management science and economics. Furthermore, the fundamental scientific concepts of adaptability and uncertainty are shown to play a critical role in obtaining (near-)optimal solutions to a class of complex e-business/services and strategic management problems. Two characteristic case studies, namely measuring e-business performance under environmental pressures and organizational constraints, and describing the relationships between technology, innovation and firm performance, are considered as applications of the proposed adaptive algorithmic modeling approach. A theoretical time-dependent model for evaluating firm e-business performance is also proposed.
This material is devoted to generalizing chaos modeling to random fields in communication channels and applying it to space-time filtering in the incoherent paradigm; that is the purpose of this research. The approach presented here is based on the “Markovian” trend in modeling random fields, and it is applied for the first time to chaos field modeling through the well-known concept of random “treatment” of deterministic dynamic systems, first presented by A. Kolmogorov, M. Born, and others. The material presents the generalized Stratonovich-Kushner equations (SKE) for optimal filtering of chaotic models of random fields and their simplified quasi-optimal solutions. In addition, the application of multi-moment algorithms for quasi-optimal solutions is considered, and it is shown that in scenarios where the covariation interval of the input random field is smaller than the distance between antenna elements, the gain of the space-time algorithms over their “time-only” analogues is significant. This is the general result presented here.
Control of pH neutralization processes is challenging in the chemical process industry because of their inherently strong nonlinearity. In this paper, the model algorithmic control (MAC) strategy is extended to nonlinear processes using a Hammerstein model that consists of a static nonlinear polynomial function followed in series by a linear impulse-response dynamic element. A new nonlinear Hammerstein MAC algorithm (NLH-MAC) is presented in detail. Simulation results for a pH neutralization process show that NLH-MAC gives better control performance than linear MAC and the commonly used industrial nonlinear proportional plus integral plus derivative (PID) controller. Further simulation experiments demonstrate that NLH-MAC not only gives good control response but also possesses good stability and robustness, even with large modeling errors.
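As a rough illustration of the model structure described above (not the paper's actual identified model), a Hammerstein model can be simulated as a static polynomial nonlinearity feeding a linear finite-impulse-response element; the polynomial coefficients and impulse response below are invented for the example:

```python
# Hypothetical Hammerstein model sketch: static polynomial block followed
# by a linear impulse-response (FIR) dynamic block. All numbers are
# illustrative, not taken from the paper.

def hammerstein_output(u_seq, poly_coeffs, impulse_response):
    """Simulate y(k) = sum_i h[i] * f(u[k-i]), with f a static polynomial."""
    # Static nonlinear block: v(k) = f(u(k))
    def f(u):
        return sum(c * u**p for p, c in enumerate(poly_coeffs))
    v = [f(u) for u in u_seq]
    # Linear dynamic block: discrete convolution with the impulse response
    y = []
    for k in range(len(v)):
        acc = 0.0
        for i, h in enumerate(impulse_response):
            if k - i >= 0:
                acc += h * v[k - i]
        y.append(acc)
    return y

# Step input through f(u) = u + 0.5*u^2 and a two-tap FIR h = [0.6, 0.4]
y = hammerstein_output([1.0, 1.0, 1.0], [0.0, 1.0, 0.5], [0.6, 0.4])
```

The two blocks are identified separately in practice, which is what makes the Hammerstein structure attractive for control design.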
In response to production capacity and functionality variations, a genetic algorithm (GA) embedded with deterministic timed Petri nets (DTPN) is proposed to solve the scheduling problem of a reconfigurable production line (RPL). Basic DTPN modules are presented to model the corresponding variable structures in the RPL, and the scheduling model of the whole RPL is then constructed. In the scheduling algorithm, firing sequences of the Petri net model are used as chromosomes, so the selection, crossover and mutation operators act not on elements of the problem space but on elements of the Petri net model. Accordingly, all the GA operations embedded with the Petri net model are proposed. Moreover, a new weighted single-objective optimization based on reconfiguration cost and earliness/tardiness (E/T) is used. The results of scheduling a DC motor RPL suggest that the presented DTPN-GA scheduling algorithm has a significant impact on RPL scheduling and provides clear improvements over conventional scheduling methods in practice: it meets due dates, minimizes reconfiguration cost, and enhances cost effectiveness. (Funding: Key Science-Technology Project of the Shanghai Tenth Five-Year Plan, China (No. 031111002); Specialized Research Fund for the Doctoral Program of Higher Education, China (No. 20040247033); Municipal Key Basic Research Program of Shanghai, China (No. 05JC14060).)
The associated dynamic performance of the clamping force control valve used in a continuously variable transmission (CVT) is optimized. First, the structure and working principle of the valve are analyzed, and a dynamic model is set up by means of mechanism analysis. To check the validity of the modeling method, a prototype workpiece of the valve was manufactured for a comparison test, and the simulation results follow the experimental results quite well. An associated performance index is formulated that considers response time, overshoot and energy saving, and five structural parameters are selected for adjustment to derive the optimal associated performance index. The optimization problem is solved by a genetic algorithm (GA) with the necessary constraints. Finally, the properties of the optimized valve are compared with those of the prototype, and the results show that the dynamic performance indexes of the optimized valve are much better than those of the prototype. (Funding: Key Science-Technology Foundation of Hunan Province, China (No. 05GK2007).)
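The GA step above can be sketched generically. The toy below minimizes a made-up one-dimensional performance index (the paper optimizes five structural parameters of the valve; the index, bounds and GA settings here are placeholders):

```python
import random

def genetic_minimize(fitness, lo, hi, pop_size=30, generations=60,
                     mutation_sigma=0.2, seed=0):
    """Minimize fitness(x) over [lo, hi] with a simple elitist GA."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 3]          # keep the best third unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)      # crossover: blend two parents
            child = 0.5 * (a + b) + rng.gauss(0.0, mutation_sigma)
            children.append(min(hi, max(lo, child)))  # clip to bounds
        pop = elite + children
    return min(pop, key=fitness)

# Placeholder "associated performance index" with optimum at x = 2
best = genetic_minimize(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```

Elitism guarantees the best candidate never gets worse between generations, which is the simplest way to make such a GA robust.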
A class of general inverse matrix techniques based on adaptive algorithmic modelling methodologies is derived, yielding iterative methods for solving unsymmetric linear systems of irregular structure arising in complex computational problems in three space dimensions. The proposed class of approximate inverses is chosen as the basis for systems to which classic and preconditioned iterative methods are explicitly applied. Optimized versions of the proposed approximate inverse are presented using special storage (k-sweep) techniques, leading to economical forms of the approximate inverses. The application of the adaptive algorithmic methodologies to a characteristic nonlinear boundary value problem is discussed and numerical results are given.
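As a minimal illustration of using an approximate inverse as a preconditioner (far simpler than the paper's k-sweep construction), the sketch below uses a diagonal approximate inverse M ≈ A⁻¹ inside a Richardson iteration on a small unsymmetric system; the matrix and right-hand side are invented for the example:

```python
def richardson_preconditioned(A, b, M_diag, iters=200):
    """Solve A x = b iteratively via x <- x + M (b - A x),
    where M is a diagonal approximate inverse of A."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # preconditioned correction
        x = [x[i] + M_diag[i] * r[i] for i in range(n)]
    return x

# Small unsymmetric, diagonally dominant example with solution (1, 2, 1)
A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 13.0, 5.0]
M_diag = [1.0 / A[i][i] for i in range(3)]   # diagonal approximate inverse
x = richardson_preconditioned(A, b, M_diag)
```

With a diagonal M this reduces to Jacobi iteration, which converges here because the matrix is strictly diagonally dominant; richer approximate inverses simply make the iteration matrix I − MA contract faster.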
In the enormous and still poorly mastered gap between the macro level, where well-developed continuum theories of continuous media and engineering methods of calculation and design operate, and the atomic level, subordinate to the laws of quantum mechanics, there is an extensive meso-hierarchical level of the structure of matter. At this level, previously unprecedented products and technologies can be artificially created. Nanotechnology is a qualitatively new strategy in technology: it creates objects in exactly the opposite way—large objects are created from small ones [1]. We have developed a new method for modeling acoustic monitoring of a layered-block elastic medium with several inclusions of various physical and mechanical hierarchical structures [2]. An iterative process is developed for solving the direct problem for the case of three hierarchical inclusions of ranks l, m, and s, based on the use of 2D integro-differential equations. The degree of hierarchy of the inclusions is determined by the values of their ranks, which may differ; the first rank is associated with the atomic structure, and the following ranks are associated with increasing geometric sizes that contain inclusions of lower ranks and sizes. The hierarchical inclusions are located in different layers, one above the other: the upper one is anomalously plastic, the second anomalously elastic and the third anomalously dense. The degree of filling with inclusions of each rank differs among the three hierarchical inclusions. Modeling proceeds from smaller to larger inclusions; as a result, it becomes possible to determine the necessary parameters of the formed material from acoustic monitoring data.
Background: Three-dimensional printing technology may become a key factor in transforming clinical practice and significantly improving treatment outcomes. The introduction of this technique into pediatric cardiac surgery allows us to study features of the anatomy and spatial relations of a defect and to simulate the optimal surgical repair on a printed model in every individual case. Methods: We performed a prospective cohort study that included 29 children with congenital heart defects. The hearts and great vessels were modeled and printed out. Measurements of the same cardiac areas were taken in the same planes and points on multislice computed tomography images (group 1) and on printed 3D models of the hearts (group 2). Pre-printing treatment of the multislice computed tomography data and 3D model preparation were performed according to a newly developed algorithm. Results: The measurements taken on the 3D-printed cardiac models and on the tomographic images did not differ significantly, which allowed us to conclude that the models were highly accurate and informative. The new algorithm greatly simplifies and speeds up the preparation of a 3D model for printing while maintaining high accuracy and level of detail. Conclusions: The 3D-printed models provide an accurate preoperative assessment of the anatomy of a defect in each case. The new algorithm has several important advantages over other available programs. Together, these tools enable the development of customized preliminary plans for surgical repair of each specific complex congenital heart disease, predict possible issues, determine the optimal surgical tactics, and significantly improve surgical outcomes.
Although the Internet of Things has been widely applied, problems remain with cloud computing in digital smart medical Big Data collection, processing, analysis, and storage, especially the low efficiency of medical diagnosis. With the wide application of the Internet of Things and Big Data in the medical field, medical Big Data is increasing at a geometric rate, resulting in cloud service overload, insufficient storage, communication delay, and network congestion. To solve these medical and network problems, a medical big-data-oriented fog computing architecture and a BP algorithm application are proposed, and their structural advantages and characteristics are studied. This architecture enables the medical Big Data generated by medical edge devices and the existing data in the cloud service center to be calculated, compared and analyzed at the fog nodes through the Internet of Things. The diagnosis results are designed to reduce business processing delay and improve the diagnostic effect. Considering the weak computing power of each edge device, the artificial-intelligence BP neural network algorithm is used in the core computing model of the medical diagnosis system to improve the system's computing power, enhance intelligence-aided medical decision-making, and improve clinical diagnosis and treatment efficiency. In the application process, combined with the characteristics of medical Big Data technology, through fog architecture design and Big Data technology integration, we study the processing and analysis of heterogeneous data in the medical diagnosis system in the context of the Internet of Things. The results are promising: the medical platform network is smooth, the data storage space is sufficient, data processing and analysis are fast, the diagnostic effect is remarkable, and the system is a good assistant to doctors. It not only effectively addresses low clinical diagnosis and treatment efficiency and quality, but also reduces patients' waiting time, eases tension between doctors and patients, and improves medical service quality and management.
This research recognizes the limitations and challenges of adapting and applying Process Mining as a powerful tool and technique in a hypothetical lightweight Software Architecture (SA) evaluation framework. Process mining deals with the large-scale complexity of security and performance analysis, which are the goals of SA evaluation frameworks. Building on these conjectures, all Process Mining research in the realm of SA is thoroughly reviewed, and nine challenges for Process Mining adaptation are recognized. Process mining is embedded in the framework, and to boost the quality of the SA model for further analysis, the framework nominates the architectural discovery algorithms Flower, Alpha, Integer Linear Programming (ILP), Heuristic, and Inductive, and compares them against twelve quality criteria. Finally, testing the framework on three case studies confirms the feasibility of applying process mining to architectural evaluation. The extraction of the SA model is performed by the best model discovery algorithm, selected through intensive benchmarking in this research. Case studies of SA in service-oriented, Pipe-and-Filter, and component-based styles are presented, modeled and simulated with Hierarchical Colored Petri Net techniques based on the cases' documentation. Process mining within this framework deals with the system's log files obtained from SA simulation. Applying process mining is challenging, especially for an SA evaluation framework, as it has not been done before. The research identifies the problems of adapting process mining to a hypothetical lightweight SA evaluation framework and addresses them during solution development.
Lithium-ion batteries have been rapidly developed as clean energy sources in many industrial fields, such as new energy vehicles and energy storage. The core issues hindering their further promotion and application are reliability and safety. A digital twin model that maps onto the physical battery with high simulation accuracy helps to monitor internal states and improve battery safety. This work focuses on developing a digital twin model via a mechanism-data-driven parameter updating algorithm to increase the simulation accuracy of the internal and external characteristics of the battery over the full time domain under complex working conditions. An electrochemical model is first developed that considers how electrode particle size impacts battery characteristics. By adding descriptions of temperature distribution and particle-level stress, a multi-particle-size electrochemical-thermal-mechanical coupling model is established. Then, considering the different electrical and thermal effects among individual cells, a model for the battery pack is constructed. A digital twin model construction method is finally developed and verified with battery operating data.
This paper describes a set of on-site earthquake safety evaluation systems for buildings, developed on a network platform. The system embeds quantitative research results completed in accordance with the provisions of Post-earthquake Field Works, Part 2: Safety Assessment of Buildings, GB18208.2-2001, and was further developed into an easy-to-use software platform. The system is aimed at allowing engineering professionals, civil engineering technicians or earthquake-affected victims on site to assess damaged buildings over a network after earthquakes. The authors studied in depth the function structure, the process design of the safety evaluation module, and the hierarchical analysis algorithm module of the system, and developed the general architecture design, development technology and database design of the system. Technologies such as hierarchical architecture design and Java EE were used in the system development, and MySQL 5 was adopted for the database. The result is a complete evaluation process of information collection, safety evaluation, and output of damage and safety degrees, as well as query and statistical analysis of identified buildings. The system can play a positive role in sharing expert post-earthquake experience and promoting safety evaluation of buildings in a seismic field.
Oil product pipelines have features such as transporting multiple materials, ever-changing operating conditions, and synchronism between the oil input plan and the oil offloading plan. In this paper, an optimal model was established for a single-source, multi-distribution oil product pipeline, and scheduling plans were made based on supply. In the model, time node constraints, oil offloading plan constraints, and batch migration constraints were taken into consideration. The minimum deviation between the demanded oil volumes and the actual offloading volumes was chosen as the objective function, and a linear programming model was established on the basis of a known sequence of time nodes. The ant colony optimization algorithm and the simplex method were used to solve the model. The model was applied to a real pipeline and performed well.
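The objective of minimizing the deviation between demanded and actual offloading volumes can be illustrated with a toy allocation, a greedy stand-in for the paper's LP and ant colony solution; the demand volumes and pipeline capacity below are invented:

```python
def allocate_offloading(demands, capacity):
    """Allocate offloading volumes to distribution stations so that total
    absolute deviation from demand is minimized under a shared capacity.
    With a single shared capacity and x_i <= d_i, any allocation summing
    to min(capacity, total demand) is optimal; proportional scaling is one."""
    total = sum(demands)
    if total <= capacity:
        return list(demands)          # all demands can be met exactly
    scale = capacity / total          # proportionally reduce every station
    return [d * scale for d in demands]

demands = [120.0, 80.0, 200.0]        # demanded volumes per station (m^3)
alloc = allocate_offloading(demands, capacity=300.0)
deviation = sum(abs(d - a) for d, a in zip(demands, alloc))
```

Here the unavoidable deviation equals the capacity shortfall (400 − 300 = 100 m³); the real model must additionally respect time node and batch migration constraints, which is why an LP with heuristic search is needed rather than this one-line rule.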
The key problem for an adaptive mixture background model is that the parameters must adapt to the input data. To address this problem, a new method is proposed. First, recursive update equations are derived based on the maximum likelihood rule. Second, the forgetting factor and learning rate factor are redefined, and more general formulations are obtained by analyzing their practical functions. Finally, the convergence of the proposed algorithm is proved: the estimates converge to a local maximum of the data likelihood function according to stochastic approximation theory. Experiments show that the proposed learning algorithm surpasses previous ones in both convergence rate and accuracy.
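A recursive maximum-likelihood-style update with a learning rate, in the spirit (but not the exact form) of the equations derived in the paper, can be sketched for a single Gaussian component; `rho` plays the role of the learning rate factor:

```python
def recursive_gaussian_update(samples, rho=0.05, mu0=0.0, var0=1.0):
    """Online estimate of a Gaussian's mean and variance with learning rate rho."""
    mu, var = mu0, var0
    for x in samples:
        d = x - mu
        mu += rho * d                   # pull the mean toward the new sample
        var += rho * (d * d - var)      # pull the variance toward the squared error
    return mu, var

# A constant stream should drive the mean to that constant and the variance to ~0
mu, var = recursive_gaussian_update([3.0] * 400, rho=0.05)
```

In a full mixture background model, each pixel maintains several such components, and `rho` is derived from the forgetting factor and the component's posterior responsibility rather than held fixed.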
The performance of the model algorithm control method depends partly on the accuracy of the system's model. It is difficult to obtain a good model of a nonlinear system, especially when the nonlinearity is strong. Neural networks have the ability to "learn" the characteristics of a system through nonlinear mapping, and can represent nonlinear functions as well as their inverses. This paper presents a model algorithm control method using neural networks for nonlinear time-delay systems. Two neural networks are used in the control scheme: one is trained as the model of the nonlinear time-delay system, and the other produces the control inputs. The neural networks are combined with the model algorithm control method to control nonlinear time-delay systems. Three examples illustrate the proposed control method. The simulation results show that the proposed method achieves good control performance for nonlinear time-delay systems.
A multiple-model tracking algorithm based on a neural network and multiple-process-noise soft switching for maneuvering targets is presented. In this algorithm, the "current" statistical model and the neural network run in parallel. The neural network is used to modify the adaptive noise filtering algorithm based on the mean value and variance of the "current" statistical model for maneuvering targets, and the multiple-model tracking algorithm with multiple processing switches is then used to improve the precision of tracking maneuvering targets. The modified algorithm is shown to be effective by simulation.
The mechanism and modeling of land subsidence are complex because of the complicated geological background in Beijing, China. This paper analyzes the spatial relationship between land subsidence and three factors, namely the change of groundwater level, the thickness of compressible sediments, and the building area, using remote sensing and GIS tools in the upper-middle part of the alluvial-proluvial plain fan of the Chaobai River in Beijing. Spatial analysis shows significant non-linear relationships between the vertical displacement and the three factors. A Back Propagation Neural Network (BPN) model combined with a Genetic Algorithm (GA) was used to simulate the regional distribution of land subsidence. Results showed that at the field scale, groundwater level and land subsidence have a significant linear relationship. At the regional scale, however, the spatial distribution of the groundwater depletion funnel did not overlap with the land subsidence funnel. As for the compressible strata, the places with the greatest compressible strata thickness did not have the largest vertical displacement. The distributions of building area and land subsidence show no obvious spatial relationship. The BPN-GA simulation results illustrated that the accuracy of the model trained over fifty years is acceptable, with 51% of the verification data having an error of less than 20 mm and an average absolute error of about 32 mm. The BPN model can be used to simulate the general distribution of land subsidence in the study area. Overall, this work contributes to a better understanding of the complex relationship between land subsidence and the three influencing factors, and the distribution of land subsidence can be simulated by the trained BPN-GA model with limited available data and acceptable accuracy.
Recently, many regression models have been presented for predicting mechanical parameters of rocks from rock index properties. Although statistical analysis is a common method for developing regression models, selecting a suitable transformation of the independent variables in a regression model remains difficult. In this paper, a genetic algorithm (GA) is employed as a heuristic search method for selecting the best transformations of the independent variables (some index properties of rocks) in regression models for predicting uniaxial compressive strength (UCS) and modulus of elasticity (E). First, multiple linear regression (MLR) analysis was performed on a data set to establish predictive models. Then, two GA models were developed in which root mean squared error (RMSE) was defined as the fitness function. Results show that the GA models are more precise than the MLR models and are able to explain the relation between the intrinsic strength/elasticity properties and the index properties of rocks with simple formulations and acceptable accuracy.
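The transformation-selection idea can be illustrated on a toy scale: instead of a GA, the sketch below exhaustively scores a few candidate transformations of one independent variable by the RMSE of a simple linear fit. The data and candidate set are invented; the paper searches a much larger space with a GA and the same RMSE fitness:

```python
import math

def fit_rmse(xs, ys):
    """Least-squares line y = a + b*x; return RMSE of the fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return math.sqrt(sum((a + b * x - y) ** 2 for x, y in zip(xs, ys)) / n)

transforms = {
    "identity": lambda x: x,
    "sqrt": math.sqrt,
    "log": math.log,
    "square": lambda x: x * x,
}

# Synthetic "index property" and "strength" data with y = 3*sqrt(x) exactly
xs = [1.0, 4.0, 9.0, 16.0, 25.0]
ys = [3.0 * math.sqrt(x) for x in xs]

# Pick the transformation whose transformed variable fits a line best
best_name = min(transforms, key=lambda t: fit_rmse([transforms[t](x) for x in xs], ys))
```

With several predictors and a large transformation catalogue this exhaustive scan becomes infeasible, which is exactly where the GA's heuristic search pays off.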
This paper presents a nonlinear model predictive control (NMPC) approach based on support vector machines (SVM) and a genetic algorithm (GA) for multiple-input multiple-output (MIMO) nonlinear systems. An individual SVM is used to approximate each output of the controlled plant. The model is then used in the MPC scheme to predict the outputs of the controlled plant. The optimal control sequence is calculated using a GA with an elite-preserving strategy. Simulation results for a typical MIMO nonlinear system show that this method achieves good set-point tracking and disturbance rejection.
In the Additive Manufacturing field, current research on data processing mainly focuses on slicing large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted in the design of the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of the number of threads and the number of layers are investigated in a series of experiments. The experimental results show that the number of threads and the number of layers are two significant factors in the speedup ratio. The speedup versus the number of threads shows a positive relationship that agrees well with Amdahl's law, and the speedup versus the number of layers also shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the new algorithm adopts a pipeline parallel model and achieves a much higher speedup ratio and efficiency.
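The two speedup laws referenced above are easy to compute directly; the parallel fraction p = 0.9 below is only an example value, not one measured in the paper:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: fixed problem size, fraction p parallelizable, n workers."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's law: problem size scales with workers (scaled speedup)."""
    return (1.0 - p) + p * n

# With 90% parallel work on 8 cores:
a = amdahl_speedup(0.9, 8)      # bounded by the 10% serial fraction
g = gustafson_speedup(0.9, 8)   # grows nearly linearly with n
```

The contrast matches the experiments described: adding threads (fixed workload) gives diminishing Amdahl-style returns, while adding layers (growing workload) keeps the speedup scaling in the Gustafson regime.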
文摘This paper describes an innovative adaptive algorithmic modeling approach, for solving a wide class of e-business and strategic management problems under uncertainty conditions. The proposed methodology is based on basic ideas and concepts of four key-field interrelated sciences, i.e., computing science, applied mathematics, management sciences and economic sciences. Furthermore, the fundamental scientific concepts of adaptability and uncertainty are shown to play a critical role of major importance for a (near) optimum solution of a class of complex e-business/services and strategic management problems. Two characteristic case studies, namely measuring e-business performance under certain environmental pressures and organizational constraints and describing the relationships between technology, innovation and firm performance, are considered as effective applications of the proposed adaptive algorithmic modeling approach. A theoretical time-dependent model for the evaluation of firm e-business performances is also proposed.
文摘The following material is devoted to the generalization of the chaos modeling to random fields in communication channels and its application on the space-time filtering for the incoherent paradigm;that is the purpose of this research. The approach, presented hereafter, is based on the “Markovian” trend in modeling of random fields, and it is applied for the first time to the chaos field modeling through the well-known concept of the random “treatment” of deterministic dynamic systems, first presented by A. Kolmogorov, M. Born, etc. The material presents the generalized Stratonovich-Kushner Equations (SKE) for the optimum filtering of chaotic models of random fields and its simplified quasi-optimum solutions. In addition to this, the application of the multi-moment algorithms for quasi-optimum solutions is considered and, it is shown, that for scenarios, when the covariation interval of the input random field is less than the distance between the antenna elements, the gain of the space-time algorithms against their “time” analogies is significant. This is the general result presented in the following.
文摘Control of pH neutralization processes is challenging in the chemical process industry because of their inherent strong nonlinearity. In this paper, the model algorithmic control (MAC) strategy is extended to nonlinear processes using Hammerstein model that consists of a static nonlinear polynomial function followed in series by a linear impulse response dynamic element. A new nonlinear Hammerstein MAC algorithm (named NLH-MAC) is presented in detail. The simulation control results of a pH neutralization process show that NLH-MAC gives better control performance than linear MAC and the commonly used industrial nonlinear propotional plus integral plus derivative (PID) controller. Further simulation experiment demonstrates that NLH-MAC not only gives good control response, but also possesses good stability and robustness even with large modeling errors.
基金This project is supported by Key Science-Technology Project of Shanghai City Tenth Five-Year-Plan, China (No.031111002)Specialized Research Fund for the Doctoral Program of Higher Education, China (No.20040247033)Municipal Key Basic Research Program of Shanghai, China (No.05JC14060)
文摘In response to the production capacity and functionality variations, a genetic algorithm (GA) embedded with deterministic timed Petri nets(DTPN) for reconfigurable production line(RPL) is proposed to solve its scheduling problem. The basic DTPN modules are presented to model the corresponding variable structures in RPL, and then the scheduling model of the whole RPL is constructed. And in the scheduling algorithm, firing sequences of the Petri nets model are used as chromosomes, thus the selection, crossover, and mutation operator do not deal with the elements in the problem space, but the elements of Petri nets model. Accordingly, all the algorithms for GA operations embedded with Petri nets model are proposed. Moreover, the new weighted single-objective optimization based on reconfiguration cost and E/T is used. The results of a DC motor RPL scheduling suggest that the presented DTPN-GA scheduling algorithm has a significant impact on RPL scheduling, and provide obvious improvements over the conventional scheduling method in practice that meets duedate, minimizes reconfiguration cost, and enhances cost effectivity.
Funding: Key Science-Technology Foundation of Hunan Province, China (No. 05GK2007).
Abstract: The associated dynamic performance of the clamping force control valve used in a continuously variable transmission (CVT) is optimized. First, the structure and working principle of the valve are analyzed, and a dynamic model is set up by means of mechanism analysis. To check the validity of the modeling method, a prototype of the valve is manufactured for a comparison test, and the simulation result follows the experimental result quite well. An associated performance index is formulated that considers response time, overshoot, and energy saving, and five structural parameters are selected as design variables for deriving the optimal associated performance index. The optimization problem is solved by a genetic algorithm (GA) with the necessary constraints. Finally, the properties of the optimized valve are compared with those of the prototype, and the results prove that the dynamic performance indexes of the optimized valve are much better than those of the prototype.
Abstract: A class of general inverse matrix techniques based on adaptive algorithmic modelling methodologies is derived, yielding iterative methods for solving unsymmetric linear systems of irregular structure arising in complex computational problems in three space dimensions. The proposed class of approximate inverses is chosen as the basis to yield systems on which classic and preconditioned iterative methods are explicitly applied. Optimized versions of the proposed approximate inverse are presented using special storage (k-sweep) techniques, leading to economical forms of the approximate inverses. The application of the adaptive algorithmic methodologies to a characteristic nonlinear boundary value problem is discussed and numerical results are given.
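One way an approximate inverse M of A can drive an explicit iterative method is preconditioned Richardson iteration, x_{k+1} = x_k + M(b - A x_k). The diagonal M below is a deliberately crude approximate inverse chosen for illustration; the paper's k-sweep optimized forms are not reproduced here.

```python
def richardson_with_approx_inverse(A, M, b, x0, iters=50):
    """Preconditioned Richardson iteration x_{k+1} = x_k + M (b - A x_k),
    where M is an (approximate) inverse of A, given as dense lists."""
    n = len(b)
    x = list(x0)
    for _ in range(n * 0 + iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # correction x = x + M r
        x = [x[i] + sum(M[i][j] * r[j] for j in range(n)) for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
M = [[0.25, 0.0], [0.0, 1.0 / 3.0]]  # crude approximate inverse: diagonal only
b = [1.0, 2.0]
x = richardson_with_approx_inverse(A, M, b, [0.0, 0.0])
```

With this diagonal M the scheme reduces to Jacobi iteration, which converges here because A is diagonally dominant; a better approximate inverse would shrink the iteration count further.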
Abstract: In the enormous and still poorly mastered gap between the macro level, where well-developed continuum theories of continuous media and engineering methods of calculation and design operate, and the atomic level, subordinate to the laws of quantum mechanics, there lies an extensive meso-hierarchical level of the structure of matter. At this level, previously unprecedented products and technologies can be artificially created. Nanotechnology is a qualitatively new strategy in technology: it creates objects in exactly the opposite way, building large objects from small ones [1]. We have developed a new method for modeling acoustic monitoring of a layered-block elastic medium with several inclusions of various physical-mechanical hierarchical structures [2]. An iterative process is developed for solving the direct problem for the case of three hierarchical inclusions of the l-th, m-th, and s-th ranks, based on the use of 2D integro-differential equations. The degree of hierarchy of the inclusions is determined by the values of their ranks, which may differ; the first rank is associated with the atomic structure, and the following ranks are associated with increasing geometric sizes containing inclusions of lower ranks and sizes. The hierarchical inclusions are located in different layers, one above the other: the upper one is anomalously plastic, the second anomalously elastic, and the third anomalously dense. The degree of filling with inclusions of each rank differs among the three hierarchical inclusions. Modeling proceeds from smaller inclusions to larger ones; as a result, it becomes possible to determine the necessary parameters of the formed material from acoustic monitoring data.
Funding: Funded by the Ministry of Science and Higher Education of the Russian Federation as part of the World-Class Research Center Program: Advanced Digital Technologies (Contract No. 075-15-2022-311, dated 20.04.2022).
Abstract: Background: Three-dimensional printing technology may become a key factor in transforming clinical practice and significantly improving treatment outcomes. The introduction of this technique into pediatric cardiac surgery allows us to study the anatomical features and spatial relations of a defect and to simulate the optimal surgical repair on a printed model in every individual case. Methods: We performed a prospective cohort study that included 29 children with congenital heart defects. The hearts and great vessels were modeled and printed. Measurements of the same cardiac areas were taken in the same planes and points on multislice computed tomography images (group 1) and on printed 3D models of the hearts (group 2). Pre-printing processing of the multislice computed tomography data and 3D model preparation were performed according to a newly developed algorithm. Results: The measurements taken on the 3D-printed cardiac models and on the tomographic images did not differ significantly, which allowed us to conclude that the models were highly accurate and informative. The new algorithm greatly simplifies and speeds up the preparation of a 3D model for printing while maintaining high accuracy and level of detail. Conclusions: The 3D-printed models provide an accurate preoperative assessment of the anatomy of a defect in each case. The new algorithm has several important advantages over other available programs. The models enable the development of customized preliminary plans for the surgical repair of each specific complex congenital heart defect, help predict possible issues, determine the optimal surgical tactics, and significantly improve surgical outcomes.
Funding: Supported by the 2020 Foshan Science and Technology Project (No. 2020001005356); Baoling Qin received the grant.
Abstract: Although the Internet of Things has been widely applied, problems remain with cloud computing in digital smart medical Big Data collection, processing, analysis, and storage, especially the low efficiency of medical diagnosis. Moreover, with the wide application of the Internet of Things and Big Data in the medical field, medical Big Data is growing geometrically, resulting in cloud service overload, insufficient storage, communication delay, and network congestion. To solve these medical and network problems, a medical big-data-oriented fog computing architecture with a BP algorithm application is proposed, and its structural advantages and characteristics are studied. This architecture enables the medical Big Data generated by medical edge devices and the existing data in the cloud service center to be computed, compared, and analyzed at the fog nodes through the Internet of Things. The diagnosis results are designed to reduce business processing delay and improve the diagnosis effect. Considering the weak computing power of each edge device, the artificial-intelligence BP neural network algorithm is used in the core computing model of the medical diagnosis system to improve the system's computing power, enhance intelligence-aided medical decision-making, and improve clinical diagnosis and treatment efficiency. In the application process, combined with the characteristics of medical Big Data technology, fog architecture design and Big Data technology integration allow us to study the processing and analysis of the heterogeneous data of the medical diagnosis system in the context of the Internet of Things. The results are promising: the medical platform network is smooth, the data storage space is sufficient, data processing and analysis are fast, and the diagnosis effect is remarkable, making the system a good assistant to doctors. It not only effectively solves the problem of low clinical diagnosis and treatment efficiency and quality, but also reduces the waiting time of patients, effectively alleviates the contradiction between doctors and patients, and improves medical service quality and management level.
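The BP neural network at the heart of the diagnosis model can be sketched as a minimal one-hidden-layer network trained by back-propagation. The toy data, layer size, learning rate, and function name below are illustrative assumptions; no claim is made about the paper's actual network topology or medical features.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_bp(samples, hidden=3, epochs=500, lr=0.5, seed=0):
    """Minimal back-propagation network with one hidden layer, a
    stand-in for the diagnosis model run at a fog node. Returns the
    squared-error loss before and after training."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]

    def forward(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        return h, sigmoid(sum(w * hi for w, hi in zip(W2, h)))

    def loss():
        return sum((forward(x)[1] - y) ** 2 for x, y in samples)

    start = loss()
    for _ in range(epochs):
        for x, y in samples:
            h, out = forward(x)
            d_out = (out - y) * out * (1 - out)       # output-layer delta
            for j in range(hidden):                   # back-propagate to hidden layer
                d_h = d_out * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * d_out * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * d_h * x[i]
    return start, loss()

# Toy "diagnosis" data: two binary features plus a bias input of 1.0.
samples = [([0.0, 0.0, 1.0], 0.0), ([1.0, 0.0, 1.0], 1.0),
           ([0.0, 1.0, 1.0], 1.0), ([1.0, 1.0, 1.0], 1.0)]
start_loss, final_loss = train_bp(samples)
```

In the proposed architecture this training/inference loop would run on fog nodes close to the edge devices, with the cloud center handling aggregation and long-term storage.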
Funding: This paper is supported by Research Grant No. PP-FTSM-2022.
Abstract: This research recognizes the limitations and challenges of adapting and applying process mining as a powerful tool and technique in a hypothetical lightweight Software Architecture (SA) evaluation framework. Process mining deals with the large-scale complexity of security and performance analysis, which are the goals of SA evaluation frameworks. Based on these conjectures, all process mining research in the realm of SA is thoroughly reviewed, and nine challenges for process mining adaptation are recognized. Process mining is embedded in the framework, and to boost the quality of the SA model for further analysis, the framework nominates the architectural discovery algorithms Flower, Alpha, Integer Linear Programming (ILP), Heuristic, and Inductive, and compares them against twelve quality criteria. The framework's testing on three case studies confirms the feasibility of applying process mining to architectural evaluation. The extraction of the SA model is done by the best model discovery algorithm, selected through intensive benchmarking in this research. Case studies of SA in service-oriented, pipe-and-filter, and component-based styles are presented, modeled and simulated with Hierarchical Colored Petri Net techniques based on the cases' documentation. Process mining within this framework deals with the system's log files obtained from SA simulation. Applying process mining is challenging, especially for an SA evaluation framework, as it has not been done before. The research recognizes the problems of adapting process mining to a hypothetical lightweight SA evaluation framework and addresses these problems during the solution development.
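The first step of the Alpha discovery algorithm mentioned above, extracting the directly-follows and causal relations from an event log, can be sketched as follows; the tiny example log and its activity names are invented for illustration.

```python
def directly_follows(log):
    """Directly-follows relation of the Alpha algorithm:
    a > b iff b immediately follows a in some trace of the log."""
    rel = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            rel.add((a, b))
    return rel

def causal(rel):
    """Causal relation: a -> b iff a > b holds but b > a does not."""
    return {(a, b) for (a, b) in rel if (b, a) not in rel}

# Hypothetical event log: two traces over three activities.
log = [["register", "check", "decide"], ["register", "decide"]]
rel = directly_follows(log)
```

From these relations the Alpha algorithm goes on to derive places and build the discovered Petri net; the later discovery algorithms the framework compares (Heuristic, Inductive, ILP) start from similar log abstractions.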
Funding: Supported by the Natural Science Foundation of Shandong Province, China (No. ZR2023QE036).
Abstract: Lithium-ion batteries have developed rapidly as clean energy sources in many industrial fields, such as new energy vehicles and energy storage. The core issues hindering their further promotion and application are reliability and safety. A digital twin model that maps onto the physical battery with high simulation accuracy helps to monitor internal states and improve battery safety. This work focuses on developing a digital twin model via a mechanism-and-data-driven parameter-updating algorithm to increase the simulation accuracy of the internal and external characteristics of the battery over the full time domain under complex working conditions. An electrochemical model is first developed that considers how electrode particle size affects battery characteristics. By adding descriptions of temperature distribution and particle-level stress, a multi-particle-size electrochemical-thermal-mechanical coupling model is established. Then, considering the different electrical and thermal effects among individual cells, a model for the battery pack is constructed. A digital twin model construction method is finally developed and verified with battery operating data.
Funding: Major Research Plan of the National Natural Science Foundation of China under Grant No. 91315301-10; Project of Earthquake Code Compilation and Revising: Post-earthquake Field Works, Part 2: Safety Assessment of Buildings under Grant No. 14410024701; Basic Scientific Research Special Project of IEM, CEA under Grant No. 2009A01.
Abstract: This paper describes a set of on-site earthquake safety evaluation systems for buildings, developed on a network platform. The system embeds quantitative research results produced in accordance with the provisions of Post-earthquake Field Works, Part 2: Safety Assessment of Buildings (GB18208.2-2001), and develops them further into an easy-to-use software platform. The system is aimed at allowing engineering professionals, civil engineering technicians, or earthquake-affected victims on site to assess damaged buildings through a network after earthquakes. The authors studied in depth the function structure, the process design of the safety evaluation module, and the hierarchical analysis algorithm module of the system, and developed the general architecture design, development technology, and database design of the system. Technologies such as hierarchical architecture design and Java EE were used in the system development, and MySQL 5 was adopted for the database. The result is a complete evaluation process of information collection, safety evaluation, and output of damage and safety degrees, as well as query and statistical analysis of evaluated buildings. The system can play a positive role in sharing expert post-earthquake experience and promoting the safety evaluation of buildings in a seismic field.
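The hierarchical analysis algorithm module presumably follows the analytic-hierarchy-process pattern of deriving criterion weights from a pairwise comparison matrix; the sketch below works under that assumption, and the 2x2 example matrix is invented.

```python
def ahp_weights(P, iters=100):
    """Approximate the principal eigenvector of a pairwise-comparison
    matrix P by power iteration; the normalized result gives the
    criterion weights used in hierarchical analysis."""
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [wi / s for wi in w]  # normalize so the weights sum to 1
    return w

# Consistent example: criterion 1 is judged twice as important as criterion 2.
w = ahp_weights([[1.0, 2.0], [0.5, 1.0]])
```

For a consistent matrix the weights are exact (here 2/3 and 1/3); for an inconsistent one, a consistency ratio check would normally follow.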
Funding: Part of the program "Study on the mechanism of complex heat and mass transfer during batch transport process in products pipelines", funded under the National Natural Science Foundation of China (Grant No. 51474228).
Abstract: Oil product pipelines have distinctive features: they transport multiple materials, operate under ever-changing conditions, and require synchronism between the oil input plan and the oil offloading plan. In this paper, an optimal model is established for a single-source, multi-distribution oil product pipeline, and scheduling plans are made based on supply. The model takes into consideration time node constraints, oil offloading plan constraints, and batch migration constraints. The minimum deviation between the demanded oil volumes and the actual offloading volumes is chosen as the objective function, and a linear programming model is established on the basis of a known sequence of time nodes. The ant colony optimization algorithm and the simplex method are used to solve the model. The model was applied to a real pipeline and performed well.
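The objective function, the total deviation between demanded and actual offloading volumes, can be stated directly in code. The greedy allocator shown with it is only a hypothetical stand-in used to generate a feasible plan; the paper solves the actual model with ant colony optimization and the simplex method.

```python
def offloading_deviation(demanded, actual):
    """Scheduling objective: total absolute deviation between the
    demanded oil volumes and the actual offloading volumes."""
    return sum(abs(d - a) for d, a in zip(demanded, actual))

def greedy_offload(demanded, available):
    """Hypothetical greedy allocation: satisfy each distribution
    station in turn until the available batch volume is exhausted."""
    plan = []
    for d in demanded:
        take = min(d, available)
        plan.append(take)
        available -= take
    return plan

plan = greedy_offload([30, 50, 20], 60)
```

A real schedule would also have to respect the time-node and batch-migration constraints, which is what makes the LP formulation necessary.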
Funding: The Doctorate Foundation of the Engineering College, Air Force Engineering University.
Abstract: The key problem of the adaptive mixture background model is that its parameters must adapt to the input data. To address this problem, a new method is proposed. First, the recursive equations are derived based on the maximum likelihood rule. Second, the forgetting factor and the learning rate factor are redefined, and more general formulations of them are obtained by analyzing their practical functions. Finally, the convergence of the proposed algorithm is proved: the estimate converges to a local maximum of the data likelihood function according to stochastic approximation theory. Experiments show that the proposed learning algorithm outperforms previous ones in both convergence rate and accuracy.
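The recursive updates with a learning-rate factor and a forgetting factor have the familiar exponential form. This sketch shows one Gaussian mode's mean/variance update and the mode-weight update; the symbols rho and alpha are generic placeholders, not the paper's redefined factors.

```python
def update_gaussian(mean, var, x, rho):
    """Recursive update of one Gaussian mode of a mixture background
    model; rho is the learning-rate factor applied to the new sample x."""
    new_mean = (1 - rho) * mean + rho * x
    new_var = (1 - rho) * var + rho * (x - new_mean) ** 2
    return new_mean, new_var

def update_weight(w, matched, alpha):
    """Mode-weight update with forgetting factor alpha: weights of
    matched modes grow, unmatched modes decay."""
    return (1 - alpha) * w + alpha * (1.0 if matched else 0.0)

m, v = update_gaussian(0.0, 1.0, 10.0, 0.1)
```

The paper's contribution is precisely to replace fixed rho and alpha with data-dependent formulations derived from the maximum likelihood rule.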
Funding: Supported by the Brain Korea 21 PLUS Project, National Research Foundation of Korea (NRF-2013R1A2A2A01068127, NRF-2013R1A1A2A10009458) and the Jiangsu Province University Natural Science Research Project (13KJB510003).
Abstract: The performance of the model algorithmic control method depends in part on the accuracy of the system's model. It is difficult to obtain a good model of a nonlinear system, especially when the nonlinearity is strong. Neural networks have the ability to "learn" the characteristics of a system through nonlinear mapping, representing nonlinear functions as well as their inverses. This paper presents a model algorithmic control method using neural networks for nonlinear time-delay systems. Two neural networks are used in the control scheme: one is trained as the model of the nonlinear time-delay system, and the other produces the control inputs. The neural networks are combined with the model algorithmic control method to control the nonlinear time-delay systems. Three examples are used to illustrate the proposed control method. The simulation results show that the proposed method achieves good control performance for nonlinear time-delay systems.
Abstract: A multiple-model tracking algorithm based on a neural network and multiple-process-noise soft switching for maneuvering targets is presented. In this algorithm, the "current" statistical model and the neural network run in parallel. The neural network is used to modify the adaptive noise filtering algorithm based on the mean value and variance of the "current" statistical model for maneuvering targets, and the multiple-model tracking algorithm with multiple-process switching is then used to improve the precision of tracking maneuvering targets. The modified algorithm is shown to be effective by simulation.
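The soft switching among process-noise hypotheses can be illustrated by likelihood-weighted fusion of per-model predictions. The Gaussian residual scoring and the scale parameter below are illustrative choices, not the paper's exact weighting scheme.

```python
import math

def soft_switch_prediction(preds, residuals, scale=1.0):
    """Soft switch over multiple process-noise models: weight each
    model's prediction by a likelihood-like score of its innovation
    (residual), then fuse the predictions."""
    scores = [math.exp(-(r / scale) ** 2) for r in residuals]
    total = sum(scores)
    weights = [s / total for s in scores]
    fused = sum(w * p for w, p in zip(weights, preds))
    return fused, weights

# Model 0 fits the data (zero residual); model 1 is far off.
fused, weights = soft_switch_prediction([1.0, 3.0], [0.0, 100.0])
```

Because the weights vary smoothly with the residuals, the tracker blends models during maneuver onset instead of switching abruptly.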
Funding: Under the auspices of the National Natural Science Foundation of China (No. 41201420, 41130744), the Beijing Nova Program (No. Z111106054511097), and the Foundation of the Beijing Municipal Commission of Education (No. KM201110028016).
Abstract: The mechanism and modeling of land subsidence are complex because of the complicated geological background in Beijing, China. This paper analyzes the spatial relationship between land subsidence and three factors, namely the change of groundwater level, the thickness of compressible sediments, and the building area, using remote sensing and GIS tools in the upper-middle part of the alluvial-proluvial plain fan of the Chaobai River in Beijing. Spatial analysis shows significant non-linear relationships between the vertical displacement and the three factors. A Back Propagation Neural Network (BPN) model combined with a Genetic Algorithm (GA) was used to simulate the regional distribution of land subsidence. Results showed that at the field scale, groundwater level and land subsidence have a significant linear relationship. At the regional scale, however, the spatial distribution of the groundwater depletion funnel does not overlap with the land subsidence funnel. As for the compressible strata, the places with the greatest thickness do not have the largest vertical displacement, and the distributions of building area and land subsidence show no obvious spatial relationship. The BPN-GA simulation illustrates that the accuracy of the model trained on fifty years of data is acceptable, with 51% of the verification data having an error of less than 20 mm and an average absolute error of about 32 mm. The BPN model can be used to simulate the general distribution of land subsidence in the study area. Overall, this work contributes to a better understanding of the complex relationship between land subsidence and the three influencing factors, and the distribution of land subsidence can be simulated by the trained BPN-GA model with the limited available data and acceptable accuracy.
Abstract: Recently, many regression models have been presented for the prediction of mechanical parameters of rocks from rock index properties. Although statistical analysis is a common method for developing regression models, the selection of a suitable transformation of the independent variables in a regression model remains difficult. In this paper, a genetic algorithm (GA) is employed as a heuristic search method for selecting the best transformation of the independent variables (some index properties of rocks) in regression models for the prediction of uniaxial compressive strength (UCS) and modulus of elasticity (E). First, multiple linear regression (MLR) analysis was performed on a data set to establish predictive models. Then, two GA models were developed in which the root mean squared error (RMSE) was defined as the fitness function. Results show that the GA models are more precise than the MLR models and are able to explain the relation between the intrinsic strength/elasticity properties and the index properties of rocks with simple formulations and acceptable accuracy.
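The idea of searching over variable transformations with RMSE as the fitness can be made concrete. Here the small candidate set is scanned exhaustively, where the paper's GA would search a much larger transformation space heuristically; the data points and transformation names are invented for illustration.

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error, the fitness function of the search."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def best_transformation(x, y, transforms):
    """Pick the transformation of the independent variable giving the
    lowest RMSE after a linear fit on the transformed variable."""
    best_name, best_err = None, float("inf")
    for name, f in transforms.items():
        xt = [f(v) for v in x]
        a, b = fit_line(xt, y)
        err = rmse(y, [a + b * v for v in xt])
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err

transforms = {"identity": lambda v: v, "log": math.log, "square": lambda v: v * v}
x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 4.0, 9.0, 16.0]   # exactly quadratic in x
name, err = best_transformation(x, y, transforms)
```

With many index properties and transformation choices per variable, the combinatorial space explodes, which is what motivates the GA search in the paper.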
Funding: Supported by the National Natural Science Foundation of China (21076179) and the National Basic Research Program of China (2012CB720500).
Abstract: This paper presents a nonlinear model predictive control (NMPC) approach based on a support vector machine (SVM) and a genetic algorithm (GA) for multiple-input multiple-output (MIMO) nonlinear systems. An individual SVM is used to approximate each output of the controlled plant. The model is then used in the MPC scheme to predict the outputs of the controlled plant. The optimal control sequence is calculated using a GA with an elite-preserving strategy. Simulation results for a typical MIMO nonlinear system show that this method has a good ability for set-point tracking and disturbance rejection.
Abstract: In the field of Additive Manufacturing, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The trend of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus layer count also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel speedup method. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the outstanding performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline-parallel algorithm makes full use of the multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, the new algorithm adopts a pipeline-parallel model and achieves a much higher speedup ratio and efficiency.
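The pipeline parallel mode can be sketched with one thread per stage connected by FIFO queues, so that while one stage processes layer i the next stage processes layer i-1. The three lambda "stages" below are trivial placeholders for the real plane-intersection, contour-computation, and contour-linking steps of slicing.

```python
import queue
import threading

def pipeline(items, stages):
    """Pipeline-parallel processing: each stage runs in its own thread,
    connected by queues; a sentinel shuts the pipeline down in order."""
    SENTINEL = object()
    qs = [queue.Queue() for _ in range(len(stages) + 1)]

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is SENTINEL:
                q_out.put(SENTINEL)   # propagate shutdown downstream
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in items:                # feed layers into the first stage
        qs[0].put(item)
    qs[0].put(SENTINEL)
    results = []
    while True:                       # drain the last stage in order
        out = qs[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Three hypothetical slicing stages applied to three "layers".
layers = pipeline([0, 1, 2], [lambda z: z * 2, lambda z: z + 1, lambda z: z ** 2])
```

Because each stage-to-stage queue has a single producer and a single consumer, layer order is preserved, matching the in-order output a slicer needs.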