Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model equivalent to the artificial society in order to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
Model checking is an automated formal verification method for checking whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertain property verification for these systems is still lacking. In uncertain environments, agents must make judicious decisions based on subjective knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities, proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and further illustrate the practical application of our approach with an example of a train control system.
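The reduction from FCTLK to FCTL model checking rests on evaluating fuzzy temporal operators over a fuzzy Kripke structure. As a minimal illustration, the sketch below computes the truth degree of EX φ under the common max-min (Gödel) semantics; the states, transition degrees, and valuation of φ are assumptions for illustration, not taken from the paper.

```python
def ex_degree(R, phi):
    """Truth degree [[EX phi]](s) = max over successors t of min(R[s][t], phi[t])."""
    return [max(min(R[s][t], phi[t]) for t in range(len(phi)))
            for s in range(len(R))]

# Illustrative 3-state fuzzy Kripke structure (assumed, not from the paper).
R = [[0.0, 0.9, 0.4],   # R[s][t]: degree of the transition s -> t
     [0.0, 0.0, 1.0],
     [0.7, 0.0, 0.0]]
phi = [0.2, 0.8, 0.5]   # truth degree of phi at each state

degrees = ex_degree(R, phi)  # [0.8, 0.5, 0.2]
```

Because min/max only select among the given degrees, the computation stays within the unit interval and needs no extra machinery beyond ordinary CTL-style fixpoint evaluation.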
This work presents the "nth-Order Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems" (abbreviated as "nth-FASAM-N"), which is shown to be the most efficient methodology for computing exact expressions of sensitivities, of any order, of model responses with respect to features of model parameters and, subsequently, with respect to the model's uncertain parameters, boundaries, and internal interfaces. The unparalleled efficiency and accuracy of the nth-FASAM-N methodology stem from the maximal reduction of the number of adjoint computations (which are considered "large-scale" computations) required for high-order sensitivities. When applying the nth-FASAM-N methodology to compute second- and higher-order sensitivities, the number of large-scale computations is proportional to the number of "model features" rather than to the number of model parameters (which are considerably more numerous than the features). When a model has no "feature" functions of parameters, but comprises only primary parameters, the nth-FASAM-N methodology becomes identical to the extant nth-CASAM-N ("nth-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems"). Both the nth-FASAM-N and nth-CASAM-N methodologies are formulated in linearly increasing higher-dimensional Hilbert spaces, as opposed to exponentially increasing parameter-dimensional spaces, thus overcoming the curse of dimensionality in sensitivity analysis of nonlinear systems. Both methodologies are incomparably more efficient and more accurate than other methods (statistical, finite differences, etc.) for computing exact expressions of response sensitivities of any order with respect to the model's features and/or primary uncertain parameters, boundaries, and internal interfaces.
This paper presents a kind of artificial intelligent system, the generalized computing system (GCS), and introduces its mathematical description, implementation problem, and learning problem.
Recently, the possibility of using DNA as a computing tool has aroused wide interest among researchers. In this paper, we first explore the mechanism of DNA computing and its biological mathematics, based on the mechanism of biological DNA. We then integrate DNA computing with evolutionary computation, fuzzy systems, neural networks, and chaotic systems in soft computing technologies. Finally, we outline prospects for further work on DNA bio-soft computing.
Industrial big data integration and sharing (IBDIS) is of great significance in managing and providing data for big data analysis in manufacturing systems. A novel fog-computing-based IBDIS approach called Fog-IBDIS is proposed to integrate and share industrial big data with high raw-data security and low network traffic load by moving the integration task from the cloud to the edge of the network. First, a task flow graph (TFG) is designed to model the data analysis process. The TFG is composed of several tasks, which are executed by the data owners through the Fog-IBDIS platform in order to protect raw data privacy. Second, the function of Fog-IBDIS to enable data integration and sharing is presented in five modules: TFG management, compilation and running control, the data integration model, the basic algorithm library, and the management component. Finally, a case study is presented to illustrate the implementation of Fog-IBDIS, which ensures raw data security by deploying the analysis tasks to be executed by the data generators, and eases the network traffic load by greatly reducing the volume of transmitted data.
To further improve delay performance in multi-cell cellular edge computing systems, a new delay-driven joint communication and computing resource backpressure (BP) scheduling algorithm is proposed. First, mathematical models of the communication delay and computing delay in multi-cell cellular edge computing systems are established and expressed as virtual delay queues. Then, based on the virtual delay models, a novel joint wireless subcarrier and virtual machine resource scheduling algorithm is proposed to stabilize the virtual delay queues within the framework of the BP scheduling principle. Finally, the delay performance of the proposed virtual-queue-based BP scheduling algorithm is evaluated via simulation experiments and compared with the traditional queue-length-based BP scheduling algorithm. Results show that, under the considered simulation parameters, the total delay of the proposed BP scheduling algorithm is always lower than that of the traditional queue-length-based BP algorithm; the reduction in total delay can be as high as 51.29% when the computing resources are heterogeneously configured. Therefore, compared with traditional queue-length-based BP scheduling algorithms, the proposed virtual-delay-queue-based BP scheduling algorithm can further reduce delay in multi-cell cellular edge computing systems.
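The backpressure principle the algorithm builds on can be illustrated in a few lines: in each slot, serve the queue maximizing backlog × achievable rate, then update each (virtual) queue with a Lindley-type recursion. The queue values and rates below are invented for illustration; the paper's virtual delay queues and joint subcarrier/VM model are far more detailed.

```python
def bp_schedule(queues, rates):
    """Backpressure rule: serve the index maximizing backlog * achievable rate."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

def update(queues, arrivals, served, rates):
    """Lindley-type queue update: Q <- max(Q + arrivals - departures, 0)."""
    for i in range(len(queues)):
        dep = rates[i] if i == served else 0.0
        queues[i] = max(queues[i] + arrivals[i] - dep, 0.0)
    return queues

q = [4, 2, 6]
served = bp_schedule(q, [1.0, 2.0, 0.5])           # weights 4.0, 4.0, 3.0 -> index 0
q = update(q, [1, 1, 1], served, [1.0, 2.0, 0.5])  # [4.0, 3.0, 7.0]
```

Replacing the backlog term with a virtual delay value, as the paper proposes, keeps this same greedy structure while steering the scheduler toward delay rather than queue length.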
Computer Algebra Systems (CASs) have been extensively used in higher education. The reasons are many: they visualize mathematical problems, correlate real-world problems on a conceptual level, and are flexible, simple to use, and accessible from anywhere. However, there is still room for improvement. CAS optimization is the set of best practices and techniques for keeping a CAS running optimally; best practices concern how to carry out a mathematical task or configure your system. In this paper, we examine these techniques. The documentation sheets of CASs are the data source we used to compare them and examine their characteristics. The results reveal many tips that can be followed to accelerate performance.
Linguistic dynamic systems (LDS) are dynamic processes involving computing with words (CW) for the modeling and analysis of complex systems. In this paper, a fuzzy neural network (FNN) structure for LDS is proposed. In addition, an improved nonlinear particle swarm optimization is employed for training the FNN. Experimental results on logistics formulation demonstrate the feasibility and efficiency of this FNN model.
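The abstract does not detail the improved nonlinear PSO used to train the FNN, so the sketch below shows only the standard PSO update rule, v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x), minimizing a one-dimensional quadratic as a stand-in for the FNN training loss. All parameter values are conventional defaults, not the paper's.

```python
import random

def pso(f, lo, hi, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Bare-bones particle swarm optimization minimizing f over [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(particles)]  # positions
    vs = [0.0] * particles                                # velocities
    pbest = xs[:]                                         # personal bests
    gbest = min(xs, key=f)                                # global best
    for _ in range(iters):
        for i in range(particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10, 10)  # converges near the minimum at 3
```

In FNN training, x would be the flattened weight vector and f the network's training error; the "improved nonlinear" variant presumably adapts w and the coefficients over iterations.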
Granular Computing on partitions (RST), coverings (GrCC), and neighborhood systems (LNS) is examined: (1) The order of generality is RST, GrCC, and then LNS. (2) The quotient structure: in RST, it is called the quotient set. In GrCC, it is a simplicial complex, called the nerve of the covering in combinatorial topology. For LNS, the structure has no known description. (3) The approximation space of RST is a topological space generated by a partition, called a clopen space. For LNS, it is a generalized/pretopological space, which is more general than a topological space. For GrCC, there are two possibilities. One is a special case of LNS, namely the topological space generated by the covering. The other is the topology generated by the finite intersections of the members of the covering. The first treats the covering as a base, the second as a subbase. (4) Knowledge representations in RST are symbol-valued systems. In GrCC, they are expression-valued systems. In LNS, they are multivalued systems, reported in 1998. (5) The RST and GrCC representation theories are complete in the sense that granular models can be fully recaptured from the knowledge representations.
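Point (3) distinguishes reading a covering as a base versus as a subbase. The subbase reading generates its topology from all finite intersections of covering members; the sketch below computes that intersection closure over a toy universe (the sets are assumptions for illustration only).

```python
from itertools import combinations

def finite_intersections(cover):
    """All finite intersections of covering members: the base a subbase generates."""
    sets = {frozenset(s) for s in cover}
    out = set(sets)
    for r in range(2, len(sets) + 1):
        for combo in combinations(sets, r):
            out.add(frozenset.intersection(*combo))
    return out

cover = [{1, 2}, {2, 3}]
base = finite_intersections(cover)
# {1,2} ∩ {2,3} = {2} is added, so the subbase reading yields a strictly
# finer collection than treating the covering itself as a base.
```

This is exactly why the two GrCC readings can produce different topologies: the base reading never manufactures new sets, while the subbase reading does.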
Computing information systems have entered a new stage, the security of these systems has become an increasingly serious concern, and research on system security is developing in depth. This paper discusses neuro-computing applications in the security of network information systems.
Data mining techniques and information personalization have grown significantly in the past decade, and an enormous volume of data is generated every day. Recommender systems can help users find their specific information within this extensive volume. Several techniques have been presented for the development of recommender systems (RSs); one of these is evolutionary computing (EC), which can optimize and improve RSs in various applications. This study surveys the relevant publications, focusing on aspects such as the recommendation techniques, the evaluation methods, and the datasets used.
Quorum systems have been used to solve the problem of data consistency in distributed fault-tolerant systems. But when intrusions occur, traditional quorum systems have disadvantages: synchronous quorum systems are subject to DoS attacks, while asynchronous quorum systems need a larger system size (at least 3f+1 for generic data, and f fewer for self-verifying data). To solve these problems, an intrusion-tolerant quorum system (ITQS) with a hybrid time model, based on a trusted timely computing base (TTCB), is presented. The TTCB is a trusted, secure real-time component inside the server, with a well-defined interface and separated from the operating system. It operates in a synchronous communication environment, while the application layer in the server handles read-write requests and executes update-copy protocols asynchronously. This architectural hybridization of synchrony and asynchrony achieves data consistency and availability correctly. We also build two kinds of ITQSs based on the TTCB, i.e., the symmetrical and the asymmetrical TTCB quorum systems. In the performance evaluations, we show that TTCB quorum systems have smaller size, lower load, and higher availability.
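The quoted size bounds for asynchronous quorum systems come down to intersection arithmetic: any two quorums of size q drawn from n servers overlap in at least 2q − n members. The sketch below checks the two cases from the abstract; the fault count f is an assumed example value.

```python
def min_intersection(n, q):
    """Any two quorums of size q from n servers overlap in >= 2*q - n servers."""
    return max(2 * q - n, 0)

f = 2  # assumed number of faulty servers (illustrative)

# Generic data: n = 3f+1 servers with quorums of size 2f+1 overlap in f+1
# servers, so every quorum pair shares at least one correct server even if
# all f faulty servers sit in the intersection.
generic_overlap = min_intersection(3 * f + 1, 2 * f + 1)   # f + 1

# Self-verifying data tolerates f fewer servers: with n = 2f+1 and quorums
# of size f+1, two quorums still share at least one server, which suffices
# because forged values are detectable.
self_verifying_overlap = min_intersection(2 * f + 1, f + 1)  # 1
```

The hybrid TTCB design sidesteps these asynchronous bounds by anchoring the critical steps in a small synchronous trusted component.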
In parametric cost estimating, objections to using statistical Cost Estimating Relationships (CERs) and parametric models include low statistical significance due to limited data points, biases in the underlying data, and lack of robustness. Soft Computing (SC) technologies are used for building intelligent cost models. The SC models are systematically evaluated based on their training and prediction of historical cost data for airborne avionics systems, and results indicating the strengths and weaknesses of each model are presented. In general, the intelligent cost models have higher prediction precision, better data adaptability, and stronger self-learning capability than the regression CERs.
After a brief emphasis on the interconnected world, including cyber-physical systems of systems, the increasing importance of decision-making by autonomous, quasi-autonomous, and autonomic systems is emphasised. Promising roles of computational understanding, computational awareness, and computational wisdom for better autonomous decision-making are outlined, and the contributions of simulation-based approaches are listed.
The healthcare industry is rapidly adapting to new computing environments and technologies. With academics increasingly committed to developing and enhancing healthcare solutions that combine the Internet of Things (IoT) and edge computing, there is a greater need than ever to adequately monitor the data being acquired, shared, processed, and stored. The growth of cloud, IoT, and edge computing models presents severe data privacy concerns, especially in the healthcare sector, yet rigorous research to develop appropriate data privacy solutions for healthcare is still lacking. This paper discusses the current state of privacy-preservation solutions in IoT and edge healthcare applications. It identifies the common strategies used to provide privacy through intelligent edges and technologies in healthcare systems. Furthermore, the study addresses the technical complexity, efficacy, and sustainability limits of these methods, and highlights the privacy issues and current research directions that have driven IoT and edge healthcare solutions, encouraging more insightful future applications.
Nowadays, when the lifespan of sensor nodes is threatened by the shortage of energy available for communication, sink mobility is an excellent technique for increasing that lifespan. When communicating via a WSN, the use of nodes as a transmission medium eliminates the need for a physical one. Sink mobility in a dynamic network topology presents a problem for sensor nodes that have reserved resources: unless the route is revised to reflect the mobile sink's location, it will be inefficient for delivering data effectively. In the clustering strategy, nodes are grouped together to improve communication, the cluster head receives data from compatible nodes, and the sink receives the aggregated data from the head. The cluster head is the central node in the conventional technique; a single node uses more energy than a node that is routed around a dead node, and heavier use of a route shortens its lifespan. The proposed work demonstrates how sensor node paths can be modified effectively at lower cost by utilising a virtual grid. The best routes are maintained mainly through sink node communication on routes based on dynamic route adjustment (VGDRA); only specific nodes are engaged to re-align data delivery to the mobile sink in accordance with the new route-reconstruction paradigm. According to the results, VGDRA schemes have a longer lifespan because of the reduced number of loops.
Recently, with the growth of cyber-physical systems (CPS), several applications have begun to be deployed in CPS to connect the cyber space with the physical world effectively. Besides, cloud computing (CC)-enabled CPS offers huge processing and storage resources, which is helpful for a range of application areas. At the same time, with the massive growth of applications in the CPS environment, the energy utilization of cloud-enabled CPS has gained significant interest. To improve the energy effectiveness of the CC platform, virtualization technologies are employed for resource management, and applications are executed via virtual machines (VMs). Since effective scheduling of resources plays an important role in the design of cloud-enabled CPS, this paper focuses on the design of a chaotic sandpiper optimization based VM scheduling (CSPO-VMS) technique for energy-efficient CPS. The CSPO-VMS technique is utilized to search for the optimum VM migration solution and helps to choose an effective scheduling strategy. The CSPO algorithm integrates the traditional SPO algorithm with chaos theory, substituting the main parameter with a chaotic one; the chaotic concept is included to improve the SPO algorithm's convergence rate and its ability to determine the global optimum. The CSPO-VMS technique also derives a fitness function to choose the optimal scheduling strategy in the CPS environment. To demonstrate the enhanced performance of the CSPO-VMS technique, a wide range of simulations were carried out and the results examined under varying aspects. The simulation results confirmed the improved performance of the CSPO-VMS technique over recent methods in terms of different measures.
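The abstract does not say which chaotic map replaces the SPO control parameter, so the logistic map below, a common choice in chaos-enhanced metaheuristics, stands in here purely as an assumed example of how a deterministic chaotic sequence substitutes for a random or fixed parameter.

```python
def logistic_map(x0, r=4.0, n=5):
    """Generate n values of the logistic map x <- r*x*(1-x); for r = 4 and
    x0 in (0, 1) the sequence is chaotic and stays in [0, 1]."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

# Each iteration of the optimizer would draw its control parameter from this
# sequence instead of a fixed value or a pseudo-random draw.
seq = logistic_map(0.3)
```

The appeal of this substitution is that chaotic sequences are deterministic yet non-repeating and cover the interval densely, which is often credited with better exploration than uniform random draws.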
Predictive Emission Monitoring Systems (PEMS) offer a cost-effective and environmentally friendly alternative to Continuous Emission Monitoring Systems (CEMS) for monitoring pollution from industrial sources. Multiple regression is one of the fundamental statistical techniques for describing the relationship between dependent and independent variables. This model can be effectively used to develop a PEMS that estimates the amount of pollution emitted by industrial sources where the fuel composition and other process-related parameters are available; these are often sufficient to predict the emission discharge with acceptable accuracy. In cases where PEMS are accepted as an alternative to CEMS, which use gas analyzers, they can provide cost savings and substantial benefits for ongoing system support and maintenance. The described mathematical concept is based on the matrix-algebra representation of multiple regression, implemented with multiple-precision arithmetic techniques. Challenging numerical examples for statistical big data analysis are investigated; they illustrate the computational accuracy and efficiency gained by increasing the precision level. The programming language C++ is used for the mathematical model implementation. The data for research and development, including the independent fuel data and dependent NOx emissions data, were obtained from CEMS software installed at a petrochemical plant.
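The regression core of a PEMS reduces to solving the normal equations β = (XᵀX)⁻¹Xᵀy. The paper does this in C++ with multiple-precision matrix algebra; the sketch below shows only the one-predictor case in plain Python, on a tiny invented dataset (exactly y = 1 + 2x), not the plant data from the paper.

```python
def fit_simple(xs, ys):
    """Least-squares fit of y = b0 + b1*x via the closed-form normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b0 = (sy - b1 * sx) / n                          # intercept
    return b0, b1

# Illustrative data lying exactly on y = 1 + 2x.
b0, b1 = fit_simple([1, 2, 3, 4], [3, 5, 7, 9])  # -> (1.0, 2.0)
```

With many correlated process parameters, the n × n terms in XᵀX can lose precision catastrophically in double arithmetic, which motivates the paper's multiple-precision implementation.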
Funding: the National Key Research and Development Program of China (2021YFF0900800); the National Natural Science Foundation of China (61972276, 62206116, 62032016); the New Liberal Arts Reform and Practice Project of the National Ministry of Education (2021170002); the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems (20210101); and the Tianjin University Talent Innovation Reward Program for Literature and Science Graduate Students (C1-2022-010).
Funding: the work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Funding: this work was supported in part by the National Natural Science Foundation of China (51435009), the Shanghai Sailing Program (19YF1401500), and the Fundamental Research Funds for the Central Universities (2232019D3-34).
Funding: the National Natural Science Foundation of China (No. 61571111) and the Incubation Project of the National Natural Science Foundation of China at Nanjing University of Posts and Telecommunications (No. NY219106).
文摘To further improve delay performance in multi-cell cellular edge computing systems,a new delay-driven joint communication and computing resource BP(backpressure)scheduling algorithm is proposed.Firstly,the mathematical models of the communication delay and computing delay in multi-cell cellular edge computing systems are established and expressed as virtual delay queues.Then,based on the virtual delay models,a novel joint wireless subcarrier and virtual machine resource scheduling algorithm is proposed to stabilize the virtual delay queues in the framework of the BP scheduling principle.Finally,the delay performance of the proposed virtual queue-based BP scheduling algorithm is evaluated via simulation experiments and compared with the traditional queue length-based BP scheduling algorithm.Results show that under the considered simulation parameters,the total delay of the proposed BP scheduling algorithm is always lower than that of the traditional queue length-based BP scheduling algorithm.The percentage of the reduced total delay can be as high as 51.29%when the computing resources are heterogeneously configured.Therefore,compared with the traditional queue length-based BP scheduling algorithms,the proposed virtual delay queue-based BP scheduling algorithm can further reduce delay in multi-cell cellular edge computing systems.
Abstract: Computer algebra systems have been used extensively in higher education, for many reasons: they visualize mathematical problems, relate real-world problems at a conceptual level, and are flexible, simple to use, and accessible from anywhere. However, there is still room for improvement. Computer algebra system (CAS) optimization is the set of best practices and techniques that keep a CAS running optimally, covering both how to carry out a mathematical task and how to configure the system. In this paper we examine these techniques, using the documentation sheets of several CASs as the data source for comparing them and examining their characteristics. The results reveal many tips that can be followed to accelerate performance.
Funding: National Natural Science Foundation of China (No. 60873179); Doctoral Program Foundation of Institutions of Higher Education of China (No. 20090121110032); Shenzhen Science and Technology Research Foundations, China (No. JC200903180630A, No. ZYB200907110169A); Key Project of Institutes Serving the Economic Zone on the Western Coast of the Taiwan Strait, China; Natural Science Foundation of Xiamen, China (No. 3502Z2093018); Projects of the Education Department of Fujian Province of China (No. JK2009017, No. JK2010031, No. JA10196).
Abstract: Linguistic dynamic systems (LDS) are dynamic processes involving computing with words (CW) for the modeling and analysis of complex systems. In this paper, a fuzzy neural network (FNN) structure for LDS is proposed, and an improved nonlinear particle swarm optimization is employed to train the FNN. Experimental results on logistics formulation demonstrate the feasibility and efficiency of this FNN model.
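A minimal particle swarm optimizer of the kind used to train such an FNN can be sketched as follows. This is plain PSO on a stand-in loss function; the FNN structure and the paper's nonlinear improvements are omitted, and all parameter values are illustrative defaults.

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO; returns (best position, best loss value)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

For FNN training, `loss` would be the network's training error as a function of its flattened weight vector.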
Abstract: Granular computing on partitions (RST), coverings (GrCC), and neighborhood systems (LNS) is examined. (1) The order of generality is RST, then GrCC, then LNS. (2) The quotient structure: in RST it is called the quotient set; in GrCC it is a simplicial complex, called the nerve of the covering in combinatorial topology; for LNS the structure has no known description. (3) The approximation space of RST is a topological space generated by a partition, called a clopen space. For LNS, it is a generalized/pretopological space, which is more general than a topological space. For GrCC there are two possibilities. One is a special case of LNS: the topological space generated by the covering. The other is the topology generated by the finite intersections of the members of the covering. The first treats the covering as a base, the second as a subbase. (4) Knowledge representations in RST are symbol-valued systems; in GrCC, expression-valued systems; in LNS, multivalued systems, as reported in 1998. (5) The RST and GrCC representation theories are complete in the sense that granular models can be fully recaptured from the knowledge representations.
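Point (3) can be made concrete: treating a covering as a base versus as a subbase generates different open-set families. The helpers below are illustrative (finite sets only): taking unions of the raw covering misses intersections such as {2}, while closing under finite intersections first (the subbase route) recovers a genuine topology.

```python
from itertools import combinations

def close_under_intersections(sets):
    """All finite intersections of members of a covering (covering as subbase)."""
    result = {frozenset(s) for s in sets}
    changed = True
    while changed:
        changed = False
        for a in list(result):
            for b in list(result):
                c = a & b
                if c not in result:
                    result.add(c)
                    changed = True
    return result

def topology_from_base(base):
    """All unions of members (plus the empty set): the family with `base` as a base."""
    base = [frozenset(b) for b in base]
    opens = {frozenset()}
    for r in range(1, len(base) + 1):
        for combo in combinations(base, r):
            opens.add(frozenset().union(*combo))
    return opens
```

For the covering {{1,2},{2,3}} of {1,2,3}, unions alone omit the open set {2}, so the base route and the subbase route genuinely differ, as the abstract states.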
Funding: This project was supported by the Foundation of the State Key Lab for Software Engineering at Wuhan University.
Abstract: Computing information systems have entered a new stage: system security problems are increasingly serious, and research on system security is developing in depth. This paper discusses applications of neuro-computing to the security of network information systems.
Abstract: Data mining techniques and information personalization have grown significantly in the past decade, and an enormous volume of data is generated every day. Recommender systems can help users find their specific information within this extensive volume. Several techniques have been presented for the development of recommender systems (RS); one of them is evolutionary computing (EC), which can optimize and improve RS in various applications. This study surveys the relevant publications, focusing on aspects such as the recommendation techniques, the evaluation methods, and the datasets used.
基金supported by the National Natural Science Foundation of China (60774091)
Abstract: Quorum systems have been used to solve the problem of data consistency in distributed fault-tolerant systems, but traditional quorum systems have disadvantages when intrusions occur. For example, synchronous quorum systems are subject to DoS attacks, while asynchronous quorum systems need a larger system size (at least 3f+1 for generic data, and f fewer for self-verifying data). To solve these problems, an intrusion-tolerant quorum system (ITQS) with a hybrid time model based on a trusted timely computing base (TTCB) is presented. The TTCB is a trusted, secure, real-time component inside the server with a well-defined interface, separated from the operating system. It operates in a synchronous communication environment, while the application layer in the server handles read-write requests and executes update-copy protocols asynchronously. This architectural hybridization of synchrony and asynchrony achieves data consistency and availability correctly. We also build two kinds of ITQS based on the TTCB: the symmetrical and the asymmetrical TTCB quorum systems. In the performance evaluation, we show that TTCB quorum systems have smaller size, lower load, and higher availability.
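The size penalty for asynchronous quorums can be sketched with the classical Byzantine quorum conditions. These checks follow Malkhi-Reiter-style dissemination quorums (self-verifying data, intersection at least f+1) and masking quorums (generic data, intersection at least 2f+1), not the paper's TTCB construction, and are a simplification.

```python
def min_intersection(n, q):
    """Worst-case overlap of two quorums of size q chosen from n servers."""
    return max(0, 2 * q - n)

def tolerates_byzantine(n, q, f, self_verifying):
    """Simplified Byzantine quorum check:
    availability: some quorum avoids the f faulty servers (q <= n - f);
    consistency: any two quorums overlap in enough servers
    (>= f + 1 for self-verifying data, >= 2f + 1 for generic data)."""
    need = f + 1 if self_verifying else 2 * f + 1
    return q <= n - f and min_intersection(n, q) >= need
```

With f = 1, n = 4 = 3f+1 suffices for self-verifying data (quorums of size 3), while no quorum size works for generic data until the system grows, illustrating why generic data is more expensive in the asynchronous setting.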
Abstract: In parametric cost estimating, objections to using statistical cost estimating relationships (CERs) and parametric models include low statistical significance due to limited data points, biases in the underlying data, and lack of robustness. Soft computing (SC) technologies are used to build intelligent cost models. The SC models are systematically evaluated based on their training and prediction of the historical cost data of airborne avionics systems, and results indicating the strengths and weaknesses of each model are presented. In general, the intelligent cost models have higher prediction precision, better data adaptability, and stronger self-learning capability than the regression CERs.
Abstract: After a brief discussion of the interconnected world, including cyber-physical systems of systems, the increasing importance of decision-making by autonomous, quasi-autonomous, and autonomic systems is emphasised. Promising roles of computational understanding, computational awareness, and computational wisdom for better autonomous decision-making are outlined, and the contributions of simulation-based approaches are listed.
Abstract: The healthcare industry is rapidly adapting to new computing environments and technologies. With academics increasingly committed to developing and enhancing healthcare solutions that combine the Internet of Things (IoT) and edge computing, there is a greater need than ever to adequately monitor the data being acquired, shared, processed, and stored. The growth of cloud, IoT, and edge computing models presents severe data privacy concerns, especially in the healthcare sector, yet rigorous research on appropriate data privacy solutions for healthcare is still lacking. This paper discusses the current state of privacy-preservation solutions in IoT and edge healthcare applications. It identifies the common strategies often used to provide privacy through intelligent edges and technologies in healthcare systems, and it addresses the technical complexity, efficacy, and sustainability limits of these methods. The study also highlights the privacy issues and current research directions that have driven IoT and edge healthcare solutions, encouraging more insightful future applications.
Abstract: Nowadays, when the lifespan of sensor nodes is threatened by the shortage of energy available for communication, sink mobility is an excellent technique for extending it. When communicating via a WSN, using nodes as the transmission medium eliminates the need for a physical one. Sink mobility in a dynamic network topology, however, presents a problem for sensor nodes that have reserved resources: unless routes are revised to reflect the mobile sink's current location, data delivery becomes inefficient. In the clustering strategy, nodes are grouped together to improve communication; the cluster head receives data from compatible nodes, and the sink receives the aggregated data from the head. In the conventional technique the cluster head is a single central node, which consumes more energy than the others, and a route through a dead node fails; moreover, the more nodes that use a route, the shorter its lifespan. The proposed work demonstrates how sensor node paths can be modified effectively at lower cost by utilising a virtual grid. The best routes are maintained mainly through sink-node communication using virtual-grid-based dynamic route adjustment (VGDRA), in which only specific nodes are required to re-align data delivery to the mobile sink according to the new route-reconstruction paradigm. According to the results, VGDRA schemes achieve a longer lifespan because of the reduced number of loops.
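The virtual-grid idea can be sketched in three small pieces: mapping nodes to grid cells, electing one header per cell, and limiting route re-adjustment to cells near the sink. The function names, the header-election rule (closest to cell centre), and the 3x3 adjacency rule are illustrative assumptions, not the exact VGDRA protocol.

```python
import math

def grid_cell(pos, cell):
    """Map a node position to its virtual-grid cell index."""
    x, y = pos
    return (int(x // cell), int(y // cell))

def cell_headers(nodes, cell):
    """Pick one header per cell: the node closest to its cell centre."""
    best = {}
    for pos in nodes:
        c = grid_cell(pos, cell)
        centre = ((c[0] + 0.5) * cell, (c[1] + 0.5) * cell)
        d = math.dist(pos, centre)
        if c not in best or d < best[c][1]:
            best[c] = (pos, d)
    return {c: p for c, (p, _) in best.items()}

def cells_to_readjust(sink_pos, cell):
    """Only cells adjacent to the sink's current cell re-align their routes."""
    cx, cy = grid_cell(sink_pos, cell)
    return {(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
```

Because only the headers in `cells_to_readjust` update their next hop as the sink moves, the route-maintenance cost stays local instead of flooding the whole network.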
Abstract: Recently, with the growth of cyber-physical systems (CPS), several applications have been deployed in CPS to connect cyberspace with the physical world effectively. Besides, cloud computing (CC)-enabled CPS offers huge processing and storage resources, which is helpful for a range of application areas. At the same time, with the massive development of applications in the CPS environment, the energy utilization of cloud-enabled CPS has gained significant interest. To improve the energy effectiveness of the CC platform, virtualization technologies are employed for resource management, and applications are executed via virtual machines (VMs). Since effective scheduling of resources plays an important role in the design of cloud-enabled CPS, this paper focuses on the design of a chaotic sandpiper optimization based VM scheduling (CSPO-VMS) technique for energy-efficient CPS. The CSPO-VMS technique searches for the optimum VM migration solution and helps to choose an effective scheduling strategy. The CSPO algorithm integrates the traditional SPO algorithm with chaos theory, substituting the main control parameter with a chaotic sequence. The chaotic concept is included in the SPO algorithm to improve its convergence rate and its ability to determine globally optimal solutions. The CSPO-VMS technique also derives a fitness function to choose the optimal scheduling strategy in the CPS environment. To demonstrate the enhanced performance of the CSPO-VMS technique, a wide range of simulations were carried out and the results were examined under varying aspects. The simulation results confirmed the improved performance of the CSPO-VMS technique over recent methods in terms of different measures.
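The chaotic substitution at the heart of such "chaotic" metaheuristics can be illustrated with the logistic map, a common choice in this literature (the abstract does not name the specific map, so this is an assumption; `chaotic_search` is a toy stand-in, not the CSPO-VMS algorithm).

```python
def logistic_map(x, mu=4.0):
    """One step of the logistic map; for mu = 4 the orbit is chaotic in (0, 1)."""
    return mu * x * (1.0 - x)

def chaotic_search(loss, lo, hi, x0=0.7, iters=200):
    """Toy chaotic search: drive candidate points with the logistic map
    instead of a pseudo-random generator, and keep the best point seen."""
    best_x, best_v, x = None, float("inf"), x0
    for _ in range(iters):
        x = logistic_map(x)
        cand = lo + (hi - lo) * x   # map chaotic value into the search range
        v = loss(cand)
        if v < best_v:
            best_x, best_v = cand, v
    return best_x, best_v
```

In CSPO, the same substitution idea replaces a control parameter of the sandpiper update with a chaotic sequence, so the search covers the space more evenly than a fixed or uniformly random parameter would.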
Abstract: Predictive emission monitoring systems (PEMS) offer a cost-effective and environmentally friendly alternative to continuous emission monitoring systems (CEMS) for monitoring pollution from industrial sources. Multiple regression is one of the fundamental statistical techniques for describing the relationship between dependent and independent variables. This model can be used effectively to develop a PEMS that estimates the amount of pollution emitted by industrial sources where the fuel composition and other process-related parameters are available, and it is often sufficient to predict the emission discharge with acceptable accuracy. In cases where PEMS are accepted as an alternative to CEMS, which use gas analyzers, they can provide cost savings and substantial benefits for ongoing system support and maintenance. The described mathematical concept is based on the matrix-algebra representation of multiple regression, implemented with multiple-precision arithmetic techniques. Challenging numerical examples for statistical big data analysis are investigated, illustrating the computational accuracy and efficiency gained by increasing the precision level. The programming language C++ is used for the mathematical model implementation. The data for research and development, including the independent fuel data and the dependent NOx emissions data, were obtained from CEMS software installed at a petrochemical plant.
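The matrix-algebra formulation can be sketched by solving the normal equations (X^T X) b = X^T y directly. This sketch uses ordinary double precision rather than the paper's multiple-precision arithmetic, and the data values in the usage are invented for illustration (a plant would supply measured fuel and NOx records).

```python
def fit_multiple_regression(X, y):
    """Least-squares fit via the normal equations, solved by Gaussian
    elimination with partial pivoting. X rows are observations; an
    intercept column of 1s is prepended, so beta[0] is the intercept."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    # Build X^T X and X^T y.
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Forward elimination on the augmented system [X^T X | X^T y].
    a = [row[:] + [b] for row, b in zip(xtx, xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, k):
            m = a[r][col] / a[col][col]
            for c in range(col, k + 1):
                a[r][c] -= m * a[col][c]
    # Back substitution.
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (a[r][k] - sum(a[r][c] * beta[c] for c in range(r + 1, k))) / a[r][r]
    return beta
```

Ill-conditioned X^T X is exactly where double precision breaks down and where the paper's multiple-precision arithmetic pays off; the fit itself is unchanged, only the arithmetic precision differs.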