Abstract: Success or failure of an E-commerce platform is often reduced to its ability to maximize the conversion rate of its visitors, commonly regarded as the capacity to induce a purchase from a visitor. Visitors possess individual characteristics, histories, and objectives which complicate the choice of which platform features maximize the conversion rate. Modern web technology has made clickstream data accessible, allowing a complete record of a visitor's actions on a website to be analyzed. What remains poorly constrained is which parts of the clickstream data are meaningful information and which are incidental to the problem of platform design. In this research, clickstream data from an online retailer was examined to demonstrate how statistical modeling can improve the use of clickstream information. A conceptual model was developed that conjectured relationships between visitor and platform variables, visitors' platform exit rate, bounce rate, and decision to purchase. Several hypotheses on the nature of the clickstream relationships were posited and tested with the models. A discrete choice logit model showed that the content of a website, the history of website use, and the exit rate of pages visited had marginal effects on the visitor's derived utility. Exit rate and bounce rate were modeled as beta-distributed random variables. Exit rate and its variability for pages visited were found to be associated with site content, site quality, prior visitor history on the site, and technological preferences of the visitor. Bounce rate was influenced by the same factors, but in the direction opposite to the registered hypotheses. Most findings support the conclusion that clickstream data is amenable to statistical modeling with interpretable and comprehensible models.
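The abstract names two concrete statistical devices: a discrete-choice logit for the purchase decision and beta-distributed exit/bounce rates. A minimal sketch of both, on synthetic data with made-up covariate names (this is not the paper's actual specification), could look like:

```python
import numpy as np
import scipy.stats as st
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical visitor/platform covariates (names are illustrative only).
pages_viewed = rng.poisson(5, n)     # exposure to site content
prior_visits = rng.poisson(2, n)     # visitor history on the site
exit_rate = rng.beta(2, 5, n)        # mean exit rate of pages visited

# (1) Discrete-choice logit: purchase utility is linear in the covariates,
# so fitted coefficients read as marginal effects on derived utility.
utility = -1.0 + 0.15 * pages_viewed + 0.2 * prior_visits - 2.0 * exit_rate
purchase = rng.binomial(1, 1.0 / (1.0 + np.exp(-utility)))
X = sm.add_constant(np.column_stack([pages_viewed, prior_visits, exit_rate]))
print(sm.Logit(purchase, X).fit(disp=False).params)

# (2) Exit rate as a beta-distributed random variable: maximum-likelihood
# estimates of the two shape parameters on the (0, 1) support.
a_hat, b_hat, _, _ = st.beta.fit(exit_rate, floc=0, fscale=1)
print(a_hat, b_hat)
```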
Funding: Supported by the National Social Science Fund Project (No. 18BTQ054).
Abstract: Open source intelligence is one of the most important public data sources for strategic information analysis. Information perception is one of the primary and core issues of strategic information research, so this paper expounds a perception method for strategic information in the open source intelligence environment, together with the framework and basic process of information perception. To match information perception results with information depiction results, the paper carries out a practical exploration of the results of information acquisition, perception, depiction, and analysis. A monitoring platform for information perception is introduced and developed. The results show that the method proposed in this paper is feasible.
Abstract: Enterprises continuously aim at improving the execution of processes to achieve a competitive edge. One of the established ways of improving process performance is to assign the most appropriate resources to each task of the process. However, evaluations of business process improvement approaches have established that a method is missing that can guide decision-makers, in a structured way, to identify the most appropriate resources for a task of process improvement. This is because the relationship between resources and tasks is poorly understood, and advances in business process intelligence are also ignored. To address this problem, an integrated resource classification framework is presented that identifies competence, suitability, and preference as the relationships between tasks and resources. Only the competence relationship of human resources with a task is presented in this research, as a resource competence model. Furthermore, a competency calculation method is presented as a user guidance layer for business process intelligence-based resource competence evaluation. The computed capabilities serve as a basic input for choosing the most appropriate resources for each task of the process. The applicability of the method is illustrated through a healthcare case study.
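As a rough sketch of what a competency calculation over business process data could look like, the toy example below scores resources on a task by blending how often and how quickly they executed it in an event log. The log, the 50/50 weighting, and the score are hypothetical illustrations, not the framework's actual method.

```python
from collections import defaultdict

# Toy event log: (task, resource, execution duration in hours).
log = [
    ("triage", "alice", 0.5), ("triage", "alice", 0.4),
    ("triage", "bob", 1.2), ("diagnose", "bob", 2.0),
]

stats = defaultdict(list)
for task, resource, hours in log:
    stats[(task, resource)].append(hours)

def competence(task, resource):
    """Blend experience (how often) with performance (inverse mean duration)."""
    durations = stats.get((task, resource), [])
    if not durations:
        return 0.0
    experience = len(durations)
    speed = 1.0 / (sum(durations) / len(durations))
    return 0.5 * experience + 0.5 * speed   # illustrative weighting

best = max({r for _, r, _ in log}, key=lambda r: competence("triage", r))
print(best)  # most competent resource for the "triage" task
```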
Abstract: The aim of this study was to develop an adequate mathematical model for long-term forecasting of technological progress and economic growth in the digital age (2020-2050). An additional task was to develop a model for forecast calculations of labour productivity in the symbiosis of “man + intelligent machine”, where an intelligent machine (IM) is understood as a computer or robot equipped with elements of artificial intelligence (AI), as well as in the digital economy as a whole. The study shows that the Schumpeter-Kondratiev theory of innovation and cycles, on the formation of long waves (LW) of economic development under the influence of a powerful cluster of economic technologies engendered by industrial revolutions, is most appropriate for long-term forecasting of technological progress and economic growth. The Solow neoclassical model of economic growth, synchronized with LW, makes it possible to forecast the economic dynamics of technologically advanced countries with good precision up to 30 years ahead, a horizon that corresponds to the duration of an LW. In the information and digital age, the key role among the main factors of growth (capital, labour and technological progress) is played by the latter. The authors have developed an information model that allows technological progress to be forecast from the growth rates of endogenous technological information in the economy. The main regimes of producing technological information, corresponding to the eras of the information and digital economies, are given in the article, as well as the Lagrangians that engender them. The model is verified on the example of the 5th information LW for the US economy (1982-2018) and yields a highly accurate approximation for both technological progress and economic growth. A number of new results were obtained using the developed information models for forecasting technological progress. The forecast trajectory of economic growth of developed countries (on the example of the USA) on the upward stage of the 6th LW (2018-2042), engendered by the digital technologies of the 4th Industrial Revolution, is given. It is also demonstrated that the symbiosis of human and intelligent machine is the driving force in the digital economy, where man plays the leading role in organizing effective and efficient mutual work. The authors suggest a mathematical model for calculating labour productivity in the digital economy, where the symbiosis of “human + IM” is widely used. Calculations carried out with the model show: 1) the symbiosis of “human + IM” from the very beginning makes it possible to increase work performance in the economy with the help of digital technologies; 2) the largest labour productivity is achieved in the symbiosis of “human + IM” where human labour prevails, and the lowest where the largest part of the work is performed by IM; 3) developed countries may achieve labour productivity growth of 3% per year by the mid-2020s, and this has every chance of persisting up to the 2040s.
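Since the forecasting machinery rests on the Solow model synchronized with long waves, a compact sketch of that recursion may help: output Y = A·K^α·L^(1−α) with capital accumulation K′ = sY + (1−δ)K, iterated over the upward stage of the 6th LW. All parameter values below are illustrative stand-ins, not the authors' calibration.

```python
# Minimal Solow growth recursion over the 6th-LW horizon (2018-2042).
# alpha: capital share; s: savings rate; delta: depreciation.
alpha, s, delta = 0.33, 0.22, 0.05
g_A, g_L = 0.015, 0.005          # assumed TFP and labour growth rates

A, K, L = 1.0, 3.0, 1.0          # arbitrary initial levels
for year in range(2018, 2043):
    Y = A * K**alpha * L**(1 - alpha)
    print(year, round(Y, 3))
    K = s * Y + (1 - delta) * K  # capital accumulation
    A *= 1 + g_A                 # technological progress dominates growth
    L *= 1 + g_L
```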
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62221005, 61936001, and 62376045), the Natural Science Foundation of Chongqing, China (Grant No. cstc2021ycjhbgzxm0013), and the Project of Chongqing Municipal Education Commission, China (Grant No. HZ2021008).
Abstract: Artificial intelligence (AI) systems surpass certain human intelligence abilities in a statistical sense as a whole, but they are not yet a true realization of these human intelligence abilities and behaviors. There are differences, and even contradictions, between the cognition and behavior of AI systems and humans. With the goal of achieving general AI, this study reviews the role of cognitive science in inspiring the development of the three mainstream academic branches of AI, based on the three-layer framework proposed by David Marr, and explores and analyzes the limitations of the current development of AI. The differences and inconsistencies between the cognition mechanisms of the human brain and the computation mechanisms of AI systems are analyzed; they are found to be the cause of the differences and contradictions between the cognition and behavior of AI systems and humans. Additionally, eight important research directions, with their scientific issues, on which brain-inspired AI research needs to focus are proposed: highly imitated bionic information processing, a large-scale deep learning model that balances structure and function, multi-granularity joint problem solving bidirectionally driven by data and knowledge, AI models that simulate specific brain structures, a collaborative processing mechanism with the physical separation of perceptual processing and interpretive analysis, embodied intelligence that integrates the brain cognitive mechanism and AI computation mechanisms, intelligence simulation from individual intelligence to group (social) intelligence, and AI-assisted brain cognitive intelligence.
Funding: Project (71801115) supported by the National Natural Science Foundation of China; Project (2021M691311) supported by the Postdoctoral Science Foundation of China; Project (111041000000180001210102) supported by the Central Public Interest Scientific Institution Basal Research Fund, China.
Abstract: To explore the influence of intelligent highways and advanced traveler information systems (ATIS) on path choice behavior, a day-to-day (DTD) traffic flow evolution model with information from intelligent highways and ATIS is proposed, whereby network reliability and experiential learning theory are introduced into the decision process for travelers' route choice. The intelligent highway serves all the travelers who drive on it, whereas ATIS serves vehicles equipped with information systems. Travelers who drive on intelligent highways, or in vehicles equipped with ATIS, determine their trip routes based on real-time traffic information, whereas other travelers use both the road network conditions from the previous day and historical travel experience to choose a route. Both roadway capacity degradation and travel demand fluctuations are considered to capture the uncertainties in the network. Traffic network flow theory is developed to build a DTD model considering information from intelligent highways and ATIS. The fixed point theorem is adopted to investigate the equivalence, existence and stability of the proposed DTD model. Numerical examples illustrate that using a high confidence level and weight parameter for the traffic flow reduces the stability of the proposed model. The traffic flow reaches a steady state as travelers' routes shift with repetitive learning of road conditions. The proposed model can be used to formulate scientific traffic organization and diversion schemes during road expansion or reconstruction.
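A toy two-route version of the DTD evolution conveys the mechanics: ATIS-equipped travelers respond to today's real-time costs, the rest update a learned cost by exponential smoothing, and flows settle toward a steady state. The BPR-style cost function, the informed share, and all parameters below are assumptions of this sketch, not the paper's network.

```python
import numpy as np

demand, informed_share = 100.0, 0.4   # vehicles/day; ATIS-equipped share
theta, w = 0.5, 0.3                   # logit scale; learning weight
free_flow, capacity = np.array([10.0, 12.0]), np.array([60.0, 80.0])

def cost(flow):
    """BPR-style route travel cost."""
    return free_flow * (1 + 0.15 * (flow / capacity) ** 4)

flow = np.array([50.0, 50.0])
learned = cost(flow)
for day in range(30):
    real_time = cost(flow)                          # intelligent-highway info
    p_inf = np.exp(-theta * real_time); p_inf /= p_inf.sum()
    p_unf = np.exp(-theta * learned);   p_unf /= p_unf.sum()
    flow = demand * (informed_share * p_inf + (1 - informed_share) * p_unf)
    learned = w * cost(flow) + (1 - w) * learned    # experiential learning
print(flow)  # flows settle toward a steady state
```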
Abstract: Given the requirements for high performance in today's mobile, global, highly competitive, and technology-based business world, business professionals need to be supported by convenient mobile decision support systems (DSS). To give improved support to mobile business professionals, it is necessary to go further than simply allowing remote access to a Business Intelligence platform. In this paper, the need for truly context-aware mobile Geospatial Business Intelligence (GeoBI) systems that can help capture, filter, organize and structure the user's mobile context is exposed and justified. Furthermore, since capturing, structuring, and modeling mobile contextual information is still a research issue, a wide inventory of existing research work on context and mobile context is provided. Then, step by step, we methodically identify relevant contextual information to capture for mobility purposes as well as for BI needs, organize it into context dimensions, and build a hierarchical mobile GeoBI context model which (1) is geo-spatially extended, (2) fits with human perception of mobility, (3) takes into account local context interactions and information-sharing with remote contexts, and (4) matches the usual hierarchical aggregated structure of BI data.
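One plausible, entirely hypothetical rendering of such a hierarchical context model is a tree of context dimensions whose levels can be aggregated the way BI dimensions usually are; the dimension names below are illustrative, not the paper's model:

```python
from dataclasses import dataclass, field

@dataclass
class ContextDimension:
    """A node in a hierarchical context model (BI-style dimension tree)."""
    name: str
    value: object = None
    children: list = field(default_factory=list)

    def find(self, name):
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

mobile_context = ContextDimension("mobile_context", children=[
    ContextDimension("spatial", children=[          # geo-spatial extension
        ContextDimension("country", "CA"),
        ContextDimension("city", "Quebec"),
    ]),
    ContextDimension("user", children=[ContextDimension("role", "analyst")]),
    ContextDimension("device", children=[ContextDimension("screen", "small")]),
])
print(mobile_context.find("city").value)
```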
Abstract: A great discovery made by H. von Foerster, P. M. Mora and L. W. Amiot was published in a 1960 issue of “Science”. The authors showed that existing data for the Earth's population in the new era (from AD 1 to 1958) could be described with remarkably high accuracy by a hyperbolic function with a point of singularity on 13 November 2026. Thus, an empirical regularity of the rise of the human population was established, marked by explosive demographic growth in the 20th century, when during only one century the population almost quadrupled: from 1.656 billion in 1900 to 6.144 billion in 2000. Nowadays the world population has already exceeded 7.8 billion people. Immediately after 1960, an active search began for phenomenological models to explain the mechanism of hyperbolic population growth and the subsequent demographic transition designed to stabilize the population. A significant role in explaining the mechanism of the hyperbolic growth of the world population was played by S. Kuznets (1960) and E. Boserup (1965), who found that the rates of technological progress historically increased in proportion to the Earth's population. This meant that population growth raised the level of life-supporting technologies, which in turn enlarged the carrying capacity of the Earth, making it possible for the world population to expand. Proceeding from the information imperative, we have developed a model of demographic dynamics for the 21st century for the first time. The model shows that with the development and spread of Intelligent Machines (IM), the world population, after reaching a certain maximum, will irreversibly decline. Human depopulation will largely touch the most developed countries, where IM are used intensively nowadays. Until a certain moment in time, this depopulation in developed countries will be compensated by the explosive growth of the population in African countries located south of the Sahara. Calculations in our model reveal that the peak of the human population, 8.52 billion people, will be reached in 2050, after which it will irreversibly decline to 7.9 billion people by 2100 if developed countries do not take timely, effective measures to overcome the process of information depopulation.
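The hyperbolic law itself is compact: N(t) = C/(T0 − t), with the singularity T0 on 13 November 2026 (about year 2026.87). The sketch below calibrates C from the abstract's year-2000 figure, so earlier values are reproduced only approximately:

```python
# von Foerster / Mora / Amiot (1960) hyperbolic law: N(t) = C / (T0 - t).
T0 = 2026.87                       # singularity: ~13 November 2026
C = 6.144e9 * (T0 - 2000)          # person-years, calibrated so N(2000) = 6.144e9

def population(year):
    return C / (T0 - year)

for year in (1900, 1960, 2000):
    print(year, round(population(year) / 1e9, 2), "billion")
```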
Abstract: Automatically mapping a requirement specification to a design model in Software Engineering is an open, complex problem. Existing methods use a complex manual process that draws on knowledge from the requirement specification/modeling and the design, and tries to find a good match between them. The key task done by designers is to convert a natural language based requirement specification (or a corresponding UML based representation) into a predominantly computer language based design model; the process is very complex because there is a very large gap between natural language and computer language. Moreover, this is not just a simple language conversion, but rather a complex knowledge conversion that must lead to a meaningful design implementation. In this paper, we describe an automated method to map a Requirement Model to a Design Model and thus automate, or partially automate, the Structured Design (SD) process. We believe this is the first logical step in mapping a more complex requirement specification to a design model. We call it IRTDM (Intelligent Agent based requirement model to design model mapping). The main theme of IRTDM is to use AI (Artificial Intelligence) based algorithms, semantic representation using Ontology or Predicate Logic, design structures from well-known design frameworks, and Machine Learning algorithms for learning over time. Semantics helps convert the natural language based requirement specification (and associated UML representation) into a high level design model, followed by mapping to design structures. AI methods can also be used to convert high level design structures into a lower level design, which can then be refined further by manual and/or semi-automated processes. We emphasize that automation is one of the key ways to minimize software cost, and is very important for all, especially for the “Design for the Bottom 90% People” or BOP (Base of the Pyramid People).
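A heavily simplified, hypothetical sketch of the semantic-mapping idea: noun phrases in a requirement sentence are looked up in a small ontology that links domain concepts to design-model elements. The ontology entries and pattern labels are invented for illustration; IRTDM's actual representation is far richer.

```python
# Toy ontology linking domain concepts to design-model elements (invented).
ontology = {
    "customer": {"class": "Customer",       "pattern": "Entity"},
    "order":    {"class": "Order",          "pattern": "Entity"},
    "payment":  {"class": "PaymentService", "pattern": "Service"},
}

def map_requirement(sentence):
    """Return candidate design elements mentioned in a requirement sentence."""
    words = sentence.lower().replace(",", " ").split()
    return [ontology[w] for w in words if w in ontology]

req = "The customer submits an order and confirms the payment"
for element in map_requirement(req):
    print(element["pattern"], "->", element["class"])
```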
Abstract: This paper examines how cybersecurity is developing and how it relates to more conventional information security. Although information security and cyber security are sometimes used synonymously, this study contends that they are not the same. The concept of cyber security is explored, which goes beyond protecting information resources to include a wider variety of assets, including people [1]. Protecting information assets is the main goal of traditional information security, with consideration of the human element and how people fit into the security process. Cyber security, on the other hand, adds a new level of complexity, as people might unintentionally contribute to or become targets of cyberattacks. This aspect raises moral questions, since it is becoming more widely accepted that society has a duty to protect its weaker members, including children [1]. The study emphasizes how important cyber security is on a larger scale, with many countries creating plans and laws to counteract cyberattacks. Nevertheless, many of these sources neglect to define the differences or the relationship between information security and cyber security [1]. The paper focuses on differentiating between cybersecurity and information security on a larger scale. The study also highlights other areas of cybersecurity, which include defending people, social norms, and vital infrastructure from threats that arise online, in addition to protecting information and technology. It contends that ethical issues and the human factor are becoming more and more important in protecting assets in the digital age, and that cyber security represents a paradigm shift in this regard [1].
Funding: The work described in this paper was supported by the National Key Research and Development Plan under Grant No. 2018YFB1501004.
Abstract: Converting thermal energy into mechanical work by means of the Organic Rankine Cycle is a validated technology for exploiting low-grade waste heat. The typical design process of an Organic Rankine Cycle system, which commonly involves working fluid selection, cycle configuration selection, operating parameter optimization, and component selection and sizing, is time-consuming and highly dependent on the engineer's experience; it is therefore difficult to achieve the optimal design in most cases. In recent decades, artificial intelligence has gradually been introduced into the design of energy systems to overcome these shortcomings. To clarify this research field and to better guide the use of artificial intelligence techniques in Organic Rankine Cycle design, this study presents a preliminary literature summary of recent progress on artificial intelligence techniques in Organic Rankine Cycle system design. First, the study analyzes the four main procedures that constitute a typical design process of Organic Rankine Cycle systems and finds that the design problems encountered can be divided into three categories: decision making, parameter optimization, and parameter prediction. In the second section, a detailed literature review of each design procedure using artificial intelligence algorithms is presented. Finally, the state of the art in this field and prospects for future work are provided.
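To illustrate the "parameter optimization" category, the sketch below tunes a single ORC operating parameter with a generic bounded optimizer. The objective is a toy surrogate with made-up coefficients, not a real thermodynamic model:

```python
from scipy.optimize import minimize_scalar

def negative_net_power(t_evap):
    """Toy surrogate: net power rises with evaporation temperature until
    losses dominate. All coefficients are invented for illustration."""
    return -(-0.02 * (t_evap - 120.0) ** 2 + 50.0)

res = minimize_scalar(negative_net_power, bounds=(80.0, 160.0), method="bounded")
print(f"optimal evaporation temperature ~ {res.x:.1f} °C")
```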
Funding: This work was supported in part by the National Key R&D Plan on Strategic International Scientific and Technological Innovation Cooperation Special Project [grant number 2016YFE0202300], the National Natural Science Foundation of China [grant numbers 61671332, 41771452, 51708426, 41890820, 41771454], the Natural Science Fund of Hubei Province in China [grant number 2018CFA007], and the Independent Research Projects of Wuhan University [grant number 2042018kf0250].
Abstract: The enhancement of computing power, the maturity of learning algorithms, and the richness of application scenarios make Artificial Intelligence (AI) solutions increasingly attractive for solving Geo-spatial Information Science (GSIS) problems. These include image matching, image target detection, change detection, image retrieval, and the generation of data models of various types. This paper discusses the connection and synthesis between AI and GSIS in block adjustment, image search and discovery in big databases, automatic change detection, and detection of abnormalities, demonstrating that AI can be integrated with GSIS. Moreover, the concepts of the Earth Observation Brain and Smart Geo-spatial Service (SGSS) are introduced at the end, and they are expected to promote the development of GSIS toward broader applications.
Funding: This work was funded by the UK EPSRC [grant numbers EP/S035362/1, EP/N023013/1, EP/N02334X/1] and by the Cisco Research Centre [grant number 1525381].
Abstract: Digital technologies have changed the way supply chain operations are structured. In this article, we conduct systematic syntheses of the literature on the impact of new technologies on supply chains and the related cyber risks. A taxonomic/cladistic approach is used to evaluate progress in the area of supply chain integration in the Industrial Internet of Things and Industry 4.0, with a specific focus on the mitigation of cyber risks. An analytical framework is presented, based on a critical assessment of issues related to new types of cyber risk and the integration of supply chains with new technologies. The paper identifies a dynamic and self-adapting supply chain system supported by Artificial Intelligence and Machine Learning (AI/ML) and real-time intelligence for predictive cyber risk analytics. The system is integrated into a cognition engine that enables predictive cyber risk analytics with real-time intelligence from IoT networks at the edge. This enhances capacities and assists in the creation of a comprehensive understanding of the opportunities and threats that arise when edge computing nodes are deployed and when AI/ML technologies are migrated to the periphery of IoT networks.
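As a minimal illustration of predictive cyber risk analytics on edge telemetry, the sketch below fits a plain logistic-regression risk scorer over hypothetical IoT node features. The feature names, data, and threshold are invented; the paper's cognition engine is far richer than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
# Hypothetical features: [failed_auth_rate, firmware_age_days, traffic_burstiness]
X = np.column_stack([
    rng.exponential(0.05, n),
    rng.integers(0, 720, n),
    rng.gamma(2.0, 1.0, n),
])
# Synthetic labels: 1 = cyber incident observed on the node.
risk = 20 * X[:, 0] + 0.004 * X[:, 1] + 0.3 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 2.2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_node = [[0.2, 400, 3.0]]   # telemetry from a newly deployed edge node
print("incident probability:", model.predict_proba(new_node)[0, 1])
```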