Abstract: Neural networks (NN) are the functional unit of deep learning and are known to mimic the behavior of the human brain to solve complex data-driven problems. Whenever we train a neural network, we must attend to its generalization: the performance of an artificial neural network (ANN) depends largely on its generalization capability. In this paper, we propose an innovative approach to enhancing the generalization capability of ANNs using structural redundancy. We also describe a novel perspective on handling input data prototypes and their impact on generalization, which could improve the accuracy and reliability of ANN architectures.
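The abstract does not detail how structural redundancy is introduced, so the following is only a minimal sketch under an assumed interpretation: widen the hidden layer with redundant units and compare the train/test accuracy gap as a crude proxy for generalization. The dataset, layer sizes, and the way redundancy is added are illustrative assumptions, not the authors' method.

```python
# Hedged sketch: compare generalization (train/test gap) of a baseline MLP
# against a wider "redundant" MLP. This is an illustrative interpretation of
# "structural redundancy", not the paper's actual procedure.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, hidden in [("baseline", (32,)), ("redundant", (64,))]:  # assumed sizes
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    gap = clf.score(X_tr, y_tr) - clf.score(X_te, y_te)  # smaller gap ~ better generalization
    print(f"{name}: train-test accuracy gap = {gap:.3f}")
```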
Abstract: The article discusses a new methodological approach to studying the specifics of the transition of human beings to a posthuman cyber society. The approach helps to rethink the interconnected problems of human origins in the universe and mankind's digital future. It also makes it possible to deal with self-organising interconversions between the poles of the cardinal dual opposition of the Global Noosphere Brain and Artificial General Intelligence. Such phenomena of digital social life as Global Digitalisation, Digital Immortality, Mindcloning, and Technological Zombification, being constituents of the Technological Singularity concept, are rethought as paving the way for the coming Posthuman Digital Era. This concept is exemplified by a bifurcation resulting in two alternatives between which human beings must choose: either to undergo Mindcloning and become digitally immortal, or to be destroyed by powerful intelligent machines. The investigation is based on the Law of Self-Organizing Ideals as a methodology, as well as on the Method of Dual Oppositions. Rethinking the interrelationships between the sense of social history and the meaning of life of members of local societies, which any intelligent machine lacks, makes it possible to substantiate specific regularities of the self-transformation of Homo Faber into Homo Digitalis and Technological Zombies ready to be transferred to posthuman cyberspace.
Abstract: This article explores the key role of intelligent computing in driving the paradigm shift of scientific discovery. The article first outlines the five paradigms of scientific discovery, from empirical observation to theoretical models, then to computational simulation and data-intensive science, and finally introduces intelligent computing as the core of the fifth paradigm. Intelligent computing enhances the ability to understand, predict, and automate scientific discoveries of complex systems through technologies such as deep learning and machine learning. The article further analyzes the applications of intelligent computing in fields such as bioinformatics, astronomy, climate science, materials science, and medical image analysis, demonstrating its practical utility in solving scientific problems and promoting knowledge development. Finally, the article predicts that intelligent computing will play a more critical role in future scientific research, promoting interdisciplinary integration, open science, and collaboration, and providing new solutions for solving complex problems.
Funding: Supported by the Natural Science Foundation of China (Nos. 61425025 and 61390515).
Abstract: To achieve artificial general intelligence (AGI), should we imitate intelligence or imitate the brain? That is the question. Most artificial intelligence (AI) approaches take an understanding of the principles of intelligence as their premise. This may be appropriate for implementing specific intelligence such as computing, symbolic logic, or what AlphaGo can do. However, it is not appropriate for AGI, because understanding the principles of brain intelligence is one of the most difficult challenges facing human beings, and it is unwise to make such a question the premise of the AGI mission. A practical approach to AGI is to build a so-called neurocomputer, which could be trained to produce autonomous intelligence and AGI. A neurocomputer imitates the biological neural network with neuromorphic devices that emulate bio-neurons, synapses, and other essential neural components. The neurocomputer can perceive the environment via sensors and interact with other entities via a physical body. The philosophy underlying this "new" approach, referred to in this paper as imitationalism, is the engineering methodology that has been practiced for thousands of years and that has succeeded in many cases, such as the invention of the first airplane. This paper compares the neurocomputer with the conventional computer and reviews recent progress on neurocomputers.
Funding: Supported by the National Key Research and Development Program of China (2022ZD0114900).
Abstract: The release of the generative pre-trained transformer (GPT) series has brought artificial general intelligence (AGI) to the forefront of the artificial intelligence (AI) field once again. However, how to define and evaluate AGI remains unclear. This perspective article proposes that the evaluation of AGI should be rooted in dynamic embodied physical and social interactions (DEPSI). More specifically, we propose five critical characteristics to be considered as AGI benchmarks and suggest the Tong test as an AGI evaluation system. The Tong test describes a value- and ability-oriented testing system that delineates five levels of AGI milestones through a virtual environment with DEPSI, allowing for infinite task generation. We contrast the Tong test with classical AI testing systems in various respects and propose a systematic evaluation framework to promote standardized, quantitative, and objective benchmarks and evaluation of AGI.
Abstract: This paper proposes and illustrates an AI-embedded, object-oriented methodology for formulating computable general equilibrium (CGE) models. In this framework, a CGE model is viewed as a collection of AI-embedded objects, or agents, in the computer world, corresponding to economic agents and entities in the real world such as government, households, and markets. A frame representation of the major objects in a CGE model for trade and the environment is given. Using an embedded-AI, object-oriented approach (or software agents) to represent a CGE model can narrow the gap among the semantic representation, the formal (mathematical) CGE representation, and the computer and algorithmic representation, and can make the model easier to understand and maintain. In such a system, constructing a CGE model becomes an intuitive rather than an abstract process, one that demands more understanding of the substance of economics and the logic underlying the problem than of mathematical notation.
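To make the object-oriented framing concrete, here is a minimal, hedged sketch of economic agents represented as objects. The class names, attributes, and toy behaviours are hypothetical illustrations, not the paper's frame representations.

```python
# Hedged sketch of the object-oriented idea: economic agents as objects/classes.
# Class names and attributes are hypothetical, not the paper's frames.
from dataclasses import dataclass, field

@dataclass
class Household:
    income: float
    savings_rate: float
    def demand(self, price: float) -> float:
        # spend disposable income on a single good (toy behaviour)
        return self.income * (1.0 - self.savings_rate) / price

@dataclass
class Government:
    tax_rate: float
    def revenue(self, household: Household) -> float:
        return self.tax_rate * household.income

@dataclass
class Market:
    agents: list = field(default_factory=list)
    def excess_demand(self, price: float) -> float:
        # a CGE solver would adjust prices to drive this toward zero
        return sum(a.demand(price) for a in self.agents) - 100.0  # fixed toy supply

market = Market(agents=[Household(income=1000.0, savings_rate=0.2)])
print(market.excess_demand(price=10.0))
```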
Abstract: It has been an exciting journey since mobile communications and artificial intelligence (AI) were conceived in 1983 and 1956, respectively. While both fields evolved independently and profoundly changed the communications and computing industries, the rapid convergence of 5th-generation mobile communication technology (5G) and AI is beginning to significantly transform the core communication infrastructure, network management, and vertical applications. The paper first outlines the individual early-stage roadmaps of mobile communications and AI, with a focus on the era from 3rd-generation mobile communication technology (3G) to 5G, when AI and mobile communications started to converge. With regard to telecommunications AI, the progress of AI in the mobile communications ecosystem is then introduced in detail, including network infrastructure, network operation and management, business operation and management, intelligent applications towards business supporting system (BSS) and operation supporting system (OSS) convergence, and verticals and private networks. The classifications of AI in telecommunication ecosystems are summarized along with the evolution paths specified by various international telecommunications standardization organizations. Towards the next decade, a prospective roadmap of telecommunications AI is forecast. In line with the 3rd Generation Partnership Project (3GPP) and International Telecommunication Union Radiocommunication Sector (ITU-R) timelines for 5G and 6th-generation mobile communication technology (6G), the paper further explores network intelligence following the 3GPP and open radio access network (O-RAN) routes, experience- and intent-based network management and operation, a network AI signaling system, intelligent middle-office-based BSS, intelligent customer experience management and policy control driven by BSS and OSS convergence, the evolution from service level agreements (SLA) to experience level agreements (ELA), and intelligent private networks for verticals. The paper concludes with the vision that AI will reshape the future beyond-5G (B5G)/6G landscape, and that we need to pivot our research and development (R&D), standardization, and ecosystem to take full advantage of the unprecedented opportunities.
Abstract: Deepfake technology can be used to replace people's faces in videos or pictures to show them saying or doing things they never said or did. Deepfake media are often used to extort, defame, and manipulate public opinion. However, despite deepfake technology's risks, current deepfake detection methods lack generalization and are inconsistent when applied to unknown videos, i.e., videos on which they have not been trained. The purpose of this study is to develop a generalizable deepfake detection model by training convolutional neural networks (CNNs) to classify human facial features in videos. The study formulated the research question: "How effectively does the developed model provide reliable generalizations?" A CNN model was trained to distinguish between real and fake videos using the facial features of human subjects in videos. The model was trained, validated, and tested using the FaceForensics++ dataset, which contains more than 500,000 frames, and subsets of the DFDC dataset, totaling more than 22,000 videos. The study demonstrated high generalizability, as the accuracy on the unknown dataset was only marginally (about 1%) lower than on the known dataset. The findings indicate that detection systems can be made more generalizable, lighter, and faster by focusing on just a small region (the human face) of an entire video.
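The abstract does not specify the network architecture, so the following is only a minimal, hedged sketch of a binary real-vs-fake classifier operating on cropped face images. The input size, layer sizes, and the dummy data pipeline are illustrative assumptions, not the study's model or datasets.

```python
# Hedged sketch: a small CNN that classifies cropped face images as real (0) or fake (1).
# Architecture and input size are illustrative assumptions, not the study's model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),       # assumed face-crop size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability the face is fake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy arrays stand in for face crops extracted from real/fake videos.
X = np.random.rand(16, 128, 128, 3).astype("float32")
y = np.random.randint(0, 2, size=(16,))
model.fit(X, y, epochs=1, verbose=0)
```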
Funding: This work is partly supported by the China 863 Project Foundation.
Abstract: The explanation and simulation of natural and artificial intelligence are central goals of Neuroscience, Psychology, Artificial Intelligence, and Cognitive Science. This paper first gives an introduction to the core topics and approaches in this area of study. Then GAF, a general adaptive framework for neural systems, is proposed. Interdisciplinary discussions around the adaptation of the human nervous system are presented, and rules describing the theory of adaptation of the nervous system are provided.
Abstract: Coordinates are a basic need for both geospatial and non-geospatial professionals, and as a result geodesists have the responsibility to develop methods that are applicable and practicable for determining cartesian coordinates, whether through transformation, conversion, or prediction, for the geo-scientific community. It is therefore necessary to implement mechanisms and systems that can be employed to predict coordinates in either two-dimensional (2D) or three-dimensional (3D) space. Over the last decade, artificial intelligence (AI) techniques and conventional methods have been proposed as effective tools for modeling and forecasting in various scientific disciplines. The primary objective of this work is to compare the efficiency of an artificial intelligence technique, the Feed-Forward Backpropagation Neural Network (FFBPNN), and conventional methods, namely Ordinary Least Squares (OLS), General Least Squares (GLS), and Total Least Squares (TLS), for cartesian planimetric coordinate prediction. In addition, a hybrid of the conventional and artificial intelligence methods, TLS-FFBPNN, is proposed in this study for 2D cartesian coordinate prediction. The results show that FFBPNN performed significantly better than the conventional methods; however, TLS-FFBPNN, when compared with FFBPNN, OLS, GLS, and TLS, gave stronger performance and superior predictions. To further confirm the superiority of TLS-FFBPNN, the Bayesian Information Criterion (BIC) was introduced, and the BIC selected TLS-FFBPNN as the optimum model for prediction.
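As a hedged illustration of the model-selection step mentioned at the end of the abstract, the snippet below fits an ordinary least-squares baseline to synthetic 2D coordinate data and scores it with the Gaussian-residual form of the BIC. The data, the baseline, and this particular BIC formulation are illustrative assumptions; the paper's actual models and data differ.

```python
# Hedged sketch: comparing coordinate-prediction models with the Bayesian
# Information Criterion (BIC), using the Gaussian-residual form
# BIC = n*ln(RSS/n) + k*ln(n). Lower BIC indicates the preferred model.
import numpy as np

def bic(residuals: np.ndarray, n_params: int) -> float:
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(50, 2))                    # e.g. local (x, y) inputs
true_coef = np.array([1.2, -0.7])
y = X @ true_coef + 5.0 + rng.normal(0, 0.3, size=50)    # target coordinate component

# Ordinary least squares fit (conventional baseline)
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ coef
print("OLS BIC:", bic(residuals, n_params=A.shape[1]))
```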
Abstract: Advanced data mining methods have shown promising capacity in building energy management. However, over the past decade such methods have rarely been applied in practice, since they rely heavily on users to customize solutions according to the characteristics of the target building energy systems; the major barrier is that practical application of such methods remains laborious. It is necessary to enable computers to solve data mining tasks with human-like ability. Generative pre-trained transformers (GPT) might be capable of addressing this issue, as models such as GPT-3.5 and GPT-4 have shown powerful abilities in interacting with humans, generating code, and reasoning with common sense and domain knowledge. This study explores the potential of the most advanced GPT model (GPT-4) in three data mining scenarios of building energy management: energy load prediction, fault diagnosis, and anomaly detection. A performance evaluation framework is proposed to verify the capabilities of GPT-4 in generating energy load prediction code, diagnosing device faults, and detecting abnormal system operation patterns. It is demonstrated that GPT-4 can automatically solve most of the data mining tasks in this domain, overcoming the barrier to practical application of data mining methods. The advantages and limitations of GPT-4 are also discussed comprehensively to reveal future research directions in this domain.
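The abstract describes evaluating GPT-4-generated code for energy load prediction. The snippet below is only a hedged illustration of what such a generated baseline might look like (hour-of-day and lag features feeding a random forest), with synthetic data standing in for real building load measurements; it is not the study's evaluation framework or its generated code.

```python
# Hedged illustration: a simple hourly-load forecasting baseline of the kind a
# code-generation evaluation framework might test. Data and features are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

hours = pd.date_range("2023-01-01", periods=24 * 60, freq="h")
load = 50 + 20 * np.sin(2 * np.pi * hours.hour / 24) \
       + np.random.default_rng(0).normal(0, 2, len(hours))
df = pd.DataFrame({"load": load}, index=hours)
df["hour"] = df.index.hour
df["lag_24"] = df["load"].shift(24)                  # same hour on the previous day
df = df.dropna()

train, test = df.iloc[:-24 * 7], df.iloc[-24 * 7:]   # hold out the last week
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(train[["hour", "lag_24"]], train["load"])
pred = model.predict(test[["hour", "lag_24"]])
print("MAE:", mean_absolute_error(test["load"], pred))
```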
Funding: Funded by AI-PROFICIENT, which has received funding from the European Union's Horizon 2020 research and innovation program (No. 957391).
Abstract: This paper explores the question of how we can know whether Artificial Intelligence (AI) systems have become or are becoming sentient. After an overview of some arguments regarding AI sentience, it proceeds to an outline of the notion of negation in the philosophy of Josiah Royce, which is then applied to the arguments already presented. Royce's notion of the primitive dyadic and symmetric negation relation is shown to bypass such arguments. The negation relation and its expansion into higher types of order are then considered with regard to how, in small variations of active negation, they would disclose sentience in AI systems. Finally, I argue that the much-hyped arguments and apocalyptic speculations regarding Artificial General Intelligence (AGI) takeover and similar scenarios, abetted by the notion of unlimited data, are based on a fundamental misunderstanding of how entities engage their experience: limitation, proceeding from the symmetric negation relation, expands outward into higher types of order in polyadic relations, wherein the entity self-limits and creatively moves toward uniqueness.
Funding: ASAP 16 project call, project title: SemantiX - A cross-sensor semantic EO data cube to open and leverage essential climate variables with scientists and the public, Grant ID: 878939; ASAP 17 project call, project title: SIMS - Soil sealing identification and monitoring system, Grant ID: 885365.
Abstract: Aiming at the convergence between Earth observation (EO) Big Data and Artificial General Intelligence (AGI), this paper consists of two parts. In the previous Part 1, existing EO optical sensory image-derived Level 2/Analysis Ready Data (ARD) products and processes are critically compared, to overcome their lack of harmonization/standardization/interoperability and suitability in a new notion of Space Economy 4.0. In the present Part 2, original contributions comprise, at the Marr five levels of system understanding: (1) an innovative, but realistic, EO optical sensory image-derived semantics-enriched ARD co-product pair requirements specification. First, in the pursuit of third-level semantic/ontological interoperability, a novel ARD symbolic (categorical and semantic) co-product, known as a Scene Classification Map (SCM), adopts an augmented Cloud versus Not-Cloud taxonomy, whose Not-Cloud class legend complies with the standard fully-nested Land Cover Classification System's Dichotomous Phase taxonomy proposed by the United Nations Food and Agriculture Organization. Second, a novel ARD subsymbolic numerical co-product, specifically a panchromatic or multispectral EO image whose dimensionless digital numbers are radiometrically calibrated into a physical unit of radiometric measure, ranging from top-of-atmosphere reflectance to surface reflectance and surface albedo values, in a five-stage radiometric correction sequence. (2) An original ARD process requirements specification. (3) An innovative ARD processing system design (architecture), where stepwise SCM generation and stepwise SCM-conditional EO optical image radiometric correction are alternated in sequence. (4) An original modular hierarchical hybrid (combined deductive and inductive) computer vision subsystem design, provided with feedback loops, where software solutions at the Marr two shallowest levels of system understanding, specifically algorithm and implementation, are selected from the scientific literature, to benefit from their technology readiness level as proof of feasibility, required in addition to proven suitability. To be implemented in operational mode at the space segment and/or midstream segment by both public and private EO big data providers, the proposed EO optical sensory image-derived semantics-enriched ARD product-pair and process reference standard is highlighted as a linchpin for the success of a new notion of Space Economy 4.0.
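The abstract mentions calibrating dimensionless digital numbers into top-of-atmosphere (TOA) reflectance as the first stage of radiometric correction. The sketch below illustrates that stage with the common linear gain/offset plus solar-elevation correction used by Landsat-8-style Level-1 products; the coefficient values are placeholders, and this is not the paper's five-stage correction sequence.

```python
# Hedged sketch of the first radiometric-correction stage named in the abstract:
# converting dimensionless digital numbers (DN) to top-of-atmosphere (TOA)
# reflectance via a linear gain/offset and a solar-elevation correction.
# The coefficient values are placeholders, not from any specific product.
import numpy as np

def dn_to_toa_reflectance(dn: np.ndarray,
                          mult: float = 2.0e-5,        # reflectance rescaling gain
                          add: float = -0.1,           # reflectance rescaling offset
                          sun_elevation_deg: float = 45.0) -> np.ndarray:
    rho = mult * dn.astype("float64") + add             # uncorrected TOA reflectance
    return rho / np.sin(np.deg2rad(sun_elevation_deg))  # correct for sun angle

dn_band = np.random.default_rng(0).integers(5000, 30000, size=(4, 4))
print(dn_to_toa_reflectance(dn_band))
```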
Abstract: In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from purely data-driven models to data-driven models with structured logic rules; and from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.
Abstract: The outputs of a national economy can be partitioned into three sets of products: tangible goods (due to manufacturing, construction, extraction, and agriculture), intangible services (due to an act of useful effort), and an integration of services and goods or, as initially defined by Tien (2012), servgoods. These products can also be considered in terms of their relation to the first three Industrial Revolutions: the First Industrial Revolution (circa 1800) was primarily focused on the production of goods; the Second Industrial Revolution (circa 1900) was primarily focused on the mass production of goods; and the Third Industrial Revolution (circa 2000) has been primarily focused on the mass customization of goods, services, or servgoods. In this follow-up paper, the Third Industrial Revolution of mass customization continues to accelerate in its evolution and, in many respects, is subsuming the earlier Industrial Revolutions of production and mass production. More importantly, with the advent of real-time decision making, artificial intelligence, the Internet of Things, mobile networks, and other advanced digital technologies, customization has been extensively enabled, thereby advancing mass customization into a Fourth Industrial Revolution of real-time customization. Moreover, the moral, ethical, security, and employment problems associated with both mass and real-time customization must be carefully assessed and mitigated, especially with regard to unintended consequences. Looking ahead, and with the advance of artificial general intelligence, this Fourth Industrial Revolution could arrive around the middle of the 21st century; it would allow multiple activities to be tackled simultaneously, in real time and in a customized manner.