In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, intelligent reflecting surface (IRS) assisted unmanned aerial vehicle (UAV) communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase-shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain closed-form solutions for the boundary conditions of the two modes. We then transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero-forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes obtain better performance than traditional schemes.
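The alternating optimization pattern mentioned above — fix all variable blocks except one, solve for that block, and cycle — can be sketched on a toy problem. The biconvex objective below is an invented stand-in for the paper's joint trajectory/beamforming/phase-shift problem, chosen so each block update has a closed form.

```python
def f(x, y):
    # Toy biconvex objective: convex in x for fixed y and vice versa.
    # An invented stand-in for the paper's joint design problem.
    return (x - y) ** 2 + (x - 3.0) ** 2

def alternating_optimization(x=0.0, y=0.0, iters=50):
    """Cycle exact block updates until convergence."""
    history = [f(x, y)]
    for _ in range(iters):
        x = (y + 3.0) / 2.0   # argmin over x with y fixed (df/dx = 0)
        y = x                 # argmin over y with x fixed (df/dy = 0)
        history.append(f(x, y))
    return x, y, history

x_star, y_star, hist = alternating_optimization()
```

Because each block update is an exact minimizer, the objective is non-increasing across iterations, which is the monotonicity property that AO-based schemes rely on for convergence.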
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are hungry for computing power, which conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought about by CMOS size reduction in very-large-scale integrated circuits (VLSI) struggles to meet the growing demand for AI computing power. To address this issue, technical approaches such as neuromorphic computing are attracting great attention because they break the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture such as a spiking neural network (SNN), development in this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive demand for computing power. Spiking neural networks (SNNs) provide a new, brain-inspired approach to improving the energy efficiency, computational architecture, and biological plausibility of current deep learning applications. In the early stage of development, their poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicality compared with earlier research results and are continuously producing significant results. Although there is already much literature on SNNs, a comprehensive review of SNNs from the perspective of improving performance and practicality, incorporating the latest research results, is still lacking. Starting from this issue, this paper elaborates on SNNs along their complete usage pipeline, including network construction, data processing, model training, development, and deployment, aiming to provide comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically from four aspects: composition structure, data sets, learning algorithms, and software/hardware development platforms. The development characteristics of SNNs in intelligent computing are then summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that, in machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in energy efficiency and spatio-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for using SNNs. SNNs show a broad development prospect for brain-like computing.
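The basic computational unit behind SNNs is the spiking neuron. A minimal sketch of a discrete-time leaky integrate-and-fire (LIF) neuron follows; the threshold, reset, and leak values are illustrative assumptions, not taken from any specific model in the review.

```python
def lif_simulate(current, v_th=1.0, v_reset=0.0, leak=0.9):
    """Discrete-time leaky integrate-and-fire neuron.

    The membrane potential decays by `leak` each step, integrates the
    input current, and emits a spike (1) once it reaches `v_th`,
    after which it is reset. All constants are illustrative."""
    v, spikes = 0.0, []
    for i in current:
        v = leak * v + i           # leak, then integrate the input
        if v >= v_th:              # threshold crossing -> spike
            spikes.append(1)
            v = v_reset            # hard reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant drive makes the potential climb, fire, reset, and repeat.
out = lif_simulate([0.3] * 10)
```

The resulting binary spike train, rather than a continuous activation, is what an SNN propagates between layers, which is the source of its sparsity and energy-efficiency advantages.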
In this paper, we investigate energy efficiency maximization for mobile edge computing (MEC) in intelligent reflecting surface (IRS) assisted unmanned aerial vehicle (UAV) communications. In particular, the UAV can collect the computing tasks of the terrestrial users and transmit the results back to them after computing. We jointly optimize the users' transmit beamforming and uploading ratios, the phase-shift matrix of the IRS, and the UAV trajectory to improve the energy efficiency. The formulated optimization problem is highly non-convex and difficult to solve directly. Therefore, we decompose the original problem into three sub-problems. We first propose a successive convex approximation (SCA) based method to design the beamforming of the users and the phase-shift matrix of the IRS, and apply the Lagrange dual method to obtain a closed-form expression for the uploading ratios. For the trajectory optimization, we propose a block coordinate descent (BCD) based method to obtain a locally optimal solution. Finally, we propose the overall alternating optimization (AO) based algorithm and analyze its complexity, which is equivalent to or lower than that of existing algorithms. Simulation results show the superiority of the proposed method over existing schemes in energy efficiency.
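The successive convex approximation step mentioned above can be illustrated on a one-dimensional toy problem (an assumption for exposition, not the paper's rate/energy objective): write the non-convex objective as a difference of convex functions, linearize the subtracted part at the current iterate to get a convex upper-bound surrogate, and minimize the surrogate in closed form.

```python
def f(x):
    # Non-convex objective (x^2 - 1)^2 = (x^4 + 1) - 2x^2:
    # convex part g(x) = x^4 + 1 minus convex part h(x) = 2x^2.
    return (x * x - 1.0) ** 2

def sca(x=2.0, iters=30):
    """At each step, linearize h at x_k; g(x) - h_lin(x) is then a
    convex upper bound of f whose minimizer solves
    4x^3 - 4*x_k = 0, i.e. x = x_k ** (1/3)."""
    trace = [f(x)]
    for _ in range(iters):
        x = x ** (1.0 / 3.0)       # closed-form surrogate minimizer
        trace.append(f(x))
    return x, trace

x_sca, trace = sca()
```

Because the surrogate upper-bounds the true objective and touches it at the current point, each iteration cannot increase the objective, which is the standard convergence argument for SCA-based designs.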
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, proper security measures must be implemented to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and contribute to a secure and safe future for cloud computing. It was concluded that the security implications of edge computing can be covered with the help of new technologies and techniques.
In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks, called ubiquitous computing systems (UCS). A UCS necessitates heterogeneity, management-level support, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to effectively design intrusion detection systems (IDS) for securing UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To this end, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is carried out, and the outcomes are assessed under different aspects. The comparison study emphasizes the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, an energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
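The paper's EADSO clustering is a metaheuristic; as a hypothetical simplification, the sketch below elects cluster heads greedily by residual energy and assigns members to the nearest head. Node coordinates, energies, and the head count are all made up for illustration.

```python
def pick_cluster_heads(nodes, k=2):
    """Greedy energy-aware cluster-head election: take the k nodes with
    the highest residual energy as heads, then attach every other node
    to its nearest head. A deliberate simplification of the paper's
    EADSO metaheuristic, not the algorithm itself."""
    heads = sorted(nodes, key=lambda n: n["energy"], reverse=True)[:k]
    clusters = {h["id"]: [] for h in heads}
    for n in nodes:
        if any(h["id"] == n["id"] for h in heads):
            continue  # heads are not members of another cluster
        nearest = min(heads,
                      key=lambda h: (h["x"] - n["x"]) ** 2
                      + (h["y"] - n["y"]) ** 2)
        clusters[nearest["id"]].append(n["id"])
    return clusters

# Four hypothetical IoT nodes: position plus residual energy.
nodes = [
    {"id": 0, "x": 0.0, "y": 0.0, "energy": 0.9},
    {"id": 1, "x": 0.5, "y": 0.1, "energy": 0.2},
    {"id": 2, "x": 5.0, "y": 5.0, "energy": 0.8},
    {"id": 3, "x": 4.5, "y": 5.2, "energy": 0.3},
]
clusters = pick_cluster_heads(nodes, k=2)
```

Rotating head election toward high-energy nodes is the usual rationale for energy-aware clustering: heads pay the aggregation and relay cost, so choosing depleted nodes shortens network lifetime.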
Similarity has been playing an important role in computer science, artificial intelligence (AI), and data science. However, similarity intelligence has been ignored in these disciplines. Similarity intelligence is the process of discovering intelligence through similarity. This article explores similarity intelligence, similarity-based reasoning, and similarity computing and analytics. More specifically, it looks at similarity as an intelligence and at its impact on a few areas of the real world. It explores how similarity intelligence accompanies experience-based intelligence, knowledge-based intelligence, and data-based intelligence in playing an important role in computer science, AI, and data science. The article examines similarity-based reasoning (SBR) and proposes three similarity-based inference rules. It then examines similarity computing and analytics, and a multiagent SBR system. The main contributions of this article are: 1) similarity intelligence is discovered from experience-based intelligence, consisting of data-based intelligence and knowledge-based intelligence; and 2) similarity-based reasoning, computing, and analytics can be used to create similarity intelligence. The proposed approach will facilitate research and development in similarity intelligence, similarity computing and analytics, machine learning, and case-based reasoning.
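A similarity-based inference rule of the kind discussed above can be sketched concretely: if a stored case is sufficiently similar to a query, transfer the case's conclusion to the query. The cosine measure and the 0.8 threshold below are illustrative choices, not the article's formal rules.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_based_inference(query, cases, threshold=0.8):
    """Minimal SBR rule: find the most similar stored case; if its
    similarity clears the threshold, reuse its conclusion."""
    best = max(cases, key=lambda c: cosine(query, c["features"]))
    score = cosine(query, best["features"])
    return (best["label"], score) if score >= threshold else (None, score)

# Two hypothetical cases with known conclusions.
cases = [
    {"features": [1.0, 0.0, 0.0], "label": "A"},
    {"features": [0.0, 1.0, 1.0], "label": "B"},
]
label, score = similarity_based_inference([0.9, 0.1, 0.0], cases)
```

This is the core mechanic shared with case-based reasoning: the similarity score both selects the precedent and quantifies the confidence of the transferred conclusion.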
In this paper, the security problem for multi-access edge computing (MEC) networks is researched, and an intelligent immunity-based security defense system is proposed to identify unauthorized mobile users and to protect the security of the whole system. In the proposed security defense system, security is protected by intelligent immunity through three functions: an identification function, a learning function, and a regulation function. Meanwhile, a three-process intelligent algorithm is proposed for the intelligent immunity system. Numerical simulations are given to prove the effectiveness of the proposed approach.
In recent years, with the increase in the price of cryptocurrencies, the amount of malicious cryptomining software has increased significantly. With their powerful spreading ability, cryptomining malware can unknowingly occupy our resources, harm our interests, and damage other legitimate assets. Although traditional rule-based malware detection methods have a low false alarm rate, they have a relatively low detection rate when faced with a large volume of emerging malware. Common machine learning or deep learning based methods have some ability to learn and detect unknown malware, but the features they learn are single and independent and cannot be learned adaptively. To address these problems, we propose a deep learning model with multiple inputs of multi-modal features, which can simultaneously accept digital features and image features of different dimensions. The model comprises parallel learning by three sub-models and ensemble learning by another specific sub-model. The four sub-models can be processed in parallel on different devices and can further be applied to edge computing environments. The model adaptively learns multi-modal features and outputs prediction results. The detection rate of our model reaches 97.01%, and the false alarm rate is only 0.63%. The experimental results prove the advantage and effectiveness of the proposed method.
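The multi-input idea — accepting digital features and image features simultaneously — can be sketched as plain feature fusion. The toy "image", the numeric features, and the fixed scorer weights below are all invented for illustration; the paper's sub-models are deep networks, not a hand-weighted linear scorer.

```python
def flatten(image):
    """Flatten a 2-D grayscale 'image' (list of rows) into a vector."""
    return [px for row in image for px in row]

def fuse(numeric, image):
    """Multi-modal fusion: concatenate digital features with flattened
    image features into one input vector."""
    return list(numeric) + flatten(image)

def score(features, weights, bias=0.0):
    """A single linear malware-vs-benign scorer over the fused vector.
    The weights are made up for illustration, not learned."""
    return sum(w * f for w, f in zip(weights, features)) + bias

numeric = [0.7, 0.1]              # hypothetical: entropy, import ratio
image = [[0.0, 1.0], [1.0, 0.0]]  # hypothetical 2x2 binary byte-plot
fused = fuse(numeric, image)
weights = [1.0, -0.5, 0.25, 0.25, 0.25, 0.25]
s = score(fused, weights)
```

In the paper the two modalities feed separate sub-models whose representations are learned jointly; the concatenation above only shows the input-level plumbing that makes a multi-input model possible.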
Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for the support of human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load for image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work, we propose an intelligent Task Offloading Algorithm (iTOA) for UAV edge computing networks. Compared with existing methods, iTOA is able to perceive the network's environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories to acquire the best decision by maximizing a reward, such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probabilities for MCTS. The sDNN is trained in a self-supervised learning manner, with the training data set obtained from iTOA itself as its own teacher. Compared with game theory and greedy search based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
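iTOA's decision core is deep MCTS; the sketch below isolates just the UCB-style selection rule that MCTS applies at each tree node, framed as a two-armed bandit over "execute locally" vs. "offload". The reward means and the noise model are assumptions for illustration, not the paper's latency model.

```python
import math
import random

def ucb1_offload(mean_rewards, rounds=2000, c=1.4, seed=0):
    """UCB1 over offloading actions. Rewards (think: negative latency)
    are noisy draws around fixed means; UCB1 trades off exploring both
    actions against exploiting the better one, exactly as MCTS does
    when selecting a child node."""
    rng = random.Random(seed)
    counts = [0] * len(mean_rewards)
    sums = [0.0] * len(mean_rewards)
    for t in range(1, rounds + 1):
        if 0 in counts:
            a = counts.index(0)    # play each action once first
        else:
            a = max(range(len(mean_rewards)),
                    key=lambda i: sums[i] / counts[i]
                    + c * math.sqrt(math.log(t) / counts[i]))
        sums[a] += mean_rewards[a] + rng.gauss(0.0, 0.1)  # noisy reward
        counts[a] += 1
    return counts

# Action 0 = execute locally (mean 0.2), action 1 = offload (mean 0.8).
counts = ucb1_offload([0.2, 0.8])
```

In full MCTS this rule is applied recursively down a tree of offloading decisions, and the sDNN's prior probabilities bias the exploration term; the bandit view only exposes the exploration/exploitation mechanics.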
Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network, making full use of idle computing resources. Realizing the overall resource allocation of a dispersed computing system is a significant challenge: by jointly managing the task requests of external users and the resource allocation of the internal system to achieve dynamic balance, the efficient and stable operation of the system can be guaranteed. In this paper, we first propose a task-resource joint management model, which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing. Secondly, to avoid downtime caused by resource overload, we introduce intelligent control into the task-resource joint management model. The existence and stability of the positive periodic solution of the model are obtained by theoretical analysis, which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy. Additionally, to improve system utilization, the task-resource joint management model with bi-directional intelligent control is further explored. Setting control thresholds for the two resources not only restrains system resource overload but also provides positive incentive control when a large number of idle resources appears. The existence and stability of the positive periodic solution of this model are also proved theoretically; that is, the model effectively avoids both extreme cases and ensures the efficient and stable operation of the system. Finally, numerical simulation verifies the correctness and validity of the theoretical results.
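The bi-directional threshold control described above can be mimicked with a scalar load simulation: task intake is throttled above a high threshold and boosted below a low one. The arrival/service rates and thresholds below are invented, not the paper's model parameters.

```python
def simulate(steps=200, load=0.0, high=0.8, low=0.2):
    """Bi-directional threshold feedback on system load: above `high`,
    admission is throttled (negative feedback against overload); below
    `low`, extra tasks are admitted to absorb idle capacity (positive
    incentive). All rates are illustrative assumptions."""
    arrivals, service = 0.15, 0.1
    trace = []
    for _ in range(steps):
        rate = arrivals
        if load > high:
            rate *= 0.2        # throttle intake near overload
        elif load < low:
            rate *= 2.0        # incentivize intake when mostly idle
        load = max(0.0, load + rate - service)
        trace.append(load)
    return trace

trace = simulate()
```

The trajectory ramps up, then settles into a bounded oscillation under the high threshold — a discrete analogue of the stable periodic solution the paper proves for its controlled model.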
The article consists of two parts. Part I shows the possibility of quantum/soft computing optimizers of knowledge bases (QSCOptKB™) as a toolkit for implementing quantum deep machine learning technology in the search for solutions to intelligent cognitive control tasks, applying a cognitive helmet as a neurointerface. In particular, the aim of this part is to demonstrate the possibility of classifying the mental states of a human operator online, with knowledge extracted from electroencephalograms based on the sophisticated SCOptKB™ and QCOptKB™ toolkits. The application of soft computing technologies to identify objective indicators of the psychophysiological state of an examined person is described. The role and necessity of applying intelligent information technologies based on computational intelligence toolkits to the objective estimation of the general psychophysical state of a human operator are shown. The developed information technology is examined with examples that are difficult in diagnostic practice: emotion-state estimation of children with autism spectrum disorder (ASD) and of dementia patients, and as background for designing knowledge bases for intelligent service robots. The application of cognitive intelligent control in the navigation of an autonomous robot for obstacle avoidance is demonstrated.
Redundant robotic arm models as a control object are discussed. The background of computational intelligence IT based on a soft computing optimizer of knowledge bases in smart robotic manipulators is introduced. The soft computing optimizer is a sophisticated computational intelligence toolkit: a deep machine learning software platform with an optimal fuzzy neural network structure. The methods for the development and design of control systems based on soft computing introduced in this Part I allow one to implement the principle of designing optimal intelligent control systems with a maximum level of reliability and controllability of a complex control object under conditions of uncertainty in the source data and in the presence of stochastic noises of various physical and statistical characters. The knowledge bases formed with the soft computing optimizer produce robust control laws for the schedules of time-dependent coefficient gains of conventional PID controllers over a wide range of external perturbations, and these laws are maximally insensitive to random variations of the control object's structure. Robustness is achieved by applying a vector fitness function in the genetic algorithm, one component of which describes the physical principle of minimum production of generalized entropy in both the control object and the control system, while the other components describe conventional control objective functionals such as minimum control error. The application of soft computing technologies (Part I) to developing a robust intelligent control system that solves the problem of precision positioning of redundant (3-DOF and 7-DOF) manipulators is considered. The application of quantum soft computing to the robust intelligent control of smart manipulators is described in Part II.
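The conventional PID controller whose gain schedules the knowledge bases produce can be sketched in its fixed-gain discrete form on a toy first-order plant. The plant and the hand-picked gains are assumptions for illustration; in the paper the gains would be time-dependent schedules produced by the soft computing optimizer.

```python
def pid_track(kp=2.0, ki=0.5, kd=0.1, setpoint=1.0, steps=400, dt=0.1):
    """Fixed-gain discrete PID loop driving the toy plant x' = -x + u
    toward the setpoint via forward-Euler integration. Gains and plant
    are hand-picked assumptions, not optimizer output."""
    x, integ, prev_err = 0.0, 0.0, 0.0
    for k in range(steps):
        err = setpoint - x
        integ += err * dt                             # integral term
        deriv = (err - prev_err) / dt if k else 0.0   # derivative term
        u = kp * err + ki * integ + kd * deriv        # control signal
        x += (-x + u) * dt                            # Euler plant step
        prev_err = err
    return x

x_final = pid_track()
```

The integral term is what removes the steady-state error here; the optimizer's contribution in the paper is to adapt kp, ki, and kd over time so that this behavior survives perturbations and model changes.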
In parametric cost estimating, objections to using statistical Cost Estimating Relationships (CERs) and parametric models include problems of low statistical significance due to limited data points, biases in the underlying data, and lack of robustness. Soft Computing (SC) technologies are used for building intelligent cost models. The SC models are systematically evaluated based on their training on, and prediction of, historical cost data for airborne avionics systems. Results indicating the strengths and weaknesses of each model are presented. In general, the intelligent cost models have higher prediction precision, better data adaptability, and stronger self-learning capability than the regression CERs.
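A classic parametric CER has the power-law form cost = a · weight^b, which becomes linear after a log transform and can be fit by ordinary least squares. The sketch below uses synthetic data generated from known coefficients (an assumption; the paper's avionics cost data are not reproduced here), so the fit should recover those coefficients exactly.

```python
import math

def fit_cer(weights, costs):
    """Fit the power-law CER  cost = a * weight**b  by ordinary least
    squares on log-transformed data:
    ln(cost) = ln(a) + b * ln(weight)."""
    xs = [math.log(w) for w in weights]
    ys = [math.log(c) for c in costs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b

# Synthetic data generated exactly from cost = 3 * weight**0.7.
weights = [10.0, 20.0, 40.0, 80.0]
costs = [3.0 * w ** 0.7 for w in weights]
a, b = fit_cer(weights, costs)
```

With only a handful of noisy real data points, this regression is exactly where the low-significance and bias objections bite, which is the gap the SC models are meant to close.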
The rapid development of artificial intelligence (AI) facilitates various applications in all areas but also poses great challenges for its hardware implementation in terms of speed and energy because of the explosive growth of data. Optical computing provides a distinctive perspective for addressing this bottleneck by harnessing the unique properties of photons, including broad bandwidth, low latency, and high energy efficiency. In this review, we introduce the latest developments in optical computing for different AI models, including feedforward neural networks, reservoir computing, and spiking neural networks (SNNs). Recent progress in integrated photonic devices, combined with the rise of AI, provides a great opportunity for the renaissance of optical computing in practical applications. This effort requires multidisciplinary contributions from a broad community. This review provides an overview of the state-of-the-art accomplishments of recent years, discusses the availability of current technologies, and points out the remaining challenges in different aspects of pushing the frontier. We anticipate that the era of large-scale integrated photonics processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.
Our living environments are gradually being occupied by an abundance of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. As the number of these devices attached to the network increases, the complexity of configuring and controlling them grows, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to act merely as primitive stand-alone appliances that provide only the facilities and services they were designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness so that they can be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" in which the system recognizes the users and autonomously programs the environment to be energy efficient and responsive to their needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper shows how computational intelligence (CI), an emerging domain of AI, can be employed and embedded in our living spaces to help such environments become more energy efficient, intelligent, adaptive, and convenient to their users.
In the past decade, with the optimization and widespread use of high-performance computing devices and data storage devices, artificial intelligence technology driven by computing resources and vast amounts of data has been greatly improved. Artificial intelligence technology has also been applied to medical image analysis, where it has yielded advancements in early detection.
In recent years, statistics have indicated that the number of patients with malignant brain tumors has increased sharply. However, most surgeons still perform surgical training using the traditional autopsy and prosthesis model, which encounters many problems, such as insufficient corpse resources, low efficiency, and high cost. With the advent of the 5G era, a wide range of Industrial Internet of Things (IIoT) applications have been developed. The Virtual Reality (VR) and Augmented Reality (AR) technologies that emerged with 5G are developing rapidly for intelligent medical training. To address the challenges encountered during neurosurgery training, and in combination with cloud computing, a highly immersive AR-based remote collaborative virtual surgery training system for brain tumor neurosurgery, with an embedded VR simulator, is developed in this paper. The system enables real-time remote surgery-training interaction through 5G transmission. Six experts and 18 novices were invited to participate in an experiment to verify the system. Subsequently, the two simulators were evaluated using face and content validation methods. The results obtained by training the novices 50 times were further analyzed using the Learning Curve-Cumulative Sum (LC-CUSUM) method to validate the effectiveness of the two simulators. The face and content validation demonstrated that the AR simulator in the system is superior to the VR simulator in terms of vision and scene authenticity and has a better effect on the improvement of surgical skills. Moreover, the surgical training scheme proposed in this paper is effective, and the remote collaborative training effect of the system is ideal.
In healthcare systems, Internet of Things (IoT) innovation and development have opened new ways to evaluate patient data. A cloud-based platform tends to process the data generated by IoT medical devices, rather than requiring expensive local storage and computational hardware. In this paper, an intelligent healthcare system is proposed for the prediction and severity analysis of lung disease from chest computed tomography (CT) images of patients with pneumonia, Covid-19, tuberculosis (TB), and cancer. Firstly, the CT images are captured and transmitted to the fog node through IoT devices. In the fog node, each image is converted into a convenient and efficient format for further processing. The Advanced Encryption Standard (AES) algorithm serves a substantial role in the IoT and fog nodes in preventing data from being accessed by other systems. Finally, the preprocessed image is classified automatically in the cloud using various transfer and ensemble learning models. Here, different pre-trained deep learning architectures (Inception-ResNet-v2, VGG-19, ResNet-50) are adopted for transfer-learning-based feature extraction. The softmax outputs of heterogeneous base classifiers provide the individual predictions. As a meta-classifier, an ensemble approach is employed to obtain the final optimal results. The disease-predicted image is consigned to a recurrent neural network with long short-term memory (RNN-LSTM) for severity analysis, and the patient is directed to seek therapy based on the outcome. The proposed method achieved 98.6% accuracy, 0.978 precision, 0.982 recall, and 0.974 F1-score on the five-class classification. The experimental findings reveal that the proposed framework assists medical experts with lung disease screening and provides a valuable second perspective.
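The meta-classifier step — combining the softmax outputs of heterogeneous base classifiers — can be sketched as simple soft voting. The logits below merely stand in for the outputs of the three pre-trained backbones and are invented for illustration.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def ensemble_predict(logits_per_model):
    """Soft voting: average the softmax outputs of the base classifiers
    and take the argmax of the averaged distribution."""
    probs = [softmax(l) for l in logits_per_model]
    k = len(probs[0])
    avg = [sum(p[i] for p in probs) / len(probs) for i in range(k)]
    return max(range(k), key=lambda i: avg[i]), avg

# Hypothetical logits from three base models over 5 classes
# (e.g., pneumonia, Covid-19, TB, cancer, normal).
logits = [
    [2.0, 0.5, 0.1, 0.0, 0.2],
    [1.5, 1.8, 0.0, 0.1, 0.3],
    [2.2, 0.4, 0.2, 0.1, 0.0],
]
pred, avg = ensemble_predict(logits)
```

Even though the second model disagrees, the averaged distribution still settles on class 0, which is exactly the error-smoothing effect an ensemble meta-classifier is used for.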
Federated learning is an emerging machine learning technique that enables clients to collaboratively train a deep learning model without uploading raw data to the aggregation server. Each client may be equipped with different computing resources for model training. A client with lower computing capability requires more time for model training, resulting in a prolonged training time in federated learning; it may even fail to train the entire model because of out-of-memory issues. This study aims to tackle these problems and proposes the federated feature concatenate (FedFC) method for federated learning with heterogeneous clients. FedFC leverages model splitting and feature concatenation to offload a portion of the training load from clients to the aggregation server. Each client in FedFC can collaboratively train the model with a different cutting layer. Therefore, the specific features learned in the deeper layers of the server-side model are more consistent for data-class classification. Accordingly, FedFC can reduce the computation load for resource-constrained clients and accelerate the convergence time. The performance effectiveness is verified under different dataset scenarios, such as data and class imbalance among the participant clients, and the performance impacts of different cutting layers are evaluated during model training. The experimental results show that the co-adapted features have a critical impact on adequate classification by the deep learning model. Overall, FedFC not only shortens the convergence time but also improves the best accuracy by up to 5.9% and 14.5% compared with conventional federated learning and splitfed learning, respectively. In conclusion, the proposed approach is feasible and effective for heterogeneous clients in federated learning.
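The model-splitting half of FedFC can be sketched as follows: a client runs the layers up to its cutting layer and uploads the intermediate features, and the server completes the remaining layers. The four-"layer" toy network below is an assumption for illustration; it only shows that different cut points yield the same end-to-end computation, which is what lets weak and strong clients pick different cutting layers.

```python
def layer(vec, weight):
    """One toy 'layer': elementwise scaling followed by ReLU."""
    return [max(0.0, weight * v) for v in vec]

FULL_MODEL = [0.5, 2.0, 1.0, 0.25]   # per-layer weights of a toy net

def client_forward(x, cut):
    """Client runs layers [0, cut) locally, then uploads the features."""
    for w in FULL_MODEL[:cut]:
        x = layer(x, w)
    return x

def server_forward(features, cut):
    """Server completes layers [cut, end) on the uploaded features."""
    x = features
    for w in FULL_MODEL[cut:]:
        x = layer(x, w)
    return x

x = [1.0, -2.0, 3.0]
# A weak client cuts early (cut=1); a strong client cuts late (cut=3).
out_weak = server_forward(client_forward(x, 1), 1)
out_strong = server_forward(client_forward(x, 3), 3)
```

The weaker the client, the earlier the cut and the more layers the server absorbs; FedFC then concatenates the feature batches arriving from clients with different cuts so one server-side model can train on all of them.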
Funding: This work was supported by the Key Scientific and Technological Project of Henan Province (Grant No. 222102210212), the Doctoral Research Start-up Project of Henan Institute of Technology (Grant No. KQ2005), and the Key Research Projects of Colleges and Universities in Henan Province (Grant No. 23B510006).
Abstract: In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain closed-form solutions for the boundary conditions of the two modes. We then transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero-forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes achieve better performance than traditional schemes.
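As a toy illustration of the alternating optimization (AO) pattern the abstract refers to, the sketch below alternately minimizes a simple coupled convex function over each variable in closed form. The objective and the block updates are invented for illustration and are unrelated to the paper's actual MEC formulation:

```python
def ao_minimize(iters=50):
    # Toy coupled convex objective: f(x, y) = (x-1)^2 + (y-2)^2 + 0.5*x*y
    f = lambda x, y: (x - 1) ** 2 + (y - 2) ** 2 + 0.5 * x * y

    x, y = 0.0, 0.0
    history = [f(x, y)]
    for _ in range(iters):
        # Block 1: minimize over x with y fixed (set df/dx = 0)
        x = 1.0 - 0.25 * y
        # Block 2: minimize over y with x fixed (set df/dy = 0)
        y = 2.0 - 0.25 * x
        history.append(f(x, y))
    return x, y, history
```

Each exact block update can only decrease the objective, so the sequence of objective values is monotonically non-increasing, which mirrors the convergence behaviour AO-based schemes rely on.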
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2021YFA0716400), the National Natural Science Foundation of China (Grant Nos. 62225405, 62150027, 61974080, 61991443, 61975093, 61927811, 61875104, 62175126, and 62235011), the Ministry of Science and Technology of China (Grant Nos. 2021ZD0109900 and 2021ZD0109903), the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving Electronics, and the Tsinghua University Initiative Scientific Research Program.
Abstract: AI development has brought great success in advancing the information age. At the same time, the large-scale artificial neural networks used to build AI systems are hungry for computing power, which conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought about by CMOS size reduction in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of human neural networks, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in neuromorphic architectures such as the spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61974164, 62074166, 62004219, 62004220, and 62104256).
Abstract: Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive power consumption. Spiking neural networks (SNNs) provide a new, brain-inspired approach to improving the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results and are continuously producing significant results. Although there is already much literature on SNNs, a comprehensive review is still lacking that approaches SNNs from the perspective of improving performance and practicality while incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, datasets, learning algorithms, and software/hardware development platforms. The development characteristics of SNNs in intelligent computing are then summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show broad development prospects for brain-like computing.
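A minimal leaky integrate-and-fire (LIF) neuron, the basic unit most SNN frameworks build on, can be sketched as follows; the parameter values here are arbitrary illustrations, not taken from the surveyed literature:

```python
def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential decays each step,
    integrates the input, and emits a spike (then resets) at threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leaky integration
        if v >= v_thresh:
            spikes.append(1)      # fire
            v = v_reset           # reset after the spike
        else:
            spikes.append(0)
    return spikes
```

A constant sub-threshold input produces a regular spike train, a simple example of the rate coding SNNs use to carry information in spike timing rather than continuous activations.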
Funding: Supported by the Key Scientific and Technological Project of Henan Province (Grant No. 222102210212), the Doctoral Research Start-up Projects of Henan Institute of Technology (Grant Nos. KQ2005 and KQ2110), and the Key Research Projects of Colleges and Universities in Henan Province (Grant No. 23B510006).
Abstract: In this paper, we investigate energy efficiency maximization for mobile edge computing (MEC) in intelligent reflecting surface (IRS) assisted unmanned aerial vehicle (UAV) communications. In particular, the UAV can collect the computing tasks of the terrestrial users and transmit the results back to them after computing. We jointly optimize the users' transmit beamforming and uploading ratios, the phase shift matrix of the IRS, and the UAV trajectory to improve the energy efficiency. The formulated optimization problem is highly non-convex and difficult to solve directly. Therefore, we decompose the original problem into three sub-problems. We first propose a successive convex approximation (SCA) based method to design the beamforming of the users and the phase shift matrix of the IRS, and apply the Lagrange dual method to obtain a closed-form expression for the uploading ratios. For the trajectory optimization, we propose a block coordinate descent (BCD) based method to obtain a locally optimal solution. Finally, we propose the overall alternating optimization (AO) based algorithm and analyze its complexity, which is equivalent to or lower than that of existing algorithms. Simulation results show the superiority of the proposed method over existing schemes in energy efficiency.
Abstract: Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, proper security measures need to be implemented to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and help build a secure and safe future for cloud computing. It was concluded that the security implications of edge computing can readily be addressed with the help of new technologies and techniques.
Funding: This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 624-611-1443).
Abstract: In an Internet of Things (IoT) based system, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks, called ubiquitous computing systems (UCS). A UCS requires heterogeneity, management, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS, and energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) can be employed to effectively design intrusion detection systems (IDS) that secure UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is conducted, and the outcomes are assessed under different aspects. The comparison study emphasizes the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
Abstract: Similarity has been playing an important role in computer science, artificial intelligence (AI), and data science, yet similarity intelligence has been ignored in these disciplines. Similarity intelligence is a process of discovering intelligence through similarity. This article explores similarity intelligence, similarity-based reasoning, and similarity computing and analytics. More specifically, it looks at similarity as a form of intelligence and its impact on a few areas of the real world. It explores how similarity intelligence, accompanying experience-based, knowledge-based, and data-based intelligence, plays an important role in computer science, AI, and data science. This article examines similarity-based reasoning (SBR) and proposes three similarity-based inference rules. It then examines similarity computing and analytics and a multiagent SBR system. The main contributions of this article are: 1) similarity intelligence is discovered from experience-based intelligence, which consists of data-based intelligence and knowledge-based intelligence; 2) similarity-based reasoning, computing, and analytics can be used to create similarity intelligence. The proposed approach will facilitate research and development of similarity intelligence, similarity computing and analytics, machine learning, and case-based reasoning.
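One way to read a similarity-based inference rule is: if a query is sufficiently similar to a stored case, reuse that case's conclusion. The sketch below uses cosine similarity and a fixed threshold; both choices are illustrative assumptions, not the article's actual rules:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def similar_case_infer(query, cases, threshold=0.8):
    """SBR sketch: reuse the conclusion of the most similar case,
    but only when its similarity clears the threshold."""
    best_case, conclusion = max(cases, key=lambda c: cosine(query, c[0]))
    if cosine(query, best_case) >= threshold:
        return conclusion
    return None  # no sufficiently similar experience: abstain
```

Abstaining below the threshold is what distinguishes this from plain nearest-neighbour classification: the rule only fires when the premise "query is similar to case" actually holds.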
Funding: This work was supported by the National Natural Science Foundation of China (No. 61971026) and the Fundamental Research Funds for the Central Universities (No. FRF-TP-18-008A3).
Abstract: In this paper, the security problem of the multi-access edge computing (MEC) network is studied, and an intelligent immunity-based security defense system is proposed to identify unauthorized mobile users and protect the security of the whole system. In the proposed system, security is protected by intelligent immunity through three functions: an identification function, a learning function, and a regulation function. Meanwhile, a three-process intelligent algorithm is proposed for the intelligent immunity system. Numerical simulations are given to prove the effectiveness of the proposed approach.
Funding: Supported by the Key Research and Development Program of Shandong Province (Soft Science Project) (2020RKB01364).
Abstract: In recent years, with the increase in the price of cryptocurrencies, the number of malicious cryptomining software has increased significantly. With their powerful spreading ability, cryptomining malware can unknowingly occupy our resources, harm our interests, and damage legitimate assets. Although current traditional rule-based malware detection methods have a low false alarm rate, they have a relatively low detection rate when faced with a large volume of emerging malware. Even though common machine learning or deep learning based methods have some ability to learn and detect unknown malware, the characteristics they learn are single and independent and cannot be learned adaptively. Aiming at the above problems, we propose a deep learning model with multiple inputs of multi-modal features, which can simultaneously accept digital features and image features of different dimensions. The model comprises parallel learning by three sub-models and ensemble learning by another specific sub-model. The four sub-models can be processed in parallel on different devices and can be further applied to edge computing environments. The model can adaptively learn multi-modal features and output prediction results. The detection rate of our model is as high as 97.01%, and the false alarm rate is only 0.63%. The experimental results prove the advantage and effectiveness of the proposed method.
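The multi-input idea, separate branches for digital and image features whose outputs are fused before a final prediction, can be sketched as a tiny forward pass. The layer sizes, weights, and sigmoid head below are illustrative stand-ins for the paper's actual sub-models:

```python
import math

def branch(x, weights):
    # One dense layer with ReLU, acting as a per-modality feature extractor.
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in weights]

def multimodal_forward(x_digital, x_image, w_digital, w_image, w_head):
    # Learn each modality separately, then fuse by feature concatenation.
    fused = branch(x_digital, w_digital) + branch(x_image, w_image)
    z = sum(w * f for w, f in zip(w_head, fused))
    return 1.0 / (1.0 + math.exp(-z))  # probability the sample is malware
```

Because each branch only sees its own modality, the two inputs may have completely different dimensions, which is the property that lets digital features and image features coexist in one model.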
Funding: Supported by the Artificial Intelligence Key Laboratory of Sichuan Province (No. 2019RYJ05) and the National Natural Science Foundation of China (No. 61971107).
Abstract: The Unmanned Aerial Vehicle (UAV) has emerged as a promising technology for the support of human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load of image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work we propose an intelligent Task Offloading Algorithm (iTOA) for the UAV edge computing network. Compared with existing methods, iTOA is able to perceive the network environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories to acquire the best decision by maximizing a reward such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probability for MCTS. The sDNN is trained by a self-supervised learning manager, where the training data set is obtained from iTOA itself as its own teacher. Compared with game theory and greedy search based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
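At its core, the tree search described above repeatedly simulates candidate offloading decisions and keeps the one with the best reward. The one-step sketch below uses UCB1 selection over two actions with a made-up latency model; the environment, constants, and reward are illustrative assumptions, not iTOA itself:

```python
import math
import random

def simulate_latency(action, rng):
    # Toy rollout model: offloading is faster on average here (an assumption).
    if action == "offload":
        return rng.gauss(2.0, 0.5)
    return rng.gauss(5.0, 0.2)      # local execution

def mcts_offload(n_sims=2000, c=1.4, seed=0):
    rng = random.Random(seed)
    actions = ["offload", "local"]
    visits = {a: 0 for a in actions}
    total_latency = {a: 0.0 for a in actions}
    for t in range(1, n_sims + 1):
        def ucb(a):
            if visits[a] == 0:
                return float("inf")                 # try each action once
            reward = -total_latency[a] / visits[a]  # reward = negative latency
            return reward + c * math.sqrt(math.log(t) / visits[a])
        action = max(actions, key=ucb)                          # selection
        total_latency[action] += simulate_latency(action, rng)  # simulation
        visits[action] += 1                                     # backup
    return max(actions, key=lambda a: visits[a])    # most-visited action wins
```

In the full algorithm the same select/simulate/backup loop runs over a tree of multi-task offloading decisions, and the sDNN prior would bias the selection step toward promising branches.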
Funding: Supported in part by the National Science Foundation Project of P.R. China (No. 61931001) and the Scientific and Technological Innovation Foundation of Foshan, USTB (No. BK20AF003).
Abstract: Dispersed computing can link all devices with computing capabilities on a global scale to form a fully decentralized network, making full use of idle computing resources. Realizing the overall resource allocation of a dispersed computing system is a significant challenge: by jointly managing the task requests of external users and the resource allocation of the internal system to achieve a dynamic balance, the efficient and stable operation of the system can be guaranteed. In this paper, we first propose a task-resource joint management model, which quantifies the dynamic transformation relationship between the resources consumed by task requests and the resources occupied by the system in dispersed computing. Secondly, to avoid downtime caused by resource overload, we introduce intelligent control into the task-resource joint management model. The existence and stability of a positive periodic solution of the model can be obtained by theoretical analysis, which means that the stable operation of dispersed computing can be guaranteed through the intelligent feedback control strategy. Additionally, to improve system utilization, the task-resource joint management model with bi-directional intelligent control is further explored. Setting control thresholds for the two resources not only restrains system resource overload but also applies positive incentive control when a large number of idle resources appear. The existence and stability of the positive periodic solution of this model are proved theoretically; that is, the model effectively avoids the two extreme cases and ensures the efficient and stable operation of the system. Finally, numerical simulation verifies the correctness and validity of the theoretical results.
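The bi-directional threshold idea, throttle task admission when occupancy is too high and encourage it when too low, can be illustrated with a toy discrete-time simulation. All numbers (thresholds, arrival and service rates) are invented, and the model is far simpler than the paper's differential-equation analysis:

```python
def run_system(steps=200, upper=80.0, lower=20.0):
    """Toy resource-occupancy model with bi-directional threshold control."""
    load = 50.0
    trace = []
    for t in range(steps):
        arrivals = 12.0 if t % 2 == 0 else 4.0  # bursty task arrivals
        service = 7.0                           # steady task completions
        if load > upper:
            arrivals *= 0.5   # reverse restraint: admit fewer tasks
        elif load < lower:
            arrivals *= 1.5   # positive incentive: attract more tasks
        load = max(0.0, load + arrivals - service)
        trace.append(load)
    return trace
```

Without the control branch, the average drift (+1 per step) would grow without bound; with it, the trajectory settles into a bounded oscillation around the upper threshold, a discrete analogue of the stable positive periodic solution the paper proves.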
Abstract: This article consists of two parts. Part I shows the possibility of quantum/soft computing optimizers of knowledge bases (QSCOptKB™) as a toolkit for implementing quantum deep machine learning technology in the search for solutions to intelligent cognitive control tasks, applying a cognitive helmet as a neurointerface. In this particular case, the aim of this part is to demonstrate the possibility of classifying the mental states of a human operator online, with knowledge extracted from electroencephalograms based on the SCOptKB™ and QCOptKB™ toolkits. The application of soft computing technologies to identify objective indicators of the psychophysiological state of an examined person is described. The role and necessity of developing intelligent information technologies based on computational intelligence toolkits for the objective estimation of the general psychophysical state of a human operator are shown. The developed information technology is examined with special examples, difficult in diagnostic practice, of emotion state estimation for children with autism spectrum disorder (ASD) and patients with dementia, as background for the design of knowledge bases for service robots. The application of cognitive intelligent control in the navigation of an autonomous robot for obstacle avoidance is demonstrated.
Abstract: Redundant robotic arm models are discussed as a control object. The background of computational intelligence IT based on a soft computing optimizer of knowledge bases in smart robotic manipulators is introduced. The soft computing optimizer is a sophisticated computational intelligence toolkit: a deep machine learning software platform with an optimal fuzzy neural network structure. The methods for the development and design of control systems based on soft computing introduced in this Part I allow one to design optimal intelligent control systems with maximum reliability and controllability of a complex control object under conditions of uncertainty in the source data and in the presence of stochastic noises of various physical and statistical characters. The knowledge bases formed with the soft computing optimizer produce robust control laws for the schedule of time-dependent coefficient gains of conventional PID controllers over a wide range of external perturbations, and are maximally insensitive to random variations of the structure of the control object. The robustness is achieved by applying a vector fitness function for the genetic algorithm, one component of which describes the physical principle of minimum production of generalized entropy both in the control object and in the control system, while the other components describe conventional control objective functionals such as minimum control error. The application of soft computing technologies (Part I) to the development of a robust intelligent control system solving the problem of precision positioning of redundant (3-DOF and 7-DOF) manipulators is considered. The application of quantum soft computing to robust intelligent control of smart manipulators is described in Part II.
Abstract: In parametric cost estimating, objections to using statistical Cost Estimating Relationships (CERs) and parametric models include low statistical significance due to limited data points, biases in the underlying data, and lack of robustness. Soft Computing (SC) technologies are used for building intelligent cost models. The SC models are systematically evaluated based on their training and prediction of historical cost data for airborne avionics systems. Results indicating the strengths and weaknesses of each model are presented. In general, the intelligent cost models have higher prediction precision, better data adaptability, and stronger self-learning capability than regression CERs.
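A classic CER takes a power-law form, cost = a·x^b for a cost driver x such as weight, and is fitted by ordinary least squares in log space. The sketch below illustrates only this conventional regression baseline (the data are made up), not the paper's soft-computing models:

```python
import math

def fit_cer(drivers, costs):
    """Fit cost = a * x^b by linear least squares on log-transformed data."""
    lx = [math.log(x) for x in drivers]
    ly = [math.log(y) for y in costs]
    n = len(drivers)
    mx, my = sum(lx) / n, sum(ly) / n
    # Slope of the log-log regression line is the exponent b.
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)   # intercept back-transformed to the scale factor
    return a, b
```

With only a handful of historical data points, the fitted coefficients carry wide confidence intervals, which is exactly the low-significance objection the abstract raises against statistical CERs.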
Funding: Supported by the National Natural Science Foundation of China (61927802, 61722209, and 61805145), the Beijing Municipal Science and Technology Commission (Z181100003118014), the National Key Research and Development Program of China (2020AAA0130000), the National Postdoctoral Program for Innovative Talent, the Shuimu Tsinghua Scholar Program, and the Hong Kong Research Grants Council (16306220).
Abstract: The rapid development of artificial intelligence (AI) facilitates various applications across all areas but also poses great challenges for its hardware implementation in terms of speed and energy because of the explosive growth of data. Optical computing provides a distinctive perspective to address this bottleneck by harnessing the unique properties of photons, including broad bandwidth, low latency, and high energy efficiency. In this review, we introduce the latest developments of optical computing for different AI models, including feedforward neural networks, reservoir computing, and spiking neural networks (SNNs). Recent progress in integrated photonic devices, combined with the rise of AI, provides a great opportunity for the renaissance of optical computing in practical applications. This effort requires multidisciplinary contributions from a broad community. This review provides an overview of state-of-the-art accomplishments in recent years, discusses the availability of current technologies, and points out remaining challenges in different aspects to push the frontier. We anticipate that the era of large-scale integrated photonic processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.
Abstract: Our living environments are being gradually occupied with an abundant number of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to just act as primitive stand-alone appliances that only provide the facilities and services they are designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" where the system will be able to recognize the users and autonomously program the environment to be energy efficient and responsive to the user's needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper will show how computational intelligence (CI), an emerging domain of AI, could be employed and embedded in our living spaces to help such environments become more energy efficient, intelligent, adaptive, and convenient to the users.
Abstract: In the past decade, with the optimization and widespread use of high-performance computing devices and data storage devices, artificial intelligence technology driven by computing resources and vast amounts of data has been greatly improved. Artificial intelligence technology has also been applied to medical image analysis, where it has yielded advancements in early detection.
Funding: Supported by the Yunnan Key Laboratory of Optoelectronic Information Technology, grants from the National Natural Science Foundation of China (62062069, 62062070, and 62005235), and the Taif University Researchers Supporting Project (TURSP-2020/126), Taif University, Taif, Saudi Arabia. Jun Liu and Kai Qian contributed equally to this paper.
Abstract: In recent years, statistics have indicated that the number of patients with malignant brain tumors has increased sharply. However, most surgeons still perform surgical training using traditional autopsy and prosthesis models, which encounter many problems, such as insufficient corpse resources, low efficiency, and high cost. With the advent of the 5G era, a wide range of Industrial Internet of Things (IIoT) applications have been developed. Virtual Reality (VR) and Augmented Reality (AR) technologies that emerged with 5G are developing rapidly for intelligent medical training. To address the challenges encountered during neurosurgery training, and in combination with cloud computing, this paper develops a highly immersive AR-based remote collaborative virtual surgery training system for brain tumor neurosurgery, in which a VR simulator is embedded. The system enables real-time remote surgery training interaction through 5G transmission. Six experts and 18 novices were invited to participate in the experiment to verify the system. Subsequently, the two simulators were evaluated using face and construct validation methods. The results obtained by training the novices 50 times were further analyzed using the Learning Curve-Cumulative Sum (LC-CUSUM) evaluation method to validate the effectiveness of the two simulators. The results of the face and content validation demonstrated that the AR simulator in the system was superior to the VR simulator in terms of vision and scene authenticity and had a better effect on the improvement of surgical skills. Moreover, the surgical training scheme proposed in this paper is effective, and the remote collaborative training effect of the system is ideal.
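The LC-CUSUM test mentioned above accumulates a log-likelihood-ratio score over successive procedures and signals competency once the score crosses a decision limit h. A common formulation is sketched below; the failure-rate parameters and limit are illustrative, not the values used in this study:

```python
import math

def lc_cusum(outcomes, p0=0.30, p1=0.10, h=2.0):
    """outcomes: 1 = procedure failure, 0 = success.
    H0: trainee still inadequate (failure rate p0);
    H1: trainee adequate (failure rate p1).
    Returns the procedure index at which competency is signalled, else None."""
    w_fail = math.log(p1 / p0)              # failures push the score down
    w_succ = math.log((1 - p1) / (1 - p0))  # successes push it up
    s = 0.0
    for i, fail in enumerate(outcomes, 1):
        s = max(0.0, s + (w_fail if fail else w_succ))
        if s >= h:
            return i
    return None
```

Applied to each novice's 50 training trials, the earlier the signal index, the faster that simulator brings trainees to the target performance level.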
Abstract: In healthcare systems, Internet of Things (IoT) innovation and development have opened new ways to evaluate patient data. A cloud-based platform tends to process the data generated by IoT medical devices rather than relying on high-capacity storage and computational hardware. In this paper, an intelligent healthcare system is proposed for the prediction and severity analysis of lung disease from chest computer tomography (CT) images of patients with pneumonia, Covid-19, tuberculosis (TB), and cancer. Firstly, CT images are captured and transmitted to the fog node through IoT devices. In the fog node, each image is converted into a convenient and efficient format for further processing. The Advanced Encryption Standard (AES) algorithm serves a substantial role in the IoT and fog nodes in preventing data from being accessed by other operating systems. Finally, the preprocessed image is classified automatically in the cloud using various transfer and ensemble learning models. Herein, different pre-trained deep learning architectures (Inception-ResNet-v2, VGG-19, ResNet-50) are adopted via transfer learning for feature extraction. The softmax outputs of the heterogeneous base classifiers provide individual predictions. As a meta-classifier, the ensemble approach is employed to obtain the final optimal results. The disease-predicted image is passed to a recurrent neural network with long short-term memory (RNN-LSTM) for severity analysis, and the patient is directed to seek therapy based on the outcome. The proposed method achieved 98.6% accuracy, 0.978 precision, 0.982 recall, and 0.974 F1-score on the five-class classification. The experimental findings reveal that the proposed framework assists medical experts with lung disease screening and provides a valuable second opinion.
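The meta-classification step, combining the softmax outputs of heterogeneous base classifiers into a final prediction, can be sketched with simple probability averaging. The logits below are invented, and real stacking ensembles often train a learned meta-model rather than averaging:

```python
import math

def softmax(logits):
    # Numerically stable softmax over one classifier's class logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(per_model_logits):
    """Average the base classifiers' class probabilities, then take argmax."""
    probs = [softmax(logits) for logits in per_model_logits]
    n_classes = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_classes)]
    return avg.index(max(avg)), avg
```

Averaging calibrated probabilities lets a confident minority of models be outvoted only when the majority is collectively more confident, which is why soft voting usually beats hard majority voting with heterogeneous backbones.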
Funding: Supported by the National Science and Technology Council (NSTC) of Taiwan under Grants 108-2218-E-033-008-MY3, 110-2634-F-A49-005, and 111-2221-E-033-033, and the Veterans General Hospitals and University System of Taiwan Joint Research Program under Grant VGHUST111-G6-5-1.
Abstract: Federated learning is an emerging machine learning technique that enables clients to collaboratively train a deep learning model without uploading raw data to the aggregation server. Each client may be equipped with different computing resources for model training. A client with lower computing capability requires more time for model training, resulting in a prolonged training time in federated learning; moreover, it may fail to train the entire model because of an out-of-memory issue. This study aims to tackle these problems and proposes the federated feature concatenate (FedFC) method for federated learning with heterogeneous clients. FedFC leverages model splitting and feature concatenation to offload a portion of the training load from clients to the aggregation server. Each client in FedFC can collaboratively train a model with a different cutting layer. Therefore, the specific features learned in the deeper layers of the server-side model are more consistent for data class classification. Accordingly, FedFC can reduce the computational load on resource-constrained clients and accelerate convergence. The performance effectiveness is verified in experiments considering different dataset scenarios, such as data and class imbalance among the participant clients. The performance impacts of different cutting layers are evaluated during model training. The experimental results show that the co-adapted features have a critical impact on the adequate classification of the deep learning model. Overall, FedFC not only shortens the convergence time, but also improves the best accuracy by up to 5.9% and 14.5% compared with conventional federated learning and splitfed, respectively. In conclusion, the proposed approach is feasible and effective for heterogeneous clients in federated learning.
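The core FedFC mechanics, each client running its model only up to its own cut layer and the server reconciling the resulting features, might be sketched as below. The tiny ReLU layers and the zero-padding of shorter feature vectors are illustrative assumptions about how differing cut depths could be reconciled, not the paper's exact procedure:

```python
def client_forward(x, cut_layer, layers):
    """Run the client-side part of the model up to this client's cut layer."""
    h = x
    for weights in layers[:cut_layer]:
        # Each layer: dense weights followed by ReLU.
        h = [max(0.0, sum(w * v for w, v in zip(row, h))) for row in weights]
    return h

def pad_features(features, width):
    """Zero-pad feature vectors from clients with different cut layers so the
    server-side model can process them as one batch."""
    return [f + [0.0] * (width - len(f)) for f in features]

# Shared model: layer 0 maps 2 -> 3 units, layer 1 maps 3 -> 1 unit.
layers = [[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [[1.0, 1.0, 1.0]]]
shallow = client_forward([1.0, 2.0], cut_layer=1, layers=layers)  # weak device
deep = client_forward([1.0, 2.0], cut_layer=2, layers=layers)     # strong device
batch = pad_features([shallow, deep], width=3)
```

A resource-constrained client stops at an earlier cut layer and so computes less locally, while the server absorbs the remaining layers for everyone, which is the offloading effect the abstract describes.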