Journal Articles
8,409 articles found
1. Computing Power Network: A Survey
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. China Communications (SCIE, CSCD), 2024, No. 9, pp. 109-145 (37 pages).
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network connects ubiquitous and heterogeneous computing power resources through networking to enable flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues in computing power modeling, information awareness and announcement, resource allocation, network forwarding, computing power transaction platforms, and resource orchestration platforms is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
2. ATSSC: An Attack Tolerant System in Serverless Computing
Authors: Zhang Shuai, Guo Yunfei, Hu Hongchao, Liu Wenyan, Wang Yawen. China Communications (SCIE, CSCD), 2024, No. 6, pp. 192-205 (14 pages).
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by the events that drive them. Nonetheless, security threats in serverless computing, such as vulnerability-based attacks, have become the pain point hindering its wide adoption. Proactive defense ideas such as redundancy, diversity, and dynamism provide promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a “stacked” mode, as they were designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve full life-cycle protection for serverless applications at limited cost. In this paper, we present ATSSC, a proactive-defense-enabled attack tolerant serverless platform. ATSSC seamlessly integrates redundancy, diversity, and dynamism into serverless computing to achieve high security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the driving events and performs cross-validation to verify the results. To create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks at acceptable cost.
Keywords: active defense; attack tolerant; cloud computing; security; serverless computing
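The cross-validation step ATSSC performs over diverse function replicas can be illustrated with a minimal sketch. This is a hedged toy version: the majority-vote rule and the `cross_validate` helper below are assumptions for illustration, not the paper's actual verification algorithm.

```python
from collections import Counter

def cross_validate(results):
    """Majority-vote over the outputs of diverse function replicas.

    Returns the agreed result, or None when no strict majority exists
    (a hypothetical policy; ATSSC's real rule may differ).
    """
    if not results:
        return None
    winner, count = Counter(results).most_common(1)[0]
    return winner if count > len(results) / 2 else None

# Three diverse replicas process the same event; one is compromised.
assert cross_validate(["42", "42", "tampered"]) == "42"
# No majority -> the platform cannot certify a result.
assert cross_validate(["a", "b", "c"]) is None
```

With an odd number of replicas, a single compromised replica cannot sway the vote, which is the intuition behind combining redundancy with diversity.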
3. Complementary memtransistors for neuromorphic computing: How, what and why
Authors: Qi Chen, Yue Zhou, Weiwei Xiong, Zirui Chen, Yasai Wang, Xiangshui Miao, Yuhui He. Journal of Semiconductors (EI, CAS, CSCD), 2024, No. 6, pp. 64-80 (17 pages).
Memtransistors, in which the source-drain channel conductance can be nonvolatilely manipulated through gate signals, have emerged as promising components for implementing neuromorphic computing. On the other hand, complementary metal-oxide-semiconductor (CMOS) field effect transistors have played the fundamental role in modern integrated circuit technology. Will complementary memtransistors (CMT), then, play such a role in future neuromorphic circuits and chips? In this review, various types of materials and physical mechanisms for constructing CMT (how) are inspected, with their merits and yet-to-be-addressed challenges discussed. Then the unique properties (what) and potential applications of CMT in different learning algorithms/scenarios of spiking neural networks (why) are reviewed, including the supervised rule, the reinforcement one, dynamic vision with in-sensor computing, etc. By exploiting novel functions related to the complementary structure, significant reductions in hardware consumption, improvements in energy efficiency, and other advantages have been gained, illustrating the alluring prospect of design-technology co-optimization (DTCO) of CMT towards neuromorphic computing.
Keywords: complementary memtransistor; neuromorphic computing; reward-modulated spike timing-dependent plasticity; remote supervise method; in-sensor computing
4. Hybrid Approach for Cost Efficient Application Placement in Fog-Cloud Computing Environments
Authors: Abdulelah Alwabel, Chinmaya Kumar Swain. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4127-4148 (22 pages).
Fog computing has recently developed as a new paradigm that aims to serve time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, the majority of fog nodes in this environment are geographically scattered, with resources that are limited compared to cloud nodes, making the application placement problem more complex than in cloud computing. A cost-efficient approach to application placement in fog-cloud computing environments combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs. Such an approach is particularly relevant in scenarios where latency, resource constraints, and cost considerations are crucial factors for application deployment. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, is designed to place the application modules with respect to the application deadline and deploy them appropriately to fog or cloud nodes to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach against other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
Keywords: placement mechanism; application module placement; fog computing; cloud computing; genetic algorithm; flamingo search algorithm
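The genetic-algorithm half of GA-FSA can be sketched for intuition. Everything below is illustrative: the module costs, the binary fog/cloud encoding, and the crossover/mutation parameters are assumptions, and the coupling with the Flamingo Search Algorithm is omitted entirely.

```python
import random
random.seed(7)

# Toy GA for module placement: assign each of M modules to a node
# (0 = fog, 1 = cloud); fitness is a hypothetical cost mixing
# computation and communication terms.
M, POP, GENS = 6, 20, 40
comp_cost = [[1, 4], [2, 3], [1, 5], [3, 2], [2, 2], [1, 6]]  # per module, per node
comm_cost = [0, 2]  # extra latency cost when placed in the cloud

def cost(plan):
    return sum(comp_cost[m][n] + comm_cost[n] for m, n in enumerate(plan))

def evolve():
    pop = [[random.randint(0, 1) for _ in range(M)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=cost)
        survivors = pop[:POP // 2]           # elitism: keep the cheapest plans
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, M)
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.2:        # mutation: flip one placement
                i = random.randrange(M)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
assert cost(best) <= cost([1] * M)  # never worse than placing everything in the cloud
```

Because the elitist survivors are carried over unchanged, the best plan found can only improve across generations; FSA would be hybridized on top of this loop to diversify the search.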
5. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 1-23 (23 pages).
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional computing hardware can barely supply. In the post-Moore era, the increase in computing power brought about by shrinking CMOS feature sizes in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand of AI computing. To address the issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms with far greater parallelism and energy efficiency. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture such as a spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
6. Task Offloading in Edge Computing Using GNNs and DQN
Authors: Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez, Miguel Angel Anton. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2649-2671 (23 pages).
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they lack the available memory and processing capacity to support such computations. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as edge data centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, it is more appropriate to offload various tasks to specific destinations. At the same time, establishing an optimal offloading policy that ensures all tasks are executed within the required latency while avoiding excessive workload on specific computing centers is not easy. This study presents two alternatives for solving the offloading decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives on a well-known edge computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution. In terms of energy efficiency, they provided similar results. Finally, the success rates of different computing centers are tested, and the inability of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local networking environment are unique, as they innovatively emulate the state and structure of the environment, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques is demonstrated by the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Keywords: edge computing; edge offloading; fog computing; task offloading
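The DQN side of the offloading decision reduces, in its simplest tabular form, to Q-learning over (device state, offload action) pairs. The sketch below is a toy stand-in with made-up latencies and a two-state environment; the paper's DQN replaces the table with a neural network, and the GNN supplies richer state features.

```python
import random
random.seed(0)

# One-step tabular Q-learning for a binary offload decision
# (action 0 = compute locally, 1 = offload to edge); reward = -latency.
# States and latencies are hypothetical: state 1 means the device is busy.
alpha, gamma, eps = 0.3, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(2) for a in range(2)}
latency = {(0, 0): 1.0, (0, 1): 2.0,   # idle device: local is cheap
           (1, 0): 5.0, (1, 1): 2.0}   # busy device: offloading wins

state = 0
for _ in range(2000):
    if random.random() < eps:                      # epsilon-greedy exploration
        a = random.choice([0, 1])
    else:
        a = max((0, 1), key=lambda x: Q[(state, x)])
    r = -latency[(state, a)]
    nxt = random.choice([0, 1])                    # environment transitions randomly
    Q[(state, a)] += alpha * (r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(state, a)])
    state = nxt

assert Q[(1, 1)] > Q[(1, 0)]   # learned: offload when the device is busy
assert Q[(0, 0)] > Q[(0, 1)]   # learned: compute locally when idle
```

The learned policy depends only on relative latencies, which is why RL suits a dynamic network where those latencies must be discovered online.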
7. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. China Communications (SCIE, CSCD), 2024, No. 3, pp. 230-246 (17 pages).
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, consisting of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
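The UCB component of the proposed scheme can be illustrated with the classic UCB1 rule, treating each offloading target as a bandit arm whose delay distribution is unknown. The delay values and the `ucb1` helper below are illustrative assumptions; device cooperation and the Lagrange-multiplier resource allocation step are not modeled.

```python
import math
import random
random.seed(3)

def ucb1(mean_delays, rounds=5000):
    """UCB1 over offloading targets; reward is negative observed delay."""
    n_arms = len(mean_delays)
    counts = [0] * n_arms
    totals = [0.0] * n_arms          # accumulated delay per arm
    for t in range(1, rounds + 1):
        if t <= n_arms:
            arm = t - 1              # play each arm once first
        else:
            arm = max(range(n_arms), key=lambda a:
                      -totals[a] / counts[a]                      # mean reward
                      + math.sqrt(2 * math.log(t) / counts[a]))   # exploration bonus
        delay = random.gauss(mean_delays[arm], 0.1)
        counts[arm] += 1
        totals[arm] += delay
    return counts

counts = ucb1([1.0, 0.5, 1.5])   # target at index 1 has the lowest mean delay
assert counts[1] == max(counts)  # the best target is selected most often
```

The logarithmic exploration bonus is what lets the device keep probing targets whose delay statistics are uncertain without committing to a bad one.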
8. Developments of Computing in Papua New Guinea in the Post-Independence Era
Authors: Zhaohao Sun, Xuehui Wei, Francisca Pambel. Journal of Computer and Communications, 2024, No. 8, pp. 141-160 (20 pages).
This article looks at the developments of computing in Papua New Guinea (PNG) in the post-independence era. More specifically, it examines the development of national policies on Information and Communications Technology (ICT), digital technologies in PNG, and the development of computing education in PNG since 1975. The research findings reveal that PNG has made solid progress in computing, ICT, national ICT policies, digital technologies, and computing education at universities in the post-independence era. The approach proposed in this article might facilitate the research and development of computing, ICT, digital technologies, and big data analytics in PNG and beyond.
Keywords: computing; digital technologies; ICT; computing education; Papua New Guinea
9. Recent Advances in In-Memory Computing: Exploring Memristor and Memtransistor Arrays with 2D Materials (Cited by 1)
Authors: Hangbo Zhou, Sifan Li, Kah-Wee Ang, Yong-Wei Zhang. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 1-30 (30 pages).
The conventional computing architecture faces substantial challenges, including high latency and energy consumption between memory and processing units. In response, in-memory computing has emerged as a promising alternative architecture, enabling computing operations within memory arrays to overcome these limitations. Memristive devices have gained significant attention as key components for in-memory computing due to their high-density arrays, rapid response times, and ability to emulate biological synapses. Among these devices, two-dimensional (2D) material-based memristor and memtransistor arrays have emerged as particularly promising candidates for next-generation in-memory computing, thanks to their exceptional performance driven by the unique properties of 2D materials, such as layered structures, mechanical flexibility, and the capability to form heterojunctions. This review delves into the state-of-the-art research on 2D material-based memristive arrays, encompassing critical aspects such as material selection, device performance metrics, array structures, and potential applications. Furthermore, it provides a comprehensive overview of the current challenges and limitations associated with these arrays, along with potential solutions. The primary objective of this review is to serve as a significant milestone in realizing next-generation in-memory computing utilizing 2D materials and to bridge the gap from single-device characterization to array-level and system-level implementations of neuromorphic computing, leveraging the potential of 2D material-based memristive devices.
Keywords: 2D materials; memristors; memtransistors; crossbar array; in-memory computing
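The core operation these memristive crossbar arrays accelerate is an analog matrix-vector multiply: applying row voltages V to a crossbar with conductances G yields column currents I_j = Σ_i G_ij · V_i by Ohm's and Kirchhoff's laws. A minimal numerical sketch, with illustrative conductance values:

```python
def crossbar_mvm(G, V):
    """Ideal crossbar matrix-vector multiply.

    G: rows of conductances in siemens; V: row voltages in volts.
    Returns the column currents in amperes (parasitics ignored).
    """
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(G))) for j in range(cols)]

G = [[1e-3, 2e-3],
     [3e-3, 4e-3]]
I = crossbar_mvm(G, [1.0, 0.5])
# I[0] = 1e-3*1.0 + 3e-3*0.5 = 2.5e-3; I[1] = 2e-3*1.0 + 4e-3*0.5 = 4e-3
assert abs(I[0] - 2.5e-3) < 1e-12 and abs(I[1] - 4e-3) < 1e-12
```

Because the multiply happens in the array itself, no weight data moves between memory and a processor, which is where the latency and energy savings come from.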
10. Multiframe-integrated, in-sensor computing using persistent photoconductivity (Cited by 1)
Authors: Xiaoyong Jiang, Minrui Ye, Yunhai Li, Xiao Fu, Tangxin Li, Qixiao Zhao, Jinjin Wang, Tao Zhang, Jinshui Miao, Zengguang Cheng. Journal of Semiconductors (EI, CAS, CSCD), 2024, No. 9, pp. 36-41 (6 pages).
The utilization of processing capabilities within the detector holds significant promise in addressing energy consumption and latency challenges, especially in dynamic motion recognition tasks, where substantial data transfers are necessitated by the generation of extensive information and the need for frame-by-frame analysis. Herein, we present a novel approach for dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multiframe integration using photodetectors. Our approach introduces a retinomorphic MoS_(2) photodetector device for motion detection and analysis. The device enables the generation of informative final states that nonlinearly embed both past and present frames. Subsequent multiply-accumulate (MAC) calculations are then efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved an impressive recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also facilitates energy-efficient in-sensor computing.
Keywords: in-sensor; MoS_(2); photodetector; persistent photoconductivity; reservoir computing
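The multiframe-integration idea can be mimicked with a toy state equation: each frame's light input adds to a slowly decaying conductance, so the final state nonlinearly encodes the frame history. The decay constant and tanh readout below are assumptions for illustration, not the device physics.

```python
import math

def integrate_frames(frames, decay=0.8):
    """Toy persistent-photoconductivity pixel.

    Each frame's light intensity adds to a decaying internal state; a
    saturating readout returns the final state that a MAC classifier
    would consume. decay and tanh are illustrative choices.
    """
    state = 0.0
    for light in frames:
        state = decay * state + light
    return math.tanh(state)

# Equal total illumination, different timing: the pixel distinguishes
# "lit early" from "lit late", so temporal order is encoded in one value.
early = integrate_frames([1.0, 0.0, 0.0])   # tanh(0.64)
late = integrate_frames([0.0, 0.0, 1.0])    # tanh(1.0)
assert early != late
assert late > early   # recent light decays less, leaving a stronger state
```

This is why the final state alone can feed a classifier for direction and motion tasks without frame-by-frame readout.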
11. IRS Assisted UAV Communications against Proactive Eavesdropping in Mobile Edge Computing Networks (Cited by 1)
Authors: Ying Zhang, Weiming Niu, Leibing Yan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 1, pp. 885-902 (18 pages).
In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain closed-form solutions for the boundary conditions of the two modes. Then we transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes obtain better performance than traditional schemes.
Keywords: mobile edge computing (MEC); unmanned aerial vehicle (UAV); intelligent reflecting surface (IRS); zero forcing (ZF)
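The zero-forcing building block behind the sub-optimal scheme is standard: for a channel matrix H (users × antennas), the precoder W = Hᴴ(HHᴴ)⁻¹ nulls inter-user interference, so HW is the identity up to numerical error. A sketch with an illustrative random channel; the paper's joint UAV-trajectory and IRS phase design is far richer than this single step:

```python
import numpy as np
rng = np.random.default_rng(1)

K, M = 3, 4   # users, transmit antennas (illustrative sizes)

# Rayleigh-fading channel: rows are per-user channel vectors.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

effective = H @ W            # effective channel after precoding
assert np.allclose(effective, np.eye(K), atol=1e-10)
```

Each user then sees only its own stream, at the cost of some transmit power inflation when H is poorly conditioned, which is why ZF is a sub-optimal but cheap alternative to the AO design.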
12. Delayed-measurement one-way quantum computing on cloud quantum computer
Authors: Zhi-Peng Yang, Yu-Ran Zhang, Fu-Li Li, Heng Fan. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 125-131 (7 pages).
One-way quantum computation focuses on initially generating an entangled cluster state followed by a sequence of measurements with classical communication of their individual outcomes. Recently, a delayed-measurement approach has been applied to replace classical communication of individual measurement outcomes. In this work, using the delayed-measurement approach, we demonstrate a modified one-way CNOT gate on the cloud superconducting quantum computing platform Quafu. The modified protocol for one-way quantum computing requires only three qubits rather than the four used in the standard protocol. Since this modified cluster state decreases the number of physical qubits required to implement one-way computation, both the scalability and the complexity of the computing process are improved. Compared to previous work, this modified one-way CNOT gate is superior to the standard one in both fidelity and resource requirements. We have also numerically compared the behavior of the standard and modified methods in large-scale one-way quantum computing. Our results suggest that in the noisy intermediate-scale quantum (NISQ) era, the modified method shows a significant advantage for one-way quantum computation.
Keywords: measurement-based quantum computing; quantum entanglement; quantum gates
13. Exploring reservoir computing: Implementation via double stochastic nanowire networks
Authors: 唐健峰, 夏磊, 李广隶, 付军, 段书凯, 王丽丹. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 572-582 (11 pages).
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors in response to pulse voltages, allowing time series prediction analysis. Our experiment uses a double stochastic nanowire network architecture to process multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time series datasets, making neuromorphic nanowire networks promising for the physical implementation of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time series prediction
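The reservoir computing scheme itself is easy to sketch: a fixed random recurrent network maps an input series to rich internal states, and only a linear readout is trained. The echo-state toy below replaces the paper's double stochastic nanowire dynamics with a random tanh reservoir; the sizes, spectral radius, and sine task are all illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(0)

N = 50                                            # reservoir size (illustrative)
Win = rng.uniform(-0.5, 0.5, (N, 1))              # fixed random input weights
W = rng.uniform(-0.5, 0.5, (N, N))                # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo state)

u = np.sin(0.2 * np.arange(400))                  # toy input time series
states = np.zeros((len(u), N))
x = np.zeros(N)
for t, ut in enumerate(u):
    x = np.tanh(Win[:, 0] * ut + W @ x)           # reservoir update (no training here)
    states[t] = x

# Train only the linear readout to predict u[t+1] from state[t] (drop warm-up).
X, y = states[50:-1], u[51:]
w = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
assert rmse < 0.05   # one-step-ahead prediction should be nearly exact
```

Only `w` is learned; the random reservoir plays the role the physical nanowire network plays in the paper, which is what makes the approach attractive for hardware.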
14. Performance Comparison of Hyper-V and KVM for Cryptographic Tasks in Cloud Computing
Authors: Nader Abdel Karim, Osama A. Khashan, Waleed K. Abdulraheem, Moutaz Alazab, Hasan Kanaker, Mahmoud E. Farfoura, Mohammad Alshinwan. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2023-2045 (23 pages).
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) when implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus on how various hypervisors perform while handling cryptographic workloads.
Keywords: cloud computing; performance; virtualization; hypervisors; Hyper-V; KVM; cryptographic algorithm
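The measurement methodology, timing a cryptographic workload over several payload sizes and averaging repeated runs, can be sketched as a small harness. SHA-256 stands in for the study's ciphers to keep the sketch dependency-free; on a real hypervisor comparison you would swap in the actual encrypt/decrypt calls and record CPU use as well.

```python
import hashlib
import time

def bench(workload, payload, repeats=5):
    """Average wall-clock time of workload(payload) over several repeats."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload(payload)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

results = {}
for size in (1_000, 100_000):   # illustrative "file sizes" in bytes
    results[size] = bench(lambda data: hashlib.sha256(data).digest(),
                          b"x" * size)

assert all(t >= 0.0 for t in results.values())
```

Running the same harness inside guests on each hypervisor, with identical payloads and repeat counts, is what makes the per-hypervisor averages comparable.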
15. Fabrication and integration of photonic devices for phase-change memory and neuromorphic computing
Authors: Wen Zhou, Xueyang Shen, Xiaolong Yang, Jiangjing Wang, Wei Zhang. International Journal of Extreme Manufacturing (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 2-27 (26 pages).
In the past decade, there has been tremendous progress in integrating chalcogenide phase-change materials (PCMs) on the silicon photonic platform, from non-volatile memory to neuromorphic in-memory computing applications. In particular, these non-von Neumann computational elements and systems benefit from the mass manufacturing of silicon photonic integrated circuits (PICs) on 8-inch wafers using a 130 nm complementary metal-oxide semiconductor line. Chip manufacturing based on deep-ultraviolet lithography and electron-beam lithography enables rapid prototyping of PICs, which can be integrated with high-quality PCMs using wafer-scale sputtering as a back-end-of-line process. In this article, we present an overview of recent advances in waveguide-integrated PCM memory cells, functional devices, and neuromorphic systems, with an emphasis on the fabrication and integration processes used to attain state-of-the-art device performance. After a short overview of PCM-based photonic devices, we discuss the material properties of the functional layer as well as progress on the light-guiding layer, namely, the silicon and germanium waveguide platforms. Next, we discuss the cleanroom fabrication flow of waveguide devices integrated with thin films and nanowires, silicon waveguides, and plasmonic microheaters for the electrothermal switching of PCMs and mixed-mode operation. Finally, the fabrication of photonic and photonic-electronic neuromorphic computing systems is reviewed. These systems consist of arrays of PCM memory elements for associative learning, matrix-vector multiplication, and pattern recognition. With large-scale integration, the neuromorphic photonic computing paradigm holds the promise to outperform digital electronic accelerators by taking advantage of ultra-high bandwidth, high speed, and energy-efficient operation in running machine learning algorithms.
Keywords: nanofabrication; silicon photonics; phase-change materials; non-volatile photonic memory; neuromorphic photonic computing
16. For Mega-Constellations: Edge Computing and Safety Management Based on Blockchain Technology
Authors: Zhen Zhang, Bing Guo, Chengjie Li. China Communications (SCIE, CSCD), 2024, No. 2, pp. 59-73 (15 pages).
In mega-constellation communication systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication. While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.
Keywords: blockchain; consensus mechanism; edge computing; mega-constellation; reputation management
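A node-reputation mechanism of the kind described can be sketched as an exponential moving average over task outcomes. The update rule, the efficiency weighting, and the selection threshold below are assumptions for illustration, not the paper's exact formula.

```python
def update_reputation(rep, success, efficiency, alpha=0.2):
    """EMA reputation update: failed tasks pull reputation toward 0,
    successful tasks pull it toward the node's reported efficiency.
    alpha and the weighting are hypothetical parameters."""
    outcome = efficiency if success else 0.0
    return (1 - alpha) * rep + alpha * outcome

rep = 0.5                                  # neutral starting reputation
for ok in [True, True, True, False, True]: # one failure among five tasks
    rep = update_reputation(rep, ok, efficiency=0.9)

assert 0.0 < rep < 1.0
eligible = rep > 0.6     # only high-reputation nodes are selected for tasks
assert eligible
```

The moving average keeps the score responsive to recent behavior while a single failure cannot immediately disqualify an otherwise reliable satellite node.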
17. Redundant Data Detection and Deletion to Meet Privacy Protection Requirements in Blockchain-Based Edge Computing Environment
Authors: Zhang Lejun, Peng Minghui, Su Shen, Wang Weizheng, Jin Zilong, Su Yansen, Chen Huiling, Guo Ran, Sergey Gataullin. China Communications (SCIE, CSCD), 2024, No. 3, pp. 149-159 (11 pages).
With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message in a complete piece of data stored by an edge node meets the requirements for hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements for hot data. It has the same effect as a zero-knowledge proof and does not reveal users' privacy. In addition, for redundant sub-data that do not meet the hot data requirements, our paper proposes a redundant data deletion scheme that preserves dynamic data integrity. We use Content Extraction Signatures (CES) to generate the signature of the remaining hot data after the redundant data are deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
Keywords: blockchain; data integrity; edge computing; privacy protection; redundant data
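The hot-data criterion described in the abstract above can be illustrated with a minimal sketch. This is not the paper's scheme: the actual method scans ciphertext with a zero-knowledge-like property, which this plaintext toy does not attempt, and all names (`sub_messages`, `access_counts`, `HOT_THRESHOLD`) are hypothetical.

```python
# Illustrative sketch only: partition a data item's sub-messages into
# "hot" (kept on the edge node) and "redundant" (eligible for deletion)
# by comparing each sub-message's access count against a threshold.

HOT_THRESHOLD = 5  # hypothetical minimum access count for a sub-message to stay

def prune_cold_submessages(sub_messages, access_counts, threshold=HOT_THRESHOLD):
    """Return (hot, redundant) partitions of the sub-messages."""
    hot, redundant = [], []
    for msg, count in zip(sub_messages, access_counts):
        (hot if count >= threshold else redundant).append(msg)
    return hot, redundant

if __name__ == "__main__":
    subs = ["heart-rate", "blood-oxygen", "step-count", "calibration-log"]
    counts = [42, 17, 3, 0]
    hot, cold = prune_cold_submessages(subs, counts)
    print(hot)   # frequently accessed sub-messages kept on the edge node
    print(cold)  # redundant sub-messages eligible for deletion
```

In the paper's setting the remaining hot sub-messages would then be re-signed with CES so that deletion does not break the data's integrity guarantee.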
Application of artificial synapse based on all-inorganic perovskite memristor in neuromorphic computing
18
Authors: Fang Luo, Wen-Min Zhong, Xin-Gui Tang, Jia-Ying Chen, Yan-Ping Jiang, Qiu-Xiang Liu. Nano Materials Science, EI, CAS, CSCD, 2024, Issue 1, pp. 68-76 (9 pages).
Artificial synapses inspired by the biological brain have great potential in neuromorphic computing and artificial intelligence. The memristor is an ideal artificial synaptic device with fast operation and good tolerance. Here, we have prepared a memristor device with an Au/CsPbBr_(3)/ITO structure. The device exhibits resistive switching behavior, and its high and low resistance states show no obvious decline after 400 switching cycles. Stimulating the device with voltage pulses simulates biological synaptic plasticity, including long-term potentiation, long-term depression, paired-pulse facilitation, short-term depression, and short-term potentiation. The transformation from short-term memory to long-term memory is achieved by changing the stimulation frequency. In addition, a convolutional neural network was constructed to train on and recognize the MNIST handwritten digit data set; a recognition accuracy of ~96.7% was obtained within 100 epochs, more accurate than other memristor-based neural networks. These results show that the CsPbBr_(3)-based memristor device has immense potential in neuromorphic computing systems.
Keywords: memristor; CsPbBr_(3); resistive switching; artificial synapse; neuromorphic computing
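The long-term potentiation/depression behavior summarized in the abstract above can be sketched with a simple phenomenological conductance model. This is an illustrative assumption, not the measured physics of the Au/CsPbBr_(3)/ITO device; the update rule, `alpha`, and the conductance bounds are all hypothetical.

```python
# Toy model of memristor synaptic plasticity: potentiating pulses push the
# conductance toward g_max (LTP), depressing pulses push it toward g_min
# (LTD), each step moving a fixed fraction of the remaining distance.

def apply_pulses(g, n_pulses, polarity, g_min=0.0, g_max=1.0, alpha=0.2):
    """Apply a train of voltage pulses; polarity +1 = LTP, -1 = LTD."""
    for _ in range(n_pulses):
        if polarity > 0:
            g += alpha * (g_max - g)   # long-term potentiation step
        else:
            g -= alpha * (g - g_min)   # long-term depression step
    return g

if __name__ == "__main__":
    g = 0.1
    g = apply_pulses(g, 10, +1)   # potentiation train raises conductance
    print(g)
    g = apply_pulses(g, 10, -1)   # depression train lowers it again
    print(g)
```

In a memristor-based network, such a conductance state would serve as the synaptic weight that the training pulses adjust.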
Applications of Soft Computing Methods in Backbreak Assessment in Surface Mines: A Comprehensive Review
19
Authors: Mojtaba Yari, Manoj Khandelwal, Payam Abbasi, Evangelos I. Koutras, Danial Jahed Armaghani, Panagiotis G. Asteris. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, Issue 9, pp. 2207-2238 (32 pages).
Geo-engineering problems are known for their complexity and high levels of uncertainty, requiring precise definitions, past experience, logical reasoning, mathematical analysis, and practical insight to address them effectively. Soft Computing (SC) methods have gained popularity in engineering disciplines such as mining and civil engineering thanks to advances in computer hardware and machine learning. Unlike traditional hard-computing approaches, SC models use soft values and fuzzy sets to navigate uncertain environments. This study focuses on the application of SC methods to predict backbreak, a common issue in blasting operations within mining and civil projects. Backbreak, the unintended fracturing of rock beyond the desired blast perimeter, can significantly impact project timelines and costs. The study explores how SC methods can be employed to anticipate and mitigate the undesirable consequences of blasting operations, with a specific focus on backbreak prediction, and highlights the potential benefits of SC methods for this challenging issue in geo-engineering projects.
Keywords: backbreak; blasting; soft computing methods; prediction; theory-guided machine learning
Joint Optimization of Energy Consumption and Network Latency in Blockchain-Enabled Fog Computing Networks
20
Authors: Huang Xiaoge, Yin Hongbo, Cao Bin, Wang Yongsheng, Chen Qianbin, Zhang Jie. China Communications, SCIE, CSCD, 2024, Issue 4, pp. 104-119 (16 pages).
Fog computing is considered a solution to accommodate the booming requirements of a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, this paper introduces a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FNs), which is used to select blockchain nodes (BNs) from the FNs to participate in the consensus process. With the Rivest-Shamir-Adleman (RSA) encryption algorithm applied to the blockchain system, FNs can verify a node's identity through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithm and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithms efficiently reduce network overhead and achieve a considerable performance improvement over related algorithms in the previous literature.
Keywords: blockchain; energy consumption; fog computing network; Internet of Things; latency
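The reputation-based BN selection described in the abstract above can be sketched as follows. The update rule, the `gain`/`penalty` factors, and the committee `cutoff` are hypothetical assumptions for illustration; the paper's actual reputation model and R-DBFT thresholds are not specified here.

```python
# Illustrative sketch: fog nodes (FNs) gain reputation after honest
# consensus behavior and lose it after detected faults; only FNs above
# a cutoff are selected as blockchain nodes (BNs) for the committee.

def update_reputation(rep, success, gain=0.1, penalty=0.3):
    """Raise reputation toward 1.0 on honest behavior, cut it on faults."""
    rep = rep + gain * (1.0 - rep) if success else rep * (1.0 - penalty)
    return max(0.0, min(1.0, rep))  # keep reputation in [0, 1]

def select_blockchain_nodes(reputations, cutoff=0.6):
    """Only FNs whose reputation meets the cutoff join consensus as BNs."""
    return [node for node, rep in reputations.items() if rep >= cutoff]

if __name__ == "__main__":
    reps = {"fn1": 0.9, "fn2": 0.5, "fn3": 0.7}
    reps["fn2"] = update_reputation(reps["fn2"], success=True)   # rewarded
    reps["fn3"] = update_reputation(reps["fn3"], success=False)  # penalized
    print(select_blockchain_nodes(reps))
```

Shrinking the committee this way is what lets a DBFT-style protocol cut consensus messaging overhead while still excluding low-credibility nodes.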