Journal Articles
246,743 articles found
1. Recent Advances in In-Memory Computing: Exploring Memristor and Memtransistor Arrays with 2D Materials (Cited by: 1)
Authors: Hangbo Zhou, Sifan Li, Kah-Wee Ang, Yong-Wei Zhang. Nano-Micro Letters (SCIE EI CAS CSCD), 2024, Issue 7, pp. 1-30.
The conventional computing architecture faces substantial challenges, including high latency and energy consumption between memory and processing units. In response, in-memory computing has emerged as a promising alternative architecture, enabling computing operations within memory arrays to overcome these limitations. Memristive devices have gained significant attention as key components for in-memory computing due to their high-density arrays, rapid response times, and ability to emulate biological synapses. Among these devices, two-dimensional (2D) material-based memristor and memtransistor arrays have emerged as particularly promising candidates for next-generation in-memory computing, thanks to their exceptional performance driven by the unique properties of 2D materials, such as layered structures, mechanical flexibility, and the capability to form heterojunctions. This review delves into the state-of-the-art research on 2D material-based memristive arrays, encompassing critical aspects such as material selection, device performance metrics, array structures, and potential applications. Furthermore, it provides a comprehensive overview of the current challenges and limitations associated with these arrays, along with potential solutions. The primary objective of this review is to serve as a significant milestone in realizing next-generation in-memory computing utilizing 2D materials and bridge the gap from single-device characterization to array-level and system-level implementations of neuromorphic computing, leveraging the potential of 2D material-based memristive devices.
Keywords: 2D materials; memristors; memtransistors; crossbar array; in-memory computing
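Illustrative sketch (not from the paper): the core operation such memristive crossbars accelerate is an analog matrix-vector multiplication, with inputs applied as row voltages, weights stored as cross-point conductances, and outputs read as column currents (Ohm's law per cell, Kirchhoff's current law per column). The NumPy model below is an idealized abstraction; all names and values are illustrative.

```python
import numpy as np

def crossbar_mvm(G, v):
    """Idealized memristor crossbar: output column currents I = G^T * V.

    G : (rows, cols) conductance matrix programmed into the array (siemens)
    v : (rows,) input voltages applied to the word lines (volts)
    Returns the per-column currents summed along each bit line (amperes).
    """
    return G.T @ v  # Ohm's law per cell, Kirchhoff's current law per column

# Example: a 4x3 array computing one analog matrix-vector product in a single step
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductance states within a device window
v = np.array([0.2, 0.0, 0.1, 0.3])          # read voltages encoding the input vector
print(crossbar_mvm(G, v))
```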
2. In-memory computing to break the memory wall (Cited by: 1)
Authors: Xiaohe Huang, Chunsen Liu, Yu-Gang Jiang, Peng Zhou. Chinese Physics B (SCIE EI CAS CSCD), 2020, Issue 7, pp. 28-48.
Facing the computing demands of the Internet of Things (IoT) and artificial intelligence (AI), the cost induced by moving data between the central processing unit (CPU) and memory is the key problem, and a chip featuring flexible structural units, ultra-low power consumption, and huge parallelism will be needed. In-memory computing, a non-von Neumann architecture fusing memory units and computing units, can eliminate the data transfer time and energy consumption while performing massive parallel computations. Prototype in-memory computing schemes modified from different memory technologies have shown orders-of-magnitude improvement in computing efficiency, leading it to be regarded as the ultimate computing paradigm. Here we review the state-of-the-art memory device technologies with potential for in-memory computing, summarize their versatile applications in neural networks, stochastic generation, and hybrid-precision digital computing, with promising solutions for unprecedented computing tasks, and also discuss the challenges of stability and integration for general in-memory computing.
Keywords: in-memory computing; non-volatile memory device technologies; crossbar array
3. Flash-based in-memory computing for stochastic computing in image edge detection (Cited by: 1)
Authors: Zhaohui Sun, Yang Feng, Peng Guo, Zheng Dong, Junyu Zhang, Jing Liu, Xuepeng Zhan, Jixuan Wu, Jiezhi Chen. Journal of Semiconductors (EI CAS CSCD), 2023, Issue 5, pp. 145-149.
The "memory wall" of traditional von Neumann computing systems severely restricts the efficiency of data-intensive task execution, while in-memory computing (IMC) architecture is a promising approach to breaking the bottleneck. Although variations and instability in ultra-scaled memory cells seriously degrade the calculation accuracy in IMC architectures, stochastic computing (SC) can compensate for these shortcomings due to its low sensitivity to cell disturbances. Furthermore, massive parallel computing can be processed to improve the speed and efficiency of the system. In this paper, by designing logic functions in NOR flash arrays, SC in IMC for image edge detection is realized, demonstrating ultra-low computational complexity and power consumption (25.5 fJ/pixel at 2-bit sequence length). More impressively, the noise immunity is 6 times higher than that of the traditional binary method, showing good tolerance to cell variation and reliability degradation when implementing massive parallel computation in the array.
Keywords: in-memory computing; stochastic computing; NOR flash memory; image edge detection
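Illustrative sketch (not from the paper): stochastic computing encodes a value as the probability of 1s in a bitstream, so multiplication reduces to a bitwise AND, and a few flipped bits only slightly perturb the result, which is why SC tolerates cell variation. The stream length and seeds below are illustrative.

```python
import numpy as np

def to_bitstream(p, length, rng):
    """Encode a value p in [0, 1] as a random bitstream with P(bit = 1) = p."""
    return (rng.random(length) < p).astype(np.uint8)

def sc_multiply(p_a, p_b, length=1024, rng=None):
    """Unipolar stochastic multiplication: bitwise AND of two independent streams."""
    if rng is None:
        rng = np.random.default_rng(1)
    a = to_bitstream(p_a, length, rng)
    b = to_bitstream(p_b, length, rng)
    return np.mean(a & b)  # Monte Carlo estimate of p_a * p_b

print(sc_multiply(0.6, 0.5))   # ~0.30
# Flipping a handful of bits barely changes the estimate, which is the property
# exploited to tolerate cell disturbance when mapping SC onto the flash array.
```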
4. Realization of flexible in-memory computing in a van der Waals ferroelectric heterostructure tri-gate transistor (Cited by: 2)
Authors: Xinzhu Gao, Quan Chen, Qinggang Qin, Liang Li, Meizhuang Liu, Derek Hao, Junjie Li, Jingbo Li, Zhongchang Wang, Zuxin Chen. Nano Research (SCIE EI CSCD), 2024, Issue 3, pp. 1886-1892.
Combining the logical functions and memory characteristics of transistors is an ideal strategy for enhancing the computational efficiency of transistor devices. Here, we rationally design a tri-gate two-dimensional (2D) ferroelectric van der Waals heterostructure device based on copper indium thiophosphate (CuInP_(2)S_(6)) and few-layer tungsten disulfide (WS_(2)), and demonstrate its multi-functional applications in multi-valued data states, non-volatile storage, and logic operations. By co-regulating the input signals across the tri-gate, we show that the device can switch functions flexibly at a low supply voltage of 6 V, giving rise to an ultra-high current switching ratio of 10^(7) and a low subthreshold swing of 53.9 mV/dec. These findings offer perspectives for designing smart 2D devices with excellent functions based on ferroelectric van der Waals heterostructures.
Keywords: two-dimensional (2D) ferroelectric; heterostructure; tri-gate; polymorphic regulation; in-memory computing
5. Toward memristive in-memory computing: principles and applications (Cited by: 2)
Authors: Han Bao, Houji Zhou, Jiancong Li, Huaizhi Pei, Jing Tian, Ling Yang, Shengguang Ren, Shaoqin Tong, Yi Li, Yuhui He, Jia Chen, Yimao Cai, Huaqiang Wu, Qi Liu, Qing Wan, Xiangshui Miao. Frontiers of Optoelectronics (EI CSCD), 2022, Issue 2, pp. 101-125.
With the rapid growth of computer science and big data, the traditional von Neumann architecture suffers from aggravating data communication costs due to the separated structure of the processing units and memories. The memristive in-memory computing paradigm is considered a prominent candidate to address these issues, and plentiful applications have been demonstrated and verified. These applications can be broadly categorized into two major types: soft computing, which can tolerate uncertain and imprecise results, and hard computing, which emphasizes explicit and precise numerical results for each task, leading to different requirements on computational accuracy and the corresponding hardware solutions. In this review, we conduct a thorough survey of the recent advances in memristive in-memory computing applications, both of the soft computing type, which focuses on artificial neural networks and other machine learning algorithms, and of the hard computing type, which includes scientific computing and digital image processing. At the end of the review, we discuss the remaining challenges and future opportunities of memristive in-memory computing in the incoming Artificial Intelligence of Things era.
Keywords: memristor; in-memory computing; matrix-vector multiplication; machine learning; scientific computing; digital image processing
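Illustrative sketch (not from the review): a common way to run the matrix-vector multiplications discussed above on memristive hardware is to map signed weights onto a pair of non-negative conductance arrays (G+ and G-) and subtract the two column-current vectors. The conductance window and scaling below are assumed, idealized values.

```python
import numpy as np

def map_signed_weights(W, g_min=1e-6, g_max=1e-4):
    """Map a signed weight matrix onto two non-negative conductance arrays."""
    scale = (g_max - g_min) / np.max(np.abs(W))
    G_pos = g_min + scale * np.clip(W, 0, None)    # positive parts of W
    G_neg = g_min + scale * np.clip(-W, 0, None)   # magnitudes of negative parts
    return G_pos, G_neg, scale

def differential_mvm(G_pos, G_neg, v, scale):
    """Recover y = W^T v from the difference of the two column-current vectors."""
    return ((G_pos - G_neg).T @ v) / scale

W = np.array([[0.5, -0.2], [-0.8, 0.1], [0.3, 0.7]])
v = np.array([0.2, 0.1, 0.3])                      # non-negative read voltages
G_pos, G_neg, s = map_signed_weights(W)
print(differential_mvm(G_pos, G_neg, v, s))        # matches W.T @ v
print(W.T @ v)
```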
6. Logic and in-memory computing achieved in a single ferroelectric semiconductor transistor
Authors: Junjun Wang, Feng Wang, Zhenxing Wang, Wenhao Huang, Yuyu Yao, Yanrong Wang, Jia Yang, Ningning Li, Lei Yin, Ruiqing Cheng, Xueying Zhan, Chongxin Shan, Jun He. Science Bulletin (SCIE EI CSCD), 2021, Issue 22, pp. 2288-2296, M0003.
Exploring materials with multiple properties that can endow a simple device with integrated functionalities has attracted enormous attention in the microelectronic field. One reason is the pressing demand for processors with continuously higher performance and totally new architectures. Combining ferroelectric with semiconducting properties is a promising solution. Here, we show that logic, in-memory computing, and optoelectrical logic and non-volatile computing functionalities can be integrated into a single transistor with the ferroelectric semiconductor α-In_(2)Se_(3) as the channel. Two-input AND, OR, and non-volatile NOR and NAND logic operations with current on/off ratios reaching up to five orders of magnitude, good endurance (1000 operation cycles), and fast operating speed (10 μs) are realized. In addition, optoelectrical OR logic and non-volatile implication (IMP) operations, as well as ternary-input optoelectrical logic and in-memory computing functions, are achieved by introducing light as an additional input signal. Our work highlights the potential of integrating complex logic functions and new types of computing into a simple device based on emerging ferroelectric semiconductors.
Keywords: two-dimensional ferroelectric semiconductors; logic; in-memory computing; optoelectrical logic; non-volatile optoelectrical computing
7. 2T1C DRAM based on semiconducting MoS_(2) and semimetallic graphene for in-memory computing
Authors: Saifei Gou, Yin Wang, Xiangqi Dong, Zihan Xu, Xinyu Wang, Qicheng Sun, Yufeng Xie, Peng Zhou, Wenzhong Bao. National Science Open, 2023, Issue 4, pp. 65-75.
In-memory computing is an alternative method to effectively accelerate the massive data-computing tasks of artificial intelligence (AI) and break the memory wall. In this work, we propose a 2T1C DRAM structure for in-memory computing. It integrates a monolayer graphene transistor, a monolayer MoS_(2) transistor, and a capacitor in a two-transistor-one-capacitor (2T1C) configuration. In this structure, the storage node is in a similar position to that of one-transistor-one-capacitor (1T1C) dynamic random-access memory (DRAM), while an additional graphene transistor is used to accomplish the nondestructive readout of the stored information. Furthermore, the ultralow leakage current of the MoS_(2) transistor enables the storage of multi-level voltages on the capacitor with a long retention time. The stored charges can effectually tune the channel conductance of the graphene transistor due to its excellent linearity, so that linear analog multiplication can be realized. Because of the almost unlimited cycling endurance of DRAM, our 2T1C DRAM has great potential for in situ training and recognition, which can significantly improve the recognition accuracy of neural networks.
Keywords: molybdenum disulfide (MoS_(2)); graphene; DRAM; in-memory computing
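Illustrative sketch (not from the paper): if the stored capacitor voltage tunes the readout-transistor conductance linearly, each cell's read current is proportional to the product of the stored value and the read voltage, and summing currents along one bit line yields a multiply-accumulate. The baseline conductance and slope below are assumed values, not measured device parameters.

```python
import numpy as np

# Idealized 2T1C cell: the stored capacitor voltage Vs sets the readout-channel
# conductance linearly, G = G0 + k * Vs, so the read current is I = G * Vread.
G0, k = 1e-5, 2e-5          # baseline conductance (S) and tuning slope (S/V), illustrative

def cell_current(v_stored, v_read):
    return (G0 + k * v_stored) * v_read

def column_mac(v_stored, v_read):
    """Summing currents along one bit line realizes a multiply-accumulate
    (a reference column would subtract the constant G0 baseline in practice)."""
    return np.sum(cell_current(np.asarray(v_stored), np.asarray(v_read)))

weights = [0.4, 0.8, 0.2]    # multi-level voltages retained on the capacitors
inputs  = [0.3, 0.1, 0.5]    # read pulses encoding the input activations
print(column_mac(weights, inputs))
```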
8. Computing Power Network: A Survey (Cited by: 1)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. China Communications (SCIE CSCD), 2024, Issue 9, pp. 109-145.
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues on computing power modeling, information awareness and announcement, resource allocation, network forwarding, computing power transaction platforms, and resource orchestration platforms is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented as well.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
9. A review on SRAM-based computing in-memory: Circuits, functions, and applications (Cited by: 4)
Authors: Zhiting Lin, Zhongzhen Tong, Jin Zhang, Fangming Wang, Tian Xu, Yue Zhao, Xiulong Wu, Chunyu Peng, Wenjuan Lu, Qiang Zhao, Junning Chen. Journal of Semiconductors (EI CAS CSCD), 2022, Issue 3, pp. 22-46.
Artificial intelligence (AI) processes data-centric applications with minimal effort. However, it poses new challenges to system design in terms of computational speed and energy efficiency. The traditional von Neumann architecture cannot meet the requirements of heavily data-centric applications due to the separation of computation and storage. The emergence of computing in-memory (CIM) is significant in circumventing the von Neumann bottleneck. A commercialized memory architecture, static random-access memory (SRAM), is fast and robust, consumes less power, and is compatible with state-of-the-art technology. This study investigates the research progress of SRAM-based CIM technology at three levels: circuit, function, and application. It also outlines the problems, challenges, and prospects of SRAM-based CIM macros.
Keywords: static random-access memory (SRAM); artificial intelligence (AI); von Neumann bottleneck; computing in-memory (CIM); convolutional neural network (CNN)
10. Air-Ground Collaborative Mobile Edge Computing: Architecture, Challenges, and Opportunities (Cited by: 1)
Authors: Qin Zhen, He Shoushuai, Wang Hai, Qu Yuben, Dai Haipeng, Xiong Fei, Wei Zhenhua, Li Hailong. China Communications (SCIE CSCD), 2024, Issue 5, pp. 1-16.
By pushing computation, cache, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth generation (5G) and future sixth generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers within the air and ground in the envisioned 6G, through a variety of collaborative methods, to provide the best possible computation services for UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in the AGC-MEC as well as their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy. Finally, we highlight several potential research directions of the AGC-MEC.
Keywords: air-ground; architecture; collaborative; mobile edge computing
11. UAV-assisted cooperative offloading energy efficiency system for mobile edge computing (Cited by: 1)
Authors: Xue-Yong Yu, Wen-Jin Niu, Ye Zhu, Hong-Bo Zhu. Digital Communications and Networks (SCIE CSCD), 2024, Issue 1, pp. 16-24.
Reliable communication and intensive computing power cannot be provided effectively by temporary hot spots in disaster areas and complex-terrain ground infrastructure. Mitigating this has greatly driven the application and integration of UAVs and Mobile Edge Computing (MEC) into the Internet of Things (IoT). However, problems such as multiple users and huge data flows over large areas, which contradict the limited computing power of a single UAV, still exist. Because UAV collaboration allows complex tasks to be accomplished, cooperative task offloading between multiple UAVs must satisfy task interdependence and realize parallel processing, which reduces the computing power consumption and endurance pressure of terminals. Considering the computing requirements of the user terminal, the delay constraint of a computing task, the energy constraint, and the safe distance of UAVs, we construct a UAV-assisted cooperative offloading energy efficiency system for mobile edge computing to minimize user terminal energy consumption. However, the resulting optimization problem is originally nonconvex and thus difficult to solve optimally. To tackle this problem, we develop an energy efficiency optimization algorithm using Block Coordinate Descent (BCD) that decomposes the problem into three convex subproblems. Furthermore, we jointly optimize the number of local computing tasks, the number of offloaded computing tasks, the trajectories of the UAVs, and the offloading matching relationship between multiple UAVs and multiple user terminals. Simulation results show that the proposed approach is suitable for different channel conditions and significantly reduces user terminal energy consumption compared with other benchmark schemes.
Keywords: computation offloading; Internet of Things (IoT); mobile edge computing (MEC); block coordinate descent (BCD)
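Illustrative sketch (not from the paper): block coordinate descent alternately solves one convex subproblem per variable block while holding the others fixed. The toy objective below merely stands in for the terminal energy model; the three blocks mirror the paper's split into local tasks, offloaded tasks, and trajectory/matching variables.

```python
from scipy.optimize import minimize_scalar

# Toy convex surrogate for terminal energy E(l, o, t):
# l = local task share, o = offloaded task share, t = a scalar trajectory parameter.
def energy(l, o, t):
    return 0.5 * l**2 + 0.2 * (o - 2.0)**2 + 0.1 * (t - 1.0)**2 + 0.05 * l * o

def bcd(iters=50):
    l, o, t = 1.0, 1.0, 0.0                                 # initial blocks
    for _ in range(iters):
        l = minimize_scalar(lambda x: energy(x, o, t)).x    # subproblem 1: local tasks
        o = minimize_scalar(lambda x: energy(l, x, t)).x    # subproblem 2: offloaded tasks
        t = minimize_scalar(lambda x: energy(l, o, x)).x    # subproblem 3: trajectory
    return l, o, t, energy(l, o, t)

print(bcd())   # converges to the joint minimizer of the toy objective
```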
12. Multiframe-integrated, in-sensor computing using persistent photoconductivity (Cited by: 1)
Authors: Xiaoyong Jiang, Minrui Ye, Yunhai Li, Xiao Fu, Tangxin Li, Qixiao Zhao, Jinjin Wang, Tao Zhang, Jinshui Miao, Zengguang Cheng. Journal of Semiconductors (EI CAS CSCD), 2024, Issue 9, pp. 36-41.
The utilization of processing capabilities within the detector holds significant promise in addressing energy consumption and latency challenges, especially in the context of dynamic motion recognition tasks, where substantial data transfers are necessitated by the generation of extensive information and the need for frame-by-frame analysis. Herein, we present a novel approach for dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multiframe integration using photodetectors. Our approach introduces a retinomorphic MoS_(2) photodetector device for motion detection and analysis. The device enables the generation of informative final states, nonlinearly embedding both past and present frames. Subsequent multiply-accumulate (MAC) calculations are efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved an impressive recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also facilitates energy-efficient in-sensor computing.
Keywords: in-sensor computing; MoS_(2); photodetector; persistent photoconductivity; reservoir computing
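Illustrative sketch (not from the paper): multiframe integration can be mimicked by letting each pixel's state retain a decaying, nonlinear memory of past frames (persistent photoconductivity) and then applying a linear multiply-accumulate readout to the final state. The decay constant, frame sizes, and untrained readout weights are all assumptions.

```python
import numpy as np

def integrate_frames(frames, decay=0.8):
    """Nonlinear multiframe integration: each pixel's state keeps a decaying
    memory of past illumination, embedding past and present frames together."""
    state = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        state = np.tanh(decay * state + f)
    return state

def mac_classifier(state, W, b):
    """Multiply-accumulate readout over the flattened final states."""
    scores = W @ state.ravel() + b
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
frames = [rng.random((4, 4)) for _ in range(5)]      # a short motion clip
W, b = rng.standard_normal((3, 16)), np.zeros(3)     # untrained readout, 3 motion classes
print(mac_classifier(integrate_frames(frames), W, b))
```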
13. MCWOA Scheduler: Modified Chimp-Whale Optimization Algorithm for Task Scheduling in Cloud Computing (Cited by: 1)
Authors: Chirag Chandrashekar, Pradeep Krishnadoss, Vijayakumar Kedalu Poornachary, Balasundaram Ananthakrishnan. Computers, Materials & Continua (SCIE EI), 2024, Issue 2, pp. 2593-2616.
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional MCWOA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
Keywords: cloud computing; scheduling; chimp optimization algorithm; whale optimization algorithm
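Illustrative sketch (not from the paper): the whale-style position updates that MCWOA borrows alternate between encircling the best solution found so far and a bubble-net spiral toward it (the |A|-gated random-search branch of the full WOA is omitted here for brevity). The test function and constants are illustrative.

```python
import numpy as np

def woa_step(positions, best, a, rng):
    """One whale-style update per agent: encircling or bubble-net spiral."""
    new = np.empty_like(positions)
    for i, x in enumerate(positions):
        if rng.random() < 0.5:                       # encircle the best solution
            r = rng.random(x.shape)
            A, C = 2 * a * r - a, 2 * rng.random(x.shape)
            new[i] = best - A * np.abs(C * best - x)
        else:                                        # spiral toward the best solution
            l = rng.uniform(-1, 1)
            new[i] = np.abs(best - x) * np.exp(l) * np.cos(2 * np.pi * l) + best
    return new

def makespan_like(x):                                # toy objective standing in for schedule cost
    return np.sum(x**2, axis=-1)

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, size=(20, 4))
for it in range(100):
    best = pop[np.argmin(makespan_like(pop))]
    a = 2 - 2 * it / 100                             # control parameter decays from 2 to 0
    pop = woa_step(pop, best, a, rng)
print(makespan_like(pop).min())
```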
14. Fabrication and integration of photonic devices for phase-change memory and neuromorphic computing (Cited by: 1)
Authors: Wen Zhou, Xueyang Shen, Xiaolong Yang, Jiangjing Wang, Wei Zhang. International Journal of Extreme Manufacturing (SCIE EI CAS CSCD), 2024, Issue 2, pp. 2-27.
In the past decade, there has been tremendous progress in integrating chalcogenide phase-change materials (PCMs) on the silicon photonic platform for applications from non-volatile memory to neuromorphic in-memory computing. In particular, these non-von Neumann computational elements and systems benefit from the mass manufacturing of silicon photonic integrated circuits (PICs) on 8-inch wafers using a 130 nm complementary metal-oxide semiconductor line. Chip manufacturing based on deep-ultraviolet lithography and electron-beam lithography enables rapid prototyping of PICs, which can be integrated with high-quality PCMs based on the wafer-scale sputtering technique as a back-end-of-line process. In this article, we present an overview of recent advances in waveguide-integrated PCM memory cells, functional devices, and neuromorphic systems, with an emphasis on fabrication and integration processes to attain state-of-the-art device performance. After a short overview of PCM-based photonic devices, we discuss the materials properties of the functional layer as well as the progress on the light-guiding layer, namely, the silicon and germanium waveguide platforms. Next, we discuss the cleanroom fabrication flow of waveguide devices integrated with thin films and nanowires, silicon waveguides, and plasmonic microheaters for the electrothermal switching of PCMs and mixed-mode operation. Finally, the fabrication of photonic and photonic-electronic neuromorphic computing systems is reviewed. These systems consist of arrays of PCM memory elements for associative learning, matrix-vector multiplication, and pattern recognition. With large-scale integration, the neuromorphic photonic computing paradigm holds the promise to outperform digital electronic accelerators by taking advantage of ultra-high bandwidth, high speed, and energy-efficient operation in running machine learning algorithms.
Keywords: nanofabrication; silicon photonics; phase-change materials; non-volatile photonic memory; neuromorphic photonic computing
15. Anti-Byzantine Attacks Enabled Vehicle Selection for Asynchronous Federated Learning in Vehicular Edge Computing (Cited by: 1)
Authors: Zhang Cui, Xu Xiao, Wu Qiong, Fan Pingyi, Fan Qiang, Zhu Huiling, Wang Jiangzhou. China Communications (SCIE CSCD), 2024, Issue 8, pp. 1-17.
In vehicular edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Due to the different amounts of local data, computing capabilities, and locations of the vehicles, renewing the global model with the same weight is inappropriate. The above factors affect the local calculation time and upload time of the local model, and a vehicle may also be affected by Byzantine attacks, leading to the deterioration of its data. However, based on deep reinforcement learning (DRL), we can consider these factors comprehensively to eliminate vehicles with poor performance as much as possible and exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, when aggregating in AFL, we can focus on those vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a vehicle selection scheme based on DRL in VEC. In this scheme, the vehicles' mobility, channel conditions with temporal variations, computational resources with temporal variations, different data amounts, and transmission channel status, as well as Byzantine attacks, are taken into account. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
Keywords: asynchronous federated learning; Byzantine attacks; vehicle selection; vehicular edge computing
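Illustrative sketch (not from the paper): in asynchronous federated learning, each arriving local model is blended into the global model immediately; a selection/weighting rule can shrink the contribution of stale or suspect vehicles. The discounting formula below is an assumed example, not the paper's scheme.

```python
import numpy as np

def afl_update(global_w, local_w, staleness, quality, base_alpha=0.5):
    """Asynchronous federated update: blend one vehicle's local model into the
    global model with a weight discounted by staleness and scaled by a quality
    score (e.g., from a DRL-based selector that filters out Byzantine vehicles)."""
    alpha = base_alpha * quality / (1.0 + staleness)
    return (1.0 - alpha) * global_w + alpha * local_w

global_w = np.zeros(4)
arrivals = [                                   # (local weights, staleness, quality)
    (np.array([1.0, 0.5, 0.0, 0.2]), 0, 1.0),
    (np.array([0.8, 0.4, 0.1, 0.3]), 3, 0.7),
    (np.array([9.0, 9.0, 9.0, 9.0]), 1, 0.0),  # suspected Byzantine vehicle: weight 0
]
for w, s, q in arrivals:
    global_w = afl_update(global_w, w, s, q)
print(global_w)
```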
16. Security Implications of Edge Computing in Cloud Networks (Cited by: 1)
Authors: Sina Ahmadi. Journal of Computer and Communications, 2024, Issue 2, pp. 26-46.
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, there is a need to implement proper security measures to overcome these issues. Emerging trends, like machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and support a secure future for cloud computing. It was concluded that the security implications of edge computing can be addressed with the help of new technologies and techniques.
Keywords: edge computing; cloud networks; artificial intelligence; machine learning; cloud security
17. IRS Assisted UAV Communications against Proactive Eavesdropping in Mobile Edge Computing Networks (Cited by: 1)
Authors: Ying Zhang, Weiming Niu, Leibing Yan. Computer Modeling in Engineering & Sciences (SCIE EI), 2024, Issue 1, pp. 885-902.
In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain the closed-form solution for the boundary conditions of the two modes. Then we transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero forcing (ZF) based method as a sub-optimal solution, and the simulation results show that the two proposed schemes obtain better performance than traditional schemes.
Keywords: mobile edge computing (MEC); unmanned aerial vehicle (UAV); intelligent reflecting surface (IRS); zero forcing (ZF)
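Illustrative sketch (not from the paper): the zero-forcing idea behind the sub-optimal scheme precodes with the pseudo-inverse of the stacked user channels so that each user's beam nulls interference toward the other users. Channel dimensions and values below are placeholders.

```python
import numpy as np

def zero_forcing_precoder(H):
    """ZF precoder for a K-user MISO downlink with channel matrix H (K x Nt):
    W = H^H (H H^H)^{-1}, with columns normalized to unit transmit power."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

rng = np.random.default_rng(4)
K, Nt = 3, 6                       # 3 users, 6 transmit antennas (placeholders)
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
W = zero_forcing_precoder(H)
print(np.round(np.abs(H @ W), 6))  # ~diagonal: inter-user interference forced to zero
```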
18. ATSSC: An Attack Tolerant System in Serverless Computing
Authors: Zhang Shuai, Guo Yunfei, Hu Hongchao, Liu Wenyan, Wang Yawen. China Communications (SCIE CSCD), 2024, Issue 6, pp. 192-205.
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by their driving events. Nonetheless, security threats in serverless computing, such as vulnerability-based security threats, have become the pain point hindering its wide adoption. Ideas from proactive defense, such as redundancy, diversity, and dynamism, provide promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a "stacked" mode, as they are designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve all-life-cycle protection for serverless applications with limited cost. In this paper, we present ATSSC, a proactive-defense-enabled, attack-tolerant serverless platform. ATSSC integrates the characteristics of redundancy, diversity, and dynamism seamlessly into serverless computing to achieve high-level security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the driven events and performs cross-validation to verify the results. In order to create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
Keywords: active defense; attack tolerance; cloud computing; security; serverless computing
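Illustrative sketch (not from the paper): the cross-validation step described in the abstract can be modeled as invoking several diverse replicas of the same function on one event and majority-voting their outputs, so a single compromised replica cannot corrupt the accepted result. The replica functions and quorum below are stand-ins.

```python
from collections import Counter

def cross_validate(replicas, event, quorum=2):
    """Run diverse replicas of one serverless function and majority-vote the outputs."""
    outputs = [replica(event) for replica in replicas]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < quorum:
        raise RuntimeError("no agreement among replicas; result rejected")
    return value

# Three diversified implementations of the same function; one has been tampered with.
replica_a = lambda x: x * x
replica_b = lambda x: x ** 2
replica_c = lambda x: x * x + 1337   # compromised replica returning a wrong answer

print(cross_validate([replica_a, replica_b, replica_c], 7))   # -> 49, the attack is masked
```

A periodic refresh of the replica pool from clean, diverse builds (the dynamic refresh strategy mentioned above) would limit how long any compromised replica stays in rotation.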
19. Task Offloading in Edge Computing Using GNNs and DQN
Authors: Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez, Miguel Angel Anton. Computer Modeling in Engineering & Sciences (SCIE EI), 2024, Issue 6, pp. 2649-2671.
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they do not have enough available memory and processing capacity. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as Edge Data Centers or remote cloud servers. For different reasons, it is more appropriate to offload various tasks to specific destinations depending on the properties and state of the environment and the nature of the functions. At the same time, establishing an optimal offloading policy, which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers, is not easy. This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives on a well-known edge computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution. In terms of energy efficiency, they provided similar results. Finally, the success rates of different computing centers are tested, and the lack of capacity of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local networking environment are unique, as they emulate the state and structure of the environment innovatively, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. Simultaneously, the suitability of Reinforcement Learning (RL) techniques is demonstrated due to the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Keywords: edge computing; edge offloading; fog computing; task offloading
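Illustrative sketch (not from the paper): the reinforcement-learning side of the offloading decision can be reduced to its simplest form, a tabular Q-learning agent that picks an execution destination (local, edge, or cloud) per coarse task/load state and learns from a latency-based reward. The state encoding and cost numbers are assumptions; the paper itself uses a DQN and a GNN over a much richer network state.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, actions = 6, ["local", "edge", "cloud"]   # states: coarse (task size, load) bins
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(state, action):
    """Toy latency cost: local is slow for big tasks, cloud suffers under high load."""
    size, load = divmod(state, 2)                    # size in {0,1,2}, load in {0,1}
    latency = {"local": 2.0 * size, "edge": 1.0 + load, "cloud": 0.5 + 2.0 * load}[action]
    return -latency

for step in range(5000):
    s = rng.integers(n_states)
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = reward(s, actions[a])
    s_next = rng.integers(n_states)                  # simplified i.i.d. state transition
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

for s in range(n_states):
    print(f"state {s}: offload to {actions[int(np.argmax(Q[s]))]}")
```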
20. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. Chinese Physics B (SCIE EI CAS CSCD), 2024, Issue 3, pp. 1-23.
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are hungry for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address this issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms with much greater parallelism and energy efficiency. Inspired by the human neural network architecture, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in neuromorphic architectures such as spiking neural networks (SNNs), the development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
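Illustrative sketch (not from the paper): the basic unit of the spiking neural networks discussed above is the leaky integrate-and-fire neuron, which integrates input current, leaks toward rest, and emits a spike on crossing a threshold. Time constants and thresholds below are illustrative.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
    integrates the input current, and emits a spike when it crosses the threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)        # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = v_reset                 # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

t = np.arange(100)
stimulus = 1.5 * (t > 20) * (t < 80)    # step input between t = 20 and t = 80
print(lif_neuron(stimulus).sum(), "spikes")
```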