Journal Articles
6,782 articles found
Tackling the Existential Threats from Quantum Computers and AI
1
Author: Fazal Raheman 《Intelligent Information Management》 2024, No. 3, pp. 121-146 (26 pages)
Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter to pause AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an “epistemological nightmare”, AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks. 1) Mandatory third-party permissions that allow computers to run applications at the expense of introducing vulnerabilities. 2) The Halting Problem of Turing-complete AI programming languages potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under the legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: “Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability.” Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.
Keywords: Ethical AI; Quantum computers; Existential Threat; Computer Vulnerabilities; Halting Problem; AGI
Solving the independent set problem by sticker based DNA computers
2
Authors: Hassan Taghipour, Ahad Taghipour, Mahdi Rezaei, Heydar Ali Esmaili 《American Journal of Molecular Biology》 2012, No. 2, pp. 153-158 (6 pages)
In this paper, sticker-based DNA computing was used for solving the independent set problem. First, the solution space was constructed by using appropriate DNA memory complexes. We defined a new operation called “divide” and applied it in the construction of the solution space. Then, by applying a sticker-based parallel algorithm using biological operations, the independent set problem was solved in polynomial time.
Keywords: Parallel Computing; Sticker-Based DNA Computers; Independent Set Problem; NP-Complete Problem
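The sticker-model search described in entry 2 above amounts to enumerating the whole solution space, with the enumeration carried out in parallel by DNA memory complexes. As a rough software analogue only, the sketch below enumerates vertex subsets of a small invented graph sequentially; the exponential sequential cost is exactly what the molecular parallelism is meant to absorb, and nothing here is taken from the paper.

```python
# Rough software analogue of the exhaustive search the sticker model performs
# in parallel: enumerate every vertex subset and keep the independent ones.
# Hypothetical illustration; the paper does this with DNA memory complexes
# and a "divide" operation rather than on a CPU.
from itertools import combinations

def is_independent(subset, edges):
    """A vertex set is independent if no edge joins two of its members."""
    return not any(u in subset and v in subset for u, v in edges)

def max_independent_set(vertices, edges):
    for size in range(len(vertices), 0, -1):          # try largest sets first
        for candidate in combinations(vertices, size):
            if is_independent(set(candidate), edges):
                return set(candidate)
    return set()

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]       # small example graph
print(max_independent_set([1, 2, 3, 4], edges))        # -> {2, 4}
```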
When Computers Can Kill—Two Unusual Cases of “E-Thrombosis”
3
Authors: Nausheen Doctor, Puneet Seth 《International Journal of Clinical Medicine》 2018, No. 4, pp. 335-340 (6 pages)
Deep vein thrombosis (DVT) is a common and potentially fatal vascular event when it leads to pulmonary embolism. Occurring as part of the broader phenomenon of Venous Thromboembolism (VTE), DVT classically arises when Virchow’s triad of hypercoagulability, changes in blood flow (e.g. stasis) and endothelial dysfunction is fulfilled. Although such immobilisation is most often seen in bedbound patients and travellers on long-distance flights, there is increasing evidence that prolonged periods of work or leisure spent using computers while seated at a desk are an independent risk factor. In this report, we present two cases of “e-thrombosis” from prolonged sitting while using a computer.
Keywords: Deep Vein Thrombosis; Seated Immobilisation; Venous Stasis; Prolonged Sitting; Computers
Force Computers' Advanced TCA Switching and SBC Technology
4
《电子产品世界》 2004, No. 07B, p. 24 (1 page)
Force Computers has introduced two ATCA products. One is the ATCA-F300, claimed to be the only Gigabit Ethernet switch with both communication Base and Fabric interfaces. The other is the ATCA-715 series of SBCs, which combines an Intel Pentium M processor, the Intel E7501 chipset and the Intel 6300ESB I/O controller hub on an Advanced TCA form-factor board. Each model supports PICMG 3.0 and 3.…
Keywords: Force Computers; ATCA-F300; SBC; switch
Inova Computers' Single-Board Computers
5
《电子产品世界》 2004, No. 06B, pp. 20-21 (2 pages)
Keywords: Inova Computers; single-board computer; ICP-PM; SBC
Motorola Acquires Force Computers and Renames Its Embedded Computer Business
6
《测控自动化》 2004, No. 9, p. 19 (1 page)
Keywords: Motorola; corporate acquisition; Force Computers division; Embedded Communications Computing division
Computer Viruses: Do computers get sick? You bet, they're laid low - by a virus!
7
Author: 史习冬 《大学英语》 2000, No. 1, pp. 12-13 (2 pages)
Keywords: Outlook; Computer Viruses; Do computers get sick; They're laid low by a virus; You bet
Are Computers Good or Bad for the Children?
8
Author: 刘洪毓 《中学英语园地(八年级)》 2007, No. 10, pp. 9-10 (2 pages)
We are already familiar with computers: computers work for us at home, in offices and in factories. But it is also true that many children today are using computers at schools before they can write. What does this mean for the future? Are these children lucky or not?
Keywords: Bad; Are Computers Good or Bad for the Children
Clinical and Imaging Analysis of 17 Patients with Otosclerosis
9
Authors: 魏建初, 张敏, 何云生, 胡先芳 《中国耳鼻咽喉头颈外科》 CSCD 2024, No. 4, pp. 266-268 (3 pages)
Objective: To analyze the clinical features, imaging findings, and surgical outcomes of 17 patients with otosclerosis. Methods: Clinical and imaging data of 17 otosclerosis patients (17 ears) who underwent surgery at Huzhou Central Hospital between May 2020 and May 2023 were collected, and their clinical presentations, temporal bone CT findings, and pre- and postoperative pure-tone audiometry results were analyzed. Results: Of the 17 patients (17 ears), 11 ears (64.71%) were of the fenestral (oval-window) type, presenting with progressive conductive hearing loss; 9 of these ears had tinnitus and 3 had mild balance problems. The mixed type accounted for 6 ears (35.29%) with mixed hearing loss; 5 ears had tinnitus and 3 had occasional balance disturbances. On imaging, the fenestral type showed bony thickening around the stapes footplate and increased density at the oval window, whereas the mixed type showed footplate thickening and reduced pericochlear density, with a “double-ring sign” visible in 1 ear; the difference in bone density between the two types was not statistically significant (P > 0.05). No worsening of tinnitus or other serious complications occurred after surgery, and only 2 cases (11.76%) experienced transient dizziness. Postoperative follow-up showed good healing of the surgical site without infection or other delayed complications, and postoperative hearing tests showed significant improvement in air-conduction thresholds and the air-bone gap compared with preoperative values (all P < 0.05). Conclusion: Fenestral and mixed otosclerosis each have characteristic imaging and clinical features. Stapes surgery (stapedoplasty) can significantly improve hearing, lowering air-conduction thresholds and the air-bone gap, and is safe with few complications.
Keywords: Otosclerosis; Signs and Symptoms; Temporal Bone; Tomography, X-Ray Computed; Audiometry, Pure-Tone; Auditory Threshold
From Standard Policy-Based Zero Trust to Absolute Zero Trust (AZT): A Quantum Leap to Q-Day Security
10
Author: Fazal Raheman 《Journal of Computer and Communications》 2024, No. 3, pp. 252-282 (31 pages)
Cybercrime is projected to cost a whopping $23.8 Trillion by 2027. This is essentially because there’s no computer network that’s not vulnerable. Fool-proof cybersecurity of personal data in a connected computer is considered practically impossible. The advent of quantum computers (QC) will worsen cybersecurity. QC will be a boon for data-intensive industries by drastically reducing the computing time from years to minutes. But QC will render our current cryptography vulnerable to quantum attacks, breaking nearly all modern cryptographic systems. Before QCs with sufficient qubits arrive, we must be ready with quantum-safe strategies to protect our ICT infrastructures. Post-quantum cryptography (PQC) is being aggressively pursued worldwide as a defence from the potential Q-day threat. NIST (National Institute of Standards and Technology), in a rigorous process, tested 82 PQC schemes, 80 of which failed after the final round in 2022. Recently the remaining two PQCs were also cracked by a Swedish and a French team of cryptographers, placing NIST’s PQC standardization process in serious jeopardy. With all the NIST-evaluated PQCs failing, there’s an urgent need to explore alternate strategies. Although cybersecurity heavily relies on cryptography, recent evidence indicates that it can indeed transcend beyond encryption using Zero Vulnerability Computing (ZVC) technology. ZVC is an encryption-agnostic absolute zero trust (AZT) approach that can potentially render computers quantum resistant by banning all third-party permissions, a root cause of most vulnerabilities. Unachievable in legacy systems, AZT is pursued by an experienced consortium of European partners to build compact, solid-state devices that are robust, resilient, energy-efficient, and with zero attack surface, rendering them resistant to malware and future Q-Day threats.
Keywords: Cybersecurity; Quantum Computers; Post-Quantum Cryptography; Q-Day; Zero Trust
Edge Computing Offloading Based on High-Altitude Platforms: Networks, Algorithms, and Prospects
11
Authors: 孙恩昌, 李梦思, 何若兰, 张卉, 张延华 《北京工业大学学报》 CAS CSCD 北大核心 2024, No. 3, pp. 348-361 (14 pages)
Combining high-altitude platform (HAP) technology with multi-access edge computing (MEC) extends the deployment of MEC servers from the ground into the air, overcoming the limitations of traditional terrestrial MEC networks and providing users with ubiquitous computation-offloading services. This paper surveys research on HAP-based MEC offloading. First, HAP-based MEC networks are analyzed from four aspects: the advantages of HAP computing nodes, the network components, the network architecture, and the main challenges together with the techniques for addressing them. Second, HAP-based MEC offloading algorithms are compared horizontally and contrasted vertically from the perspectives of graph theory, game theory, machine learning, federated learning and related theories. Finally, the open problems of HAP-based MEC offloading are identified and future research directions are discussed.
Keywords: high-altitude platform (HAP); multi-access edge computing (MEC); computation offloading; graph theory; game theory; machine learning
IRS Assisted UAV Communications against Proactive Eavesdropping in Mobile Edge Computing Networks (Cited: 1)
12
Authors: Ying Zhang, Weiming Niu, Leibing Yan 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 1, pp. 885-902 (18 pages)
In this paper, we consider mobile edge computing (MEC) networks against proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase-shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain the closed-form solution for the boundary conditions of the two modes. Then we transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes obtain better performance than traditional schemes.
Keywords: Mobile edge computing (MEC); unmanned aerial vehicle (UAV); intelligent reflecting surface (IRS); zero forcing (ZF)
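The alternating optimization (AO) method mentioned in entry 12 follows a generic pattern: optimize one block of variables with the others fixed, then switch blocks, and repeat until the objective stops improving. The sketch below shows that skeleton on an invented toy objective with closed-form block updates; it is not the paper's trajectory/beamforming/phase-shift formulation.

```python
# Generic alternating-optimization (AO) loop: fix one block of variables,
# optimize the other, and repeat. The toy objective and block updates are
# invented for illustration only.

def alternating_optimization(f, best_x_given_y, best_y_given_x, x0, y0,
                             tol=1e-9, max_iter=1000):
    """Maximize f(x, y) by alternating exact block updates."""
    x, y, prev = x0, y0, float("-inf")
    for _ in range(max_iter):
        x = best_x_given_y(y)        # block 1: optimal x for fixed y
        y = best_y_given_x(x)        # block 2: optimal y for fixed x
        val = f(x, y)
        if val - prev < tol:         # objective is non-decreasing -> converges
            break
        prev = val
    return x, y, f(x, y)

# Toy example: maximize f(x, y) = -(x - y)^2 - (y - 3)^2 over scalars.
f = lambda x, y: -(x - y) ** 2 - (y - 3) ** 2
best_x = lambda y: y                 # argmax over x with y fixed
best_y = lambda x: (x + 3) / 2       # argmax over y with x fixed
print(alternating_optimization(f, best_x, best_y, x0=0.0, y0=0.0))  # -> (~3, ~3, ~0)
```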
Airfoil and Array Optimization for Power Enhancement of Vertical-Axis Wind Turbines
13
Authors: 雷鸣, 方辉 《中国海洋平台》 2024, No. 2, pp. 12-18, 47 (8 pages)
For an H-type vertical-axis wind turbine (VAWT), computational fluid dynamics (CFD) simulations were used to link airfoil design with turbine array layout, comparing the torque coefficient C_m, power coefficient C_P and mean power parameter Ω of the VAWT for several airfoils and array configurations. The results show that, compared with symmetric airfoils, asymmetric airfoils have a smaller power coefficient at high tip-speed ratios, while the camber effect can significantly increase the power coefficient in the downwind region. In a wind-farm array, optimizing a three-turbine array markedly increases the power of the downwind turbines: the power of a single turbine can increase by 40% and the overall power of the wind farm by about 20%. A five-turbine array is proposed for the layout of an offshore aquaculture structure; after optimization, the overall efficiency of the wind farm increases by 65% and the performance of a single turbine by up to 100%. These results help improve the function and design of deep-sea cage systems.
Keywords: vertical-axis wind turbine; computational fluid dynamics (CFD); simulation; airfoil; wind-farm array
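For entry 13, the torque coefficient C_m and power coefficient C_P it compares are linked through the tip-speed ratio λ = ωR/V, with C_P = C_m·λ under the usual normalisation. A minimal sketch with illustrative numbers (not values from the paper):

```python
# Link between the torque coefficient C_m and power coefficient C_P of a wind
# turbine via the tip-speed ratio: C_P = C_m * lambda, lambda = omega * R / V.
# The numbers below are illustrative, not results from the paper.

def tip_speed_ratio(omega, radius, wind_speed):
    """lambda = omega * R / V (omega in rad/s, R in m, V in m/s)."""
    return omega * radius / wind_speed

def power_coefficient(c_m, lam):
    """C_P = C_m * lambda, since P = T * omega and both coefficients share
    the same dynamic-pressure normalisation."""
    return c_m * lam

lam = tip_speed_ratio(omega=6.0, radius=1.0, wind_speed=3.0)   # lambda = 2.0
print(power_coefficient(c_m=0.15, lam=lam))                    # C_P = 0.30
```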
ATSSC:An Attack Tolerant System in Serverless Computing
14
Authors: Zhang Shuai, Guo Yunfei, Hu Hongchao, Liu Wenyan, Wang Yawen 《China Communications》 SCIE CSCD 2024, No. 6, pp. 192-205 (14 pages)
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by the events that drive them. Nonetheless, security threats in serverless computing, such as vulnerability-based threats, have become the pain point hindering its wide adoption. Proactive-defense ideas such as redundancy, diversity and dynamism provide promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a “stacked” mode, as they are designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve whole-life-cycle protection for serverless applications at limited cost. In this paper, we present ATSSC, a proactive-defense-enabled attack-tolerant serverless platform. ATSSC integrates the characteristics of redundancy, diversity and dynamism into serverless computing seamlessly to achieve high security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the driving events and performs cross-validation to verify the results. To create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
Keywords: active defense; attack tolerant; cloud computing; security; serverless computing
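The cross-validation of diverse function replicas described in entry 14 can be pictured as a majority vote over the outputs of independently built replicas. The sketch below is a hypothetical illustration of that idea only, not the ATSSC implementation (which builds replicas via software and environment diversity on Kubernetes/Knative).

```python
# Hypothetical majority-vote cross-validation over diverse function replicas,
# in the spirit of the redundancy/diversity idea above; not the ATSSC code.
from collections import Counter
from typing import Any, Callable, Sequence

def cross_validate(replicas: Sequence[Callable[[Any], Any]], event: Any) -> Any:
    """Invoke every replica on the same event and return the majority result."""
    results = [replica(event) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(results) // 2:
        raise RuntimeError("no majority: a replica may be compromised")
    return value

# Three diverse replicas of the same function, one of them tampered with.
replicas = [lambda x: x * 2, lambda x: x + x, lambda x: x * 2 + 1]
print(cross_validate(replicas, 21))   # -> 42; the tampered replica is outvoted
```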
Static Analysis Techniques for Fixing Software Defects in MPI-Based Parallel Programs
15
Authors: Norah Abdullah Al-Johany, Sanaa Abdullah Sharaf, Fathy Elbouraey Eassa, Reem Abdulaziz Alnanih 《Computers, Materials & Continua》 SCIE EI 2024, No. 5, pp. 3139-3173 (35 pages)
The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memory systems. However, MPI implementations can contain defects that impact the reliability and performance of parallel applications. Detecting and correcting these defects is crucial, yet there is a lack of published models specifically designed for correcting MPI defects. To address this, we propose a model for detecting and correcting MPI defects (DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blocking point-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defects addressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and message mismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a dataset consisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes, resulting in an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI model ranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correcting defects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. The DC_MPI model fills an important research gap and provides a valuable tool for improving the quality of MPI-based parallel computing systems.
Keywords: High-performance computing; parallel computing; software engineering; software defect; message passing interface; deadlock
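Entry 15 targets defects such as deadlocks (DL) in blocking point-to-point MPI communication. The sketch below shows the classic pattern such tools flag, plus one way to fix it, assuming mpi4py and exactly two ranks; it is not code from the DC_MPI paper.

```python
# Classic blocking point-to-point deadlock (DL) of the kind MPI defect
# detectors flag, plus one fix. Hypothetical sketch assuming mpi4py and
# exactly two ranks (run with: mpiexec -n 2 python example.py); not code
# from the DC_MPI paper.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank

# Buggy version: both ranks block in recv before sending -> deadlock.
# data = comm.recv(source=peer, tag=0)
# comm.send(rank, dest=peer, tag=0)

# Corrected version: break the symmetry so one side sends first.
if rank == 0:
    comm.send(rank, dest=peer, tag=0)
    data = comm.recv(source=peer, tag=0)
else:
    data = comm.recv(source=peer, tag=0)
    comm.send(rank, dest=peer, tag=0)

print(f"rank {rank} received {data}")
```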
Complementary memtransistors for neuromorphic computing: How, what and why
16
Authors: Qi Chen, Yue Zhou, Weiwei Xiong, Zirui Chen, Yasai Wang, Xiangshui Miao, Yuhui He 《Journal of Semiconductors》 EI CAS CSCD 2024, No. 6, pp. 64-80 (17 pages)
Memtransistors, in which the source-drain channel conductance can be nonvolatilely manipulated through the gate signals, have emerged as promising components for implementing neuromorphic computing. On the other side, it is known that complementary metal-oxide-semiconductor (CMOS) field effect transistors have played the fundamental role in modern integrated circuit technology. Therefore, will complementary memtransistors (CMT) also play such a role in future neuromorphic circuits and chips? In this review, various types of materials and physical mechanisms for constructing CMT (how) are inspected, with their merits and need-to-address challenges discussed. Then the unique properties (what) and potential applications of CMT in different learning algorithms/scenarios of spiking neural networks (why) are reviewed, including the supervised rule, the reinforcement one, dynamic vision with in-sensor computing, etc. Through exploiting the complementary structure-related novel functions, significant reduction of hardware consumption, enhancement of the energy/efficiency ratio and other advantages have been gained, illustrating the alluring prospect of design technology co-optimization (DTCO) of CMT towards neuromorphic computing.
Keywords: complementary memtransistor; neuromorphic computing; reward-modulated spike timing-dependent plasticity; remote supervised method; in-sensor computing
Advances in neuromorphic computing:Expanding horizons for AI development through novel artificial neurons and in-sensor computing
17
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅 《Chinese Physics B》 SCIE EI CAS CSCD 2024, No. 3, pp. 1-23 (23 pages)
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural network for building AI systems is thirsty for computing power, which is barely satisfied by the conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) is challenging to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because of their feature of breaking Von-Neumann architecture, and dealing with AI algorithms much more parallelly and energy efficiently. Inspired by the human neural network architecture, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed by new materials or devices. Although it is relatively difficult to deploy a training process in the neuromorphic architecture like spiking neural network (SNN), the development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the field of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. The vision chips based on the architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper reviews firstly the architectures and algorithms of SNN, and artificial neuron devices supporting neuromorphic computing, then the recent progress of in-sensor computing vision chips, which all will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
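The spiking neural networks (SNNs) surveyed in entry 17 are built from spiking neurons; a leaky integrate-and-fire (LIF) model is the simplest common choice. The sketch below is a generic LIF neuron with illustrative parameters, not a model taken from the review.

```python
# Generic leaky integrate-and-fire (LIF) neuron, the simplest spiking unit of
# the SNNs discussed above. Parameter values are illustrative assumptions.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integration of the input current; emit a spike (1) whenever the
    membrane potential crosses threshold, then reset."""
    v, spikes = v_rest, []
    for i_t in input_current:
        v += dt / tau * (-(v - v_rest) + i_t)   # leak towards rest + drive
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = lif_neuron(np.full(100, 1.5))     # constant supra-threshold drive
print(int(spike_train.sum()), "spikes in 100 steps")
```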
Enhanced Temporal Correlation for Universal Lesion Detection
18
Authors: Muwei Jian, Yue Jin, Hui Yu 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 3, pp. 3051-3063 (13 pages)
Universal lesion detection (ULD) methods for computed tomography (CT) images play a vital role in modern clinical medicine and intelligent automation. It is well known that single 2D CT slices lack spatial-temporal characteristics and contextual information compared to 3D CT blocks. However, 3D CT blocks necessitate significantly higher hardware resources during the learning phase. Therefore, efficiently exploiting the temporal correlation and spatial-temporal features of 2D CT slices is crucial for ULD tasks. In this paper, we propose a ULD network with enhanced temporal correlation for this purpose, named TCE-Net. The designed TCE module is applied to enrich the discriminative feature representation of multiple sequential CT slices. Besides, we employ multi-scale feature maps to facilitate the localization and detection of lesions of various sizes. Extensive experiments conducted on the DeepLesion benchmark demonstrate that this method achieves 66.84% and 78.18% for FS@0.5 and FS@1.0, respectively, outperforming compared state-of-the-art methods.
Keywords: Universal lesion detection; computational biology; medical computing; deep learning; enhanced temporal correlation
Task Offloading in Edge Computing Using GNNs and DQN
19
Authors: Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez, Miguel Angel Anton 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 6, pp. 2649-2671 (23 pages)
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they cannot support such computations, as they do not have enough available memory and processing capacity. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as Edge Data Centers or remote cloud servers. For different reasons, it is more appropriate to offload various tasks to specific offloading destinations depending on the properties and state of the environment and the nature of the functions. At the same time, establishing an optimal offloading policy, which ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers, is not easy. This study presents two alternatives to solve the offloading decision paradigm by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives on a well-known Edge Computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution. In terms of energy efficiency, they provided similar results. Finally, the success rates of different computing centers are tested, and the lack of capacity of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local networking environment are unique, as they emulate the state and structure of the environment innovatively, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. Simultaneously, the suitability of Reinforcement Learning (RL) techniques is demonstrated due to the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Keywords: Edge computing; edge offloading; fog computing; task offloading
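Entry 19 frames task offloading as a learning problem solved with GNNs and a DQN. As a much-simplified stand-in, the sketch below uses a tabular, bandit-style epsilon-greedy learner to pick an offloading target from invented latency figures; it is not the paper's DQN/GNN or the PureEdgeSim setup.

```python
# Much-simplified, bandit-style epsilon-greedy learner that picks an
# offloading target (local / edge / cloud) per task class. The latency table,
# rewards and parameters are invented for illustration only.
import random

ACTIONS = ["local", "edge", "cloud"]
LATENCY = {"small": {"local": 0.2, "edge": 0.4, "cloud": 1.0},   # seconds (made up)
           "large": {"local": 3.0, "edge": 0.8, "cloud": 1.2}}

q = {(s, a): 0.0 for s in LATENCY for a in ACTIONS}
alpha, epsilon = 0.1, 0.1

for _ in range(5000):
    state = random.choice(list(LATENCY))                          # a task arrives
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                           # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])        # exploit
    reward = -LATENCY[state][action]                              # minimise latency
    q[(state, action)] += alpha * (reward - q[(state, action)])   # incremental update

for s in LATENCY:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))          # small->local, large->edge
```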
Exploring reservoir computing: Implementation via double stochastic nanowire networks
20
Authors: 唐健峰, 夏磊, 李广隶, 付军, 段书凯, 王丽丹 《Chinese Physics B》 SCIE EI CAS CSCD 2024, No. 3, pp. 572-582 (11 pages)
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors for pulse voltages, allowing time series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real-time series datasets, making neuromorphic nanowire networks promising for physical implementation of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time series prediction
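Entry 20 implements reservoir computing physically with nanowire networks. In software, the classic form is an echo state network: a fixed random recurrent "reservoir" plus a trained linear readout. The sketch below is that generic version with illustrative hyperparameters, not the nanowire model from the paper.

```python
# Generic echo state network (ESN): a fixed random recurrent reservoir plus a
# trained linear readout. Hyperparameters are illustrative; this is not the
# nanowire implementation from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_res, spectral_radius = 200, 0.9

W_in = rng.uniform(-0.5, 0.5, n_res)                            # input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))                      # reservoir weights
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))     # echo-state scaling

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(1200))
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.lstsq(X[200:1000], y[200:1000], rcond=None)[0]  # skip warm-up
pred = X[1000:] @ W_out
print("test MSE:", float(np.mean((pred - y[1000:]) ** 2)))
```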