Journal Articles
5,430 articles found
Understanding the Theory of Karp, Miller and Winograd
1
Authors: Athanasios I. Margaris. Journal of Applied Mathematics and Physics, 2024, No. 4, pp. 1203-1236 (34 pages)
The objective of this tutorial is to present the fundamental theory of Karp, Miller and Winograd, whose seminal paper laid the foundations for the systematic description of the organization of computations in systems of uniform recurrent equations by means of graph structures, via the definition of computability conditions and techniques for the construction of one-dimensional and multi-dimensional scheduling functions. Besides the description of this theory, the paper presents improvements and revisions made by other authors and, furthermore, points out the differences regarding the conditions of causality and dependency between the general case of systems of recurrent equations and the special case of multiple nested loops.
Keywords: computability; scheduling; computations; recurrent equations
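The one-dimensional scheduling functions this abstract refers to can be illustrated with a small brute-force search (a hedged sketch, not code from the paper): in the Karp-Miller-Winograd setting, a sufficient condition for a valid linear schedule is an integer vector t with t·d ≥ 1 for every uniform dependence vector d.

```python
from itertools import product

def find_linear_schedule(deps, bound=3):
    """Search small integer vectors t such that t . d >= 1 for every
    uniform dependence vector d (a sufficient condition for a valid
    one-dimensional linear schedule of a uniform recurrence system)."""
    dim = len(deps[0])
    for t in product(range(-bound, bound + 1), repeat=dim):
        if all(sum(ti * di for ti, di in zip(t, d)) >= 1 for d in deps):
            return t
    return None  # no schedule in this range; may indicate non-computability

# A 2-D recurrence where x[i,j] depends on x[i-1,j] and x[i,j-1]
# gives dependence vectors (1,0) and (0,1).
print(find_linear_schedule([(1, 0), (0, 1)]))  # -> (1, 1)
```

A cyclic dependence such as {(1, 0), (-1, 0)} admits no such vector, matching the computability conditions the theory formalizes.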
Edge Computing Offloading Based on High-Altitude Platforms: Networks, Algorithms, and Prospects
2
Authors: Sun Enchang, Li Mengsi, He Ruolan, Zhang Hui, Zhang Yanhua. Journal of Beijing University of Technology (CAS, CSCD, Peking University Core), 2024, No. 3, pp. 348-361 (14 pages)
The combination of high-altitude platform (HAP) technology and multi-access edge computing (MEC) extends the deployment of MEC servers from the ground into the air, breaking the limitations of traditional terrestrial MEC networks and providing users with ubiquitous computation-offloading services. This paper surveys research on HAP-based MEC offloading. First, HAP-based MEC networks are analyzed from four aspects: the advantages of HAP computing nodes, network components, network architecture, and the main challenges together with the techniques that address them. Second, HAP-based MEC offloading algorithms are analyzed and compared from the perspectives of graph theory, game theory, machine learning, and federated learning. Finally, the open problems of HAP-based MEC offloading are identified and future research directions are discussed.
Keywords: high-altitude platform (HAP); multi-access edge computing (MEC); computation offloading; graph theory; game theory; machine learning
Airfoil and Array Optimization for Power Enhancement of Vertical-Axis Wind Turbines
3
Authors: Lei Ming, Fang Hui. China Offshore Platform, 2024, No. 2, pp. 12-18, 47 (8 pages)
For an H-type vertical-axis wind turbine (VAWT), computational fluid dynamics (CFD) simulations were used to link airfoil design with turbine array layout, comparing the torque coefficient C_m, power coefficient C_P, and mean power parameter Ω of the VAWT across multiple airfoils and array configurations. The results show that, compared with symmetric airfoils, asymmetric airfoils have a lower power coefficient at high tip-speed ratios, while the camber effect significantly increases the power coefficient in the downwind region. In wind-farm arrays, optimization of a three-turbine array markedly improves the power of downwind turbines: single-turbine power increases by up to 40% and overall farm power by about 20%. A five-turbine array proposed for an offshore aquaculture platform layout raises overall farm efficiency by 65% after optimization, with single-turbine performance gains of up to 100%. The results help improve the function and design of deep-sea cage systems.
Keywords: vertical-axis wind turbine; computational fluid dynamics (CFD); simulation; airfoil; wind-farm array
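The torque and power coefficients compared in this abstract follow the standard turbine definitions; a minimal sketch with illustrative values (the numbers are assumptions, not data from the paper):

```python
def power_coefficient(power_w, rho, swept_area_m2, wind_speed_ms):
    """C_P = P / (0.5 * rho * A * V^3): fraction of wind power extracted."""
    return power_w / (0.5 * rho * swept_area_m2 * wind_speed_ms ** 3)

def torque_coefficient(torque_nm, rho, swept_area_m2, wind_speed_ms, radius_m):
    """C_m = T / (0.5 * rho * A * V^2 * R); note C_P = C_m * tip-speed ratio."""
    return torque_nm / (0.5 * rho * swept_area_m2 * wind_speed_ms ** 2 * radius_m)

# Illustrative values: 1 kW from a 10 m^2 rotor in an 8 m/s wind at sea level
cp = power_coefficient(1000.0, 1.225, 10.0, 8.0)
print(round(cp, 3))  # well under the Betz limit of 16/27 ~ 0.593
```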
Static Analysis Techniques for Fixing Software Defects in MPI-Based Parallel Programs
4
Authors: Norah Abdullah Al-Johany, Sanaa Abdullah Sharaf, Fathy Elbouraey Eassa, Reem Abdulaziz Alnanih. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 3139-3173 (35 pages)
The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memory systems. However, MPI implementations can contain defects that impact the reliability and performance of parallel applications. Detecting and correcting these defects is crucial, yet there is a lack of published models specifically designed for correcting MPI defects. To address this, we propose a model for detecting and correcting MPI defects (DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blocking point-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defects addressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and message mismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a dataset consisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes, resulting in an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI model ranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correcting defects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. The DC_MPI model fills an important research gap and provides a valuable tool for improving the quality of MPI-based parallel computing systems.
Keywords: high-performance computing; parallel computing; software engineering; software defect; message passing interface; deadlock
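A toy static check in the spirit of the defect classes named above (not the DC_MPI implementation): unmatched blocking sends and receives between process pairs flag potential deadlocks or message mismatches.

```python
from collections import Counter

def check_point_to_point(calls):
    """Toy static check: 'calls' is a list of (op, src, dst) tuples with op in
    {'send', 'recv'}. Each blocking send from src to dst must pair with a recv
    at dst from src; unmatched pairs indicate a potential deadlock or message
    mismatch (a simplification of real MPI tag/communicator matching rules)."""
    sends = Counter((src, dst) for op, src, dst in calls if op == "send")
    recvs = Counter((src, dst) for op, src, dst in calls if op == "recv")
    issues = []
    for pair in sends.keys() | recvs.keys():
        if sends[pair] != recvs[pair]:
            issues.append((pair, sends[pair], recvs[pair]))
    return issues

# Rank 0 sends twice to rank 1, but rank 1 posts only one matching recv
trace = [("send", 0, 1), ("send", 0, 1), ("recv", 0, 1)]
print(check_point_to_point(trace))  # [((0, 1), 2, 1)] -> one unmatched send
```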
Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
5
Authors: Yang Yubo, Zhao Jizhe, Liu Yinjie, Hua Xiayang, Wang Tianrui, Zheng Jiyuan, Hao Zhibiao, Xiong Bing, Sun Changzheng, Han Yanjun, Wang Jian, Li Hongtao, Wang Lai, Luo Yi. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 1-23 (23 pages)
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) is challenged to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the human neural network architecture, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture such as the spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
Enhanced Temporal Correlation for Universal Lesion Detection
6
Authors: Muwei Jian, Yue Jin, Hui Yu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 3051-3063 (13 pages)
Universal lesion detection (ULD) methods for computed tomography (CT) images play a vital role in modern clinical medicine and intelligent automation. It is well known that single 2D CT slices lack spatial-temporal characteristics and contextual information compared to 3D CT blocks. However, 3D CT blocks require significantly more hardware resources during the learning phase. Therefore, efficiently exploiting the temporal correlation and spatial-temporal features of 2D CT slices is crucial for ULD tasks. In this paper, we propose a ULD network with enhanced temporal correlation for this purpose, named TCE-Net. The designed TCE module enriches the discriminative feature representation of multiple sequential CT slices. Besides, we employ multi-scale feature maps to facilitate the localization and detection of lesions of various sizes. Extensive experiments conducted on the DeepLesion benchmark demonstrate that this method achieves 66.84% and 78.18% for FS@0.5 and FS@1.0, respectively, outperforming compared state-of-the-art methods.
Keywords: universal lesion detection; computational biology; medical computing; deep learning; enhanced temporal correlation
Task Offloading in Edge Computing Using GNNs and DQN
7
Authors: Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez, Miguel Angel Anton. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2649-2671 (23 pages)
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they do not have enough available memory and processing capacity. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as edge data centers or remote cloud servers. For different reasons, it is more appropriate to offload various tasks to specific offloading destinations depending on the properties and state of the environment and the nature of the tasks. At the same time, establishing an optimal offloading policy that ensures all tasks are executed within the required latency and avoids excessive workload on specific computing centers is not easy. This study presents two alternatives for solving the offloading-decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives on a well-known edge computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution. In terms of energy efficiency, they provided similar results. Finally, the success rates of different computing centers are tested, and the lack of capacity of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local networking environment are unique in that they emulate the state and structure of the environment innovatively, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. Simultaneously, the suitability of Reinforcement Learning (RL) techniques is demonstrated by the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Keywords: edge computing; edge offloading; fog computing; task offloading
Exploring reservoir computing: Implementation via double stochastic nanowire networks
8
Authors: Tang Jianfeng, Xia Lei, Li Guangli, Fu Jun, Duan Shukai, Wang Lidan. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 572-582 (11 pages)
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance-variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors for pulse voltages, allowing time-series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time-series datasets, making neuromorphic nanowire networks promising for physical implementations of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time-series prediction
Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
9
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. China Communications (SCIE, CSCD), 2024, No. 3, pp. 230-246 (17 pages)
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task-completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task-offloading decision problem and a computing-resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task-offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing-resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
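The UCB-based offloading decision described above can be sketched as a plain UCB1 bandit over candidate offloading targets (an illustration under assumed names and a simple negative-delay reward, not the paper's device-cooperation algorithm):

```python
import math
import random

def ucb1_select(counts, mean_rewards, t):
    """Pick the arm (offloading target) maximizing mean + sqrt(2 ln t / n)."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm  # try every target once first
    return max(range(len(counts)),
               key=lambda a: mean_rewards[a] + math.sqrt(2 * math.log(t) / counts[a]))

def run_bandit(true_delays, rounds=2000, seed=0):
    """Reward = negative observed delay, so the device learns the fastest target."""
    rng = random.Random(seed)
    k = len(true_delays)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        arm = ucb1_select(counts, means, t)
        reward = -(true_delays[arm] + rng.gauss(0, 0.1))  # noisy delay sample
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts.index(max(counts))  # most-chosen offloading target

# Three targets (local, edge server, satellite) with mean delays 1.0, 0.4, 0.9
print(run_bandit([1.0, 0.4, 0.9]))  # converges to index 1, the edge server
```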
Complementary comments on diagnosis,severity and prognosis prediction of acute pancreatitis
10
Authors: Muhsin Ozgun Ozturk, Sonay Aydin. World Journal of Gastroenterology (SCIE, CAS), 2024, No. 1, pp. 108-111 (4 pages)
The radiological differential diagnosis of acute pancreatitis includes diffuse pancreatic lymphoma, diffuse autoimmune pancreatitis, and groove-located mass lesions that may mimic groove pancreatitis. Dual-energy computed tomography and diffusion-weighted magnetic resonance imaging are useful in the early diagnosis of acute pancreatitis, and dual-energy computed tomography is also useful in severity assessment and prognosis prediction. Walled-off necrosis is an important complication in terms of prognosis, and it is important to know its radiological findings and distinguish it from pseudocyst.
Keywords: acute pancreatitis; computed tomography; diffusion-weighted imaging; dual-energy computed tomography; walled-off necrosis
Effects of the initiation position on the damage and fracture characteristics of linear-charge blasting in rock
11
Authors: Chenxi Ding, Renshu Yang, Xiao Guo, Zhe Sui, Chenglong Xiao, Liyun Yang. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 443-451 (9 pages)
To study the effects of the initiation position on the damage and fracture characteristics of linear-charge blasting, blasting model experiments were conducted using computed tomography scanning and three-dimensional reconstruction methods. Fractal damage theory was used to quantify the crack distribution and damage degree of sandstone specimens after blasting. The results showed that, under both inverse and top initiation, the plugging medium of the borehole is effective owing to compression deformation and sliding frictional resistance, and the energy of the explosive gas near the top of the borehole is consumed. This hinders the effective crushing of rock near the top of the borehole, where the extent of damage to Sections I and II is less than that of Sections III and IV. In addition, the analysis revealed that under top initiation, the reflected tensile damage of the rock at the free face at the top of the borehole, together with the compression deformation of the plug and friction, consumes more blasting energy, resulting in lower blasting energy efficiency. As a result, the overall damage degree of the specimens in the top-initiation group was significantly smaller than that in the inverse-initiation group. Under inverse initiation, the blasting energy efficiency is greater, causing the specimen to experience greater damage. Therefore, in the engineering practice of rock-tunnel cut blasting, to utilize blasting energy effectively and enhance rock fragmentation, the inverse-initiation method is recommended. In addition, in three-dimensional (3D) rock blasting, the bottom of the borehole shows obvious end effects under inverse initiation, and the crack distribution at the bottom of the borehole is trumpet-shaped. The occurrence of an end effect in the 3D linear-charge blasting model experiment is related to the initiation position and the blocking condition.
Keywords: blasting; linear charge; initiation position; computed tomography; three-dimensional reconstruction; damage
From the perspective of experimental practice: High-throughput computational screening in photocatalysis
12
Authors: Yunxuan Zhao, Junyu Gao, Xuanang Bian, Han Tang, Tierui Zhang. Green Energy & Environment (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 1-6 (6 pages)
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design photocatalysts is therefore generating widespread interest in boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the view of experimental practice, especially the inefficiency of the traditional trial-and-error method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
Keywords: photocatalysis; high-throughput computational screening; photocatalyst; theoretical simulations; experiments
For Mega-Constellations: Edge Computing and Safety Management Based on Blockchain Technology
13
Authors: Zhen Zhang, Bing Guo, Chengjie Li. China Communications (SCIE, CSCD), 2024, No. 2, pp. 59-73 (15 pages)
In mega-constellation communication systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication. While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task-result reliability of 95% while improving computational speed.
Keywords: blockchain; consensus mechanism; edge computing; mega-constellation; reputation management
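The reputation mechanism described above (trustworthiness, workload, efficiency) can be sketched as a weighted score used to pick participating nodes; the weights, field names, and node values below are illustrative assumptions, not the paper's parameters:

```python
def reputation(node, w_trust=0.5, w_eff=0.3, w_load=0.2):
    """Combine trust and efficiency (higher is better) with workload
    (lower is better, so it enters as 1 - load). All inputs lie in [0, 1]."""
    return (w_trust * node["trust"]
            + w_eff * node["efficiency"]
            + w_load * (1.0 - node["load"]))

def select_nodes(nodes, k):
    """Pick the k highest-reputation nodes to participate in a task."""
    return sorted(nodes, key=reputation, reverse=True)[:k]

nodes = [
    {"id": "sat-1", "trust": 0.9, "efficiency": 0.7, "load": 0.8},
    {"id": "sat-2", "trust": 0.6, "efficiency": 0.9, "load": 0.2},
    {"id": "sat-3", "trust": 0.8, "efficiency": 0.5, "load": 0.3},
]
print([n["id"] for n in select_nodes(nodes, 2)])  # ['sat-2', 'sat-1']
```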
Air-Ground Collaborative Mobile Edge Computing: Architecture, Challenges, and Opportunities
14
Authors: Qin Zhen, He Shoushuai, Wang Hai, Qu Yuben, Dai Haipeng, Xiong Fei, Wei Zhenhua, Li Hailong. China Communications (SCIE, CSCD), 2024, No. 5, pp. 1-16 (16 pages)
By pushing computation, cache, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth-generation (5G) and future sixth-generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, through a variety of collaborative ways to provide the best possible computation services for UEs. First, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in the AGC-MEC as well as their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service-placement strategy. Finally, we highlight several potential research directions of the AGC-MEC.
Keywords: air-ground; architecture; collaborative; mobile edge computing
Redundant Data Detection and Deletion to Meet Privacy Protection Requirements in Blockchain-Based Edge Computing Environment
15
Authors: Zhang Lejun, Peng Minghui, Su Shen, Wang Weizheng, Jin Zilong, Su Yansen, Chen Huiling, Guo Ran, Sergey Gataullin. China Communications (SCIE, CSCD), 2024, No. 3, pp. 149-159 (11 pages)
With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete piece of data stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant-data detection method that meets privacy-protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. It has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that do not meet the requirements of hot data, our paper proposes a redundant-data deletion scheme that preserves dynamic data integrity. We use Content Extraction Signature (CES) to generate the signature of the remaining hot data after the redundant data are deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
Keywords: blockchain; data integrity; edge computing; privacy protection; redundant data
Performance Comparison of Hyper-V and KVM for Cryptographic Tasks in Cloud Computing
16
Authors: Nader Abdel Karim, Osama A. Khashan, Waleed K. Abdulraheem, Moutaz Alazab, Hasan Kanaker, Mahmoud E. Farfoura, Mohammad Alshinwan. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2023-2045 (23 pages)
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus on how various hypervisors perform while handling other cryptographic workloads.
Keywords: cloud computing; performance; virtualization; hypervisors; Hyper-V; KVM; cryptographic algorithm
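The per-algorithm timing methodology behind such comparisons can be sketched with a generic harness (Python's standard library has no AES, so a hash transform stands in for the cipher; the measurement pattern, not the specific algorithm, is the point):

```python
import hashlib
import os
import time

def benchmark(transform, payload, repeats=5):
    """Run `transform` on `payload` several times and report the best
    wall-clock and CPU times, mirroring an encrypt/decrypt benchmark
    across file sizes."""
    best_wall = best_cpu = float("inf")
    for _ in range(repeats):
        w0, c0 = time.perf_counter(), time.process_time()
        transform(payload)
        best_wall = min(best_wall, time.perf_counter() - w0)
        best_cpu = min(best_cpu, time.process_time() - c0)
    return best_wall, best_cpu

# Stand-in workload: hash 1 MiB of random bytes (an AES call would slot in here)
data = os.urandom(1024 * 1024)
wall, cpu = benchmark(lambda b: hashlib.sha256(b).digest(), data)
print(f"best wall={wall:.6f}s cpu={cpu:.6f}s")
```

Repeating over several payload sizes and taking the best of a few runs reduces scheduler noise, which matters when the hypervisor-induced differences are on the order of 10-15%.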
Intelligent Solution System for Cloud Security Based on Equity Distribution:Model and Algorithms
17
Authors: Sarah Mustafa Eljack, Mahdi Jemmali, Mohsen Denden, Mutasim Al Sadig, Abdullah M. Algashami, Sadok Turki. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1461-1479 (19 pages)
In the cloud environment, ensuring a high level of data security is in high demand. Data-planning storage optimization is part of the whole security process in the cloud environment: it enables data security by avoiding the risk of data loss and data overlapping. The development of data-flow scheduling approaches in the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which must periodically collect information on the state of the network links. The second is the monitoring agent, which must analyze and classify this information, make a decision on the state of the link, and finally transmit this information to the scheduler. The third is the scheduler, which must consider the previous information to transfer user data, including fair distribution and reliable paths. Each part of the proposed model requires the development of its own algorithms. In this article, we are interested in the development of data-transfer algorithms, including fairness of distribution with consideration of a stable link state. These algorithms are based on the grouping of transmitted files and an iterative method. The proposed algorithms obtain an approximate solution to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME), with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
Keywords: cyber-security; cloud computing; cloud security; algorithms; heuristics
Prediction of the thermal conductivity of Mg–Al–La alloys by CALPHAD method
18
作者 Hongxia Li Wenjun Xu +5 位作者 Yufei Zhang Shenglan Yang Lijun Zhang Bin Liu Qun Luo Qian Li 《International Journal of Minerals,Metallurgy and Materials》 SCIE EI CSCD 2024年第1期129-137,共9页
Mg-Al alloys have excellent strength and ductility but relatively low thermal conductivity due to Al addition.The accurate prediction of thermal conductivity is a prerequisite for designing Mg-Al alloys with high ther... Mg-Al alloys have excellent strength and ductility but relatively low thermal conductivity due to Al addition.The accurate prediction of thermal conductivity is a prerequisite for designing Mg-Al alloys with high thermal conductivity.Thus,databases for predicting temperature-and composition-dependent thermal conductivities must be established.In this study,Mg-Al-La alloys with different contents of Al2La,Al3La,and Al11La3phases and solid solubility of Al in the α-Mg phase were designed.The influence of the second phase(s) and Al solid solubility on thermal conductivity was investigated.Experimental results revealed a second phase transformation from Al_(2)La to Al_(3)La and further to Al_(11)La_(3)with the increasing Al content at a constant La amount.The degree of the negative effect of the second phase(s) on thermal diffusivity followed the sequence of Al2La>Al3La>Al_(11)La_(3).Compared with the second phase,an increase in the solid solubility of Al in α-Mg remarkably reduced the thermal conductivity.On the basis of the experimental data,a database of the reciprocal thermal diffusivity of the Mg-Al-La system was established by calculation of the phase diagram (CALPHAD)method.With a standard error of±1.2 W/(m·K),the predicted results were in good agreement with the experimental data.The established database can be used to design Mg-Al alloys with high thermal conductivity and provide valuable guidance for expanding their application prospects. 展开更多
Keywords: magnesium alloy, thermal conductivity, thermodynamic calculations, materials computation
A Novel Scheduling Framework for Multi-Programming Quantum Computing in Cloud Environment
19
Authors: Danyang Zheng, Jinchen Xv, Feng Yue, Qiming Du, Zhiheng Wang, Zheng Shan. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 1957-1974 (18 pages)
As cloud quantum computing gains broader acceptance, a growing number of researchers are directing their focus towards this domain. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity, which in turn hampers users from achieving optimal satisfaction. Therefore, cloud quantum computing service providers require a unified analysis and scheduling framework for their quantum resources and user jobs to meet the ever-growing usage demands. This paper introduces a new multi-programming scheduling framework for quantum computing in a cloud environment. The framework addresses the issue of limited quantum computing resources in cloud environments and ensures a satisfactory user experience. It introduces three innovative designs: 1) The framework automatically allocates tasks to different quantum backends while ensuring fairness among users by considering both the cloud-based quantum resources and the user-submitted tasks. 2) A multi-programming mechanism is employed across different quantum backends to enhance the overall throughput of the quantum cloud. In comparison to conventional task schedulers, the proposed framework achieves a throughput improvement of more than two-fold in the quantum cloud. 3) The framework can balance fidelity and user waiting time by adaptively adjusting scheduling parameters.
Keywords: quantum computing, scheduling, multi-programming, qubit mapping
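The abstract only outlines the scheduler's goals (fairness across users, packing several jobs onto each backend). As a purely illustrative toy sketch of those two ideas, not the paper's actual algorithm, one could serve users round-robin and greedily place each job on the backend with the most free qubits; every name and data structure here is hypothetical:

```python
from collections import defaultdict, deque

def schedule(jobs, capacity):
    """Toy fairness-aware scheduler (illustrative only, not the paper's method).

    jobs: list of (user, n_qubits) tuples.
    capacity: dict backend_name -> free qubit capacity; several jobs may
    share one backend, a crude stand-in for multi-programming.
    Users take turns round-robin so no single user monopolizes the cloud.
    """
    queues = defaultdict(deque)
    for user, q in jobs:
        queues[user].append(q)
    assigned = defaultdict(list)
    users = deque(sorted(queues))           # round-robin order over users
    while users:
        user = users.popleft()
        qubits = queues[user].popleft()
        # place the job on the backend with the most remaining capacity
        fits = [(cap, name) for name, cap in capacity.items() if cap >= qubits]
        if fits:
            _, best = max(fits)
            capacity[best] -= qubits
            assigned[best].append((user, qubits))
        # a job that fits nowhere is simply dropped in this sketch
        if queues[user]:
            users.append(user)              # user rejoins the back of the line
    return dict(assigned)
```

A real scheduler would also weigh fidelity and waiting time, which the paper reports balancing via adaptive parameters.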
IoT Task Offloading in Edge Computing Using Non-Cooperative Game Theory for Healthcare Systems
20
Authors: Dinesh Mavaluru, Chettupally Anil Carie, Ahmed I. Alutaibi, Satish Anamalamudi, Bayapa Reddy Narapureddy, Murali Krishna Enduri, Md Ezaz Ahmed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 1487-1503 (17 pages)
In this paper, we present a comprehensive system model for Industrial Internet of Things (IIoT) networks empowered by Non-Orthogonal Multiple Access (NOMA) and Mobile Edge Computing (MEC) technologies. The network comprises essential components such as base stations, edge servers, and numerous IIoT devices characterized by limited energy and computing capacities. The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption. The system operates in discrete time slots and employs a quasi-static approach, with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context. This study makes valuable contributions to the field by enhancing the understanding of resource-efficient management and task allocation, particularly relevant in real-time industrial applications. Experimental results indicate that the proposed algorithm significantly outperforms existing approaches, reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in QnO. Moreover, the algorithm effectively balances complexity and network performance, as demonstrated when reducing the number of devices in each group (Ng) from 200 to 50, resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption. This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
Keywords: Internet of Things, edge computing, offloading, NOMA
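The paper's title invokes non-cooperative game theory for offloading decisions. A classic way such games are solved, shown here as a minimal sketch under assumed costs (this is generic best-response dynamics for a congestion game, not the authors' specific model; all parameters are hypothetical):

```python
def best_response_offloading(local_cost, edge_base, congestion, max_rounds=100):
    """Best-response dynamics for a toy offloading congestion game.

    Each device i either computes locally (cost local_cost[i]) or offloads,
    paying edge_base + congestion * (number of offloading devices).
    Devices take turns switching to their cheaper option; the loop stops
    when no device wants to deviate, i.e. at a pure Nash equilibrium.
    """
    n = len(local_cost)
    offload = [False] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            k = sum(offload) - offload[i]          # other offloaders
            edge_cost = edge_base + congestion * (k + 1)
            want = edge_cost < local_cost[i]
            if want != offload[i]:
                offload[i] = want
                changed = True
        if not changed:                            # equilibrium reached
            return offload
    return offload
```

Congestion games of this form always admit a pure Nash equilibrium, which is why best-response iteration is a common solution device in offloading papers; the authors' actual formulation additionally handles NOMA and queueing-delay constraints.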