Journal Articles
42 articles found
1. A Public Blockchain Consensus Mechanism for Fault-Tolerant Distributed Computing in LEO Satellite Communications (Cited by: 1)
Authors: Zhen Zhang, Bing Guo, Lidong Zhu, Yan Shen, Chaoxia Qin, Chengjie Li. China Communications, SCIE/CSCD indexed, 2022, Issue 7, pp. 110-123.
In LEO (Low Earth Orbit) satellite communication systems, the satellite network is made up of a large number of satellites, and the dynamically changing network environment affects the results of distributed computing. To improve the fault tolerance rate, a novel public blockchain consensus mechanism that applies a distributed computing architecture in a public network is proposed. Redundant calculation on the blockchain ensures the credibility of the results, and the transactions carrying the calculation results of a task are stored distributively, in sequence, in a Directed Acyclic Graph (DAG). The transactions issued by nodes are connected to form a net, which can quickly provide node reputation evaluation that does not rely on third parties. Simulations show that the proposed blockchain has the following advantages: 1. The task processing speed of the blockchain can approach that of the fastest node in the entire blockchain; 2. When the tasks' arrival time intervals and the number of demanded working nodes (WNs) meet certain conditions, the network can tolerate more than 50% malicious devices; 3. Whether the number of nodes in the blockchain increases or decreases, the network remains robust by adjusting the task arrival time interval and the demanded WNs.
Keywords: distributed computing; public blockchain network; consensus mechanism; credibility; fault tolerance
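For illustration, a minimal Python sketch of the idea described above: task results are issued as DAG-linked transactions, and node reputation is derived from agreement with the per-task majority, without any third party. The Transaction class, the majority-vote rule, and all names are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter, defaultdict

class Transaction:
    def __init__(self, node_id, task_id, result, parents):
        self.node_id = node_id   # worker node that issued the transaction
        self.task_id = task_id
        self.result = result     # redundant computation result for the task
        self.parents = parents   # links to earlier transactions -> forms the DAG

def reputation(transactions):
    """Score each node by how often its result matches the per-task majority."""
    by_task = defaultdict(list)
    for tx in transactions:
        by_task[tx.task_id].append(tx)
    hits, totals = Counter(), Counter()
    for task_txs in by_task.values():
        majority, _ = Counter(tx.result for tx in task_txs).most_common(1)[0]
        for tx in task_txs:
            totals[tx.node_id] += 1
            hits[tx.node_id] += (tx.result == majority)
    return {n: hits[n] / totals[n] for n in totals}

txs = [Transaction("A", 1, 42, []), Transaction("B", 1, 42, []), Transaction("C", 1, 7, [])]
print(reputation(txs))   # {'A': 1.0, 'B': 1.0, 'C': 0.0}
```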
2. A Distributed Computing Framework Based on Lightweight Variance Reduction Method to Accelerate Machine Learning Training on Blockchain (Cited by: 1)
Authors: Zhen Huang, Feng Liu, Mingxing Tang, Jinyan Qiu, Yuxing Peng. China Communications, SCIE/CSCD indexed, 2020, Issue 9, pp. 77-89.
To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place heavy demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on a variance reduction method; it is a lightweight, low-overhead, and parallelized scheme for the model training process. To validate these claims, we conducted several experiments on multiple classical datasets. Results show that the proposed computing framework steadily accelerates the training process of the solver in both local and distributed modes.
Keywords: machine learning; optimization algorithm; blockchain; distributed computing; variance reduction
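A minimal sketch of the variance-reduction idea underlying such frameworks, in the common SVRG form (the paper couples a variance-reduced estimator with L-BFGS, which is omitted here; the data, loss, and step size are toy assumptions):

```python
import numpy as np

def svrg_gradient(grad_i, w, w_snapshot, full_grad_snapshot, i):
    """Variance-reduced stochastic gradient:
    g_i(w) - g_i(w_snapshot) + full_grad(w_snapshot)."""
    return grad_i(w, i) - grad_i(w_snapshot, i) + full_grad_snapshot

# Example: least-squares loss 0.5*(x_i.w - y_i)^2 on toy data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]

w = np.zeros(5)
for epoch in range(10):
    w_snap = w.copy()
    mu = np.mean([grad_i(w_snap, i) for i in range(len(y))], axis=0)  # full gradient
    for _ in range(len(y)):
        i = rng.integers(len(y))
        w -= 0.05 * svrg_gradient(grad_i, w, w_snap, mu, i)
print(np.linalg.norm(X @ w - y))   # residual shrinks as epochs proceed
```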
3. G-Phenomena as a Base of Scalable Distributed Computing—G-Phenomena in Moore's Law
Authors: Karolj Skala, Davor Davidovic, Tomislav Lipic, Ivan Sovic. International Journal of Internet and Distributed Systems, 2014, Issue 1, pp. 1-4.
Today we witness the exponential growth of scientific research. This fast growth is possible thanks to the rapid development of computing systems, from their first days in 1947 and the invention of the transistor to the present day's high-performance, scalable distributed computing systems. This fast growth of computing systems was first observed by Gordon E. Moore in 1965 and postulated as Moore's Law. For the development of scalable distributed computing systems, the year 2000 was very special: the first GHz-speed processor, GB-size memory, and GB/s network data transmission were achieved. Interestingly, in the same year usable Grid computing systems emerged, which gave a strong impulse to the rapid development of distributed computing systems. This paper recognizes these facts that occurred in the year 2000 as the G-phenomena, a millennium cornerstone for the rapid development of scalable distributed systems that evolved around the Grid and Cloud computing paradigms.
Keywords: historical development of computing; G-Phenomena; Moore's Law; distributed computing; scalability; Grid computing; Cloud computing
4. Survey of Distributed Computing Frameworks for Supporting Big Data Analysis
Authors: Xudong Sun, Yulin He, Dingming Wu, Joshua Zhexue Huang. Big Data Mining and Analytics, EI/CSCD indexed, 2023, Issue 2, pp. 154-169.
Distributed computing frameworks are the fundamental component of distributed computing systems. They provide an essential way to support efficient processing of big data on clusters or in the cloud. The size of big data increases at a pace faster than the increase in clusters' big data processing capacity. Thus, distributed computing frameworks based on the MapReduce computing model are not adequate to support big data analysis tasks, which often require running complex analytical algorithms on extremely big data sets of terabytes. In performing such tasks, these frameworks face three challenges: computational inefficiency due to high I/O and communication costs, non-scalability to big data due to memory limits, and limited analytical algorithms because many serial algorithms cannot be implemented in the MapReduce programming model. New distributed computing frameworks need to be developed to conquer these challenges. In this paper, we review MapReduce-type distributed computing frameworks currently used in handling big data and discuss their problems when conducting big data analysis. In addition, we present a non-MapReduce distributed computing framework that has the potential to overcome these big data analysis challenges.
Keywords: distributed computing frameworks; big data analysis; approximate computing; MapReduce computing model
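A single-process illustration of the MapReduce programming model the survey examines; real frameworks shard the map, shuffle, and reduce phases across a cluster, which this sketch collapses into sorting by key:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(record):             # emit (key, value) pairs
    for word in record.split():
        yield word, 1

def reduce_phase(key, values):     # aggregate all values that share a key
    return key, sum(values)

records = ["big data analysis", "big data frameworks"]
pairs = sorted(kv for r in records for kv in map_phase(r))   # shuffle = sort by key
result = [reduce_phase(k, (v for _, v in g)) for k, g in groupby(pairs, key=itemgetter(0))]
print(result)   # [('analysis', 1), ('big', 2), ('data', 2), ('frameworks', 1)]
```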
5. L1-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
Authors: Chuandong Qin, Yu Cao, Liqun Meng. Computers, Materials & Continua, SCIE/EI indexed, 2024, Issue 5, pp. 1975-1994.
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Automated brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection, and gradient descent methods are the mainstream algorithms for solving such models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. First, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the non-differentiability at the zero point encountered by the traditional hinge loss during gradient descent optimization. Second, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum, and distributed adaptive PGD with momentum (DPGD), are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, fully utilizing a computer's multi-core resources. Thanks to the sparsity induced by L1 regularization on the parameters, it exhibits significantly accelerated convergence; in terms of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve state-of-the-art accuracy and efficiency in brain tumor detection. Starting from pre-trained models, both PGD and DPGD outperform other models, reaching an accuracy of 95.21%.
Keywords: support vector machine; proximal stochastic gradient descent; brain tumor detection; distributed computing
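A hedged sketch of one proximal-SGD-with-momentum step for an L1-regularized smooth-hinge SVM. The quadratic smoothing, the momentum form, and all hyperparameters are common choices assumed for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def smooth_hinge_grad(w, x, y, gamma=1.0):
    """Gradient of a quadratically smoothed hinge loss (differentiable at 0)."""
    margin = y * (x @ w)
    if margin >= 1:
        return np.zeros_like(w)
    if margin <= 1 - gamma:
        return -y * x
    return -(1 - margin) / gamma * y * x

def soft_threshold(w, t):
    """Proximal operator of t*||w||_1 -- this is what induces sparsity."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def pgd_momentum_step(w, v, x, y, lr=0.1, beta=0.9, lam=0.01):
    v = beta * v + smooth_hinge_grad(w, x, y)       # momentum buffer
    return soft_threshold(w - lr * v, lr * lam), v  # gradient step, then prox

w, v = np.zeros(3), np.zeros(3)
w, v = pgd_momentum_step(w, v, np.array([1.0, -2.0, 0.5]), 1.0)
print(w)   # small, partially thresholded update
```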
6. Research on a Fog Computing Architecture and BP Algorithm Application for Medical Big Data
Authors: Baoling Qin. Intelligent Automation & Soft Computing, SCIE indexed, 2023, Issue 7, pp. 255-267.
Although the Internet of Things has been widely applied, problems remain in applying cloud computing to digital smart medical big data collection, processing, analysis, and storage, especially the low efficiency of medical diagnosis. With the wide application of the Internet of Things and big data in the medical field, medical big data is growing geometrically, resulting in cloud service overload, insufficient storage, communication delay, and network congestion. To solve these medical and network problems, a medical-big-data-oriented fog computing architecture and a BP algorithm application are proposed, and their structural advantages and characteristics are studied. This architecture enables the medical big data generated by medical edge devices and the existing data in the cloud service center to be calculated, compared, and analyzed at the fog nodes through the Internet of Things, and the diagnosis results are designed to reduce business processing delay and improve the diagnostic effect. Considering the weak computing power of each edge device, the artificial-intelligence BP neural network algorithm is used in the core computing model of the medical diagnosis system to improve the system's computing power, enhance medical intelligence-aided decision-making, and improve clinical diagnosis and treatment efficiency. In the application process, combining the characteristics of medical big data technology, through fog architecture design and big data technology integration, we study the processing and analysis of the heterogeneous data of a medical diagnosis system in the context of the Internet of Things. The results are promising: the medical platform network is smooth, the data storage space is sufficient, data processing and analysis are fast, and the diagnostic effect is remarkable, making it a good assistant to doctors. It not only effectively solves the problems of low clinical diagnosis and treatment efficiency and quality, but also reduces patients' waiting time, effectively alleviates the doctor-patient contradiction, and improves medical service quality and management.
Keywords: medical big data; IoT; fog computing; distributed computing; BP algorithm model
7. Data Utilization-Based Adaptive Data Management Method for Distributed Storage System in WAN Environment
Authors: Sanghyuck Nam, Jaehwan Lee, Kyoungchan Kim, Mingyu Jo, Sangoh Park. Computer Systems Science & Engineering, SCIE/EI indexed, 2023, Issue 9, pp. 3457-3469.
Recently, research on distributed storage systems that efficiently manage large amounts of data has been active, following increases in data production and demand. Traditional standalone storage systems face physical expansion limits, such as I/O and file system capacity. However, existing distributed storage systems do not consider where data is consumed; they focus on data dissemination and optimizing the lookup cost of data location. This leads to performance degradation due to low locality in a Wide Area Network (WAN) environment with high network latency, hinders deploying distributed storage systems across multiple data centers over a WAN, and lowers their scalability in accommodating data storage needs. This paper proposes a method for distributing data in a WAN environment that considers network latency and data locality to solve this problem and increase overall system performance. The proposed method monitors data utilization and locality to classify data temperature as hot, warm, or cold. With the assigned data temperature, the proposed algorithm adaptively selects the appropriate data center and places data accordingly, overcoming the excess WAN latency that would otherwise degrade overall system performance. This paper also conducts simulations to evaluate the proposed and existing distributed storage methods. The results show that the proposed method reduced latency by 38% compared to the existing method. Therefore, the proposed method can be used in large-scale distributed storage systems over a WAN environment to improve latency and performance compared to existing methods such as consistent hashing.
Keywords: distributed system; distributed storage; distributed computing; object storage
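An illustrative-only sketch of the data-temperature idea: classify objects by utilization and place them by temperature. The thresholds and the placement policy below are assumptions, not the paper's exact algorithm:

```python
def temperature(accesses_per_day, hot=100, warm=10):
    if accesses_per_day >= hot:
        return "hot"
    if accesses_per_day >= warm:
        return "warm"
    return "cold"

def place(obj_accesses, client_latency_ms):
    """Put hot data in the lowest-latency data center; push cold data far away."""
    temp = temperature(obj_accesses)
    if temp == "hot":
        return min(client_latency_ms, key=client_latency_ms.get)
    if temp == "warm":
        return sorted(client_latency_ms, key=client_latency_ms.get)[1]
    return max(client_latency_ms, key=client_latency_ms.get)  # stand-in for "cheap"

dcs = {"dc-seoul": 8, "dc-frankfurt": 120, "dc-virginia": 180}
print(place(250, dcs))   # hot object -> 'dc-seoul'
```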
8. Video-based Person Re-identification Based on Distributed Cloud Computing
Authors: Chengyan Zhong, Xiaoyu Jiang, Guanqiu Qi. Journal of Artificial Intelligence and Technology, 2021, Issue 2, pp. 110-120.
Person re-identification has been a hot research issue in the field of computer vision. In recent years, with the maturation of the theory, a large number of excellent methods have been proposed. However, large-scale datasets and huge networks make training a time-consuming process, and the parameters and values generated during training also take up substantial computer resources. Therefore, we apply a distributed cloud computing method to the person re-identification task. Using distributed data storage, pedestrian datasets and parameters are stored on cloud nodes. To speed up operational efficiency and increase fault tolerance, we add a data redundancy mechanism that copies and stores data blocks on different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete training via model parallelism. By comparing and analyzing the accuracy and speed of the distributed model on the video-based dataset MARS, the results show that our distributed model trains faster.
Keywords: person re-identification; distributed cloud computing; data redundancy mechanism
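A sketch of the replica-placement idea (copies of each block stored on distinct nodes) using plain consistent hashing as a baseline; the paper's hash loop optimization refines this kind of scheme:

```python
import hashlib
from bisect import bisect

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)  # nodes on a hash ring

    def replicas(self, block_id, k=3):
        """Walk clockwise from the block's hash, taking k distinct nodes."""
        idx = bisect(self.points, (h(block_id), ""))
        chosen = []
        for i in range(len(self.points)):
            node = self.points[(idx + i) % len(self.points)][1]
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == k:
                break
        return chosen

ring = Ring([f"node{i}" for i in range(8)])
print(ring.replicas("MARS/shard-17"))   # e.g. three distinct replica holders
```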
9. A Distributed LRTCO Algorithm in Large-Scale DVE Multimedia Systems
Authors: Hangjun Zhou, Guang Sun, Sha Fu, Wangdong Jiang, Tingting Xie, Danqing Duan. Computers, Materials & Continua, SCIE/EI indexed, 2018, Issue 7, pp. 73-89.
In large-scale Distributed Virtual Environment (DVE) multimedia systems, one key challenge is to preserve causal order delivery of messages in real time, in a distributed manner. Most existing causal order control approaches with real-time constraints use vector time as causal control information, which is closely coupled with system scale. As the scale expands, each message carries a large amount of control information, introducing too much network transmission overhead to maintain real-time causal order delivery. In this article, a novel Lightweight Real-Time Causal Order (LRTCO) algorithm is proposed for large-scale DVE multimedia systems. LRTCO predicts and compares the network transmission times of messages so as to select appropriate causal control information, whose amount adapts dynamically to network latency variations and is unconcerned with system scale. The control information in LRTCO effectively preserves causal order delivery of messages and is lightweight enough to maintain the real-time property of DVE systems. Experimental results demonstrate that LRTCO incurs low transmission overhead and communication bandwidth, reduces causal order violations efficiently, and improves the scalability of DVE systems.
Keywords: distributed computing; distributed virtual environment; multimedia system; causality violation; causal order delivery; real time
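For context, a sketch of the vector-clock baseline that LRTCO improves upon: the causal control information below grows linearly with the number of processes, which is exactly the scale-coupled overhead the paper avoids:

```python
def happened_before(vc_a, vc_b):
    """a -> b iff vc_a <= vc_b componentwise and vc_a != vc_b."""
    return all(x <= y for x, y in zip(vc_a, vc_b)) and vc_a != vc_b

def deliverable(msg_vc, sender, local_vc):
    """Classic causal-delivery test: the message is the sender's next event and
    everything it causally depends on has already been delivered locally."""
    return (msg_vc[sender] == local_vc[sender] + 1 and
            all(msg_vc[k] <= local_vc[k] for k in range(len(local_vc)) if k != sender))

print(happened_before([1, 0, 0], [2, 1, 0]))                   # True
print(deliverable([1, 1, 0], sender=1, local_vc=[1, 0, 0]))    # True
```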
10. A Distributed Framework for Large-scale Protein-protein Interaction Data Analysis and Prediction Using MapReduce
Authors: Lun Hu, Shicheng Yang, Xin Luo, Huaqiang Yuan, Khaled Sedraoui, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica, SCIE/EI/CSCD indexed, 2022, Issue 1, pp. 160-172.
Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making efficient analysis very difficult. To address this problem, this paper presents a distributed framework that reimplements one of the state-of-the-art algorithms, CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when applied to large-scale PPI data analysis and prediction, and respective solutions are devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, the procedure is modified to follow the MapReduce framework so that the prediction task is performed distributively. A series of extensive experiments evaluates the performance of our framework in terms of both efficiency and accuracy. Experimental results demonstrate that the proposed framework improves computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
Keywords: distributed computing; large-scale prediction; machine learning; MapReduce; protein-protein interaction (PPI)
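A sketch of the memory-saving idea behind a tree-based structure for protein sequence data: shared subsequence prefixes are stored once in a trie instead of being repeated per sequence. CoFex's actual feature extraction and the MapReduce wrapping are omitted; the k-mer length and sequences are toy assumptions:

```python
class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children, self.count = {}, 0

def insert(root, kmer):
    node = root
    for aa in kmer:                 # one branch per amino-acid character
        node = node.children.setdefault(aa, TrieNode())
    node.count += 1                 # how many times this k-mer was seen

root = TrieNode()
for seq in ["MKTAYIAK", "MKTAYLLQ"]:
    for i in range(len(seq) - 3):
        insert(root, seq[i:i + 4])  # 4-mers; both sequences share the "MKTA" path
print(len(root.children))           # distinct first letters among stored 4-mers
```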
11. Cluster-Based Distributed Algorithms for Very Large Linear Equations
Authors: Zhimin Gu, Marta Kwiatkowska, Yinxia Fu. Journal of Beijing Institute of Technology, EI/CAS indexed, 2006, Issue 1, pp. 66-70.
In many applications, such as computational fluid dynamics, weather prediction, image processing, and Markov chain state analysis, the order n of the matrix is often very large, and no serial algorithm can solve the problem. A distributed cluster-based solution for very large linear equations is discussed; it includes definitions of notation, matrix partitioning, a communication mechanism, and a master-slave algorithm. The computing cost is O(n³/N), the memory cost is O(n²/N), the I/O cost is O(n²/N), and the communication cost is O(Nn), where N is the number of computing nodes or processes. Tests show that the solution can effectively solve double-precision matrices up to 10⁶ × 10⁶.
Keywords: Gaussian elimination; partition; cluster-based distributed computing
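A sketch of the row-block partition behind the stated costs: each of N nodes holds about n/N rows (O(n²/N) memory) and performs O(n³/N) of the elimination work; the master-slave communication mechanism is omitted:

```python
import numpy as np

def partition_rows(A, N):
    """Split an n x n system's rows into N contiguous blocks, one per node."""
    return np.array_split(np.arange(A.shape[0]), N)

n, N = 8, 4
A = np.random.rand(n, n)
for node, rows in enumerate(partition_rows(A, N)):
    print(f"node {node} holds rows {rows.tolist()}")   # n/N = 2 rows each
```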
12. Distributed Least-Squares Iterative Methods in Large-Scale Networks: A Survey
Authors: Lei Shi, Liang Zhao, Wenzhan Song, Goutham Kamath, Yuan Wu, Xuefeng Liu. ZTE Communications, 2017, Issue 3, pp. 37-45.
Many science and engineering applications involve solving a linear least-squares system formed from field measurements. In distributed cyber-physical systems (CPS), each sensor node used for measurement often only knows partial, independent rows of the least-squares system. To solve the least-squares problem centrally, all measurements must be gathered at one location before performing the computation. Such data collection and computation are inefficient because of bandwidth and time constraints, and are sometimes infeasible because of data privacy concerns. Iterative methods are natural candidates for solving this problem, and there are many studies regarding them. However, most of the proposed solutions concern centralized/parallel computation, while only a few can be applied in distributed networks; yet distributed computation is strongly preferred or demanded in many real-world applications, e.g., smart grid and target tracking. This paper surveys representative iterative methods for distributed least-squares in networks.
Keywords: distributed computing; iterative methods; least squares; mesh network
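A minimal distributed-gradient sketch for least squares in this setting: each node knows only its own rows (A_i, b_i) and contributes the local gradient A_iᵀ(A_i x − b_i), so raw measurements are never gathered centrally. Consensus and network details are omitted, and the data and step size are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
A, x_true = rng.normal(size=(60, 4)), np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true
node_rows = np.array_split(np.arange(60), 3)                 # 3 sensor nodes

x = np.zeros(4)
for _ in range(500):
    g = sum(A[r].T @ (A[r] @ x - b[r]) for r in node_rows)   # local grads, summed
    x -= 1e-3 * g
print(np.round(x, 3))   # converges to x_true
```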
13. Spatial Management of Distributed Social Systems
Authors: Peter Simon Sapaty. Journal of Computer Science Research, 2020, Issue 3, pp. 1-5.
The paper describes the use of a high-level spatial grasp model and technology, invented, developed, and tested in different countries, capable of solving important problems in large social systems, which may be represented as dynamic, self-evolving, distributed social networks. The approach finds important solutions on a holistic level by spatial navigation and parallel pattern matching of social networks with active self-propagating scenarios represented in a special recursive language. It effectively hides traditional system management routines inside the distributed, networked language implementation, often yielding solution code that is hundreds of times shorter and simpler. The paper highlights the demands of efficient simulation of social systems, briefly describes the technology used, and provides programming examples for solutions of practical problems.
Keywords: social systems; social networks; parallel and distributed computing; Spatial Grasp Technology; Spatial Grasp Language; holistic solutions
14. Intelligent Ironmaking Optimization Service on a Cloud Computing Platform by Digital Twin (Cited by: 2)
Authors: Heng Zhou, Chunjie Yang, Youxian Sun. Engineering, SCIE/EI indexed, 2021, Issue 9, pp. 1274-1281.
The shortage of computation methods and storage devices has largely limited the development of multi-objective optimization in industrial processes. To improve the operational levels of the process industries, we propose a multi-objective optimization framework based on cloud services and a cloud distribution system. Real-time data from manufacturing procedures are first temporarily stored in a local database and then transferred to a relational database in the cloud. Next, a distribution system with elastic compute power is set up for the optimization framework. Finally, a multi-objective optimization model based on deep learning and an evolutionary algorithm is proposed to optimize several conflicting goals of the blast furnace ironmaking process. With the application of this optimization service in a cloud factory, iron production increased by 83.91 t·d⁻¹, the coke ratio decreased by 13.50 kg·t⁻¹, and the silicon content decreased by an average of 0.047%.
Keywords: cloud factory; blast furnace; multi-objective optimization; distributed computation
15. Resource Load Prediction of Internet of Vehicles Mobile Cloud Computing
Authors: Wenbin Bi, Fang Yu, Ning Cao, Russell Higgs. Computers, Materials & Continua, SCIE/EI indexed, 2022, Issue 10, pp. 165-180.
Load time-series data in mobile cloud computing for the Internet of Vehicles (IoV) usually have composite linear and nonlinear characteristics. To accurately describe the dynamic trend of such loads, this study designs a load prediction method using the resource scheduling model for IoV mobile cloud computing. First, a chaotic analysis algorithm processes the load time series and constructs learning samples for load prediction. Second, a support vector machine (SVM) is used to establish the load prediction model, and an improved artificial bee colony (IABC) function is designed to enhance the SVM's learning ability. Finally, a CloudSim simulation platform is created to select per-minute CPU load history data from a mobile cloud computing system composed of 50 vehicles as the data set, and a comparison experiment is conducted against a grey model, a back-propagation neural network, a radial basis function (RBF) neural network, and an RBF-kernel SVM. As the experimental results show, the prediction accuracy of the proposed method is significantly higher than that of the other models, with a significantly reduced real-time prediction error for resource loading in mobile cloud environments. Compared with single-prediction models, the proposed method builds multidimensional time series to capture complex load behavior, fits and describes load change trends, approximates load time variability more precisely, and delivers strong generalization ability.
Keywords: Internet of Vehicles; mobile cloud computing; resource load prediction; multi-distributed resource computing scheduling; chaos analysis algorithm; improved artificial bee colony function
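A sketch of how the chaotic-analysis step can turn a load series into supervised samples via delay embedding (phase-space reconstruction); the embedding dimension and delay below are illustrative, whereas the paper derives them from chaos analysis:

```python
import numpy as np

def embed(series, dim=3, tau=2):
    """Phase-space reconstruction: X[t] = (s[t], s[t-tau], ..., s[t-(dim-1)tau]),
    with one-step-ahead target y[t] = s[t+1]."""
    start = (dim - 1) * tau
    X = np.array([[series[t - k * tau] for k in range(dim)]
                  for t in range(start, len(series) - 1)])
    y = series[start + 1:]
    return X, y

load = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(2).normal(size=200)
X, y = embed(load)
print(X.shape, y.shape)   # (195, 3) (195,) -- ready for SVM training
```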
16. Discrete Event Simulation-Based Evaluation of Single-Lane Synchronized Dual-Traffic-Light Intersections
Authors: Chimezie Calistus, Ogharandukun Martin, Abdullahi Monday, Essien Joe. Journal of Computer and Communications, 2023, Issue 10, pp. 82-100.
This research involved an exploratory evaluation of the dynamics of vehicular traffic on a road network across two traffic-light-controlled junctions. The study uses a one-kilometer road system modelled in AnyLogic version 8.8.4. AnyLogic is a multi-paradigm simulation tool that supports three main simulation methodologies: discrete event simulation, agent-based modeling, and system dynamics modeling. The system is used to evaluate the implications of stochastic time-based vehicle variables on the general efficiency of road use. Road use efficiency in this model is the percentage of entering vehicles that exit the model within a one-hour simulation period. The study deduced that, for the model under review, an increase in entry-point time delay has a dominant influence on the efficiency of road use, far beyond any other consideration. This study therefore presents a novel approach that leverages discrete event simulation to facilitate efficient road management with a focus on optimum road use efficiency. The study also determined that including appropriate random parameters reflecting road use activities at critical event points can help represent authentic traffic models. AnyLogic leverages the Classic DEVS and Parallel DEVS formalisms to achieve these objectives.
Keywords: multi-core processing; distributed computing; event-driven modelling; discrete event simulation; data analysis and visualization
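A bare-bones discrete-event loop of the kind AnyLogic implements on top of the DEVS formalisms; the arrival and crossing times below are placeholders, not the study's calibrated parameters:

```python
import heapq, random

random.seed(0)
events, t, served = [], 0.0, 0
heapq.heappush(events, (random.expovariate(1 / 30), "arrival"))   # first car

while events and t < 3600:                       # one simulated hour
    t, kind = heapq.heappop(events)              # always process the earliest event
    if kind == "arrival":
        heapq.heappush(events, (t + random.expovariate(1 / 30), "arrival"))
        heapq.heappush(events, (t + random.uniform(5, 15), "exit"))  # cross junction
    else:
        served += 1

print(f"vehicles through the junction in 1 h: {served}")
```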
17. Communication-Efficient Edge AI Inference over Wireless Networks (Cited by: 1)
Authors: Kai Yang, Yong Zhou, Zhanpeng Yang, Yuanming Shi. ZTE Communications, 2020, Issue 2, pp. 31-39.
Given the fast growth of intelligent devices, it is expected that a large number of high-stakes artificial intelligence (AI) applications, e.g., drones, autonomous cars, and tactile robots, will be deployed at the edge of wireless networks in the near future. Intelligent communication networks will therefore be designed to leverage advanced wireless techniques and edge computing technologies to support AI-enabled applications at various end devices with limited communication, computation, hardware, and energy resources. In this article, we present the principles of efficiently deploying model inference at the network edge to provide low-latency and energy-efficient AI services. This includes a wireless distributed computing framework for low-latency device-distributed model inference as well as a wireless cooperative transmission strategy for energy-efficient edge cooperative model inference. The communication efficiency of edge inference systems is further improved by building a smart radio propagation environment via intelligent reflecting surfaces.
Keywords: communication efficiency; cooperative transmission; distributed computing; edge AI; edge inference
18. Machine Learning-based Optimal Framework for Internet of Things Networks
Authors: Moath Alsafasfeh, Zaid A. Arida, Omar A. Saraereh. Computers, Materials & Continua, SCIE/EI indexed, 2022, Issue 6, pp. 5355-5380.
Deep neural networks (DNNs) are widely employed in a wide range of intelligent applications, including image and video recognition. However, due to the enormous amount of computation required by DNNs, performing DNN inference tasks locally is problematic for resource-constrained Internet of Things (IoT) devices. Existing cloud approaches are sensitive to problems like erratic communication delays and unreliable remote server performance. Utilizing IoT device collaboration to create distributed and scalable DNN task inference is a very promising strategy. Existing research, on the other hand, looks exclusively at static split methods in scenarios with homogeneous IoT devices. As a result, there is a pressing need to investigate how to divide DNN tasks adaptively among IoT devices with varying capabilities and resource constraints, and execute the task inference cooperatively. Two major obstacles confront these research problems: 1) in a heterogeneous, dynamic multi-device environment, it is difficult to estimate the multi-layer inference delay of DNN tasks; 2) it is difficult to intelligently adapt the collaborative inference approach in real time. As a result, a multi-layer delay prediction model with fine-grained interpretability is first proposed. Furthermore, evolutionary reinforcement learning (ERL) is employed to adaptively discover an approximately optimal split strategy for DNN inference tasks. Experiments show that, in a heterogeneous dynamic environment, the proposed framework provides considerable DNN inference acceleration. When the number of devices is 2, 3, and 4, the delay acceleration of the proposed algorithm is 1.81, 1.98, and 5.28 times that of the EE algorithm, respectively.
Keywords: IoT; distributed computing; neural networks; reinforcement learning
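A toy brute-force version of the split decision the framework learns adaptively: given assumed per-layer delays on two devices and per-cut activation-transfer costs, pick the cut minimizing end-to-end delay. The paper replaces this enumeration with a learned delay prediction model plus evolutionary reinforcement learning:

```python
def best_split(delay_dev_a, delay_dev_b, transfer):
    """Layers [0:k) run on device A, [k:n) on device B, plus one transfer at k."""
    n = len(delay_dev_a)
    costs = {k: sum(delay_dev_a[:k]) + transfer[k] + sum(delay_dev_b[k:])
             for k in range(n + 1)}
    return min(costs, key=costs.get), costs

a = [5, 8, 12, 20]        # ms per layer on the weak device (assumed)
b = [2, 3, 4, 6]          # ms per layer on the strong device (assumed)
xfer = [30, 12, 6, 4, 1]  # ms to ship activations at each cut point (assumed)
split, costs = best_split(a, b, xfer)
print(split, costs[split])   # cut after layer 2, total 29 ms
```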
19. An Efficient Scheme for Data Pattern Matching in IoT Networks
Authors: Ashraf Ali, Omar A. Saraereh. Computers, Materials & Continua, SCIE/EI indexed, 2022, Issue 8, pp. 2203-2219.
The Internet of Everything has become an unavoidable trend due to the rapid growth of networking technology; smart home technology encompasses a variety of sectors, including intelligent transportation, allowing users to communicate with anybody or any device at any time and from anywhere. Background: structured data is stored in separate silos, which slows down the rate at which everything is connected; data pattern matching is commonly used in data connectivity and can help with the issues mentioned above. Aim: present pattern matching systems are ineffective given the heterogeneity and rapid expansion of large IoT data; existing methods require much manual work and match poorly with real-world applications, making automatic pattern matching in the modern IoT context a complex challenge. Methodology: a three-layer mapping match and a hierarchical pattern matching technique are proposed for heterogeneous IoT data, comprising feature classification matching, relational feature clustering matching, and mixed element matching. Through layer-by-layer matching, the algorithm gradually narrows the matching space, improving matching quality, reducing the number of comparisons between components and the degree of manual participation, and producing better automatic mode matching. Results: the algorithm's efficiency and performance are tested using a large number of data samples, and the results show that the technique is practical and effective. Conclusion: the proposed algorithm utilizes the instance information of the data pattern, deploys a three-layer mapping matching approach with mixed element matching, and realizes automatic pattern matching of heterogeneous data, which reduces the matching space between elements in complex patterns and improves the efficiency and accuracy of automatic matching.
Keywords: Internet of Things; distributed computing; optimization; feature classification
20. An Optimized Resource Scheduling Strategy for Hadoop Speculative Execution Based on Non-cooperative Game Schemes
Authors: Yinghang Jiang, Qi Liu, Williams Dannah, Dandan Jin, Xiaodong Liu, Mingxu Sun. Computers, Materials & Continua, SCIE/EI indexed, 2020, Issue 2, pp. 713-729.
Hadoop is a well-known parallel computing system for distributed computing and large-scale data processing. "Straggling" tasks, however, have a serious impact on task allocation and scheduling in a Hadoop system. Speculative Execution (SE) is an efficient method of handling straggling tasks: it monitors the real-time running status of tasks and selectively backs up stragglers on other nodes to increase the chance of completing the entire mission early. Present speculative execution strategies face challenges of misjudging straggling tasks and improperly selecting backup nodes, which leads to inefficient speculative execution. This paper proposes an Optimized Resource Scheduling strategy for Speculative Execution (ORSE) based on non-cooperative game schemes. ORSE transforms the resource scheduling of backup tasks into a multi-party non-cooperative game problem, where the tasks are the players and the total task execution time of the entire cluster is the utility function. The most beneficial strategy for each computing node is then realized when the game reaches a Nash equilibrium point, which yields the final resource scheduling scheme. The strategy has been implemented in Hadoop-2.x. Experimental results show that ORSE maintains the efficiency of speculative execution and improves fault tolerance and computational performance under Normal Load, Busy Load, and Busy Load with Skewed Data.
Keywords: distributed computing; speculative execution; resource scheduling; non-cooperative game theory
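A best-response sketch of the game formulation: each backup task repeatedly picks the node minimizing its completion time given the others' choices, and a fixed point of this process is a Nash equilibrium. The cost model and node speeds are toy assumptions, not ORSE's utility function:

```python
def completion_time(node_speed, load):
    return load / node_speed

def best_response_equilibrium(tasks, node_speeds, rounds=20):
    choice = {task: 0 for task in tasks}                 # start: all on node 0
    for _ in range(rounds):
        changed = False
        for task in tasks:
            load = {n: sum(1 for t in tasks if choice[t] == n) for n in node_speeds}
            load[choice[task]] -= 1                      # leave the current node
            best = min(node_speeds,
                       key=lambda n: completion_time(node_speeds[n], load[n] + 1))
            if best != choice[task]:
                choice[task], changed = best, True
        if not changed:
            break                                        # Nash equilibrium reached
    return choice

print(best_response_equilibrium(["t1", "t2", "t3"], {0: 1.0, 1: 2.0}))
# {'t1': 1, 't2': 1, 't3': 0} -- the faster node ends up with two backups
```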