Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the algorithm's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicates that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
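The bubble-net hunting mechanism the abstract borrows from WOA can be sketched as follows. This is a minimal, hedged illustration of the standard WOA spiral position update, not the paper's MCWOA implementation; the spiral constant `b` and the example positions are illustrative assumptions.

```python
import math
import random

def bubble_net_update(whale, best, b=1.0):
    """One spiral (bubble-net) position update from the standard Whale
    Optimization Algorithm: each coordinate moves along a logarithmic
    spiral toward the current best solution. l is drawn from [-1, 1]."""
    new_pos = []
    for x, x_best in zip(whale, best):
        l = random.uniform(-1.0, 1.0)
        d = abs(x_best - x)  # distance from this whale to the best whale
        new_pos.append(d * math.exp(b * l) * math.cos(2 * math.pi * l) + x_best)
    return new_pos

random.seed(0)
best = [0.0, 0.0]          # illustrative current best solution
whale = [3.0, -4.0]        # illustrative search agent
updated = bubble_net_update(whale, best)
print(updated)
```

Note that a whale already sitting on the best solution stays there (the spiral term vanishes when the distance is zero), which is why the update preserves the incumbent optimum.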
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
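The study's methodology, measuring CPU time and wall-clock time of encryption workloads across file sizes, can be sketched with a hypervisor-agnostic timing harness. This is an assumed, simplified harness using only the standard library; the XOR "cipher" is a stand-in workload, not one of the six algorithms benchmarked.

```python
import time

def xor_cipher(data: bytes, key: int = 0x5A) -> bytes:
    # Stand-in workload only -- NOT a real cipher; substitute AES, RSA, etc.
    return bytes(b ^ key for b in data)

def benchmark(workload, payload: bytes, runs: int = 5):
    """Return (avg wall-clock seconds, avg CPU seconds) per run -- the two
    quantities a hypervisor comparison like the study's would record."""
    wall = cpu = 0.0
    for _ in range(runs):
        w0, c0 = time.perf_counter(), time.process_time()
        workload(payload)
        wall += time.perf_counter() - w0
        cpu += time.process_time() - c0
    return wall / runs, cpu / runs

payload = bytes(1024) * 100  # a 100 KB test payload, the study's smallest size
avg_wall, avg_cpu = benchmark(xor_cipher, payload)
print(f"wall={avg_wall:.6f}s cpu={avg_cpu:.6f}s")
```

Running the same harness inside guests on each hypervisor, with real cipher implementations substituted in, yields directly comparable per-algorithm numbers.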
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by their driving events. Nonetheless, security threats in serverless computing, such as vulnerability-based attacks, have become the pain point hindering its wide adoption. Ideas from proactive defense, such as redundancy, diversity, and dynamism, provide promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a "stacked" mode, as they were designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve whole-life-cycle protection for serverless applications at limited cost. In this paper, we present ATSSC, a proactive-defense-enabled, attack-tolerant serverless platform. ATSSC integrates redundancy, diversity, and dynamism into the serverless platform seamlessly to achieve high-level security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the driving events and performs cross-validation to verify the results. To create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
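The cross-validation step over diverse replicas can be sketched as a majority vote: replicas whose output disagrees with the majority are flagged as suspect. This is a minimal sketch of the general technique, not ATSSC's actual validator; the function name and the "no majority" policy are assumptions.

```python
from collections import Counter

def cross_validate(replica_outputs):
    """Majority vote over the outputs of diverse function replicas.
    Returns (accepted_result, suspect_replica_indices); raises if no
    strict majority exists, signalling a possible compromise."""
    counts = Counter(replica_outputs)
    result, votes = counts.most_common(1)[0]
    if votes <= len(replica_outputs) // 2:
        raise RuntimeError("no majority -- result rejected")
    suspects = [i for i, out in enumerate(replica_outputs) if out != result]
    return result, suspects

# Three diverse replicas handle the same event; one has been tampered with.
result, suspects = cross_validate(["ok:42", "ok:42", "tampered"])
print(result, suspects)   # ok:42 [2]
```

In a platform like the one described, flagged replicas would then be candidates for the dynamic refresh strategy, restoring them to a clean state.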
Fog computing has recently developed as a new paradigm with the aim of addressing time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, the majority of fog nodes in this environment are geographically scattered, with resources that are limited in capability compared to cloud nodes, making the application placement problem more complex than in cloud computing. A cost-efficient application placement approach for fog-cloud computing environments combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs. Such an approach is particularly relevant in scenarios where latency, resource constraints, and cost considerations are crucial factors for the deployment of applications. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, is designed to place the application modules with respect to the application's deadline and deploy them appropriately to fog or cloud nodes to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach against other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
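The four cost types listed above combine naturally into a single objective that a placement search like GA-FSA would minimize. The sketch below assumes a simple weighted sum; the weights, module names, and per-module figures are illustrative, not taken from the paper.

```python
def placement_cost(modules, weights=(1.0, 1.0, 1.0, 1.0)):
    """Total deployment cost over the four cost types the study names:
    computation, communication, energy consumption, and violations.
    `modules` maps module name -> dict of the four per-module costs."""
    w_comp, w_comm, w_energy, w_viol = weights
    total = 0.0
    for m in modules.values():
        total += (w_comp * m["computation"] + w_comm * m["communication"]
                  + w_energy * m["energy"] + w_viol * m["violations"])
    return total

# Hypothetical two-module application, one placed on fog, one on cloud.
modules = {
    "sensor-filter": {"computation": 2.0, "communication": 0.5,
                      "energy": 1.0, "violations": 0.0},
    "analytics":     {"computation": 8.0, "communication": 3.0,
                      "energy": 4.0, "violations": 1.0},
}
print(placement_cost(modules))   # 19.5
```

Each candidate placement the metaheuristic generates would be scored this way, with deadline violations feeding the `violations` term.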
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance may be challenging for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) is proposed, which hybridizes our Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both HPSO and EABC. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism than the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by the ablation experiments.
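For readers unfamiliar with the PSO half of the hybrid, the canonical particle update it builds on can be sketched as follows. This is the textbook update with default coefficients, an assumed baseline; the paper's hierarchical variant and tuned parameters are not reproduced here.

```python
import random

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle: inertia (w) plus a
    cognitive pull toward the particle's personal best and a social pull
    toward the swarm's global best. Coefficients are textbook defaults."""
    new_vel, new_pos = [], []
    for x, v, pb, gb in zip(pos, vel, pbest, gbest):
        r1, r2 = random.random(), random.random()
        v_next = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel

random.seed(1)
new_pos, new_vel = pso_step([5.0, -2.0], [0.0, 0.0], [1.0, 0.0], [0.0, 0.0])
print(new_pos)
```

A particle already at both its personal and the global best does not move, which is the fixed-point behavior hierarchical extensions then reshape to escape local optima.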
Cloud computing technology is utilized to achieve resource utilization of remote-based virtual computers, facilitating consumers with rapid and accurate massive data services. It utilizes on-demand resource provisioning, but the constraints of rapid turnaround time, minimal execution cost, a high rate of resource utilization, and limited makespan transform the Load Balancing (LB)-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precise mapping of tasks to virtual machines with the objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps in decreasing SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy which utilizes the merits of PDOA and BWOA for attaining reactive decision-making with respect to assigning tasks to virtual resources, taking their priorities into account. It addresses the problem of premature convergence with well-balanced exploration and exploitation to attain the necessitated Quality of Service (QoS), minimizing the waiting time incurred during the TS process. It further balances exploration and exploitation rates to reduce the makespan during task allocation with complete awareness of VM state. The results confirmed that the proposed HPDBWOA achieved 32.18% lower energy utilization and 28.94% lower cost than the approaches used for investigation. A statistical investigation conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system time, and response time.
Cloud computing is the new norm within business entities as businesses try to keep up with technological advancements and user needs. The concept is defined as a computing environment allowing for remote outsourcing of storage and computing resources. A hybrid cloud environment is an excellent example: it provides organizations with increased scalability, control over their data, and support for a remote workforce. However, hybrid cloud systems are expensive, as organizations operate different infrastructures while introducing complexity into their activities. Data security is among the most vital concerns that have resulted from the use of cloud computing, affecting the rate of user adoption and acceptance. This article, borrowing from the hybrid cloud computing model, recommends combining traditional and modern data security systems. Traditional data security systems have proven effective in their respective roles, with the main challenge arising from their limited awareness of context and connectivity. Therefore, integrating traditional and modern designs is recommended to enhance effectiveness, context, connectivity, and efficiency.
Cloud computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review provides a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in cases of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time, using the CloudSim simulation tool, is also performed. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are addressed.
This study investigates how cybersecurity can be enhanced through cloud computing solutions in the United States. The motive for this study is the rampant loss of data, breaches, and unauthorized access by internet criminals in the United States. The study adopted a survey research design, collecting data from 890 cloud professionals with relevant knowledge of cybersecurity and cloud computing. A machine learning approach was adopted, specifically a random forest classifier, an ensemble, and a decision tree model. Out of the features in the data, ten important features were selected using random forest feature importance, which helps achieve the objective of the study. The study's purpose is to enable organizations to develop suitable techniques to prevent cybercrime using random forest predictions as they relate to cloud services in the United States. The effectiveness of the models used is evaluated by utilizing validation matrices that include recall values, accuracy, and precision, in addition to F1 scores and confusion matrices. Based on evaluation scores (accuracy, precision, recall, and F1 scores) of 81.9%, 82.6%, and 82.1%, the results demonstrated the effectiveness of the random forest model. They showed the importance of machine learning algorithms in preventing cybercrime and boosting security in the cloud environment. The study recommends that other machine learning models be adopted to explore further improvements to cybersecurity through cloud computing.
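The validation metrics the study reports all derive from confusion-matrix counts, as sketched below. The counts here are illustrative only, not the study's data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts --
    the standard validation metrics used to score classifiers such as the
    study's random forest."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts only (not taken from the study).
acc, prec, rec, f1 = classification_metrics(tp=80, fp=20, fn=20, tn=80)
print(acc, prec, rec, f1)
```

Precision penalizes false alarms while recall penalizes missed intrusions; F1 balances the two, which is why security studies typically report all four.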
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
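The defining property of homomorphic encryption, computing on ciphertexts so the cloud never sees plaintext, can be illustrated with textbook RSA, which is multiplicatively homomorphic. This is a toy sketch with insecure parameters chosen purely for illustration; the paper's actual scheme is not specified in the abstract and is certainly not textbook RSA.

```python
# Textbook RSA with toy parameters -- insecure, for illustrating the
# homomorphic property only.
p, q = 61, 53
n = p * q                   # modulus
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 12
# Multiplying ciphertexts equals encrypting the product of the plaintexts:
product_cipher = (enc(a) * enc(b)) % n
print(dec(product_cipher))  # 84, i.e. a * b, computed without decrypting a or b
```

A cloud server holding only `enc(a)` and `enc(b)` can thus produce an encryption of `a * b`; fully homomorphic schemes extend this to both addition and multiplication, at the cost of the large overheads the study measures.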
This paper examines how the adoption of cloud computing affects the relationship between the technical and environmental capabilities of small and medium-sized enterprises (SMEs) in the tourism industry in Henan Province, China, thereby promoting the stable and sustainable development of the tourism industry. In line with the laws of tourism market development, it advocates vigorously constructing smart tourism projects, guiding tourism cloud service providers to strengthen cooperation and contact with the market's tourism enterprises, introducing and utilizing cloud computing technology, optimizing and improving the functions of enterprises' tourism services, and strengthening the processing and analysis of enterprise-related data to provide tourism information. The paper further studies the adoption of cloud computing and its impact on SMEs in terms of technology and business-environment knowledge, so as to support the best enterprise management decisions and realize an overall enhancement of the enterprise's tourism brand value.
Reliability, QoS, and energy consumption are three important concerns of cloud service providers. Most current research on reliable task deployment in cloud computing focuses on only one or two of the three concerns. However, these three factors have intrinsic trade-off relationships. Existing studies show that load concentration can reduce the number of servers and hence save energy. In this paper, we deal with the problem of reliable task deployment in data centers, with the goal of minimizing the number of servers used in cloud data centers under the constraint that the job execution deadline can be met upon a single server failure. We propose a QoS-constrained, Reliable and Energy-efficient task replica deployment (QSRE) algorithm for the problem, combining task replication and re-execution. For each task in a job that cannot finish by re-execution within the deadline, we initiate two replicas: a main task and a task replica. Each main task runs on an individual server. The associated task replica is deployed on a backup server and completes part of the whole task load before any main task failure. Unlike the main tasks, multiple task replicas can be allocated to the same backup server to reduce the energy consumption of cloud data centers by minimizing the number of servers required for running the task replicas. Specifically, QSRE assigns the task replicas with the longest and the shortest execution times to the backup servers in turn, such that the task replicas can meet the QoS-specified job execution deadline under main task failure. We conduct experiments through simulations. The experimental results show that QSRE can effectively reduce the number of servers used, while ensuring the reliability and QoS of job execution.
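The "longest and shortest in turn" deployment step can be sketched as a pairing pass over replicas sorted by execution time. This is a simplified reading of the strategy described above, ignoring deadline checks and server capacities, so it is an illustration rather than the paper's algorithm.

```python
def pair_replicas(exec_times):
    """Pair the longest- and shortest-running task replicas onto the same
    backup server in turn, evening out per-server load. Returns a list of
    tuples of replica indices, one tuple per backup server."""
    order = sorted(range(len(exec_times)), key=lambda i: exec_times[i])
    servers = []
    lo, hi = 0, len(order) - 1
    while lo < hi:
        servers.append((order[hi], order[lo]))  # longest paired with shortest
        lo, hi = lo + 1, hi - 1
    if lo == hi:                                # odd replica gets its own server
        servers.append((order[lo],))
    return servers

times = [9.0, 1.0, 5.0, 7.0, 3.0]  # hypothetical replica execution times
print(pair_replicas(times))        # [(0, 1), (3, 4), (2,)]
```

Pairing extremes keeps each backup server's total replica load close to the mean, which is what lets QSRE pack replicas onto fewer servers without blowing the deadline.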
The current education field is experiencing an innovation driven by big data and cloud technologies, and these advanced technologies play a central role in the construction of smart campuses. Big data technology has a wide range of applications in student learning behavior analysis, teaching resource management, campus safety monitoring, and decision support, which improves the quality of education and management efficiency. Cloud computing technology supports the integration, distribution, and optimal use of educational resources through cloud resource sharing, virtual classrooms, intelligent campus management systems, and Infrastructure-as-a-Service (IaaS) models, which reduce costs and increase flexibility. This paper comprehensively discusses the practical application of big data and cloud computing technologies in smart campuses, showing how these technologies can contribute to the development of smart campuses and laying the foundation for future innovation in education models.
The rapid expansion of the Internet of Things (IoT) has driven the need for advanced computational frameworks capable of handling the complex data processing and security challenges that modern IoT applications demand. However, traditional cloud computing frameworks face significant latency, scalability, and security issues. Quantum-Edge Cloud Computing (QECC) offers an innovative solution by integrating the computational power of quantum computing with the low-latency advantages of edge computing and the scalability of cloud computing resources. This study is grounded in an extensive literature review, performance improvements, and metrics data from Bangladesh, focusing on smart city infrastructure, healthcare monitoring, and the industrial IoT sector. The discussion covers vital elements, including integrating quantum cryptography to enhance data security, the critical role of edge computing in reducing response times, and cloud computing's ability to support large-scale IoT networks with its extensive resources. Through case studies such as the application of quantum sensors in autonomous vehicles, the practical impact of QECC is demonstrated. Additionally, the paper outlines future research opportunities, including developing quantum-resistant encryption techniques and optimizing quantum algorithms for edge computing. The convergence of these technologies in QECC has the potential to overcome the current limitations of IoT frameworks, setting a new standard for future IoT applications.
With the rapid development of the Internet of Things (IoT), there are several challenges pertaining to security in IoT applications. Compared with the traditional Internet, the IoT has many problems, such as large numbers of assets, complex and diverse structures, and a lack of computing resources. Traditional network intrusion detection systems cannot meet the security needs of IoT applications. In view of this situation, this study applies cloud computing and machine learning to the intrusion detection system of the IoT to improve detection performance. Usually, traditional intrusion detection algorithms require considerable time for training, and these algorithms are not suitable for cloud computing due to the limited computing power and storage capacity of cloud nodes; therefore, it is necessary to study intrusion detection algorithms with low weights, short training times, and high detection accuracy for deployment and application on cloud nodes. An appropriate classification algorithm is a primary factor for deploying cloud computing intrusion prevention systems and a prerequisite for the system to respond to intrusions and reduce intrusion threats. This paper discusses the problems related to IoT intrusion prevention in cloud computing environments. Based on an analysis of cloud computing security threats, this study extensively explores IoT intrusion detection, cloud node monitoring, and intrusion response in cloud computing environments by using cloud computing, an improved extreme learning machine, and other methods. We use the Multi-Feature Extraction Extreme Learning Machine (MFE-ELM) algorithm for cloud computing, which adds a multi-feature extraction process to cloud servers, and deploy the MFE-ELM algorithm on cloud nodes to detect and discover network intrusions against cloud nodes. In our simulation experiments, a classical dataset for intrusion detection is selected as a test, and steps such as data preprocessing, feature engineering, model training, and result analysis are performed. The experimental results show that the proposed algorithm can effectively detect and identify most network data packets with good model performance and achieve efficient intrusion detection for heterogeneous IoT data on cloud nodes. Furthermore, it enables the cloud server to discover nodes with serious security threats in the cloud cluster in real time, so that further security protection measures can be taken to obtain the optimal intrusion response strategy for the cloud cluster.
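A feature-extraction front end of the kind MFE-ELM adds to cloud servers can be sketched with simple per-packet statistics. The specific features below (length, byte mean, spread, and Shannon entropy) are hypothetical stand-ins; the abstract does not enumerate the paper's actual feature set.

```python
import math
from statistics import mean, pstdev

def packet_features(packet: bytes):
    """Extract simple statistical features from a raw packet: length,
    byte-value mean and spread, and Shannon entropy of the byte
    distribution (high entropy can indicate encrypted or packed payloads)."""
    counts = [0] * 256
    for b in packet:
        counts[b] += 1
    entropy = -sum((c / len(packet)) * math.log2(c / len(packet))
                   for c in counts if c)
    return {
        "length": len(packet),
        "mean": mean(packet),
        "stdev": pstdev(packet),
        "entropy": entropy,
    }

feats = packet_features(b"\x00\x00\xff\xff")
print(feats)
```

Feature vectors like these, computed cheaply on cloud nodes, are what a lightweight classifier such as an extreme learning machine would then consume for detection.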
Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. As a result of constantly changing user service demand, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of scheduling tasks is to distribute tasks to available processors to construct the shortest possible schedule without breaching precedence restrictions. Assignments and schedules of tasks substantially influence system operation in a heterogeneous multiprocessor system. The diverse processes inside a heuristic-based task scheduling method will result in varying makespans in a heterogeneous computing system. As a result, an intelligent scheduling algorithm should efficiently determine the priority of every subtask based on the resources necessary to lower the makespan. This research introduces a novel, efficient task scheduling method for cloud computing systems based on the cooperation search algorithm to tackle the essential task scheduling problem in heterogeneous cloud computing. The basic idea of this method is to use the advantages of meta-heuristic algorithms to obtain the optimal solution. We assess our algorithm's performance by running it through three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing methods New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4%, respectively, in terms of makespan.
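The makespan objective these algorithms compete on can be made concrete with a simple greedy baseline: assign each task to the processor that frees up first, then read off the latest finish time. This is an assumed baseline for a homogeneous case, not the paper's cooperation search algorithm, and it ignores precedence constraints.

```python
def greedy_schedule(task_times, n_processors):
    """Greedy list scheduling: each task goes to the processor with the
    earliest current finish time. Returns (per-task processor assignment,
    makespan), where makespan is the latest finish time."""
    finish = [0.0] * n_processors
    assignment = []
    for t in task_times:
        p = min(range(n_processors), key=lambda i: finish[i])
        finish[p] += t
        assignment.append(p)
    return assignment, max(finish)

tasks = [4.0, 2.0, 3.0, 1.0, 5.0]          # hypothetical task durations
assignment, makespan = greedy_schedule(tasks, 2)
print(assignment, makespan)                 # [0, 1, 1, 0, 0] 10.0
```

Metaheuristics improve on such baselines by searching over task orderings and assignments, which is where the reported makespan gains over NGA, GA, WOA, GSA, and HHG come from.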
This paper presents a novel fuzzy firefly-based intelligent algorithm for load balancing in mobile cloud computing while reducing makespan. The proposed technique implicitly acts intelligently by using the inherent traits of fuzzy logic and the firefly algorithm. It automatically adjusts its behavior or converges depending on the information gathered during the search process and the objective function. It works for a 3-tier architecture, including cloudlets and the public cloud. As cloudlets have limited resources, fuzzy logic is used for cloudlet selection, with capacity and waiting time as inputs. Fuzzy logic provides human-like decisions without using any mathematical model. Firefly is a powerful meta-heuristic optimization technique for balancing diversification and solution speed. It balances the load on cloud and cloudlet while minimizing makespan and execution time. However, it may become trapped in local optima; levy flight can handle this. Hybridization of fuzzy firefly with levy flight is a novel technique that reduces makespan, execution time, and degree of imbalance while balancing the load. Simulation has been carried out on the Cloud Analyst platform with National Aeronautics and Space Administration (NASA) and Clarknet datasets. Results show that the proposed algorithm outperforms Ant Colony Optimization Queue Decision Maker (ACOQDM), Distributed Scheduling Optimization Algorithm (DSOA), and Utility-based Firefly Algorithm (UFA) when compared in terms of makespan, degree of imbalance, and figure of merit.
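The degree-of-imbalance metric reported above has a common simple definition, sketched here; the exact formula the paper uses is not given in the abstract, so this standard form is an assumption.

```python
def degree_of_imbalance(vm_loads):
    """Degree of imbalance (DI), a standard load-balancing metric:
    (max load - min load) / average load across virtual machines.
    Zero means perfectly balanced; larger values mean worse skew."""
    avg = sum(vm_loads) / len(vm_loads)
    return (max(vm_loads) - min(vm_loads)) / avg

print(degree_of_imbalance([10.0, 10.0, 10.0]))  # 0.0 -- perfectly balanced
print(degree_of_imbalance([15.0, 10.0, 5.0]))   # 1.0 -- noticeable skew
```

Lower DI alongside lower makespan is what distinguishes a balancer that spreads load well from one that merely finishes fast on a few hot machines.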
Some of the significant new technologies researched in recent studies include Blockchain (BC), Software-Defined Networking (SDN), and the Smart Industrial Internet of Things (IIoT). All three technologies provide data integrity and confidentiality in their respective use cases (especially in industrial fields). Additionally, cloud computing has been in use for several years now. Confidential information is exchanged with cloud infrastructure to provide clients with access to distant resources, such as computing and storage activities in the IIoT. There are also significant security risks, concerns, and difficulties associated with cloud computing. To address these challenges, we propose merging BC and SDN into a cloud computing platform for the IIoT. This paper introduces "DistB-SDCloud", an architecture for enhanced cloud security for smart IIoT applications. The proposed architecture uses a distributed BC method to provide security, secrecy, privacy, and integrity while remaining flexible and scalable. Customers in the industrial sector benefit from the dispersed, decentralized, and efficient environment of BC. Additionally, we describe an SDN method to improve the durability, stability, and load balancing of cloud infrastructure. The efficacy of our SDN- and BC-based implementation was experimentally tested using various parameters, including throughput, packet analysis, response time, bandwidth, and latency analysis, as well as the monitoring of several attacks on the system itself.
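The integrity guarantee a blockchain brings to an architecture like the one above rests on hash chaining: each block commits to its predecessor's hash, so tampering anywhere breaks every later link. The sketch below illustrates that property only; it is not DistB-SDCloud's protocol, and the block fields are assumed.

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """Minimal hash-chained block: the block's own hash covers its index,
    payload, and the previous block's hash."""
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Check that every block's prev_hash matches its predecessor's hash."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
    return True

genesis = make_block(0, "genesis", "0" * 64)
chain = [genesis, make_block(1, "sensor reading 42", genesis["hash"])]
print(verify(chain))           # True
chain[0]["hash"] = "f" * 64    # tamper with the first block
print(verify(chain))           # False -- the chain no longer links up
```

Real deployments add consensus and signatures on top, but this linkage is the mechanism that makes stored IIoT records tamper-evident.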
In recent years, statistics have indicated that the number of patients with malignant brain tumors has increased sharply. However, most surgeons still perform surgical training using the traditional autopsy and prosthesis model, which encounters many problems, such as insufficient corpse resources, low efficiency, and high cost. With the advent of the 5G era, a wide range of Industrial Internet of Things (IIoT) applications have been developed. Virtual Reality (VR) and Augmented Reality (AR) technologies that emerged with 5G are developing rapidly for intelligent medical training. To address the challenges encountered during neurosurgery training, and in combination with cloud computing, this paper develops a highly immersive AR-based remote collaborative virtual surgery training system for brain tumor neurosurgery, in which a VR simulator is embedded. The system enables real-time remote surgery training interaction through 5G transmission. Six experts and 18 novices were invited to participate in the experiment to verify the system. Subsequently, the two simulators were evaluated using face and construct validation methods. The results obtained by training the novices 50 times were further analyzed using the Learning Curve-Cumulative Sum (LC-CUSUM) evaluation method to validate the effectiveness of the two simulators. The results of the face and content validation demonstrated that the AR simulator in the system was superior to the VR simulator in terms of vision and scene authenticity and had a better effect on the improvement of surgical skills. Moreover, the surgical training scheme proposed in this paper is effective, and the remote collaborative training effect of the system is ideal.
The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures.To deal with this problem,communication networks consider fog computing as one of promisi...The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures.To deal with this problem,communication networks consider fog computing as one of promising technologies that can improve overall communication performance.It brings on-demand services proximate to the end devices and delivers the requested data in a short time.Fog computing faces several issues such as latency,bandwidth,and link utilization due to limited resources and the high processing demands of end devices.To this end,fog caching plays an imperative role in addressing data dissemination issues.This study provides a comprehensive discussion of fog computing,Internet of Things(IoTs)and the critical issues related to data security and dissemination in fog computing.Moreover,we determine the fog-based caching schemes and contribute to deal with the existing issues of fog computing.Besides,this paper presents a number of caching schemes with their contributions,benefits,and challenges to overcome the problems and limitations of fog computing.We also identify machine learning-based approaches for cache security and management in fog computing,as well as several prospective future research directions in caching,fog computing,and machine learning.展开更多
Abstract: Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional COA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original COA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicates that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA) and Grey Wolf Optimizer (GWO).
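The Sobol-based initialization described above can be sketched with the base-2 radical inverse (the one-dimensional Sobol sequence). This is a minimal illustration, not the paper's implementation: the function names and the per-dimension offset used to decorrelate axes are assumptions.

```python
def van_der_corput(n, base=2):
    """Base-b radical inverse of n; base 2 yields the 1-D Sobol sequence in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def init_population(pop_size, dim, lower, upper):
    """Spread candidates evenly over [lower, upper]^dim instead of uniform random draws."""
    pop = []
    for i in range(1, pop_size + 1):
        # successive low-discrepancy points, offset per dimension (illustrative choice)
        point = [lower + (upper - lower) * van_der_corput(i * dim + d)
                 for d in range(dim)]
        pop.append(point)
    return pop
```

Compared with uniform random draws, such low-discrepancy points avoid the clusters and gaps that can leave regions of the search space unexplored at the start of the run.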
Abstract: As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is a virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), BLOWFISH, and TwoFish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations with various file sizes. The study's findings emphasize how crucial it is to pick a hypervisor that is appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
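The measurement methodology behind such a comparison can be sketched as a timing harness over varying payload sizes. The XOR transform below is only a stand-in for the real cipher calls (AES, RSA, etc.) the study ran inside each hypervisor; treat the whole function as illustrative scaffolding.

```python
import os
import time

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # stand-in symmetric transform; substitute the real cipher call here
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def benchmark(sizes_kb, repeats=3):
    """Average encrypt+decrypt wall time per payload size, keyed by size in KB."""
    key = os.urandom(32)
    results = {}
    for kb in sizes_kb:
        payload = os.urandom(kb * 1024)
        t0 = time.perf_counter()
        for _ in range(repeats):
            ct = xor_cipher(payload, key)
            pt = xor_cipher(ct, key)
            assert pt == payload  # round-trip sanity check
        results[kb] = (time.perf_counter() - t0) / repeats
    return results
```

Running the same harness under each hypervisor, with the same payloads and repeat count, is what makes the per-algorithm CPU and time comparisons meaningful.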
Funding: Supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China under Grant No. 61521003, and the National Natural Science Foundation of China under Grants No. 62072467 and No. 62002383.
Abstract: Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by their triggering events. Nonetheless, security threats in serverless computing, such as vulnerability-based security threats, have become the pain point hindering its wide adoption. The ideas of proactive defense, such as redundancy, diversity and dynamism, provide promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a “stacked” mode, as they are designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve all-life-cycle protection for serverless applications at limited cost. In this paper, we present ATSSC, a proactive defense enabled attack-tolerant serverless platform. ATSSC integrates the characteristics of redundancy, diversity and dynamism into serverless computing seamlessly to achieve high-level security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the triggering events and performs cross-validation to verify the results. In order to create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks with acceptable costs.
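The cross-validation step over diverse replicas is not specified in detail in the abstract; a minimal majority-vote check might look like the following (the function name and strict-majority threshold are assumptions for illustration).

```python
from collections import Counter

def cross_validate(replica_outputs):
    """Accept a result only if a strict majority of diverse replicas agree on it."""
    votes = Counter(replica_outputs)
    result, count = votes.most_common(1)[0]
    if count > len(replica_outputs) // 2:
        return result
    # disagreement suggests a compromised or faulty replica
    raise RuntimeError("no majority among replicas")
```

Because the replicas are built with software and environment diversity, a single exploited vulnerability is unlikely to corrupt a majority of them, which is what makes the vote meaningful.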
Funding: Supported via funding from Prince Sattam bin Abdulaziz University, Project Number (PSAU/2024/R/1445).
Abstract: Fog computing has recently developed as a new paradigm that aims to address time-sensitive applications better than cloud computing by placing and processing tasks in close proximity to the data sources. However, the majority of the fog nodes in this environment are geographically scattered, with resources that are limited in capability compared to cloud nodes, thus making the application placement problem more complex than in cloud computing. A cost-efficient application placement approach for fog-cloud computing environments combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs. This approach is particularly relevant in scenarios where latency, resource constraints, and cost considerations are crucial factors for the deployment of applications. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, is designed to place the application modules considering the deadline of the application and deploy them appropriately to fog or cloud nodes to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach compared to other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
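The four cost types can be combined into a single objective for a placement candidate. The field names and per-node rate parameters below are hypothetical placeholders, sketching the shape of such a fitness function rather than GA-FSA's actual one.

```python
def deployment_cost(modules, nodes, placement, deadline_penalty=10.0):
    """Sum computation, communication, energy, and violation costs for a placement.

    modules: list of dicts with assumed keys id, mi, bytes, deadline.
    nodes: dict of node-id -> dict with assumed keys mips, cpu_cost,
           bw_cost, power, energy_cost, latency.
    placement: module-id -> node-id mapping being evaluated.
    """
    comp = comm = energy = violation = 0.0
    for m in modules:
        node = nodes[placement[m["id"]]]
        exec_time = m["mi"] / node["mips"]                 # computation time
        comp += exec_time * node["cpu_cost"]
        comm += m["bytes"] * node["bw_cost"]
        energy += exec_time * node["power"] * node["energy_cost"]
        if exec_time + node["latency"] > m["deadline"]:    # deadline/SLA violation
            violation += deadline_penalty
    return comp + comm + energy + violation
```

A GA or FSA individual encodes the `placement` mapping; the search then minimizes this total over candidate mappings.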
Funding: Jointly supported by the Jiangsu Postgraduate Research and Practice Innovation Project under Grants KYCX22_1030, SJCX22_0283 and SJCX23_0293, and the NUPTSF under Grant NY220201.
Abstract: Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance may be the challenges for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) has been proposed, which hybridizes our presented Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both HPSO and EABC. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves the global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in terms of stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by the ablation experiments.
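The particle update at the heart of HPSO follows the canonical PSO rule; this sketch shows one step for a single particle, with the hierarchical sub-swarm structure and the bee-colony layer of HPSO-EABC omitted, and the coefficient values chosen as common defaults rather than the paper's tuned settings.

```python
import random

def pso_step(position, velocity, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: inertia plus cognitive and social attraction."""
    new_v, new_x = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        v2 = (w * v
              + c1 * random.random() * (pb - x)   # pull toward personal best
              + c2 * random.random() * (gb - x))  # pull toward global best
        new_v.append(v2)
        new_x.append(x + v2)
    return new_x, new_v
```

In a hierarchical variant, `gbest` would come from the particle's sub-swarm leader rather than a single global best, which is one way such schemes preserve diversity.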
Abstract: Cloud computing technology is utilized to achieve resource utilization of remote-based virtual computers and to facilitate consumers with rapid and accurate massive data services. It utilizes on-demand resource provisioning, but the necessitated constraints of rapid turnaround time, minimal execution cost, high rate of resource utilization and limited makespan transform the Load Balancing (LB) process-based Task Scheduling (TS) problem into an NP-hard optimization issue. In this paper, the Hybrid Prairie Dog and Beluga Whale Optimization Algorithm (HPDBWOA) is propounded for precise mapping of tasks to virtual machines with the due objective of addressing the dynamic nature of the cloud environment. This capability of HPDBWOA helps in decreasing SLA violations and makespan with optimal resource management. It is modelled as a scheduling strategy which utilizes the merits of PDOA and BWOA for attaining reactive decision making with respect to the process of assigning the tasks to virtual resources by taking their priorities into account. It addresses the problem of pre-convergence with well-balanced exploration and exploitation to attain the necessitated Quality of Service (QoS) for minimizing the waiting time incurred during the TS process. It further balances exploration and exploitation rates for reducing the makespan during task allocation with complete awareness of VM state. The results of the proposed HPDBWOA confirmed 32.18% lower energy utilization and 28.94% lower cost than the approaches used for investigation. The statistical investigation of the proposed HPDBWOA conducted using ANOVA confirmed its efficacy over the benchmarked systems in terms of throughput, system, and response time.
Abstract: Cloud computing is the new norm within business entities as businesses try to keep up with technological advancements and user needs. The concept is defined as a computing environment allowing for remote outsourcing of storage and computing resources. A hybrid cloud environment is an excellent example of cloud computing. Specifically, the hybrid system provides organizations with increased scalability and control over their data and support for a remote workforce. However, hybrid cloud systems are expensive, as organizations operate different infrastructures while introducing complexity to the organization's activities. Data security is among the most critical concerns that have resulted from the use of cloud computing, thus affecting the rate of user adoption and acceptance. This article, borrowing from the hybrid cloud computing system, recommends combining traditional and modern data security systems. Traditional data security systems have proven effective in their respective roles, with the main challenge arising from their recognition of context and connectivity. Therefore, integrating traditional and modern designs is recommended to enhance effectiveness, context, connectivity, and efficiency.
Abstract: Cloud computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review targets a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic and metaheuristic algorithms based on response time, using the CloudSim simulation tool, is also performed. Finally, the key open problems and potential directions for the state-of-the-art metaheuristic-based approaches are also addressed.
Abstract: This study investigates how cybersecurity can be enhanced through cloud computing solutions in the United States. The motive for this study is the rampant loss of data, breaches, and unauthorized access by internet criminals in the United States. The study adopted a survey research design, collecting data from 890 cloud professionals with relevant knowledge of cybersecurity and cloud computing. A machine learning approach was adopted, specifically a random forest classifier, an ensemble, and a decision tree model. Out of the features in the data, ten important features were selected using random forest feature importance, which helps to achieve the objective of the study. The study's purpose is to enable organizations to develop suitable techniques to prevent cybercrime using random forest predictions as they relate to cloud services in the United States. The effectiveness of the models used is evaluated by utilizing validation matrices that include recall values, accuracy, and precision, in addition to F1 scores and confusion matrices. Based on evaluation scores (accuracy, precision, recall, and F1 scores) of 81.9%, 82.6%, and 82.1%, the results demonstrated the effectiveness of the random forest model. It showed the importance of machine learning algorithms in preventing cybercrime and boosting security in the cloud environment. It recommends that other machine learning models be adopted to see how to improve cybersecurity through cloud computing.
Abstract: Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
Abstract: This paper examines how the adoption of cloud computing affects the relationship between the technical and environmental capabilities of small and medium-sized enterprises (SMEs) in the tourism industry in Henan Province, China, thereby promoting the stable and sustainable development of the tourism industry. In line with the laws of tourism market development, this involves vigorously constructing smart tourism projects, guiding tourism cloud service providers to strengthen cooperation and contact with the market's tourism enterprises, introducing and utilizing cloud computing technology, optimizing and improving the functions of enterprises' various tourism services, and strengthening the processing and analysis of enterprise-related data to provide tourism information. The paper further studies the adoption of cloud computing and its impact on SMEs in terms of technology and business environment knowledge, so as to make the best enterprise management decisions and realize an overall enhancement of the enterprise's tourism brand value.
Abstract: Reliability, QoS and energy consumption are three important concerns of cloud service providers. Most of the current research on reliable task deployment in cloud computing focuses on only one or two of the three concerns. However, these three factors have intrinsic trade-off relationships. Existing studies show that load concentration can reduce the number of servers and hence save energy. In this paper, we deal with the problem of reliable task deployment in data centers, with the goal of minimizing the number of servers used in cloud data centers under the constraint that the job execution deadline can be met upon single server failure. We propose a QoS-Constrained, Reliable and Energy-efficient task replica deployment (QSRE) algorithm for the problem by combining task replication and re-execution. For each task in a job that cannot finish executing by re-execution within the deadline, we initiate two replicas for the task: a main task and a task replica. Each main task runs on an individual server. The associated task replica is deployed on a backup server and completes part of the whole task load before the main task fails. Different from the main tasks, multiple task replicas can be allocated to the same backup server to reduce the energy consumption of cloud data centers by minimizing the number of servers required for running the task replicas. Specifically, QSRE assigns the task replicas with the longest and the shortest execution times to the backup servers in turn, such that the task replicas can meet the QoS-specified job execution deadline under main task failure. We conduct experiments through simulations. The experimental results show that QSRE can effectively reduce the number of servers used, while ensuring the reliability and QoS of job execution.
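The longest-with-shortest pairing of task replicas onto backup servers can be sketched as follows; the helper name and the pair layout are illustrative, and the sketch ignores deadline checks that the full algorithm would apply.

```python
def pair_replicas(exec_times):
    """Pair the longest and shortest remaining replicas onto the same backup server.

    Returns a list of index tuples: (longest, shortest) per backup server,
    with a possible singleton when the replica count is odd.
    """
    order = sorted(range(len(exec_times)), key=lambda i: exec_times[i])
    pairs = []
    lo, hi = 0, len(order) - 1
    while lo < hi:
        pairs.append((order[hi], order[lo]))  # (longest, shortest)
        lo += 1
        hi -= 1
    if lo == hi:
        pairs.append((order[lo],))
    return pairs
```

Balancing a long replica with a short one keeps each backup server's total load roughly even, which is what lets multiple replicas share a server without missing the deadline.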
Abstract: The current education field is experiencing an innovation driven by big data and cloud technologies, and these advanced technologies play a central role in the construction of smart campuses. Big data technology has a wide range of applications in student learning behavior analysis, teaching resource management, campus safety monitoring, and decision support, which improves the quality of education and management efficiency. Cloud computing technology supports the integration, distribution, and optimal use of educational resources through cloud resource sharing, virtual classrooms, intelligent campus management systems, and Infrastructure-as-a-Service (IaaS) models, which reduce costs and increase flexibility. This paper comprehensively discusses the practical application of big data and cloud computing technologies in smart campuses, showing how these technologies can contribute to the development of smart campuses, and laying the foundation for the future innovation of education models.
Abstract: The rapid expansion of the Internet of Things (IoT) has driven the need for advanced computational frameworks capable of handling the complex data processing and security challenges that modern IoT applications demand. However, traditional cloud computing frameworks face significant latency, scalability, and security issues. Quantum-Edge Cloud Computing (QECC) offers an innovative solution by integrating the computational power of quantum computing with the low-latency advantages of edge computing and the scalability of cloud computing resources. This study is grounded in an extensive literature review, performance improvements, and metrics data from Bangladesh, focusing on smart city infrastructure, healthcare monitoring, and the industrial IoT sector. The discussion covers vital elements, including integrating quantum cryptography to enhance data security, the critical role of edge computing in reducing response times, and cloud computing's ability to support large-scale IoT networks with its extensive resources. Through case studies such as the application of quantum sensors in autonomous vehicles, the practical impact of QECC is demonstrated. Additionally, the paper outlines future research opportunities, including developing quantum-resistant encryption techniques and optimizing quantum algorithms for edge computing. The convergence of these technologies in QECC has the potential to overcome the current limitations of IoT frameworks, setting a new standard for future IoT applications.
Funding: Funded by the Key Research and Development Plan of Jiangsu Province (Social Development) No. BE20217162, and the Jiangsu Modern Agricultural Machinery Equipment and Technology Demonstration and Promotion Project No. NJ2021-19.
Abstract: With the rapid development of the Internet of Things (IoT), there are several challenges pertaining to security in IoT applications. Compared with the characteristics of the traditional Internet, the IoT has many problems, such as large assets, complex and diverse structures, and lack of computing resources. Traditional network intrusion detection systems cannot meet the security needs of IoT applications. In view of this situation, this study applies cloud computing and machine learning to the intrusion detection system of the IoT to improve detection performance. Usually, traditional intrusion detection algorithms require considerable time for training, and these intrusion detection algorithms are not suitable for cloud computing due to the limited computing power and storage capacity of cloud nodes; therefore, it is necessary to study intrusion detection algorithms with low weights, short training time, and high detection accuracy for deployment and application on cloud nodes. An appropriate classification algorithm is a primary factor for deploying cloud computing intrusion prevention systems and a prerequisite for the system to respond to intrusion and reduce intrusion threats. This paper discusses the problems related to IoT intrusion prevention in cloud computing environments. Based on the analysis of cloud computing security threats, this study extensively explores IoT intrusion detection, cloud node monitoring, and intrusion response in cloud computing environments by using cloud computing, an improved extreme learning machine, and other methods. We use the Multi-Feature Extraction Extreme Learning Machine (MFE-ELM) algorithm for cloud computing, which adds a multi-feature extraction process to cloud servers, and use the deployed MFE-ELM algorithm on cloud nodes to detect and discover network intrusions to cloud nodes. In our simulation experiments, a classical dataset for intrusion detection is selected as a test, and test steps such as data preprocessing, feature engineering, model training, and result analysis are performed. The experimental results show that the proposed algorithm can effectively detect and identify most network data packets with good model performance and achieve efficient intrusion detection for heterogeneous IoT data on cloud nodes. Furthermore, it enables the cloud server to discover nodes with serious security threats in the cloud cluster in real time, so that further security protection measures can be taken to obtain the optimal intrusion response strategy for the cloud cluster.
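The extreme learning machine underlying such an approach trains in closed form: a fixed random hidden layer followed by a regularized least-squares solve for the output weights, which is why training is fast enough for resource-limited cloud nodes. This pure-Python sketch omits the multi-feature extraction stage of MFE-ELM and uses assumed hyperparameters.

```python
import math
import random

def elm_train(X, y, hidden=20, reg=1e-3, seed=0):
    """Extreme Learning Machine: random tanh hidden layer, closed-form output weights."""
    rnd = random.Random(seed)
    d = len(X[0])
    W = [[rnd.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rnd.uniform(-1, 1) for _ in range(hidden)]
    # hidden-layer activations H (n_samples x hidden)
    H = [[math.tanh(sum(w[j] * x[j] for j in range(d)) + bi)
          for w, bi in zip(W, b)] for x in X]
    # solve (H^T H + reg*I) beta = H^T y by Gaussian elimination with pivoting
    n = len(H)
    A = [[sum(H[r][i] * H[r][k] for r in range(n)) + (reg if i == k else 0.0)
          for k in range(hidden)] for i in range(hidden)]
    t = [sum(H[r][i] * y[r] for r in range(n)) for i in range(hidden)]
    for col in range(hidden):
        piv = max(range(col, hidden), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(col + 1, hidden):
            f = A[r][col] / A[col][col]
            for k in range(col, hidden):
                A[r][k] -= f * A[col][k]
            t[r] -= f * t[col]
    beta = [0.0] * hidden
    for i in range(hidden - 1, -1, -1):
        beta[i] = (t[i] - sum(A[i][k] * beta[k]
                              for k in range(i + 1, hidden))) / A[i][i]
    return W, b, beta

def elm_predict(model, x):
    W, b, beta = model
    return sum(bt * math.tanh(sum(w[j] * x[j] for j in range(len(x))) + bi)
               for w, bi, bt in zip(W, b, beta))
```

Only `beta` is learned; the random input weights `W` and biases `b` are never updated, which is the design choice that removes iterative training entirely.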
Abstract: Cloud computing has taken over the high-performance distributed computing area, and it currently provides on-demand services and resource pooling over the web. As a result of constantly changing user service demand, the task scheduling problem has emerged as a critical analytical topic in cloud computing. The primary goal of scheduling tasks is to distribute tasks to available processors to construct the shortest possible schedule without breaching precedence restrictions. Assignments and schedules of tasks substantially influence system operation in a heterogeneous multiprocessor system. The diverse processes inside a heuristic-based task scheduling method will result in varying makespans in a heterogeneous computing system. As a result, an intelligent scheduling algorithm should efficiently determine the priority of every subtask based on the resources necessary to lower the makespan. This research introduces a novel efficient task scheduling method in cloud computing systems based on the cooperation search algorithm to tackle an essential task and schedule a heterogeneous cloud computing problem. The basic idea of this method is to use the advantages of meta-heuristic algorithms to get the optimal solution. We assess our algorithm's performance by running it through three scenarios with varying numbers of tasks. The findings demonstrate that the suggested technique beats the existing methods New Genetic Algorithm (NGA), Genetic Algorithm (GA), Whale Optimization Algorithm (WOA), Gravitational Search Algorithm (GSA), and Hybrid Heuristic and Genetic (HHG) by 7.9%, 2.1%, 8.8%, 7.7%, and 3.4%, respectively, according to makespan.
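Makespan, the comparison metric used above, is simply the finish time of the busiest processor. A minimal sketch, ignoring precedence constraints and communication delays that a full scheduler would model:

```python
def makespan(assignments, exec_time):
    """Finish time of the busiest processor for independent tasks.

    assignments: task-id -> processor-id
    exec_time:   task-id -> execution time of that task
    """
    finish = {}
    for task, proc in assignments.items():
        finish[proc] = finish.get(proc, 0.0) + exec_time[task]
    return max(finish.values())
```

A metaheuristic scheduler evaluates candidate `assignments` mappings with a function like this and keeps the ones that shrink the maximum.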
Funding: Funded by the University Grants Commission with UGC-Ref. No.: 3364/(NET-JUNE 2015).
Abstract: This paper presents a novel fuzzy firefly-based intelligent algorithm for load balancing in mobile cloud computing while reducing makespan. The proposed technique implicitly acts intelligently by using the inherent traits of fuzzy logic and the firefly algorithm. It automatically adjusts its behavior or converges depending on the information gathered during the search process and the objective function. It works for a 3-tier architecture, including cloudlet and public cloud. As cloudlets have limited resources, fuzzy logic is used for cloudlet selection using capacity and waiting time as input. Fuzzy logic provides human-like decisions without using any mathematical model. Firefly is a powerful meta-heuristic optimization technique to balance diversification and solution speed. It balances the load on cloud and cloudlet while minimizing makespan and execution time. However, it may get trapped in a local optimum; levy flight can handle this. Hybridization of fuzzy firefly with levy flight is a novel technique that provides reduced makespan, execution time, and degree of imbalance while balancing the load. Simulation has been carried out on the Cloud Analyst platform with National Aeronautics and Space Administration (NASA) and Clarknet datasets. Results show that the proposed algorithm outperforms Ant Colony Optimization Queue Decision Maker (ACOQDM), Distributed Scheduling Optimization Algorithm (DSOA), and Utility-based Firefly Algorithm (UFA) when compared in terms of makespan, degree of imbalance, and Figure of Merit.
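The firefly move with a Levy-flight perturbation can be sketched as below, using Mantegna's algorithm for the heavy-tailed step length. Parameter values are illustrative defaults, not the paper's tuned settings, and the fuzzy cloudlet-selection stage is omitted.

```python
import math
import random

def levy_step(beta=1.5):
    """Mantegna's algorithm: a step length drawn from a Levy-stable distribution."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1):
    """Move firefly i toward brighter firefly j, plus a Levy-flight perturbation."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared distance
    attract = beta0 * math.exp(-gamma * r2)          # attractiveness decays with distance
    return [a + attract * (b - a) + alpha * levy_step() for a, b in zip(xi, xj)]
```

The occasional long Levy jump is what lets a firefly escape a local optimum that pure attraction-based moves would keep it circling.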
Funding: Supporting Project Number (RSP2023R34), King Saud University, Riyadh, Saudi Arabia.
Abstract: Some of the significant new technologies researched in recent studies include BlockChain (BC), Software Defined Networking (SDN), and the Smart Industrial Internet of Things (IIoT). All three technologies provide data integrity and confidentiality in their respective use cases (especially in industrial fields). Additionally, cloud computing has been in use for several years now. Confidential information is exchanged with cloud infrastructure to provide clients with access to distant resources, such as computing and storage activities in the IIoT. There are also significant security risks, concerns, and difficulties associated with cloud computing. To address these challenges, we propose merging BC and SDN into a cloud computing platform for the IIoT. This paper introduces “DistB-SDCloud”, an architecture for enhanced cloud security for smart IIoT applications. The proposed architecture uses a distributed BC method to provide security, secrecy, privacy, and integrity while remaining flexible and scalable. Customers in the industrial sector benefit from the dispersed, or decentralized, and efficient environment of BC. Additionally, we describe an SDN method to improve the durability, stability, and load balancing of cloud infrastructure. The efficacy of our SDN and BC-based implementation was experimentally tested using various parameters, including throughput, packet analysis, response time, bandwidth, and latency analysis, as well as the monitoring of several attacks on the system itself.
Funding: Supported by the Yunnan Key Laboratory of Optoelectronic Information Technology, by grants funded by the National Natural Science Foundation of China (62062069, 62062070, and 62005235), and by the Taif University Researchers Supporting Project (TURSP-2020/126), Taif University, Taif, Saudi Arabia. Jun Liu and Kai Qian contributed equally to this paper.
Abstract: In recent years, statistics have indicated that the number of patients with malignant brain tumors has increased sharply. However, most surgeons still perform surgical training using the traditional autopsy and prosthesis model, which encounters many problems, such as insufficient corpse resources, low efficiency, and high cost. With the advent of the 5G era, a wide range of Industrial Internet of Things (IIoT) applications have been developed. Virtual Reality (VR) and Augmented Reality (AR) technologies that emerged with 5G are developing rapidly for intelligent medical training. To address the challenges encountered during neurosurgery training, and combining them with cloud computing, in this paper a highly immersive AR-based brain tumor neurosurgery remote collaborative virtual surgery training system is developed, in which a VR simulator is embedded. The system enables real-time remote surgery training interaction through 5G transmission. Six experts and 18 novices were invited to participate in the experiment to verify the system. Subsequently, the two simulators were evaluated using face and construct validation methods. The results obtained by training the novices 50 times were further analyzed using the Learning Curve-Cumulative Sum (LC-CUSUM) evaluation method to validate the effectiveness of the two simulators. The results of the face and content validation demonstrated that the AR simulator in the system was superior to the VR simulator in terms of vision and scene authenticity, and had a better effect on the improvement of surgical skills. Moreover, the surgical training scheme proposed in this paper is effective, and the remote collaborative training effect of the system is ideal.
Funding: Provincial Key Platforms and Major Scientific Research Projects of Universities in Guangdong Province, People's Republic of China, under Grant No. 2017GXJK116.
Abstract: The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing as one of the promising technologies that can improve overall communication performance. It brings on-demand services proximate to the end devices and delivers the requested data in a short time. Fog computing faces several issues, such as latency, bandwidth, and link utilization, due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, the Internet of Things (IoT) and the critical issues related to data security and dissemination in fog computing. Moreover, we examine fog-based caching schemes that help deal with the existing issues of fog computing. Besides, this paper presents a number of caching schemes with their contributions, benefits, and challenges in overcoming the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.