The current electricity market fails to consider the energy consumption characteristics of transaction subjects such as virtual power plants (VPPs). Besides, the game relationship between transaction subjects needs to be further explored. This paper proposes a Peer-to-Peer (P2P) energy trading method for multiple virtual power plants based on a non-cooperative game. Firstly, a coordinated control model of public buildings is incorporated into the scheduling framework of the virtual power plant, considering the energy consumption characteristics of users. Secondly, the utility functions of multiple virtual power plants are analyzed, and a non-cooperative game model is established to explore the game relationship between electricity sellers in the P2P transaction process. Finally, the influence of user energy consumption characteristics on virtual power plant operation and the P2P transaction process is analyzed through case studies. Furthermore, the effect of different parameters on the Nash equilibrium point is explored, and the influencing factors of P2P transactions between virtual power plants are summarized. According to the obtained results, compared with a constant-temperature control strategy for the central air conditioning, the flexible control strategy proposed in this paper improves the market power of each VPP and the overall revenue of the VPPs. In addition, the upper limit of the market operator's service quotation has a great impact on the transaction mode of the VPPs: as the service quotation decreases, P2P transactions between VPPs become more likely to occur.
Funding: Technology Project of State Grid Jiangsu Electric Power Co., Ltd., China, under Grant 2021200.
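As a hedged illustration of how a non-cooperative quotation game reaches a Nash equilibrium, the sketch below runs a best-response iteration for two sellers. The linear demand curve, marginal cost, and quotation cap are invented for demonstration and are not the paper's utility functions; the cap stands in for the market operator's upper limit on the service quotation.

```python
def best_response(p_rival: float, cost: float = 0.30, cap: float = 0.55) -> float:
    """Profit-maximizing quotation against a rival's price, for a seller with
    marginal cost `cost` facing stylized linear demand q = 1 - p + 0.5 * p_rival,
    capped by the market operator's quotation ceiling `cap` (all values invented)."""
    p_star = (1 + 0.5 * p_rival + cost) / 2   # argmax of (p - cost) * (1 - p + 0.5 * p_rival)
    return min(p_star, cap)

p1 = p2 = 0.40
for _ in range(100):                          # best-response iteration converges to equilibrium
    p1, p2 = best_response(p2), best_response(p1)
print(f"equilibrium quotations: p1 = {p1:.4f}, p2 = {p2:.4f}")
```

With these toy numbers the unconstrained equilibrium price exceeds the cap, so both sellers settle at the ceiling, mirroring the abstract's observation that the operator's quotation limit shapes the transaction mode.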
Line graphics of cultural relics serve as a crucial form of traditional artifact documentation; compared with 3D models, they are simple, intuitive, and inexpensive to display. Dimensionality reduction is therefore necessary for line drawings. However, most existing methods for artifact drawing rely on orthographic projection, which cannot avoid angle occlusion and data overlapping when the surface of a cultural relic is complex. Therefore, conformal mapping was introduced as a dimensionality-reduction approach to compensate for the limitations of orthographic projection. Based on given criteria for assessing surface complexity, this paper proposes a three-dimensional feature guideline extraction method for complex cultural relic surfaces. A combined 2D and 3D factor, the vertex weight, was designed to measure the importance of each point in describing surface features. The selection threshold for feature guideline extraction was then determined from the differences between the vertex weight and shape index distributions. Feasibility and stability were verified through experiments on real cultural relic surface data. The results demonstrate the method's ability to address the challenges of automatically generating line drawings for complex surfaces. The extraction method and the obtained results will be useful for the drawing, display, and promotion of line graphics of cultural relics.
Funding: National Natural Science Foundation of China (Nos. 42071444, 42101444).
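The vertex weight is this paper's own construct, but the shape index it is compared against is the standard Koenderink and van Doorn quantity; a minimal sketch of computing it from principal curvatures:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures (k1 >= k2):
    -1 is a spherical cup, 0 a saddle, +1 a spherical cap."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

print(shape_index(1.0, 1.0))    # dome   -> 1.0
print(shape_index(1.0, -1.0))   # saddle -> 0.0
```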
In the cloud environment, ensuring a high level of data security is in high demand. Data planning storage optimization is part of the whole security process in the cloud environment: it supports data security by avoiding the risk of data loss and data overlapping. The development of data flow scheduling approaches for the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which periodically collects information on the state of the network links. The second is the monitoring agent, which analyzes and classifies that information, makes a decision on the state of each link, and transmits this information to the scheduler. The third is the scheduler, which considers the previous information to transfer user data with fair distribution over reliable paths. Each part of the proposed model requires the development of its own algorithms. In this article, we focus on the data transfer algorithms, including fair distribution with consideration of a stable link state. These algorithms are based on grouping the transmitted files and on an iterative method. The proposed algorithms obtain an approximate solution to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME), with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
Funding: Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, Project Number (IFP-2022-34).
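The HME algorithm itself is not reproduced in the abstract; as a hedged baseline in the same spirit (group the files, then dispatch them fairly over stable links), the sketch below uses a simple longest-processing-time greedy rule. The file sizes and link names are invented for illustration.

```python
def dispatch(files, links):
    """Greedy sketch: sort files by size (descending) and always place the next
    file on the link with the smallest accumulated load, keeping the distribution fair."""
    load = {link: 0 for link in links}
    plan = []
    for name, size in sorted(files, key=lambda f: -f[1]):
        link = min(load, key=load.get)       # least-loaded stable link
        load[link] += size
        plan.append((name, link))
    return plan, load

files = [("f1", 700), ("f2", 300), ("f3", 500), ("f4", 200)]
plan, load = dispatch(files, ["vm-a", "vm-b"])
print(plan)   # assignment of each file to a link
print(load)   # per-link totals stay balanced
```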
This article explores the evolution of cloud computing, its advantages over traditional on-premises infrastructure, and its impact on information security. The study presents a comprehensive literature review covering various cloud infrastructure offerings and security models. Additionally, it provides an in-depth analysis of real-life case studies illustrating successful cloud migrations and highlights common information security threats in current cloud computing. The article concludes by offering recommendations to businesses on protecting themselves from cloud data breaches and providing insights into selecting a suitable cloud services provider from an information security perspective.
Amid the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) stands as a conglomerate of physical servers whose performance can be hindered by bottlenecks as CC services proliferate. A linchpin of CC performance, the Cloud Service Broker (CSB) orchestrates DC selection. Failure to route user requests to suitable DCs transforms the CSB into a bottleneck, endangering service quality. Deploying an efficient CSB policy is therefore imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Although numerous CSB policies exist, their implementation grapples with challenges such as cost and availability. This article undertakes a holistic review of diverse CSB policies while surveying the predicaments confronting current policies. The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development. Additionally, it clarifies the various DC selection methodologies employed in CC, enriching practitioners and researchers alike. Employing synthetic analysis, the article systematically assesses and compares the many DC selection techniques. These analytical insights equip decision-makers with a pragmatic framework for discerning the technique apt for their needs. In summary, this review underscores the paramount importance of adept CSB policies in DC selection and the imperative role of efficient CSB policies in optimizing CC performance. By emphasizing the significance of these policies and their modeling implications, the article contributes both to the general modeling discourse and to its practical applications in the CC domain.
The cloud type product 2B-CLDCLASS-LIDAR, based on CloudSat and CALIPSO data from June 2006 to May 2017, is used to examine the temporal and spatial distribution characteristics and interannual variability of eight cloud types (high cloud, altostratus, altocumulus, stratus, stratocumulus, cumulus, nimbostratus, and deep convection) and three phases (ice, mixed, and water) in the Arctic. Possible reasons for the observed interannual variability are also discussed. The main conclusions are as follows: (1) More water clouds occur on the Atlantic side, and more ice clouds occur over continents. (2) The average spatial and seasonal distributions of cloud types show three patterns: high clouds and most cumuliform clouds are concentrated at low-latitude locations and peak in summer; altostratus and nimbostratus are concentrated over and around continents and are less abundant in summer; stratocumulus and stratus are concentrated near the inner Arctic and peak during spring and autumn. (3) Regionally averaged interannual frequencies of ice clouds and altostratus clouds decrease significantly, while those of water clouds, altocumulus, and cumulus clouds increase significantly. (4) Significant features of the linear trends of cloud frequencies are mainly located over ocean areas. (5) The monthly water cloud frequency anomalies are positively correlated with air temperature in most of the troposphere, while those of ice clouds are negatively correlated. (6) The decrease in altostratus clouds is associated with the weakening of the Arctic front due to Arctic warming, while increased water vapor transport into the Arctic and higher atmospheric instability lead to more cumulus and altocumulus clouds.
Funding: National Natural Science Foundation of China (Grant No. 42105127); Special Research Assistant Project of the Chinese Academy of Sciences; National Key Research and Development Plans of China (Grant Nos. 2019YFC1510304 and 2016YFE0201900-02).
This study investigates how cybersecurity can be enhanced through cloud computing solutions in the United States. It is motivated by the rampant loss of data, breaches, and unauthorized access by internet criminals in the United States. The study adopted a survey research design, collecting data from 890 cloud professionals with relevant knowledge of cybersecurity and cloud computing. A machine learning approach was adopted, specifically a random forest classifier, i.e., an ensemble of decision tree models. Out of the features in the data, the ten most important were selected using random forest feature importance, which helps to achieve the objective of the study. The study's purpose is to enable organizations to develop suitable techniques to prevent cybercrime using random forest predictions as they relate to cloud services in the United States. The effectiveness of the models is evaluated using validation metrics that include recall, accuracy, and precision values, in addition to F1 scores and confusion matrices. Based on evaluation scores (accuracy, precision, recall, and F1 scores) of 81.9%, 82.6%, and 82.1%, the results demonstrated the effectiveness of the random forest model. They show the importance of machine learning algorithms in preventing cybercrime and boosting security in the cloud environment. The study recommends that other machine learning models be adopted to explore how cybersecurity can be further improved through cloud computing.
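A hedged sketch of the pipeline the abstract describes: train a random forest, keep the ten strongest features by impurity-based importance, retrain on those, and report precision/recall/F1. The synthetic dataset stands in for the survey data, which is not available here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 890 survey responses (shapes and labels invented).
X, y = make_classification(n_samples=890, n_features=25, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top10 = np.argsort(rf.feature_importances_)[-10:]          # keep the 10 strongest features

rf10 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top10], y_tr)
print(classification_report(y_te, rf10.predict(X_te[:, top10])))  # accuracy/precision/recall/F1
```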
This paper was motivated by existing problems of cloud data storage at Imo State University, Nigeria, such as outsourced data causing the loss of data and the misuse of customer information by unauthorized users or hackers, thereby leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and suspicious actions. The aim of this paper, therefore, is to analyze a secure model that uses Unicode Transformation Format (UTF) Base64 algorithms to store data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) algorithms were integrated to enhance security in an information system developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses related concepts, including the development of cloud computing, characteristics of cloud computing, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for the information systems of a corporate platform handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which then redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. The newly developed system solves the shortcomings of the existing systems and reduces the time and resources incurred in using them.
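For reference, Base64 as used here is an encoding, not encryption, which is presumably why the model pairs it with RBAC and MFA; a minimal sketch with Python's standard library (sample record invented):

```python
import base64

record = "client 42: invoice ledger".encode("utf-8")
stored = base64.b64encode(record)                  # what would be written to cloud storage
restored = base64.b64decode(stored).decode("utf-8")
assert restored == "client 42: invoice ledger"
print(stored)                                      # ASCII-safe payload for JSON/HTML transport
```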
Access control schemes are an effective method to protect user data privacy. Access control schemes based on blockchain and ciphertext-policy attribute-based encryption (CP-ABE) can solve the single point of failure and lack of trust in centralized systems. However, they also bring new problems for health information in the cloud storage environment, such as attribute leakage, low consensus efficiency, and complex permission updates. This paper proposes an access control scheme with fine-grained attribute revocation, keyword search, and traceability of the attribute private key distribution process. Blockchain technology tracks the authorization of attribute private keys. A credit scoring method improves the consensus efficiency of the Raft protocol. Besides, the InterPlanetary File System (IPFS) addresses the capacity deficit of blockchain. Under the premise of policy hiding, the research proposes a fine-grained access control method based on users, user attributes, and file structure, which optimizes the data-sharing mode. At the same time, proxy re-encryption (PRE) technology is used to update access rights. The proposed scheme is proved secure. Comparative analysis and experimental results show that it has higher efficiency and more functionality, and can meet the needs of medical institutions.
Funding: National Natural Science Foundation of China (Grant No. 62162039); Shaanxi Provincial Key R&D Program, China (Grant No. 2020GY-041).
In this paper, we present an approach to establish efficient and scalable service provisioning in the cloud environment using a P2P-based infrastructure for storing, sharing, and discovering services. Unlike most other P2P-based approaches, it allows flexible search queries, since all of them are executed against an internal database present at each overlay node. Various issues concerning the use of this approach in the cloud environment, such as load balancing, queuing, and dealing with skewed data and dynamic attributes, are addressed in the paper. The proposed infrastructure can serve as a base for creating robust, scalable, and reliable cloud systems that are able to fulfill clients' QoS requirements while introducing more efficient utilization of resources for the cloud provider.
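The claim that node-local databases enable richer queries than plain key lookups can be illustrated with a small sketch; the table layout and service entries below are invented stand-ins for a node's internal registry.

```python
import sqlite3

# Each overlay node keeps a local service registry, so SQL-style range queries
# run node-locally instead of as constrained key lookups in the overlay.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE services (name TEXT, cpu INTEGER, price REAL)")
db.executemany("INSERT INTO services VALUES (?, ?, ?)",
               [("vm-small", 2, 0.02), ("vm-large", 16, 0.31), ("db-ha", 8, 0.12)])

# A flexible multi-attribute range query that a plain key-value overlay
# could not express directly.
rows = db.execute("SELECT name FROM services WHERE cpu >= 4 AND price < 0.2").fetchall()
print(rows)   # -> [('db-ha',)]
```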
This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in 3D point cloud semantic segmentation to capture semantic integrity, empowered by dynamic capsule routing with a 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Secondly, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype version for capturing rich semantics. Besides, the shot correlation within each category is calculated via the interaction of different samples to enhance intra-category similarity. Ablation studies prove that the part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, with these two modules integrated, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on 3D point cloud semantic segmentation compared to some state-of-the-art methods.
Funding: National Natural Science Foundation of China (Grant No. 62001341); National Natural Science Foundation of Jiangsu Province (Grant No. BK20221379); Jiangsu Engineering Research Center of Digital Twinning Technology for Key Equipment in Petrochemical Process (Grant No. DTEC202104).
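The paper builds on dynamic capsule routing; the sketch below is the textbook routing-by-agreement loop (in the style of Sabour et al.), not the authors' 3D variant, with tensor shapes chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def dynamic_routing(u_hat: torch.Tensor, n_iter: int = 3) -> torch.Tensor:
    """u_hat: (n_in, n_out, d) prediction vectors from lower-level capsules.
    Returns the higher-level capsule outputs (n_out, d)."""
    b = torch.zeros(u_hat.shape[0], u_hat.shape[1])        # routing logits
    for _ in range(n_iter):
        c = F.softmax(b, dim=1)                            # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=0)           # weighted sum per output capsule
        norm = s.norm(dim=-1, keepdim=True)
        v = (norm**2 / (1 + norm**2)) * s / (norm + 1e-8)  # squash non-linearity
        b = b + (u_hat * v.unsqueeze(0)).sum(dim=-1)       # agreement update
    return v

v = dynamic_routing(torch.randn(32, 8, 16))                # 32 parts routed to 8 wholes
print(v.shape)                                             # torch.Size([8, 16])
```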
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be evaluated against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) when implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus on how various hypervisors perform while handling other cryptographic workloads.
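A minimal sketch of the kind of guest-side measurement such a comparison relies on: time AES encrypt-plus-decrypt over several file sizes, run identically inside a Hyper-V and a KVM guest, then compare. It uses the Python `cryptography` package; the mode, key size, and file sizes are illustrative choices, not the study's protocol.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def bench_aes(data: bytes, runs: int = 5) -> float:
    """Average wall-clock seconds for one AES-256-CTR encrypt + decrypt of `data`."""
    key, iv = os.urandom(32), os.urandom(16)
    start = time.perf_counter()
    for _ in range(runs):
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        ct = enc.update(data) + enc.finalize()
        dec = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
        _ = dec.update(ct) + dec.finalize()
    return (time.perf_counter() - start) / runs

for size_mb in (1, 16, 64):                      # run inside each hypervisor's guest VM
    t = bench_aes(os.urandom(size_mb * 2**20))
    print(f"AES-256-CTR, {size_mb} MiB: {t:.4f} s per encrypt+decrypt")
```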
Cloud computing has drastically changed the delivery and consumption of live streaming content. The designs, challenges, and possible uses of cloud computing for live streaming are studied. A comprehensive overview of the technical and business issues surrounding cloud-based live streaming is provided, including the benefits of cloud computing, the various live streaming architectures, and the challenges that live streaming service providers face in delivering high-quality, real-time services. The different techniques used to improve the performance of video streaming, such as adaptive bit-rate streaming, multicast distribution, and edge computing, are discussed, and the necessity of low-latency, high-quality video transmission in cloud-based live streaming is underlined. Issues such as improving user experience and live streaming service performance using cutting-edge technology, like artificial intelligence and machine learning, are discussed. In addition, the legal and regulatory implications of cloud-based live streaming, including issues of network neutrality, data privacy, and content moderation, are addressed. The future of cloud computing for live streaming is then examined, looking at the most likely new developments in trends and technology. For technology vendors, live streaming service providers, and regulators, the findings have major policy-relevant implications. Suggestions on how stakeholders should address these concerns and take advantage of the potential presented by this rapidly evolving sector are provided, along with insights into the key challenges and opportunities associated with cloud-based live streaming.
[...] logical testing model and resource lifecycle information, generate test cases and complete parameters, and alleviate inconsistency issues through parameter inference. We further propose a method of analyzing test results using joint state codes and call stack information, which compensates for the shortcomings of traditional analysis methods. We apply our method to testing REST services, including OpenStack, an open-source cloud operating platform, for experimental evaluation. We found a series of inconsistencies, known vulnerabilities, and new, previously unknown logical defects.
Cavitation is a prevalent phenomenon in ship and ocean engineering, predominantly occurring in the tail flow fields of high-speed rotating propellers and on the surfaces of high-speed underwater vehicles. The re-entrant jet and the compression wave resulting from the collapse of cavity vapour are pivotal factors contributing to cavity instability, and these phenomena significantly modulate the evolution of the cavitation flow. In this paper, numerical investigations of cloud cavitation over a Clark-Y hydrofoil were conducted using the Large Eddy Simulation (LES) turbulence model and the Volume of Fluid (VOF) method within the OpenFOAM framework. Results obtained at different angles of attack are compared. Cavity thickness increases discernibly with the angle of attack, while pressure at the leading edge of the hydrofoil progressively intensifies, contributing to the suction force. These results can serve as a fundamental reference for a deeper understanding of cloud cavitation dynamics.
Funding: National Natural Science Foundation of China (Nos. 12202011, 12332014); China Postdoctoral Science Foundation (No. 2022M710190).
Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries, especially in light of the growing number of cybersecurity threats. A major and ever-present threat is Ransomware-as-a-Service (RaaS) attacks, which enable even individuals with minimal technical knowledge to conduct ransomware operations. This study provides a new approach for RaaS attack detection that uses an ensemble of deep learning models. For this purpose, the network intrusion detection dataset UNSW-NB15 from the Intelligent Security Group of the University of New South Wales, Australia, is analyzed. In the initial phase, three separate Multi-Layer Perceptron (MLP) models are developed, based respectively on the rectified linear unit, the scaled exponential linear unit, and the exponential linear unit. Later, using the combined predictive power of these three MLPs, the RansoDetect Fusion ensemble model is introduced. The proposed ensemble technique outperforms previous studies with impressive performance metrics, including 98.79% accuracy and recall, 98.85% precision, and 98.80% F1-score. The empirical results validate the ensemble model's ability to improve cybersecurity defenses by showing that it outperforms the individual MLP models. In expanding the field of cybersecurity strategy, this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
Funding: Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia, Research Groups Funding Program (Grant Code NU/RG/SERC/12/43).
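A hedged sketch of the ensemble idea: three MLPs that differ only in their activation function (ReLU, SELU, ELU), fused by soft voting. The fusion rule, layer widths, and feature count are assumptions for illustration, since the abstract does not specify them, and the networks below are untrained; only the fusion mechanics are shown.

```python
import torch
import torch.nn as nn

def make_mlp(activation: nn.Module, n_features: int = 42, n_classes: int = 2) -> nn.Sequential:
    """One hidden layer; only the activation differs between ensemble members."""
    return nn.Sequential(nn.Linear(n_features, 64), activation,
                         nn.Linear(64, n_classes))

# Three MLPs, one per activation named in the abstract.
members = [make_mlp(act()) for act in (nn.ReLU, nn.SELU, nn.ELU)]

x = torch.randn(8, 42)                                  # stand-in for UNSW-NB15 feature rows
probs = torch.stack([m(x).softmax(dim=1) for m in members]).mean(dim=0)
fused_prediction = probs.argmax(dim=1)                  # soft-voting fusion of the three MLPs
print(fused_prediction)
```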
We introduce Quafu-Qcover, an open-source cloud-based software package developed for solving combinatorial optimization problems using quantum simulators and hardware backends. Quafu-Qcover provides a standardized and comprehensive workflow that utilizes the quantum approximate optimization algorithm (QAOA). It facilitates the automatic conversion of the original problem into a quadratic unconstrained binary optimization (QUBO) model and its corresponding Ising model, which can subsequently be transformed into a weight graph. The core of Qcover relies on a graph-decomposition-based classical algorithm, which efficiently derives the optimal parameters for the shallow QAOA circuit. Quafu-Qcover incorporates a dedicated compiler capable of translating QAOA circuits into physical quantum circuits that can be executed on Quafu cloud quantum computers. Compared to a general-purpose compiler, our compiler generates shorter circuit depths while also exhibiting superior compilation speed. Additionally, the Qcover compiler can dynamically create a library of qubit coupling substructures in real time, utilizing the most recent calibration data from the superconducting quantum devices, ensuring that computational tasks are assigned to connected physical qubits with the highest fidelity. Quafu-Qcover allows us to retrieve quantum computing sampling results using a task ID at any time, enabling asynchronous processing. Moreover, it incorporates modules for results preprocessing and visualization, facilitating an intuitive display of solutions to combinatorial optimization problems. We hope that Quafu-Qcover can serve as an instructive illustration of how to explore application problems on the Quafu cloud quantum computers.
Funding: National Natural Science Foundation of China (Grant Nos. 92365206 and 12247168); China Postdoctoral Science Foundation (Certificate Numbers 2023M740272 and 2022TQ0036).
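The QUBO-to-Ising conversion in the workflow is a standard variable substitution, x = (1 + s) / 2 with s in {-1, +1}; the sketch below is that textbook mapping (not Quafu-Qcover's own code), with a brute-force check that both forms agree on a tiny invented instance.

```python
import numpy as np

def qubo_to_ising(Q: np.ndarray):
    """Map min x^T Q x (x in {0,1}^n) to min s^T J s + h^T s + offset (s in {-1,+1}^n)."""
    Q = (Q + Q.T) / 2                # symmetrize; the objective value is unchanged
    J = Q / 4
    np.fill_diagonal(J, 0)           # s_i^2 = 1, so diagonal terms move into the offset
    h = Q.sum(axis=1) / 2            # linear field; includes Q_ii since x_i^2 = x_i
    offset = Q.sum() / 4 + np.trace(Q) / 4
    return J, h, offset

Q = np.array([[1.0, -2.0], [0.0, 3.0]])   # tiny invented QUBO instance
J, h, offset = qubo_to_ising(Q)
for bits in ((0, 0), (0, 1), (1, 0), (1, 1)):
    x = np.array(bits, dtype=float)
    s = 2 * x - 1
    assert np.isclose(x @ Q @ x, s @ J @ s + h @ s + offset)
print("QUBO and Ising energies agree on all assignments")
```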
Cloud computing has emerged as a viable alternative to traditional computing infrastructures, offering various benefits. However, the adoption of cloud storage poses significant risks to data secrecy and integrity. This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology, smart contracts, and cryptographic primitives. The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data. To preserve data secrecy, symmetric encryption is employed to encrypt user data before outsourcing it. An extensive performance analysis illustrates the efficiency of the proposed mechanism. Additionally, a rigorous assessment ensures that the developed smart contract is free from vulnerabilities, and its associated running costs are measured. The security analysis confirms that the approach can securely maintain the confidentiality and integrity of cloud storage, even in the presence of malicious entities. The proposed mechanism enhances data security in cloud computing environments and can serve as a foundation for developing more secure cloud storage systems.
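A hedged client-side sketch of the two cryptographic halves: encrypt before outsourcing (secrecy) and keep a digest that an on-chain auditor could later verify (integrity). The Solidity contract itself is out of scope here, so the on-chain step appears only as a comment; the cipher choice (AES-GCM) and sample record are assumptions.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # kept by the data owner, never outsourced
nonce = os.urandom(12)
data = b"patient-record-0042"               # invented sample plaintext

ciphertext = AESGCM(key).encrypt(nonce, data, None)   # secrecy: encrypt before upload
digest = hashlib.sha256(ciphertext).hexdigest()       # integrity: digest for the auditor

# ... upload `ciphertext` to the public cloud; record `digest` via the smart contract ...

retrieved = ciphertext                       # stand-in for a later download from the cloud
assert hashlib.sha256(retrieved).hexdigest() == digest, "cloud data was tampered with"
plaintext = AESGCM(key).decrypt(nonce, retrieved, None)
print(plaintext)
```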