To overcome the difficulty of realizing large-scale quantum Fourier transform (QFT) within existing technology, this paper implements a resource-saving method (named t-bit semiclassical QFT over Z_(2^n)), which can realize large-scale QFT using an arbitrary-scale quantum register. By developing a feasible method to realize the controlled quantum gate R_k, we experimentally realize the 2-bit semiclassical QFT over Z_(2^3) on IBM's quantum cloud computer, which shows the feasibility of the method. We then compare the actual performance of the 2-bit semiclassical QFT with the standard QFT in experiments. The squared statistical overlap data show that the fidelity of the 2-bit semiclassical QFT is higher than that of the standard QFT, mainly because the semiclassical QFT uses fewer two-qubit gates. Furthermore, based on the proposed method, N = 15 is successfully factorized by implementing Shor's algorithm.
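For readers who want a concrete picture of the transform itself (not the authors' semiclassical circuit or the IBM experiments), the following minimal NumPy sketch builds the QFT matrix over Z_(2^n) for n = 3 and checks that it is unitary and coincides with the normalized inverse DFT. In the circuit form, the controlled-phase gates R_k contribute the phase factors exp(2πi/2^k).

```python
import numpy as np

def qft_matrix(n):
    """QFT over Z_(2^n): F[j, k] = exp(2*pi*i*j*k / N) / sqrt(N)."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)                                       # 8x8 transform, matching Z_(2^3)
assert np.allclose(F @ F.conj().T, np.eye(8))           # unitarity
# Same matrix as the inverse DFT rescaled to unitary normalization.
assert np.allclose(F, np.sqrt(8) * np.fft.ifft(np.eye(8), axis=0))
```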
We introduce Quafu-Qcover, an open-source cloud-based software package developed for solving combinatorial optimization problems using quantum simulators and hardware backends. Quafu-Qcover provides a standardized and comprehensive workflow that utilizes the quantum approximate optimization algorithm (QAOA). It facilitates the automatic conversion of the original problem into a quadratic unconstrained binary optimization (QUBO) model and its corresponding Ising model, which can subsequently be transformed into a weighted graph. The core of Qcover relies on a graph-decomposition-based classical algorithm, which efficiently derives the optimal parameters for the shallow QAOA circuit. Quafu-Qcover incorporates a dedicated compiler capable of translating QAOA circuits into physical quantum circuits that can be executed on Quafu cloud quantum computers. Compared with a general-purpose compiler, our compiler generates shorter circuit depths and runs faster. Additionally, the Qcover compiler can dynamically create a library of qubit coupling substructures in real time, using the most recent calibration data from the superconducting quantum devices, which ensures that computational tasks are assigned to connected physical qubits with the highest fidelity. Quafu-Qcover allows us to retrieve quantum computing sampling results using a task ID at any time, enabling asynchronous processing. Moreover, it incorporates modules for result preprocessing and visualization, facilitating an intuitive display of solutions to combinatorial optimization problems. We hope that Quafu-Qcover can serve as an instructive illustration of how to explore application problems on the Quafu cloud quantum computers.
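The Quafu-Qcover internals are not reproduced here; the sketch below only illustrates the standard QUBO-to-Ising substitution mentioned above, using the common convention s_i = 1 - 2*x_i. The function name, the toy matrix, and the brute-force check are illustrative, not the package's API.

```python
import numpy as np
from itertools import product

def qubo_to_ising(Q):
    """Rewrite E(x) = x^T Q x (x_i in {0,1}) as
    E(s) = offset + sum_i h_i s_i + sum_{i<j} J_ij s_i s_j with s_i = 1 - 2*x_i."""
    Qs = (Q + Q.T) / 2.0                       # symmetrize the couplings
    offset = (Qs.sum() + np.trace(Qs)) / 4.0
    h = -Qs.sum(axis=1) / 2.0
    J = np.triu(Qs, k=1) / 2.0                 # upper-triangular pairwise terms
    return h, J, offset

Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0, 0.0, -1.0]])              # toy QUBO matrix
h, J, offset = qubo_to_ising(Q)
for bits in product([0, 1], repeat=3):         # exhaustive check of the mapping
    x = np.array(bits, dtype=float)
    s = 1.0 - 2.0 * x
    assert np.isclose(x @ Q @ x, offset + h @ s + s @ J @ s)
```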
With the rapid advancement of quantum computing, hybrid quantum–classical machine learning has shown numerous potential applications at the current stage, with expectations of being achievable in the noisy intermediate-scale quantum (NISQ) era. Quantum reinforcement learning, as an indispensable line of study, has recently demonstrated its ability to solve standard benchmark environments with formally provable theoretical advantages over classical counterparts. However, despite the progress of quantum processors and the emergence of quantum computing clouds, implementations of quantum reinforcement learning algorithms using parameterized quantum circuits (PQCs) on NISQ devices remain infrequent. In this work, we take the first step towards executing benchmark quantum reinforcement learning problems on real devices equipped with at most 136 qubits on the BAQIS Quafu quantum computing cloud. The experimental results demonstrate that the policy agents can successfully accomplish objectives under modified conditions in both the training and inference phases. Moreover, we design hardware-efficient PQC architectures in the quantum model using a multi-objective evolutionary algorithm and develop a learning algorithm that is adaptable to quantum devices. We hope that Quafu-RL can be a guiding example of how to realize machine learning tasks by taking advantage of quantum computers on the quantum cloud platform.
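The Quafu-RL circuits and the hardware runs are not shown here. The following toy sketch only illustrates the basic idea of a PQC acting as a policy: a single-qubit state-encoding rotation followed by a trainable rotation, with two Pauli expectation values turned into action probabilities. The one-qubit ansatz, the readout observables, and all names are assumptions for illustration.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY(theta) rotation (real-valued, so plain float math suffices).
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def pqc_action_probs(state_feature, trainable_angle):
    """Toy PQC policy: encode one scalar observation, apply one trainable layer,
    read out <Z> and <X>, and map the two logits to action probabilities."""
    psi = ry(trainable_angle) @ ry(state_feature) @ np.array([1.0, 0.0])  # |0> input
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    logits = np.array([psi @ Z @ psi, psi @ X @ psi])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

print(pqc_action_probs(state_feature=0.3, trainable_angle=0.7))
```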
Recently, there have been some attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code of this paper is available at https://github.com/yahuiliu99/PointConT.
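To make the content-based attention idea concrete, here is a minimal NumPy sketch (not the released PointConT code): points are grouped by feature similarity to a few centroids, and plain scaled dot-product self-attention is computed only within each group, so the attention cost depends on cluster size rather than the full point count. Projections, cluster count, and the assignment rule are simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(feats):
    """Plain scaled dot-product self-attention (identity projections for brevity)."""
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ feats

def content_based_attention(feats, num_clusters=4):
    """Group points by feature similarity, then attend only within each group."""
    centroids = feats[rng.choice(len(feats), num_clusters, replace=False)]
    assign = np.argmax(feats @ centroids.T, axis=1)   # nearest centroid in feature space
    out = np.empty_like(feats)
    for c in range(num_clusters):
        idx = np.flatnonzero(assign == c)
        if idx.size:
            out[idx] = self_attention(feats[idx])
    return out

points_feat = rng.normal(size=(1024, 32))             # e.g., per-point features
print(content_based_attention(points_feat).shape)     # (1024, 32)
```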
Access control is an effective method to protect user data privacy. Access control schemes based on blockchain and ciphertext-policy attribute-based encryption (CP-ABE) can solve the problems of single point of failure and lack of trust in centralized systems. However, they also bring new problems for health information in the cloud storage environment, such as attribute leakage, low consensus efficiency, and complex permission updates. This paper proposes an access control scheme with fine-grained attribute revocation, keyword search, and traceability of the attribute private key distribution process. Blockchain technology tracks the authorization of attribute private keys, and a credit scoring method improves the consensus efficiency of the Raft protocol. Besides, the InterPlanetary File System (IPFS) addresses the capacity deficit of blockchain. Under the premise of policy hiding, the scheme provides fine-grained access control based on users, user attributes, and file structure, optimizing the data-sharing mode. At the same time, proxy re-encryption (PRE) technology is used to update access rights. The proposed scheme is proved secure. Comparative analysis and experimental results show that the proposed scheme offers higher efficiency and more functionality, and can meet the needs of medical institutions.
Cultural relic line graphics serve as a crucial form of traditional artifact information documentation; they are simple and intuitive products with low display cost compared with 3D models. Dimensionality reduction is undoubtedly necessary for line drawings. However, most existing methods for artifact drawing rely on the principles of orthographic projection, which cannot avoid angle occlusion and data overlapping when the surface of a cultural relic is complex. Therefore, conformal mapping was introduced as a dimensionality reduction approach to compensate for the limitations of orthographic projection. Based on given criteria for assessing surface complexity, this paper proposes a three-dimensional feature guideline extraction method for complex cultural relic surfaces. A combined 2D and 3D factor that measures the importance of points in describing surface features, the vertex weight, was designed. The selection threshold for feature guideline extraction was then determined from the differences between the vertex weight and shape index distributions. The feasibility and stability of the method were verified through experiments on real cultural relic surface data. The results demonstrate the ability of the method to address the challenges associated with the automatic generation of line drawings for complex surfaces. The extraction method and the obtained results will be useful for the drawing, display, and promotion of cultural relic line graphics.
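The paper's exact vertex-weight definition is not given in the abstract. The sketch below only illustrates the kind of quantities involved: a shape index and curvedness computed from the two principal curvatures, and a hedged example weight that mixes a 3D curvature term with a 2D saliency term, from which a threshold could be drawn by comparing distributions. The mixing coefficient, the 2D term, and the quantile rule are assumptions.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from the two principal curvatures (k1 >= k2), in [-1, 1].
    Sign conventions differ between references; +1/-1 are the two umbilical cases."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def vertex_weight(k1, k2, saliency_2d, alpha=0.5):
    """Hypothetical combined 2D/3D importance: a convex mix of curvature strength
    (curvedness) and a projected-view saliency term, both normalized to [0, 1]."""
    curvedness = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
    curvedness = curvedness / (curvedness.max() + 1e-12)
    return alpha * curvedness + (1.0 - alpha) * saliency_2d

rng = np.random.default_rng(1)
k1, k2 = rng.normal(size=1000), rng.normal(size=1000)
si = shape_index(k1, k2)                           # distribution compared against the weights
w = vertex_weight(k1, k2, saliency_2d=rng.uniform(size=1000))
threshold = np.quantile(w, 0.9)                    # e.g., keep the top 10% as guideline candidates
print(si.mean(), threshold)
```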
In a convective scheme featuring a discretized cloud size density, the assumed lateral mixing rate is inversely proportional to the exponential coefficient of plume size. This coefficient typically follows an assumed value of -1, but that assumption carries inherent uncertainties, especially for deep-layer clouds. Addressing this knowledge gap, we conducted comprehensive large eddy simulations and comparative analyses focused on terrestrial regions. Our investigation reveals that cloud formation adheres to the tenets of Bernoulli trials, exhibiting power-law scaling between cloud size and the number of clouds that remains consistent regardless of the inherent deep-layer cloud attributes. This scaling paradigm encompasses liquid, ice, and mixed phases in deep-layer clouds. The exponent characterizing the interplay between cloud scale and number in deep-layer clouds, whether for liquid, ice, or mixed-phase clouds, resembles that of shallow convection but converges closely to zero. This convergence signifies a propensity for diminished cloud numbers and sizes within deep-layer clouds. Notably, the infusion of abundant moisture and the release of latent heat by condensation within the lower atmospheric strata make substantial contributions, although their role in ice-phase formation is limited. The emergence of liquid and ice phases in deep-layer clouds is facilitated by latent heat and influenced by the wind shear inherent in the middle levels. These interrelationships hold potential applications in formulating parameterizations and post-processing model outcomes.
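The LES analysis itself is not reproduced here; the short sketch below only shows how a power-law exponent between cloud size and cloud number is commonly estimated, via a least-squares fit in log-log space on synthetic counts (the exponent and noise level are made up).

```python
import numpy as np

# Synthetic cloud-size histogram: number of clouds N(s) ~ s**b with an assumed b = -1.7.
sizes = np.logspace(0, 2, 20)                      # cloud sizes (arbitrary units)
noise = np.exp(0.05 * np.random.default_rng(0).normal(size=sizes.size))
counts = 1000.0 * sizes ** -1.7 * noise

# Fit log N = b * log s + c; the slope b is the scaling exponent discussed above.
b, c = np.polyfit(np.log(sizes), np.log(counts), deg=1)
print(f"estimated exponent b = {b:.2f}")
```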
The cloud type product 2B-CLDCLASS-LIDAR, based on CloudSat and CALIPSO from June 2006 to May 2017, is used to examine the temporal and spatial distribution characteristics and interannual variability of eight cloud types (high cloud, altostratus, altocumulus, stratus, stratocumulus, cumulus, nimbostratus, and deep convection) and three phases (ice, mixed, and water) in the Arctic. Possible reasons for the observed interannual variability are also discussed. The main conclusions are as follows: (1) More water clouds occur on the Atlantic side, and more ice clouds occur over continents. (2) The average spatial and seasonal distributions of cloud types show three patterns: high clouds and most cumuliform clouds are concentrated in low-latitude locations and peak in summer; altostratus and nimbostratus are concentrated over and around continents and are less abundant in summer; stratocumulus and stratus are concentrated near the inner Arctic and peak during spring and autumn. (3) Regionally averaged interannual frequencies of ice clouds and altostratus clouds decrease significantly, while those of water clouds, altocumulus, and cumulus clouds increase significantly. (4) Significant features of the linear trends of cloud frequencies are mainly located over ocean areas. (5) The monthly water cloud frequency anomalies are positively correlated with air temperature in most of the troposphere, while those of ice clouds are negatively correlated. (6) The decrease in altostratus clouds is associated with the weakening of the Arctic front due to Arctic warming, while increased water vapor transport into the Arctic and higher atmospheric instability lead to more cumulus and altocumulus clouds.
Amid the landscape of Cloud Computing (CC), the Cloud Datacenter (DC) stands as a conglomerate of physical servers whose performance can be hindered by bottlenecks within the realm of proliferating CC services. A linchpin in CC's performance, the Cloud Service Broker (CSB) orchestrates DC selection. Failure to route user requests to suitable DCs adroitly transforms the CSB into a bottleneck, endangering service quality. To tackle this, deploying an efficient CSB policy becomes imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Amidst numerous CSB policies, their implementation grapples with challenges such as cost and availability. This article undertakes a holistic review of diverse CSB policies while surveying the predicaments confronted by current policies. The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development. Additionally, it extensively clarifies the various DC selection methodologies employed in CC, enriching practitioners and researchers alike. Employing synthetic analysis, the article systematically assesses and compares myriad DC selection techniques. These analytical insights equip decision-makers with a pragmatic framework to discern the technique apt for their needs. In summation, this discourse underscores the paramount importance of adept CSB policies in DC selection, highlighting the imperative role of efficient CSB policies in optimizing CC performance. By emphasizing the significance of these policies and their modeling implications, the article contributes to both the general modeling discourse and its practical applications in the CC domain.
Cloud base height (CBH) is a crucial parameter for cloud radiative effect estimates, climate change simulations, and aviation guidance. However, because passive satellite radiometer observations contain limited information on cloud vertical structure, few operational satellite CBH products are currently available. This study presents a new method for retrieving CBH from satellite radiometers. The method first uses combined measurements from satellite radiometers and ground-based cloud radars to develop a lookup table (LUT) of effective cloud water content (ECWC), representing the vertically varying cloud water content. This LUT allows the conversion of cloud water path to cloud geometric thickness (CGT), enabling the estimation of CBH as the difference between cloud top height and CGT. Detailed comparative analyses of CBH estimates from the state-of-the-art ECWC LUT are conducted against four ground-based millimeter-wave cloud radar (MMCR) measurements, and the results show that the mean bias (correlation coefficient) is 0.18±1.79 km (0.73), which is lower (higher) than the 0.23±2.11 km (0.67) derived from the combined measurements of satellite radiometers and satellite radar-lidar (i.e., CloudSat and CALIPSO). Furthermore, the percentage of CBH biases within 250 m increases by 5% to 10%, varying by location. This indicates that the CBH estimates from our algorithm are more consistent with ground-based MMCR measurements. Therefore, this algorithm shows great potential for further improvement of CBH retrievals as ground-based MMCRs are increasingly included in global surface meteorological observing networks, and the improved CBH retrievals will contribute to better cloud radiative effect estimates.
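The operational ECWC lookup table is built from combined radiometer and cloud-radar data and is not reproduced here. The sketch below only shows the geometric step described above: converting cloud water path to geometric thickness through an assumed effective cloud water content and subtracting it from cloud top height. The numbers are purely illustrative.

```python
def cloud_base_height(cloud_top_km, cwp_g_m2, ecwc_g_m3):
    """CBH = CTH - CGT, with CGT = CWP / ECWC.
    Units: CWP in g m^-2, ECWC in g m^-3, so CGT comes out in metres."""
    cgt_m = cwp_g_m2 / ecwc_g_m3
    return cloud_top_km - cgt_m / 1000.0

# Illustrative values only: a 9 km cloud top, a 300 g m^-2 water path,
# and an effective cloud water content of 0.15 g m^-3 from a hypothetical LUT entry.
print(cloud_base_height(cloud_top_km=9.0, cwp_g_m2=300.0, ecwc_g_m3=0.15))  # 7.0 km
```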
In the cloud environment, a high level of data security is in strong demand. Data planning storage optimization is part of the overall security process in the cloud environment; it supports data security by avoiding the risks of data loss and data overlapping. The development of data flow scheduling approaches that take security parameters into account in the cloud environment remains insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which periodically collects information on the state of the network links. The second is the monitoring agent, which analyzes and classifies the state of each link, makes a decision on it, and transmits this information to the scheduler. The third is the scheduler, which considers the previous information to transfer user data with fair distribution over reliable paths. Each part of the proposed model requires the development of its own algorithms. In this article, we focus on the development of data transfer algorithms that provide fair distribution while taking stable link states into account. These algorithms are based on grouping the transmitted files and on an iterative method. The proposed algorithms obtain approximate solutions to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME) algorithm, with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
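The HME algorithm itself is not specified in the abstract. The sketch below is only a generic greedy baseline in the same spirit: files are dispatched largest-first, each to the link whose monitored stability is best relative to the load already assigned to it. The data structures and the scoring rule are assumptions, not the paper's method.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    stability: float                     # e.g., a 0..1 score from the monitoring agent
    assigned: list = field(default_factory=list)

    def load(self):
        return sum(self.assigned)

def dispatch(file_sizes, links):
    """Greedy baseline: largest files first, each sent over the link whose
    stability is highest relative to the load already assigned to it."""
    for size in sorted(file_sizes, reverse=True):
        best = max(links, key=lambda l: l.stability / (1.0 + l.load()))
        best.assigned.append(size)
    return links

for link in dispatch([40, 25, 25, 10, 5, 5], [Link("A", 0.9), Link("B", 0.7)]):
    print(link.name, link.assigned, link.load())
```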
As the extensive use of cloud computing raises questions about the security of personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources across various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be evaluated against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) when implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The findings show that KVM outperforms Hyper-V, with 12.2% less central processing unit (CPU) use and 12.95% less time overall for encryption and decryption operations with various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
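This is not the paper's benchmark harness; it is a minimal sketch of timing one cipher on a guest VM, assuming recent versions of the Python `cryptography` package (AES-256 in CTR mode). CPU-usage measurement and the other five algorithms are omitted, and the payload size is arbitrary.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def time_aes_ctr(payload: bytes) -> float:
    """Return the wall-clock time to encrypt `payload` once with AES-256-CTR."""
    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    start = time.perf_counter()
    encryptor.update(payload)
    encryptor.finalize()
    return time.perf_counter() - start

data = os.urandom(64 * 1024 * 1024)          # a 64 MiB test payload, generated in memory
print(f"AES-256-CTR: {time_aes_ctr(data):.3f} s")
```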
Environmental conditions can change markedly over geographical distances along elevation gradients, making such gradients natural laboratories for studying the processes that structure communities. This work aimed to assess the influence of elevation on Tropical Montane Cloud Forest plant communities in the Brazilian Atlantic Forest, a historically neglected ecoregion. We evaluated the phylogenetic structure, forest structure (tree basal area and tree density), and species richness along an elevation gradient, as well as the evolutionary fingerprints of elevation-success on phylogenetic lineages from the tree communities. To do so, we assessed nine communities along an elevation gradient from 1210 to 2310 m a.s.l. without large elevation gaps. The relationships between elevation and phylogenetic structure, forest structure, and species richness were investigated through linear models. The occurrence of evolutionary fingerprints on phylogenetic lineages was investigated by quantifying the extent of the phylogenetic signal of elevation-success using a genus-level molecular phylogeny. Our results showed decreased species richness at higher elevations and independence between forest structure, phylogenetic structure, and elevation. We also verified that there is a phylogenetic signal associated with elevation-success by lineages. We conclude that elevation is associated with species richness and with the occurrence of phylogenetic lineages in the tree communities evaluated in the Mantiqueira Range, whereas elevation is not associated with forest structure or phylogenetic structure. Furthermore, closely related taxa tend to reach their highest ecological success at similar elevations. Finally, we highlight the fragility of the tropical montane cloud forests in the Mantiqueira Range in the face of environmental changes (i.e., global warming), owing to the occurrence of exclusive phylogenetic lineages evolutionarily adapted to the environmental conditions (i.e., minimum temperature) associated with each elevation range.
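The authors' community data and phylogenetic-signal test are not reproduced here; the sketch below only illustrates the kind of linear model used for the richness-elevation relationship, with made-up plot-level values spanning the stated gradient.

```python
import numpy as np
from scipy import stats

# Hypothetical plot data: elevation (m a.s.l.) and tree species richness per plot.
elevation = np.array([1210, 1350, 1490, 1630, 1770, 1910, 2050, 2190, 2310])
richness = np.array([95, 88, 90, 76, 70, 66, 58, 51, 45])

fit = stats.linregress(elevation, richness)
print(f"slope = {fit.slope:.3f} species per metre, "
      f"r^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3g}")
```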
Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries, especially in light of the growing number of cybersecurity threats. A major and ever-present threat is Ransomware-as-a-Service (RaaS) attacks, which enable even individuals with minimal technical knowledge to conduct ransomware operations. This study provides a new approach for RaaS attack detection that uses an ensemble of deep learning models. For this purpose, the network intrusion detection dataset UNSW-NB15 from the Intelligent Security Group of the University of New South Wales, Australia, is analyzed. In the initial phase, three separate Multi-Layer Perceptron (MLP) models based on the rectified linear unit, scaled exponential linear unit, and exponential linear unit activations are developed. Later, using the combined predictive power of these three MLPs, the RansoDetect Fusion ensemble model is introduced in the suggested methodology. The proposed ensemble technique outperforms previous studies, with impressive performance metrics including 98.79% accuracy and recall, 98.85% precision, and 98.80% F1-score. The empirical results validate the ensemble model's ability to improve cybersecurity defenses by showing that it outperforms the individual MLP models. In expanding the field of cybersecurity strategy, this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
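The sketch below shows the general shape of such an activation-based ensemble, not the RansoDetect Fusion code: three PyTorch MLPs that differ only in their activation (ReLU, SELU, ELU), with the softmax outputs averaged. The layer sizes, the input dimension, and the averaging rule are assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden, n_classes, activation):
    # One hidden-layer MLP; the only difference between ensemble members is the activation.
    return nn.Sequential(nn.Linear(in_dim, hidden), activation, nn.Linear(hidden, n_classes))

class ActivationEnsemble(nn.Module):
    """Average the softmax outputs of ReLU-, SELU-, and ELU-based MLPs."""
    def __init__(self, in_dim=42, hidden=64, n_classes=2):   # in_dim=42 is an assumed feature count
        super().__init__()
        self.members = nn.ModuleList([
            mlp(in_dim, hidden, n_classes, nn.ReLU()),
            mlp(in_dim, hidden, n_classes, nn.SELU()),
            mlp(in_dim, hidden, n_classes, nn.ELU()),
        ])

    def forward(self, x):
        probs = [torch.softmax(m(x), dim=-1) for m in self.members]
        return torch.stack(probs).mean(dim=0)

model = ActivationEnsemble()
print(model(torch.randn(8, 42)).shape)      # (8, 2): averaged class probabilities
```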
The excitation temperature T_ex for molecular emission and absorption lines is an essential parameter for interpreting the molecular environment. This temperature can be obtained by observing multiple molecular transitions or the hyperfine structure of a single transition, but it remains unknown for a single transition without hyperfine structure lines. Earlier H2CO absorption experiments for a single transition without hyperfine structure adopted a constant value of T_ex, which is not correct for molecular regions with active star formation and H II regions. For H2CO, two equations with two unknowns may be used to determine the excitation temperature T_ex and the optical depth τ, if the other parameters can be determined from measurements. Published observational data of the 4.83 GHz (λ = 6 cm) H2CO (1_10-1_11) absorption line for three star formation regions, W40, M17, and DR17, have been used to verify this method. The distributions of T_ex in these sources are in good agreement with the contours of the H110α emission of the H II regions in M17 and DR17 and with the H2CO (1_10-1_11) absorption in W40. The distributions of T_ex in the three sources indicate that the excitation temperature can vary significantly across star formation and H II regions and that the use of a fixed (low) value results in misinterpretation.
This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in the task of 3D point cloud semantic segmentation to capture semantic integrity, empowered by the dynamic capsule routing of the 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Secondly, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype enhancement version for capturing rich semantics. Besides, the shot correlation within each category is calculated via the interaction of different samples to enhance intra-category similarity. Ablation studies prove that the involved part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, with these two modules integrated, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on the task of 3D point cloud semantic segmentation compared with some state-of-the-art methods.
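For context only, here is a prototypical-network-style baseline with several prototypes per class, not the paper's enhancement modules: each class's support features are split into multiple prototypes by a tiny k-means, and query points are labelled by their most similar prototype under cosine similarity. Feature dimensions, the number of prototypes, and the synthetic data are illustrative.

```python
import numpy as np

def multi_prototypes(support_feats, k=3, iters=10, seed=0):
    """Split one class's support point features into k prototypes (tiny k-means)."""
    rng = np.random.default_rng(seed)
    protos = support_feats[rng.choice(len(support_feats), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((support_feats[:, None] - protos[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                protos[c] = support_feats[assign == c].mean(axis=0)
    return protos

def classify(query_feats, protos_per_class):
    """Label each query point by its most similar prototype (cosine similarity)."""
    def norm(a):
        return a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-12)
    q = norm(query_feats)
    scores = [norm(p) @ q.T for p in protos_per_class]       # one (k, n_query) block per class
    best_per_class = np.stack([s.max(axis=0) for s in scores])
    return best_per_class.argmax(axis=0)

rng = np.random.default_rng(1)
protos = [multi_prototypes(rng.normal(loc=c, size=(50, 16)), k=3) for c in range(3)]
labels = classify(rng.normal(loc=1, size=(200, 16)), protos)
print(np.bincount(labels))
```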
The security performance of cloud services is a key factor influencing users' selection of Cloud Service Providers (CSPs). Continuous monitoring of the security status of cloud services is critical. However, existing research lacks a practical framework for such ongoing monitoring. To address this gap, this paper proposes the first Non-Collaborative Container-Based Cloud Service Operation State Continuous Monitoring Framework (NCCMF), based on relevant standards. NCCMF operates without the CSP's collaboration by: 1) establishing a scalable supervisory index system through the identification of security responsibilities for each role, and 2) designing a Continuous Metrics Supervision Protocol (CMA) to automate the negotiation of supervisory metrics. The framework also outlines the supervision process for cloud services across different deployment models. Experimental results demonstrate that NCCMF effectively monitors the operational state of two real-world IoT (Internet of Things) cloud services, with an average supervision error of less than 15%.
Redundancy elimination techniques are extensively investigated to reduce storage overheads in cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique to deduplication that further removes the redundancy of similar blocks, but our observations indicate that this does not hold when data have sparse duplicate blocks. In addition, many overlapped deltas arise in the resemblance detection process of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries abundant non-similar blocks, resulting in inefficient system throughput. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. The similarity feature and the temporal locality feature are exploited to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression ratios. Moreover, a clustering method based on block relationships and a feature index strategy based on Bloom filters reduce IO overheads and improve system throughput. Experiments demonstrate that the proposed method, compared with the state-of-the-art method, improves the compression ratio and system throughput by 9.68% and 50%, respectively.
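The MFRE pipeline (resemblance detection, delta encoding, Bloom-filter indexing) is not reproduced here. The sketch below covers only the first stage, fixed-size chunk deduplication keyed by SHA-256, to show where delta compression of the remaining, merely similar blocks would slot in. The chunk size and data structures are illustrative.

```python
import hashlib

CHUNK = 4096                                   # illustrative fixed chunk size (bytes)

def deduplicate(data: bytes):
    """Store each distinct chunk once, keyed by its SHA-256 digest, and return
    (store, recipe): the recipe lists digests in order so the stream can be rebuilt."""
    store, recipe = {}, []
    for off in range(0, len(data), CHUNK):
        chunk = data[off:off + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)        # duplicate chunks hit the same key
        recipe.append(digest)
    return store, recipe

data = b"A" * 16384 + b"B" * 8192 + b"A" * 4096      # synthetic stream with duplicates
store, recipe = deduplicate(data)
print(len(recipe), "chunks referenced,", len(store), "chunks actually stored")
# Post-deduplication delta compression would then target stored chunks that are
# similar but not identical, encoding them as deltas against a chosen base chunk.
```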
基金Project supported by the National Basic Research Program of China(Grant No.2013CB338002)the National Natural Science Foundation of China(Grant No.61502526)
文摘To overcome the difficulty of realizing large-scale quantum Fourier transform(QFT) within existing technology, this paper implements a resource-saving method(named t-bit semiclassical QFT over Z_(2~n)), which could realize large-scale QFT using an arbitrary-scale quantum register. By developing a feasible method to realize the control quantum gate Rk, we experimentally realize the 2-bit semiclassical QFT over Z_(2~3) on IBM's quantum cloud computer, which shows the feasibility of the method. Then, we compare the actual performance of 2-bit semiclassical QFT with standard QFT in the experiments.The squared statistical overlap experimental data shows that the fidelity of 2-bit semiclassical QFT is higher than that of standard QFT, which is mainly due to fewer two-qubit gates in the semiclassical QFT. Furthermore, based on the proposed method, N = 15 is successfully factorized by implementing Shor's algorithm.
基金supported by the National Natural Science Foundation of China(Grant No.92365206)the support of the China Postdoctoral Science Foundation(Certificate Number:2023M740272)+1 种基金supported by the National Natural Science Foundation of China(Grant No.12247168)China Postdoctoral Science Foundation(Certificate Number:2022TQ0036)。
文摘We introduce Quafu-Qcover,an open-source cloud-based software package developed for solving combinatorial optimization problems using quantum simulators and hardware backends.Quafu-Qcover provides a standardized and comprehensive workflow that utilizes the quantum approximate optimization algorithm(QAOA).It facilitates the automatic conversion of the original problem into a quadratic unconstrained binary optimization(QUBO)model and its corresponding Ising model,which can be subsequently transformed into a weight graph.The core of Qcover relies on a graph decomposition-based classical algorithm,which efficiently derives the optimal parameters for the shallow QAOA circuit.Quafu-Qcover incorporates a dedicated compiler capable of translating QAOA circuits into physical quantum circuits that can be executed on Quafu cloud quantum computers.Compared to a general-purpose compiler,our compiler demonstrates the ability to generate shorter circuit depths,while also exhibiting superior speed performance.Additionally,the Qcover compiler has the capability to dynamically create a library of qubits coupling substructures in real-time,utilizing the most recent calibration data from the superconducting quantum devices.This ensures that computational tasks can be assigned to connected physical qubits with the highest fidelity.The Quafu-Qcover allows us to retrieve quantum computing sampling results using a task ID at any time,enabling asynchronous processing.Moreover,it incorporates modules for results preprocessing and visualization,facilitating an intuitive display of solutions for combinatorial optimization problems.We hope that Quafu-Qcover can serve as an instructive illustration for how to explore application problems on the Quafu cloud quantum computers.
基金supported by the Beijing Academy of Quantum Information Sciencessupported by the National Natural Science Foundation of China(Grant No.92365206)+2 种基金the support of the China Postdoctoral Science Foundation(Certificate Number:2023M740272)supported by the National Natural Science Foundation of China(Grant No.12247168)China Postdoctoral Science Foundation(Certificate Number:2022TQ0036)。
文摘With the rapid advancement of quantum computing,hybrid quantum–classical machine learning has shown numerous potential applications at the current stage,with expectations of being achievable in the noisy intermediate-scale quantum(NISQ)era.Quantum reinforcement learning,as an indispensable study,has recently demonstrated its ability to solve standard benchmark environments with formally provable theoretical advantages over classical counterparts.However,despite the progress of quantum processors and the emergence of quantum computing clouds,implementing quantum reinforcement learning algorithms utilizing parameterized quantum circuits(PQCs)on NISQ devices remains infrequent.In this work,we take the first step towards executing benchmark quantum reinforcement problems on real devices equipped with at most 136 qubits on the BAQIS Quafu quantum computing cloud.The experimental results demonstrate that the policy agents can successfully accomplish objectives under modified conditions in both the training and inference phases.Moreover,we design hardware-efficient PQC architectures in the quantum model using a multi-objective evolutionary algorithm and develop a learning algorithm that is adaptable to quantum devices.We hope that the Quafu-RL can be a guiding example to show how to realize machine learning tasks by taking advantage of quantum computers on the quantum cloud platform.
基金supported in part by the Nationa Natural Science Foundation of China (61876011)the National Key Research and Development Program of China (2022YFB4703700)+1 种基金the Key Research and Development Program 2020 of Guangzhou (202007050002)the Key-Area Research and Development Program of Guangdong Province (2020B090921003)。
文摘Recently, there have been some attempts of Transformer in 3D point cloud classification. In order to reduce computations, most existing methods focus on local spatial attention,but ignore their content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space(content-based), which clusters the sampled points with similar features into the same class and computes the self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves a remarkable performance on point cloud shape classification. Especially, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectN N. Source code of this paper is available at https://github.com/yahuiliu99/PointC onT.
基金This research was funded by the National Natural Science Foundation of China,Grant Number 62162039the Shaanxi Provincial Key R&D Program,China with Grant Number 2020GY-041.
文摘The Access control scheme is an effective method to protect user data privacy.The access control scheme based on blockchain and ciphertext policy attribute encryption(CP–ABE)can solve the problems of single—point of failure and lack of trust in the centralized system.However,it also brings new problems to the health information in the cloud storage environment,such as attribute leakage,low consensus efficiency,complex permission updates,and so on.This paper proposes an access control scheme with fine-grained attribute revocation,keyword search,and traceability of the attribute private key distribution process.Blockchain technology tracks the authorization of attribute private keys.The credit scoring method improves the Raft protocol in consensus efficiency.Besides,the interplanetary file system(IPFS)addresses the capacity deficit of blockchain.Under the premise of hiding policy,the research proposes a fine-grained access control method based on users,user attributes,and file structure.It optimizes the data-sharing mode.At the same time,Proxy Re-Encryption(PRE)technology is used to update the access rights.The proposed scheme proved to be secure.Comparative analysis and experimental results show that the proposed scheme has higher efficiency and more functions.It can meet the needs of medical institutions.
基金National Natural Science Foundation of China(Nos.42071444,42101444)。
文摘Cultural relics line graphic serves as a crucial form of traditional artifact information documentation,which is a simple and intuitive product with low cost of displaying compared with 3D models.Dimensionality reduction is undoubtedly necessary for line drawings.However,most existing methods for artifact drawing rely on the principles of orthographic projection that always cannot avoid angle occlusion and data overlapping while the surface of cultural relics is complex.Therefore,conformal mapping was introduced as a dimensionality reduction way to compensate for the limitation of orthographic projection.Based on the given criteria for assessing surface complexity,this paper proposed a three-dimensional feature guideline extraction method for complex cultural relic surfaces.A 2D and 3D combined factor that measured the importance of points on describing surface features,vertex weight,was designed.Then the selection threshold for feature guideline extraction was determined based on the differences between vertex weight and shape index distributions.The feasibility and stability were verified through experiments conducted on real cultural relic surface data.Results demonstrated the ability of the method to address the challenges associated with the automatic generation of line drawings for complex surfaces.The extraction method and the obtained results will be useful for line graphic drawing,displaying and propaganda of cultural relics.
基金supported by the Second Tibetan Plateau Scientific Expedition and Research Program (STEP) (Grant No.2019QZKK010203)the National Natural Science Foundation of China (Grant No.42175174 and 41975130)+1 种基金the Natural Science Foundation of Sichuan Province (Grant No.2022NSFSC1092)the Sichuan Provincial Innovation Training Program for College Students (Grant No.S202210621009)。
文摘In a convective scheme featuring a discretized cloud size density, the assumed lateral mixing rate is inversely proportional to the exponential coefficient of plume size. This follows a typical assumption of-1, but it has unveiled inherent uncertainties, especially for deep layer clouds. Addressing this knowledge gap, we conducted comprehensive large eddy simulations and comparative analyses focused on terrestrial regions. Our investigation revealed that cloud formation adheres to the tenets of Bernoulli trials, illustrating power-law scaling that remains consistent regardless of the inherent deep layer cloud attributes existing between cloud size and the number of clouds. This scaling paradigm encompasses liquid, ice, and mixed phases in deep layer clouds. The exponent characterizing the interplay between cloud scale and number in the deep layer cloud, specifically for liquid, ice, or mixed-phase clouds, resembles that of shallow convection,but converges closely to zero. This convergence signifies a propensity for diminished cloud numbers and sizes within deep layer clouds. Notably, the infusion of abundant moisture and the release of latent heat by condensation within the lower atmospheric strata make substantial contributions. However, this role in ice phase formation is limited. The emergence of liquid and ice phases in deep layer clouds is facilitated by the latent heat and influenced by the wind shear inherent in the middle levels. These interrelationships hold potential applications in formulating parameterizations and post-processing model outcomes.
基金supported in part by the National Natural Science Foundation of China (Grant No. 42105127)the Special Research Assistant Project of the Chinese Academy of Sciencesthe National Key Research and Development Plans of China (Grant Nos. 2019YFC1510304 and 2016YFE0201900-02)。
文摘The cloud type product 2B-CLDCLASS-LIDAR based on CloudSat and CALIPSO from June 2006 to May 2017 is used to examine the temporal and spatial distribution characteristics and interannual variability of eight cloud types(high cloud, altostratus, altocumulus, stratus, stratocumulus, cumulus, nimbostratus, and deep convection) and three phases(ice,mixed, and water) in the Arctic. Possible reasons for the observed interannual variability are also discussed. The main conclusions are as follows:(1) More water clouds occur on the Atlantic side, and more ice clouds occur over continents.(2)The average spatial and seasonal distributions of cloud types show three patterns: high clouds and most cumuliform clouds are concentrated in low-latitude locations and peak in summer;altostratus and nimbostratus are concentrated over and around continents and are less abundant in summer;stratocumulus and stratus are concentrated near the inner Arctic and peak during spring and autumn.(3) Regional averaged interannual frequencies of ice clouds and altostratus clouds significantly decrease, while those of water clouds, altocumulus, and cumulus clouds increase significantly.(4) Significant features of the linear trends of cloud frequencies are mainly located over ocean areas.(5) The monthly water cloud frequency anomalies are positively correlated with air temperature in most of the troposphere, while those for ice clouds are negatively correlated.(6) The decrease in altostratus clouds is associated with the weakening of the Arctic front due to Arctic warming, while increased water vapor transport into the Arctic and higher atmospheric instability lead to more cumulus and altocumulus clouds.
文摘Amid the landscape of Cloud Computing(CC),the Cloud Datacenter(DC)stands as a conglomerate of physical servers,whose performance can be hindered by bottlenecks within the realm of proliferating CC services.A linchpin in CC’s performance,the Cloud Service Broker(CSB),orchestrates DC selection.Failure to adroitly route user requests with suitable DCs transforms the CSB into a bottleneck,endangering service quality.To tackle this,deploying an efficient CSB policy becomes imperative,optimizing DC selection to meet stringent Qualityof-Service(QoS)demands.Amidst numerous CSB policies,their implementation grapples with challenges like costs and availability.This article undertakes a holistic review of diverse CSB policies,concurrently surveying the predicaments confronted by current policies.The foremost objective is to pinpoint research gaps and remedies to invigorate future policy development.Additionally,it extensively clarifies various DC selection methodologies employed in CC,enriching practitioners and researchers alike.Employing synthetic analysis,the article systematically assesses and compares myriad DC selection techniques.These analytical insights equip decision-makers with a pragmatic framework to discern the apt technique for their needs.In summation,this discourse resoundingly underscores the paramount importance of adept CSB policies in DC selection,highlighting the imperative role of efficient CSB policies in optimizing CC performance.By emphasizing the significance of these policies and their modeling implications,the article contributes to both the general modeling discourse and its practical applications in the CC domain.
基金funded by the National Natural Science Foundation of China (Grant Nos. 42305150 and 42325501)the China Postdoctoral Science Foundation (Grant No. 2023M741774)。
文摘Cloud base height(CBH) is a crucial parameter for cloud radiative effect estimates, climate change simulations, and aviation guidance. However, due to the limited information on cloud vertical structures included in passive satellite radiometer observations, few operational satellite CBH products are currently available. This study presents a new method for retrieving CBH from satellite radiometers. The method first uses the combined measurements of satellite radiometers and ground-based cloud radars to develop a lookup table(LUT) of effective cloud water content(ECWC), representing the vertically varying cloud water content. This LUT allows for the conversion of cloud water path to cloud geometric thickness(CGT), enabling the estimation of CBH as the difference between cloud top height and CGT. Detailed comparative analysis of CBH estimates from the state-of-the-art ECWC LUT are conducted against four ground-based millimeter-wave cloud radar(MMCR) measurements, and results show that the mean bias(correlation coefficient) is0.18±1.79 km(0.73), which is lower(higher) than 0.23±2.11 km(0.67) as derived from the combined measurements of satellite radiometers and satellite radar-lidar(i.e., Cloud Sat and CALIPSO). Furthermore, the percentages of the CBH biases within 250 m increase by 5% to 10%, which varies by location. This indicates that the CBH estimates from our algorithm are more consistent with ground-based MMCR measurements. Therefore, this algorithm shows great potential for further improvement of the CBH retrievals as ground-based MMCR are being increasingly included in global surface meteorological observing networks, and the improved CBH retrievals will contribute to better cloud radiative effect estimates.
基金the deputyship for Research&Innovation,Ministry of Education in Saudi Arabia for funding this research work through the Project Number(IFP-2022-34).
文摘In the cloud environment,ensuring a high level of data security is in high demand.Data planning storage optimization is part of the whole security process in the cloud environment.It enables data security by avoiding the risk of data loss and data overlapping.The development of data flow scheduling approaches in the cloud environment taking security parameters into account is insufficient.In our work,we propose a data scheduling model for the cloud environment.Themodel is made up of three parts that together help dispatch user data flow to the appropriate cloudVMs.The first component is the Collector Agent whichmust periodically collect information on the state of the network links.The second one is the monitoring agent which must then analyze,classify,and make a decision on the state of the link and finally transmit this information to the scheduler.The third one is the scheduler who must consider previous information to transfer user data,including fair distribution and reliable paths.It should be noted that each part of the proposedmodel requires the development of its algorithms.In this article,we are interested in the development of data transfer algorithms,including fairness distribution with the consideration of a stable link state.These algorithms are based on the grouping of transmitted files and the iterative method.The proposed algorithms showthe performances to obtain an approximate solution to the studied problem which is an NP-hard(Non-Polynomial solution)problem.The experimental results show that the best algorithm is the half-grouped minimum excluding(HME),with a percentage of 91.3%,an average deviation of 0.042,and an execution time of 0.001 s.
文摘As the extensive use of cloud computing raises questions about the security of any personal data stored there,cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment.A hypervisor is a virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware.The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment.An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance;Each hypervisor should be examined to meet specific needs.The main objective of this study is to provide accurate results to compare the performance of Hyper-V and Kernel-based Virtual Machine(KVM)while implementing different cryptographic algorithms to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs.This study evaluated the efficiency of two hypervisors,Hyper-V and KVM,in implementing six cryptographic algorithms:Rivest,Shamir,Adleman(RSA),Advanced Encryption Standard(AES),Triple Data Encryption Standard(TripleDES),Carlisle Adams and Stafford Tavares(CAST-128),BLOWFISH,and TwoFish.The study’s findings show that KVM outperforms Hyper-V,with 12.2%less Central Processing Unit(CPU)use and 12.95%less time overall for encryption and decryption operations with various file sizes.The study’s findings emphasize how crucial it is to pick a hypervisor that is appropriate for cryptographic needs in a cloud environment,which could assist both cloud service providers and end users.Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
基金supported this work by granting the doctoral scholarship to Ravi Fernandes Mariano,Carolina Njaime Mendes and Cléber Rodrigo de Souza,and through the master’s scholarship to Aloysio Souza de Mourathe postdoctoral scholarship to Vanessa Leite Rezende+2 种基金The authors also thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico(CNPQ)by project funding(Edital Universal 2014,Process 459739/2014-0)the Instituto Alto-Montana da Serra Fina,the Fundação de AmparoàPesquisa do Estado de Minas Gerais(FAPEMIG)the Fundação Grupo Boticário de ProteçãoàNatureza,and finally the Fundo de Recuperação,Proteção e Desenvolvimento Sustentável das Bacias Hidrográficas do Estado de Minas Gerais(Fhidro).
文摘Environmental conditions can change markedly over geographical distances along elevation gradients,making them natural laboratories to study the processes that structure communities.This work aimed to assess the influences of elevation on Tropical Montane Cloud Forest plant communities in the Brazilian Atlantic Forest,a historically neglected ecoregion.We evaluated the phylogenetic structure,forest structure(tree basal area and tree density)and species richness along an elevation gradient,as well as the evolutionary fingerprints of elevation-success on phylogenetic lineages from the tree communities.To do so,we assessed nine communities along an elevation gradient from 1210 to 2310 m a.s.l.without large elevation gaps.The relationships between elevation and phylogenetic structure,forest structure and species richness were investigated through Linear Models.The occurrence of evolutionary fingerprint on phylogenetic lineages was investigated by quantifying the extent of phylogenetic signal of elevation-success using a genus-level molecular phylogeny.Our results showed decreased species richness at higher elevations and independence between forest structure,phylogenetic structure and elevation.We also verified that there is a phylogenetic signal associated with elevation-success by lineages.We concluded that the elevation is associated with species richness and the occurrence of phylogenetic lineages in the tree communities evaluated in Mantiqueira Range.On the other hand,elevation is not associated with forest structure or phylogenetic structure.Furthermore,closely related taxa tend to have their higher ecological success in similar elevations.Finally,we highlight the fragility of the tropical montane cloud forests in the Mantiqueira Range in face of environmental changes(i.e.global warming)due to the occurrence of exclusive phylogenetic lineages evolutionarily adapted to environmental conditions(i.e.minimum temperature)associated with each elevation range.
基金the Deanship of Scientific Research,Najran University,Kingdom of Saudi Arabia,for funding this work under the Research Groups Funding Program Grant Code Number(NU/RG/SERC/12/43).
文摘Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries,especially in light of the growing number of cybersecurity threats.A major and everpresent threat is Ransomware-as-a-Service(RaaS)assaults,which enable even individuals with minimal technical knowledge to conduct ransomware operations.This study provides a new approach for RaaS attack detection which uses an ensemble of deep learning models.For this purpose,the network intrusion detection dataset“UNSWNB15”from the Intelligent Security Group of the University of New South Wales,Australia is analyzed.In the initial phase,the rectified linear unit-,scaled exponential linear unit-,and exponential linear unit-based three separate Multi-Layer Perceptron(MLP)models are developed.Later,using the combined predictive power of these three MLPs,the RansoDetect Fusion ensemble model is introduced in the suggested methodology.The proposed ensemble technique outperforms previous studieswith impressive performance metrics results,including 98.79%accuracy and recall,98.85%precision,and 98.80%F1-score.The empirical results of this study validate the ensemble model’s ability to improve cybersecurity defenses by showing that it outperforms individual MLPmodels.In expanding the field of cybersecurity strategy,this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
基金funded by the National Key R&D Program of China under grant No.2022YFA1603103partially funded by the Regional Collaborative Innovation Project of Xinjiang Uyghur Autonomous Region under grant No.2022E01050+7 种基金the Tianshan Talent Program of Xinjiang Uygur Autonomous Region under grant No.2022TSYCLJ0005the Natural Science Foundation of Xinjiang Uygur Autonomous Region under grant No.2022D01E06the Chinese Academy of Sciences(CAS)Light of West China Program under grants Nos.xbzg-zdsys-202212,2020-XBQNXZ-017,and 2021-XBQNXZ-028the National Natural Science Foundation of China(NSFC,grant Nos.12173075,11973076,and 12103082)the Xinjiang Key Laboratory of Radio Astrophysics under grant No.2022D04033the Youth Innovation Promotion Association CASfunded by the Chinese Academy of Sciences Presidents International Fellowship Initiative under grants Nos.2022VMA0019 and 2023VMA0030funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan under grant No.AP13067768。
Abstract: The excitation temperature T_ex for molecular emission and absorption lines is an essential parameter for interpreting the molecular environment. This temperature can be obtained by observing multiple molecular transitions or the hyperfine structure of a single transition, but it remains unknown for a single transition without hyperfine structure lines. Earlier H_2CO absorption experiments for a single transition without hyperfine structure adopted a constant value of T_ex, which is not correct for molecular regions with active star formation or for H II regions. For H_2CO, two equations with two unknowns may be used to determine the excitation temperature T_ex and the optical depth τ, provided the other parameters can be determined from measurements. Published observational data of the 4.83 GHz (λ = 6 cm) H_2CO (1_10–1_11) absorption line for three star formation regions, W40, M17 and DR17, have been used to verify this method. The distributions of T_ex in these sources are in good agreement with the contours of the H110α emission of the H II regions in M17 and DR17 and with the H_2CO (1_10–1_11) absorption in W40. The distributions of T_ex in the three sources indicate that there can be significant variation in the excitation temperature across star formation and H II regions, and that the use of a fixed (low) value results in misinterpretation.
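The "two equations with two unknowns" step can be sketched generically in Python: given two observed quantities that both depend on T_ex and τ, solve for the pair numerically. The model functions below are illustrative placeholders, not the relations derived in the paper.

# Minimal sketch: simultaneous solution for T_ex and tau from two observables.
# model1/model2 are hypothetical stand-ins for the paper's actual relations.
import numpy as np
from scipy.optimize import fsolve

T_BG = 2.73  # cosmic microwave background temperature (K)

def model1(T_ex, tau, T_c):
    # Placeholder: an absorption-line depth against a continuum of temperature T_c.
    return (T_ex - T_c - T_BG) * (1.0 - np.exp(-tau))

def model2(T_ex, tau):
    # Placeholder: a second measured quantity with a different dependence.
    return T_ex * tau / (1.0 + tau)

def solve_tex_tau(obs1, obs2, T_c, guess=(10.0, 0.5)):
    def residuals(x):
        T_ex, tau = x
        return [model1(T_ex, tau, T_c) - obs1, model2(T_ex, tau) - obs2]
    T_ex, tau = fsolve(residuals, guess)
    return T_ex, tau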
Funding: This work is supported by the National Natural Science Foundation of China under Grant No. 62001341; the Natural Science Foundation of Jiangsu Province under Grant No. BK20221379; and the Jiangsu Engineering Research Center of Digital Twinning Technology for Key Equipment in Petrochemical Process under Grant No. DTEC202104.
Abstract: This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first leverage part-whole relationships in 3D point cloud semantic segmentation to capture semantic integrity, empowered by dynamic capsule routing with a 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Secondly, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype version to capture rich semantics. In addition, the shot correlation within each category is calculated via the interaction of different samples to enhance intra-category similarity. Ablation studies prove that the incorporated part-whole relations and the proposed multi-prototype enhancement module help achieve complete object segmentation and improve semantic discrimination. Moreover, with these two modules integrated, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on few-shot 3D point cloud semantic segmentation compared to state-of-the-art methods.
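As a rough, hypothetical sketch of the multi-prototype idea (not the paper's architecture), the following Python snippet forms several prototypes per category by clustering support-set embeddings and assigns query points to the nearest prototype by cosine similarity; the capsule-based embedding network is assumed to have produced the features already.

# Minimal sketch: multiple prototypes per category via k-means over support
# features, then nearest-prototype labelling of query features.
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(support_feats, support_labels, n_prototypes=3):
    """support_feats: (N, D) embeddings; support_labels: (N,) category ids."""
    prototypes, proto_labels = [], []
    for c in np.unique(support_labels):
        feats_c = support_feats[support_labels == c]
        k = min(n_prototypes, len(feats_c))
        centers = KMeans(n_clusters=k, n_init=10).fit(feats_c).cluster_centers_
        prototypes.append(centers)
        proto_labels.extend([c] * k)
    return np.vstack(prototypes), np.array(proto_labels)

def classify_queries(query_feats, prototypes, proto_labels):
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = q @ p.T                          # cosine similarity to every prototype
    return proto_labels[np.argmax(sims, axis=1)]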
Funding: This work was supported in part by the Intelligent Policing and National Security Risk Management Laboratory 2023 Opening Project (No. ZHKFYB2304); the Fundamental Research Funds for the Central Universities (Nos. SCU2023D008, 2023SCU12129); the Natural Science Foundation of Sichuan Province (No. 2024NSFSC1449); the Science and Engineering Connotation Development Project of Sichuan University (No. 2020SCUNG129); and the Key Laboratory of Data Protection and Intelligent Management (Sichuan University), Ministry of Education.
Abstract: The security performance of cloud services is a key factor influencing users' selection of Cloud Service Providers (CSPs), and continuous monitoring of the security status of cloud services is therefore critical. However, existing research lacks a practical framework for such ongoing monitoring. To address this gap, this paper proposes the first Non-Collaborative Container-Based Cloud Service Operation State Continuous Monitoring Framework (NCCMF), based on relevant standards. NCCMF operates without the CSP's collaboration by: 1) establishing a scalable supervisory index system through the identification of the security responsibilities of each role, and 2) designing a Continuous Metrics Supervision Protocol (CMA) to automate the negotiation of supervisory metrics. The framework also outlines the supervision process for cloud services across different deployment models. Experimental results demonstrate that NCCMF effectively monitors the operational state of two real-world IoT (Internet of Things) cloud services, with an average supervision error of less than 15%.
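A hypothetical sketch of what automated metric negotiation in the spirit of the CMA protocol might look like is shown below; the metric names, roles and intervals are invented for illustration and are not taken from NCCMF.

# Hypothetical sketch: the monitor proposes role-tagged supervisory metrics,
# the target deployment declares what it can expose, and the agreed set is
# their intersection -- no further CSP collaboration required.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    role: str          # party responsible for this security item (illustrative)
    interval_s: int    # sampling interval for continuous monitoring

PROPOSED = [
    Metric("container_image_integrity", role="CSP", interval_s=3600),
    Metric("api_auth_failure_rate", role="tenant", interval_s=300),
    Metric("network_egress_anomaly", role="CSP", interval_s=60),
]

def negotiate(proposed, exposable_names):
    """Return the metrics the deployment model can actually expose."""
    return [m for m in proposed if m.name in exposable_names]

agreed = negotiate(PROPOSED, {"api_auth_failure_rate", "network_egress_anomaly"})
print([m.name for m in agreed])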
Funding: National Key R&D Program of China, Grant/Award Number: 2018AAA0102100; National Natural Science Foundation of China, Grant/Award Numbers: 62177047, U22A2034; International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province, Grant/Award Number: 2021CB1013; Key Research and Development Program of Hunan Province, Grant/Award Number: 2022SK2054; 111 Project, Grant/Award Number: B18059; Natural Science Foundation of Hunan Province, Grant/Award Number: 2022JJ30762; Fundamental Research Funds for the Central Universities of Central South University, Grant/Award Number: 2020zzts143; Scientific and Technological Innovation Leading Plan of High-tech Industry of Hunan Province, Grant/Award Number: 2020GK2021; Central South University Research Program of Advanced Interdisciplinary Studies, Grant/Award Number: 2023QYJC020.
Abstract: Redundancy elimination techniques are extensively investigated to reduce storage overheads in cloud-assisted health systems. Deduplication eliminates the redundancy of duplicate blocks by storing one physical instance referenced by multiple duplicates. Delta compression is usually regarded as a complementary technique to deduplication for further removing the redundancy of similar blocks, but our observations indicate that this assumption does not hold when the data contain few duplicate blocks. In addition, many overlapped deltas arise in the resemblance detection process of post-deduplication delta compression, which hinders the efficiency of delta compression, and the index phase of resemblance detection queries many non-similar blocks, resulting in inefficient system throughput. Therefore, a multi-feature-based redundancy elimination scheme, called MFRE, is proposed to solve these problems. A similarity feature and a temporal locality feature are exploited to assist redundancy elimination, where the similarity feature well expresses the duplicate attribute. Then, similarity-based dynamic post-deduplication delta compression and temporal-locality-based dynamic delta compression discover more similar base blocks to minimise overlapped deltas and improve compression ratios. Moreover, a clustering method based on block relationships and a feature index strategy based on Bloom filters reduce I/O overheads and improve system throughput. Experiments demonstrate that the proposed method improves the compression ratio and system throughput by 9.68% and 50%, respectively, compared to the state-of-the-art method.
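For orientation, here is a minimal, hypothetical Python sketch of a post-deduplication pipeline in the same spirit: exact duplicates are caught by a content hash, and a crude similarity feature routes remaining blocks toward delta compression. It greatly simplifies, and is not, MFRE's multi-feature detection.

# Minimal sketch: content-hash deduplication followed by a feature-indexed
# resemblance check that nominates a base block for delta compression.
import hashlib

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def super_feature(block: bytes, window=8) -> int:
    # Minimum of rolling-window hashes: similar blocks tend to share it.
    return min(hash(block[i:i + window]) for i in range(0, max(1, len(block) - window)))

dedup_index = {}       # fingerprint -> stored block
feature_index = {}     # super-feature -> fingerprint of a candidate base block

def ingest(block: bytes):
    fp = fingerprint(block)
    if fp in dedup_index:
        return ("duplicate", fp)           # removed by deduplication
    sf = super_feature(block)
    if sf in feature_index:
        return ("delta", feature_index[sf])  # candidate for delta compression
    dedup_index[fp] = block
    feature_index[sf] = fp
    return ("unique", fp)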