As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study explores a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data. The detection of atmospheric turbulence is framed as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique is the strongest of the methods evaluated. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to established EDR approaches and surpassing the machine learning algorithms tested. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amid rising environmental and operational hazards.
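To make the idea concrete, the sketch below shows how a symbolic classifier reduces turbulence detection to thresholding an evolved formula. The feature names, coefficients, and the expression itself are entirely hypothetical; a real genetic programming run over QAR data would discover its own formula.

```python
import math

def symbolic_classifier(vert_accel_rms, airspeed_var, pitch_rate):
    # hypothetical evolved expression; a GP run would produce its own tree
    return math.sqrt(abs(vert_accel_rms)) * airspeed_var + 0.5 * abs(pitch_rate)

def is_turbulence(sample, threshold=1.0):
    # anomaly detection: flag samples whose symbolic score exceeds a threshold
    return symbolic_classifier(*sample) > threshold

# illustrative numbers only: a smooth segment vs. a bumpy one
print(is_turbulence((0.02, 0.1, 0.05)))  # False
print(is_turbulence((1.5, 2.0, 0.8)))    # True
```

The appeal of the symbolic form is that the detector remains a human-readable expression rather than an opaque model.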
Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing the access control policy. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model. Business attributes are introduced in the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute the general permission decision algorithm based on logical calculation and the business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network, respectively. The general permission decision algorithm is used to implement accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network is used to calculate the similarity of the business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, the complex and diverse big data access control management requirements can be satisfied while considering the security and availability of resources. Experimental results show that the proposed mechanism is effective and reliable. In summary, it can efficiently support the secure sharing of big data resources.
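A minimal sketch of the two-layer decision structure, under simplifying assumptions: attributes are plain key-value pairs, business attributes are pre-computed vectors, and cosine similarity stands in for the paper's BiLSTM similarity model.

```python
def general_decision(subject, policy):
    # Layer 1: exact logical decision on inherent entity attributes
    return all(subject.get(k) == v for k, v in policy.items())

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def business_decision(subject_vec, policy_vec, threshold=0.8):
    # Layer 2: fuzzy decision on business-attribute similarity
    # (cosine here; the paper uses a BiLSTM to compute the similarity)
    return cosine(subject_vec, policy_vec) >= threshold

def permit(subject, policy, subject_vec, policy_vec):
    # access is granted only when both layers agree
    return general_decision(subject, policy) and business_decision(subject_vec, policy_vec)

print(permit({"role": "analyst"}, {"role": "analyst"}, [0.9, 0.1], [1.0, 0.0]))  # True
```

The point of the split is that the cheap exact layer filters first, and the learned fuzzy layer only arbitrates the business constraints.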
Data trading enables data owners and data requesters to sell and purchase data. With the emergence of blockchain technology, research on blockchain-based data trading systems is receiving a lot of attention. Particularly, to reduce the on-chain storage cost, a novel paradigm of blockchain and cloud fusion has been widely considered as a promising data trading platform. Moreover, the fact that data can be used for commercial purposes will encourage users and organizations from various fields to participate in the data marketplace. In the data marketplace, it is a challenge to trade data securely outsourced to an external cloud in a way that restricts access to the data only to authorized users across multiple domains. In this paper, we propose a cross-domain bilateral access control protocol for blockchain-cloud based data trading systems. We consider a system model that consists of domain authorities, data senders, data receivers, a blockchain layer, and a cloud provider. The proposed protocol enables access control and source identification of the outsourced data by leveraging identity-based cryptographic techniques. In the proposed protocol, the outsourced data of the sender is encrypted under the target receiver's identity, and the cloud provider performs policy-match verification on the authorization tags of the sender and receiver generated by the identity-based signature scheme. Therefore, data trading can be achieved only if the identities of the data sender and receiver simultaneously meet the policies specified by each other. To demonstrate efficiency, we evaluate the performance of the proposed protocol and compare it with existing studies.
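The bilateral condition can be sketched in a few lines, modeling each policy as a required-attribute set. This is a logical simplification only: the actual protocol enforces the match cryptographically via identity-based signatures verified by the cloud provider.

```python
def satisfies(attrs, policy):
    # policy modeled as a set of required identity attributes
    return policy <= attrs

def bilateral_match(sender_attrs, sender_policy, receiver_attrs, receiver_policy):
    # the trade proceeds only when each party's identity satisfies
    # the policy specified by the other party
    return satisfies(receiver_attrs, sender_policy) and satisfies(sender_attrs, receiver_policy)

seller = {"domain:A", "verified"}
buyer = {"domain:B", "premium"}
print(bilateral_match(seller, {"premium"}, buyer, {"verified"}))   # True
print(bilateral_match(seller, {"premium"}, buyer, {"domain:B"}))   # False: seller lacks domain:B
```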
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception, network, and platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as the unified management of the physical model, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel data layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication, a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT), this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
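For reference, the dynamic programming recurrence for chained matrix multiplication is the standard one below, shown in plain Python. The paper's contribution is the GPU memory layout for the DP table (e.g. storing it so that threads filling one anti-diagonal read contiguous memory), which is not reproduced here.

```python
def matrix_chain_cost(p):
    # p holds the dimension sequence: matrix i has shape p[i-1] x p[i]
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k and keep the cheapest
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# three matrices: 10x30, 30x5, 5x60; optimal order is (A1*A2)*A3
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```

Because cell (i, j) depends only on cells of shorter chains, all cells on one anti-diagonal are independent, which is exactly what makes the layout question decisive for GPU locality.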
For the goals of security and privacy preservation, we propose a blind batch encryption- and public ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger and allows privacy information to be preserved. Data owners can tightly manage their data with efficient revocation and only grant one-time adaptive access for the fulfillment of the requester. We prove that our protocol is semantically secure, blind, and secure against oblivious requesters and malicious file keepers. We also provide security analysis in the context of four typical attacks.
With the growth of requirements for data sharing, a novel business model of digital assets trading has emerged that allows data owners to sell their data for monetary gain. In the distributed ledger of blockchain, however, the privacy of stakeholders' identities and the confidentiality of data content are threatened. Therefore, we propose a blockchain-enabled privacy-preserving and access control scheme to address these problems. First, the multi-channel mechanism is introduced to provide privacy protection of the distributed ledger inside a channel and achieve coarse-grained access control to digital assets. Then, we use a multi-authority attribute-based encryption (MAABE) algorithm to build a fine-grained access control model for data trading in a single channel and describe its instantiation in detail. Security analysis shows that the scheme is IND-CPA secure and can provide privacy protection and collusion resistance. Compared with other schemes, our solution has better performance in privacy protection and access control. The evaluation results demonstrate its effectiveness and practicability.
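The coarse/fine split can be sketched as two nested checks. This is a logical view only: in the scheme itself the fine-grained layer is enforced cryptographically by MAABE decryption across multiple authorities, not by a set comparison.

```python
def channel_access(member_channels, channel):
    # coarse-grained layer: the multi-channel mechanism confines
    # the ledger to channel members
    return channel in member_channels

def attribute_access(user_attrs, policy):
    # fine-grained layer: required-attribute check standing in for
    # MAABE decryption, which enforces the same condition cryptographically
    return policy <= user_attrs

def can_read(member_channels, user_attrs, channel, policy):
    return channel_access(member_channels, channel) and attribute_access(user_attrs, policy)

print(can_read({"trade-ch1"}, {"org:bank", "role:auditor"}, "trade-ch1", {"role:auditor"}))  # True
print(can_read({"trade-ch2"}, {"org:bank", "role:auditor"}, "trade-ch1", {"role:auditor"}))  # False
```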
With the rapid advancement of cloud computing technology, reversible data hiding algorithms in encrypted images (RDH-EI) have developed into an important field of study concentrated on safeguarding privacy in distributed cloud environments. However, existing algorithms often suffer from low embedding capacities and are inadequate for complex data access scenarios. To address these challenges, this paper proposes a novel reversible data hiding algorithm in encrypted images based on adaptive median edge detection (AMED) and ciphertext-policy attribute-based encryption (CP-ABE). The proposed algorithm enhances conventional median edge detection (MED) by incorporating dynamic variables to improve pixel prediction accuracy. The carrier image is subsequently reconstructed using the Huffman coding technique. Encrypted image generation is then achieved by encrypting the image based on system user attributes and data access rights, with the hierarchical embedding of the group's secret data seamlessly integrated during the encryption process using the CP-ABE scheme. Ultimately, the encrypted image is transmitted to the data hider, enabling independent embedding of the secret data and resulting in the creation of the marked encrypted image. This approach allows only the receiver to extract the authorized group's secret data, thereby enabling fine-grained, controlled access. Test results indicate that, in contrast to current algorithms, the method introduced here considerably improves the embedding rate while preserving lossless image recovery. Specifically, the average maximum embedding rates for the (3,4)-threshold and (6,6)-threshold schemes reach 5.7853 bits per pixel (bpp) and 7.7781 bpp, respectively, across the BOSSbase, BOW-2, and USD databases. Furthermore, the algorithm facilitates permission-granting and joint-decryption capabilities. Additionally, this paper conducts a comprehensive examination of the algorithm's robustness using metrics such as image correlation, information entropy, and number of pixel change rate (NPCR), confirming its high level of security. Overall, the algorithm can be applied in a multi-user and multi-level cloud service environment to realize the secure storage of carrier images and secret data.
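For reference, the conventional MED predictor that AMED builds on can be stated in a few lines. This is the baseline only; the paper's dynamic-variable adaptation is not reproduced here.

```python
def med_predict(left, above, upper_left):
    # classic median edge detector (MED): at a detected edge, pick the
    # min/max neighbor; in smooth regions, interpolate the local plane
    if upper_left >= max(left, above):
        return min(left, above)
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left

print(med_predict(100, 104, 98))    # edge case: predicts 104
print(med_predict(100, 104, 102))   # smooth case: predicts 100 + 104 - 102 = 102
```

The smaller the prediction error on natural images, the more room remains for reversible embedding, which is why improving the predictor raises the embedding rate.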
Traditional air traffic control information sharing provides weak protection for personal privacy data, which easily leads to data being tampered with or misappropriated. Starting from the application of the air traffic control (ATC) network, this paper focuses on the zero-trust access strategy and the tamper-proofing of information-sharing network data. Through improvements to ATC zero-trust physical-layer authentication and distributed feature-differentiation calculation of network data, this paper reconstructs the personal privacy scope authentication structure and designs a tamper-proof method for ATC information sharing on the Internet. By moving from a single management authority to unified management of data units, a systematic algorithmic improvement of the tamper-prevention method for shared network data is realized, and the Reliable Data Transfer Protocol (RDTP) is selected for the network data of information-sharing resources to ensure tamper prevention of air traffic control data during transmission. The results show that this method can effectively prevent tampering with information shared on the Internet and maintain the security of air traffic control information sharing, with a Central Processing Unit (CPU) utilization rate of only 4.64%, which improves the performance of the comprehensive security protection system for air traffic control data.
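As a simplified illustration of tamper-evidence for shared data units (our stand-in; the paper's RDTP-based scheme additionally involves zero-trust authentication and distributed feature calculation), a hash chain makes any in-transit modification detectable:

```python
import hashlib

def chain_digests(records):
    # each data unit's digest covers its predecessor's digest,
    # so changing any unit breaks every later link
    digest, out = b"", []
    for r in records:
        digest = hashlib.sha256(digest + r.encode()).digest()
        out.append(digest.hex())
    return out

def verify(records, digests):
    return chain_digests(records) == digests

units = ["flight CA123 route update", "sector handoff 14:02"]
tags = chain_digests(units)
print(verify(units, tags))                                     # True
print(verify(["flight CA123 route CHANGED", units[1]], tags))  # False: tampering detected
```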
Advances in technology require upgrades in the law. One such area involves data brokers, which have thus far gone unregulated. Data brokers use artificial intelligence to aggregate information into data profiles about individual Americans derived from consumer use of the internet and connected devices. Data profiles are then sold for profit. Government investigators use a legal loophole to purchase this data instead of obtaining a search warrant, which the Fourth Amendment would otherwise require. Consumers have lacked a reasonable means to fight or correct the information data brokers collect. Americans may not even be aware of the risks of data aggregation, which upends the test of reasonable expectations used in a search warrant analysis. Data aggregation should be controlled and regulated, which is the direction some privacy laws take. Legislatures must step forward to safeguard against shadowy data-profiling practices, whether abroad or at home. In the meantime, courts can modify their search warrant analysis by including data privacy principles.
MORPAS is a special GIS (geographic information system) software system, based on the MAPGIS platform, whose aim is to prospect and evaluate mineral resources quantitatively by synthesizing geological, geophysical, geochemical, and remote sensing data. It integrates geological database management, geological background and geological abnormality analysis, image processing of remote sensing, and comprehensive abnormality analysis. It puts forward an integrative solution for the application of GIS in basic-level units and the construction of information engineering in the geological field. With the popularization of computer networks and the demand for data sharing, it is necessary to extend its data management functions so that all its data files can be accessed on the network server. This paper utilizes some MAPGIS functions for secondary development and the ADO (access data object) technique to access multi-source geological data in SQL Server databases. Remote access and consistent management are thereby realized in the MORPAS system.
We show that an aggregated Interest in Named Data Networking (NDN) may fail to retrieve the desired data, since the Interest previously sent upstream for the same content may be judged a duplicate and dropped by an upstream node due to its multipath forwarding. Furthermore, we propose NDRUDAF, a NACK-based mechanism that enhances Interest forwarding and enables Detection and fast Recovery from such Unanticipated Data Access Failure. In NDN enhanced with NDRUDAF, the router that aggregates the Interest detects the unanticipated data access failure based on a negative acknowledgement from the upstream node that judges the Interest as a duplicate. The router then retransmits the Interest as soon as possible on behalf of the requester whose Interest was aggregated, to recover quickly from the data access failure. We qualitatively and quantitatively analyze the performance of NDN enhanced with our proposed NDRUDAF and compare it with that of the present NDN. Our experimental results validate that NDRUDAF improves system performance in the case of such unanticipated data access failures in terms of data access delay and network resource utilization efficiency at routers.
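The failure mode and the recovery step can be sketched with a toy upstream node that tracks Interest nonces. This is a heavily simplified model of NDN duplicate detection (real routers keep per-name PIT entries and nonce lists), intended only to show why the aggregating router must retransmit with a fresh nonce after a NACK.

```python
class UpstreamNode:
    """Toy model: an Interest whose nonce was already seen is judged a duplicate."""
    def __init__(self):
        self.seen_nonces = set()

    def receive_interest(self, name, nonce):
        if nonce in self.seen_nonces:
            return "NACK-duplicate"   # in plain NDN this would be a silent drop
        self.seen_nonces.add(nonce)
        return "forwarded"

upstream = UpstreamNode()
print(upstream.receive_interest("/video/seg1", 42))  # forwarded
# the same Interest arrives again via another path after aggregation:
print(upstream.receive_interest("/video/seg1", 42))  # NACK-duplicate
# on the NACK, the aggregating router retransmits with a fresh nonce
# on behalf of the requester, recovering the stalled retrieval:
print(upstream.receive_interest("/video/seg1", 43))  # forwarded
```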
In the security and privacy fields, Access Control (AC) systems are viewed as fundamental aspects of networking security mechanisms. Enforcing AC becomes even more challenging when researchers and data analysts have to analyze complex and distributed Big Data (BD) processing cluster frameworks, which are adopted to manage yottabytes of unstructured sensitive data. For instance, Big Data systems' privacy and security restrictions are likely to fail due to malformed AC policy configurations. Furthermore, BD systems were initially developed to address some of the DB issues raised by BD challenges, and many of these dealt with the "three Vs" (Velocity, Volume, and Variety) attributes without planning for security, which is consequently considered patchwork. Some of the BD "three Vs" characteristics, such as distributed computing, fragmented and redundant data, and node-to-node communication, each with its own security challenges, complicate the applicability of AC in BD even further. This paper gives an overview of the latest security and privacy challenges in BD AC systems. Furthermore, it analyzes and compares some of the latest AC research frameworks for reducing privacy and security issues in distributed BD systems, few of which enforce AC in a cost-effective and timely manner. Moreover, this work discusses some future research methodologies and improvements for BD AC systems. This study is a valuable asset for Artificial Intelligence (AI) researchers, DB developers, and DB analysts who need the latest AC security and privacy research perspective before using and/or improving a current BD AC framework.
Traditional centralized data sharing systems have potential risks such as single points of failure and excessive working load on the central node. As a distributed and collaborative alternative, approaches based upon blockchain have recently been explored for the Internet of Things (IoT). However, access from a legitimate user may be denied without a pre-defined policy, and data updates on the blockchain could be costly to the owners. In this paper, we first address these issues by incorporating the Accountable Subgroup Multi-Signature (ASM) algorithm into the Attribute-based Access Control (ABAC) method with a Policy Smart Contract, to provide a fine-grained and flexible solution. Next, we propose a policy-based Chameleon Hash algorithm that allows the data to be updated in a reliable and convenient way by authorized users. Finally, we evaluate our work by comparing its performance with the benchmarks. The results demonstrate significant improvement in effectiveness and efficiency.
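The chameleon hash primitive behind convenient on-chain updates can be sketched with toy discrete-log parameters (insecure sizes, illustration only; the paper's policy-based construction additionally gates the trapdoor behind attribute policies). Whoever holds the trapdoor can change the message while keeping the hash, and hence the on-chain commitment, unchanged.

```python
# toy group: g = 2 has prime order q = 11 in Z_23*
p, q, g = 23, 11, 2
x = 7                 # trapdoor held by authorized updaters
h = pow(g, x, p)      # public key

def chash(m, r):
    # chameleon hash H(m, r) = g^m * h^r mod p
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m, r, m_new):
    # with the trapdoor x, solve m + x*r = m_new + x*r_new (mod q)
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 5, 3
m_new = 9
r_new = collide(m, r, m_new)
print(chash(m, r) == chash(m_new, r_new))  # True: data updated, hash preserved
```

Without the trapdoor, finding such a collision is as hard as computing discrete logs, so unauthorized parties cannot rewrite the data. (`pow(x, -1, q)` needs Python 3.8+.)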
With the development of cloud computing, mutual understandability among distributed data access control has become an important issue in the security field of cloud computing. To ensure the security, confidentiality, and fine-grained data access control of a Cloud Data Storage (CDS) environment, we propose a Multi-Agent System (MAS) architecture. This architecture consists of two agents: the Cloud Service Provider Agent (CSPA) and the Cloud Data Confidentiality Agent (CDConA). CSPA provides a graphical interface to the cloud user that facilitates access to the services offered by the system. CDConA provides each cloud user with the definition and enforcement of an expressive and flexible access structure as a logic formula over cloud data file attributes. This new access control is named Formula-Based Cloud Data Access Control (FCDAC). Our proposed FCDAC based on the MAS architecture consists of four layers: an interface layer, an existing access control layer, the proposed FCDAC layer, and a CDS layer, as well as four types of entities: the Cloud Service Provider (CSP), cloud users, a knowledge base, and confidentiality policy roles. FCDAC is an access policy determined by our MAS architecture, not by the CSPs. A prototype of our proposed FCDAC scheme is implemented using the Java Agent Development Framework Security (JADE-S). Our results, in the practical scenario formally defined in this paper, show the Round Trip Time (RTT) for an agent to travel in our system, measured by the time required for an agent to travel around different numbers of cloud users before and after implementing FCDAC.
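An access structure expressed as a logic formula over file attributes can be evaluated with a small recursive checker. The nested any/all encoding below is our own simplified representation, not the paper's concrete FCDAC syntax.

```python
def evaluate(formula, attrs):
    # formula: ("attr", name) | ("and", [subformulas]) | ("or", [subformulas])
    op, operands = formula
    if op == "attr":
        return operands in attrs
    results = (evaluate(f, attrs) for f in operands)
    return all(results) if op == "and" else any(results)

policy = ("and", [("attr", "department:research"),
                  ("or", [("attr", "role:analyst"), ("attr", "role:admin")])])

print(evaluate(policy, {"department:research", "role:analyst"}))  # True
print(evaluate(policy, {"department:sales", "role:admin"}))       # False
```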
As an industry-accepted storage scheme, hafnium oxide (HfO_x) based resistive random access memory (RRAM) should further improve its thermal stability and data retention for practical applications. We therefore fabricated RRAMs with an HfO_x/ZnO double layer as the storage medium to study their thermal stability as well as data retention. The HfO_x/ZnO double layer is capable of reversible bipolar switching under ultralow switching current (< 3 μA), with Schottky-emission-dominated conduction in the high resistance state and Poole-Frenkel-emission-governed conduction in the low resistance state. Compared with the drastically increased switching current at 120 °C for the single HfO_x layer RRAM, the HfO_x/ZnO double layer exhibits excellent thermal stability and maintains negligible fluctuations in switching current at high temperatures (up to 180 °C), which might be attributed to the increased Schottky barrier height suppressing current at high temperatures. Additionally, the HfO_x/ZnO double layer exhibits 10-year data retention at 85 °C, which is helpful for practical applications in RRAMs.
Growing attention has been directed to the use of satellite imagery and open geospatial data to understand large-scale sustainable development outcomes. Health and education are critical domains of the United Nations' Sustainable Development Goals (SDGs), yet existing research on the accessibility of the corresponding services has focused mainly on detailed but small-scale studies. This means that such studies lack accessibility metrics for large-scale quantitative evaluations. To address this deficiency, we evaluated the accessibility of health and education services in China's mainland in 2021 using point-of-interest data, OpenStreetMap road data, land cover data, and WorldPop spatial demographic data. The accessibility metrics used were the least time cost of reaching hospital and school services and the population coverage with a time cost of less than 1 h. On the basis of the road network and land cover information, the overall average time costs of reaching a hospital and a school were 20 and 22 min, respectively. In terms of population coverage, 94.7% and 92.5% of the population in China has a time cost of less than 1 h for obtaining hospital and school services, respectively. Counties with low accessibility to hospitals and schools were highly coupled with poor areas and ecological function regions, with the time cost incurred in these areas being more than twice that experienced in non-poor and non-ecological areas. Furthermore, the cumulative time cost incurred by the bottom 20% of counties (by GDP) in accessing hospital and school services reached approximately 80% of the national total. Low-GDP counties were compelled to suffer disproportionately increased time costs to acquire health and education services compared with high-GDP counties. The accessibility metrics proposed in this study are highly related to SDGs 3 and 4, and they can serve as auxiliary data to enhance the evaluation of SDG outcomes. The analysis of the uneven distribution of health and education services in China can help identify areas with backward public services and may contribute to targeted and efficient policy interventions.
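The two metrics, least time cost and population coverage within 1 h, can be sketched on a toy raster. The per-cell travel times and populations below are made up; the actual study derives cell costs from road networks and land cover at national scale.

```python
import heapq

def least_time_costs(cost_min, sources):
    # multi-source Dijkstra on a grid: cost_min[i][j] is the minutes
    # needed to enter cell (i, j); sources hold a hospital or school
    n, m = len(cost_min), len(cost_min[0])
    dist = [[float("inf")] * m for _ in range(n)]
    pq = [(0, i, j) for (i, j) in sources]
    for _, i, j in pq:
        dist[i][j] = 0
    heapq.heapify(pq)
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i][j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m and d + cost_min[ni][nj] < dist[ni][nj]:
                dist[ni][nj] = d + cost_min[ni][nj]
                heapq.heappush(pq, (dist[ni][nj], ni, nj))
    return dist

def coverage_within(dist, pop, minutes=60):
    # share of population whose least time cost is under the threshold
    total = sum(map(sum, pop))
    reached = sum(pop[i][j] for i in range(len(pop)) for j in range(len(pop[0]))
                  if dist[i][j] <= minutes)
    return reached / total

cost = [[10, 10], [10, 100]]             # minutes to cross each cell
pop = [[50, 30], [10, 10]]               # residents per cell
dist = least_time_costs(cost, [(0, 0)])  # one hospital in the top-left cell
print(coverage_within(dist, pop))        # 0.9: the slow cell is outside 60 min
```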
Digital transformation has been the cornerstone of business innovation in the last decade, and these innovations have dramatically changed the definition and boundaries of enterprise business applications. Introduction of new products/services, version management of existing products/services, management of customer/partner connections, management of multi-channel service delivery (web, social media, etc.), mergers/acquisitions of new businesses, and adoption of new innovations/technologies will drive data growth in business applications. These datasets exist in different share-nothing business applications at different locations and in various forms. So, to make sense of this information and derive insight, it is essential to break the data silos, streamline data retrieval, and simplify information access across the entire organization. The information access framework must support just-in-time processing capabilities to bring data from multiple sources, be fast and powerful enough to transform and process huge amounts of data quickly, and be agile enough to accommodate new data sources per user needs. This paper discusses the SAP HANA Smart Data Access data-virtualization technology to enable unified access to heterogeneous data across the organization and analysis of huge volumes of data in real time using the SAP HANA in-memory platform.
Funding: supported by the Meteorological Soft Science Project (Grant No. 2023ZZXM29), the Natural Science Fund Project of Tianjin, China (Grant No. 21JCYBJC00740), and the Key Research and Development-Social Development Program of Jiangsu Province, China (Grant No. BE2021685).
Funding: the Key Research and Development and Promotion Program of Henan Province (No. 222102210069), the Zhongyuan Science and Technology Innovation Leading Talent Project (224200510003), and the National Natural Science Foundation of China (No. 62102449).
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2022R1I1A3063257), and by the MSIT (Ministry of Science and ICT), Korea, under the Special R&D Zone Development Project (R&D), Development of R&D Innovation Valley Support Program (2023-DD-RD-0152), supervised by the Innovation Foundation.
Abstract: Data trading enables data owners and data requesters to sell and purchase data. With the emergence of blockchain technology, research on blockchain-based data trading systems is receiving considerable attention. In particular, to reduce on-chain storage costs, a novel paradigm fusing blockchain and cloud has been widely considered as a promising data trading platform. Moreover, the fact that data can be used for commercial purposes will encourage users and organizations from various fields to participate in the data marketplace. A key challenge in such a marketplace is how to securely trade data outsourced to an external cloud while restricting access to authorized users across multiple domains. In this paper, we propose a cross-domain bilateral access control protocol for blockchain-cloud based data trading systems. We consider a system model that consists of domain authorities, data senders, data receivers, a blockchain layer, and a cloud provider. The proposed protocol enables access control and source identification of the outsourced data by leveraging identity-based cryptographic techniques. The sender's outsourced data is encrypted under the target receiver's identity, and the cloud provider performs policy-match verification on the authorization tags of the sender and receiver generated by an identity-based signature scheme. Therefore, data trading can proceed only if the identities of the data sender and receiver simultaneously meet the policies specified by each other. To demonstrate efficiency, we evaluate the performance of the proposed protocol and compare it with existing studies.
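The bilateral ("both directions must hold") policy match can be illustrated without the cryptography. In the actual protocol the check is performed by the cloud over identity-based signature tags without learning the identities; plain attribute dictionaries stand in for those tags here, and all names are hypothetical:

```python
def satisfies(identity: dict, policy: dict) -> bool:
    """True iff the identity's attributes meet every clause of the policy."""
    return all(identity.get(attr) in allowed
               for attr, allowed in policy.items())

def policy_match(sender_id, sender_policy, receiver_id, receiver_policy) -> bool:
    """Bilateral check: trading proceeds only if BOTH directions hold.

    The receiver must satisfy the sender's policy AND the sender must
    satisfy the receiver's policy -- the mutual condition the cloud
    verifies on the authorization tags.
    """
    return (satisfies(receiver_id, sender_policy)
            and satisfies(sender_id, receiver_policy))
```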
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2019YFE0123600).
Abstract: The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception layer, a network layer, and a platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved, such as unified management of the physical model, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Abstract: Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication, a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT), this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
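The matrix-chain DP fills its triangular table diagonal by diagonal, so a layout that stores each diagonal contiguously gives the sweep unit-stride writes (and contiguous reads within a diagonal). The paper's transformation is CUDA-specific; the index mapping and a flat-array version of the DP are sketched here in plain Python to show the idea:

```python
def diag_offset(i: int, j: int, n: int) -> int:
    """Offset of upper-triangular cell (i, j) in a diagonal-major layout.

    Cells sharing a diagonal d = j - i (exactly the cells the DP fills
    together) become contiguous in the flat array.
    """
    d = j - i
    # total number of cells on diagonals 0 .. d-1: sum of (n - k) for k < d
    before = d * n - d * (d - 1) // 2
    return before + i

def matrix_chain_cost(p):
    """Matrix-chain DP over the diagonal-major flat table.

    p holds the dimensions: matrix A_i is p[i] x p[i+1].
    Returns the minimum number of scalar multiplications.
    """
    n = len(p) - 1
    m = [0] * (n * (n + 1) // 2)          # diagonal 0 (single matrices) = 0
    for d in range(1, n):                  # fill diagonal by diagonal
        for i in range(n - d):
            j = i + d
            m[diag_offset(i, j, n)] = min(
                m[diag_offset(i, k, n)] + m[diag_offset(k + 1, j, n)]
                + p[i] * p[k + 1] * p[j + 1]
                for k in range(i, j)
            )
    return m[diag_offset(0, n - 1, n)]
```

The classic six-matrix example with dimensions 30x35, 35x15, 15x5, 5x10, 10x20, 20x25 yields a minimum cost of 15125 scalar multiplications, the same answer as the conventional 2-D table.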
Funding: Partially supported by the National Natural Science Foundation of China under Grant No. 62372245, by the Foundation of the Yunnan Key Laboratory of Blockchain Application Technology under Grant 202105AG070005, in part by the Foundation of the State Key Laboratory of Public Big Data, and in part by the Foundation of the Key Laboratory of Computational Science and Application of Hainan Province under Grant JSKX202202.
Abstract: For the goals of security and privacy preservation, we propose a blind batch encryption- and public ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger while preserving privacy. Data owners can tightly manage their data with efficient revocation and grant only one-time, adaptive access to fulfill a request. We prove that our protocol is semantically secure and blind, and that it is secure against oblivious requesters and malicious file keepers. We also provide security analysis in the context of four typical attacks.
Funding: Supported by the National Key Research and Development Plan of China (Grant No. 2020YFB1005500), the Beijing Natural Science Foundation (Grant No. M21034), and the BUPT Excellent Ph.D. Students Foundation (Grant No. CX2023218).
Abstract: With the growth of requirements for data sharing, a novel business model of digital asset trading has emerged that allows data owners to sell their data for monetary gain. In the distributed ledger of a blockchain, however, the privacy of stakeholders' identities and the confidentiality of data content are threatened. We therefore propose a blockchain-enabled privacy-preserving and access control scheme to address these problems. First, a multi-channel mechanism is introduced to protect the privacy of the distributed ledger inside each channel and to achieve coarse-grained access control to digital assets. Then, we use a multi-authority attribute-based encryption (MAABE) algorithm to build a fine-grained access control model for data trading within a single channel and describe its instantiation in detail. Security analysis shows that the scheme is IND-CPA secure and provides privacy protection and collusion resistance. Compared with other schemes, our solution performs better in terms of privacy protection and access control. The evaluation results demonstrate its effectiveness and practicability.
Funding: The National Natural Science Foundation of China (Grant Nos. 62272478, 62102450, and 62102451).
Abstract: With the rapid advancement of cloud computing technology, reversible data hiding in encrypted images (RDH-EI) has developed into an important field of study focused on safeguarding privacy in distributed cloud environments. However, existing algorithms often suffer from low embedding capacities and are inadequate for complex data access scenarios. To address these challenges, this paper proposes a novel RDH-EI algorithm based on adaptive median edge detection (AMED) and ciphertext-policy attribute-based encryption (CP-ABE). The proposed algorithm enhances conventional median edge detection (MED) by incorporating dynamic variables to improve pixel prediction accuracy. The carrier image is subsequently reconstructed using Huffman coding. The encrypted image is then generated by encrypting the image according to system user attributes and data access rights, with hierarchical embedding of each group's secret data seamlessly integrated during encryption using the CP-ABE scheme. Finally, the encrypted image is transmitted to the data hider, who independently embeds the secret data to create the marked encrypted image. This approach allows only the receiver to extract the authorized group's secret data, thereby enabling fine-grained, controlled access. Test results indicate that, in contrast to current algorithms, the proposed method considerably improves the embedding rate while preserving lossless image recovery. Specifically, the average maximum embedding rates for the (3,4)-threshold and (6,6)-threshold schemes reach 5.7853 bits per pixel (bpp) and 7.7781 bpp, respectively, across the BOSSbase, BOW-2, and USD databases. Furthermore, the algorithm supports permission granting and joint decryption. This paper also conducts a comprehensive examination of the algorithm's robustness using metrics such as image correlation, information entropy, and number of pixel change rate (NPCR), confirming its high level of security. Overall, the algorithm can be applied in multi-user, multi-level cloud service environments to realize secure storage of carrier images and secret data.
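For reference, the conventional MED predictor that AMED builds on estimates a pixel from its left (a), upper (b), and upper-left (c) neighbors; the paper's adaptive variant adds dynamic variables on top of this rule, which is not reproduced here:

```python
def med_predict(a: int, b: int, c: int) -> int:
    """Conventional median edge detection (MED) predictor.

    a = left neighbor, b = upper neighbor, c = upper-left neighbor.
    Detects a horizontal or vertical edge via c and otherwise falls
    back to a planar estimate a + b - c.
    """
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```

Small prediction errors at most pixels are what make the error histogram compressible and hence raise the embedding rate.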
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. U2133208 and U20A20161).
Abstract: Traditional air traffic control information sharing offers weak protection for personal privacy data and performs poorly, making the data easy to tamper with or steal. Starting from applications of the ATC (air traffic control) network, this paper focuses on a zero-trust access strategy and a tamper-proofing method for information-sharing network data. By improving the ATC network's zero-trust physical-layer authentication and the distributed feature-differentiation calculation of network data, this paper reconstructs the authentication structure for the scope of personal privacy and designs a tamper-proofing method for ATC information sharing on the Internet. Moving from a single management authority to unified management of data units, a systematic algorithmic improvement of the tamper-proofing method for shared network data is realized, and RDTP (Reliable Data Transfer Protocol) is adopted for the network data of information-sharing resources to keep air traffic control data tamper-proof during transmission. The results show that this method reasonably prevents tampering with information shared on the Internet and maintains the security of air traffic control information sharing, while Central Processing Unit (CPU) utilization is only 4.64%, effectively improving the performance of the comprehensive security protection system for air traffic control data.
Abstract: Advances in technology require upgrades in the law. One such area involves data brokers, which have thus far gone unregulated. Data brokers use artificial intelligence to aggregate information into data profiles about individual Americans derived from consumer use of the internet and connected devices. Data profiles are then sold for profit. Government investigators use a legal loophole to purchase this data instead of obtaining a search warrant, which the Fourth Amendment would otherwise require. Consumers have lacked a reasonable means to fight or correct the information data brokers collect. Americans may not even be aware of the risks of data aggregation, which upends the test of reasonable expectations used in a search warrant analysis. Data aggregation should be controlled and regulated, which is the direction some privacy laws take. Legislatures must step forward to safeguard against shadowy data-profiling practices, whether abroad or at home. In the meantime, courts can modify their search warrant analysis by including data privacy principles.
Abstract: MORPAS is a special GIS (geographic information system) software system, built on the MAPGIS platform, whose aim is to prospect and evaluate mineral resources quantitatively by synthesizing geological, geophysical, geochemical, and remote sensing data. It integrates geological database management, geological background and anomaly analysis, remote sensing image processing, and comprehensive anomaly analysis. It puts forward an integrated solution for applying GIS in basic-level units and for building information engineering in the geological field. With the popularization of computer networks and the demand for data sharing, it is necessary to extend its data management functions so that all of its data files can be accessed on a network server. This paper utilizes some MAPGIS functions for secondary development, together with the ADO (ActiveX Data Objects) technique, to access multi-source geological data in SQL Server databases, realizing remote access and consistent management in the MORPAS system.
Funding: Supported in part by the National Natural Science Foundation of China (No. 61602114), in part by the National Key Research and Development Program of China (2017YFB0801703), in part by the CERNET Innovation Project (NGII20170406), and in part by the Jiangsu Provincial Key Laboratory of Network and Information Security (BM2003201).
Abstract: We show that an aggregated Interest in Named Data Networking (NDN) may fail to retrieve the desired data, because the Interest previously sent upstream for the same content can be judged a duplicate and dropped by an upstream node due to multipath forwarding. We therefore propose NDRUDAF, a NACK-based mechanism that enhances Interest forwarding and enables Detection and fast Recovery from such Unanticipated Data Access Failures. In NDN enhanced with NDRUDAF, the router that aggregates the Interest detects the failure through a negative acknowledgement from the upstream node that judged the Interest a duplicate, and then retransmits the Interest as soon as possible on behalf of the requesters whose Interests were aggregated, recovering quickly from the data access failure. We qualitatively and quantitatively analyze the performance of NDN enhanced with NDRUDAF and compare it with the present NDN. Our experimental results validate that NDRUDAF improves system performance under such unanticipated data access failures in terms of data access delay and the efficiency of network resource utilization at routers.
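The failure and recovery path can be sketched as router logic: a second Interest for the same name is aggregated in the PIT rather than re-forwarded, and a duplicate-NACK from upstream triggers an immediate retransmission with a fresh nonce instead of waiting for the requesters to time out. This is an illustrative simplification of NDN forwarding state, not the paper's implementation:

```python
import itertools

_nonce = itertools.count(1)   # fresh nonces for (re)transmitted Interests

class Router:
    """Minimal sketch of the NDRUDAF recovery step (names illustrative)."""

    def __init__(self):
        self.pit = {}    # content name -> list of pending downstream faces
        self.sent = []   # (name, nonce) pairs forwarded upstream

    def on_interest(self, name, face):
        if name in self.pit:              # aggregate: do not re-forward
            self.pit[name].append(face)
            return None
        self.pit[name] = [face]
        nonce = next(_nonce)
        self.sent.append((name, nonce))   # forward upstream
        return nonce

    def on_nack_duplicate(self, name):
        """Upstream judged our Interest a duplicate: retransmit at once
        with a fresh nonce on behalf of all aggregated requesters."""
        nonce = next(_nonce)
        self.sent.append((name, nonce))
        return nonce
```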
Abstract: In the security and privacy fields, Access Control (AC) systems are viewed as fundamental aspects of networking security mechanisms. Enforcing AC becomes even more challenging when researchers and data analysts have to work with complex, distributed Big Data (BD) processing cluster frameworks, which are adopted to manage yottabytes of unstructured sensitive data. For instance, the privacy and security restrictions of BD systems are likely to fail due to malformed AC policy configurations. Furthermore, BD systems were initially developed to address BD challenges, many of them focused on the "three Vs" (Velocity, Volume, and Variety), without security being planned in from the start, so security has largely been added as patchwork. Some BD characteristics, such as distributed computing, fragmented and redundant data, and node-to-node communication, each bring their own security challenges and further complicate the applicability of AC in BD. This paper gives an overview of the latest security and privacy challenges in BD AC systems. Furthermore, it analyzes and compares some of the latest AC research frameworks for reducing privacy and security issues in distributed BD systems, of which very few enforce AC in a cost-effective and timely manner. Moreover, this work discusses future research methodologies and improvements for BD AC systems. This study is a valuable asset for Artificial Intelligence (AI) researchers, DB developers, and DB analysts who need the latest AC security and privacy research perspective before using and/or improving a current BD AC framework.
Funding: Supported by the National Natural Science Foundation of China under Grant 61972148.
Abstract: Traditional centralized data sharing systems carry potential risks such as single points of failure and excessive load on the central node. As a distributed and collaborative alternative, blockchain-based approaches have recently been explored for the Internet of Things (IoT). However, access from a legitimate user may be denied in the absence of a pre-defined policy, and updating data on the blockchain can be costly for the owners. In this paper, we first address these issues by incorporating the Accountable Subgroup Multi-Signature (ASM) algorithm into the Attribute-Based Access Control (ABAC) method with a policy smart contract, to provide a fine-grained and flexible solution. Next, we propose a policy-based Chameleon Hash algorithm that allows the data to be updated in a reliable and convenient way by authorized users. Finally, we evaluate our work by comparing its performance with the benchmarks. The results demonstrate significant improvements in effectiveness and efficiency.
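The primitive behind such convenient updates is the chameleon hash: anyone can verify the hash, but a trapdoor holder can find a second message with the same digest, so a block's content can be rewritten without breaking the chain. Below is the classic Krawczyk-Rabin construction with toy parameters (the paper's scheme is a policy-based variant, and real deployments use cryptographically sized groups):

```python
def chameleon_hash(m, r, g, h, p):
    """Krawczyk-Rabin chameleon hash: CH(m, r) = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def find_collision(m, r, m_new, x, q):
    """With trapdoor x (where h = g^x), pick r' so that
    CH(m_new, r') == CH(m, r):  r' = r + (m - m_new) * x^{-1} mod q."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

# Toy subgroup parameters (illustrative only): p = 2q + 1,
# g of prime order q in Z_p^*.
p, q, g = 23, 11, 4
x = 5                 # trapdoor held by the authorized updater
h = pow(g, x, p)      # public key
```

Only a party authorized under the policy learns the trapdoor, so only they can produce collisions, i.e. rewrite the on-chain data.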
Abstract: With the development of cloud computing, mutual understandability among distributed data access control mechanisms has become an important issue in cloud computing security. To ensure security, confidentiality, and fine-grained data access control in the Cloud Data Storage (CDS) environment, we propose a Multi-Agent System (MAS) architecture. This architecture consists of two agents: the Cloud Service Provider Agent (CSPA) and the Cloud Data Confidentiality Agent (CDConA). The CSPA provides a graphical interface to the cloud user that facilitates access to the services offered by the system. The CDConA provides each cloud user with the definition and enforcement of an expressive and flexible access structure, expressed as a logic formula over cloud data file attributes. This new access control model is named Formula-Based Cloud Data Access Control (FCDAC). Our proposed MAS-based FCDAC architecture consists of four layers (an interface layer, an existing access control layer, the proposed FCDAC layer, and a CDS layer) and four types of entities (the Cloud Service Provider (CSP), cloud users, a knowledge base, and confidentiality policy roles). FCDAC is an access policy determined by our MAS architecture, not by the CSPs. A prototype of the proposed FCDAC scheme is implemented using the Java Agent Development Framework Security (JADE-S). Our results, in the practical scenario defined formally in this paper, show the Round Trip Time (RTT) for an agent to travel in our system, measured by the time required for an agent to travel around different numbers of cloud users before and after implementing FCDAC.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61006003 and 61674038), the Natural Science Foundation of Fujian Province, China (Grant Nos. 2015J01249 and 2010J05134), the Science Foundation of the Fujian Education Department of China (Grant No. JAT160073), and the Science Foundation of the Fujian Provincial Economic and Information Technology Commission of China (Grant No. 83016006).
Abstract: As an industry-accepted storage scheme, hafnium oxide (HfO_x) based resistive random access memory (RRAM) should further improve its thermal stability and data retention for practical applications. We therefore fabricated RRAMs with an HfO_x/ZnO double layer as the storage medium to study their thermal stability and data retention. The HfO_x/ZnO double layer is capable of reversible bipolar switching under ultralow switching current (< 3 μA), with Schottky-emission-dominated conduction in the high resistance state and Poole-Frenkel-emission-governed conduction in the low resistance state. In contrast to the drastic increase in switching current at 120 °C for a single-HfO_x-layer RRAM, the HfO_x/ZnO double layer exhibits excellent thermal stability and maintains negligible fluctuations in switching current at high temperatures (up to 180 °C), which might be attributed to an increased Schottky barrier height suppressing the current at high temperatures. Additionally, the HfO_x/ZnO double layer exhibits 10-year data retention at 85 °C, which is helpful for practical RRAM applications.
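The two conduction mechanisms named above are conventionally fitted with the standard textbook expressions (general forms, not taken from this paper):

```latex
% Schottky emission (high resistance state):
J_{\mathrm{Sch}} = A^{*} T^{2}
  \exp\!\left[-\frac{q\left(\phi_B - \sqrt{qE/(4\pi\varepsilon_r\varepsilon_0)}\right)}{k_B T}\right]

% Poole--Frenkel emission (low resistance state):
J_{\mathrm{PF}} \propto E
  \exp\!\left[-\frac{q\left(\phi_T - \sqrt{qE/(\pi\varepsilon_r\varepsilon_0)}\right)}{k_B T}\right]
```

Here $A^{*}$ is the effective Richardson constant, $\phi_B$ the Schottky barrier height, $\phi_T$ the trap energy level, $E$ the electric field, and $\varepsilon_r$ the optical dielectric constant. The $T^2$ prefactor and the barrier term in the Schottky expression are consistent with the observed current suppression as the barrier height increases at elevated temperature.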
Funding: This work was supported by the National Natural Science Foundation for Distinguished Young Scholars of China (Grant No. 41725006).
Abstract: Growing attention has been directed to the use of satellite imagery and open geospatial data to understand large-scale sustainable development outcomes. Health and education are critical domains of the United Nations' Sustainable Development Goals (SDGs), yet existing research on the accessibility of the corresponding services has focused mainly on detailed but small-scale studies, which lack accessibility metrics for large-scale quantitative evaluations. To address this deficiency, we evaluated the accessibility of health and education services in China's Mainland in 2021 using point-of-interest data, OpenStreetMap road data, land cover data, and WorldPop spatial demographic data. The accessibility metrics used were the least time cost of reaching hospital and school services and the population coverage with a time cost of less than 1 h. On the basis of the road network and land cover information, the overall average time costs of reaching a hospital and a school were 20 and 22 min, respectively. In terms of population coverage, 94.7% and 92.5% of the population in China can obtain hospital and school services, respectively, with a time cost of less than 1 h. Counties with low accessibility to hospitals and schools strongly coincided with poor areas and ecological function regions, where the time cost was more than twice that experienced in non-poor and non-ecological areas. Furthermore, the cumulative time cost incurred by the bottom 20% of counties (by GDP) in accessing hospital and school services reached approximately 80% of the national total: low-GDP counties suffer disproportionately high time costs to acquire health and education services compared with high-GDP counties. The accessibility metrics proposed in this study are closely related to SDGs 3 and 4 and can serve as auxiliary data to enhance the evaluation of SDG outcomes. The analysis of the uneven distribution of health and education services in China can help identify areas with lagging public services and may contribute to targeted and efficient policy interventions.
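Least-time accessibility of this kind is typically computed as a cost-distance (Dijkstra) sweep over a raster cost surface derived from road class and land cover. A simplified stand-in for that computation, assuming 4-connected moves with the traversal time charged at the destination cell:

```python
import heapq

def least_time_cost(grid, sources):
    """Minimum travel time from any source cell to every cell.

    grid[i][j] is the per-cell traversal time (e.g. derived from road
    class or land cover); sources is a list of (row, col) cells that
    hold a hospital or school. Returns a matrix of least time costs.
    """
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    heap = []
    for r, c in sources:            # service locations cost nothing to be at
        dist[r][c] = 0
        heapq.heappush(heap, (0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:          # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

Population coverage within 1 h then follows by summing the population raster over cells whose least time cost is below the threshold.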
Abstract: Digital transformation has been the cornerstone of business innovation over the last decade, and these innovations have dramatically changed the definition and boundaries of enterprise business applications. The introduction of new products and services, version management of existing products and services, management of customer and partner connections, management of multi-channel service delivery (web, social media, etc.), mergers and acquisitions of new businesses, and the adoption of new innovations and technologies all drive data growth in business applications. These datasets live in different shared-nothing business applications at different locations and in various forms. To make sense of this information and derive insight, it is essential to break the data silos, streamline data retrieval, and simplify information access across the entire organization. The information access framework must support just-in-time processing to bring together data from multiple sources, be fast and powerful enough to transform and process huge amounts of data quickly, and be agile enough to accommodate new data sources per user needs. This paper discusses the SAP HANA Smart Data Access data-virtualization technology, which enables unified access to heterogeneous data across the organization and real-time analysis of huge data volumes using the SAP HANA in-memory platform.