As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study aims to explore a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data. The detection of atmospheric turbulence is approached as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique is the optimal methodology currently available. In summary, the use of symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to that with established EDR approaches and surpassing that achieved with machine learning algorithms. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amidst rising environmental and operational hazards.
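To make the anomaly-detection framing concrete, the minimal sketch below trains a symbolic classifier on synthetic stand-ins for QAR-derived features. It assumes the third-party gplearn package, whose SymbolicClassifier follows scikit-learn's fit/predict convention; the feature names, labels, and hyperparameters are illustrative assumptions, not those of the study.

```python
# Minimal sketch: turbulence detection as symbolic classification over
# hypothetical QAR-derived window features (assumed gplearn library).
import numpy as np
from gplearn.genetic import SymbolicClassifier

rng = np.random.default_rng(0)

# Hypothetical features per flight-data window:
# RMS vertical acceleration, altitude-rate variance, airspeed variance.
X = rng.normal(size=(2000, 3))
# Hypothetical labels: 1 = turbulence encounter, 0 = smooth flight.
y = (0.8 * X[:, 0] + 0.3 * X[:, 1] ** 2 > 1.2).astype(int)

clf = SymbolicClassifier(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,
    random_state=0,
)
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
print("evolved expression:", clf._program)  # human-readable symbolic formula
```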
Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing the access control policy. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model. Business attributes are introduced in the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute the general permission decision algorithm based on logical calculation and the business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network, respectively. The general permission decision algorithm is used to implement accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network is used to calculate the similarity of the business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, the complex and diverse big data access control management requirements can be satisfied by considering the security and availability of resources. Experimental results show that the proposed mechanism is effective and reliable. In summary, it can efficiently support the secure sharing of big data resources.
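A minimal sketch of the two-layer decision follows. The attribute names, the similarity threshold, and the plain word-overlap function standing in for the BiLSTM similarity model are illustrative assumptions, not the paper's implementation.

```python
# Illustrative two-layer permission decision: exact logic over inherent
# attributes, then a fuzzy business-attribute check (similarity stub).
from typing import Callable, Dict

def general_decision(subject: Dict, resource: Dict, action: str) -> bool:
    # Layer 1: exact logical evaluation over inherent ABAC attributes.
    return (
        subject["department"] == resource["owner_department"]
        and subject["clearance"] >= resource["sensitivity"]
        and action in resource["allowed_actions"]
    )

def business_similarity(a: str, b: str) -> float:
    # Stand-in for the BiLSTM similarity over business-attribute text.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def business_decision(subject: Dict, resource: Dict,
                      sim: Callable[[str, str], float] = business_similarity,
                      threshold: float = 0.5) -> bool:
    # Layer 2: fuzzy decision based on business-constraint similarity.
    return sim(subject["business_attr"], resource["business_attr"]) >= threshold

def decide(subject: Dict, resource: Dict, action: str) -> bool:
    return general_decision(subject, resource, action) and business_decision(subject, resource)

subject = {"department": "grid-ops", "clearance": 3,
           "business_attr": "metering data analysis for grid operations"}
resource = {"owner_department": "grid-ops", "sensitivity": 2,
            "allowed_actions": {"read"},
            "business_attr": "grid operations metering data"}
print(decide(subject, resource, "read"))  # True under these toy attributes
```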
Data trading enables data owners and data requesters to sell and purchase data. With the emergence of blockchain technology, research on blockchain-based data trading systems is receiving a lot of attention. In particular, to reduce on-chain storage cost, a novel paradigm of blockchain and cloud fusion has been widely considered a promising data trading platform. Moreover, the fact that data can be used for commercial purposes will encourage users and organizations from various fields to participate in the data marketplace. In the data marketplace, a key challenge is how to trade data outsourced to an external cloud securely while restricting access to the data only to authorized users across multiple domains. In this paper, we propose a cross-domain bilateral access control protocol for blockchain-cloud based data trading systems. We consider a system model that consists of domain authorities, data senders, data receivers, a blockchain layer, and a cloud provider. The proposed protocol enables access control and source identification of the outsourced data by leveraging identity-based cryptographic techniques. In the proposed protocol, the outsourced data of the sender is encrypted under the target receiver's identity, and the cloud provider performs policy-match verification on the authorization tags of the sender and receiver generated by the identity-based signature scheme. Therefore, data trading can be achieved only if the identities of the data sender and receiver simultaneously meet the policies specified by each other. To demonstrate efficiency, we evaluate the performance of the proposed protocol and compare it with existing studies.
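The bilateral nature of the policy check (both directions must match) can be sketched as follows; the identity and policy structures and the stubbed verify_tag function stand in for the paper's identity-based signature scheme and are illustrative only.

```python
# Sketch of the bilateral policy-match step performed by the cloud provider.
# Identity formats, policy structure, and verify_tag() are stand-ins.
from dataclasses import dataclass

@dataclass
class Party:
    identity: dict          # attributes bound to the party's identity
    policy: dict            # requirements the counterparty must satisfy
    auth_tag: bytes         # identity-based authorization tag (stubbed)

def verify_tag(tag: bytes, identity: dict) -> bool:
    # Stand-in for identity-based signature verification.
    return tag == repr(sorted(identity.items())).encode()

def satisfies(identity: dict, policy: dict) -> bool:
    return all(identity.get(k) == v for k, v in policy.items())

def policy_match(sender: Party, receiver: Party) -> bool:
    # Trading proceeds only if both tags verify and each identity
    # satisfies the policy specified by the other party.
    return (verify_tag(sender.auth_tag, sender.identity)
            and verify_tag(receiver.auth_tag, receiver.identity)
            and satisfies(sender.identity, receiver.policy)
            and satisfies(receiver.identity, sender.policy))

def make_party(identity, policy):
    return Party(identity, policy, repr(sorted(identity.items())).encode())

seller = make_party({"domain": "hospital-A", "role": "data-owner"}, {"domain": "lab-B"})
buyer = make_party({"domain": "lab-B", "role": "researcher"}, {"domain": "hospital-A"})
print(policy_match(seller, buyer))  # True: both sides' policies are met
```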
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception, network, and platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as the unified management of the physical model, high concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication, a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT), this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.
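For reference, the underlying dynamic program is the classic matrix-chain-order recurrence. The sketch below fills it diagonal by diagonal so that each wavefront of the DP is contiguous in memory, which is the flavor of layout change that improves locality; it is a CPU-side Python illustration, not the paper's CUDA implementation.

```python
# Chained-matrix-multiplication DP stored in diagonal-major order: diag[d][i]
# holds the cost of multiplying matrices A_{i+1}..A_{i+1+d}, so each DP
# wavefront (one diagonal) is a contiguous run in memory.
def matrix_chain_cost(p):
    """p: dimension list; matrix i has shape p[i-1] x p[i], i = 1..n."""
    n = len(p) - 1
    diag = [[0] * (n - d) for d in range(n)]
    for d in range(1, n):                 # chain length d+1, one diagonal at a time
        for i in range(n - d):
            j = i + d
            diag[d][i] = min(
                diag[k - i][i] + diag[j - k - 1][k + 1] + p[i] * p[k + 1] * p[j + 1]
                for k in range(i, j)
            )
    return diag[n - 1][0]

print(matrix_chain_cost([10, 30, 5, 60]))  # classic example: 4500 scalar multiplications
```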
MORPAS is a special GIS (geographic information system) software system, based on the MAPGIS platform, whose aim is to prospect and evaluate mineral resources quantitatively by synthesizing geological, geophysical, geochemical, and remote sensing data. It covers geological database management, geological background and geological anomaly analysis, remote sensing image processing, comprehensive anomaly analysis, etc. It puts forward an integrative solution for the application of GIS in basic-level units and the construction of information engineering in the geological field. With the popularization of computer networks and the demand for data sharing, it is necessary to extend its data management functions so that all its data files can be accessed on a network server. This paper utilizes some MAPGIS functions for secondary development and the ADO (ActiveX Data Objects) technique to access multi-source geological data in SQL Server databases. Remote access and consistent management are thereby realized in the MORPAS system.
We show that an aggregated Interest in Named Data Networking (NDN) may fail to retrieve desired data, since the Interest previously sent upstream for the same content is judged as a duplicate and then dropped by an upstream node due to its multipath forwarding. Furthermore, we propose NDRUDAF, a NACK-based mechanism that enhances Interest forwarding and enables Detection and fast Recovery from such Unanticipated Data Access Failure. In the NDN enhanced with NDRUDAF, the router that aggregates the Interest detects such unanticipated data access failure based on a negative acknowledgement from the upstream node that judges the Interest as a duplicate. The router then retransmits the Interest as soon as possible on behalf of the requester whose Interest was aggregated, to recover quickly from the data access failure. We qualitatively and quantitatively analyze the performance of the NDN enhanced with our proposed NDRUDAF and compare it with that of the present NDN. Our experimental results validate that NDRUDAF improves system performance in case of such unanticipated data access failures in terms of data access delay and network resource utilization efficiency at routers.
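A toy sketch of the aggregation and NACK-triggered recovery behavior at a router follows; the face identifiers, table fields, and retry policy are illustrative assumptions, and real NDN forwarding details (FIB and Content Store lookups) are omitted.

```python
# Toy sketch of Interest aggregation and NACK-triggered retransmission at an
# NDRUDAF-style router.
class Router:
    def __init__(self, upstream_faces):
        self.pit = {}                      # name -> {"downstream": set, "tried": set}
        self.upstream_faces = list(upstream_faces)

    def on_interest(self, name, downstream_face):
        entry = self.pit.get(name)
        if entry:
            # Aggregate: record the requester, do not forward a second Interest.
            entry["downstream"].add(downstream_face)
            return None
        face = self.upstream_faces[0]
        self.pit[name] = {"downstream": {downstream_face}, "tried": {face}}
        return ("interest", name, face)

    def on_nack_duplicate(self, name):
        # Upstream judged our Interest a duplicate (multipath forwarding):
        # retransmit promptly on an untried upstream face on behalf of the
        # aggregated requesters instead of waiting for their timeouts.
        entry = self.pit.get(name)
        if not entry:
            return None
        for face in self.upstream_faces:
            if face not in entry["tried"]:
                entry["tried"].add(face)
                return ("interest", name, face)
        return None

r = Router(upstream_faces=["faceA", "faceB"])
print(r.on_interest("/video/seg1", "consumer1"))   # forwarded on faceA
print(r.on_interest("/video/seg1", "consumer2"))   # aggregated -> None
print(r.on_nack_duplicate("/video/seg1"))          # fast retry on faceB
```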
In the security and privacy fields, Access Control (AC) systems are viewed as fundamental aspects of networking security mechanisms. Enforcing AC becomes even more challenging when researchers and data analysts have to analyze complex and distributed Big Data (BD) processing cluster frameworks, which are adopted to manage yottabytes of unstructured sensitive data. For instance, the privacy and security restrictions of Big Data systems are likely to fail due to malformed AC policy configurations. Furthermore, BD systems were initially developed to take care of some of the DB issues in addressing BD challenges, and many of these dealt with the "three Vs" (Velocity, Volume, and Variety) attributes without planning security considerations, which are therefore considered patchwork. Some of the BD "three Vs" characteristics, such as distributed computing, fragmented and redundant data, and node-to-node communication, each with its own security challenges, complicate the applicability of AC in BD even further. This paper gives an overview of the latest security and privacy challenges in BD AC systems. Furthermore, it analyzes and compares some of the latest AC research frameworks designed to reduce privacy and security issues in distributed BD systems, very few of which enforce AC in a cost-effective and timely manner. Moreover, this work discusses some of the future research methodologies and improvements for BD AC systems. This study is a valuable asset for Artificial Intelligence (AI) researchers, DB developers, and DB analysts who need the latest AC security and privacy research perspective before using and/or improving a current BD AC framework.
Traditional centralized data sharing systems have potential risks such as single points of failure and excessive working load on the central node. As a distributed and collaborative alternative, approaches based upon blockchain have recently been explored for the Internet of Things (IoT). However, access from a legitimate user may be denied without a pre-defined policy, and data updates on the blockchain can be costly to the owners. In this paper, we first address these issues by incorporating the Accountable Subgroup Multi-Signature (ASM) algorithm into the Attribute-based Access Control (ABAC) method with a Policy Smart Contract, to provide a fine-grained and flexible solution. Next, we propose a policy-based Chameleon Hash algorithm that allows the data to be updated in a reliable and convenient way by authorized users. Finally, we evaluate our work by comparing its performance with the benchmarks. The results demonstrate significant improvement in effectiveness and efficiency.
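The policy-based construction builds on the classic chameleon-hash primitive: whoever holds the trapdoor can change the hashed message without changing the digest. A minimal discrete-log sketch of that primitive is shown below with deliberately tiny, insecure parameters; the policy layer that restricts trapdoor use to authorized users is not reproduced here.

```python
# Minimal chameleon-hash sketch (Krawczyk–Rabin style, discrete-log setting).
# Tiny parameters (p = 23, q = 11) are for readability only; real deployments
# need cryptographically sized groups. Requires Python 3.8+ for pow(x, -1, q).
p, q, g = 23, 11, 4           # g generates the order-q subgroup of Z_p*

x = 7                         # trapdoor (held by authorized updaters)
h = pow(g, x, p)              # public key

def ch_hash(m: int, r: int) -> int:
    # CH(m, r) = g^m * h^r mod p, with m and r reduced mod q
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def find_collision(m: int, r: int, m_new: int) -> int:
    # With the trapdoor x, pick r_new so the digest stays unchanged:
    # m + x*r = m_new + x*r_new  (mod q)
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 5, 3
digest = ch_hash(m, r)
m_new = 9                                 # updated data (toy message)
r_new = find_collision(m, r, m_new)
assert ch_hash(m_new, r_new) == digest    # on-chain digest is unchanged
print(digest, r_new)
```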
With the development of cloud computing, mutual understandability among distributed data access controls has become an important issue in the security field of cloud computing. To ensure the security, confidentiality, and fine-grained data access control of the Cloud Data Storage (CDS) environment, we propose a Multi-Agent System (MAS) architecture. This architecture consists of two agents: the Cloud Service Provider Agent (CSPA) and the Cloud Data Confidentiality Agent (CDConA). CSPA provides a graphical interface to the cloud user that facilitates access to the services offered by the system. CDConA provides each cloud user with the definition and enforcement of an expressive and flexible access structure as a logic formula over cloud data file attributes. This new access control is named Formula-Based Cloud Data Access Control (FCDAC). Our proposed FCDAC based on the MAS architecture consists of four layers: an interface layer, an existing access control layer, the proposed FCDAC layer, and a CDS layer, as well as four types of entities: the Cloud Service Provider (CSP), cloud users, a knowledge base, and confidentiality policy roles. FCDAC is an access policy determined by our MAS architecture, not by the CSPs. A prototype of our proposed FCDAC scheme is implemented using the Java Agent Development Framework Security (JADE-S). Our results in the practical scenario formally defined in this paper show the Round Trip Time (RTT) for an agent to travel in our system, measured by the time required for an agent to travel around different numbers of cloud users before and after implementing FCDAC.
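The core of a formula-based decision is evaluating a logic formula over cloud data file attributes. The sketch below assumes a simple nested AND/OR/NOT representation and illustrative attribute names; it is not the JADE-S prototype.

```python
# Sketch of formula-based access control: an access structure expressed as a
# logic formula over file attributes (nested AND/OR/NOT tuples, assumed here).
def evaluate(formula, attrs):
    op, *args = formula
    if op == "ATTR":                      # leaf: attribute equality test
        name, expected = args
        return attrs.get(name) == expected
    if op == "AND":
        return all(evaluate(f, attrs) for f in args)
    if op == "OR":
        return any(evaluate(f, attrs) for f in args)
    if op == "NOT":
        return not evaluate(args[0], attrs)
    raise ValueError(f"unknown operator: {op}")

# "(department = finance AND classification = internal) OR owner = alice"
policy = ("OR",
          ("AND", ("ATTR", "department", "finance"),
                  ("ATTR", "classification", "internal")),
          ("ATTR", "owner", "alice"))

print(evaluate(policy, {"department": "finance", "classification": "internal"}))  # True
print(evaluate(policy, {"department": "hr", "owner": "bob"}))                      # False
```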
As an industry-accepted storage scheme, hafnium oxide (HfO_x) based resistive random access memory (RRAM) should further improve its thermal stability and data retention for practical applications. We therefore fabricated RRAMs with an HfO_x/ZnO double-layer as the storage medium to study their thermal stability as well as data retention. The HfO_x/ZnO double-layer is capable of reversible bipolar switching under ultralow switching current (< 3 μA), with a Schottky emission dominant conduction for the high resistance state and a Poole–Frenkel emission governed conduction for the low resistance state. Compared with the drastically increased switching current at 120 ℃ for the single HfO_x layer RRAM, the HfO_x/ZnO double-layer exhibits excellent thermal stability and maintains negligible fluctuations in switching current at high temperatures (up to 180 ℃), which might be attributed to the increased Schottky barrier height suppressing current at high temperatures. Additionally, the HfO_x/ZnO double-layer exhibits 10-year data retention at 85 ℃, which is helpful for practical applications in RRAMs.
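For reference, the two conduction mechanisms named above are conventionally written in their standard textbook forms (A* is the effective Richardson constant, E the electric field, φ_B the Schottky barrier height, φ_t the trap depth, ε_r the high-frequency dielectric constant); the study's fitted parameter values are not reproduced here.

\[ J_{\mathrm{Schottky}} = A^{*} T^{2} \exp\!\left[\frac{-q\left(\phi_{B} - \sqrt{qE/(4\pi\varepsilon_{r}\varepsilon_{0})}\right)}{k_{B}T}\right], \qquad J_{\mathrm{PF}} \propto E \exp\!\left[\frac{-q\left(\phi_{t} - \sqrt{qE/(\pi\varepsilon_{r}\varepsilon_{0})}\right)}{k_{B}T}\right] \]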
Digital transformation has been the cornerstone of business innovation in the last decade, and these innovations have dramatically changed the definition and boundaries of enterprise business applications. The introduction of new products/services, version management of existing products/services, management of customer/partner connections, management of multi-channel service delivery (web, social media, etc.), mergers/acquisitions of new businesses, and adoption of new innovations/technologies all drive data growth in business applications. These datasets exist in different share-nothing business applications, at different locations, and in various forms. So, to make sense of this information and derive insight, it is essential to break the data silos, streamline data retrieval, and simplify information access across the entire organization. The information access framework must support just-in-time processing capabilities to bring data from multiple sources, be fast and powerful enough to transform and process huge amounts of data quickly, and be agile enough to accommodate new data sources per user needs. This paper discusses the SAP HANA Smart Data Access data-virtualization technology, which enables unified access to heterogeneous data across the organization and analysis of huge volumes of data in real time using the SAP HANA in-memory platform.
For the goals of security and privacy preservation, we propose a blind batch encryption- and public ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger and allows privacy information to be preserved. Data owners can tightly manage their data with efficient revocation and grant only one-time adaptive access for the fulfillment of the requester. We prove that our protocol is semantically secure, blind, and secure against oblivious requesters and malicious file keepers. We also provide security analysis in the context of four typical attacks.
This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation hierarchical data model is extended to a multilevel relation hierarchical data model. Based on the multilevel relation hierarchical data model, the concept of upper-lower layer relational integrity is presented after we analyze and eliminate the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integers, real numbers, and character strings).
A new design solution of the data access layer for N-tier architecture is presented. It can solve problems such as low development efficiency and difficulties in transplantation, update, and reuse. The solution utilizes the reflection technology of .NET and design patterns. A typical application of the solution demonstrates that the new data access layer performs better than the current N-tier architecture. More importantly, the application suggests that the new data access layer solution can be reused effectively.
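As an analogous illustration (in Python rather than the paper's .NET), a reflection-driven data access layer loads the concrete DAO class named in configuration at runtime, so the storage backend can be swapped without changing calling code; the module, class, and configuration names here are hypothetical.

```python
# Analogous sketch of a reflection-driven data access layer: the concrete DAO
# class is named in configuration and resolved at runtime.
import importlib

class SqlUserDao:                      # one concrete implementation
    def get_user(self, user_id):
        return {"id": user_id, "source": "sql"}

class MockUserDao:                     # e.g., for tests or another backend
    def get_user(self, user_id):
        return {"id": user_id, "source": "mock"}

CONFIG = {"user_dao": "__main__.SqlUserDao"}   # normally read from a config file

def create_dao(key):
    module_name, class_name = CONFIG[key].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), class_name)  # "reflection"
    return cls()

dao = create_dao("user_dao")
print(dao.get_user(42))                # {'id': 42, 'source': 'sql'}
```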
A new web product data management architecture is presented. The three-tier web architecture and the Simple Object Access Protocol (SOAP) are combined to build a web-based product data management (PDM) system that includes three tiers: the user services tier, the business services tier, and the data services tier. The client service component uses server-side technology, and an Extensible Markup Language (XML) web service, which uses SOAP as the communication protocol, is chosen as the business service component. To illustrate how to build a web-based PDM system using the proposed architecture, a case PDM system comprising three logical tiers was built. To use the security and central management features of the database, a stored procedure is recommended in the data services tier. The business object was implemented as an XML web service so that clients can use standard internet protocols to communicate with the business object from any platform. To serve users on all sorts of browsers, server-side technology and Microsoft ASP.NET were used to create the dynamic user interface.
In the big data environment, traditional access control lacks an effective and flexible access mechanism. Based on attribute-based access control, this paper proposes HBMC-ABAC, a big data access control framework. It solves the problems of difficult authority changes, complex management, over-authorization, and lack of authorization in the big data environment. At the same time, binary mapping codes are proposed to solve the problem of low policy-retrieval efficiency in traditional ABAC. Experimental analysis shows that our proposed HBMC-ABAC model can meet the demands of today's large and complex big data environment.
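One way such binary mapping codes can work is sketched below: each attribute is assigned a bit position, a request's attribute set becomes a single integer, and policy retrieval reduces to one bitwise test per policy. The attribute vocabulary and the subset-matching rule are assumptions for illustration, not the paper's exact coding scheme.

```python
# Illustrative bitmap-encoded policy retrieval for attribute-based access control.
ATTRIBUTE_BITS = {          # global attribute vocabulary -> bit position
    "role:analyst": 0, "role:admin": 1,
    "dept:finance": 2, "dept:ops": 3,
    "env:office_hours": 4, "res:report": 5,
}

def encode(attributes):
    code = 0
    for a in attributes:
        code |= 1 << ATTRIBUTE_BITS[a]
    return code

# Each policy: (required-attribute bitmap, decision)
policies = [
    (encode({"role:analyst", "dept:finance", "res:report"}), "permit"),
    (encode({"role:admin"}), "permit"),
]

def retrieve(request_attrs):
    req = encode(request_attrs)
    # A policy matches if all of its required bits are present in the request.
    return [dec for mask, dec in policies if mask & req == mask]

print(retrieve({"role:analyst", "dept:finance", "res:report", "env:office_hours"}))
# ['permit']
print(retrieve({"role:analyst", "dept:ops"}))   # []
```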
With the growth of requirements for data sharing, a novel business model of digital asset trading has emerged that allows data owners to sell their data for monetary gain. In the distributed ledger of a blockchain, however, the privacy of stakeholders' identities and the confidentiality of data content are threatened. Therefore, we propose a blockchain-enabled privacy-preserving and access control scheme to address the above problems. First, a multi-channel mechanism is introduced to provide privacy protection of the distributed ledger inside the channel and achieve coarse-grained access control to digital assets. Then, we use a multi-authority attribute-based encryption (MAABE) algorithm to build a fine-grained access control model for data trading in a single channel and describe its instantiation in detail. Security analysis shows that the scheme is IND-CPA secure and provides privacy protection and collusion resistance. Compared with other schemes, our solution performs better in privacy protection and access control. The evaluation results demonstrate its effectiveness and practicability.