Data integrity is a critical component of data lifecycle management, and its importance only grows in a complex and dynamic landscape. Actions such as unauthorized access, unauthorized modification, data manipulation, audit tampering, data backdating, data falsification, phishing, and spoofing are no longer restricted to rogue individuals; they are also carried out systematically by organizations and states. Data security therefore requires strong data integrity measures and the associated technical controls. Without a properly customized framework in place, organizations run a high risk of financial, reputational, and revenue losses, bankruptcy, and legal penalties, which we discuss further throughout this paper. We also explore some improvised and innovative product-development techniques for better tackling the challenges and requirements of data security and integrity.
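The abstract mentions technical controls for data integrity without detailing them; as one generic, hedged illustration (not a technique taken from the paper), the Python sketch below seals a record with an HMAC so that later modification or backdating of the sealed fields can be detected. The key handling and field names are purely hypothetical.

```python
import hmac
import hashlib
import json

SECRET_KEY = b"replace-with-a-managed-key"  # hypothetical key; a real system would use a key-management service

def seal(record: dict) -> str:
    """Return an integrity tag for a record (canonical JSON + HMAC-SHA256)."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    """Detect unauthorized modification or backdating of the sealed fields."""
    return hmac.compare_digest(seal(record), tag)

record = {"id": 42, "value": 17.3, "timestamp": "2024-01-05T10:00:00Z"}
tag = seal(record)
record["timestamp"] = "2023-12-31T10:00:00Z"   # simulated backdating
print(verify(record, tag))                      # False: tampering is detected
```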
Offshore waters provide resources for human beings while, on the other hand, threatening them with marine disasters. Ocean stations are part of offshore observation networks, and the quality of their data is of great significance for exploiting and protecting the ocean. We used hourly mean wave height, temperature, and pressure real-time observation data taken at the Xiaomaidao station (in Qingdao, China) from June 1, 2017, to May 31, 2018, to explore the data quality using eight quality control methods and to identify the most effective method for Xiaomaidao station. After applying the eight quality control methods, the percentages of the mean wave height, temperature, and pressure data that passed the tests were 89.6%, 88.3%, and 98.6%, respectively. Compared against marine disaster (wave alarm report) data, the values that failed the tests were mainly due to aging observation equipment and missing data transmissions. The mean wave height is often affected by dynamic marine disasters, so the continuity test method is not effective; a correlation test with other related parameters would be more useful for the mean wave height. (Funding: Supported by the National Key Research and Development Program of China, Nos. 2016YFC1402000, 2018YFC1407003, 2017YFC1405300.)
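The eight quality control methods themselves are not listed in the abstract; purely for illustration, the Python sketch below applies two checks commonly used on hourly station series, a range check and a spike (continuity) check, with thresholds that are assumptions rather than values from the study.

```python
import numpy as np

def range_check(values, lo, hi):
    """Flag samples that fall outside a physically plausible interval."""
    v = np.asarray(values, dtype=float)
    return (v < lo) | (v > hi)

def spike_check(values, max_step):
    """Flag samples that jump more than max_step from the previous hourly value."""
    v = np.asarray(values, dtype=float)
    flags = np.zeros(v.shape, dtype=bool)
    flags[1:] = np.abs(np.diff(v)) > max_step
    return flags

# Illustrative hourly mean wave heights (m); thresholds are not from the paper.
wave = [0.4, 0.5, 0.6, 5.8, 0.7, 0.6]
bad = range_check(wave, 0.0, 20.0) | spike_check(wave, 3.0)
print(f"pass rate: {100 * (1 - bad.mean()):.1f}%")
```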
Air pollution poses a critical threat to public health and environmental sustainability globally, and Nigeria is no exception. Despite significant economic growth and urban development, Nigeria faces substantial air quality challenges, particularly in urban centers. While outdoor air pollution has received considerable attention, the issue of indoor air quality remains underexplored yet equally critical. This study aims to develop a reliable, cost-effective, and user-friendly solution for continuous monitoring and reporting of indoor air quality, accessible from anywhere via a web interface. Addressing the urgent need for effective indoor air quality monitoring in urban hospitals, the research focuses on designing and implementing a smart indoor air quality monitoring system using Arduino technology. Employing an Arduino Uno, ESP8266 Wi-Fi module, and MQ135 gas sensor, the system collects real-time air quality data, transmits it to the ThingSpeak cloud platform, and visualizes it through a user-friendly web interface. This project offers a cost-effective, portable, and reliable solution for monitoring indoor air quality, aiming to mitigate health risks and promote a healthier living environment.
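To make the data path concrete, the following host-side Python sketch pushes a single reading to ThingSpeak's HTTP update endpoint using the third-party requests library; it stands in for, and is not, the Arduino/ESP8266 firmware used in the study, and the write API key and field mapping are placeholders.

```python
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder; taken from the channel's API Keys tab

def publish_air_quality(ppm: float) -> bool:
    """Send one MQ135-style reading to ThingSpeak field1; return True if it was accepted."""
    resp = requests.get(
        THINGSPEAK_URL,
        params={"api_key": WRITE_API_KEY, "field1": ppm},
        timeout=10,
    )
    # ThingSpeak replies with the new entry id, or "0" if the update was rejected.
    return resp.ok and resp.text.strip() != "0"

if __name__ == "__main__":
    print(publish_air_quality(412.0))
```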
This paper proposes a virtual router cluster system based on the separation of the control plane and the data plane, examined from multiple perspectives such as architecture, key technologies, scenarios, and standardization. To some extent, the virtual cluster simplifies network topology and management, achieves automatic configuration, and saves IP addresses. It is a low-cost way to expand the port density of aggregation equipment. (Funding: Supported by the Collaboration Research on Key Techniques of Future Network between China, Japan and Korea, 2010DFB13470.)
With the development of cloud computing, mutual understandability among distributed data access controls has become an important issue in cloud computing security. To ensure security, confidentiality, and fine-grained data access control in a Cloud Data Storage (CDS) environment, we propose a Multi-Agent System (MAS) architecture. This architecture consists of two agents: a Cloud Service Provider Agent (CSPA) and a Cloud Data Confidentiality Agent (CDConA). The CSPA provides a graphical interface that facilitates the cloud user's access to the services offered by the system. The CDConA provides each cloud user with the definition and enforcement of an expressive and flexible access structure, expressed as a logic formula over cloud data file attributes. This new access control is named Formula-Based Cloud Data Access Control (FCDAC). Our proposed FCDAC, based on the MAS architecture, consists of four layers (an interface layer, an existing access control layer, the proposed FCDAC layer, and a CDS layer) and four types of entities (the Cloud Service Provider (CSP), cloud users, a knowledge base, and confidentiality policy roles). FCDAC is an access policy determined by our MAS architecture, not by the CSPs. A prototype of our proposed FCDAC scheme is implemented using the Java Agent Development Framework Security (JADE-S). Our results for the practical scenario defined formally in this paper show the Round Trip Time (RTT) for an agent to travel in our system, measured by the time required for an agent to travel around different numbers of cloud users before and after implementing FCDAC.
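The access structure in FCDAC is described as a logic formula over cloud data file attributes; the Python fragment below is only a schematic rendering of such formula evaluation (it is not the JADE-S agent implementation), and the attribute names are invented for the example.

```python
# A policy is a boolean formula over data-file attributes, e.g.
# AND(department == "finance", OR(role == "auditor", clearance == "high")).

def evaluate(policy, attributes: dict) -> bool:
    """Recursively evaluate a formula-based policy against a user's attributes."""
    op = policy[0]
    if op == "attr":
        _, name, expected = policy
        return attributes.get(name) == expected
    if op == "AND":
        return all(evaluate(p, attributes) for p in policy[1:])
    if op == "OR":
        return any(evaluate(p, attributes) for p in policy[1:])
    if op == "NOT":
        return not evaluate(policy[1], attributes)
    raise ValueError(f"unknown operator: {op}")

policy = ("AND", ("attr", "department", "finance"),
                 ("OR", ("attr", "role", "auditor"), ("attr", "clearance", "high")))
print(evaluate(policy, {"department": "finance", "role": "auditor"}))  # True
print(evaluate(policy, {"department": "sales", "clearance": "high"}))  # False
```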
In order to cope with varying protection granularity levels of XML (Extensible Markup Language) documents, we propose a TXAC (Two-level XML Access Control) framework, in which an extended TRBAC (Temporal Role-Based Access Control) approach is proposed to deal with dynamic XML data. With its system components, the TXAC algorithm evaluates access requests efficiently by applying the appropriate access control policy in a dynamic web environment. The method is a flexible and powerful security system offering a multi-level access control solution.
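As a rough sketch of what a temporal, role-based decision over XML elements can look like (the internals of TXAC are not given in the abstract), the Python snippet below releases an element only when the requesting role holds a permission on that path within its validity window; the roles, paths, and time windows are illustrative.

```python
from datetime import time
import xml.etree.ElementTree as ET

# (role, element path prefix, start, end): permissions are valid only inside the time window.
PERMISSIONS = [
    ("nurse",  "./patient/vitals", time(8, 0), time(20, 0)),
    ("doctor", "./patient",        time(0, 0), time(23, 59)),
]

def allowed(role: str, path: str, now: time) -> bool:
    """Temporal role-based check: the role must hold the path inside its validity window."""
    return any(role == r and path.startswith(p) and start <= now <= end
               for r, p, start, end in PERMISSIONS)

doc = ET.fromstring("<patient><vitals><hr>72</hr></vitals></patient>")
if allowed("nurse", "./patient/vitals", time(9, 30)):
    print(doc.find("./vitals/hr").text)   # element is released to the requester
```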
Big data has a strong demand for a network infrastructure with the capability to support data sharing and retrieval efficiently. Information-centric networking (ICN) is an emerging approach to satisfying this demand, where big data is cached ubiquitously in the network and retrieved using data names. However, existing authentication and authorization schemes rely mostly on centralized servers to provide certification and mediation services for data retrieval, which causes considerable traffic overhead for the secure distributed sharing of data. To solve this problem, we employ identity-based cryptography (IBC) to propose a Distributed Authentication and Authorization Scheme (DAAS), in which an identity-based signature (IBS) is used to achieve distributed verification of the identities of publishers and users. Moreover, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is used to enable distributed and fine-grained authorization. DAAS consists of three phases: initialization, secure data publication, and secure data retrieval, which seamlessly integrate authentication and authorization with the interest/data communication paradigm in ICN. In particular, we propose trustworthy registration and Network Operator and Authority Manifest (NOAM) dissemination to provide initial secure registration and enable efficient authentication for global data retrieval. Meanwhile, Attribute Manifest (AM) distribution coupled with automatic attribute updates is proposed to reduce the cost of attribute retrieval. We examine the performance of the proposed DAAS and show that it can achieve a lower bandwidth cost than existing schemes.
A data warehouse (DW), a technology invented in the 1990s, is more useful for integrating and analyzing massive data than a traditional database. Its application in the geology field can be divided into three phases: 1992-1996, when the commercial data warehouse (CDW) appeared; 1996-1999, when the geological data warehouse (GDW) appeared and geologists and geographers recognized the importance of the DW and began studying it, although practical DWs still followed the framework of the database; and 2000 to the present, when the geological data warehouse has grown and the theory of the geo-spatial data warehouse (GSDW) has been developed, though research in the geological area remains deficient except in geography. Although some developments of the GDW have been made, its core still follows the CDW practice of organizing data by time, which brings about three problems: the geological data are difficult to integrate, because they feature space more than time; the massive data are hard to store at different levels, for the same reason; and spatial analysis is hardly supported if the data are organized by time as in a CDW. The GDW should therefore be redesigned: organize data by scale, so that massive data can be stored at different levels and synthesized at different granularities, and choose space control points to replace the former time control points, so that different types of data can be integrated by storing each data type as one layer and then superposing the layers. In addition, the data cube, a widely used technology in the CDW, is of little use in a GDW, because the causality among geological data is not as obvious as in commercial data (the data are the mixed result of many complex rules, and their analysis always needs special geological methods and software); moreover, a data cube for massive and complex geo-data would devour too much storage space to be practical. On this point, the main purpose of a GDW may be data integration, unlike a CDW, which serves data analysis.
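The redesign the authors argue for, organizing data by scale, storing each data type as one layer, and superposing layers at space control points, can be pictured with a toy Python structure like the one below; it is a schematic only, not an implementation from the paper, and the scales, layer names, and coordinates are made up.

```python
from collections import defaultdict

# warehouse[scale][layer_name] -> {space_control_point: value}
warehouse = defaultdict(dict)

def load_layer(scale: str, layer: str, data: dict) -> None:
    """Store one data type as one layer at a given scale."""
    warehouse[scale][layer] = data

def superpose(scale: str, point) -> dict:
    """Integrate different data types by overlaying all layers at one control point."""
    return {layer: data.get(point) for layer, data in warehouse[scale].items()}

load_layer("1:200000", "lithology",   {(36.05, 120.38): "granite"})
load_layer("1:200000", "gravimetric", {(36.05, 120.38): -12.4})
print(superpose("1:200000", (36.05, 120.38)))
# {'lithology': 'granite', 'gravimetric': -12.4}
```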
Wireless sensor networks are distributed, event-based systems that differ from conventional communication networks. A sensor network has severe energy constraints, redundant low-rate data, and many-to-one flows. Aggregation is a technique for avoiding redundant information to save energy and other resources. There are two types of aggregation. In the first, many sensor readings are embedded into a single packet, avoiding unnecessary packet headers; this is called lossless aggregation. In the second, the sensor data undergo statistical processing (average, maximum, minimum) and the results are communicated to the base station; this is called lossy aggregation, because the original sensor data cannot be recovered from the received aggregated packet. The number of sensor readings aggregated into a single packet is known as the degree of aggregation. The main contribution of this paper is an algorithm that adaptively chooses the type of aggregation based on the scenario and the degree of aggregation based on the traffic. We also suggest a suitable buffer management scheme to offer the best quality of service. Our initial experiments with an NS-2 implementation show significant energy savings by optimally reducing the number of packets at any given moment.
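The adaptive behaviour described, choosing lossless packing or lossy statistics per scenario and setting the degree of aggregation from traffic, can be sketched as follows in Python; the threshold rule and values are illustrative assumptions, not the ones evaluated in the NS-2 experiments.

```python
def choose_degree(queue_length: int, max_degree: int = 10) -> int:
    """Aggregate more readings per packet as buffered traffic grows (illustrative rule)."""
    return max(1, min(max_degree, queue_length // 2))

def aggregate(readings, lossless: bool):
    """Lossless: pack raw readings into one payload. Lossy: forward summary statistics only."""
    if lossless:
        return {"type": "lossless", "payload": list(readings)}
    return {
        "type": "lossy",
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

buffer = [23.1, 23.4, 22.9, 23.0, 23.6, 23.2]   # illustrative buffered temperature readings
degree = choose_degree(len(buffer))
packet = aggregate(buffer[:degree], lossless=False)
print(degree, packet)
```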