As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study explores a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data, framing the detection of atmospheric turbulence as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events, and comparisons with alternative machine learning techniques indicate that it outperforms every alternative evaluated. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to established EDR approaches and surpassing the machine learning baselines. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amid rising environmental and operational hazards.
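To make the approach concrete, the following is a minimal sketch of symbolic classification with genetic programming, using gplearn as one off-the-shelf GP library; the per-window QAR feature names and the synthetic labels are illustrative assumptions, not the paper's actual feature set or data.

```python
# Symbolic classification via genetic programming for a binary
# turbulence/no-turbulence anomaly label, sketched with gplearn.
import numpy as np
from gplearn.genetic import SymbolicClassifier

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-window QAR features: vertical-acceleration RMS,
# airspeed variance, pitch-rate range.
X = rng.normal(size=(n, 3))
# Toy ground truth: windows with large vertical-acceleration RMS are "turbulent".
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 1.0).astype(int)

clf = SymbolicClassifier(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,  # penalize bloated expressions
    random_state=0,
)
clf.fit(X, y)
print(clf._program)        # the evolved symbolic expression
print(clf.predict(X[:5]))  # anomaly labels for the first windows
```

A benefit of this family of methods, consistent with the abstract's framing, is that the evolved expression is itself inspectable, unlike most black-box classifiers.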
MORPAS is a special-purpose GIS (geographic information system) software system, based on the MAPGIS platform, whose aim is to prospect and evaluate mineral resources quantitatively by synthesizing geological, geophysical, geochemical and remote sensing data. It integrates geological database management, geological background and geological anomaly analysis, remote sensing image processing and comprehensive anomaly analysis, etc. It puts forward an integrative solution for the application of GIS in basic-level units and the construction of information engineering in the geological field. With the popularization of computer networks and the demand for data sharing, it is necessary to extend its data management functions so that all its data files can be accessed on a network server. This paper utilizes some MAPGIS functions for secondary development and the ADO (ActiveX Data Objects) technique to access multi-source geological data in SQL Server databases, thereby realizing remote access and consistent management in the MORPAS system.
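The paper's data-access layer uses ADO, a COM-era Windows API, from within MAPGIS. As a rough, hedged analogue of the same idea, the sketch below reads multi-source geological records from a networked SQL Server database in Python with pyodbc; the server, database, table, and column names are hypothetical.

```python
# Reading one of several per-source tables (geological, geophysical,
# geochemical, remote sensing) from SQL Server over the network.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=geo-server;DATABASE=morpas;UID=reader;PWD=secret"
)
cur = conn.cursor()
for row in cur.execute("SELECT sample_id, x, y, au_ppm FROM geochem_samples"):
    print(row.sample_id, row.x, row.y, row.au_ppm)
conn.close()
```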
Data organization requires high efficiency for the large amounts of data applied in a digital mine system. A new method of storing massive block-model data is proposed to meet the required database characteristics, including ACID compliance, concurrency support, data sharing, and efficient access. Each block model is organized by a linear octree and stored in LMDB (Lightning Memory-Mapped Database). Geological attributes can be queried at any point of 3D space by a comparison algorithm on location codes and a conversion algorithm from the address code of geometry space to the location code of storage. The performance and robustness of querying geological attributes over a 3D spatial region are greatly enhanced by the transformation from 3D to 2D and a 2D grid-scanning method to screen inner and outer points. Experimental results showed that this method can access massive block-model data while meeting the database characteristics. The method with LMDB is at least 3 times faster than that with etree, especially for reads; moreover, the larger the amount of data processed, the more efficient the method becomes.
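A minimal sketch of the storage idea follows: each block is addressed by a linear-octree location code (here a Morton/Z-order code, one common choice of location code) and stored as a key-value pair in LMDB. The attribute payload and coordinates are placeholders, not the paper's encoding.

```python
# Linear-octree location codes as LMDB keys for a block model.
import lmdb
import struct

def morton3(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of (x, y, z) into one location code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

env = lmdb.open("block_model.lmdb", map_size=2**30)  # 1 GiB memory map
with env.begin(write=True) as txn:
    key = struct.pack(">Q", morton3(12, 34, 5))
    txn.put(key, b"ore_grade=2.7")  # hypothetical geological attribute

with env.begin() as txn:  # concurrent read transactions are cheap in LMDB
    print(txn.get(struct.pack(">Q", morton3(12, 34, 5))))
```

Because LMDB keeps keys in sorted order, spatially nearby blocks whose location codes are close end up adjacent on disk, which is consistent with the read-speed advantage the experiments report.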
The Internet of Everything (IoE) based cloud computing is one of the most prominent areas in the digital big data world. This approach allows an efficient infrastructure to store and access big real-time data and smart IoE services from the cloud. IoE-based cloud computing services are located at remote locations without the control of the data owner. The data owners mostly depend on an untrusted Cloud Service Provider (CSP) and do not know the implemented security capabilities. The lack of knowledge about security capabilities and control over data raises several security issues. Deoxyribonucleic Acid (DNA) computing is a biological concept that can improve the security of IoE big data. The proposed IoE big data security scheme consists of the Station-to-Station Key Agreement Protocol (StS KAP) and Feistel cipher algorithms. This paper proposes a DNA-based cryptographic scheme and access control model (DNACDS) to solve IoE big data security and access issues. The experimental results illustrate that DNACDS performs better than other DNA-based security schemes, and the theoretical security analysis shows better resistance capabilities.
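As a toy illustration of the two ingredients named in the abstract, the sketch below combines a Feistel cipher with a DNA encoding (2 bits per nucleotide). It is an illustrative toy under those assumptions, not the paper's DNACDS scheme, and the round function is deliberately simple.

```python
# Toy Feistel cipher whose ciphertext is rendered as a DNA strand.
import hashlib

BASES = "ACGT"  # 00->A, 01->C, 10->G, 11->T

def to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def feistel_round(half: int, key: bytes) -> int:
    digest = hashlib.sha256(half.to_bytes(4, "big") + key).digest()
    return int.from_bytes(digest[:4], "big")

def feistel_encrypt(block: int, keys: list) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in keys:
        left, right = right, left ^ feistel_round(right, k)
    return (left << 32) | right

keys = [b"round-key-%d" % i for i in range(4)]  # stand-in round keys
ct = feistel_encrypt(0x0123456789ABCDEF, keys)
print(to_dna(ct.to_bytes(8, "big")))            # ciphertext as a DNA strand
```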
Big data places a strong demand on a network infrastructure capable of supporting data sharing and retrieval efficiently. Information-centric networking (ICN) is an emerging approach to satisfy this demand, where big data is cached ubiquitously in the network and retrieved using data names. However, existing authentication and authorization schemes rely mostly on centralized servers to provide certification and mediation services for data retrieval, causing considerable traffic overhead for secure distributed data sharing. To solve this problem, we employ identity-based cryptography (IBC) to propose a Distributed Authentication and Authorization Scheme (DAAS), in which an identity-based signature (IBS) is used to achieve distributed verification of the identities of publishers and users, and Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is used to enable distributed and fine-grained authorization. DAAS consists of three phases: initialization, secure data publication, and secure data retrieval, which seamlessly integrate authentication and authorization with the interest/data communication paradigm in ICN. In particular, we propose trustworthy registration and Network Operator and Authority Manifest (NOAM) dissemination to provide initial secure registration and enable efficient authentication for global data retrieval. Meanwhile, Attribute Manifest (AM) distribution coupled with automatic attribute updates is proposed to reduce the cost of attribute retrieval. We examine the performance of the proposed DAAS and show that it achieves a lower bandwidth cost than existing schemes.
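The authorization side of DAAS rests on CP-ABE: a ciphertext carries an attribute policy, and only keys whose attributes satisfy it can decrypt. Real CP-ABE enforces this with pairing-based cryptography; the sketch below only illustrates the access-structure logic in the clear, with hypothetical attribute names.

```python
# Evaluating a CP-ABE-style access structure over a user's attribute set.
def satisfies(policy, attrs: set) -> bool:
    """policy is a nested tuple ('and'|'or', child, ...) or an attribute string."""
    if isinstance(policy, str):
        return policy in attrs
    op, *children = policy
    results = (satisfies(c, attrs) for c in children)
    return all(results) if op == "and" else any(results)

policy = ("and", "researcher", ("or", "orgA", "orgB"))
print(satisfies(policy, {"researcher", "orgB"}))  # True  -> key could decrypt
print(satisfies(policy, {"student", "orgA"}))     # False -> access denied
```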
A new design solution of the data access layer for N-tier architecture is presented. It addresses problems such as low development efficiency and difficulties in porting, updating and reuse. The solution utilizes the reflection technology of .NET together with design patterns. A typical application of the solution demonstrates that the new data access layer performs better than the current N-tier architecture. More importantly, the application suggests that the new data access layer can be reused effectively.
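The core mechanism is instantiating DAO classes by name at runtime via reflection, so swapping implementations needs only a configuration change. The sketch below is a hedged Python analogue of that .NET reflection idea, self-contained by registering the class in the running module; the class and mapping names are hypothetical.

```python
# Reflection-style DAO creation: the concrete class is resolved from
# configuration at runtime, not referenced in code.
import importlib

class SqlUserDao:
    def get(self, user_id):  # stand-in query
        return {"id": user_id, "name": "demo"}

# The mapping would normally live in an external config file;
# "__main__" keeps this sketch self-contained.
DAO_CONFIG = {"user": ("__main__", "SqlUserDao")}

def create_dao(entity: str):
    module_name, class_name = DAO_CONFIG[entity]
    module = importlib.import_module(module_name)  # reflection-style lookup
    return getattr(module, class_name)()           # instantiate by name

print(create_dao("user").get(42))
```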
Different efforts have been undertaken to address security and privacy concerns in cloud data access; reliable security measures and verified data access remain major problems in the cloud environment. To overcome this problem, we propose efficient data access control using optimized homomorphic encryption (HE). Because users outsource their sensitive information to cloud providers, data security and access control is one of the most challenging ongoing cloud computing research areas. Existing solutions that rely on cryptographic technologies to address these security issues impose significant complexity on both data owners and cloud service providers. The experimental results show that the proposed method reduces key generation time by 7.6% compared with HE and by 14.14% compared with ECC, reduces encryption time by 11.34% compared with optimized HE and by 23.28% compared with ECC, and reduces decryption time by 13.18% and 24.07% compared with HE and ECC, respectively.
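The abstract does not specify the optimized HE construction, so as a stand-in the sketch below uses Paillier, a standard additively homomorphic scheme, via the `phe` package, to show the property that makes HE attractive for cloud data access: the server computes on ciphertexts without ever seeing plaintexts.

```python
# Additively homomorphic encryption with Paillier (the `phe` library).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

enc_a = public_key.encrypt(15)
enc_b = public_key.encrypt(27)
enc_sum = enc_a + enc_b  # computed by an untrusted server, on ciphertexts only
enc_scaled = enc_a * 3   # ciphertext times a plaintext scalar

print(private_key.decrypt(enc_sum))     # 42
print(private_key.decrypt(enc_scaled))  # 45
```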
In the cloud, data access control is a crucial way to ensure data security. Functional encryption (FE) is a novel cryptographic primitive supporting fine-grained access control over encrypted data in the cloud. In FE, every ciphertext is specified with an access policy, and a decryptor can access the data if and only if his secret key matches the access policy. However, FE cannot be directly applied to construct an access control scheme because of the exposure of the access policy, which may contain sensitive information. In this paper, we deal with the policy privacy issue and present a mechanism named multi-authority vector policy (MAVP), which provides hidden and expressive access policies for FE. First, each access policy is encoded as a matrix, and decryptors can only obtain the matched result from the matrix in MAVP. Then, we design a novel functional encryption scheme based on the multi-authority spatial policy (MAVP-FE), which can support privacy-preserving yet non-monotone access policies. Moreover, we greatly improve the efficiency of encryption and decryption in MAVP-FE by shifting the major computation of clients to an outsourced server. Finally, the security and performance analysis shows that our MAVP-FE is secure and efficient in practice.
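The hiding in MAVP comes from the underlying functional encryption; the numpy sketch below only illustrates the matching arithmetic in the clear, in the style of inner-product predicates (match iff the policy matrix annihilates the attribute vector). Both the policy matrix and the attribute vectors are hypothetical.

```python
# Matrix-encoded policy matching: the decryptor learns only match / no-match.
import numpy as np

# Hypothetical policy: each row is one linear constraint over attribute slots.
policy_matrix = np.array([[1, -1, 0],   # attr0 == attr1
                          [0,  0, 1]])  # attr2 == 0

def matches(attr_vector: np.ndarray) -> bool:
    return not np.any(policy_matrix @ attr_vector)

print(matches(np.array([5, 5, 0])))  # True  -> decryption would succeed
print(matches(np.array([5, 4, 0])))  # False -> ciphertext stays opaque
```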
Double data rate synchronous dynamic random access memory (DDR3) has become one of the most mainstream applications in current server and computer systems. In order to quickly set up a system-level signal integrity (SI) simulation flow for the DDR3 interface, two system-level SI simulation methodologies are introduced in this paper: board-level S-parameter extraction in the frequency domain and system-level simulation assumptions in the time domain. By comparing the flows of Speed2000 and PowerSI/HSPICE, PowerSI is chosen for printed circuit board (PCB) level S-parameter extraction, while a Tektronix oscilloscope (TDS7404) is used for the DDR3 waveform measurement. The lab measurement shows good agreement between simulation and measurement. The study shows that the combination of PowerSI and HSPICE is recommended for quick system-level DDR3 SI simulation.
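The board-level output of an extraction step like this is typically a Touchstone S-parameter file. As a hedged side note (scikit-rf is not part of the paper's flow), the sketch below shows how such a file can be inspected in Python; the bundled `ring_slot` example network stands in for a real DDR3 channel model such as a hypothetical "channel.s4p".

```python
# Inspecting extracted S-parameters with scikit-rf.
import skrf as rf
from skrf.data import ring_slot  # example Network shipped with scikit-rf

ntwk = ring_slot                 # real flow: rf.Network("channel.s4p")
print(ntwk)                      # frequency range and port count
ntwk.plot_s_db(m=1, n=0)         # insertion loss S21 in dB vs frequency
```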
Currently available compilation techniques target general computing and are not optimized for physical-layer computing in 5G micro base stations. In that setting, the foreseeable data sizes and small code sizes are application-specific opportunities for baseband algorithm optimization that deserve special attention; for example, a register allocation algorithm specific to this setting has not been studied so far. We focus on the compilation of baseband kernel subroutines in 5G micro base stations. For applications with known and fixed data sizes, we propose a compilation scheme for parallel data access in which operands can mostly be allocated to and kept in registers. Based on a small register group (48×32 b), the scheme targets baseband algorithms operating on 4×4 or smaller matrices, maximizing the utilization of the register file and eliminating extra register data exchanges. Meanwhile, once data is allocated to the register file, a VLIW (Very Long Instruction Word) machine is used to hide data-access time and minimize data-access cost, so that the total execution time is minimized. Experiments indicate that for algorithms with small data sizes, the cost of data access and extra addressing can be minimized.
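A quick check of the register-budget arithmetic behind the 4×4 limit, under the plausible reading that elements are 32-bit and a matrix multiply keeps both operands and the result resident:

```python
# Why 4x4 is the natural cutoff for a 48 x 32-bit register group.
regs_total = 48           # registers in the group, 32 bits each
elems_per_matrix = 4 * 4  # one 32-bit element per register
matrices_live = 3         # A, B and C in C = A x B

needed = matrices_live * elems_per_matrix
print(needed, "of", regs_total, "registers")  # 48 of 48 -> fits, zero spills
assert needed <= regs_total
```

Under this reading, two 4×4 operands plus the 4×4 result exactly fill the register file, which is why 4×4-and-smaller kernels can run with no register spill traffic.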
Cities are the most preferred dwelling places, offering better employment opportunities, educational hubs, medical services, recreational facilities, theme parks, shopping malls, etc. Cities are also the driving forces of any national economy. Unfortunately, cities nowadays produce circa 70% of pollutants even though they occupy only 2% of the Earth's surface, public utility services cannot meet the demands of unexpected growth, and pollution is degrading the quality of urban life. In this light, our research paper concentrates on the necessity of "smart cities", which are the basis for citizen-centric services. This article throws light on smart cities and their important roles, and presents smart-city concepts pictorially. Moreover, it explains the Barcelona Smart City, built using Internet of Things technologies, as a good example of the urban paradigm shift: Barcelona provides a high quality of life to its urban citizens through the Internet of Things.
Currently existing data access object (DAO) patterns have several limitations. First, the interface between the patterns and business objects is tightly coupled, which seriously affects the dynamic extensibility of software systems. Second, the patterns contain duplicated implementation code, which adds to the difficulty of system maintenance. To solve these problems, a new DAO pattern with stronger independence and dynamic extensibility is proposed in this paper, and an example is given to illustrate its use. The greatest advantages of the new DAO pattern are as follows. If any business object needs to be added to the system, no code in the DAO factory class has to be modified; all that needs to change is the mapping file. Furthermore, because a single DAO implementation class accomplishes all data access for business objects, if some SQL statements need to be modified, only the DAO implementation class is changed and no business objects are touched.
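The following sketch illustrates the pattern's central claim: one generic DAO implementation serves every business object, and adding a business object means editing only the mapping (here an in-code dict standing in for the mapping file). It uses sqlite3 so the example is self-contained; the entity and table names are hypothetical.

```python
# One generic DAO implementation driven entirely by an external mapping.
import sqlite3

MAPPING = {"User": "users", "Order": "orders"}  # business object -> table

class GenericDao:
    def __init__(self, conn):
        self.conn = conn

    def find_all(self, entity: str):
        table = MAPPING[entity]  # resolved via the mapping, not via code
        return self.conn.execute(f"SELECT * FROM {table}").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(GenericDao(conn).find_all("User"))  # [(1, 'alice')]
```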
At present, it is projected that about 4 zettabytes (10^21 bytes) of digital data are being generated per year by everything from underground physics experiments to retail transactions to security cameras to global positioning systems. In the U.S., major research programs are being funded to deal with big data in all five sectors of the economy (i.e., services, manufacturing, construction, agriculture and mining). Big Data is a term applied to data sets whose size is beyond the ability of available tools to undertake their acquisition, access, analytics and/or application in a reasonable amount of time. Whereas Tien (2003) forewarned about the data rich, information poor (DRIP) problems that have been pervasive since the advent of large-scale data collections or warehouses, the DRIP conundrum has been somewhat mitigated by the Big Data approach, which has unleashed information in a manner that can support informed - yet not necessarily defensible or valid - decisions or choices. Thus, by somewhat overcoming data quality issues with data quantity, data access restrictions with on-demand cloud computing, causative analysis with correlative data analytics, and model-driven with evidence-driven applications, appropriate actions can be undertaken with the obtained information. New acquisition, access, analytics and application technologies are being developed to further Big Data as it is employed to help resolve the 14 grand challenges (identified by the National Academy of Engineering in 2008), underpin the 10 breakthrough technologies (compiled by the Massachusetts Institute of Technology in 2013) and support the Third Industrial Revolution of mass customization.
Turning Earth observation (EO) data consistently and systematically into valuable global information layers is an ongoing challenge for the EO community. Recently, the term 'big Earth data' emerged to describe massive EO datasets that confront analysts and their traditional workflows with a range of challenges. We argue that the altered circumstances must be actively intercepted by an evolution of EO to revolutionise its application in various domains. The disruptive element is that analysts and end-users increasingly rely on Web-based workflows. In this contribution we study selected systems and portals, put them in the context of challenges and opportunities, and highlight selected shortcomings and possible future developments that we consider relevant for the imminent uptake of big Earth data.
Big Earth Data-Cube infrastructures are becoming more and more popular for providing Analysis Ready Data, especially for managing satellite time series. These infrastructures build on the concept of a multidimensional data model (the data hypercube) and are complex systems engaging different disciplines and expertise. For this reason, their interoperability capacity has become a challenge in the Global Change and Earth System science domains. To address this challenge, there is a pressing need in the community to reach a widely agreed definition of Data-Cube infrastructures and their key features. In this respect, a discussion has recently started about defining the possible facets characterizing a Data-Cube in the Earth Observation domain. This manuscript contributes to that debate by introducing a view-based model of Earth Data-Cube systems to design their infrastructural architecture and content schemas, with the final goal of enabling and facilitating interoperability. It introduces six modeling views, each described in terms of its main concerns, principal stakeholders, and possible patterns to be used. The manuscript considers the Business Intelligence experience with Data Warehouses and multidimensional "cubes" along with the more recent and analogous developments in the Earth Observation domain, and puts forward a set of interoperability recommendations based on the modeling views.
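For readers unfamiliar with the hypercube model, the following is a minimal concrete instance: a small time/lat/lon cube in xarray, standing in for a satellite time series. The variable name, coordinates and values are synthetic.

```python
# A tiny Earth-observation data hypercube and one analysis-ready query.
import numpy as np
import xarray as xr

cube = xr.DataArray(
    np.random.rand(3, 4, 5),  # time x lat x lon
    dims=("time", "lat", "lon"),
    coords={
        "time": np.array(["2020-01-01", "2020-02-01", "2020-03-01"],
                         dtype="datetime64[ns]"),
        "lat": np.linspace(45.0, 46.5, 4),
        "lon": np.linspace(10.0, 12.0, 5),
    },
    name="ndvi",  # hypothetical variable
)
print(cube.sel(time="2020-02-01").mean().item())  # spatial mean of one slice
```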
Flight data of a twin-jet transport aircraft in revenue flight are analyzed for potential safety problems. Data from the quick access recorder (QAR) are first filtered through a kinematic compatibility analysis. The filtered data are then organized into longitudinal- and lateral-directional aerodynamic model data with dynamic ground effect, which requires radio height and sink rate in the models. The model data are then refined into numerical models through a fuzzy logic algorithm without prior data smoothing. These numerical models describe nonlinear and unsteady aerodynamics and are used in nonlinear flight dynamics simulation. For the jet transport under study, the effect of crosswind is found to be significant enough to excite the Dutch roll motion. Through a linearized flight dynamics analysis at every instant of time, the Dutch roll motion is found to be a nonlinear oscillation without clear damping of the amplitude; in this analysis, all stability derivatives vary with time and hence are nonlinear functions of the state variables. Since the Dutch roll motion is not damped despite a full-time yaw damper being engaged, it is concluded that the design data for the yaw damper are not sufficiently realistic and that the contribution of the time derivative of sideslip angle to damping should be considered. The nonlinear flight simulation further estimates the vertical wind acting on the aircraft to be mostly updraft, varying along the flight path before touchdown. This varying updraft appears to make the descent rate more difficult to control, resulting in a higher g-load at touchdown.
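The paper's Dutch-roll assessment comes from linearizing the identified fuzzy-logic models at each instant. As a simplified stand-alone illustration, the sketch below evaluates the classic two-state (sideslip/yaw-rate) Dutch-roll approximation for a single instant; the derivative values are hypothetical, not the aircraft's identified ones.

```python
# Two-state Dutch-roll approximation: eigenvalues of the
# sideslip/yaw-rate subsystem give frequency and damping.
import numpy as np

U0 = 70.0   # airspeed, m/s (hypothetical)
Yb = -9.0   # side-force derivative Y_beta, m/s^2 per rad (hypothetical)
Nb = 4.5    # weathercock stability N_beta, 1/s^2 (hypothetical)
Nr = -0.02  # yaw-damping derivative N_r, 1/s (hypothetical, weak)

A = np.array([[Yb / U0, -1.0],
              [Nb,       Nr]])
lam = np.linalg.eigvals(A)
wn = abs(lam[0])
zeta = -lam[0].real / wn
print(f"eigenvalues {lam}, natural freq {wn:.2f} rad/s, damping {zeta:.3f}")
# A damping ratio near zero corresponds to the sustained Dutch-roll
# oscillation observed in the flight data despite the engaged yaw damper.
```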
To address private data management problems and realize privacy-preserving data sharing, a blockchain-based transaction system named Ecare, featuring information transparency, fairness and scalability, is proposed. The proposed system formulates multiple private data access control strategies and realizes data trading and sharing through on-chain transactions, which makes transaction records transparent and immutable. In our system, private data are encrypted, and the role-based account model ensures that access to the data requires the owner's authorization. Moreover, a new consensus protocol named Proof of Transactions (PoT), which we propose, is used to improve consensus efficiency. The value of Ecare is not only that it aggregates telemedicine, data transactions, and other features, but also that it translates these actions into transaction events stored in the blockchain, making them transparent and immutable to all participants. The proposed system can be extended to more general big data privacy protection and data transaction scenarios.
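A toy sketch of the system's core data flow follows: private data is stored encrypted, the chain records immutable transaction events, and reading the payload requires the owner's key (standing in for the role-based authorization). This illustrates the data model only, under those assumptions; the PoT consensus protocol is not modeled.

```python
# Hash-linked transaction events carrying owner-encrypted private data.
import hashlib, json, time
from cryptography.fernet import Fernet

owner_key = Fernet.generate_key()  # held by the data owner
payload = Fernet(owner_key).encrypt(b"blood pressure: 120/80")

def make_block(prev_hash: str, tx: dict) -> dict:
    body = {"prev": prev_hash, "time": time.time(), "tx": tx}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

genesis = make_block("0" * 64, {"type": "register", "role": "patient"})
block1 = make_block(genesis["hash"],
                    {"type": "data_tx", "ciphertext": payload.decode()})

# Only an authorized party holding owner_key recovers the plaintext;
# everyone can verify the hash chain.
print(Fernet(owner_key).decrypt(block1["tx"]["ciphertext"].encode()))
```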
This article presents and analyses the modular architecture and capabilities of CODE-DE (Copernicus Data and Exploitation Platform–Deutschland, www.code-de.org), the integrated German operational environment for accessing and processing Copernicus data and products, as well as the methodology to establish and operate the system. CODE-DE has been online since March 2017 with access to Sentinel-1 and Sentinel-2 data, followed shortly by Sentinel-3 data, and since March 2019 with access to Sentinel-5P data. As of March 2019, these products were available to 1,682 registered users, who downloaded 654,895 products during this period; a continuously updated global catalogue features a data volume of 814 TByte, based on a rolling-archive concept supported by a reload mechanism from a long-term archive. Since November 2017, the element for big data processing has been operational, on which registered users can process and analyse data themselves, specifically assisted by methods for value-added product generation. Utilizing 195,467 core hours and 696,406 memory hours, 982,948 products for different applications were generated fully automatically in the cloud environment and made available as of March 2019. Special features include an improved visualization of available Sentinel-2 products, which are presented within the catalogue client at full 10 m resolution.
The volume of publicly available geospatial data on the web is rapidly increasing due to advances in server-based technologies and the ease with which data can now be created. However, challenges remain in connecting individuals searching for geospatial data with the servers and websites where such data exist. The objective of this paper is to present a publicly available Geospatial Search Engine (GSE) that employs a web crawler built on top of the Google search engine in order to search the web for geospatial data. The crawler seeding mechanism combines search terms entered by users with predefined keywords that identify geospatial data services. A procedure runs daily to update map server layers and metadata, and to eliminate servers that go offline. The GSE supports Web Map Services, ArcGIS services, and websites that offer geospatial data for download. We applied the GSE to search for all available geospatial services in these formats and provide search results, including the spatial distribution of all obtained services. While enhancements to our GSE and to web crawler technology in general lie ahead, our work represents an important step toward realizing the potential of a publicly accessible tool for discovering the global availability of geospatial data.
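Once the crawler has discovered a candidate Web Map Service endpoint, a search engine like the GSE still has to read the service's capabilities to index its layers. The hedged sketch below shows that harvesting step with OWSLib; the endpoint URL is a placeholder for a discovered server, not a real service.

```python
# Harvesting layer metadata from a discovered WMS endpoint.
from owslib.wms import WebMapService

wms = WebMapService("https://example.com/wms", version="1.3.0")
print(wms.identification.title)           # server-level metadata
for name, layer in wms.contents.items():  # layer metadata to index
    print(name, layer.title, layer.boundingBoxWGS84)
```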
Modern information systems require the orchestration of ontologies, conceptual data modeling techniques, and efficient data management so as to provide a means for better-informed decision-making and to keep up with new requirements in organizational needs. A major question in delivering such systems is which components to design and put together to make up the required "knowledge to data" pipeline, as each component and process has trade-offs. In this paper, we introduce a new knowledge-to-data architecture, KnowID. It pulls together recently proposed components, to which we add novel transformation rules between Enhanced Entity-Relationship (EER) and the Abstract Relational Model to complete the pipeline. KnowID's main distinctive architectural features, compared to other ontology-based data access approaches, are that runtime use can avail of the closed-world assumption commonly used in information systems and of full SQL augmented with path queries.
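In plain SQL engines, the closest standard construct to a path query is a recursive common table expression. The self-contained sqlite3 sketch below walks a subsumption path over a toy hierarchy; the schema is a hypothetical illustration, not KnowID's generated one.

```python
# A path query (transitive closure of an is-a hierarchy) via WITH RECURSIVE.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE isa (child TEXT, parent TEXT);
    INSERT INTO isa VALUES ('Manager', 'Employee'),
                           ('Employee', 'Person');
""")
rows = conn.execute("""
    WITH RECURSIVE up(node) AS (
        SELECT 'Manager'
        UNION
        SELECT isa.parent FROM isa JOIN up ON isa.child = up.node
    )
    SELECT node FROM up
""").fetchall()
print(rows)  # [('Manager',), ('Employee',), ('Person',)]
```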