Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the building information modeling standard (IFC), City Geography Markup Language (CityGML), IndoorGML, and other models are compared and analyzed. CityGML and IndoorGML face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. It is proposed to use the semantic information of model objects to effectively partition and organize indoor and outdoor 3D model data and to construct an indoor and outdoor data organization mechanism of "chunk-layer-subobject-entrance-area-detail object." The method is verified by proposing a 3D data organization method for indoor and outdoor space and building a 3D visualization system on top of it.
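The "chunk-layer-subobject-entrance-area-detail object" organization can be pictured as a tree of typed scene nodes. The sketch below is purely illustrative: the class, kind names, and scene contents are assumptions, not the paper's actual schema.

```python
# Minimal sketch of a hierarchical indoor/outdoor scene organization.
# All names ("chunk", "layer", ...) follow the mechanism quoted above;
# the node class and example scene are invented for illustration.

class SceneNode:
    """One node in the indoor/outdoor organization tree."""
    def __init__(self, kind, name):
        self.kind = kind          # e.g. "chunk", "layer", "subobject", ...
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, kind):
        """Collect all descendant nodes of a given kind."""
        found = []
        for c in self.children:
            if c.kind == kind:
                found.append(c)
            found.extend(c.find(kind))
        return found

# One terrain chunk containing a building layer, which holds a room
# sub-object with an entrance and a detail object.
chunk = SceneNode("chunk", "block_A")
layer = chunk.add(SceneNode("layer", "building_1"))
room = layer.add(SceneNode("subobject", "room_101"))
room.add(SceneNode("entrance", "door_north"))
room.add(SceneNode("detail", "desk"))

print([n.name for n in chunk.find("entrance")])  # → ['door_north']
```

Partitioning by semantic kind like this lets a viewer load outdoor chunks coarsely and descend into entrances and detail objects only when the camera moves indoors.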
Availability of high-quality digital elevation models (DEMs) is becoming more and more important in spatial studies. Standard methods for DEM creation use only intentionally acquired data sources. Two approaches that employ various types of data sets for DEM production are proposed: (1) a weighted-sum method with morphological enhancement that conflates any additional data sources into a principal DEM, and (2) DEM updating methods that model absolute and relative temporal changes, considering landslides, earthquakes, quarries, water erosion, building and highway construction, etc. Spatial modeling of environmental variables was applied in both approaches for (a) quality control of data sources by region, (b) pre-processing of data sources, and (c) processing of the final DEM. The variables are rate of karst, morphologic roughness (modeled from slope, profile curvature, and elevation), characteristic features, rate of forestation, hydrological network, and rate of urbanization. Only the variables shown to be significant were used to generate homogeneous regions in steps (a)-(c). The production process uses these regions to define a high-quality conflation of data sources into the final DEM. The methodology has been confirmed by case studies. The result is an overall high-quality DEM with various well-known quality parameters.
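Approach (1), the weighted sum of data sources, reduces at its core to a per-cell weighted average of co-registered grids. The sketch below shows only that core; the grids and weights are invented, and the real method additionally applies morphological enhancement and region-dependent weights.

```python
# Minimal sketch of DEM conflation by per-cell weighted sum.
# The elevation values and the weights are illustrative assumptions.

def conflate_dems(dems, weights):
    """Weighted per-cell average of equally sized DEM grids."""
    total_w = sum(weights)
    rows, cols = len(dems[0]), len(dems[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for dem, w in zip(dems, weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * dem[r][c]
    return [[v / total_w for v in row] for row in out]

principal = [[100.0, 102.0], [101.0, 103.0]]   # principal DEM (m)
extra     = [[100.4, 101.8], [101.2, 102.6]]   # additional data source
fused = conflate_dems([principal, extra], weights=[0.7, 0.3])
print(round(fused[0][0], 2))  # → 100.12
```

In the paper's pipeline the weights would vary per homogeneous region, so a cell in a karst or forested region trusts each source differently.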
Integrating marketing and distribution businesses is crucial for improving the coordination of equipment and the efficient management of multi-energy systems. New energy sources are continuously being connected to distribution grids; this, however, increases the complexity of the information structure of marketing and distribution businesses. The existing unified data model and the coordinated application of marketing and distribution suffer from various drawbacks. As a solution, this paper presents a data model of "one graph of marketing and distribution" and a framework for graph computing, developed by analyzing the current trends of business and data in the marketing and distribution fields and using graph data theory. Specifically, this work aims to determine the correlation between distribution transformers and marketing users, which is crucial for elucidating the connection between marketing and distribution. To this end, a novel identification algorithm is proposed based on the collected marketing and distribution data. A forecasting application is then developed based on the proposed algorithm to realize the coordinated prediction and consumption of distributed photovoltaic generation and distribution loads. Furthermore, an operation and maintenance (O&M) knowledge graph reasoning application is developed to improve the intelligent O&M capability of marketing and distribution equipment.
Objective: To correlate climatic and environmental factors such as land surface temperature, rainfall, humidity, and normalized difference vegetation index (NDVI) with the incidence of dengue, and to develop prediction models for the Philippines using remote-sensing data. Methods: Time-series analysis was performed using dengue cases in four regions of the Philippines and monthly climatic variables from 2008-2015, extracted from Global Satellite Mapping of Precipitation for rainfall and from MODIS for land surface temperature and NDVI. A consistent dataset over the study period was used in autoregressive integrated moving average (ARIMA) models to predict dengue incidence in the four regions. Results: The best-fitting models were selected to characterize the relationship between dengue incidence and climate variables. The predicted dengue cases for January to December 2015 fitted well with the actual cases over the same timeframe and showed a significantly good linear regression, with a squared correlation of 0.8695 for the four regions combined. Conclusions: Climatic and environmental variables are positively associated with dengue incidence and serve well as predictors in ARIMA models. This finding could be a meaningful tool for developing an early warning model based on weather forecasts to deliver effective public health prevention and mitigation programs.
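The reported fit statistic is the square of the linear correlation between predicted and actual monthly case counts. A minimal sketch of that statistic follows; the case numbers are illustrative, not the study's data, and fitting the ARIMA model itself (e.g., with statsmodels) is outside the scope of this sketch.

```python
# Squared Pearson correlation between predicted and observed series,
# i.e. the "square of correlation" reported in the abstract.
# The two series below are invented for illustration.
from math import sqrt

def r_squared(actual, predicted):
    n = len(actual)
    ma = sum(actual) / n
    mp = sum(predicted) / n
    cov = sum((a - ma) * (p - mp) for a, p in zip(actual, predicted))
    var_a = sum((a - ma) ** 2 for a in actual)
    var_p = sum((p - mp) ** 2 for p in predicted)
    return (cov / sqrt(var_a * var_p)) ** 2

actual    = [120, 95, 80, 150, 210, 400]   # observed monthly cases
predicted = [110, 100, 90, 160, 200, 380]  # model output
r2 = r_squared(actual, predicted)
print(round(r2, 3))
```

A value near 1 means the model tracks the epidemic curve closely; the study's combined value of 0.8695 indicates a good but not perfect fit.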
With market competition becoming fiercer, enterprises must update their products by constantly assimilating new big data knowledge and private knowledge to maintain their market share at different points in time in the big data environment. Typically, successive knowledge transfers influence one another if the interval between them is not too long, so the problem of continuous knowledge transfer in the big data environment needs to be studied. Building on research on one-time knowledge transfer, a model of continuous knowledge transfer is presented that can account for the interaction between transfers and determine the optimal knowledge transfer time at each point in the big data environment. Simulation experiments were performed by adjusting several parameters. The experimental results verified the model's validity and supported conclusions regarding its practical application value, and they can inform more effective decisions for enterprises that must carry out continuous knowledge transfer in the big data environment.
With the development of meteorological services, there are more and more types of real-time observation data, and the timeliness requirements keep rising. Existing methods for monitoring meteorological observation data transmission can no longer meet these needs. This paper proposes a new monitoring model, an "integrated monitoring model" for provincial meteorological observation data transmission, which covers whole-network monitoring of the transmission process. Based on this model, an integrated monitoring system for meteorological observation data transmission in Guangdong Province was developed. The system uses Java as the programming language; integrates the J2EE, Hibernate, Quartz, Snmp4j, and Slf4j frameworks; uses an Oracle database as the storage carrier; and follows the MVC pattern and agile development concepts. Its development relies on four key technologies: the Simple Network Management Protocol, network connectivity detection, remote host management, and thread pools. The integrated monitoring system has been put into operational use and, as a highlight of Guangdong's meteorological modernization, has played an active role in many major meteorological services.
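Two of the listed key technologies, connectivity detection and thread pools, combine naturally: probe many transmission nodes in parallel and flag the unreachable ones. The sketch below uses a stand-in probe function (the real system would issue an SNMP get or a TCP connect) and invented host names; the described system is in Java, so this Python version only illustrates the pattern.

```python
# Minimal sketch of thread-pooled connectivity detection.
# probe() is a stand-in for a real reachability check; host names
# are illustrative (".down" marks a host we pretend is unreachable).
from concurrent.futures import ThreadPoolExecutor

def probe(host):
    """Stand-in for an SNMP get / TCP connect reachability check."""
    return host, not host.endswith(".down")

hosts = ["aws-01", "aws-02", "aws-03.down"]
with ThreadPoolExecutor(max_workers=8) as pool:
    status = dict(pool.map(probe, hosts))

unreachable = [h for h, ok in status.items() if not ok]
print(unreachable)  # → ['aws-03.down']
```

Pooling the probes keeps a slow or dead node from delaying checks on every other node, which is what makes whole-network monitoring timely.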
To address the difficulty of integrating data with different models in spatial information integration, the characteristics of raster structures, vector structures, and mixed models were analyzed, and a hierarchical vector-raster integrative full-feature model was put forward by combining the advantages of the vector and raster models and using an object-oriented method. The data structures of the four basic features, i.e., point, line, surface, and solid, are described, an application is analyzed and described, and the characteristics of the model are discussed. In this model, all objects in the real world are divided into and described as features in a hierarchy, and all data are organized in vector form. The model can describe data based on feature, field, network, and other models, avoiding the inability to integrate data based on different models and to perform spatial analysis on them in spatial information integration.
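The four basic features form a natural containment hierarchy: solids are bounded by surfaces, surfaces by lines, lines by points. A minimal object-oriented sketch follows; the field names are illustrative assumptions, not the paper's actual data structures.

```python
# Minimal sketch of the point/line/surface/solid feature hierarchy,
# organized purely in vector form. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float = 0.0

@dataclass
class Line:
    vertices: list      # ordered Points

@dataclass
class Surface:
    rings: list         # boundary Lines (first ring is the outer one)

@dataclass
class Solid:
    faces: list         # bounding Surfaces

# A surface described down to its points: a closed square ring.
p = [Point(0, 0), Point(1, 0), Point(1, 1), Point(0, 1)]
face = Surface(rings=[Line(vertices=p + [p[0]])])
base = Solid(faces=[face])
print(len(base.faces[0].rings[0].vertices))  # → 5
```

Because every feature ultimately resolves to vector points, raster-derived data can be carried in the same hierarchy once its cells are wrapped as features.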
To construct mediators for data integration systems that integrate structured and semi-structured data, and to facilitate the reformulation and decomposition of queries, the presented system uses the XML processing language (XPL) in the mediator. With XPL, it is easy to construct mediators for XML-based data integration, and it accelerates the mediator's work.
In this paper, a multilevel secure relation hierarchical data model for multilevel secure databases is extended from the relation hierarchical data model for single-level environments. Based on the model, an upper-lower-layer relational integrity is presented after the covert channels caused by database integrity are analyzed and eliminated. Two SQL statements are extended to handle polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation hierarchical data model is capable of integratively storing and manipulating complex objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers, and character strings) in a multilevel secure database.
Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to 3-D position and time. Recently, greater emphasis has been placed on GIS (geographic information systems) to handle marine information. GIS has shown great success in terrestrial applications over the last decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems and their data models are designed for land applications; they cannot cope well with the nature of the marine environment and marine information, which poses a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data; it is based on the nature of marine data and overcomes the shortcomings of current spatio-temporal models in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
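The multi-level aggregations the abstract mentions can be sketched as rolling a fine raster up into coarser cells, the basic operation behind a raster hierarchy. The grid values below are invented, and the real RSHDM additionally carries the time dimension.

```python
# Minimal sketch of one hierarchy step: sum factor×factor blocks of a
# raster (e.g., a fishery-catch grid) into one coarser raster level.
# The catch values are illustrative.

def aggregate(grid, factor):
    """Sum factor×factor blocks of a raster into one coarser raster."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, factor):
        row = []
        for c in range(0, cols, factor):
            row.append(sum(grid[r + i][c + j]
                           for i in range(factor)
                           for j in range(factor)))
        out.append(row)
    return out

catch = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
coarse = aggregate(catch, 2)
print(coarse)  # → [[14, 22], [46, 54]]
```

Applying the same step repeatedly yields the level-by-level aggregates that managers query without touching the raw cells.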
Facing the development of future 5G, emerging technologies such as the Internet of Things, big data, cloud computing, and artificial intelligence are driving explosive growth in data traffic. With radical changes in communication theory and implementation technologies, wireless communications and wireless networks have entered a new era. Among these developments, wireless big data (WBD) has tremendous value, and artificial intelligence (AI) opens up remarkable possibilities. However, for the groups developing big data and applying artificial intelligence, the lack of a sound theoretical foundation and mathematical methods is a real challenge that needs to be solved. Starting from the basic problem of wireless communication and the interrelationship of demand, environment, and capability, this paper investigates the concept and data model of WBD, wireless data mining, wireless knowledge and wireless knowledge learning (WKL), and typical practical examples, to facilitate and open up more opportunities for WBD research and development. Such research is beneficial for creating new theoretical foundations and emerging technologies for future wireless communications.
Compaction correction is a key part of paleo-geomorphic recovery methods, yet the influence of lithology on porosity evolution is not usually taken into account. Present methods merely classify the lithologies as sandstone and mudstone and undertake separate porosity-depth compaction modeling. However, using just two lithologies is an oversimplification that cannot represent the compaction history, and the precision of such compaction recovery is inadequate. To improve the precision, a depth compaction model is proposed that involves both porosity and clay content. A clastic lithological compaction unit classification method, based on clay content, is designed to identify lithological boundaries and establish sets of compaction units. On the basis of this classification, two compaction recovery methods that integrate well and seismic data are employed to extrapolate well-based compaction information outward along seismic lines and recover the paleo-topography of the clastic strata in the region. The examples presented here show that a better understanding of paleo-geomorphology can be gained by applying the proposed compaction recovery technology.
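The idea of letting clay content modulate porosity-depth behavior can be sketched with an Athy-type exponential. To be clear, the exponential form and every coefficient below are illustrative assumptions for this sketch, not the paper's calibrated model.

```python
# Minimal sketch of a clay-content-dependent porosity-depth relation.
# Athy-type form phi(z) = phi0 * exp(-c*z); phi0 and c are made to
# depend on clay fraction. All coefficients are illustrative.
from math import exp

def porosity(depth_m, clay_fraction):
    """Porosity at depth; clay-rich units compact faster in this sketch."""
    phi0 = 0.45 + 0.15 * clay_fraction       # surface porosity
    c = 0.00035 + 0.00025 * clay_fraction    # compaction coefficient, 1/m
    return phi0 * exp(-c * depth_m)

sandy = porosity(2000, clay_fraction=0.1)    # clean sandstone-like unit
muddy = porosity(2000, clay_fraction=0.9)    # mudstone-like unit
print(round(sandy, 3), round(muddy, 3))
```

With clay fraction as a continuous input, intermediate lithologies get their own compaction curve instead of being forced into the sandstone or mudstone end members, which is the abstract's central point.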
In this paper, high-level knowledge of financial data modeled by ordinary differential equations (ODEs) is discovered in dynamic data using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in the data by computer can be used to predict financial trends.
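Evolutionary ODE modeling needs a way to score each candidate model against the observed series; a common choice is to integrate the candidate numerically and sum squared errors. The sketch below shows only that evaluation step, with an invented candidate (exponential growth) and an invented index-like series; the evolutionary search and parallelism of APHEMA are out of scope.

```python
# Minimal sketch of scoring a candidate ODE model y' = f(t, y) against
# observed data: forward-Euler integration plus squared-error fitness.
# The candidate model and "observed" series are illustrative.

def euler(f, y0, t0, t1, steps):
    """Forward-Euler integration of y' = f(t, y)."""
    y, t = y0, t0
    h = (t1 - t0) / steps
    path = [y]
    for _ in range(steps):
        y = y + h * f(t, y)
        t += h
        path.append(y)
    return path

def fitness(f, observed, y0):
    """Sum of squared errors between model path and observations."""
    path = euler(f, y0, 0.0, len(observed) - 1.0, len(observed) - 1)
    return sum((m - o) ** 2 for m, o in zip(path, observed))

observed = [100.0, 105.0, 110.3, 115.8, 121.6]   # index-like series
candidate = lambda t, y: 0.05 * y                # candidate ODE: y' = 0.05*y
score = fitness(candidate, observed, y0=100.0)
print(round(score, 4))
```

An evolutionary loop would mutate and recombine expressions for `f`, keeping candidates with lower fitness, so the scorer above is the inner loop of the whole method.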
This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation hierarchical data model is extended to a multilevel relation hierarchical data model, based on which the concept of upper-lower-layer relational integrity is presented after the covert channels caused by database integrity are analyzed and eliminated. Two SQL statements are extended to handle polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation hierarchical data model and is capable of integratively storing and manipulating multilevel complex objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integers, real numbers, and character strings).
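Tuple-level labeling and polyinstantiation can be sketched in a few lines: each tuple carries a label, reads follow the "no read up" rule, and inserting the same key at a different level creates a second, polyinstantiated tuple instead of overwriting. The level names and table layout below are illustrative, not the paper's extended SQL.

```python
# Minimal sketch of tuple-level mandatory access control with
# polyinstantiation. Labels, keys, and values are illustrative.

LEVELS = {"U": 0, "C": 1, "S": 2}   # unclassified < confidential < secret

class MLSTable:
    def __init__(self):
        self.rows = []              # (key, label, value)

    def insert(self, key, label, value):
        # Same key at the SAME label replaces; at a DIFFERENT label it
        # coexists as a polyinstantiated tuple.
        self.rows = [r for r in self.rows
                     if not (r[0] == key and r[1] == label)]
        self.rows.append((key, label, value))

    def select(self, subject_label):
        # "No read up": a subject sees only tuples at or below its level.
        lim = LEVELS[subject_label]
        return [r for r in self.rows if LEVELS[r[1]] <= lim]

t = MLSTable()
t.insert("flight42", "U", "cargo run")
t.insert("flight42", "S", "recon mission")   # polyinstantiation
print(t.select("U"))   # → [('flight42', 'U', 'cargo run')]
print(len(t.select("S")))  # → 2
```

The unclassified subject seeing a plausible cover tuple, rather than an error, is exactly what closes the covert channel that a uniqueness violation would otherwise open.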
Based on dynamic capability theory and legitimacy theory, a theoretical model is constructed to examine how big data capability, through the mediation of knowledge dynamic capability, drives business model innovation under the moderating effect of innovation legitimacy. The analysis is conducted using regression analysis and fuzzy-set qualitative comparative analysis (fsQCA) on survey data from 302 enterprises that have already implemented big data application practices. The study reaches four conclusions. (1) Big data capability has a significant positive impact on business model innovation. (2) Knowledge dynamic capability partially mediates the relationship between big data capability and business model innovation. (3) Innovation legitimacy positively influences business model innovation and positively moderates the relationship between big data capability and business model innovation. (4) Further qualitative comparative analysis identifies two causal paths that influence business model innovation.
Currently, indoor environmental quality is described or evaluated mainly using subjective or objective data. However, research has increasingly demonstrated that objective and subjective data each have weaknesses in characterizing indoor environmental quality, and that they can compensate for each other's relative weaknesses. Hence, this study develops an integration model that allows indoor subjective and objective data to be combined, based on the structural equation modeling approach and using survey data on residential indoor environments in Northeast China. The results indicated that the integration model fitted the survey data well, and its validity was confirmed. Moreover, in contrast to the subjective data (R² = 0.363) and objective data (R² = 0.239), the integrated data (R² = 0.553) improved the explanatory power for satisfaction with the overall indoor environment. Furthermore, the integration model showed that indoor subjective data carried more weight in the integrated data than the corresponding objective data. The association between the thermal environment and indoor air quality (0.43 or 0.47) was the strongest among the interactions of the thermal, air quality, acoustic, and lighting environments. Consequently, the main contribution of this paper is a comprehensive model for integrating indoor environmental subjective and objective data, improving the ability to describe and assess indoor environmental quality.
This paper presents a procedure for purifying training data sets (i.e., past occurrences of slope failures) for the inverse estimation of unobserved trigger factors of "different types of simultaneous slope failures". Because pixel-by-pixel observation of trigger factors is difficult, the authors had previously proposed an inverse analysis algorithm for trigger factors based on SEM (structural equation modeling): through a measurement equation, the trigger factor is inversely estimated, and a TFI (trigger factor influence) map can be produced. As a subsequent task, a purification procedure for the training data set should be constructed to improve the accuracy of the TFI map, which depends on how representative the given training data sets of the different failure types are. The proposed procedure resamples the pixels matched between the original groups of past slope failures (i.e., surface slope failures, deep-seated slope failures, and landslides) and three groups obtained by K-means clustering of all pixels corresponding to those slope failures. For all three types of slope failures, improved success rates with the resampled training data sets were confirmed. As a final outcome, the differences between the TFI maps produced using the original and resampled training data sets are delineated on a DIF map (difference map), which is useful for analyzing trigger factor influence in terms of "risky-side" and "safe-side" assessment sub-areas with respect to "different types of simultaneous slope failures".
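The resampling step can be sketched concretely: cluster all failure pixels with K-means and keep only the pixels whose cluster agrees with their original failure-type group. The sketch below uses a single invented feature per pixel and k = 3; the real procedure works on multi-attribute pixels from the SEM analysis.

```python
# Minimal sketch of training-set purification via K-means matching.
# One feature per pixel, k = 3; all feature values are illustrative.

def kmeans_1d(xs, k, iters=20):
    """Plain 1-D K-means; initial centers spread across the sorted data."""
    centers = sorted(xs)[:: max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

pixels = {                                   # original failure-type groups
    "surface":   [0.10, 0.15, 0.20, 0.80],   # 0.80 looks like a mismatch
    "deep":      [0.50, 0.55, 0.60],
    "landslide": [0.90, 0.95, 1.00],
}
centers = kmeans_1d([x for xs in pixels.values() for x in xs], k=3)

def cluster_of(x):
    return min(range(3), key=lambda j: abs(x - centers[j]))

resampled = {}
for name, xs in pixels.items():
    labels = [cluster_of(x) for x in xs]
    majority = max(set(labels), key=labels.count)
    # keep only pixels assigned to the group's majority cluster
    resampled[name] = [x for x in xs if cluster_of(x) == majority]
print(resampled["surface"])  # → [0.1, 0.15, 0.2]
```

The outlier pixel 0.80, originally labeled a surface failure but clustered with the landslide-like group, is dropped, which is the "matched pixels" idea behind the improved success rates.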
Cyber Threat Intelligence (CTI) is a valuable resource for cybersecurity defense, but it also poses challenges due to its multi-source and heterogeneous nature: security personnel may be unable to use CTI effectively to understand the condition and trend of a cyberattack and respond promptly. To address these challenges, we propose a novel approach consisting of three steps. First, we construct the attack and defense analysis of cybersecurity ontology (ADACO) model by integrating multiple cybersecurity databases. Second, we develop the threat evolution prediction algorithm (TEPA), which can automatically detect threats at device nodes, correlate and map multi-source threat information, and dynamically infer the threat evolution process. TEPA leverages knowledge graphs to represent comprehensive threat scenarios and achieves better performance in simulated experiments by combining structural and textual features of entities. Third, we design the intelligent defense decision algorithm (IDDA), which can recommend the most suitable defense techniques to security personnel; IDDA outperforms the baseline methods in comparative experiments.
The vision of a Digital Earth calls for more dynamic information systems, new sources of information, and stronger capabilities for their integration. Sensor networks have been identified as a major information source for the Digital Earth, while Semantic Web technologies have been proposed to facilitate their integration. So far, sensor data are stored and published using the Observations & Measurements standard of the Open Geospatial Consortium (OGC) as the data model. With the advent of Volunteered Geographic Information and the Semantic Sensor Web, work on an ontological model gained importance within Sensor Web Enablement (SWE). In contrast to data models, an ontological approach abstracts from implementation details by modeling the physical world from the perspective of a particular domain, and ontologies restrict the interpretation of vocabularies toward their intended meaning. The ongoing paradigm shift to Linked Sensor Data complements this attempt. Two questions have to be addressed: (1) how to refer to changing and frequently updated data sets using Uniform Resource Identifiers, and (2) how to establish meaningful links between those data sets, that is, observations, sensors, features of interest, and observed properties? In this paper, we present a Linked Data model and a RESTful proxy for OGC's Sensor Observation Service to improve the integration and interlinkage of observation data for the Digital Earth.
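Question (2), linking observations to sensors, features of interest, and observed properties, can be sketched as minting stable URIs and emitting triples that tie them together. The base URI, path scheme, and predicate names below are illustrative assumptions, not the paper's actual vocabulary.

```python
# Minimal sketch of Linked-Data-style URIs and links for one observation.
# BASE, the path layout, and the predicate names are illustrative.

BASE = "http://example.org/sos"   # assumed RESTful proxy root

def observation_uri(sensor_id, prop, iso_time):
    return f"{BASE}/observations/{sensor_id}/{prop}/{iso_time}"

def links_for(sensor_id, prop, feature_id, iso_time):
    """Triples linking one observation to the resources it involves."""
    obs = observation_uri(sensor_id, prop, iso_time)
    return [
        (obs, "observedBy",        f"{BASE}/sensors/{sensor_id}"),
        (obs, "observedProperty",  f"{BASE}/properties/{prop}"),
        (obs, "featureOfInterest", f"{BASE}/features/{feature_id}"),
    ]

triples = links_for("ws-17", "airTemperature",
                    "station-berlin", "2011-06-01T10:00Z")
print(triples[0][2])  # → http://example.org/sos/sensors/ws-17
```

Because each observation URI embeds sensor, property, and timestamp, it stays stable even as the underlying data set is frequently updated, which addresses question (1) as well.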
Using electric power knowledge is very important for the development of electric power big data technology. A new electric power knowledge theory model is proposed here to solve the problem of normalized knowledge modeling for the management and analysis of electric power big data. Current modeling techniques for electric power knowledge are inadequate because of the complexity and variety of the relationships among electric power system data. Ontology theory and Semantic Web technologies, used in electric power systems and in many other industry domains, provide a new kind of knowledge modeling method. On this basis, this paper proposes the structure, elements, basic calculations, and multidimensional reasoning method of the new knowledge model. A modeling example of the regulations defined in an electric power system operation standard is demonstrated. Different forms of the model and related technologies are also introduced, including electric power system standard modeling, multi-type data management, unstructured data search, knowledge display, and data analysis based on semantic expansion and reduction. The research shows that the new model is powerful and can adapt to the various knowledge expression requirements of electric power big data. With the development of electric power big data technology, it is expected that the knowledge model will be improved and used in more applications.
文摘Building model data organization is often programmed to solve a specific problem,resulting in the inability to organize indoor and outdoor 3D scenes in an integrated manner.In this paper,existing building spatial data models are studied,and the characteristics of building information modeling standards(IFC),city geographic modeling language(CityGML),indoor modeling language(IndoorGML),and other models are compared and analyzed.CityGML and IndoorGML models face challenges in satisfying diverse application scenarios and requirements due to limitations in their expression capabilities.It is proposed to combine the semantic information of the model objects to effectively partition and organize the indoor and outdoor spatial 3D model data and to construct the indoor and outdoor data organization mechanism of“chunk-layer-subobject-entrances-area-detail object.”This method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
文摘Availability of digital elevation models (DEMs) of a high quality is becoming more and more important in spatial studies. Standard methods for DEM creation use only intentionally acquired data sources. Two approaches which employ various types of data sets for DEM production are proposed: (1) Method of weighted sum of different data sources with morphological enhancement that conflates any additional data sources to principal DEM, and (2) DEM updating methods of modeling absolute and relative temporal changes, considering landslides, earthquakes, quarries, watererosion, building and highway constructions, etc. Spatial modeling of environmental variables concerning both approaches for (a) quality control of data sources, considering regions, (b) pre-processing of data sources, and (c) processing of the final DEM, have been applied. The variables are called rate of karst, morphologic roughness (modeled from slope, profile curvature and elevation), characteristic features, rate of forestation, hydrological network, and rate of urbanization. Only the variables evidenced as significant were used in spatial modeling to generate homogeneous regions in spatial modeling a-c. The production process uses different regions to define high quality conflation of data sources to the final DEM. The methodology had been confirmed by case studies. The result is an overall high quality DEM with various well-known parameters.
基金This work was supported by the National Key R&D Program of China(2020YFB0905900).
文摘Integrating marketing and distribution businesses is crucial for improving the coordination of equipment and the efficient management of multi-energy systems.New energy sources are continuously being connected to distribution grids;this,however,increases the complexity of the information structure of marketing and distribution businesses.The existing unified data model and the coordinated application of marketing and distribution suffer from various drawbacks.As a solution,this paper presents a data model of"one graph of marketing and distribution"and a framework for graph computing,by analyzing the current trends of business and data in the marketing and distribution fields and using graph data theory.Specifically,this work aims to determine the correlation between distribution transformers and marketing users,which is crucial for elucidating the connection between marketing and distribution.In this manner,a novel identification algorithm is proposed based on the collected data for marketing and distribution.Lastly,a forecasting application is developed based on the proposed algorithm to realize the coordinated prediction and consumption of distributed photovoltaic power generation and distribution loads.Furthermore,an operation and maintenance(O&M)knowledge graph reasoning application is developed to improve the intelligent O&M ability of marketing and distribution equipment.
基金funded by the Asia Pacific Network for Global Change Research(APN)-CAF2016-RR11-CMY-Pham
文摘Objective: To correlate climatic and environmental factors such as land surface temperature, rainfall, humidity and normalized difference vegetation index with the incidence of dengue to develop prediction models for the Philippines using remote-sensing data.Methods: Timeseries analysis was performed using dengue cases in four regions of the Philippines and monthly climatic variables extracted from Global Satellite Mapping of Precipitation for rainfall, and MODIS for the land surface temperature and normalized difference vegetation index from 2008-2015.Consistent dataset during the period of study was utilized in Autoregressive Integrated Moving Average models to predict dengue incidence in the four regions being studied.Results: The best-fitting models were selected to characterize the relationship between dengue incidence and climate variables.The predicted cases of dengue for January to December 2015 period fitted well with the actual dengue cases of the same timeframe.It also showed significantly good linear regression with a square of correlation of 0.869 5 for the four regions combined.Conclusion: Climatic and environmental variables are positively associated with dengue incidence and suit best as predictor factors using Autoregressive Integrated Moving Average models.This finding could be a meaningful tool in developing an early warning model based on weather forecasts to deliver effective public health prevention and mitigation programs.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 71704016 and 71331008), the Natural Science Foundation of Hunan Province (Grant No. 2017JJ2267), the Key Projects of the Chinese Ministry of Education (17JZD022), and the China Scholarship Council Project for Overseas Studies (201208430233, 201508430121), which are acknowledged.
Abstract: With market competition becoming fiercer, enterprises must update their products by constantly assimilating new big data knowledge and private knowledge to maintain their market share at different points in time in the big data environment. Typically, successive knowledge transfers influence one another when the interval between them is not too long, so the problem of continuous knowledge transfer in the big data environment must be studied. Building on research into one-time knowledge transfer, a model of continuous knowledge transfer is presented that accounts for the interaction between transfers and determines the optimal knowledge transfer time at different points in time in the big data environment. Simulation experiments were performed by adjusting several parameters. The experimental results verified the model's validity and supported conclusions about its practical application value; they can provide more effective decisions for enterprises that must carry out continuous knowledge transfer in the big data environment.
Abstract: With the development of meteorological services, the types of real-time observation data keep multiplying and the timeliness requirements keep rising, so existing methods of monitoring meteorological observation data transmission can no longer meet the needs. This paper proposes a new monitoring model, an "integrated monitoring model" for provincial meteorological observation data transmission, which covers whole-network monitoring of the transmission process. Based on this model, the integrated monitoring system for meteorological observation data transmission in Guangdong Province was developed. The system uses Java as the programming language; integrates the J2EE, Hibernate, Quartz, Snmp4j, and Slf4j frameworks; uses an Oracle database as the data storage carrier; and follows the MVC specification and agile development concepts. Development relied on four key technologies: the Simple Network Management Protocol, network connectivity detection, remote host management, and thread pooling. The integrated monitoring system has been put into operational use and, as a highlight of Guangdong's meteorological modernization, has played an active role in many major meteorological services.
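The "thread pool + connectivity detection" combination named above can be sketched generically: fan per-station reachability checks out over a worker pool so that one slow or unreachable station does not delay the rest of the scan. The described system is Java-based (Snmp4j etc.); this sketch uses Python's standard library instead, and the station data and probe are invented placeholders.

```python
# Hedged sketch of thread-pooled station polling. check_station() stands in
# for a real probe (ping, SNMP GET, socket connect) and simply reads the
# simulated status from the test data.
from concurrent.futures import ThreadPoolExecutor

def check_station(station):
    """Return (name, reachable?) for one observation station."""
    return station["name"], station.get("up", True)

def poll_all(stations, workers=4):
    """Probe every station concurrently and collect a status map."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check_station, stations))

stations = [{"name": "GZ01"}, {"name": "SZ02", "up": False}, {"name": "FS03"}]
print(poll_all(stations))
```

In a real deployment the probe would carry a timeout, and failures would feed an alerting pipeline rather than just a dictionary.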
Funding: Project 40473029 supported by the National Natural Science Foundation of China; Project 04JJ3046 supported by the Natural Science Foundation of Hunan Province, China
Abstract: To address the difficulty of integrating data built on different models when integrating spatial information, the characteristics of the raster structure, the vector structure, and mixed models were analyzed, and a hierarchical vector-raster integrative full-feature model was put forward by combining the advantages of the vector and raster models using an object-oriented method. The data structures of the four basic features, i.e., point, line, surface, and solid, are described; an application is analyzed and described; and the characteristics of the model are discussed. In this model, all objects in the real world are divided into and described as features within a hierarchy, and all data are organized in vector form. The model can describe data based on feature, field, network, and other models, avoiding the usual inability to integrate data based on different models and to perform spatial analysis on them in spatial information integration.
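An object-oriented hierarchy over the four basic features named in the abstract (point, line, surface, solid) might look like the sketch below. The class and field names are invented; the abstract does not specify the actual data structures.

```python
# Illustrative feature hierarchy for a vector-organized full-feature model.
class Feature:
    """Common base: every feature has an ID and attribute dictionary."""
    def __init__(self, fid, attrs=None):
        self.fid = fid
        self.attrs = attrs or {}

class Point(Feature):
    def __init__(self, fid, x, y, z=0.0, attrs=None):
        super().__init__(fid, attrs)
        self.x, self.y, self.z = x, y, z

class Line(Feature):
    def __init__(self, fid, points, attrs=None):
        super().__init__(fid, attrs)
        self.points = points                      # ordered Point features

    def length(self):
        """Sum of straight segment lengths between consecutive points."""
        return sum(((a.x - b.x) ** 2 + (a.y - b.y) ** 2
                    + (a.z - b.z) ** 2) ** 0.5
                   for a, b in zip(self.points, self.points[1:]))

class Surface(Feature):
    def __init__(self, fid, boundary, attrs=None):
        super().__init__(fid, attrs)
        self.boundary = boundary                  # closed boundary Line

class Solid(Feature):
    def __init__(self, fid, surfaces, attrs=None):
        super().__init__(fid, attrs)
        self.surfaces = surfaces                  # bounding Surfaces

road = Line("L1", [Point("P1", 0, 0), Point("P2", 3, 4)], {"class": "road"})
print(road.length())  # 3-4-5 triangle hypotenuse
```

The point of the hierarchy is that higher features are composed of lower ones, so raster cells, fields, or network edges can all be attached as attributes or composed features without breaking the common interface.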
Abstract: To construct mediators for data integration systems that integrate structured and semi-structured data, and to facilitate query reformulation and decomposition, the presented system uses the XML processing language (XPL) in the mediator. With XPL, it is easy to construct mediators for XML-based data integration, and the work done in the mediator is accelerated.
Abstract: In this paper, a multilevel secure relation hierarchical data model for multilevel secure databases is extended from the relation hierarchical data model for single-level environments. Based on the model, an upper-lower-layer relational integrity is presented after the covert channels caused by database integrity are analyzed and eliminated. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation hierarchical data model is capable of integratively storing and manipulating complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers, and character strings) in a multilevel secure database.
Funding: supported by the National Key Basic Research and Development Program of China under contract No. 2006CB701305, the National Natural Science Foundation of China under contract No. 40571129, and the National High-Technology Program of China under contract Nos. 2002AA639400, 2003AA604040, and 2003AA637030.
Abstract: Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates position in 3-D with time. Recently, greater emphasis has been placed on GIS (geographic information systems) to deal with marine information. GIS has shown great success in terrestrial applications over the last decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems, or their data models, are designed for land applications; they cannot cope well with the nature of the marine environment and marine information, and this poses a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data; it is based on the nature of marine data and overcomes the shortcomings that current spatio-temporal models show in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and easily extracts the aggregations that management needs at different levels.
Abstract: Facing the development of future 5G, emerging technologies such as the Internet of Things, big data, cloud computing, and artificial intelligence are driving explosive growth in data traffic. With radical changes in communication theory and implementation technologies, wireless communications and wireless networks have entered a new era. Among them, wireless big data (WBD) has tremendous value, and artificial intelligence (AI) opens unthought-of possibilities. However, in the big data development and artificial intelligence application communities, the lack of a sound theoretical foundation and of mathematical methods is regarded as a real challenge that needs to be solved. Starting from the basic problem of wireless communication and the interrelationship of demand, environment, and ability, this paper investigates the concept and data model of WBD, wireless data mining, wireless knowledge and wireless knowledge learning (WKL), and typical practical examples, to facilitate and open up more opportunities for WBD research and development. Such research is beneficial for creating new theoretical foundations and emerging technologies for future wireless communications.
Abstract: Compaction correction is a key part of paleo-geomorphic recovery methods, yet the influence of lithology on porosity evolution is not usually taken into account. Present methods merely classify lithologies as sandstone or mudstone and undertake separate porosity-depth compaction modeling. However, using just two lithologies is an oversimplification that cannot represent the compaction history, and in such schemes the precision of the compaction recovery is inadequate. To improve that precision, a depth compaction model has been proposed that involves both porosity and clay content. A clastic lithological compaction unit classification method, based on clay content, has been designed to identify lithological boundaries and establish sets of compaction units. On the basis of this classification, two compaction recovery methods that integrate well and seismic data are employed to extrapolate well-based compaction information outward along seismic lines and to recover the paleo-topography of the clastic strata in the region. The examples presented here show that a better understanding of paleo-geomorphology can be gained by applying the proposed compaction recovery technology.
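A common form for the porosity-depth relation is the exponential (Athy-type) law, φ(z) = φ₀·e^(−cz). One way to let clay content enter such a model, in the spirit of the abstract's "porosity and clay content" formulation, is to interpolate the surface porosity φ₀ and compaction coefficient c between sand and clay end members by clay fraction. The functional form and all constants below are illustrative assumptions, not the paper's calibrated model.

```python
import math

# Hedged sketch: Athy-type compaction with end members mixed by clay
# content. Constants are typical textbook orders of magnitude, invented
# here for illustration (phi0 dimensionless, c in 1/m, depth in m).
def porosity(depth_m, clay_frac,
             phi0_sand=0.45, phi0_clay=0.65,
             c_sand=0.00027, c_clay=0.00051):
    """Porosity at depth for a rock with the given clay fraction (0..1)."""
    phi0 = phi0_sand + clay_frac * (phi0_clay - phi0_sand)
    c = c_sand + clay_frac * (c_clay - c_sand)
    return phi0 * math.exp(-c * depth_m)

# Clay-rich rock starts more porous but compacts faster with depth.
for z in (0, 1000, 3000):
    print(z, round(porosity(z, 0.1), 3), round(porosity(z, 0.9), 3))
```

Decompaction for paleo-topography recovery then integrates (1 − φ) over the buried interval to conserve solid thickness while restoring the layer toward surface porosity.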
Abstract: In this paper, high-level knowledge of financial data, modeled by ordinary differential equations (ODEs), is discovered in dynamic data by using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict financial trends.
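The core idea of evolving ODE models to fit observed series can be illustrated with a toy (1+1) evolution strategy that searches for the parameters of dy/dt = a·y + b against synthetic data. This is a drastic simplification of APHEMA, which evolves model structures asynchronously in parallel; everything below is invented for the illustration.

```python
import random

# Toy evolutionary search for the parameters of dy/dt = a*y + b.
def simulate(a, b, y0, steps, dt=0.1):
    """Forward-Euler integration of the candidate ODE."""
    ys, y = [y0], y0
    for _ in range(steps):
        y += (a * y + b) * dt
        ys.append(y)
    return ys

def error(params, data):
    """Sum of squared deviations between simulation and data."""
    ys = simulate(params[0], params[1], data[0], len(data) - 1)
    return sum((u - v) ** 2 for u, v in zip(ys, data))

def evolve(data, generations=2000, seed=1):
    """(1+1)-ES: mutate the incumbent, keep the mutant if it fits better."""
    rng = random.Random(seed)
    best = (0.0, 0.0)
    for _ in range(generations):
        cand = (best[0] + rng.gauss(0, 0.05), best[1] + rng.gauss(0, 0.05))
        if error(cand, data) < error(best, data):
            best = cand
    return best

data = simulate(0.3, 1.0, y0=1.0, steps=20)   # synthetic "index" series
a, b = evolve(data)
print(a, b)
```

The real algorithm additionally evolves the right-hand-side expression itself (not just coefficients), which is what allows models to be "discovered" rather than merely fitted.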
Abstract: This paper proposes a security policy model for mandatory access control in a class B1 database management system whose labeling granularity is the tuple. The relation hierarchical data model is extended to a multilevel relation hierarchical data model. Based on this model, the concept of upper-lower-layer relational integrity is presented after the covert channels caused by database integrity are analyzed and eliminated. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. The system is based on the multilevel relation hierarchical data model and is capable of integratively storing and manipulating multilevel complicated objects (e.g., multilevel spatial data) and multilevel conventional data (e.g., integers, real numbers, and character strings).
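Polyinstantiation, which the extended SQL statements above are designed to handle, means the same primary key may legitimately hold different values at different security levels, so that low-cleared readers neither see nor can infer the existence of the high instance. A toy illustration (levels, keys, and values all invented):

```python
# Toy tuple-labeled relation demonstrating polyinstantiation.
LEVELS = {"U": 0, "C": 1, "S": 2}   # unclassified < confidential < secret

class MLSRelation:
    def __init__(self):
        self.rows = {}                       # (key, level) -> value

    def insert(self, key, level, value):
        # The same key at a different level is a new instance, not an
        # update -- this is polyinstantiation by design.
        self.rows[(key, level)] = value

    def select(self, key, clearance):
        """Return the highest instance at or below the reader's clearance."""
        visible = [(LEVELS[lvl], val) for (k, lvl), val in self.rows.items()
                   if k == key and LEVELS[lvl] <= LEVELS[clearance]]
        return max(visible)[1] if visible else None

r = MLSRelation()
r.insert("voyage42", "U", "cargo: grain")     # cover story for low users
r.insert("voyage42", "S", "cargo: weapons")   # the secret instance
print(r.select("voyage42", "U"))
print(r.select("voyage42", "S"))
```

Rejecting the low insert instead (because a secret row with that key exists) would itself leak information downward, which is exactly the covert channel the polyinstantiation discipline closes.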
Funding: general projects (Nos. 71672080 and 72072086) of the National Natural Science Foundation of China.
Abstract: Based on dynamic capability theory and legitimacy theory, a theoretical model is constructed to examine how big data capability, through the mediation of knowledge dynamic capability, drives business model innovation under the moderating effect of innovation legitimacy. The analysis is conducted using regression analysis and fuzzy-set qualitative comparative analysis (fsQCA) on survey data from 302 enterprises that have already implemented big data application practices. The study reaches four conclusions. (1) Big data capability has a significant positive impact on business model innovation. (2) Dynamic knowledge capability partially mediates the relationship between big data capability and business model innovation. (3) Innovation legitimacy positively influences business model innovation and positively moderates the relationship between big data capability and business model innovation. (4) Further qualitative comparative analysis identifies two causal paths that influence business model innovation.
Funding: supported by the National Natural Science Foundation of China (Nos. 51978121 and 51578103), the Key Projects in the National Science & Technology Pillar Program during the 12th Five-Year Plan Period of China (No. 2012BAJ02B05), and the National Key R&D Program during the 13th Five-Year Plan Period of China (No. 2018YFD1100701).
Abstract: Currently, indoor environment quality is described or evaluated mainly with either subjective or objective data. However, research has increasingly demonstrated that objective and subjective data each have weaknesses in characterizing indoor environment quality, and that they can compensate for each other's relative weaknesses. Hence, this study develops an integration model that combines indoor subjective and objective data based on the structural equation modeling approach, using survey data on residential indoor environments in Northeast China. The results indicated that the integration model fit the survey data well, and its validity was confirmed. Moreover, in contrast to the subjective data (R^2 = 0.363) and objective data (R^2 = 0.239), the integrated data (R^2 = 0.553) improved the explanatory power for satisfaction with the overall indoor environment. Furthermore, the integration model showed that indoor subjective data carry more weight in the integrated data than the corresponding objective data. The association between the thermal environment and indoor air quality (0.43 or 0.47) was the strongest among the interactions of the thermal, air quality, acoustic, and lighting environments. Consequently, the main contribution of this paper is a comprehensive model for integrating indoor environmental subjective and objective data, improving the ability to describe and assess indoor environment quality.
Abstract: This paper presents a procedure for purifying training data sets (i.e., past occurrences of slope failures) for the inverse estimation of unobserved trigger factors of "different types of simultaneous slope failures". Because pixel-by-pixel observation of trigger factors is difficult, the authors had previously proposed an inverse analysis algorithm for trigger factors based on SEM (structural equation modeling): through a measurement equation, the trigger factor is inversely estimated, and a TFI (trigger factor influence) map can be produced. As a subsequent task, a purification procedure for the training data set should be constructed to improve the accuracy of the TFI map, which depends on how representative the given training data sets of the different failure types are. The proposed procedure resamples the matched pixels between the original groups of past slope failures (i.e., surface slope failures, deep-seated slope failures, and landslides) and the three groups obtained by K-means clustering of all pixels corresponding to those slope failures. For all three types of slope failures, improved success rates with the resampled training data sets were confirmed. As a final outcome, the differences between the TFI maps produced with the original and the resampled training data sets are delineated on a DIF map (difference map), which is useful for analyzing trigger factor influence in "risky-side" and "safe-side" assessment sub-areas with respect to "different types of simultaneous slope failures".
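The matching-and-resampling step can be sketched as: cluster all failure pixels with K-means (k = number of failure types), then keep only pixels whose cluster agrees with the dominant cluster of their labeled group, discarding likely mislabeled pixels. The scalar features and labels below are invented; the real procedure clusters multivariate pixel attributes.

```python
# Hedged sketch of training-set purification via K-means matching.
def kmeans_1d(points, k, iters=50):
    """Plain k-means on scalar features; returns a cluster label per point."""
    cents = list(points[:k])                 # naive init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(p - cents[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                cents[c] = sum(members) / len(members)
    return labels

def purify(samples):
    """samples: list of (feature, failure_type). Keep matched pixels only."""
    feats = [f for f, _ in samples]
    groups = sorted({g for _, g in samples})
    labels = kmeans_1d(feats, k=len(groups))
    dominant = {}
    for g in groups:                          # dominant cluster per group
        ls = [l for (_, gg), l in zip(samples, labels) if gg == g]
        dominant[g] = max(set(ls), key=ls.count)
    return [s for s, l in zip(samples, labels) if l == dominant[s[1]]]

samples = [(1.0, "surface"), (1.2, "surface"), (9.0, "surface"),
           (10.0, "deep"), (10.5, "deep"), (1.1, "deep")]
print(purify(samples))   # the two cross-labeled outliers are dropped
```

The two pixels whose feature sits in the other group's cluster (9.0 labeled "surface", 1.1 labeled "deep") are removed, which is the purification effect the abstract reports as improved success rates.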
Abstract: Cyber Threat Intelligence (CTI) is a valuable resource for cybersecurity defense, but its multi-source and heterogeneous nature also poses challenges: security personnel may be unable to use CTI effectively to understand the state and trend of a cyberattack and respond promptly. To address these challenges, we propose a novel approach that consists of three steps. First, we construct the attack and defense analysis of cybersecurity ontology (ADACO) model by integrating multiple cybersecurity databases. Second, we develop the threat evolution prediction algorithm (TEPA), which can automatically detect threats at device nodes, correlate and map multi-source threat information, and dynamically infer the threat evolution process. TEPA leverages knowledge graphs to represent comprehensive threat scenarios and achieves better performance in simulated experiments by combining the structural and textual features of entities. Third, we design the intelligent defense decision algorithm (IDDA), which provides intelligent recommendations to security personnel on the most suitable defense techniques. IDDA outperforms the baseline methods in a comparative experiment.
Funding: The presented work was developed within the 52°North semantics community and partly funded by the European projects UncertWeb (FP7-248488), ENVISION (FP7-249170), and GENESIS (an Integrated Project, contract number 223996).
Abstract: The vision of a Digital Earth calls for more dynamic information systems, new sources of information, and stronger capabilities for their integration. Sensor networks have been identified as a major information source for the Digital Earth, while Semantic Web technologies have been proposed to facilitate integration. So far, sensor data are stored and published using the Observations & Measurements standard of the Open Geospatial Consortium (OGC) as the data model. With the advent of Volunteered Geographic Information and the Semantic Sensor Web, work on an ontological model gained importance within Sensor Web Enablement (SWE). In contrast to data models, an ontological approach abstracts from implementation details by modeling the physical world from the perspective of a particular domain; ontologies restrict the interpretation of vocabularies toward their intended meaning. The ongoing paradigm shift to Linked Sensor Data complements this attempt. Two questions have to be addressed: (1) how to refer to changing and frequently updated data sets using Uniform Resource Identifiers, and (2) how to establish meaningful links between those data sets, that is, observations, sensors, features of interest, and observed properties? In this paper, we present a Linked Data model and a RESTful proxy for OGC's Sensor Observation Service to improve the integration and inter-linkage of observation data for the Digital Earth.
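The two questions above can be made concrete with a small sketch: mint a stable URI per observation from its sensor and sampling time, then emit triples linking it to its sensor, feature of interest, and observed property. The base URL, identifiers, and predicate names below are illustrative inventions, not the paper's actual proxy vocabulary.

```python
# Hedged sketch of URI minting and inter-linking for Linked Sensor Data.
BASE = "http://example.org/sos"   # hypothetical RESTful proxy base

def observation_uri(sensor_id, time_iso):
    # Sensor + sampling time identify one observation stably, even though
    # the underlying data set is frequently updated (question 1).
    return f"{BASE}/observations/{sensor_id}/{time_iso}"

def observation_links(sensor_id, time_iso, feature_id, prop, value):
    # Triples linking the observation to the other resource types
    # (question 2); predicate names are illustrative only.
    obs = observation_uri(sensor_id, time_iso)
    return [
        (obs, "om:procedure", f"{BASE}/sensors/{sensor_id}"),
        (obs, "om:featureOfInterest", f"{BASE}/features/{feature_id}"),
        (obs, "om:observedProperty", f"{BASE}/properties/{prop}"),
        (obs, "om:result", value),
    ]

links = observation_links("thermo-1", "2015-06-01T12:00:00Z",
                          "lake-7", "water-temperature", 18.4)
for s, p, o in links:
    print(s, p, o)
```

Because each resource type gets its own dereferenceable URI, observations, sensors, features, and properties can be crawled and joined like any other Linked Data.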
Funding: supported by the Science and Technology Foundation of the State Grid Corporation of China (XT71-14-043).
Abstract: Using electric power knowledge is very important for the development of electric power big data technology. A new electric power knowledge theory model is proposed here to solve the problem of normalized, modeled electric power knowledge for the management and analysis of electric power big data. Current techniques for modeling electric power knowledge are inadequate because of the complexity and variety of the relationships among electric power system data. Ontology theory and Semantic Web technologies, used in electric power systems and many other industry domains, provide a new kind of knowledge modeling method. On this basis, this paper proposes the structure, elements, basic calculations, and multidimensional reasoning method of the new knowledge model. A modeling example of the regulations defined in an electric power system operation standard is demonstrated. Different forms of the model and related technologies are also introduced, including electric power system standard modeling, multi-type data management, unstructured data searching, knowledge display, and data analysis based on semantic expansion and reduction. The research shows that the new model is powerful and can adapt to the various knowledge expression requirements of electric power big data. As electric power big data technology develops, the knowledge model is expected to be improved and applied more widely.