In this review, we highlight some recent methodological and theoretical developments in the estimation and testing of large panel data models with cross-sectional dependence. The paper begins with a discussion of issues of cross-sectional dependence and introduces the concepts of weak and strong cross-sectional dependence. Attention then turns to spatial and factor approaches for modeling cross-sectional dependence in both linear and nonlinear (nonparametric and semiparametric) panel data models. Finally, we conclude with some speculations on future research directions.
The data model is the core knowledge of a database course, and a deep understanding of data models is the key to mastering database design and application. The data models of NoSQL databases are categorized as key-value stores, column-oriented stores, document-oriented stores, and graph databases. This paper makes a comparative analysis of the characteristics of the relational data model and NoSQL data models, and presents the design and implementation of the different data models through cases, so that students can master the relevant theory and application methods of each database model.
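The NoSQL model categories listed above can be made concrete with a minimal sketch: the same "student" fact expressed as a key-value pair, a nested document, and a property graph. All names and values here are invented for illustration and are not from the paper.

```python
# Illustrative only: one student record expressed in three NoSQL styles.

# Key-value store: an opaque value addressed by a single key.
kv_store = {"student:1001": '{"name": "Ada", "major": "CS"}'}

# Document store: nested, schema-flexible documents grouped in collections.
doc_store = {
    "students": [
        {"_id": 1001, "name": "Ada", "major": "CS",
         "courses": [{"code": "DB101", "grade": "A"}]},
    ]
}

# Graph model: typed nodes plus explicit, labeled edges.
graph = {
    "nodes": {1001: {"label": "Student", "name": "Ada"},
              2001: {"label": "Course", "code": "DB101"}},
    "edges": [(1001, "ENROLLED_IN", 2001)],
}
```

The contrast is in what each model makes cheap: key lookup, retrieval of a whole nested record, or traversal of relationships.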
Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the Industry Foundation Classes (IFC) building information modeling standard, City Geography Markup Language (CityGML), Indoor Geography Markup Language (IndoorGML), and other models are compared and analyzed. CityGML and IndoorGML face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. We propose combining the semantic information of model objects to effectively partition and organize indoor and outdoor spatial 3D model data, and construct an indoor-outdoor data organization mechanism of "chunk-layer-subobject-entrances-area-detail object." The method is verified by proposing a 3D data organization method for indoor and outdoor space and building a 3D visualization system on top of it.
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society.
However, it’s important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
An empirical likelihood approach to estimating the coefficients of a linear model with interval-censored responses is developed in this paper. By constructing an unbiased transformation of the interval-censored data, an empirical log-likelihood function with an asymptotic χ² distribution is derived, and confidence regions for the coefficients are constructed. Simulation results indicate that the method outperforms the normal approximation method in terms of coverage accuracy.
In order to find an effective way to improve the quality of school management, it is necessary to extract valuable information from students' original data and feed it back into student management. Firstly, some new and successful educational data mining models were analyzed and compared; these models outperform traditional models (such as the Knowledge Tracing Model) in efficiency, comprehensiveness, ease of use, and stability. Then, a neural network algorithm was used to explore the feasibility of applying educational data mining to student management, and the results show that it has enough predictive accuracy and reliability to be put into practice. Finally, the possibility and prospects of applying educational data mining in a teaching management system for university students were assessed.
Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia, since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) approach is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used, and an ensemble Convolutional Neural Network (CNN) model is developed. The ensemble model consists of all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well as single classifiers, with an accuracy of 94%, while the ensemble model (MobileNetV2+DenseNet169) achieved an accuracy of 96.9%. Using the data-synchronous parallel model in distributed TensorFlow, the training process accelerated performance by 98.6% and outperformed other conventional approaches.
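Two of the abstract's ingredients can be sketched in a few lines of NumPy: SMOTE's core step of interpolating synthetic minority samples between neighbors, and soft-voting ensembling by averaging the class probabilities of two models. This is a generic illustration, not the paper's CNN pipeline; the probability arrays are invented stand-ins for the MobileNetV2 and DenseNet169 softmax outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# SMOTE-style oversampling (simplified): synthesize a minority-class sample
# by interpolating between a minority point and one of its neighbours.
minority = rng.normal(0.0, 1.0, size=(5, 3))   # 5 minority samples, 3 features
i, j = 0, 1                                    # a point and a chosen neighbour
lam = rng.uniform()                            # interpolation factor in [0, 1)
synthetic = minority[i] + lam * (minority[j] - minority[i])

# Soft-voting ensemble: average the per-class probability outputs of two
# models, then take the argmax as the ensemble prediction.
p_model_a = np.array([[0.70, 0.30], [0.20, 0.80]])
p_model_b = np.array([[0.60, 0.40], [0.40, 0.60]])
p_ensemble = (p_model_a + p_model_b) / 2.0
labels = p_ensemble.argmax(axis=1)
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh an uncertain one, which is one reason the two-model ensemble can beat either single classifier.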
This paper presents a methodology driven by database constraints for designing and developing (database) software applications. Much needed and with excellent results, this paradigm guarantees the highest possible quality of the managed data. The proposed methodology is illustrated with an easy-to-understand yet complex medium-sized genealogy software application driven by more than 200 database constraints, which fully meets such expectations.
Pedestrian positioning systems (PPS) using wearable inertial sensors have wide applications in various emerging fields such as smart healthcare, emergency rescue, and soldier positioning. The performance of a traditional PPS is limited by the cumulative error of inertial sensors, the complex motion modes of pedestrians, and the low robustness of the multi-sensor collaboration structure. This paper presents a hybrid pedestrian positioning system combining wearable inertial sensors and ultrasonic ranging (H-PPS). A robust two-node integration structure is developed to adaptively combine the motion data acquired from a single waist-mounted node and a foot-mounted node, enhanced by a novel ellipsoid constraint model. In addition, a deep-learning-based walking speed estimator is proposed that considers all the motion features provided by the different nodes, which effectively reduces the cumulative error originating from the inertial sensors. Finally, a comprehensive data- and model-driven framework is presented to combine the motion data provided by the different sensor nodes with the walking speed estimator, and multi-level constraints are extracted to further improve the performance of the overall system. Experimental results indicate that the proposed H-PPS significantly improves on the performance of a single PPS and outperforms existing algorithms on accuracy metrics in complex indoor scenarios.
Structural change in panel data is a widespread phenomenon. This paper proposes a fluctuation test to detect a structural change at an unknown date in heterogeneous panel data models with or without common correlated effects. The asymptotic properties of the fluctuation statistics in both cases are developed under the null and local alternative hypotheses. Furthermore, the consistency of the change point estimator is proven. Monte Carlo simulation shows that the fluctuation test controls the probability of type I error in most cases, and its empirical power is high for small and moderate sample sizes. An application of the procedure to real data is presented.
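The general idea behind fluctuation-type break tests can be illustrated with a textbook CUSUM statistic on a single series: standardize cumulative sums of demeaned observations and flag a break where their absolute value peaks. This is a generic sketch on simulated data, not the panel statistic derived in the paper.

```python
import numpy as np

# Simulated series with a mean shift (a structural break) halfway through.
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])

# CUSUM-type fluctuation process: scaled partial sums of demeaned data.
e = y - y.mean()
s = np.cumsum(e) / (e.std(ddof=1) * np.sqrt(len(y)))

stat = np.abs(s).max()            # sup of the fluctuation process
break_at = np.abs(s).argmax() + 1 # estimated break date (1-indexed)
```

Under the no-break null the process behaves like a Brownian bridge, so an unusually large sup signals instability; the argmax serves as a simple change-point estimator.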
In this paper, a multilevel secure relation-hierarchical data model for multilevel secure databases is extended from the relation-hierarchical data model of the single-level environment. Based on the model, an upper-lower-layer relational integrity is presented after analyzing and eliminating the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation-hierarchical data model is capable of integratively storing and manipulating complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers, and character strings) in a multilevel secure database.
Homogeneous binary function products are frequently encountered in the sub-universes modeled by databases, spanning from genealogical trees and sports to education and healthcare. Their properties must be discovered and enforced by the software applications managing such data to guarantee plausibility. The (Elementary) Mathematical Data Model provides 17 types of dyadic-based homogeneous binary function product constraint categories. MatBase, an intelligent data and knowledge base management system prototype, allows database designers to declare them simply by clicking the corresponding checkboxes, and it automatically generates code for enforcing them. This paper describes the algorithms that MatBase uses to enforce all 17 types of homogeneous binary function product constraints, which may also be employed by developers without access to MatBase.
Hydrocarbon production from shale has attracted much attention in recent years. When applied to this prolific and hydrocarbon-rich resource play, our understanding of the complexities of the flow mechanism (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to the modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition, and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well logs, completion, and hydraulic fracturing data to guide our model and determine its behavior. The uniqueness of this technology is that it incorporates so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracturing process. "Hard data" refers to field measurements made during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, and proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as fracture length, width, height, and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths, and reservoir properties.
The full-field history matching process was successfully completed using this data-driven approach, capturing the production behavior with acceptable accuracy both for individual wells and for the entire asset.
Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. A multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is first introduced, which is suitable for integrating different types of schemas. Then an approach to schema mapping based on XIDM in multidatabase systems is presented. The mappings include global mappings, which deal with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, which process the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are also discussed. The implementation results demonstrate that XIDM is an efficient model for managing multiple heterogeneous data sources, and that the XIDM-based schema mapping approaches behave very well when integrating relational and object-oriented database systems as well as other file systems.
Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to position in 3-D and to time. Recently, greater emphasis has been placed on GIS (geographical information systems) to deal with marine information. GIS has shown great success in terrestrial applications over the last decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems, or their data models, are designed for land applications; they cannot cope well with the nature of the marine environment and marine information, and this poses a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data, which bases itself on the nature of marine data and overcomes the shortcomings of current spatio-temporal models when they are used in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
Atmospheric CO₂ is one of the key parameters for estimating air-sea CO₂ flux. The Orbiting Carbon Observatory-2 (OCO-2) satellite has observed the column-averaged dry-air mole fraction of global atmospheric carbon dioxide (XCO₂) since 2014. In this study, the OCO-2 XCO₂ products were compared with in-situ data from the Total Carbon Column Observing Network (TCCON) and the Global Monitoring Division (GMD), and with modeling data from CarbonTracker2019, over the global ocean and land. Results showed that the OCO-2 XCO₂ data are consistent with the TCCON and GMD in-situ XCO₂ data, with mean absolute biases of 0.25×10⁻⁶ and 0.67×10⁻⁶, respectively. Moreover, the OCO-2 XCO₂ data are also consistent with the CarbonTracker2019 modeling XCO₂ data, with mean absolute biases of 0.78×10⁻⁶ over ocean and 1.02×10⁻⁶ over land. The results indicate the high accuracy of the OCO-2 XCO₂ product over the global ocean, which could be applied to estimate air-sea CO₂ flux.
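The validation metric quoted above, mean absolute bias, is simply the average of the absolute differences between paired satellite and reference values. A minimal sketch on hypothetical XCO₂ values (in ppm; none of these numbers come from the study):

```python
# Mean absolute bias between satellite retrievals and co-located in-situ
# references. The values below are invented for illustration.
satellite = [402.1, 405.3, 398.7, 401.0]
in_situ   = [402.4, 405.0, 399.2, 400.8]

mean_abs_bias = sum(abs(s, ) if False else abs(s - r)
                    for s, r in zip(satellite, in_situ)) / len(satellite)
```

Because the differences are taken in absolute value, positive and negative retrieval errors do not cancel, unlike a plain mean bias.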
To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed, and a logical design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations, and reviewing entities. The conversion of relations among entities into foreign keys, and of entities and physical attributes into tables and fields, is interpreted completely. On this basis, a multi-dimensional database reflecting the management and analysis of monitoring data in a dam safety monitoring system has been established, for which fact tables and dimension tables have been designed. Finally, based on the service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that a multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design; it is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
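The fact-table/dimension-table layout described above is a star schema, which can be sketched in a few lines of SQL (here via Python's sqlite3). The table and column names are illustrative inventions, not the schema from the paper.

```python
import sqlite3

# Star schema sketch: one fact table of monitoring readings joined to
# dimension tables for sensors and time. Names are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_sensor (sensor_id INTEGER PRIMARY KEY, type TEXT, location TEXT);
CREATE TABLE dim_time   (time_id   INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE fact_reading (
    sensor_id INTEGER REFERENCES dim_sensor(sensor_id),
    time_id   INTEGER REFERENCES dim_time(time_id),
    value     REAL
);
""")
con.execute("INSERT INTO dim_sensor VALUES (1, 'seepage', 'dam toe')")
con.execute("INSERT INTO dim_time VALUES (10, '2024-01-01')")
con.execute("INSERT INTO fact_reading VALUES (1, 10, 3.7)")

# A typical analysis query joins the fact table back to its dimensions.
row = con.execute("""
    SELECT s.type, t.day, f.value
    FROM fact_reading f
    JOIN dim_sensor s ON s.sensor_id = f.sensor_id
    JOIN dim_time   t ON t.time_id   = f.time_id
""").fetchone()
```

Keeping descriptive attributes in dimension tables and measurements in the fact table is what lets the monitoring data be aggregated "at different levels" without restructuring.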
This paper describes multi-view modeling and the data model transformation that supports it. We have proposed a reference model for CAD system generation, which can be applied to various domain-specific languages. However, the current CAD system generation cannot integrate data across multiple domains. Generally, each domain has its own view of products; for example, in the domain of architectural structure, designers extract the necessary data from the data in the architectural design. Domain experts translate one view into another across domains using their own expertise. Multi-view modeling is a way to integrate product data from multiple domains and make it possible for computers to translate views among the various domains.
Flexible roll forming is a promising manufacturing method for the production of variable-cross-section products. Because the plastic strain in this forming process is much larger than that of the uniform deformation phase of a uniaxial tensile test, the widely adopted practice of simulating the forming process with unsupplemented material data from the uniaxial tensile test leads to large errors. To reduce this error, the material data are supplemented based on three constitutive models. A finite element model of a six-pass flexible roll forming process is then established based on both the supplemented material data and the original material data from the uniaxial tensile test. A flexible roll forming experiment on a B-pillar reinforcing plate is carried out to verify the proposed method, and the final cross-section shapes of the experimental and simulated results are compared. The simulation calculated with material data supplemented using the Swift model agrees well with the experimental results, while the simulation based on the original material data cannot predict the actual deformation accurately. The results indicate that this material data supplement method is reliable and indispensable, and that the simulation model reflects the real metal forming process well. Detailed analyses of the distribution and history of plastic strain at different positions are performed. The proposed material data supplement method tackles a problem ignored in other roll forming simulations, and thus greatly improves the accuracy of forming process simulation.
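The kind of extrapolation described above can be sketched with the Swift hardening law, σ = K(ε₀ + εₚ)ⁿ, which extends the flow-stress curve beyond the strains a tensile test can measure. The parameter values below are hypothetical, not fitted values from the paper.

```python
# Supplementing tensile-test data with the Swift hardening law:
#   sigma = K * (eps0 + eps_plastic) ** n
# K (MPa), eps0, and n are hypothetical fitted parameters.
K, eps0, n = 600.0, 0.01, 0.2

def swift_stress(eps_plastic):
    """Flow stress (MPa) at a given equivalent plastic strain."""
    return K * (eps0 + eps_plastic) ** n

# A tensile test typically loses uniformity near eps ~ 0.2, while roll
# forming can reach far larger strains; the model fills the gap.
stress_at_test_limit = swift_stress(0.2)
stress_extrapolated  = swift_stress(0.8)
```

Fitting K, ε₀, and n on the measured uniform-deformation range and then evaluating the law at larger strains is what "supplementing the material data" amounts to for this constitutive model.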
In network-supported collaborative design, data processing plays a vital role. Much effort has been spent in this area, and many kinds of approaches have been proposed. Building on the relevant literature, this paper presents an extensible markup language (XML) based strategy for several important data processing problems in network-supported collaborative design, such as representing the standard for the exchange of product model data (STEP) with XML for product information expression, and managing XML documents using a relational database. The paper gives a detailed exposition of how to define the mapping between the XML structure and the relational database structure, and of how XML-QL queries can be translated into structured query language (SQL) queries. Finally, the structure of an XML-based data processing system is presented.
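The XML-to-relational mapping described above can be sketched with Python's standard library: shred an XML document's elements into table rows, after which a path-style query over the XML becomes an ordinary SQL predicate. The document and schema here are invented for illustration and do not reflect the paper's actual mapping rules.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A tiny, hypothetical product document to be "shredded" into a table.
doc = "<products><part id='p1' name='bolt'/><part id='p2' name='nut'/></products>"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE part (id TEXT PRIMARY KEY, name TEXT)")

# Map each <part> element to one relational row (element attributes -> columns).
for elem in ET.fromstring(doc).iter("part"):
    con.execute("INSERT INTO part VALUES (?, ?)", (elem.get("id"), elem.get("name")))

# An XML-QL-style selection such as //part[name='bolt'] then translates
# into a WHERE clause over the shredded table.
rows = con.execute("SELECT id FROM part WHERE name = 'bolt'").fetchall()
```

Once the structural mapping is fixed, query translation is largely mechanical: element tests become table references and attribute predicates become column predicates.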
Funding (panel data models review): Supported by the National Natural Science Foundation of China (71131008 (Key Project) and 71271179).
Funding (database course data models paper): This work was partly supported through the collaborative education projects of production and learning, the 2019 Sichuan teaching reform and research project, and a 2019 teaching reform and research project of the University of Electronic Science and Technology.
文摘Data model is the core knowledge of database course.A deep understanding of data model is the key to mastering database design and application.The data models of NoSQL databases are categorized as key-value stores,column-oriented stores,document-oriented stores and graph databases.This paper makes a comparative analysis of the characteristics of the relational data model and NoSQL data models,and gives the design and implementation of different data models combined with cases,so that students can master the relevant theories and application methods of the database model.
文摘Building model data organization is often programmed to solve a specific problem,resulting in the inability to organize indoor and outdoor 3D scenes in an integrated manner.In this paper,existing building spatial data models are studied,and the characteristics of building information modeling standards(IFC),city geographic modeling language(CityGML),indoor modeling language(IndoorGML),and other models are compared and analyzed.CityGML and IndoorGML models face challenges in satisfying diverse application scenarios and requirements due to limitations in their expression capabilities.It is proposed to combine the semantic information of the model objects to effectively partition and organize the indoor and outdoor spatial 3D model data and to construct the indoor and outdoor data organization mechanism of“chunk-layer-subobject-entrances-area-detail object.”This method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
文摘Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. However, it’s important to acknowledge the limitations of the dataset used in this study. 
Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
文摘An empirical likelihood approach to estimate the coefficients in linear model with interval censored responses is developed in this paper. By constructing unbiased transformation of interval censored data,an empirical log-likelihood function with asymptotic X^2 is derived. The confidence regions for the coefficients are constructed. Some simulation results indicate that the method performs better than the normal approximation method in term of coverage accuracies.
Funding (educational data mining paper): Sponsored by the Ability Enhancement Project of Teaching Staff in Harbin Institute of Technology (Grant No. 06).
文摘In order to find an effective way to improve the quality of school management,finding valuable information from students' original data and providing feedback for student management are necessary. Firstly,some new and successful educational data mining models were analyzed and compared. These models have better performance than traditional models( such as Knowledge Tracing Model) in efficiency,comprehensiveness,ease of use,stability and so on. Then,the neural network algorithm was conducted to explore the feasibility of the application of educational data mining in student management,and the results show that it has enough predictive accuracy and reliability to be put into practice. In the end,the possibility and prospect of the application of educational data mining in teaching management system for university students was assessed.
文摘Pneumonia is an acute lung infection that has caused many fatalitiesglobally. Radiologists often employ chest X-rays to identify pneumoniasince they are presently the most effective imaging method for this purpose.Computer-aided diagnosis of pneumonia using deep learning techniques iswidely used due to its effectiveness and performance. In the proposed method,the Synthetic Minority Oversampling Technique (SMOTE) approach is usedto eliminate the class imbalance in the X-ray dataset. To compensate forthe paucity of accessible data, pre-trained transfer learning is used, and anensemble Convolutional Neural Network (CNN) model is developed. Theensemble model consists of all possible combinations of the MobileNetv2,Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2and DenseNet169 performed well in the Single classifier model, with anaccuracy of 94%, while the ensemble model (MobileNetV2+DenseNet169)achieved an accuracy of 96.9%. Using the data synchronous parallel modelin Distributed Tensorflow, the training process accelerated performance by98.6% and outperformed other conventional approaches.
Abstract: This paper presents a methodology driven by database constraints for designing and developing (database) software applications. Much needed and with excellent results, this paradigm guarantees the highest possible quality of the managed data. The proposed methodology is illustrated with an easy to understand, yet complex, medium-sized genealogy software application driven by more than 200 database constraints, which fully meets such expectations.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 52175531, and in part by the Science and Technology Research Program of Chongqing Municipal Education Commission under Grant Nos. KJQN202000605 and KJZD-M202000602.
Abstract: Pedestrian positioning systems (PPS) using wearable inertial sensors have wide applications in emerging fields such as smart healthcare, emergency rescue, and soldier positioning. The performance of a traditional PPS is limited by the cumulative error of inertial sensors, the complex motion modes of pedestrians, and the low robustness of the multi-sensor collaboration structure. This paper presents a hybrid pedestrian positioning system combining wearable inertial sensors and ultrasonic ranging (H-PPS). A robust two-node integration structure is developed to adaptively combine the motion data acquired from a single waist-mounted node and a foot-mounted node, enhanced by a novel ellipsoid constraint model. In addition, a deep-learning-based walking speed estimator is proposed that considers all the motion features provided by the different nodes, which effectively reduces the cumulative error originating from the inertial sensors. Finally, a comprehensive data- and model-driven framework is presented to effectively combine the motion data from the different sensor nodes with the walking speed estimator, and multi-level constraints are extracted to further improve the performance of the overall system. Experimental results indicate that the proposed H-PPS significantly improves on the single PPS and outperforms existing algorithms in accuracy under complex indoor scenarios.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11801438, 12161072, and 12171388; the Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 2023-JC-YB-058; and the Innovation Capability Support Program of Shaanxi under Grant No. 2020PT-023.
Abstract: Structural change in panel data is a widespread phenomenon. This paper proposes a fluctuation test to detect a structural change at an unknown date in heterogeneous panel data models with or without common correlated effects. The asymptotic properties of the fluctuation statistic in both cases are developed under the null and local alternative hypotheses. Furthermore, the consistency of the change point estimator is proven. Monte Carlo simulation shows that the fluctuation test controls the probability of type I error in most cases, and the empirical power is high for small and moderate sample sizes. An application of the procedure to a real data set is presented.
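The break-date estimation idea behind such procedures can be sketched in a much-simplified univariate form (a least-squares estimator for a single shift in mean, not the paper's panel fluctuation statistic; the simulated series is hypothetical):

```python
import numpy as np

def change_point(y):
    """Least-squares break-date estimator for a one-time shift in mean:
    pick the split that minimizes the total within-segment sum of squares."""
    n = len(y)
    best_k, best_ssr = None, np.inf
    for k in range(2, n - 1):  # candidate break dates
        ssr = (np.sum((y[:k] - y[:k].mean()) ** 2)
               + np.sum((y[k:] - y[k:].mean()) ** 2))
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

rng = np.random.default_rng(0)
# mean shifts from 0 to 3 at t = 100
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(change_point(y))
```

With a shift this large relative to the noise, the estimated break date lands at or very near the true date, consistent with the consistency result stated in the abstract.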
Abstract: A multilevel secure relation hierarchical data model for multilevel secure databases is extended from the relation hierarchical data model in a single-level environment in this paper. Based on the model, an upper-lower-layer relational integrity is presented after analyzing and eliminating the covert channels caused by database integrity. Two SQL statements are extended to process polyinstantiation in the multilevel secure environment. A system based on the multilevel secure relation hierarchical data model is capable of storing and manipulating, in an integrated manner, both complicated objects (e.g., multilevel spatial data) and conventional data (e.g., integers, real numbers, and character strings) in a multilevel secure database.
Abstract: Homogeneous binary function products are frequently encountered in the sub-universes modeled by databases, spanning from genealogical trees and sports to education and healthcare. Their properties must be discovered and enforced by the software applications managing such data to guarantee plausibility. The (Elementary) Mathematical Data Model provides 17 types of dyadic-based homogeneous binary function product constraint categories. MatBase, an intelligent data and knowledge base management system prototype, allows database designers to declare them by simply clicking the corresponding checkboxes, and automatically generates code for enforcing them. This paper describes the algorithms that MatBase uses for enforcing all 17 types of homogeneous binary function product constraints, which may also be employed by developers without access to MatBase.
Funding: The authors thank RPSEA and the U.S. Department of Energy for partially funding this study.
Abstract: Hydrocarbon production from shale has attracted much attention in recent years. When applied to these prolific and hydrocarbon-rich resource plays, our understanding of the complexities of the flow mechanism (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to the modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition, and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well logs, completion, and hydraulic fracturing data to guide the model and determine its behavior. The uniqueness of this technology is that it incorporates so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracturing process. "Hard data" refers to field measurements taken during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, and proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as fracture length, width, height, and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths, and reservoir properties. The full-field history matching process was successfully completed using this data-driven approach, capturing the production behavior with acceptable accuracy both for individual wells and for the entire asset.
Abstract: Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. First, a common multidatabase data model based on XML is introduced, named the XML-based Integration Data Model (XIDM), which is suitable for integrating different types of schemas. Then an approach to schema mapping based on XIDM in multidatabase systems is presented. The mappings include global mappings, dealing with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, handling the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are also discussed. The implementation results demonstrate that XIDM is an efficient model for managing multiple heterogeneous data sources, and that the XIDM-based schema mapping approaches behave very well when integrating relational and object-oriented database systems as well as other file systems.
Funding: Supported by the National Key Basic Research and Development Program of China under contract No. 2006CB701305; the National Natural Science Foundation of China under contract No. 40571129; and the National High-Technology Program of China under contract Nos. 2002AA639400, 2003AA604040, and 2003AA637030.
Abstract: Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to position in 3-D and to time. Recently, greater emphasis has been placed on GIS (geographical information systems) to deal with marine information. GIS has shown great success in terrestrial applications over the last decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems, or their data models, are designed for land applications and cannot cope well with the nature of the marine environment and marine information. This poses a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data, which is based on the nature of marine data and overcomes the shortcomings of current spatio-temporal models when they are used in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
Funding: The National Key Research and Development Program of China under contract No. 2017YFA0603004; the Fund of Southern Marine Science and Engineering Guangdong Laboratory (Zhanjiang) (Zhanjiang Bay Laboratory) under contract No. ZJW-2019-08; the National Natural Science Foundation of China under contract Nos. 41825014, 41676172, and 41676170; and the Global Change and Air-Sea Interaction Project of China under contract Nos. GASI-02-SCS-YGST2-01, GASI-02-PAC-YGST2-01, and GASI-02-IND-YGST2-01.
Abstract: Atmospheric CO₂ is one of the key parameters for estimating air-sea CO₂ flux. The Orbiting Carbon Observatory-2 (OCO-2) satellite has observed the column-averaged dry-air mole fraction of global atmospheric carbon dioxide (XCO₂) since 2014. In this study, the OCO-2 XCO₂ products were compared with in-situ data from the Total Carbon Column Observing Network (TCCON) and the Global Monitoring Division (GMD), and with modeling data from CarbonTracker2019, over the global ocean and land. Results showed that the OCO-2 XCO₂ data are consistent with the TCCON and GMD in-situ XCO₂ data, with mean absolute biases of 0.25×10⁻⁶ and 0.67×10⁻⁶, respectively. Moreover, the OCO-2 XCO₂ data are also consistent with the CarbonTracker2019 modeling XCO₂ data, with mean absolute biases of 0.78×10⁻⁶ over the ocean and 1.02×10⁻⁶ over land. The results indicate the high accuracy of the OCO-2 XCO₂ product over the global ocean, which could be applied to estimate air-sea CO₂ flux.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 50539010, 50539110, 50579010, 50539030, and 50809025).
Abstract: To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logical design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations, and reviewing entities. The conversion of relations among entities into foreign keys, and of entities and physical attributes into tables and fields, is interpreted completely. On this basis, a multi-dimensional database reflecting the management and analysis of monitoring data in a dam safety monitoring system has been established, for which fact tables and dimension tables have been designed. Finally, based on the service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This project shows that a multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
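The fact-table/dimension-table layout described in this abstract can be sketched as a tiny star schema in SQLite (an illustrative sketch only; the table and column names are hypothetical, and the paper's system was built in Delphi, not Python):

```python
import sqlite3

# Minimal star schema for a monitoring data mart: one fact table of readings
# keyed to two dimension tables (sensor and time).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_sensor (sensor_id INTEGER PRIMARY KEY, kind TEXT, location TEXT);
CREATE TABLE dim_time   (time_id   INTEGER PRIMARY KEY, day  TEXT);
CREATE TABLE fact_reading (
    sensor_id INTEGER REFERENCES dim_sensor(sensor_id),
    time_id   INTEGER REFERENCES dim_time(time_id),
    value     REAL
);
""")
con.execute("INSERT INTO dim_sensor VALUES (1, 'seepage', 'left abutment')")
con.execute("INSERT INTO dim_time VALUES (1, '2024-01-01')")
con.execute("INSERT INTO fact_reading VALUES (1, 1, 0.42)")

# Analysis queries join the fact table back to its dimensions.
row = con.execute("""
    SELECT s.kind, t.day, f.value
    FROM fact_reading f
    JOIN dim_sensor s USING (sensor_id)
    JOIN dim_time   t USING (time_id)
""").fetchone()
print(row)  # ('seepage', '2024-01-01', 0.42)
```

Keeping descriptive attributes in dimension tables and measurements in the fact table is what lets such a design grow without restructuring, the simplification the abstract claims.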
Abstract: This paper describes multi-view modeling and data model transformation for that modeling. We have proposed a reference model of CAD system generation, which can be applied to various domain-specific languages. However, the current CAD system generation cannot integrate data from multiple domains. Generally, each domain has its own view of products. For example, in the domain of architectural structure, designers extract the necessary data from the data in architectural design. Domain experts translate one view into another across domains using their own expertise. Multi-view modeling is a way to integrate product data from multiple domains and make it possible for computers to translate views among various domains.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51205004 and 51475003), the Beijing Municipal Natural Science Foundation of China (Grant No. 3152010), and the Beijing Municipal Education Committee Science and Technology Program, China (Grant No. KM201510009004).
Abstract: Flexible roll forming is a promising manufacturing method for producing products with variable cross sections. Because the plastic strain in this forming process is much larger than that in the uniform deformation phase of a uniaxial tensile test, the widely adopted practice of simulating the forming process with unsupplemented material data from the uniaxial tensile test will certainly lead to large errors. To reduce this error, the material data are supplemented based on three constitutive models. A finite element model of a six-pass flexible roll forming process is then established based on both the supplemented material data and the original material data from the uniaxial tensile test. A flexible roll forming experiment on a B-pillar reinforcing plate is carried out to verify the proposed method. The final cross-section shapes of the experimental and simulated results are compared. It is shown that the simulation calculated with material data supplemented based on the Swift model agrees well with the experimental results, while the simulation based on the original material data cannot predict the actual deformation accurately. The results indicate that this material data supplement method is reliable and indispensable, and that the simulation model can well reflect the real metal forming process. A detailed analysis of the distribution and history of plastic strain at different positions is performed. The proposed material data supplement method tackles a problem that is ignored in other roll forming simulations, and thus greatly improves the accuracy of forming process simulation.
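The supplementation idea can be illustrated with the Swift hardening law the abstract names, σ = K(ε₀ + εₚ)ⁿ, extrapolating the flow curve beyond the strain range a tensile test can reach (the parameter values below are illustrative assumptions, not the paper's fitted constants):

```python
import numpy as np

# Swift hardening law: flow stress as a function of plastic strain.
# K (MPa), eps0 and n are hypothetical values for demonstration only.
K, eps0, n = 800.0, 0.01, 0.2

def swift(eps_p):
    """Flow stress sigma = K * (eps0 + eps_p)**n."""
    return K * (eps0 + eps_p) ** n

measured = np.linspace(0.0, 0.2, 5)   # strain range covered by the tensile test
extended = np.linspace(0.2, 1.0, 5)   # supplemented range needed by the FE model
print(np.round(swift(measured), 1))
print(np.round(swift(extended), 1))
```

Once fitted to the measured portion of the curve, the same closed form supplies flow stresses at the much larger strains seen in roll forming, which is the data the unsupplemented simulation lacked.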
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (No. AA420060).
Abstract: In network-supported collaborative design, data processing plays a vital role. Much effort has been spent in this area, and many kinds of approaches have been proposed. Based on the relevant materials, this paper presents an extensible markup language (XML)-based strategy for several important data processing problems in network-supported collaborative design, such as representing the Standard for the Exchange of Product model data (STEP) with XML for product information expression, and managing XML documents using a relational database. The paper gives a detailed exposition of how to clarify the mapping between the XML structure and the relational database structure, and how XML-QL queries can be translated into structured query language (SQL) queries. Finally, the structure of an XML-based data processing system is presented.
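The general idea of storing XML documents in a relational database and answering path queries with SQL can be sketched with an edge-table mapping (a generic illustration; the document, schema, and query are hypothetical and are not the paper's STEP/XML-QL mapping):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Shred an XML document into one relational "edge table": each element becomes
# a row holding its parent id, tag, and text content.
doc = "<part><name>bolt</name><material>steel</material></part>"

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, text TEXT)")

def shred(elem, parent=None):
    cur = con.execute("INSERT INTO node (parent, tag, text) VALUES (?, ?, ?)",
                      (parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, cur.lastrowid)

shred(ET.fromstring(doc))

# A path query like /part/material then becomes a parent-child self-join in SQL.
row = con.execute("""
    SELECT c.text FROM node p JOIN node c ON c.parent = p.id
    WHERE p.tag = 'part' AND c.tag = 'material'
""").fetchone()
print(row[0])  # steel
```

Each additional path step in the XML query adds one more self-join, which is the essence of translating tree-pattern queries into SQL.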