This paper focuses on object-oriented data modelling in computer-aided design (CAD) databases. Starting with a discussion of the data modelling requirements of CAD applications, appropriate data modelling features are introduced. A feasible approach to selecting the "best" data model for an application is to analyze the data that has to be stored in the database. A data model is appropriate for modelling a given task if the information of the application environment can be easily mapped to the data model. Thus, the data involved are analyzed, and an object-oriented data model appropriate for CAD applications is derived. Based on a review of object-oriented techniques applied in CAD, object-oriented data modelling in CAD is addressed in detail. Finally, 3D geometrical data models and an implementation of their data model using the object-oriented method are presented.
In this paper, the authors present the development of a data modelling tool that visualizes the transformation of an "Entity-Relationship" Diagram (ERD) into a relational database schema. The focus is the design of a tool for educational purposes and its use in an e-learning database course. The tool presents two stages of database design. In the first stage, the learner draws the ERD graphically and validates it. In the second stage, the system automatically transforms the ERD into a relational database schema using common rules, so that the learner can more easily understand how to apply the theoretical material. A detailed description of the system's functionality and the conversion algorithm is given. Finally, the user interface and usage aspects are presented.
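The "common rules" for converting an ERD to a relational schema can be sketched in a few lines. The entity and relationship names below are hypothetical illustrations, not taken from the tool itself: each entity becomes a table, a 1:N relationship pushes the key of the "1" side into the "N" side as a foreign key, and an M:N relationship becomes a junction table of both keys.

```python
# Sketch of the common ERD-to-relational mapping rules; all entity and
# relationship names are illustrative assumptions.

def erd_to_schema(entities, relationships):
    """entities: {name: [attributes]}, first attribute is the key;
    relationships: (name, left, right, cardinality), cardinality '1:N' or 'M:N'."""
    tables = {name: list(attrs) for name, attrs in entities.items()}
    for name, left, right, card in relationships:
        if card == "1:N":
            # push the key of the '1' side into the 'N' side as a foreign key
            tables[right].append(entities[left][0])
        elif card == "M:N":
            # an M:N relationship becomes a junction table of both keys
            tables[name] = [entities[left][0], entities[right][0]]
    return tables

schema = erd_to_schema(
    {"Student": ["student_id", "name"], "Course": ["course_id", "title"],
     "Dept": ["dept_id", "dept_name"]},
    [("Enrolls", "Student", "Course", "M:N"),
     ("BelongsTo", "Dept", "Student", "1:N")],
)
print(schema["Enrolls"])   # ['student_id', 'course_id']  (junction table)
print(schema["Student"])   # ['student_id', 'name', 'dept_id']
```

A classroom tool would add rules for weak entities and multi-valued attributes; the two rules above cover the core of the transformation the abstract describes.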
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society.
However, it’s important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
Product data management (PDM) has been accepted as an important tool for the manufacturing industries. In recent years, more and more research has been conducted on the development of PDM, in areas including system design, integration of object-oriented technology, data distribution, collaborative and distributed manufacturing environments, security, and web-based integration. However, this research has limitations; in particular, it cannot cater for PDM in a distributed manufacturing environment. This is especially true in South China, where many Hong Kong (HK) manufacturers have moved their production plants to different locations in the Pearl River Delta for cost reduction while retaining their main offices in HK. Development of a PDM system is inherently complex. Product-related data cover product name, product part number (product identification), drawings, material specifications, dimension requirements, quality specifications, test results, lot size, production schedules, product data version and date of release, special tooling (e.g. jigs and fixtures), mould design, project engineer in charge, and cost spreadsheets, while process data include engineering release, engineering change information management, and other workflow related to process information. According to Cornelissen et al., a contemporary PDM system should contain management functions for structure, retrieval, release, change, and workflow. In system design, development, and implementation, a formal specification is necessary. However, there is no formal representation model for PDM systems. Therefore a graphical representation model is constructed to express the various scenarios of interaction between users and the PDM system.
Statecharts are then used to model the operations of the PDM system (Fig.1). The statechart model bridges the current gap between requirements, scenarios, and the initial design specifications of the PDM system. After properly analyzing the PDM system, a new distributed PDM (DPDM) system is proposed. Both graphical representation and statechart models are constructed for the new DPDM system (Fig.2). New product data of the DPDM and new system functions are then investigated to support product information flow in the new distributed environment. It is found that statecharts allow formal representations to capture the information and control flows of both PDM and DPDM. In particular, statecharts offer additional expressive power compared with conventional state transition diagrams, in terms of hierarchy, concurrency, history, and timing for DPDM behavioral modeling.
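A statechart ultimately compiles down to a transition relation over states and events. As a minimal flat sketch (omitting the hierarchy, concurrency, and history features the paper highlights), here is a hypothetical PDM document lifecycle; the states and events are illustrative, not taken from the paper's Fig.1 or Fig.2.

```python
# Minimal flat state-machine sketch of a hypothetical PDM document lifecycle.

TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "released",
    ("in_review", "reject"): "draft",
    ("released", "change_request"): "in_change",
    ("in_change", "approve"): "released",
}

def run(events, state="draft"):
    history = [state]
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # ignore invalid events
        history.append(state)
    return state, history

final, hist = run(["submit", "approve", "change_request", "approve"])
print(final)  # released
```

What a statechart adds over this plain transition table, per the abstract, is exactly hierarchy (nested states), concurrency (orthogonal regions), history (resuming a substate), and timing.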
Long-term navigation based on consumer-level wearable inertial sensors plays an essential role in various emerging fields, for instance smart healthcare, emergency rescue, and soldier positioning. The performance of existing long-term navigation algorithms is limited by the cumulative error of inertial sensors, disturbed local magnetic fields, and the complex motion modes of pedestrians. This paper develops a robust data and physical model dual-driven trajectory estimation (DPDD-TE) framework, which can be applied to long-term navigation tasks. A Bi-directional Long Short-Term Memory (Bi-LSTM) based quasi-static magnetic field (QSMF) detection algorithm is developed to extract useful magnetic observations for heading calibration, and another Bi-LSTM is adopted for walking speed estimation by considering hybrid human motion information over a specific time period. In addition, a data and physical model dual-driven multi-source fusion model is proposed to integrate basic INS mechanization with multi-level constraints and observations to maintain accuracy in long-term navigation tasks, enhanced by a loop detection algorithm assisted by magnetic and trajectory features. Real-world experiments indicate that the proposed DPDD-TE outperforms existing algorithms, with final estimated heading and positioning accuracy reaching 5° and less than 2 m, respectively, over a 30-min period.
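The intuition behind QSMF detection is that heading can only be calibrated from the magnetometer when the local field is steady. The paper learns this with a Bi-LSTM; as a much simpler stand-in (my assumption, not the paper's method), one can flag sliding windows whose magnetic-norm variance falls below a threshold. Window size and threshold here are illustrative.

```python
# Naive quasi-static magnetic field (QSMF) detector: flag low-variance windows
# of magnetometer norms as usable for heading calibration. A simple stand-in
# for the Bi-LSTM detector described in the paper; parameters are assumptions.

def qsmf_windows(mag_norms, win=5, var_thresh=0.01):
    flags = []
    for i in range(len(mag_norms) - win + 1):
        w = mag_norms[i:i + win]
        mean = sum(w) / win
        var = sum((x - mean) ** 2 for x in w) / win
        flags.append(var < var_thresh)  # True: window looks quasi-static
    return flags

steady = [50.0, 50.01, 49.99, 50.0, 50.02]           # undisturbed field (uT)
disturbed = steady + [55.0, 42.0, 58.0, 40.0, 57.0]  # near ferromagnetic objects
print(qsmf_windows(steady))          # [True]
print(qsmf_windows(disturbed)[-1])   # False
```

The learned detector replaces the fixed threshold with a sequence model, which is what lets it cope with gradual disturbances that a variance test misses.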
Airline passenger volume is an important reference for the implementation of aviation capacity and route adjustment plans. This paper explores the determinants of airline passenger volume and proposes a comprehensive panel data model for predicting volume. First, potential factors influencing airline passenger volume are analyzed from geo-economic and service-related aspects. Second, principal component analysis (PCA) is applied to identify the key factors that impact the airline passenger volume of city pairs. Then the panel data model is estimated using 120 sets of data, which are a collection of observations for multiple subjects at multiple instances. Finally, airline data from Chongqing to Shanghai, from 2003 to 2012, was used as a test case to verify the validity of the prediction model. Results show that railway and highway transportation carry a certain proportion of passenger volume, and that total retail sales of consumer goods in the departure and arrival cities are significantly associated with airline passenger volume. According to the validity test results, the prediction accuracies of the model for 10 sets of data are all greater than 90%. The model performs better than a multivariate regression model, thus helping airport operators decide which routes to adjust and which new routes to introduce.
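The two-step structure of the abstract (PCA to compress candidate factors, then a regression on the retained components) can be sketched as below. The data are synthetic and the component count is an assumption; the paper's actual panel estimator with subject effects is more involved than this plain pooled regression.

```python
import numpy as np

# Sketch: reduce candidate factors with PCA, then fit a linear model on the
# leading components. Synthetic data; 120 observations echoes the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))          # 120 observations, 6 candidate factors
y = X @ np.array([2.0, -1.0, 0.5, 0, 0, 0]) + rng.normal(scale=0.1, size=120)

Xc = X - X.mean(axis=0)                 # center before PCA
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                       # keep 3 principal components (assumed)

A = np.column_stack([np.ones(120), Z])  # intercept + components
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 3))                     # in-sample fit of the reduced model
```

A real panel data model would additionally include entity (city-pair) and time effects, which is what distinguishes it from the multivariate regression baseline the paper compares against.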
This study proposes the use of the MERISE conceptual data model to create indicators for monitoring and evaluating the effectiveness of vocational training in the Republic of Congo. The importance of MERISE for structuring and analyzing data is underlined, as it enables the measurement of the adequacy between training and the needs of the labor market. The innovation of the study lies in the adaptation of the MERISE model to the local context, the development of innovative indicators, and the integration of a participatory approach including all relevant stakeholders. Contextual adaptation and local innovation: The study suggests adapting MERISE to the specific context of the Republic of Congo, considering the local particularities of the labor market. Development of innovative indicators and new measurement tools: It proposes creating indicators to assess skills matching and employer satisfaction, which are crucial for evaluating the effectiveness of vocational training. Participatory approach and inclusion of stakeholders: The study emphasizes actively involving training centers, employers, and recruitment agencies in the evaluation process. This participatory approach ensures that the perspectives of all stakeholders are considered, leading to more relevant and practical outcomes. Using the MERISE model allows for:
• Rigorous data structuring, organization, and standardization: Clearly defining entities and relationships facilitates data organization and standardization, crucial for effective data analysis.
• Facilitation of monitoring, analysis, and relevant indicators: Developing both quantitative and qualitative indicators helps measure the effectiveness of training in relation to the labor market, allowing for a comprehensive evaluation.
• Improved communication and common language: By providing a common language for different stakeholders, MERISE enhances communication and collaboration, ensuring that all parties have a shared understanding.
The study's approach and contribution to existing research lie in:
• Structured theoretical and practical framework and holistic approach: The study offers a structured framework for data collection and analysis, covering both quantitative and qualitative aspects, thus providing a comprehensive view of the training system.
• Reproducible methodology and international comparison: The proposed methodology can be replicated in other contexts, facilitating international comparison and the adoption of best practices.
• Extension of knowledge and new perspective: By integrating a participatory approach and developing indicators adapted to local needs, the study extends existing research and offers new perspectives on vocational training evaluation.
Building model data organization is often programmed to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the Industry Foundation Classes (IFC) building information modeling standard, the City Geography Markup Language (CityGML), the Indoor Geography Markup Language (IndoorGML), and other models are compared and analyzed. CityGML and IndoorGML face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. It is proposed to combine the semantic information of model objects to effectively partition and organize indoor and outdoor spatial 3D model data and to construct an indoor and outdoor data organization mechanism of "chunk-layer-subobject-entrance-area-detail object." This method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
To improve the performance of traditional map matching algorithms in freeway traffic state monitoring systems that use low-logging-frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive GPS positioning points and freeway networks, builds an N-shortest-path algorithm to find reasonable candidate routes between GPS positioning points efficiently, and uses a fuzzy logic inference system to determine the final matched traveling route. In an implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
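Before candidate routes can be scored by fuzzy inference, each GPS fix must first be snapped to nearby road geometry. The sketch below shows only that geometric first step, perpendicular-distance snapping to the nearest segment; the coordinates are illustrative and the Oracle network model, N-shortest-path search, and fuzzy scoring described in the abstract are not reproduced here.

```python
# Snap a GPS point to the nearest road segment by perpendicular distance.
# Illustrative geometry only; the paper's candidate scoring uses fuzzy logic.

def snap(point, segments):
    def dist2(p, a, b):
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))              # clamp to the segment
        cx, cy = ax + t * dx, ay + t * dy       # closest point on the segment
        return (px - cx) ** 2 + (py - cy) ** 2
    return min(segments, key=lambda s: dist2(point, *s))

roads = [((0, 0), (10, 0)),   # east-west freeway
         ((0, 5), (10, 5))]   # parallel frontage road
print(snap((4, 1), roads))    # ((0, 0), (10, 0))
```

With low logging frequencies, consecutive fixes may be several links apart, which is why the algorithm then searches N shortest candidate routes between snapped points rather than trusting a single nearest-segment match.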
The concept of multilevel security (MLS) is commonly used in the study of data models for secure databases. But there are limitations in the basic MLS model, such as inference channels. The availability and data integrity of the system are seriously constrained by the 'No Read Up, No Write Down' property of the basic MLS model. In order to eliminate covert channels, polyinstantiation and cover stories are used in the new data model. The read and write rules have been redefined to improve the agility and usability of a system based on the MLS model. Together, these methods make the improved data model more secure, agile, and usable.
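The basic rules the abstract builds on can be sketched directly: 'No Read Up, No Write Down' as level comparisons, and polyinstantiation as one tuple per (key, security level), with the low-level tuple serving as the cover story. The levels and relation contents below are illustrative, not the paper's.

```python
# Sketch of the basic MLS rules plus polyinstantiation with a cover story.
# Security levels and relation contents are illustrative assumptions.

LEVELS = {"unclassified": 0, "secret": 1, "top_secret": 2}

def can_read(subject, obj):   # simple security property: no read up
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):  # *-property of the basic model: no write down
    return LEVELS[subject] <= LEVELS[obj]

# Polyinstantiated relation: the same key holds a cover story at a low level.
flights = {("F101", "unclassified"): {"dest": "training run"},
           ("F101", "secret"): {"dest": "forward base"}}

def select(subject, key):
    visible = [lvl for (k, lvl) in flights if k == key and can_read(subject, lvl)]
    best = max(visible, key=lambda l: LEVELS[l])  # highest readable instance
    return flights[(key, best)]

print(select("unclassified", "F101"))  # {'dest': 'training run'}  (cover story)
print(select("secret", "F101"))        # {'dest': 'forward base'}
```

The point of polyinstantiation is visible in the two queries: a low-cleared subject gets a consistent answer instead of a suspicious "access denied", which is how the covert channel is closed.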
A uniform metadata representation is introduced for heterogeneous databases, multimedia information, and other information sources. Some features of metadata are analyzed, and the limitations of existing metadata models are compared with the new one. The metadata model is described in XML, which is well suited for metadata denotation and exchange. Well-structured data, semi-structured data, and exterior file data without structure are all described in the metadata model. The model provides feasibility and extensibility for constructing a uniform metadata model for data warehouses.
BACKGROUND Chronic hepatitis B often progresses silently toward hepatocellular carcinoma (HCC), a leading cause of mortality worldwide. Early detection of HCC is crucial, yet challenging. AIM To investigate the role of dynamic changes in the alkaline phosphatase to prealbumin ratio (APR) in the progression of hepatitis B to HCC. METHODS Data from 4843 patients with hepatitis B (January 2015 to January 2024) were analyzed. HCC incidence rates in males and females were compared using the log-rank test. Data were evaluated using Kaplan-Meier analysis. A linear mixed-effects model was applied to track the fluctuation of APR levels over time. Furthermore, joint modeling of longitudinal and survival data was employed to investigate the temporal relationship between APR and HCC risk. RESULTS The incidence of HCC was higher in males. To satisfy the model's normality assumption, this study applied a logarithmic transformation to APR, yielding a ratio. Ratio levels were higher in females (t = 5.26, P < 0.01). A 1-unit increase in ratio correlated with a 2.005-fold higher risk of HCC in males (95%CI: 1.653-2.431) and a 2.273-fold higher risk in females (95%CI: 1.620-3.190). CONCLUSION Males are more prone to HCC, while females have higher APR levels. Although APR at baseline showed no link to HCC, a rising APR indicates a higher HCC risk.
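The Kaplan-Meier estimator used in the study is simple enough to compute by hand: at each event time, survival is multiplied by one minus the fraction of at-risk patients who had the event. The tiny cohort below is synthetic (times in months, event=1 means HCC diagnosed, 0 means censored), purely to show the mechanics.

```python
# Hand-rolled Kaplan-Meier estimator on a tiny synthetic cohort.

def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for x, e in data if x == t and e == 1)  # events at t
        n_t = sum(1 for x, _ in data if x >= t)                # still at risk
        if deaths:
            surv *= 1 - deaths / n_t
            curve.append((t, surv))
        i += sum(1 for x, _ in data if x == t)                 # skip ties
    return curve

curve = kaplan_meier([2, 3, 3, 5, 8, 8], [1, 1, 0, 1, 0, 0])
print(curve)  # [(2, 0.833…), (3, 0.667…), (5, 0.444…)]
```

Censored subjects (event=0) leave the risk set without dropping the curve, which is exactly what distinguishes this estimator from a naive event-rate calculation; the study's joint model then links this survival process to the longitudinal APR trajectory.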
To manipulate heterogeneous and distributed data better in the data grid, a dataspace management framework for grid data is proposed based on in-depth research on grid technology. Combining dataspace management technologies, such as the data model iDM and the query language iTrails, with the grid data access middleware OGSA-DAI, a grid dataspace management prototype system is built, in which tasks like data access, abstraction, indexing, service management, and query answering are implemented as OGSA-DAI workflows. Experimental results show that it is feasible to apply a dataspace management mechanism to the grid environment. Dataspace meets grid data management needs in that it hides the heterogeneity and distribution of grid data and can adapt to the dynamic characteristics of the grid. The proposed grid dataspace management provides a new method for grid data management.
This is the first of a three-part series of papers which introduces a general background for building trajectory-oriented road network data models, including motivation, related works, and basic concepts. The purpose of the series is to develop a trajectory-oriented road network data model, namely the carriageway-based road network data model (CRNM). Part 1 deals with the modeling background. Part 2 proposes the principle and architecture of the CRNM. Part 3 investigates the implementation of the CRNM in a case study. In the present paper, the challenges of managing trajectory data are discussed. Then, developing trajectory-oriented road network data models is proposed as a solution, and existing road network data models are reviewed. Basic approaches to representing a road network are introduced, as well as its constitution.
This is the second of a three-part series of papers which presents the principle and architecture of the CRNM, a trajectory-oriented, carriageway-based road network data model. The first part of the series introduced the general background of building trajectory-oriented road network data models, including motivation, related works, and basic concepts. Building on it, this paper describes the CRNM in detail. First, the notion of a basic roadway entity is proposed and discussed. Second, the carriageway is selected as the basic roadway entity after comparison with other kinds of roadway, and approaches to representing other roadways with carriageways are introduced. Finally, an overall architecture of the CRNM is proposed.
This is the final of a three-part series of papers, which mainly discusses the implementation issues of the CRNM. The first two papers in the series introduced the modeling background and methodology, respectively, and an overall architecture of the CRNM was proposed in the second paper. On the basis of the above discussion, a linear reference method (LRM) for providing spatial references for the location points of a trajectory is developed. A case study illustrating the application of the CRNM to modeling a real-world road network is given, and a comprehensive conclusion closes the series of papers.
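The core idea of a linear reference method is that a location is addressed as a distance ("measure") along a carriageway centreline rather than by raw x/y coordinates. A minimal sketch of the measure-to-coordinate direction, with illustrative geometry (not the paper's LRM design):

```python
# Linear referencing sketch: convert a measure along a polyline centreline
# into an x/y point. Geometry is illustrative.

def locate(polyline, measure):
    """Return the point lying `measure` units along the polyline."""
    remaining = measure
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if remaining <= seg:
            t = remaining / seg
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        remaining -= seg
    return polyline[-1]  # a measure past the end clamps to the last vertex

centreline = [(0, 0), (3, 4), (3, 10)]   # two segments: length 5, then 6
print(locate(centreline, 8))             # (3.0, 7.0)
```

Storing trajectory points as (carriageway id, measure) pairs in this way is what lets the CRNM keep trajectories attached to the network even when the centreline geometry is refined.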
Hydrocarbon production from shale has attracted much attention in recent years. When applied to these prolific and hydrocarbon-rich resource plays, our understanding of the complexities of the flow mechanism (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition, and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well log, completion, and hydraulic fracturing data to guide our model and determine its behavior. The uniqueness of this technology is that it incorporates the so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracture process. The "hard data" refers to field measurements during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, as well as proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as frac length, width, height, and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths, and reservoir properties.
The full field history matching process was successfully completed using this data-driven approach, thus capturing the production behavior with acceptable accuracy for individual wells and for the entire asset.
The parametric temporal data model captures a real-world entity in a single tuple, which reduces query language complexity. Such a data model, however, is difficult to implement on top of conventional databases because of its unfixed attribute sizes. XML is a mature technology and can be an elegant solution to this challenge. Representing data in XML triggers a question about storage efficiency. The goal of this work is to provide a straightforward answer to that question. To this end, we compare three different storage models for the parametric temporal data model and show that XML is no worse than the other approaches. Furthermore, XML outperforms the other storages under certain conditions. Therefore, our simulation results provide a positive indication that the myth about XML is not true for the parametric temporal data model.
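Why XML sidesteps the "unfixed attribute size" problem is easy to see in miniature: each attribute of the single entity tuple can carry as many validity intervals as it needs, nested under one element. The element names and values below are illustrative assumptions, not the paper's schema.

```python
import xml.etree.ElementTree as ET

# One way to serialise a parametric temporal tuple as XML: the entity is a
# single element and each attribute holds its own list of validity intervals.
# Names and values are illustrative.

emp = ET.Element("employee", id="e1")
salary = ET.SubElement(emp, "salary")
for start, end, value in [("2001", "2003", "30000"), ("2003", "2007", "45000")]:
    iv = ET.SubElement(salary, "interval", start=start, end=end)
    iv.text = value

xml_text = ET.tostring(emp, encoding="unicode")
print(xml_text)  # one nested document: <employee id="e1"><salary>…</salary></employee>
```

A relational storage of the same tuple would need either a variable number of columns or a separate interval table per attribute, which is exactly the implementation difficulty the abstract mentions.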
Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. First, a multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is introduced, which is suitable for integrating different types of schemas. Then an approach to schema mapping based on XIDM in multidatabase systems is presented. The mappings include global mappings, dealing with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, processing the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are also discussed. The implementation results demonstrate that XIDM is an efficient model for managing multiple heterogeneous data sources and that the XIDM-based schema mapping approaches behave very well when integrating relational and object-oriented database systems and other file systems.
Mathematical hydraulic system models established from physical laws usually involve complex modelling processes and suffer low reliability and practicality caused by large uncertainties. Instead, a novel modelling method for a highly nonlinear system, a hydraulic excavator, is presented. Based on data collected in experiments driving the excavator's arms, a data-based excavator dynamic model is established using Simplified Refined Instrumental Variable (SRIV) identification and estimation algorithms. The validity of the proposed data-based model is indirectly demonstrated by the performance of computer simulations and real machine motion control experiments.
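The data-based idea, fitting a dynamic model directly from recorded input/output signals instead of deriving it from physics, can be illustrated with a much simpler estimator than SRIV (which additionally uses instrumental variables and prefiltering to handle noise). Below, a first-order ARX model y[k] = a·y[k-1] + b·u[k-1] is fitted by ordinary least squares to synthetic, noise-free data; everything here is an illustrative assumption.

```python
import numpy as np

# Simplified stand-in for data-based identification: fit a first-order ARX
# model to input/output records by least squares. Synthetic "true" system.

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 200)                # excitation input (e.g. valve signal)
y = np.zeros(200)
for k in range(1, 200):                     # simulate y[k] = 0.9 y[k-1] + 0.5 u[k-1]
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]

Phi = np.column_stack([y[:-1], u[:-1]])     # regressor matrix from the records
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(round(a_hat, 3), round(b_hat, 3))     # ≈ 0.9 0.5
```

With measurement noise, plain least squares on lagged outputs becomes biased; that bias is precisely what the instrumental-variable machinery of SRIV is designed to remove.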
文摘An object oriented data modelling in computer aided design (CAD) databases is focused. Starting with the discussion of data modelling requirements for CAD applications, appropriate data modelling features are introduced herewith. A feasible approach to select the “best” data model for an application is to analyze the data which has to be stored in the database. A data model is appropriate for modelling a given task if the information of the application environment can be easily mapped to the data model. Thus, the involved data are analyzed and then object oriented data model appropriate for CAD applications are derived. Based on the reviewed object oriented techniques applied in CAD, object oriented data modelling in CAD is addressed in details. At last 3D geometrical data models and implementation of their data model using the object oriented method are presented.
文摘In this paper, the authors present the development of a data modelling tool that visualizes the transformation process of an "Entity-Relationship" Diagram (ERD) into a relational database schema. The authors' focus is the design of a tool for educational purposes and its implementation on e-learning database course. The tool presents two stages of database design. The first stage is to draw ERD graphically and validate it. The drawing is done by a learner. Then at second stage, the system enables automatically transformation of ERD to relational database schema by using common rules. Thus, the learner could understand more easily how to apply the theoretical material. A detailed description of system functionalities and algorithm for the conversion are proposed. Finally, a user interface and usage aspects are exposed.
Abstract: Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. However, it is important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis tools such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
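One elementary form of the risk-factor quantification this abstract describes is an odds ratio computed from a 2x2 contingency table. The counts and the factor (family history of diabetes) below are invented for illustration and do not come from the 600-patient dataset:

```python
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """Odds ratio = (a/b) / (c/d) for a standard 2x2 exposure table."""
    return (exposed_cases / exposed_controls) / \
           (unexposed_cases / unexposed_controls)

# Hypothetical counts: exposure = family history, outcome = GDM diagnosis
# exposed: 40 cases / 60 controls; unexposed: 50 cases / 450 controls
or_family_history = odds_ratio(40, 60, 50, 450)  # (40/60)/(50/450) = 6.0
```

An odds ratio above 1 flags the factor as positively associated with the outcome; a full analysis would add confidence intervals and multivariable adjustment.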
Abstract: Product data management (PDM) has been accepted as an important tool for the manufacturing industries. In recent years, more and more research has been conducted on the development of PDM, covering system design, integration of object-oriented technology, data distribution, collaborative and distributed manufacturing environments, security, and web-based integration. However, this research has limitations; in particular, it cannot cater for PDM in a distributed manufacturing environment. This is especially true in South China, where many Hong Kong (HK) manufacturers have moved their production plants to different locations in the Pearl River Delta for cost reduction while retaining their main offices in HK. Development of a PDM system is inherently complex. Product-related data cover product name, product part number (product identification), drawings, material specifications, dimension requirements, quality specifications, test results, lot size, production schedules, product data version and date of release, special tooling (e.g. jigs and fixtures), mould design, the project engineer in charge, and cost spreadsheets, while process data include engineering release, engineering change information management, and other workflow related to the process information. According to Cornelissen et al., a contemporary PDM system should contain management functions for structure, retrieval, release, change, and workflow. In system design, development and implementation, a formal specification is necessary. However, there is no formal representation model for PDM systems. Therefore a graphical representation model is constructed to express the various scenarios of interaction between users and the PDM system. Statecharts are then used to model the operations of the PDM system (Fig.1). The statechart model bridges the current gap between requirements, scenarios, and the initial design specifications of the PDM system.
After properly analyzing the PDM system, a new distributed PDM (DPDM) system is proposed. Both graphical representation and statechart models are constructed for the new DPDM system (Fig.2). New product data of the DPDM and new system functions are then investigated to support product information flow in the new distributed environment. It is found that statecharts allow formal representations to capture the information and control flows of both PDM and DPDM. In particular, statecharts offer additional expressive power, compared to conventional state transition diagrams, in terms of hierarchy, concurrency, history, and timing for DPDM behavioral modeling.
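A flat state-transition core of the kind a statechart refines can be prototyped in a few lines. The states, events, and transitions below sketch a generic engineering-change workflow; they are hypothetical and are not the PDM/DPDM models of Fig.1 and Fig.2:

```python
# Transition table: (current_state, event) -> next_state
TRANSITIONS = {
    ("draft", "submit"): "under_review",
    ("under_review", "approve"): "released",
    ("under_review", "reject"): "draft",
    ("released", "change_request"): "under_review",
}

def step(state, event):
    """Fire a transition; events with no rule leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "draft"
for event in ["submit", "approve", "change_request"]:
    state = step(state, event)
# draft -> under_review -> released -> under_review
```

What statecharts add over this flat diagram, as the abstract notes, is hierarchy (nested states), concurrency (orthogonal regions), history, and timing.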
Abstract: Long-term navigation based on consumer-level wearable inertial sensors plays an essential role in various emerging fields, for instance smart healthcare, emergency rescue, and soldier positioning. The performance of existing long-term navigation algorithms is limited by the cumulative error of inertial sensors, disturbed local magnetic fields, and the complex motion modes of the pedestrian. This paper develops a robust data and physical model dual-driven trajectory estimation (DPDD-TE) framework, which can be applied to long-term navigation tasks. A Bi-directional Long Short-Term Memory (Bi-LSTM) based quasi-static magnetic field (QSMF) detection algorithm is developed to extract useful magnetic observations for heading calibration, and another Bi-LSTM is adopted for walking speed estimation by considering hybrid human motion information over a specific time period. In addition, a data and physical model dual-driven multi-source fusion model is proposed to integrate basic INS mechanization with multi-level constraints and observations to maintain accuracy over long-term navigation tasks, enhanced by a loop detection algorithm assisted by magnetic and trajectory features. Real-world experiments indicate that the proposed DPDD-TE outperforms existing algorithms, with final heading and positioning accuracy reaching 5° and less than 2 m, respectively, over a 30 min period.
Funding: The National Natural Science Foundation of China (No. U1564201 and No. U51675235).
Abstract: Airline passenger volume is an important reference for the implementation of aviation capacity and route adjustment plans. This paper explores the determinants of airline passenger volume and proposes a comprehensive panel data model for predicting volume. First, potential factors influencing airline passenger volume are analyzed from geo-economic and service-related aspects. Second, principal component analysis (PCA) is applied to identify the key factors that impact the airline passenger volume of city pairs. The panel data model is then estimated using 120 sets of data, which are a collection of observations for multiple subjects at multiple instances. Finally, airline data from Chongqing to Shanghai, from 2003 to 2012, are used as a test case to verify the validity of the prediction model. Results show that railway and highway transportation absorb a certain proportion of passenger volume, and that total retail sales of consumer goods in the departure and arrival cities are significantly associated with airline passenger volume. According to the validity test results, the prediction accuracies of the model for 10 sets of data are all greater than 90%. The model performs better than a multivariate regression model, thus helping airport operators decide which routes to adjust and which new routes to introduce.
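The PCA screening step this abstract describes can be sketched as an eigendecomposition of the factor correlation matrix. The synthetic factors below (GDP, retail sales, rail share) are invented stand-ins, not the paper's 120 observation sets:

```python
import numpy as np

# Synthetic factor data: retail sales track GDP, rail share is independent
rng = np.random.default_rng(0)
gdp = rng.normal(size=200)
retail = 0.9 * gdp + rng.normal(scale=0.3, size=200)
rail_share = rng.normal(size=200)
X = np.column_stack([gdp, retail, rail_share])

# PCA via the correlation matrix: leading components capture shared variance
corr = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, descending
explained = eigvals / eigvals.sum()        # explained-variance ratios
```

Because GDP and retail are strongly correlated, the first component absorbs most of the variance, which is exactly the redundancy PCA is used to collapse before fitting the panel model.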
Abstract: This study proposes the use of the MERISE conceptual data model to create indicators for monitoring and evaluating the effectiveness of vocational training in the Republic of Congo. The importance of MERISE for structuring and analyzing data is underlined, as it enables measurement of the fit between training and the needs of the labor market. The innovation of the study lies in the adaptation of the MERISE model to the local context, the development of innovative indicators, and the integration of a participatory approach including all relevant stakeholders. Contextual adaptation and local innovation: the study suggests adapting MERISE to the specific context of the Republic of Congo, considering the local particularities of the labor market. Development of innovative indicators and new measurement tools: it proposes creating indicators to assess skills matching and employer satisfaction, which are crucial for evaluating the effectiveness of vocational training. Participatory approach and inclusion of stakeholders: the study emphasizes actively involving training centers, employers, and recruitment agencies in the evaluation process. This participatory approach ensures that the perspectives of all stakeholders are considered, leading to more relevant and practical outcomes. Using the MERISE model allows for: • Rigorous data structuring, organization, and standardization: clearly defining entities and relationships facilitates data organization and standardization, which is crucial for effective data analysis. • Facilitation of monitoring, analysis, and relevant indicators: developing both quantitative and qualitative indicators helps measure the effectiveness of training in relation to the labor market, allowing for a comprehensive evaluation. • Improved communication and a common language: by providing a common language for the different stakeholders, MERISE enhances communication and collaboration, ensuring that all parties share a common understanding.
The study’s approach and contribution to existing research lie in: • Structured theoretical and practical framework and holistic approach: The study offers a structured framework for data collection and analysis, covering both quantitative and qualitative aspects, thus providing a comprehensive view of the training system. • Reproducible methodology and international comparison: The proposed methodology can be replicated in other contexts, facilitating international comparison and the adoption of best practices. • Extension of knowledge and new perspective: By integrating a participatory approach and developing indicators adapted to local needs, the study extends existing research and offers new perspectives on vocational training evaluation.
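One of the skills-matching indicators the study proposes could be computed as simply as the sketch below. The record structure and the data are hypothetical illustrations of entities a MERISE conceptual model might define, not the study's actual schema:

```python
# Hypothetical graduate records, as might be extracted from entities
# defined in a MERISE conceptual data model
graduates = [
    {"id": 1, "trained_skill": "welding",   "employed_in_field": True},
    {"id": 2, "trained_skill": "carpentry", "employed_in_field": False},
    {"id": 3, "trained_skill": "welding",   "employed_in_field": True},
    {"id": 4, "trained_skill": "plumbing",  "employed_in_field": True},
]

def skills_matching_rate(records):
    """Share of graduates employed in the field they were trained for."""
    matched = sum(1 for r in records if r["employed_in_field"])
    return matched / len(records)

rate = skills_matching_rate(graduates)  # 3 of 4 graduates matched
```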
Abstract: Building model data is often organized to solve a specific problem, resulting in an inability to organize indoor and outdoor 3D scenes in an integrated manner. In this paper, existing building spatial data models are studied, and the characteristics of the building information modeling standard (IFC), the City Geography Markup Language (CityGML), the Indoor Geography Markup Language (IndoorGML), and other models are compared and analyzed. The CityGML and IndoorGML models face challenges in satisfying diverse application scenarios and requirements due to limitations in their expressive capabilities. It is proposed to use the semantic information of model objects to effectively partition and organize indoor and outdoor spatial 3D model data, and to construct an indoor and outdoor data organization mechanism of "chunk-layer-subobject-entrances-area-detail object." The method is verified by proposing a 3D data organization method for indoor and outdoor space and constructing a 3D visualization system based on it.
Abstract: To improve the performance of traditional map matching algorithms in freeway traffic state monitoring systems that use low-logging-frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive numbers of GPS positioning points and freeway networks, builds an N-shortest-path algorithm to find reasonable candidate routes between GPS positioning points efficiently, and uses a fuzzy logic inference system to determine the final matched traveling route. In an implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
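The shortest-path search that underlies the N-shortest-path candidate generation described above can be illustrated with Dijkstra's algorithm on a toy graph. The real algorithm runs against the Oracle road network model; the adjacency structure and weights here are invented:

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm over an adjacency dict {node: [(nbr, cost), ...]}."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")  # goal unreachable

# Toy directed road graph (segment costs in arbitrary units)
road_graph = {"A": [("B", 2.0), ("C", 5.0)],
              "B": [("C", 1.0), ("D", 4.0)],
              "C": [("D", 1.0)]}
cost = shortest_path_cost(road_graph, "A", "D")  # A-B-C-D = 4.0
```

An N-shortest-path variant would keep the N best labels per node instead of one, yielding several candidate routes for the fuzzy inference stage to rank.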
Abstract: The concept of multilevel security (MLS) is commonly used in the study of data models for secure databases. But there are limitations in the basic MLS model, such as inference channels. The availability and data integrity of the system are seriously constrained by the model's 'No Read Up, No Write Down' property. In order to eliminate covert channels, polyinstantiation and cover stories are used in the new data model. The read and write rules have been redefined to improve the agility and usability of a system based on the MLS model. Together, these methods make the improved data model more secure, agile and usable.
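The basic 'No Read Up, No Write Down' rules the abstract refers to are the Bell-LaPadula simple-security and *-properties, which can be stated in a few lines. The level names below are the conventional ones, not taken from the paper:

```python
# Ordered classification levels (conventional names, illustrative only)
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    """Simple-security property: no read up -
    a subject may not read objects classified above its own level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """*-property: no write down -
    a subject may not write to objects classified below its own level."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

It is exactly the second rule that constrains availability: a high-level subject cannot update low-level data at all, which motivates the redefined read/write rules, polyinstantiation, and cover stories in the improved model.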
Abstract: A uniform metadata representation is introduced for heterogeneous databases, multimedia information and other information sources. Some features of metadata are analyzed, and the limitations of existing metadata models are compared with the new one. The metadata model is described in XML, which is well suited to metadata denotation and exchange. Well-structured data, semi-structured data and unstructured external file data are all described in the metadata model. The model provides feasibility and extensibility for constructing a uniform metadata model for data warehouses.
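A flavor of describing one data source's metadata in XML, in the spirit of the uniform model above, can be given with the standard library. The element and attribute names are hypothetical, not the paper's actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata for one well-structured (relational) source
source = ET.Element("source", attrib={"type": "relational"})
ET.SubElement(source, "name").text = "sales_db"
schema = ET.SubElement(source, "schema")
ET.SubElement(schema, "field", attrib={"name": "order_id", "dtype": "int"})
ET.SubElement(schema, "field", attrib={"name": "amount", "dtype": "decimal"})

xml_text = ET.tostring(source, encoding="unicode")
```

The same envelope could carry a `type="file"` source with no `schema` child, which is how an XML metadata model accommodates structured and unstructured sources uniformly.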
Abstract: BACKGROUND Chronic hepatitis B often progresses silently toward hepatocellular carcinoma (HCC), a leading cause of mortality worldwide. Early detection of HCC is crucial, yet challenging. AIM To investigate the role of dynamic changes in the alkaline phosphatase to prealbumin ratio (APR) in the progression of hepatitis B to HCC. METHODS Data from 4843 patients with hepatitis B (January 2015 to January 2024) were analyzed. HCC incidence rates in males and females were compared using the log-rank test. Data were evaluated using Kaplan–Meier analysis. A linear mixed-effects model was applied to track the fluctuation of APR levels over time. Furthermore, joint modeling of longitudinal and survival data was employed to investigate the temporal relationship between APR and HCC risk. RESULTS The incidence of HCC was higher in males. To satisfy the model's normality assumption, this study applied a logarithmic transformation to APR, yielding a log-transformed ratio. Ratio levels were higher in females (t = 5.26, P < 0.01). A 1-unit increase in the ratio correlated with a 2.005-fold higher risk of HCC in males (95%CI: 1.653-2.431) and a 2.273-fold higher risk in females (95%CI: 1.620-3.190). CONCLUSION Males are more prone to HCC, while females have higher APR levels. Despite no link at baseline, a rising APR indicates a higher HCC risk.
Abstract: To better manipulate the heterogeneous and distributed data in a data grid, a dataspace management framework for grid data is proposed, based on in-depth research on grid technology. Combining dataspace management technologies, such as the data model iDM and the query language iTrails, with the grid data access middleware OGSA-DAI, a grid dataspace management prototype system is built, in which tasks such as data access, abstraction, indexing, services management and query answering are implemented by OGSA-DAI workflows. Experimental results show that it is feasible to apply a dataspace management mechanism to the grid environment. Dataspace meets grid data management needs in that it hides the heterogeneity and distribution of grid data and can adapt to the dynamic characteristics of the grid. The proposed grid dataspace management provides a new method for grid data management.
Abstract: This is the first of a three-part series of papers which introduces the general background of building trajectory-oriented road network data models, including motivation, related works, and basic concepts. The purpose of the series is to develop a trajectory-oriented road network data model, namely the carriageway-based road network data model (CRNM). Part 1 deals with the modeling background. Part 2 proposes the principle and architecture of the CRNM. Part 3 investigates the implementation of the CRNM in a case study. In the present paper, the challenges of managing trajectory data are discussed. Developing trajectory-oriented road network data models is then proposed as a solution, and existing road network data models are reviewed. Basic approaches to representing a road network, as well as its constitution, are also introduced.
Abstract: This is the second of a three-part series of papers which presents the principle and architecture of the CRNM, a trajectory-oriented, carriageway-based road network data model. The first part of the series introduced the general background of building trajectory-oriented road network data models, including motivation, related works, and basic concepts. Building on that background, this paper describes the CRNM in detail. First, the notion of a basic roadway entity is proposed and discussed. Second, the carriageway is selected as the basic roadway entity after comparison with other kinds of roadway, and approaches to representing other roadways with carriageways are introduced. Finally, an overall architecture of the CRNM is proposed.
Abstract: This is the final paper of a three-part series which mainly discusses the implementation issues of the CRNM. The first two papers in the series introduced the modeling background and methodology, respectively, and an overall architecture of the CRNM was proposed in the second paper. On that basis, a linear reference method (LRM) for providing spatial references for the location points of a trajectory is developed. A case study illustrating the application of the CRNM to modeling a real-world road network is given, and a comprehensive conclusion closes the series of papers.
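The core of a linear reference method like the one this paper develops is locating a point by a distance measure along a carriageway centerline rather than by raw coordinates. The sketch below interpolates a position along a polyline; the polyline itself is invented, not taken from the case study:

```python
def locate(polyline, measure):
    """Interpolate the (x, y) coordinates at a given distance
    travelled along a polyline of (x, y) vertices."""
    travelled = 0.0
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        seg = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if travelled + seg >= measure:
            t = (measure - travelled) / seg  # fraction along this segment
            return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
        travelled += seg
    return polyline[-1]  # measure beyond the end: clamp to the terminus

# Hypothetical carriageway: 10 m east, then 10 m north
carriageway = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
point = locate(carriageway, 15.0)  # 15 m along: (10.0, 5.0)
```

Storing trajectory points as (carriageway id, measure) pairs keeps them attached to the road network even when the underlying geometry is revised.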
Funding: RPSEA and the U.S. Department of Energy partially funded this study.
Abstract: Hydrocarbon production from shale has attracted much attention in recent years. When applied to these prolific and hydrocarbon-rich resource plays, our understanding of the complexities of the flow mechanism (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well log, completion and hydraulic fracturing data to guide our model and determine its behavior. The uniqueness of this technology is that it incorporates so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracturing process. "Hard data" refers to field measurements taken during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, and proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as fracture length, width, height and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths and reservoir properties. The full-field history matching process was successfully completed using this data-driven approach, capturing the production behavior with acceptable accuracy for individual wells and for the entire asset.
Funding: Supported by the National Research Foundation of Korea through contract N-12-NM-IR05.
Abstract: The parametric temporal data model captures a real-world entity in a single tuple, which reduces query language complexity. Such a data model, however, is difficult to implement on top of conventional databases because of its unfixed attribute sizes. XML is a mature technology and can be an elegant solution to this challenge. Representing data in XML, however, raises a question about storage efficiency. The goal of this work is to provide a straightforward answer to that question. To this end, we compare three different storage models for the parametric temporal data model and show that XML is no worse than the other approaches. Furthermore, XML outperforms the other storage models under certain conditions. Our simulation results therefore provide a positive indication that the myth about XML does not hold for the parametric temporal data model.
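The "unfixed attribute size" that makes the parametric model awkward for relational storage maps naturally onto XML's nested structure: one element per entity, with each attribute carrying its full timestamped history as child elements. The entity, attribute, and tag names below are illustrative, not the paper's storage schema:

```python
import xml.etree.ElementTree as ET

# One parametric temporal tuple: a single employee element whose salary
# attribute holds an arbitrarily long history of valid-time periods
emp = ET.Element("employee", attrib={"id": "e1"})
salary = ET.SubElement(emp, "salary")
for start, end, value in [("2001", "2003", "50000"),
                          ("2003", "now", "62000")]:
    ET.SubElement(salary, "period",
                  attrib={"from": start, "to": end}).text = value

# Recover the attribute's history from the document
history = [(p.get("from"), p.get("to"), p.text) for p in salary]
```

A relational encoding would instead split this entity across several rows (one per period), which is exactly the mismatch with the one-entity-one-tuple principle that motivates the XML storage comparison.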
Abstract: Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. A multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is first introduced; it is suitable for integrating different types of schemas. An approach to schema mapping based on the XIDM in multidatabase systems is then presented. The mappings include global mappings, dealing with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, processing the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are also discussed. The implementation results demonstrate that the XIDM is an efficient model for managing multiple heterogeneous data sources and that the schema mapping approaches based on the XIDM behave very well when integrating relational and object-oriented database systems and other file systems.
Abstract: Instead of establishing mathematical hydraulic system models from physical laws, which usually entails complex modelling processes and suffers from low reliability and practicality caused by large uncertainties, a novel modelling method for the highly nonlinear system of a hydraulic excavator is presented. Based on data collected in experiments driving the excavator's arms, a data-based excavator dynamic model is established using Simplified Refined Instrumental Variable (SRIV) identification and estimation algorithms. The validity of the proposed data-based model is indirectly demonstrated by the performance of computer simulations and real-machine motion control experiments.