The effectiveness of a Business Intelligence (BI) system depends mainly on the quality of the knowledge it produces. The decision-making process is hindered, and the user's trust is lost, if the knowledge offered is undesired or of poor quality. A Data Warehouse (DW) is a large collection of data gathered from many sources and an important part of any BI solution to assist management in making better decisions. The Extract, Transform, and Load (ETL) process is the backbone of a DW system, and it is responsible for moving data from source systems into the DW. The more mature the ETL process, the more reliable the DW system. In this paper, we propose the ETL Maturity Model (EMM), which assists organizations in achieving a high-quality ETL system and thereby enhances the quality of the knowledge produced. The EMM is made up of five levels of maturity, i.e., Chaotic, Acceptable, Stable, Efficient and Reliable. Each level of maturity contains Key Process Areas (KPAs) that have been endorsed by industry experts and include all critical features of a good ETL system. Quality Objectives (QOs) are defined procedures that, when implemented, result in a high-quality ETL process. Each KPA has its own set of QOs, the execution of which meets the requirements of that KPA. Multiple brainstorming sessions with relevant industry experts helped to refine the model. EMM was deployed in two key projects, using multiple case studies to supplement the validation process and support our claim. This model can assist organizations in improving their current ETL process and transforming it into a more mature ETL system. It can also provide high-quality information that helps users make better decisions and gains their trust.
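As a rough illustration of how the levels-KPAs-QOs structure described above could be organized in practice, the following sketch encodes a hypothetical maturity ladder and assesses the level reached from a set of satisfied quality objectives. The KPA and QO names are invented placeholders, not the paper's actual model content.

```python
# Illustrative sketch only: the abstract does not list the EMM's KPAs or QOs,
# so the names below are hypothetical placeholders showing how a levels -> KPAs
# -> QOs structure could drive a simple maturity assessment.
from dataclasses import dataclass, field

@dataclass
class KPA:
    name: str
    quality_objectives: list = field(default_factory=list)

# Hypothetical content; the real model defines its own KPAs/QOs per level.
EMM_LEVELS = {
    1: ("Chaotic",    []),
    2: ("Acceptable", [KPA("Source profiling", ["Document source systems", "Validate extract counts"])]),
    3: ("Stable",     [KPA("Error handling",   ["Log rejected rows", "Define reload procedure"])]),
    4: ("Efficient",  [KPA("Performance",      ["Incremental loads", "Monitor load windows"])]),
    5: ("Reliable",   [KPA("Data quality",     ["Automated reconciliation", "Quality dashboards"])]),
}

def assessed_level(satisfied):
    """Return the highest level all of whose QOs (and all lower levels') are satisfied."""
    reached = "Chaotic"
    for lvl in sorted(EMM_LEVELS):
        name, kpas = EMM_LEVELS[lvl]
        qos = [qo for kpa in kpas for qo in kpa.quality_objectives]
        if all(qo in satisfied for qo in qos):
            reached = name
        else:
            break
    return reached

print(assessed_level({"Document source systems", "Validate extract counts"}))  # -> "Acceptable"
```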
Universities collect and generate a considerable amount of data on students throughout their academic career. Currently in South Kivu, most universities have an information system in the form of a database made up of several disparate files. This makes it difficult to use this data efficiently and profitably. The aim of this study is to develop this transactional database-based information system into a data warehouse-oriented system. This tool will be able to collect, organize and archive data on the student's career path, year after year, and transform it for analysis purposes. In the age of Big Data, a number of artificial intelligence techniques have been developed, making it possible to extract useful information from large databases. This extracted information is of paramount importance in decision-making. By way of example, the information extracted by these techniques can be used to predict which stream a student should choose when applying to university. In order to develop our contribution, we analyzed the IT information systems used in the various universities and applied the bottom-up method to design our data warehouse model. We used the relational model to design the data warehouse.
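To make the bottom-up, relational design concrete, the following is a minimal sketch of what a star schema for student career data might look like. The table and column names are illustrative assumptions, not the schema actually built in the study.

```python
# A minimal sketch (not the authors' actual schema): a star-schema fact table
# for yearly student results with student, programme and academic-year
# dimensions, expressed in the relational model the study adopts.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_student   (student_id INTEGER PRIMARY KEY, name TEXT, gender TEXT);
CREATE TABLE dim_programme (programme_id INTEGER PRIMARY KEY, faculty TEXT, stream TEXT);
CREATE TABLE dim_year      (year_id INTEGER PRIMARY KEY, academic_year TEXT);
CREATE TABLE fact_results (
    student_id     INTEGER REFERENCES dim_student(student_id),
    programme_id   INTEGER REFERENCES dim_programme(programme_id),
    year_id        INTEGER REFERENCES dim_year(year_id),
    average_mark   REAL,
    credits_earned INTEGER
);
""")

# A typical analytical question: average mark per stream and academic year.
query = """
SELECT p.stream, y.academic_year, AVG(f.average_mark)
FROM fact_results f
JOIN dim_programme p ON f.programme_id = p.programme_id
JOIN dim_year y      ON f.year_id = y.year_id
GROUP BY p.stream, y.academic_year;
"""
print(conn.execute(query).fetchall())  # empty until the warehouse is loaded
```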
Expenditure on wells constitutes a significant part of the operational costs for a petroleum enterprise, and most of that cost results from drilling. This has prompted drilling departments to continuously look for ways to reduce their drilling costs and be as efficient as possible. A system called the Drilling Comprehensive Information Management and Application System (DCIMAS) is developed and presented here, with the aim of collecting, storing and making full use of the valuable well data and information relating to all drilling activities and operations. The DCIMAS comprises three main parts: a data collection and transmission system, a data warehouse (DW) management system, and an integrated platform of core applications. With the support of the application platform, the DW management system is introduced, whereby operation data are captured at well sites and transmitted electronically to a data warehouse via transmission equipment and ETL (extract, transform and load) tools. With the high quality of the data guaranteed, our central task is to make the best use of the operation data and information for drilling analysis and to provide further information to guide later production stages. Applications have been developed and integrated on a uniform platform to interface directly with different layers of the multi-tier DW. Now, engineers in every department spend less time on data handling and more time on applying technology in their real work with the system.
A uniform metadata representation is introduced for heterogeneous databases, multimedia information and other information sources. Some features of metadata are analyzed, and the limitations of existing metadata models are compared with the new one. The metadata model is described in XML, which is well suited to metadata representation and exchange. Well-structured data, semi-structured data and unstructured external file data are all described in the metadata model. The model provides feasibility and extensibility for constructing a uniform metadata model of the data warehouse.
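The abstract does not reproduce the model itself, but the sketch below shows the general idea of a uniform XML metadata record that can describe structured, semi-structured and unstructured sources alike. The element and attribute names are assumptions for illustration.

```python
# Sketch of the idea only: a uniform XML metadata record that can describe a
# structured table, a semi-structured document, or an unstructured external
# file. Element and attribute names are assumptions, not the paper's schema.
import xml.etree.ElementTree as ET

def describe_source(name, kind, fields=None):
    """kind: 'structured', 'semi-structured', or 'unstructured'."""
    src = ET.Element("source", {"name": name, "kind": kind})
    if fields:  # only structured/semi-structured sources carry a field list
        schema = ET.SubElement(src, "schema")
        for field_name, field_type in fields.items():
            ET.SubElement(schema, "field", {"name": field_name, "type": field_type})
    return src

catalog = ET.Element("metadata")
catalog.append(describe_source("sales_db.orders", "structured",
                               {"order_id": "int", "amount": "decimal"}))
catalog.append(describe_source("clickstream.json", "semi-structured", {"event": "string"}))
catalog.append(describe_source("contract_scan.pdf", "unstructured"))

print(ET.tostring(catalog, encoding="unicode"))
```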
This paper describes the process of design and construction of a data warehouse (“DW”) for an online learning platform using three prominent technologies: Microsoft SQL Server, MongoDB and Apache Hive. The three systems are evaluated for corpus construction and descriptive analytics. The case also demonstrates the value of evidence-centered design principles for data warehouse design that is sustainable enough to adapt to the demands of handling big data in a variety of contexts. Additionally, the paper addresses the maintainability-performance tradeoff, storage considerations and accessibility of big data corpora. In this NSF-sponsored work, the data were processed, transformed, and stored in the three versions of a data warehouse in search of a better-performing and more suitable platform. The data warehouse engines, a relational database, a NoSQL database, and a big data technology for parallel computations, were subjected to principled analysis. Design, construction and evaluation of a data warehouse were scrutinized to find improved ways of storing, organizing and extracting information. The work also examines building corpora, performing ad-hoc extractions, and ensuring confidentiality. It was found that Apache Hive demonstrated the best processing time, followed by SQL Server and MongoDB. For analytical queries, SQL Server was the top performer, followed by MongoDB and Hive. This paper also discusses a novel process for rendering students anonymous in compliance with Family Educational Rights and Privacy Act regulations. Five phases for DW design are recommended: 1) establishing goals at the outset based on Evidence-Centered Design principles; 2) recognizing the unique demands of student data and use; 3) adopting a model that integrates cost with technical considerations; 4) designing a comparative database; and 5) planning for a DW design that is sustainable. Recommendations for future research include attempting DW design in contexts involving larger data sets and more refined operations, and ensuring attention is paid to sustainability of operations.
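The paper's anonymization process is not detailed in the abstract; the sketch below shows one common approach to the same goal, salted one-way hashing of student identifiers so that records remain joinable without exposing identities. The salt variable and record layout are assumptions.

```python
# A generic, hedged sketch (not the paper's novel process): deterministic,
# non-reversible pseudonymization of student IDs before loading, so rows can
# still be joined across tables without exposing identities.
import hashlib
import os

SALT = os.environ.get("DW_PSEUDONYM_SALT", "replace-with-a-secret-salt")  # assumption

def pseudonymize(student_id):
    """Return a stable surrogate key for a student identifier."""
    return hashlib.sha256((SALT + student_id).encode("utf-8")).hexdigest()[:16]

record = {"student_id": "S2023-0042", "score": 87, "session_minutes": 35}
safe_record = {**record, "student_id": pseudonymize(record["student_id"])}
print(safe_record)  # the same surrogate key is produced for the same student every run
```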
Data warehouse (DW), a technology invented in the 1990s, is more useful for integrating and analyzing massive data than the traditional database. Its application in the geology field can be divided into three phases. From 1992 to 1996, commercial data warehouses (CDW) appeared. From 1996 to 1999, geological data warehouses (GDW) appeared, and geologists and geographers realized the importance of DW and began studying it, but the practical DW still followed the framework of the database. From 2000 to the present, the geological data warehouse has grown and the theory of the geo-spatial data warehouse (GSDW) has been developed, but research in the geological area is still deficient except in geography. Although some developments of GDW have been made, its core still follows the CDW practice of organizing data by time, which brings about three problems: it is difficult to integrate the geological data, because the data are characterized more by space than by time; it is hard to store the massive data at different levels for the same reason; and spatial analysis is hardly supported if the data are organized by time as in a CDW. So the GDW should be redesigned by organizing data by scale, in order to store mass data at different levels and synthesize the data at different granularities, and by choosing space control points to replace the former time control points, so as to integrate different types of data by storing each type of data as one layer and then superposing the layers. In addition, the data cube, a widely used technology in CDW, will be of no use in GDW, because the causality among geological data is not as obvious as in commercial data, since the data are the mixed result of many complex rules and their analysis always needs special geological methods and software; on the other hand, a data cube for massive and complex geo-data would devour too much storage space to be practical. On this point, the main purpose of GDW may be data integration, unlike CDW, which is oriented to data analysis.
Engineering data are separately organized, and their schemas are increasingly complex and variable. Engineering data management systems need to be able to manage the unified data and to be both customizable and extensible. The design of such systems depends heavily on the flexibility and self-description of the data model. The characteristics of engineering data and the facts of their management are analyzed. Then an engineering data warehouse (EDW) architecture and multi-layer metamodels are presented. An approach to managing and using engineering data through meta-objects is also proposed. Finally, an application, the flight test EDW system (FTEDWS), is described, in which meta-objects are used to manage engineering data in the data warehouse. It shows that adopting a meta-modeling approach provides support for interchangeability and a sufficiently flexible environment in which system evolution and reusability can be handled.
Integrating heterogeneous data sources is a precondition for sharing data across an enterprise. Highly efficient data updating can both save system expenses and offer real-time data. Modifying data rapidly in the pre-processing area of the data warehouse is one of the hot issues. An extract-transform-load design is proposed based on a new data algorithm called Diff-Match, which is developed by utilizing mode matching and data-filtering technology. It can accelerate data renewal, filter the heterogeneous data, and seek out different sets of data. Its efficiency has been proved by its successful application in an enterprise of electric apparatus groups.
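The Diff-Match algorithm itself is not given in the abstract; the following is only a minimal sketch of the underlying idea of diff-based change detection, comparing a source snapshot with the warehouse copy by key so that only new, changed or removed rows need to be loaded.

```python
# Not the paper's Diff-Match algorithm; a minimal sketch of diff-based change
# detection: compare a source snapshot with the warehouse copy by business key
# and report the rows that are new, changed, or gone.
def diff(source, warehouse):
    """Both arguments map a business key to a row value."""
    inserts = {k: v for k, v in source.items() if k not in warehouse}
    updates = {k: v for k, v in source.items() if k in warehouse and warehouse[k] != v}
    deletes = [k for k in warehouse if k not in source]
    return inserts, updates, deletes

source_rows    = {1: ("pump A", 120), 2: ("pump B", 95), 4: ("pump D", 60)}
warehouse_rows = {1: ("pump A", 120), 2: ("pump B", 90), 3: ("pump C", 70)}
print(diff(source_rows, warehouse_rows))
# -> ({4: ('pump D', 60)}, {2: ('pump B', 95)}, [3])
```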
This paper presents a simple complete K-level tree (CKT) architecture for text database organization and rapid data filtering. A database is constructed as a CKT forest, and each CKT contains data of the same length. The maximum depth and the minimum depth of an individual CKT are equal and identical to the data's length. Insertion and deletion operations are defined; a storage method and filtering algorithm are also designed for a good trade-off between efficiency and complexity. Applications to computer-aided teaching of Chinese and to protein selection show that a reduction of about 30% in storage consumption and over 60% in computation may be easily obtained.
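A hedged sketch of the CKT idea as described: each tree stores items of one fixed length, every root-to-leaf path has exactly that depth, and filtering walks one symbol per level. The node layout is an implementation assumption.

```python
# Illustrative sketch of the CKT idea from the abstract: one tree per item
# length, fixed depth equal to that length, with insertion and a membership
# filter that walks one symbol per level. Node layout is an assumption.
class CKT:
    def __init__(self, length):
        self.length = length
        self.root = {}

    def insert(self, item):
        assert len(item) == self.length, "one CKT stores items of a single length"
        node = self.root
        for symbol in item:
            node = node.setdefault(symbol, {})

    def contains(self, item):
        node = self.root
        for symbol in item:
            if symbol not in node:
                return False
            node = node[symbol]
        return True

# The forest keys trees by item length, as in the abstract.
forest = {}
for word in ["data", "date", "cube", "warehouse"]:
    forest.setdefault(len(word), CKT(len(word))).insert(word)

print(forest[4].contains("date"), forest[4].contains("dame"))  # True False
```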
To efficiently solve the materialized view selection problem, an optimized genetic algorithm for selecting a set of views to be materialized is proposed, so as to achieve both good query performance and low view maintenance cost under a storage space constraint. First, a pre-processing algorithm based on the maximum benefit per unit space is used to generate initial solutions. Then, the initial solutions are improved by a genetic algorithm incorporating a mixture of optimal strategies. Furthermore, infeasible solutions generated during the evolution process are repaired by a loss function. The experimental results show that the proposed algorithm outperforms the heuristic algorithm and the canonical genetic algorithm in finding optimal solutions.
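A much-simplified sketch of this approach is given below, assuming additive benefits (which real view-selection cost models are not): greedy seeding by benefit per unit space, a genetic loop with crossover and mutation, and repair of solutions that exceed the storage budget. All figures are invented.

```python
# Toy sketch only, not the paper's algorithm: greedy seeding by benefit per
# unit space, a simple genetic loop, and budget repair for infeasible solutions.
import random

views   = ["v1", "v2", "v3", "v4", "v5"]
benefit = {"v1": 50, "v2": 30, "v3": 45, "v4": 10, "v5": 25}   # query-cost savings (made up)
size    = {"v1": 40, "v2": 10, "v3": 35, "v4": 5,  "v5": 20}   # storage units (made up)
BUDGET  = 60

def total(d, chosen): return sum(d[v] for v in views if chosen[v])

def repair(chosen):
    """Drop the worst benefit-per-space views until the storage budget holds."""
    while total(size, chosen) > BUDGET:
        worst = min((v for v in views if chosen[v]), key=lambda v: benefit[v] / size[v])
        chosen[worst] = False
    return chosen

def greedy_seed():
    chosen = {v: False for v in views}
    for v in sorted(views, key=lambda v: benefit[v] / size[v], reverse=True):
        chosen[v] = True
        if total(size, chosen) > BUDGET:
            chosen[v] = False
    return chosen

def evolve(generations=50, pop_size=10):
    random.seed(0)
    pop = [greedy_seed()] + [repair({v: random.random() < 0.5 for v in views})
                             for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=lambda c: total(benefit, c), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = {v: random.choice([a[v], b[v]]) for v in views}   # crossover
            if random.random() < 0.2:                                 # mutation
                flip = random.choice(views)
                child[flip] = not child[flip]
            children.append(repair(child))
        pop = parents + children
    return max(pop, key=lambda c: total(benefit, c))

best = evolve()
print([v for v in views if best[v]], total(benefit, best), total(size, best))
```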
The analysis of relevant standards and guidelines revealed a lack of information on actions and activities concerning data warehouse testing. The absence of a comprehensive data warehouse testing methodology is particularly critical in the data warehouse implementation phase. The aim of this article is to suggest basic data warehouse testing activities as a final part of a data warehouse testing methodology. The testing activities that must be implemented in the process of data warehouse testing can be split into four logical units, covering multidimensional database testing, data pump testing, metadata testing and OLAP (Online Analytical Processing) testing. The main testing activities include: revision of the multidimensional database scheme, optimizing the number of fact tables, the problem of data explosion, testing the correctness of aggregation and summation of data, etc.
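As a small example of one of the listed activities, testing the correctness of aggregation and summation, the sketch below recomputes an aggregate from detail fact rows and compares it with the pre-computed summary. The tables and figures are invented for illustration.

```python
# Sketch of an aggregation-correctness test: recompute the aggregate from the
# detail fact rows and compare it with the summary the warehouse serves to OLAP.
from collections import defaultdict

fact_sales = [  # (region, month, amount) -- invented sample rows
    ("north", "2024-01", 100.0), ("north", "2024-01", 50.0),
    ("south", "2024-01", 75.0),  ("north", "2024-02", 20.0),
]
summary_sales = {("north", "2024-01"): 150.0, ("south", "2024-01"): 75.0,
                 ("north", "2024-02"): 20.0}

def recompute(rows):
    agg = defaultdict(float)
    for region, month, amount in rows:
        agg[(region, month)] += amount
    return dict(agg)

mismatches = {k: (v, summary_sales.get(k)) for k, v in recompute(fact_sales).items()
              if abs(v - summary_sales.get(k, 0.0)) > 1e-9}
print("aggregation test passed" if not mismatches else f"mismatches: {mismatches}")
```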
Discussing the matter of organizational data management implies, almost automatically, the concept of data warehousing as one of the most important parts of a decision support system (DSS), as it supports the integration of information management by aggregating all data formats and provisioning external systems with consistent data content and flows. It also implies the metadata concept, as one of the easiest ways of integrating software and database systems. Since organizational data management uses the metadata channel to create a bi-directional flow, metadata, when correctly managed, can save organizations both time and resources. This paper will focus on providing theoretical aspects of the two concepts, together with a short brief of a proposed design model for an organizational management tool.
Recently, due to the rapid growth of data sensors, a massive volume of data is generated from different sources. Administering such data, in the sense of storing, managing, analyzing, and extracting insightful information from it, is a challenging task. Big data analytics is becoming a vital research area in domains such as climate data analysis, which demands fast access to data. Nowadays, MapReduce, a distributed computing framework available on open-source platforms, is widely used in many domains of big data analysis. In our work, we have developed a conceptual framework of data modeling essentially useful for the implementation of a hybrid data warehouse model to store the features of National Climatic Data Center (NCDC) climate data. The hybrid data warehouse model for climate big data enables the identification of weather patterns that would be applicable in agricultural and other climate change-related studies, and that will play a major role in recommending actions to be taken by domain experts and in making contingency plans for extreme cases of weather variability.
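The following is not Hadoop code, only an in-process sketch of the map and reduce steps one might run over NCDC-style station records to surface a simple weather pattern (maximum temperature per station and month). The record layout is an assumption.

```python
# In-process sketch of the MapReduce style over NCDC-like records: map each
# reading to a (station, year-month) key, then reduce each group to its
# monthly maximum temperature. Sample rows are invented.
from itertools import groupby
from operator import itemgetter

records = [  # (station_id, date, temperature_c)
    ("KIN001", "2023-01-14", 27.5), ("KIN001", "2023-01-20", 31.0),
    ("KIN002", "2023-01-05", 24.0), ("KIN001", "2023-02-02", 29.5),
]

def map_phase(record):
    station, date, temp = record
    return ((station, date[:7]), temp)      # key: (station, year-month)

def reduce_phase(key, temps):
    return (key, max(temps))                # pattern of interest: monthly maximum

mapped = sorted((map_phase(r) for r in records), key=itemgetter(0))
reduced = [reduce_phase(k, [t for _, t in group])
           for k, group in groupby(mapped, key=itemgetter(0))]
print(reduced)
# -> [(('KIN001', '2023-01'), 31.0), (('KIN001', '2023-02'), 29.5), (('KIN002', '2023-01'), 24.0)]
```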
The data structure and semantics of traditional data models cannot effectively represent a data warehouse, and it is difficult for them to effectively support online analytical processing (OLAP). This paper proposes a new multidimensional data model based on partial ordering and mapping. The data model can fully express the complex data structure and semantics of a data warehouse, and it provides an algebra whose core operations are OLAP operations, supporting complex sequences of aggregation operations across hierarchy levels, which can effectively support OLAP applications. The data model supports the concept of aggregation function constraints and provides a constraint mechanism for hierarchy aggregation functions.
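The abstract gives no formal definitions, so the sketch below is only a toy illustration of two ingredients it names: a dimension hierarchy expressed as mappings to coarser levels (day, month, year as a partial order) and a constraint on which aggregation functions a measure admits.

```python
# Toy illustration, not the paper's model: a day < month < year hierarchy as
# mappings to coarser levels, plus an aggregation-function constraint per measure.
from collections import defaultdict

to_month = lambda day: day[:7]
to_year  = lambda day: day[:4]

# Aggregation-function constraint: e.g. 'temperature' may be averaged but not summed.
ALLOWED = {"sales": {"sum", "avg"}, "temperature": {"avg", "min", "max"}}

facts = [("2024-01-03", "sales", 10.0), ("2024-01-19", "sales", 5.0),
         ("2024-02-07", "sales", 7.0)]

def roll_up(facts, measure, level_map, func_name):
    if func_name not in ALLOWED[measure]:
        raise ValueError(f"{func_name} is not a valid aggregation for {measure}")
    groups = defaultdict(list)
    for day, m, value in facts:
        if m == measure:
            groups[level_map(day)].append(value)
    func = {"sum": sum, "avg": lambda xs: sum(xs) / len(xs), "min": min, "max": max}[func_name]
    return {k: func(v) for k, v in groups.items()}

print(roll_up(facts, "sales", to_month, "sum"))   # {'2024-01': 15.0, '2024-02': 7.0}
print(roll_up(facts, "sales", to_year, "sum"))    # {'2024': 22.0}
```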
Along with the rapid development of the internet, CRM has become one of the most important factors in keeping enterprises competitive. At the same time, analytical CRM based on a Data Warehouse is the kernel of a CRM system. This paper mainly explains the idea of CRM and the DW model of an analytical CRM system.
In modern workforce management, the demand for new ways to maximize worker satisfaction, productivity, and security levels is endless. Workforce movement data, such as source data from an access control system, can support this ongoing process through subsequent analysis. In this study, a solution to attaining this goal is proposed, based on the design and implementation of a data mart as part of a dimensional trajectory data warehouse (TDW) that acts as a repository for the management of movement data. A novel methodological approach is proposed for modeling multiple spatial and temporal dimensions in a logical model. The case study presented in this paper, modeling and analyzing workforce movement data to support human resource management decision-making, and the following discussion provide a representative example of the contribution of a TDW to the process of information management and decision support systems. The entire process of exporting, cleaning, consolidating, and transforming data is implemented to achieve an appropriate format for final import. Structured query language (SQL) queries demonstrate the convenience of dimensional design for data analysis, and valuable information can be extracted from the movements of employees on company premises to manage the workforce efficiently and effectively. Visual analytics through data visualization supports the analysis and facilitates decision-making and business intelligence.
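As a representative, hypothetical example of the kind of dimensional design and SQL analysis described, the sketch below builds a tiny movement fact table with a spatial (zone) dimension and queries the time spent per zone. The schema and data are invented, not the paper's data mart.

```python
# Illustrative only; not the paper's data mart. A tiny badge-event fact table
# with a spatial (zone) dimension, and an SQL query of the kind used to analyze
# workforce movement: minutes spent per zone.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_zone (zone_id INTEGER PRIMARY KEY, building TEXT, floor TEXT, room TEXT);
CREATE TABLE fact_movement (
    employee_id INTEGER, zone_id INTEGER REFERENCES dim_zone(zone_id),
    entry_time TEXT, exit_time TEXT
);
INSERT INTO dim_zone VALUES (1, 'HQ', '1', 'lobby'), (2, 'HQ', '3', 'lab');
INSERT INTO fact_movement VALUES
  (101, 1, '2024-03-01 08:55', '2024-03-01 09:05'),
  (101, 2, '2024-03-01 09:05', '2024-03-01 12:30'),
  (102, 2, '2024-03-01 10:00', '2024-03-01 11:00');
""")

query = """
SELECT z.building, z.room,
       SUM((julianday(exit_time) - julianday(entry_time)) * 24 * 60) AS minutes_occupied
FROM fact_movement f JOIN dim_zone z ON f.zone_id = z.zone_id
GROUP BY z.building, z.room
ORDER BY minutes_occupied DESC;
"""
for row in conn.execute(query):
    print(row)   # the lab first with roughly 265 minutes, then the lobby with about 10
```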
The characteristics of marine data, such as multiple sources, polymorphism, diversity and large volume, determine their differences from other data. How to store and manage marine data rationally and effectively, to provide powerful data support for marine management information systems and the construction of a "Digital Ocean" prototype system, is an urgent problem to solve. Planning different types of data, such as marine resource, marine environment, marine economy and marine management data, and establishing a marine data architecture framework with uniform standards are the way to realize the effective management of marine data at all levels, such as national marine data and provincial (municipal) marine data, and to meet the needs of fundamental information-platform construction.
A data warehouse provides storage and management for mass data, but the data schema evolves over time. When the data schema is changed, added to or deleted from, the data in the data warehouse must comply with the changed schema, so the data warehouse must be reorganized or reconstructed, a process that is exhausting and wasteful. In order to cope with these problems, this paper develops an approach to modeling the data cube with XML, which has emerged as a universal format for data exchange on the Web and which can make the data warehouse flexible and scalable. This paper also extends the OLAP algebra for the XML-based data cube, which is called X-OLAP.
This paper presents a rule merging and simplifying method and an improved analysis deviation algorithm. The fuzzy equivalence theory avoids the rigid, either-this-or-that way of traditional equivalence theory. During a data cleaning task, some rules stand in inclusion relations with each other (one rule includes, or is included by, another). The equivalence degree of the included rule is smaller than that of the including rule, so a rule merging and simplifying method is introduced to reduce the total computing time. This kind of relation also affects the deviation of the fuzzy equivalence degree, so an improved analysis deviation algorithm that omits the influence of the included rules' equivalence degree is also presented. Normally the duplicate records are logged in a file, and users have to check and verify them one by one, which is time-consuming. The proposed algorithm can save users' labor during duplicate record checking. Finally, an experiment is presented which demonstrates the feasibility of the rules.
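Neither the rule-merging method nor the deviation algorithm is spelled out in the abstract; the sketch below only illustrates the two surrounding ideas, dropping a rule whose field set is included in another's, and flagging duplicates by a graded similarity score rather than strict equality. Fields, weights and the threshold are assumptions.

```python
# Toy sketch, not the paper's algorithm: drop rules whose field set is included
# in another rule's (merging/simplifying), then flag duplicate records by a
# graded similarity instead of a strict either/or equality.
from difflib import SequenceMatcher

# Each rule is the set of fields it compares; an included rule adds no new checks.
rules = [{"name"}, {"name", "address"}, {"name", "address", "phone"}]
merged = [r for r in rules if not any(r < other for other in rules)]  # keep maximal rules
print(merged)  # -> [{'name', 'address', 'phone'}]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def fuzzy_duplicate(rec1, rec2, fields, threshold=0.85):
    score = sum(similarity(rec1[f], rec2[f]) for f in fields) / len(fields)
    return score >= threshold

r1 = {"name": "Zhang Wei",  "address": "12 Jiefang Rd",   "phone": "0531-555-0101"}
r2 = {"name": "Zhang  Wei", "address": "12 Jiefang Road", "phone": "0531-555-0101"}
print(fuzzy_duplicate(r1, r2, sorted(merged[0])))  # likely True: a graded, not exact, match
```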
A three-layer model for digital communication in a mine is proposed. Two basic platforms are discussed: a uniform transmission network and a uniform data warehouse. An actual ControlNet-based transmission network platform suitable for the Jining No. 3 coal mine is presented. This network is an information superhighway intended to integrate all existing and new automation subsystems, and its standard interface can be used with future subsystems. The network, data structure and management decision-making all employ this uniform hardware and software. This effectively avoids the problems of system and information islands seen in traditional mine-automation systems. The construction of the network provides a stable foundation for digital communication in the Jining No. 3 coal mine.