In the era of Big Data, many NoSQL databases emerged for the storage and later processing of vast volumes of data, using data structures that can follow columnar, key-value, document or graph formats. For analytical contexts requiring a Big Data Warehouse, Hive is used as the driving force, allowing the analysis of vast amounts of data. Data models in Hive are usually defined taking into consideration the queries that need to be answered. In this work, a set of rules is presented for the transformation of multidimensional data models into Hive tables, making data available at different levels of detail. These levels are suited for answering different queries, depending on the analytical needs. After the identification of the Hive tables, this paper summarizes a demonstration case in which the implementation of a specific Big Data architecture shows how the evolution from a traditional Data Warehouse to a Big Data Warehouse is possible.
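The idea of materializing one Hive table per level of detail can be sketched as follows. This is a minimal illustration, not the paper's rule set: the table and column names are hypothetical, and real deployments would add partitioning.

```python
# Sketch: generating HiveQL for two tables at different levels of detail
# derived from one multidimensional model. Names are illustrative only.

def hive_ddl(table, columns):
    """Render a CREATE TABLE statement for Hive."""
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    return f"CREATE TABLE {table} (\n  {cols}\n) STORED AS ORC;"

# Base (most detailed) level: one row per sale.
base = hive_ddl("sales_detail", [
    ("sale_date", "DATE"), ("product_id", "BIGINT"),
    ("store_id", "BIGINT"), ("amount", "DOUBLE"),
])

# Aggregated level: pre-summarized by month and product, suited to
# coarser analytical queries that need not touch the detail table.
monthly = hive_ddl("sales_by_month_product", [
    ("sale_month", "STRING"), ("product_id", "BIGINT"),
    ("total_amount", "DOUBLE"),
])

print(base)
print(monthly)
```

Queries then pick the least detailed table that can still answer them, which is the point of keeping several levels.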
Multidimensional aggregation is a dominant operation on data warehouses for on-line analytical processing (OLAP). Many efficient algorithms to compute multidimensional aggregation on relational-database-based data warehouses have been developed. However, to our knowledge, there is nothing to date in the literature about aggregation algorithms on multidimensional data warehouses that store datasets in multidimensional arrays rather than in tables. This paper presents a set of multidimensional aggregation algorithms on very large and compressed multidimensional data warehouses. These algorithms operate directly on compressed datasets in multidimensional data warehouses without the need to first decompress them. They are applicable to a variety of data compression methods. The algorithms have different performance behavior as a function of dataset parameters, sizes of outputs and main memory availability. The algorithms are described and analyzed with respect to the I/O and CPU costs. A decision procedure to select the most efficient algorithm, given an aggregation request, is also proposed. The analytical and experimental results show that the algorithms are more efficient than the traditional aggregation algorithms.
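The core trick of aggregating without decompressing can be shown with run-length encoding, one common compression scheme for array data. This is a simplified stand-in for the paper's algorithms, not a reproduction of them:

```python
# Sketch: summing a measure directly over a run-length-encoded (RLE)
# block of an array. No decompressed cell array is ever materialized;
# each run contributes value * count to the aggregate.

def rle_sum(runs):
    """runs: list of (value, count) pairs from an RLE-compressed block."""
    return sum(value * count for value, count in runs)

# A sparse block: long runs of zeros compress well, and the aggregate
# costs one multiply per run instead of one add per cell.
compressed = [(0.0, 1000), (3.5, 4), (0.0, 2500), (1.2, 10)]
total = rle_sum(compressed)
assert total == 3.5 * 4 + 1.2 * 10  # 26.0
```

The same pattern extends to other distributive aggregates (COUNT, MAX) and to other compression methods that expose per-run or per-chunk summaries.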
A data warehouse often accommodates enormous summary information in various granularities and is mainly used to support on-line analytical processing. Ideally, all detailed data should be accessible by residing in some legacy systems or on-line transaction processing systems. In many cases, however, data sources in computers are also kinds of summary data, due to technological problems or budget limits, and also because different aggregation hierarchies may need to be used among various transaction systems. In such circumstances, it is necessary to investigate how to design dimensions, which play a major role in the dimensional model for a data warehouse, and how to estimate summary information that is not stored in the data warehouse. In this paper, rough set theory is applied to support dimension design and information estimation.
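The rough-set machinery behind such estimation rests on lower and upper approximations of a target set under a partition (here, a partition induced by an aggregation hierarchy). A minimal sketch with an illustrative universe, not the paper's data:

```python
# Sketch: rough-set lower/upper approximations. The lower approximation
# contains elements certainly in the target; the upper approximation
# contains elements possibly in it. Together they bound an unstored
# summary when only coarser groupings are available.

def approximations(classes, target):
    """classes: disjoint equivalence classes (sets) partitioning the
    universe; target: the set whose membership we want to bound."""
    lower, upper = set(), set()
    for c in classes:
        if c <= target:      # class entirely inside the target
            lower |= c
        if c & target:       # class overlaps the target
            upper |= c
    return lower, upper

classes = [{1, 2}, {3, 4}, {5}]   # groupings from a transaction system
target = {1, 2, 3}                # the granularity we actually need
low, up = approximations(classes, target)
```

Here `low` is `{1, 2}` and `up` is `{1, 2, 3, 4}`; an aggregate over the target is bounded between the aggregates over the two approximations.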
The problem of storage and querying of large volumes of spatial grids is an issue to solve. In this paper, we propose a method to optimize queries that aggregate raster grids stored in databases. In our approach, we propose to estimate the result rather than calculate the exact result, which reduces query execution time. One advantage of our method is that it does not require implementing or modifying functionalities of database management systems. Our approach is based on a new data structure and a specific model of SQL queries. Our work is applied here to relational data warehouses.
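One way to see why estimation is cheaper: answer a window aggregate from precomputed per-block subtotals instead of scanning cells. The block layout below is an assumption for illustration, not the paper's data structure:

```python
# Sketch: estimating a windowed SUM over a raster from per-block
# subtotals. Whole blocks intersecting the window are summed, giving a
# fast over-approximation instead of an exact cell-by-cell scan.

def build_block_sums(grid, bs):
    """Precompute the sum of each bs x bs block of a 2-D grid."""
    n, m = len(grid), len(grid[0])
    return {(i, j): sum(grid[r][c]
                        for r in range(i, min(i + bs, n))
                        for c in range(j, min(j + bs, m)))
            for i in range(0, n, bs)
            for j in range(0, m, bs)}

def estimate_sum(block_sums, bs, r0, r1, c0, c1):
    """Estimate the sum over rows [r0, r1) x cols [c0, c1)."""
    return sum(s for (i, j), s in block_sums.items()
               if i < r1 and i + bs > r0 and j < c1 and j + bs > c0)

grid = [[1] * 4 for _ in range(4)]
sums = build_block_sums(grid, 2)
full = estimate_sum(sums, 2, 0, 4, 0, 4)      # exact here: 16
corner = estimate_sum(sums, 2, 0, 2, 0, 2)    # one block: 4
```

The subtotals live in an ordinary table, so the technique needs no DBMS modification, which matches the claim in the abstract.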
The effectiveness of a Business Intelligence (BI) system mainly depends on the quality of knowledge it produces. The decision-making process is hindered, and the user's trust is lost, if the knowledge offered is undesired or of poor quality. A Data Warehouse (DW) is a huge collection of data gathered from many sources and an important part of any BI solution, assisting management in making better decisions. The Extract, Transform, and Load (ETL) process is the backbone of a DW system, responsible for moving data from source systems into the DW. The more mature the ETL process, the more reliable the DW system. In this paper, we propose the ETL Maturity Model (EMM), which assists organizations in achieving a high-quality ETL system and thereby enhances the quality of knowledge produced. The EMM is made up of five levels of maturity, i.e., Chaotic, Acceptable, Stable, Efficient and Reliable. Each level of maturity contains Key Process Areas (KPAs) that have been endorsed by industry experts and include all critical features of a good ETL system. Quality Objectives (QOs) are defined procedures that, when implemented, result in a high-quality ETL process. Each KPA has its own set of QOs, the execution of which meets the requirements of that KPA. Multiple brainstorming sessions with relevant industry experts helped to enhance the model. EMM was deployed in two key projects utilizing multiple case studies to supplement the validation process and support our claim. This model can assist organizations in improving their current ETL process and transforming it into a more mature ETL system. It can also provide high-quality information to assist users in making better decisions and gaining their trust.
Many approaches have been proposed to pre-compute data cubes in order to efficiently respond to OLAP queries in data warehouses. However, few have proposed solutions integrating all of the possible outcomes, and it is this idea that leads to the integration of hierarchical dimensions into these responses. To meet this need, we propose, in this paper, a complete redefinition of the framework and the formal definition of traditional database analysis through the prism of hierarchical dimensions. After characterizing the hierarchical data cube lattice, we introduce the hierarchical data cube and its most concise reduced representation, the closed hierarchical data cube. It offers compact replication so as to optimize storage space by removing redundancies of strongly correlated data. Such data are typical of data warehouses, and in particular of video games, our field of study and experimentation, where hierarchical dimension attributes are widely represented.
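The closed-cube reduction can be illustrated on a flat (non-hierarchical) cube; the paper extends the idea to hierarchical dimensions. A cell is kept only if no refinement of it is supported by exactly the same fact rows, so correlated values collapse into one stored cell. The toy data below is invented for illustration:

```python
# Sketch of the closed-cube idea on a two-dimension fact table.
facts = [("eu", "rpg", 10), ("eu", "rpg", 5), ("us", "fps", 7)]
dims = ("region", "genre")

# Enumerate every cube cell (all group-by combinations) together with
# the set of fact row ids supporting it; "*" marks an aggregated axis.
cells = {}
for mask in range(2 ** len(dims)):
    for rid, row in enumerate(facts):
        key = tuple(row[i] if mask >> i & 1 else "*" for i in range(len(dims)))
        cells.setdefault(key, set()).add(rid)

def more_specific(a, b):
    """True if cell a refines cell b (fills in some of b's '*')."""
    return a != b and all(b[i] in ("*", a[i]) for i in range(len(a)))

# Closed cells: no strict refinement has the same support set.
closed = [c for c, sup in cells.items()
          if not any(more_specific(d, c) and cells[d] == sup for d in cells)]
```

On this data the seven distinct cells reduce to three closed ones, because `region` and `genre` are perfectly correlated: `("eu", "*")`, `("*", "rpg")` and `("eu", "rpg")` all cover the same rows, so only the most specific is stored.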
Universities collect and generate a considerable amount of data on students throughout their academic career. Currently in South Kivu, most universities have an information system in the form of a database made up of several disparate files. This makes it difficult to use this data efficiently and profitably. The aim of this study is to develop this transactional database-based information system into a data warehouse-oriented system. This tool will be able to collect, organize and archive data on the student's career path, year after year, and transform it for analysis purposes. In the age of Big Data, a number of artificial intelligence techniques have been developed, making it possible to extract useful information from large databases. This extracted information is of paramount importance in decision-making. By way of example, the information extracted by these techniques can be used to predict which stream a student should choose when applying to university. In order to develop our contribution, we analyzed the IT information systems used in the various universities and applied the bottom-up method to design our data warehouse model. We used the relational model to design the data warehouse.
Data warehouse (DW), a new technology invented in the 1990s, is more useful for integrating and analyzing massive data than a traditional database. Its application in the geology field can be divided into 3 phases: 1992-1996, commercial data warehouses (CDW) appeared; 1996-1999, geological data warehouses (GDW) appeared, and geologists and geographers realized the importance of DW and began studies on it, though the practical DW still followed the framework of the DB; from 2000 to the present, the geological data warehouse has grown and the theory of the geo-spatial data warehouse (GSDW) has been developed, but research in the geological area remains deficient except in geography. Although some developments of GDW have been made, its core still follows the CDW practice of organizing data by time, which brings about 3 problems: it is difficult to integrate the geological data, because the data feature more space than time; it is hard to store the massive data at different levels, for the same reason; and spatial analysis is hardly supported if the data are organized by time as in a CDW. So the GDW should be redesigned: organize data by scale in order to store mass data at different levels and synthesize the data at different granularities, and choose space control points to replace the former time control points, so as to integrate different types of data by storing each type of data as one layer and then superposing the layers.
In addition, the data cube, a widely used technology in CDW, will be of little use in GDW, because the causality among geological data is not as obvious as in commercial data: the data are the mixed result of many complex rules, and their analysis always needs special geological methods and software. On the other hand, a data cube for massive and complex geo-data would devour too much storage space to be practical. On this point, the main purpose of GDW may be data integration, unlike CDW, whose purpose is data analysis.
Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to position in 3-D together with time. Recently, greater emphasis has been placed on GIS (geographical information systems) to deal with marine information. GIS has shown great success in terrestrial applications in the last decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems, or their data models, are designed for land applications; they cannot cope well with the nature of the marine environment and marine information, and this has become a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data, which bases itself on the nature of marine data and overcomes the shortcomings of current spatio-temporal models when they are used in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
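A raster-based hierarchical model naturally supports aggregation by coarsening the grid, level by level. The sketch below shows one such step (2x2 block sums over an invented catch grid); it is an illustration of the general pattern, not the RSHDM itself:

```python
# Sketch: building one level of a raster aggregation pyramid. Each cell
# at the coarser level is the sum of a 2x2 block at the finer level.
# Assumes even-sized grids for brevity.

def aggregate_up(grid):
    n, m = len(grid), len(grid[0])
    return [[grid[i][j] + grid[i][j + 1] + grid[i + 1][j] + grid[i + 1][j + 1]
             for j in range(0, m, 2)]
            for i in range(0, n, 2)]

catch = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
level1 = aggregate_up(catch)   # [[14, 22], [46, 54]]
```

Repeating `aggregate_up` yields the coarser summaries that management queries at different levels, as the FDW experiment describes.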
This paper describes the process of design and construction of a data warehouse ("DW") for an online learning platform using three prominent technologies: Microsoft SQL Server, MongoDB and Apache Hive. The three systems are evaluated for corpus construction and descriptive analytics. The case also demonstrates the value of evidence-centered design principles for data warehouse design that is sustainable enough to adapt to the demands of handling big data in a variety of contexts. Additionally, the paper addresses the maintainability-performance tradeoff, storage considerations and accessibility of big data corpora. In this NSF-sponsored work, the data were processed, transformed, and stored in the three versions of a data warehouse in search of a better performing and more suitable platform. The data warehouse engines, a relational database, a NoSQL database, and a big data technology for parallel computations, were subjected to principled analysis. Design, construction and evaluation of a data warehouse were scrutinized to find improved ways of storing, organizing and extracting information. The work also examines building corpora, performing ad-hoc extractions, and ensuring confidentiality. It was found that Apache Hive demonstrated the best processing time, followed by SQL Server and MongoDB. For analytical queries, SQL Server was the top performer, followed by MongoDB and Hive. This paper also discusses a novel process for rendering students anonymous in compliance with Family Educational Rights and Privacy Act regulations. Five phases for DW design are recommended: 1) establishing goals at the outset based on evidence-centered design principles; 2) recognizing the unique demands of student data and use; 3) adopting a model that integrates cost with technical considerations; 4) designing a comparative database; and 5) planning for a DW design that is sustainable. Recommendations for future research include attempting DW design in contexts involving larger data sets and more refined operations, and ensuring attention is paid to sustainability of operations.
In this paper, we designed a customer-centered data warehouse system with five subjects: listing, bidding, transaction, accounts, and customer contact, based on the business processes of online auction companies. For each subject, we analyzed its fact indexes and dimensions. Then, taking the transaction subject as an example, we analyzed the data warehouse model in detail and obtained the multi-dimensional analysis structure of the transaction subject. Finally, using data mining for customer segmentation, we divided customers into four types: impulse customers, prudent customers, potential customers, and ordinary customers. With the results of multi-dimensional customer data analysis, online auction companies can do more targeted marketing and increase customer loyalty.
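A rule-based caricature of such a four-way segmentation might look like the sketch below. The features and thresholds are entirely hypothetical; the paper derives its segments by data mining over the warehouse, not by fixed rules:

```python
# Sketch: a toy four-way segmentation of auction customers from two
# hypothetical behavioral features. Real segmentation would be learned
# from the multi-dimensional transaction data.

def segment(bids_per_listing, purchases):
    """bids_per_listing: how much a customer deliberates before buying;
    purchases: how often they actually buy."""
    if purchases >= 5:
        return "impulse" if bids_per_listing < 2 else "prudent"
    return "potential" if bids_per_listing >= 2 else "ordinary"

assert segment(1, 6) == "impulse"    # buys often, deliberates little
assert segment(4, 6) == "prudent"    # compares extensively, then buys
assert segment(3, 1) == "potential"  # active bidder, few purchases yet
assert segment(0, 0) == "ordinary"
```

Each segment can then be targeted differently, which is the marketing use the abstract points to.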
Surface quality has been one of the key factors influencing the ongoing improvement of the quality of steel. Therefore, it is urgent to provide methods for efficient supervision of surface defects. This paper first describes the main problems existing in defect management and then focuses on constructing a data platform for surface defect management using a multidimensional database. Finally, some online applications of the platform at Baosteel are demonstrated. Results show that the constructed multidimensional database provides more structured defect data, and thus is suitable for swift and multi-angle analysis of the defect data.
To efficiently solve the materialized view selection problem, an optimal genetic algorithm for selecting a set of views to be materialized is proposed, so as to achieve both good query performance and low view maintenance cost under a storage space constraint. First, a pre-processing algorithm based on the maximum benefit per unit space is used to generate initial solutions. Then, the initial solutions are improved by a genetic algorithm incorporating a mixture of optimal strategies. Furthermore, infeasible solutions generated during the evolution process are repaired by a loss function. The experimental results show that the proposed algorithm outperforms the heuristic algorithm and the canonical genetic algorithm in finding optimal solutions.
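The pre-processing step, seeding the search with views picked greedily by benefit per unit of space, can be sketched as follows. The benefit and size figures are made up, and real benefits depend on the view lattice rather than being independent per view:

```python
# Sketch: greedy seeding for materialized view selection. Views are
# ranked by benefit per unit of storage and taken while the space
# budget allows, producing an initial solution for the GA to improve.

def greedy_seed(views, budget):
    """views: list of (name, benefit, size) tuples; budget: space cap."""
    chosen, used = [], 0
    for name, benefit, size in sorted(views,
                                      key=lambda v: v[1] / v[2],
                                      reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen

views = [("v1", 90, 30), ("v2", 50, 10), ("v3", 40, 40), ("v4", 10, 5)]
seed = greedy_seed(views, 45)   # ["v2", "v1", "v4"]
```

The GA then mutates and recombines such seeds, repairing any solution that overshoots the budget, per the loss-function repair the abstract mentions.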
Recently, due to the rapid growth of data sensors, a massive volume of data is generated from different sources. Administering such data, in the sense of storing, managing, analyzing, and extracting insightful information from it, is a challenging task. Big data analytics is becoming a vital research area in domains such as climate data analysis, which demand fast access to data. Nowadays, MapReduce, an open-source distributed computing framework, is widely used in many domains of big data analysis. In our work, we have developed a conceptual framework of data modeling essentially useful for the implementation of a hybrid data warehouse model to store the features of National Climatic Data Center (NCDC) climate data. The hybrid data warehouse model for climate big data enables the identification of weather patterns that would be applicable in agricultural and other climate change-related studies, and will play a major role in recommending actions to be taken by domain experts and in making contingency plans for extreme cases of weather variability.
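The map-shuffle-reduce pattern the framework relies on can be expressed in plain Python over invented NCDC-style records; finding the maximum temperature per year is the textbook example for this dataset:

```python
# Sketch: the MapReduce pattern in miniature, without a cluster.
from collections import defaultdict

records = [("USW00094728", 2019, 31.2), ("USW00094728", 2019, 28.4),
           ("USW00094728", 2020, 33.0), ("USC00300042", 2019, 25.1)]

# Map: each record emits a (key, value) pair; here, (year, temperature).
mapped = [(year, temp) for _station, year, temp in records]

# Shuffle: group all values by key.
groups = defaultdict(list)
for year, temp in mapped:
    groups[year].append(temp)

# Reduce: collapse each group to one result, the per-year maximum.
max_temp = {year: max(temps) for year, temps in groups.items()}
assert max_temp == {2019: 31.2, 2020: 33.0}
```

In a real deployment, the map and reduce steps run in parallel across the cluster and the shuffle is handled by the framework; the data-flow shape is the same.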
This paper presents the aim and the design structure of the metallic mineral resources assessment and analysis system. This system adopts an integrated data warehouse technique composed of a transaction processing layer and an analysis application layer. The transaction processing layer includes multiple databases (such as a geological database, a geophysical database, and a geochemical database), while the analysis application layer includes the data warehouse, online analytical processing and data mining. This paper also presents in detail the data warehouse of the present system and the appropriate spatial analysis methods and models. Finally, it presents the prospects of the system.
In order to exchange and share information among the conceptual models of data warehouses, and to build a solid base for the integration and sharing of metadata, a new multidimensional conceptual model based on XML is presented and its DTD is defined, which can fully describe the various semantic characteristics of a multidimensional conceptual model. Following the UML-based multidimensional conceptual modeling technique, the mapping algorithm between the XML-based multidimensional conceptual model and the UML class diagram is described, providing an application base for the wide use of this technique.
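A tiny XML rendering of a multidimensional conceptual model gives the flavor of such an exchange format. The element and attribute names below are illustrative; the paper defines its own DTD:

```python
# Sketch: serializing a cube with one dimension hierarchy and one
# measure as XML, the kind of document a DTD-based model would validate.
import xml.etree.ElementTree as ET

cube = ET.Element("Cube", name="Sales")
dim = ET.SubElement(cube, "Dimension", name="Time")
ET.SubElement(dim, "Level", name="Year")    # hierarchy levels, coarse
ET.SubElement(dim, "Level", name="Month")   # to fine
ET.SubElement(cube, "Measure", name="Amount", aggregator="SUM")

xml_text = ET.tostring(cube, encoding="unicode")
```

Because the model is plain XML, any tool that understands the agreed DTD can parse, validate, and map it, e.g., onto a UML class diagram, which is the interoperability the abstract is after.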
This paper analyzes the main characteristics, benefits, and disadvantages of existing traditional ETL (extraction, transformation, loading) methods, and summarizes some factors affecting the performance of ETL tools. Then, a new ETL approach, E-LT (extraction, loading and transformation), is proposed. The E-LT approach applies a database mapping technique so that the loading and transformation stages of the ETL process are performed together after the extraction stage. Thus, it can use SQL commands to complete loading and transformation processing, and it eliminates the staging area used before loading in the traditional ETL process. The framework of an ETL engine based on the E-LT method is presented. The ETL process, including initial loading and incremental refreshment, is discussed in detail, and the SQL-based algorithm for initial loading is presented. Experimental results and theoretical analysis show that the loading throughput of the E-LT method outperforms some commercial ETL approaches. Finally, a real case of applying the E-LT method in marine data warehousing is discussed to illustrate the validity of the proposed method.
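The load-then-transform idea can be shown end to end with SQLite (chosen only because it is self-contained; the schema and data are invented). Raw rows land untransformed, and a single SQL command does the transformation inside the database:

```python
# Sketch of E-LT: load raw extract first, then transform with SQL
# inside the database, with no external staging area.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE raw_sales (sale_date TEXT, amount TEXT);
    CREATE TABLE fact_sales (sale_month TEXT, total REAL);
""")

# Load: the extract goes straight in, untyped and untransformed.
db.executemany("INSERT INTO raw_sales VALUES (?, ?)",
               [("2024-01-03", "10.5"), ("2024-01-09", "4.5"),
                ("2024-02-01", "7")])

# Transform: one SQL command casts, aggregates, and populates the fact
# table, which is the work a traditional ETL tool would stage externally.
db.execute("""
    INSERT INTO fact_sales
    SELECT substr(sale_date, 1, 7), SUM(CAST(amount AS REAL))
    FROM raw_sales GROUP BY substr(sale_date, 1, 7)
""")

rows = db.execute("SELECT * FROM fact_sales ORDER BY sale_month").fetchall()
# rows == [('2024-01', 15.0), ('2024-02', 7.0)]
```

Keeping the transformation in SQL lets the database engine parallelize and optimize it, which is where the throughput advantage claimed for E-LT comes from.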
Business process improvement is a systematic approach used by several organizations to continuously improve their quality of service. Integral to that is analyzing the current performance of each task of the process and assigning the most appropriate resources to each task. In continuation of our previous work, we categorize resources into human and non-human resources. For instance, in the healthcare domain, human resources include doctors, nurses, and other associated staff responsible for the execution of healthcare activities, whereas non-human resources include surgical and other equipment needed for execution. In this study, we contend that the two types of resources (human and non-human) have a different impact on process performance, so their suitability should be measured differently. However, no work has been done to evaluate the suitability of non-human resources for the tasks of a process. Consequently, it becomes difficult to identify and subsequently overcome the inefficiencies that non-human resources cause for a task. To address this problem, we present a three-step method to compute a suitability score of non-human resources for a task. As an evaluation, a healthcare case study is used to illustrate the applicability of the proposed method. Furthermore, we performed a controlled experiment to evaluate its usability. The encouraging response shows the usefulness of the proposed method.
Enterprises are continuously aiming at improving the execution of processes to achieve a competitive edge. One of the established ways of improving process performance is to assign the most appropriate resources to each task of the process. However, evaluations of business process improvement approaches have established that a method that can guide decision-makers, in a structured way, to identify the most appropriate resources for a task is missing. This is because the relationship between resources and tasks is poorly understood and advancements in business process intelligence are ignored. To address this problem, an integrated resource classification framework is presented that identifies competence, suitability, and preference as the relationships of a task with resources. However, only the competence relationship of human resources with a task is presented in this research, as a resource competence model. Furthermore, the competency calculation method is presented as a user guidance layer for business-process-intelligence-based resource competence evaluation. The computed competencies serve as a basic input for choosing the most appropriate resources for each task of the process. The applicability of the method is illustrated through a healthcare case study.
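One plausible shape for such a competency calculation is a weighted score over a resource's measured attributes for a task. The attributes and weights below are hypothetical, introduced only to make the idea concrete; the paper's method is driven by business process intelligence data:

```python
# Sketch: a weighted competence score per resource, per task. The
# attribute names and weights are assumptions for illustration.

def competence(resource, weights):
    """resource: attribute -> normalized value in [0, 1];
    weights: attribute -> weight (weights sum to 1)."""
    return sum(resource.get(attr, 0.0) * w for attr, w in weights.items())

weights = {"experience": 0.5, "success_rate": 0.3, "training": 0.2}
doctor_a = {"experience": 0.9, "success_rate": 0.8, "training": 1.0}
doctor_b = {"experience": 0.4, "success_rate": 0.9, "training": 0.5}

scores = {name: competence(r, weights)
          for name, r in [("a", doctor_a), ("b", doctor_b)]}
best = max(scores, key=scores.get)  # resource suggested for the task
```

Such scores are exactly the "basic input" the abstract describes: a ranking from which the most appropriate resource for each task can be chosen.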
The fourth international conference on Web information systems and applications (WISA 2007) received 409 submissions and accepted 37 papers for publication in this issue. The papers cover broad research areas, including Web mining and data warehouses, Deep Web and Web integration, P2P networks, text processing and information retrieval, as well as Web services and Web infrastructure. After briefly introducing the WISA conference, this survey outlines current activities and future trends concerning Web information systems and applications, based on the papers accepted for publication.
Funding (Hive Big Data Warehouse paper): This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT (Fundação para a Ciência e Tecnologia) within the Project Scope UID/CEC/00319/2013, and has been funded by the SusCity project (MITP-TB/CS/0026/2013) by the Portugal Incentive System for Research and Technological Development, Project in co-promotion no. 002814/2015 (iFACTORY 2015-2018).
Funding (raster grid aggregation paper): This work is funded by the Auvergne region, Feder, Agaetis, and Irstea.
Funding (ETL Maturity Model paper): The authors thank King Saud University for funding this work through Researchers Supporting Project Number RSP-2021/387, King Saud University, Riyadh, Saudi Arabia.
文摘The effectiveness of the Business Intelligence(BI)system mainly depends on the quality of knowledge it produces.The decision-making process is hindered,and the user’s trust is lost,if the knowledge offered is undesired or of poor quality.A Data Warehouse(DW)is a huge collection of data gathered from many sources and an important part of any BI solution to assist management in making better decisions.The Extract,Transform,and Load(ETL)process is the backbone of a DW system,and it is responsible for moving data from source systems into the DW system.The more mature the ETL process the more reliable the DW system.In this paper,we propose the ETL Maturity Model(EMM)that assists organizations in achieving a high-quality ETL system and thereby enhancing the quality of knowledge produced.The EMM is made up of five levels of maturity i.e.,Chaotic,Acceptable,Stable,Efficient and Reliable.Each level of maturity contains Key Process Areas(KPAs)that have been endorsed by industry experts and include all critical features of a good ETL system.Quality Objectives(QOs)are defined procedures that,when implemented,resulted in a high-quality ETL process.Each KPA has its own set of QOs,the execution of which meets the requirements of that KPA.Multiple brainstorming sessions with relevant industry experts helped to enhance the model.EMMwas deployed in two key projects utilizing multiple case studies to supplement the validation process and support our claim.This model can assist organizations in improving their current ETL process and transforming it into a more mature ETL system.This model can also provide high-quality information to assist users inmaking better decisions and gaining their trust.
Abstract: Many approaches have been proposed to pre-compute data cubes in order to respond efficiently to OLAP queries in data warehouses. However, few integrate all of the possible outcomes, and it is this gap that motivates the integration of hierarchical dimensions into these responses. To meet this need, we propose in this paper a complete redefinition of the framework and a formal definition of traditional database analysis through the prism of hierarchical dimensions. After characterizing the hierarchical data cube lattice, we introduce the hierarchical data cube and its most concise reduced representation, the closed hierarchical data cube. It offers compact replication so as to optimize storage space by removing redundancies among strongly correlated data. Such data are typical of data warehouses, and in particular of video games, our field of study and experimentation, where hierarchical dimension attributes are widely represented.
Abstract: Universities collect and generate a considerable amount of data on students throughout their academic careers. Currently in South Kivu, most universities have an information system in the form of a database made up of several disparate files, which makes it difficult to use this data efficiently and profitably. The aim of this study is to develop this transactional, database-based information system into a data warehouse-oriented system. This tool will be able to collect, organize, and archive data on each student's career path, year after year, and transform it for analysis purposes. In the age of Big Data, a number of artificial intelligence techniques have been developed that make it possible to extract useful information from large databases; this extracted information is of paramount importance in decision-making. For example, it can be used to predict which stream a student should choose when applying to university. To develop our contribution, we analyzed the IT information systems used in the various universities and applied the bottom-up method to design our data warehouse model, using the relational model for its design.
Abstract: The data warehouse (DW), a technology invented in the 1990s, is more useful than a traditional database for integrating and analyzing massive data. Its application in the geology field can be divided into three phases: 1992-1996, when the commercial data warehouse (CDW) appeared; 1996-1999, when the geological data warehouse (GDW) appeared and geologists and geographers realized the importance of the DW and began to study it, although practical DWs still followed the framework of the database; and 2000 to the present, as the geological data warehouse grows and the theory of the geo-spatial data warehouse (GSDW) has been developed, though research in the geological area remains deficient except in geography. Although some developments of the GDW have been made, its core still follows the CDW in organizing data by time, which brings about three problems: it is difficult to integrate geological data, because the data feature space more than time; it is hard to store the massive data at different levels, for the same reason; and spatial analysis is hardly supported if the data are organized by time as in the CDW. The GDW should therefore be redesigned: organize data by scale in order to store massive data at different levels and synthesize the data at different granularities, and choose spatial control points to replace the former temporal control points so as to integrate different types of data by storing each type as one layer and then superposing the layers. In addition, the data cube, a widely used technology in the CDW, is of no use in the GDW, for the causality among geological data is not as obvious as in commercial data, since the data are the mixed result of many complex rules, and their analysis always requires special geological methods and software; on the other hand, a data cube for massive and complex geo-data would devour too much storage space to be practical. On this point, the main purpose of the GDW may be data integration, unlike the CDW's focus on data analysis.
Funding: Supported by the National Key Basic Research and Development Program of China under contract No. 2006CB701305, the National Natural Science Foundation of China under contract No. 40571129, and the National High-Technology Program of China under contract Nos. 2002AA639400, 2003AA604040, and 2003AA637030.
Abstract: Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to position in 3-D and time. Recently, greater emphasis has been placed on GIS (geographical information system) to deal with marine information. GIS has shown great success in terrestrial applications over the last decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems, or their data models, are designed for land applications and cannot cope well with the nature of the marine environment and marine information, which poses a fundamental challenge to traditional GIS and its data structures. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems and for knowledge discovery from spatio-temporal data, which is based on the nature of marine data and overcomes the shortcomings of current spatio-temporal models when they are used in this field. As an experiment, the marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
Abstract: This paper describes the process of design and construction of a data warehouse ("DW") for an online learning platform using three prominent technologies: Microsoft SQL Server, MongoDB, and Apache Hive. The three systems are evaluated for corpus construction and descriptive analytics. The case also demonstrates the value of evidence-centered design principles for data warehouse design that is sustainable enough to adapt to the demands of handling big data in a variety of contexts. Additionally, the paper addresses the maintainability-performance tradeoff, storage considerations, and accessibility of big data corpora. In this NSF-sponsored work, the data were processed, transformed, and stored in three versions of a data warehouse in search of a better performing and more suitable platform. The data warehouse engines (a relational database, a NoSQL database, and a big data technology for parallel computations) were subjected to principled analysis. Design, construction, and evaluation of a data warehouse were scrutinized to find improved ways of storing, organizing, and extracting information. The work also examines building corpora, performing ad-hoc extractions, and ensuring confidentiality. It was found that Apache Hive demonstrated the best processing time, followed by SQL Server and MongoDB. For analytical queries, SQL Server was the top performer, followed by MongoDB and Hive. This paper also discusses a novel process for rendering students anonymous in compliance with Family Educational Rights and Privacy Act regulations. Five phases for DW design are recommended: 1) establishing goals at the outset based on Evidence-Centered Design principles; 2) recognizing the unique demands of student data and use; 3) adopting a model that integrates cost with technical considerations; 4) designing a comparative database; and 5) planning for a DW design that is sustainable. Recommendations for future research include attempting DW design in contexts involving larger data sets, more refined operations, and attention to the sustainability of operations.
Funding: Supported by the National Natural Science Foundation of China (70471037) and the 211 Project Foundation of Shanghai University (8011040506).
Abstract: In this paper, we designed a customer-centered data warehouse system with five subjects: listing, bidding, transaction, accounts, and customer contact, based on the business processes of online auction companies. For each subject, we analyzed its fact indexes and dimensions. Then, taking the transaction subject as an example, we analyzed its data warehouse model in detail and obtained the multi-dimensional analysis structure of the transaction subject. Finally, using data mining for customer segmentation, we divided customers into four types: impulse customers, prudent customers, potential customers, and ordinary customers. With the results of multi-dimensional customer data analysis, online auction companies can better target their marketing and increase customer loyalty.
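A segmentation of this kind could, in the simplest case, be expressed as rules over behavioral aggregates drawn from the warehouse. The sketch below is purely illustrative: the features and thresholds are hypothetical, whereas the paper derives its four types from multi-dimensional data mining.

```python
# Illustrative rule-based assignment of the four customer types the abstract
# names (impulse, prudent, potential, ordinary). Features and thresholds are
# hypothetical stand-ins, not the paper's mining results.

def segment(transactions_per_month, avg_bids_before_buy):
    """Classify a customer from two simple behavioral aggregates."""
    if transactions_per_month >= 4 and avg_bids_before_buy <= 2:
        return "impulse"    # buys often, with little comparison
    if transactions_per_month >= 4:
        return "prudent"    # buys often, but only after many bids
    if avg_bids_before_buy > 2:
        return "potential"  # bids and compares a lot, rarely buys
    return "ordinary"
```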
Abstract: Surface quality has been one of the key factors influencing the ongoing improvement of the quality of steel. Therefore, it is urgent to provide methods for efficient supervision of surface defects. This paper first describes the main problems existing in defect management and then focuses on constructing a data platform for surface defect management using a multidimensional database. Finally, some online applications of the platform at Baosteel are demonstrated. Results show that the constructed multidimensional database provides more structured defect data, and thus it is suitable for swift and multi-angle analysis of the defect data.
Abstract: To solve the materialized view selection problem efficiently, a genetic algorithm is proposed for selecting a set of views to be materialized so as to achieve both good query performance and low view maintenance cost under a storage space constraint. First, a pre-processing algorithm based on the maximum benefit per unit of space is used to generate initial solutions. Then, the initial solutions are improved by the genetic algorithm with a mixture of optimization strategies. Furthermore, infeasible solutions generated during the evolution process are repaired by a loss function. The experimental results show that the proposed algorithm outperforms the heuristic algorithm and the canonical genetic algorithm in finding optimal solutions.
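The pre-processing step described above can be sketched as a greedy pass over the candidate views. This is a minimal sketch under stated assumptions: the benefit and size values are illustrative inputs, and the paper's actual pre-processing and repair operators are not reproduced here.

```python
# Sketch of benefit-per-unit-space greedy initialization for materialized
# view selection: repeatedly pick the view with the highest benefit/size
# ratio that still fits in the storage budget. Inputs are illustrative.

def greedy_initial_solution(views, budget):
    """views: list of (name, benefit, size); returns view names to materialize."""
    chosen, used = [], 0
    # Consider views in order of benefit per unit of storage, best first.
    for name, benefit, size in sorted(views, key=lambda v: v[1] / v[2],
                                      reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen
```

In the paper's scheme, solutions like this seed the genetic algorithm's initial population, which then refines them under the same storage constraint.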
Abstract: Recently, due to the rapid growth of data sensors, a massive volume of data is generated from different sources. Administering such data, in the sense of storing, managing, analyzing, and extracting insightful information from it, is a challenging task. Big data analytics is becoming a vital research area in domains such as climate data analysis, which demand fast access to data. Nowadays, MapReduce, an open-source distributed computing framework, is widely used in many domains of big data analysis. In our work, we have developed a conceptual data modeling framework for the implementation of a hybrid data warehouse model that stores the features of National Climatic Data Center (NCDC) climate data. The hybrid data warehouse model for climate big data enables the identification of weather patterns applicable to agricultural and other climate change-related studies, which in turn play a major role in recommending actions to be taken by domain experts and in making contingency plans for extreme cases of weather variability.
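The MapReduce pattern underlying such climate analyses can be illustrated with a toy, single-process simulation of the classic per-year maximum-temperature task often run on NCDC data. The record format here is a simplified stand-in, not the real NCDC layout, and the functions are illustrative rather than the paper's implementation.

```python
# Toy simulation of the MapReduce pattern: a map phase emits (key, value)
# pairs, and a reduce phase groups by key and aggregates. The task is the
# classic "maximum temperature per year" example on simplified records.
from collections import defaultdict

def map_phase(records):
    """Emit (year, temperature) pairs from 'year,temperature' lines."""
    for line in records:
        year, temp = line.split(",")
        yield year, int(temp)

def reduce_phase(pairs):
    """Group pairs by year and keep the maximum temperature per year."""
    groups = defaultdict(list)
    for year, temp in pairs:
        groups[year].append(temp)
    return {year: max(temps) for year, temps in groups.items()}
```

In a real Hadoop deployment the grouping between the two phases is the framework's distributed shuffle; here it is simulated in memory.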
Funding: The study is supported by the Ministry of Science and Technology of China (No. 96-914-05).
Abstract: This paper presents the aim and the design structure of the metallic mineral resources assessment and analysis system. The system adopts an integrated data warehouse technique composed of an affairs-processing layer and an analysis-application layer. The affairs-processing layer includes databases of various forms (such as geological, geophysical, and geochemical databases), while the analysis-application layer includes the data warehouse, online analytical processing, and data mining. This paper also presents in detail the data warehouse of the present system and the appropriate spatial analysis methods and models. Finally, it presents the prospects of the system.
Abstract: In order to exchange and share information among the conceptual models of data warehouses, and to build a solid base for the integration and sharing of metadata, a new multidimensional conceptual model based on XML is presented and its DTD is defined, which can fully describe the various semantic characteristics of a multidimensional conceptual model. Following the UML-based multidimensional conceptual modeling technique, the mapping algorithm between the XML-based multidimensional conceptual model and the UML class diagram is described, providing an application base for the wide use of this technique.
Funding: Supported by the National Natural Science Foundation of China (60673139, 60573090).
Abstract: This paper analyzes the main characteristics, benefits, and disadvantages of existing traditional ETL (extraction, transformation, loading) methods, and summarizes some factors affecting the performance of ETL tools. Then, a new ETL approach, E-LT (extraction, loading, and transformation), is proposed. The E-LT approach applies a database mapping technique so that the loading and transformation stages of the ETL process are performed together after the extraction stage. Thus, it can use SQL commands to complete the loading and transformation processing, and it eliminates the staging area that precedes loading in the traditional ETL process. The framework of an ETL engine based on the E-LT method is presented. The ETL process, including initial loading and incremental refreshment, is discussed in detail, and the SQL-based algorithm for initial loading is presented. Experimental results and theoretical analysis show that the loading throughput of the E-LT method outperforms some commercial ETL approaches. Finally, a real case applying the E-LT method to marine data warehousing is discussed to illustrate the validity of the proposed method.
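The E-LT idea of loading raw data first and then transforming it with SQL inside the target database can be sketched compactly with SQLite as a stand-in target. The table and column names are illustrative assumptions; the paper's engine and SQL algorithms are not reproduced here.

```python
# Sketch of E-LT: extracted rows are bulk-loaded untransformed into a raw
# table in the target database, and the transformation is then a single SQL
# statement executed by the database itself, with no external staging area.
# SQLite stands in for the target DBMS; names are illustrative.
import sqlite3

def elt(rows):
    """rows: extracted (region, amount) tuples; returns totals per region."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_sales (region TEXT, amount INTEGER)")
    # Load: insert the extracted rows as-is (no transformation yet).
    con.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)
    # Transform: aggregate inside the target database with SQL.
    con.execute("""CREATE TABLE sales_by_region AS
                   SELECT region, SUM(amount) AS total
                   FROM raw_sales GROUP BY region""")
    return dict(con.execute("SELECT region, total FROM sales_by_region"))
```

Pushing the transformation into the database like this is what lets E-LT exploit the target engine's set-oriented SQL execution instead of row-by-row processing in a separate ETL tool.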
Abstract: Business process improvement is a systematic approach used by many organizations to continuously improve their quality of service. Integral to it is analyzing the current performance of each task of the process and assigning the most appropriate resources to each task. Continuing our previous work, we categorize resources into human and non-human resources. For instance, in the healthcare domain, human resources include doctors, nurses, and other staff responsible for the execution of healthcare activities, whereas non-human resources include surgical and other equipment needed for execution. In this study, we contend that the two types of resources (human and non-human) have a different impact on process performance, so their suitability should be measured differently. However, no work has been done to evaluate the suitability of non-human resources for the tasks of a process; consequently, it is difficult to identify and subsequently overcome the inefficiencies that non-human resources cause for a task. To address this problem, we present a three-step method to compute a suitability score of non-human resources for a task. As an evaluation, a healthcare case study is used to illustrate the applicability of the proposed method. Furthermore, we performed a controlled experiment to evaluate its usability; the encouraging response shows the usefulness of the proposed method.
Abstract: Enterprises continuously aim at improving the execution of their processes to achieve a competitive edge. One of the established ways of improving process performance is to assign the most appropriate resources to each task of the process. However, evaluations of business process improvement approaches have established that a method to guide decision-makers, in a structured way, in identifying the most appropriate resources for a task is missing. This is because the relationship between resources and tasks is poorly understood and advancements in business process intelligence have been ignored. To address this problem, an integrated resource classification framework is presented that identifies competence, suitability, and preference as the relationships of a task with resources. However, only the competence relationship of human resources with a task is covered in this research, as a resource competence model. Furthermore, a competency calculation method is presented as a user guidance layer for business process intelligence-based resource competence evaluation. The computed competencies serve as a basic input for choosing the most appropriate resources for each task of the process. The applicability of the method is illustrated through a healthcare case study.
Abstract: The fourth International Conference on Web Information Systems and Applications (WISA 2007) received 409 submissions and accepted 37 papers for publication in this issue. The papers cover broad research areas, including Web mining and data warehousing, Deep Web and Web integration, P2P networks, text processing and information retrieval, as well as Web services and Web infrastructure. After briefly introducing the WISA conference, this survey outlines current activities and future trends concerning Web information systems and applications, based on the papers accepted for publication.