Journal Articles
56 articles found
1. Modelling and implementing big data warehouses for decision support (Cited by 2)
Authors: Maribel Yasmina Santos, Bruno Martinho, Carlos Costa. Journal of Management Analytics, EI, 2017, Issue 2, pp. 111-129.
In the era of Big Data, many NoSQL databases emerged for the storage and later processing of vast volumes of data, using data structures that can follow columnar, key-value, document or graph formats. For analytical contexts requiring a Big Data Warehouse, Hive is used as the driving force, allowing the analysis of vast amounts of data. Data models in Hive are usually defined taking into consideration the queries that need to be answered. In this work, a set of rules is presented for the transformation of multidimensional data models into Hive tables, making data available at different levels of detail. These several levels are suited for answering different queries, depending on the analytical needs. After the identification of the Hive tables, this paper summarizes a demonstration case in which the implementation of a specific Big Data architecture shows how the evolution from a traditional Data Warehouse to a Big Data Warehouse is possible.
Keywords: big data; data model; data warehouse; Hive; NoSQL
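The transformation the abstract describes, from a multidimensional model to flat tables at several levels of detail, can be sketched in plain Python (the table names, dimensions and figures here are invented for illustration; the paper's actual rules target Hive tables):

```python
from collections import defaultdict

# Toy fact rows from a multidimensional (star-schema) model:
# dimensions (date, product, city) and one measure (sales).
facts = [
    {"date": "2017-01-02", "product": "A", "city": "Porto", "sales": 10.0},
    {"date": "2017-01-02", "product": "B", "city": "Braga", "sales": 4.0},
    {"date": "2017-02-10", "product": "A", "city": "Porto", "sales": 6.0},
]

def materialize(rows, dims, measure="sales"):
    """Aggregate rows to the given dimension subset, producing a flat
    'Hive table' (list of dicts) at that level of detail."""
    acc = defaultdict(float)
    for r in rows:
        key = tuple(r[d] for d in dims)
        acc[key] += r[measure]
    return [dict(zip(dims, k), **{measure: v}) for k, v in sorted(acc.items())]

# One table per analytical level of detail, as in the paper's idea of
# pre-materializing the same facts at several granularities.
tables = {
    "sales_by_date_product_city": materialize(facts, ["date", "product", "city"]),
    "sales_by_product": materialize(facts, ["product"]),
    "sales_by_city": materialize(facts, ["city"]),
}

print(tables["sales_by_product"])
# [{'product': 'A', 'sales': 16.0}, {'product': 'B', 'sales': 4.0}]
```

Queries then pick the coarsest table that still answers them, instead of always scanning the most detailed one.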
2. Efficient Aggregation Algorithms on Very Large Compressed Data Warehouses (Cited by 1)
Authors: 李建中, 李英姝, Jaideep Srivastava. Journal of Computer Science & Technology, SCIE EI CSCD, 2000, Issue 3, pp. 213-229.
Multidimensional aggregation is a dominant operation on data warehouses for on-line analytical processing (OLAP). Many efficient algorithms to compute multidimensional aggregation on relational database based data warehouses have been developed. However, to our knowledge, there is nothing to date in the literature about aggregation algorithms on multidimensional data warehouses that store datasets in multidimensional arrays rather than in tables. This paper presents a set of multidimensional aggregation algorithms on very large and compressed multidimensional data warehouses. These algorithms operate directly on compressed datasets in multidimensional data warehouses without the need to first decompress them. They are applicable to a variety of data compression methods. The algorithms have different performance behavior as a function of dataset parameters, sizes of outputs and main memory availability. The algorithms are described and analyzed with respect to the I/O and CPU costs. A decision procedure to select the most efficient algorithm, given an aggregation request, is also proposed. The analytical and experimental results show that the algorithms are more efficient than the traditional aggregation algorithms.
Keywords: OLAP; aggregation; data warehouse
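The idea of aggregating compressed data without first decompressing it can be illustrated with run-length encoding, one common compression method (a minimal sketch; the paper covers a family of algorithms and compression schemes, not this toy):

```python
# Run-length encoding as (value, count) pairs; the aggregate is computed
# directly on the compressed representation, never expanding it.
def rle_compress(values):
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_sum(runs):
    # O(number of runs) instead of O(number of cells):
    # each run contributes value * run length.
    return sum(v * n for v, n in runs)

cells = [0, 0, 0, 5, 5, 2, 0, 0, 0, 0]   # a sparse measure array
runs = rle_compress(cells)
print(runs)           # [(0, 3), (5, 2), (2, 1), (0, 4)]
print(rle_sum(runs))  # 12, identical to sum(cells)
```

On sparse arrays, the compressed scan touches far fewer items than the raw one, which is the source of the I/O and CPU savings the abstract analyzes.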
3. An Application of Rough Set Theory to Modelling and Utilising Data Warehouses
Authors: DENG Ming-rong (1), YANG Jian-bo (2), PAN Yun-he (3). 1. School of Management, Zhejiang University, Hangzhou 310028, China; 2. Manchester School of Management, Institute of Science and Technology, University of Manchester, M60 1QD, UK; 3. Zhejiang University, Hangzhou. Journal of Systems Science and Systems Engineering, SCIE EI CSCD, 2001, Issue 4, pp. 489-496.
A data warehouse often accommodates enormous summary information in various granularities and is mainly used to support on-line analytical processing. Ideally, all detailed data should be accessible by residing in some legacy systems or on-line transaction processing systems. In many cases, however, data sources in computers are also kinds of summary data, due to technological problems or budget limits and also because different aggregation hierarchies may need to be used among various transaction systems. In such circumstances, it is necessary to investigate how to design dimensions, which play a major role in the dimensional model for a data warehouse, and how to estimate summary information which is not stored in the data warehouse. In this paper, rough set theory is applied to support dimension design and information estimation.
Keywords: rough sets; data warehouse; dimension
4. Performance optimization of grid aggregation in spatial data warehouses
Authors: Myoung-Ah Kang, Mehdi Zaamoune, François Pinet, Sandro Bimonte, Philippe Beaune. International Journal of Digital Earth, SCIE EI CSCD, 2015, Issue 12, pp. 970-988.
The storage and querying of large volumes of spatial grids is an open problem. In this paper, we propose a method to optimize queries that aggregate raster grids stored in databases. In our approach, we estimate the result rather than calculate it exactly, which reduces query execution time. One advantage of our method is that it does not require implementing or modifying functionalities of database management systems. Our approach is based on a new data structure and a specific model of SQL queries. Our work is applied here to relational data warehouses.
Keywords: data warehouse; database modelling; geographical information system
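The estimate-instead-of-compute idea can be sketched with precomputed block sums over a 1-D grid (a hypothetical simplification of the paper's data structure; partially covered blocks are pro-rated by overlap, which is where the approximation, and the speed-up, comes from):

```python
# 1-D raster for brevity; fixed-width blocks with precomputed sums.
BLOCK = 4
grid = [1, 2, 3, 4,  5, 6, 7, 8,  9, 10, 11, 12]
block_sums = [sum(grid[i:i + BLOCK]) for i in range(0, len(grid), BLOCK)]

def estimate_range_sum(lo, hi):
    """Estimate sum(grid[lo:hi]) from block sums only: fully covered
    blocks contribute exactly; partial blocks are pro-rated by overlap."""
    total = 0.0
    for b, s in enumerate(block_sums):
        b_lo, b_hi = b * BLOCK, (b + 1) * BLOCK
        overlap = max(0, min(hi, b_hi) - max(lo, b_lo))
        total += s * overlap / BLOCK
    return total

exact = sum(grid[2:10])            # 3+4+5+6+7+8+9+10 = 52
approx = estimate_range_sum(2, 10)
print(exact, approx)
```

The estimator only reads the (much smaller) block-sum array, never the raw grid; on data that is locally uniform, the pro-rating error stays small.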
5. Application of fuzzy equivalence theory in data cleaning
Authors: 李华旸, 刘玉葆, 李又奎. Journal of Southeast University (English Edition), EI CAS, 2004, Issue 4, pp. 454-457.
This paper presents a rule merging and simplifying method and an improved analysis deviation algorithm. Fuzzy equivalence theory avoids the rigid, either-this-or-that judgement of traditional equivalence theory. During a data cleaning task, some rules stand in inclusion relations with each other. The equivalence degree of an included rule is smaller than that of the including rule, so a rule merging and simplifying method is introduced to reduce the total computing time. This kind of relation also affects the deviation of the fuzzy equivalence degree, so an improved analysis deviation algorithm that omits the influence of the included rules' equivalence degrees is presented. Normally, duplicate records are logged in a file and users have to check and verify them one by one, which is time-consuming. The proposed algorithm saves users' labor during duplicate record checking. Finally, an experiment is presented that demonstrates the feasibility of the approach.
Keywords: data recording; data warehouses; database systems; merging; semantics
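A fuzzy equivalence degree for duplicate-record detection might look like the following sketch (the field weights, threshold, and choice of string-similarity measure are assumptions for illustration, not the paper's algorithm):

```python
from difflib import SequenceMatcher

def field_sim(a, b):
    # Fuzzy degree in [0, 1] rather than a rigid equal/not-equal test.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def equivalence_degree(rec1, rec2, weights):
    # Weighted mean of per-field similarities.
    return sum(w * field_sim(rec1[f], rec2[f]) for f, w in weights.items())

weights = {"name": 0.6, "city": 0.4}   # hypothetical weights
r1 = {"name": "Jonh Smith", "city": "Nanjing"}
r2 = {"name": "John Smith", "city": "Nanjing"}
r3 = {"name": "Mary Lee",   "city": "Beijing"}

THRESHOLD = 0.85   # hypothetical cut-off
print(equivalence_degree(r1, r2, weights) >= THRESHOLD)  # True: likely duplicates
print(equivalence_degree(r1, r3, weights) >= THRESHOLD)  # False
```

Records whose degree exceeds the threshold would be flagged as duplicates automatically, which is the manual checking labor the abstract says the algorithm saves.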
6. An Application of a Multi-Tier Data Warehouse in Oil and Gas Drilling Information Management (Cited by 2)
Authors: 张宁生, 王志伟. Petroleum Science, SCIE CAS CSCD, 2004, Issue 4, pp. 1-5.
Expenditure on wells constitutes a significant part of the operational costs for a petroleum enterprise, and most of that cost results from drilling. This has prompted drilling departments to continuously look for ways to reduce their drilling costs and be as efficient as possible. A system called the Drilling Comprehensive Information Management and Application System (DCIMAS) is developed and presented here, with the aim of collecting, storing and making full use of the valuable well data and information relating to all drilling activities and operations. The DCIMAS comprises three main parts: a data collection and transmission system, a data warehouse (DW) management system, and an integrated platform of core applications. With the support of the application platform, the DW management system is introduced, whereby operation data are captured at well sites and transmitted electronically to a data warehouse via transmission equipment and ETL (extract, transform and load) tools. With the high quality of the data guaranteed, our central task is to make the best use of the operation data and information for drilling analysis and to provide further information to guide later production stages. Applications have been developed and integrated on a uniform platform to interface directly with different layers of the multi-tier DW. Engineers in every department now spend less time on data handling and more time on applying technology in their real work with the system.
Keywords: drilling information management; multi-tier data warehouse; information processing; application system
7. Research of data architecture in digital ocean
Authors: 张峰, 李四海, 石绥祥. Marine Science Bulletin, CAS, 2010, Issue 2, pp. 85-96.
The characteristics of marine data, such as multiple sources, polymorphism, diversity and large volume, distinguish them from other data. How to store and manage marine data rationally and effectively, so as to provide powerful data support for marine management information systems and the construction of a "Digital Ocean" prototype system, is an urgent problem to solve. Planning different types of data, such as marine resource, marine environment, marine economy and marine management data, and establishing a marine data architecture framework with uniform standards are intended to realize the effective management of marine data at all levels, such as national and provincial (municipal) marine data, and to meet the needs of fundamental information-platform construction.
Keywords: digital ocean; data architecture; data warehouse; data mart; metadata
8. XML Based Data Cube and X-OLAP
Authors: 王晓玲, 董逸生. Journal of Southeast University (English Edition), EI CAS, 2001, Issue 2, pp. 5-9.
A data warehouse provides storage and management for massive data, but the data schema evolves over time. When the data schema is changed, added to or deleted from, the data in the data warehouse must comply with the changed schema, so the data warehouse must be reorganized or reconstructed; this process is exhausting and wasteful. To cope with these problems, this paper develops an approach to modelling the data cube with XML, which emerges as a universal format for data exchange on the Web and which can make the data warehouse flexible and scalable. This paper also extends the OLAP algebra for the XML-based data cube, which is called X-OLAP.
Keywords: data warehouse; data cube; XML; X-OLAP; semi-structured data
9. Uniform Representation Model for Metadata of Data Warehouse
Authors: 王建芬, 曹元大. Journal of Beijing Institute of Technology, EI CAS, 2002, Issue 1, pp. 85-88.
A uniform metadata representation is introduced for heterogeneous databases, multimedia information and other information sources. Some features of metadata are analyzed, and the limitations of existing metadata models are compared with the new one. The metadata model is described in XML, which is fit for metadata denotation and exchange. Well-structured data, semi-structured data and exterior file data without structure are all described in the metadata model. The model provides feasibility and extensibility for constructing a uniform metadata model of a data warehouse.
Keywords: data warehouse; metadata; data model; XML
10. Development of Geological Data Warehouse (Cited by 2)
Authors: Li Zhenhua, Hu Guangdao, Zhang Zhenfei. Journal of China University of Geosciences, SCIE CSCD, 2003, Issue 3, pp. 261-264.
Data warehouse (DW), a new technology invented in the 1990s, is more useful for integrating and analyzing massive data than the traditional database. Its application in the geology field can be divided into three phases: in 1992-1996, the commercial data warehouse (CDW) appeared; in 1996-1999, the geological data warehouse (GDW) appeared, and geologists and geographers realized the importance of the DW and began studying it, but practical DWs still followed the framework of the DB; from 2000 to the present, the geological data warehouse has grown and the theory of the geo-spatial data warehouse (GSDW) has been developed, but research in the geological area is still deficient except in geography. Although some developments of the GDW have been made, its core still follows the CDW practice of organizing data by time, which brings about three problems: it is difficult to integrate geological data, because the data feature space more than time; it is hard to store the massive data at different levels, for the same reason; and spatial analysis is hardly supported if the data are organized by time as in the CDW. So the GDW should be redesigned: data should be organized by scale, in order to store massive data at different levels and synthesize the data in different granularities, and space control points should be chosen to replace the former time control points, so that different types of data can be integrated by storing each type as one layer and then superposing the layers.
In addition, the data cube, a widely used technology in the CDW, is of little use in the GDW, for the causality among geological data is not as obvious as in commercial data, since the data are the mixed result of many complex rules and their analysis always needs special geological methods and software; on the other hand, a data cube for massive and complex geo-data would consume too much storage space to be practical. On this point, the main purpose of the GDW may be data integration, unlike the CDW's focus on data analysis.
Keywords: data warehouse (DW); geological data warehouse (GDW); space control points; data cube
11. Constructing a raster-based spatio-temporal hierarchical data model for marine fisheries application (Cited by 2)
Authors: SU Fenzhen, ZHOU Chenhu, ZHANG Tianyu. Acta Oceanologica Sinica, SCIE CAS CSCD, 2006, Issue 1, pp. 57-63.
Marine information has been increasing quickly. Traditional database technologies have disadvantages in manipulating large amounts of marine information, which relates to position in 3-D and to time. Recently, greater emphasis has been placed on GIS (geographical information systems) to deal with marine information. GIS has shown great success in terrestrial applications in recent decades, but its use in marine fields has been far more restricted. One of the main reasons is that most GIS systems, or their data models, are designed for land applications; they cannot cope well with the nature of the marine environment and of marine information. This becomes a fundamental challenge to the traditional GIS and its data structure. This work designed a data model, the raster-based spatio-temporal hierarchical data model (RSHDM), for marine information systems, and for knowledge discovery from spatio-temporal data, which bases itself on the nature of marine data and overcomes the shortcomings of current spatio-temporal models when they are used in this field. As an experiment, a marine fishery data warehouse (FDW) for marine fishery management was set up based on the RSHDM. The experiment proved that the RSHDM handles the data well and can easily extract the aggregations that management needs at different levels.
Keywords: marine geographical information system; spatio-temporal data model; knowledge discovery; fishery management; data warehouse
12. Data Warehouse Design for Big Data in Academia (Cited by 2)
Authors: Alex Rudniy. Computers, Materials & Continua, SCIE EI, 2022, Issue 4, pp. 979-992.
This paper describes the process of design and construction of a data warehouse ("DW") for an online learning platform using three prominent technologies: Microsoft SQL Server, MongoDB and Apache Hive. The three systems are evaluated for corpus construction and descriptive analytics. The case also demonstrates the value of evidence-centered design principles for data warehouse design that is sustainable enough to adapt to the demands of handling big data in a variety of contexts. Additionally, the paper addresses the maintainability-performance tradeoff, storage considerations and accessibility of big data corpora. In this NSF-sponsored work, the data were processed, transformed, and stored in three versions of a data warehouse in search of a better-performing and more suitable platform. The data warehouse engines, a relational database, a NoSQL database, and a big data technology for parallel computations, were subjected to principled analysis. Design, construction and evaluation of a data warehouse were scrutinized to find improved ways of storing, organizing and extracting information. The work also examines building corpora, performing ad-hoc extractions, and ensuring confidentiality. It was found that Apache Hive demonstrated the best processing time, followed by SQL Server and MongoDB. For analytical queries, SQL Server was the top performer, followed by MongoDB and Hive. This paper also discusses a novel process for rendering students anonymous in compliance with Family Educational Rights and Privacy Act regulations. Five phases for DW design are recommended: 1) establishing goals at the outset based on evidence-centered design principles; 2) recognizing the unique demands of student data and use; 3) adopting a model that integrates cost with technical considerations; 4) designing a comparative database; and 5) planning for a DW design that is sustainable. Recommendations for future research include attempting DW design in contexts involving larger data sets and more refined operations, and ensuring attention is paid to the sustainability of operations.
Keywords: big data; data warehouse; MongoDB; Apache Hive; SQL Server
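One common way to render student records anonymous while keeping them joinable across warehouse tables is a salted one-way hash; this is an assumption for illustration, not necessarily the paper's novel process:

```python
import hashlib

# Salted one-way hash: the warehouse keeps a stable pseudonym per student,
# so records can still be joined across tables without storing the real ID.
SALT = b"institution-secret-salt"   # hypothetical; kept outside the DW itself

def pseudonymize(student_id: str) -> str:
    digest = hashlib.sha256(SALT + student_id.encode("utf-8")).hexdigest()
    return digest[:16]   # a shortened token is enough as a join key

row = {"student_id": "S1234567", "score": 88}
safe_row = {"student": pseudonymize(row["student_id"]), "score": row["score"]}

assert pseudonymize("S1234567") == pseudonymize("S1234567")  # stable pseudonym
assert "S1234567" not in str(safe_row)                       # raw ID is gone
print(safe_row["student"])
```

Because the hash is salted and one-way, someone with warehouse access alone cannot recover the original ID, while analysts can still count and join per-student records.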
13. Research on three-dimension ocean observation data integration and service technology
Authors: 张新, 董文, 郑志刚. Chinese Journal of Oceanology and Limnology, SCIE CAS CSCD, 2011, Issue 2, pp. 482-490.
Currently, ocean data portals are being developed around the world based on Geographic Information Systems (GIS) as a source of ocean data and information. However, given the relatively high temporal frequency and the intrinsic spatial nature of ocean data and information, no current GIS software is adequate to deal effectively and efficiently with spatiotemporal data. Furthermore, while existing ocean data portals are generally designed to meet the basic needs of a broad range of users, they are sometimes very complicated for general audiences, especially those without training in GIS. In this paper, a new technical architecture for an ocean data integration and service system is put forward that consists of four layers: the operation layer; the extract, transform, and load (ETL) layer; the data warehouse layer; and the presentation layer. The integration technology based on XML, ontology, and a spatiotemporal data organization scheme for the data warehouse layer is then discussed. In addition, the ocean observing data service technology realized in the presentation layer is discussed in detail, including the development of the web portal and the ocean data sharing platform. The application to the Taiwan Strait shows that the technology studied in this paper can facilitate sharing, access, and use of ocean observation data. The paper is based on an ongoing research project for the development of an ocean observing information system for the Taiwan Strait that will facilitate the prevention of ocean disasters.
Keywords: data integration; data service; spatiotemporal data warehouse; standard
14. Role of Meta-Model in Engineering Data Warehouse
Authors: SHEN Guo-hua, HUANG Zhi-qiu, WANG Chuan-dong. Transactions of Nanjing University of Aeronautics and Astronautics, EI, 2004, Issue 4, pp. 317-321.
Engineering data are separately organized, and their schemas are increasingly complex and variable. Engineering data management systems need to be able to manage unified data and to be both customizable and extensible. The design of such systems depends heavily on the flexibility and self-description of the data model. The characteristics of engineering data and the facts of their management are analyzed. Engineering data warehouse (EDW) architecture and multi-layer meta-models are then presented, and an approach to managing and using engineering data via meta-objects is proposed. Finally, a flight test EDW application system (FTEDWS) is described in which meta-objects are used to manage engineering data in the data warehouse. It shows that adopting a meta-modelling approach supports interchangeability and provides a sufficiently flexible environment in which system evolution and reusability can be handled.
Keywords: data warehouse; meta-model; engineering data management; self-description
15. Multi-Dimensional Customer Data Analysis in Online Auctions
Authors: LAO Guoling, XIONG Kuan, QIN Zheng. Wuhan University Journal of Natural Sciences, CAS, 2007, Issue 5, pp. 793-798.
In this paper, we designed a customer-centered data warehouse system with five subjects, listing, bidding, transaction, accounts, and customer contact, based on the business process of online auction companies. For each subject, we analyzed its fact indexes and dimensions. Then, taking the transaction subject as an example, we analyzed the data warehouse model in detail and obtained the multi-dimensional analysis structure of the transaction subject. Finally, using data mining for customer segmentation, we divided customers into four types: impulse customers, prudent customers, potential customers, and ordinary customers. Based on the results of multi-dimensional customer data analysis, online auction companies can do more targeted marketing and increase customer loyalty.
Keywords: online auction; data warehouse; online analytical processing (OLAP); data mining; e-commerce
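For illustration only, a rule-based approximation of such a four-way segmentation (the paper derives the segments by data mining; the behavioural measures and thresholds below are entirely hypothetical):

```python
def segment(customer):
    """Hypothetical thresholds on two measures drawn from the bidding and
    transaction subjects; a rule table that only approximates what a
    clustering algorithm over the warehouse data might discover."""
    bids, buys = customer["bids_per_month"], customer["buys_per_month"]
    if buys >= 4 and bids / max(buys, 1) <= 2:
        return "impulse"      # buys often, with little comparison bidding
    if buys >= 4:
        return "prudent"      # buys often, but only after many bids
    if bids >= 8:
        return "potential"    # bids a lot yet rarely closes a deal
    return "ordinary"

customers = [
    {"id": 1, "bids_per_month": 5,  "buys_per_month": 5},
    {"id": 2, "bids_per_month": 20, "buys_per_month": 4},
    {"id": 3, "bids_per_month": 12, "buys_per_month": 1},
    {"id": 4, "bids_per_month": 2,  "buys_per_month": 1},
]
print([segment(c) for c in customers])
# ['impulse', 'prudent', 'potential', 'ordinary']
```

Each segment would then get its own marketing treatment, which is the targeted-marketing use the abstract describes.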
16. ETL Maturity Model for Data Warehouse Systems: A CMMI Compliant Framework
Authors: Musawwer Khan, Islam Ali, Shahzada Khurram, Salman Naseer, Shafiq Ahmad, Ahmed T. Soliman, Akber Abid Gardezi, Muhammad Shafiq, Jin-Ghoo Choi. Computers, Materials & Continua, SCIE EI, 2023, Issue 2, pp. 3849-3863.
The effectiveness of a Business Intelligence (BI) system depends mainly on the quality of the knowledge it produces. The decision-making process is hindered, and the user's trust is lost, if the knowledge offered is undesired or of poor quality. A Data Warehouse (DW) is a huge collection of data gathered from many sources and an important part of any BI solution, assisting management in making better decisions. The Extract, Transform, and Load (ETL) process is the backbone of a DW system, responsible for moving data from source systems into the DW. The more mature the ETL process, the more reliable the DW system. In this paper, we propose the ETL Maturity Model (EMM), which assists organizations in achieving a high-quality ETL system and thereby enhances the quality of the knowledge produced. The EMM is made up of five levels of maturity: Chaotic, Acceptable, Stable, Efficient and Reliable. Each level of maturity contains Key Process Areas (KPAs) that have been endorsed by industry experts and include all critical features of a good ETL system. Quality Objectives (QOs) are defined procedures that, when implemented, result in a high-quality ETL process. Each KPA has its own set of QOs, the execution of which meets the requirements of that KPA. Multiple brainstorming sessions with relevant industry experts helped to enhance the model. EMM was deployed in two key projects utilizing multiple case studies to supplement the validation process and support our claim. This model can assist organizations in improving their current ETL process and transforming it into a more mature ETL system. It can thus provide high-quality information that helps users make better decisions and gains their trust.
Keywords: ETL maturity model; CMMI; data warehouse maturity model
17. Constructing a data platform for surface defect management using a multidimensional database
Authors: SU Yicai, OU Peng, GAO Wenwu. Baosteel Technical Research, CAS, 2013, Issue 4, pp. 16-20.
Surface quality has been one of the key factors influencing the ongoing improvement of the quality of steel; it is therefore urgent to provide methods for the efficient supervision of surface defects. This paper first describes the main problems existing in defect management and then focuses on constructing a data platform for surface defect management using a multidimensional database. Finally, some online applications of the platform at Baosteel are demonstrated. Results show that the constructed multidimensional database provides more structured defect data, and thus is suitable for swift and multi-angle analysis of the defect data.
Keywords: surface defect; multidimensional database; data warehouse; online analysis
18. Refreshing File Aggregate of Distributed Data Warehouse in Sets of Electric Apparatus
Authors: 于宝琴, 王太勇, 张君, 周明, 何改云, 李国琴. Transactions of Tianjin University, EI CAS, 2006, Issue 3, pp. 174-179.
Integrating heterogeneous data sources is a precondition for sharing data across an enterprise. Highly efficient data updating can both save system expenses and offer real-time data. Rapidly modifying data in the pre-processing area of the data warehouse is one of the hot issues. An extract-transform-load design is proposed based on a new data algorithm called Diff-Match, which is developed by utilizing mode matching and data-filtering technology. It can accelerate data renewal, filter heterogeneous data, and seek out different sets of data. Its efficiency has been proved by its successful application in an electric apparatus group enterprise.
Keywords: distributed data warehouse; Diff-Match algorithm; KMP algorithm; file aggregates; extract transform loading
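The diff-based refresh idea, loading only changed records instead of the full snapshot, can be sketched as follows (a generic key-based diff; the actual Diff-Match algorithm additionally uses mode matching and KMP-style filtering, which this sketch omits):

```python
def diff_records(old, new, key="id"):
    """Partition a new source snapshot against the warehouse copy into the
    minimal change sets, so only the difference has to be loaded."""
    old_by_key = {r[key]: r for r in old}
    new_by_key = {r[key]: r for r in new}
    inserts = [r for k, r in new_by_key.items() if k not in old_by_key]
    deletes = [r for k, r in old_by_key.items() if k not in new_by_key]
    updates = [r for k, r in new_by_key.items()
               if k in old_by_key and old_by_key[k] != r]
    return inserts, updates, deletes

old = [{"id": 1, "qty": 3}, {"id": 2, "qty": 7}]   # warehouse copy
new = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]   # fresh source snapshot
ins, upd, dele = diff_records(old, new)
print(ins, upd, dele)
# [{'id': 3, 'qty': 1}] [{'id': 2, 'qty': 9}] [{'id': 1, 'qty': 3}]
```

Applying only these three change sets keeps the warehouse current while moving far less data than a full reload.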
19. The Complete K-Level Tree and Its Application to Data Warehouse Filtering
Authors: 马琳, Wang Kuanquan, Li Haifeng, Zucker J. D. High Technology Letters, EI CAS, 2003, Issue 4, pp. 13-16.
This paper presents a simple complete K-level tree (CKT) architecture for text database organization and rapid data filtering. A database is constructed as a CKT forest, and each CKT contains data of the same length. The maximum depth and the minimum depth of an individual CKT are equal and identical to the data's length. Insertion and deletion operations are defined; a storage method and a filtering algorithm are also designed for a good trade-off between efficiency and complexity. Applications to the computer-aided teaching of Chinese and to protein selection show that a reduction of about 30% in storage consumption and of over 60% in computation may easily be obtained.
Keywords: complete K-level tree; data warehouse organization; data filtering; data retrieval
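A CKT can be modelled as a fixed-depth trie in which every stored item has length K, with one tree per distinct length making up the forest (a minimal sketch of the structure as the abstract describes it, not the paper's storage method or filtering algorithm):

```python
class CKT:
    """Fixed-depth trie: every stored string has the same length K, so
    the tree is 'complete' in the sense that max depth == min depth == K."""
    def __init__(self, k):
        self.k = k
        self.root = {}

    def insert(self, word):
        assert len(word) == self.k, "one CKT holds data of one length only"
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})

    def contains(self, word):
        if len(word) != self.k:
            return False      # filtered out without touching the tree
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return True

# A forest keyed by length: one CKT per distinct data length.
forest = {}
for w in ["data", "cube", "hive"]:
    forest.setdefault(len(w), CKT(len(w))).insert(w)

print(forest[4].contains("cube"), forest[4].contains("cuba"))  # True False
```

Shared prefixes are stored once, which is one plausible source of the storage savings, and a lookup inspects at most K nodes regardless of database size.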
20. The Proposal of Data Warehouse Validation
Authors: Pavol Tanuska, Michal Kebisek, Oliver Moravcik, Pavel Vazan. Computer Technology and Application, 2011, Issue 8, pp. 650-657.
The analysis of relevant standards and guidelines proved the lack of information on actions and activities concerning data warehouse testing. The absence of a comprehensive data warehouse testing methodology is felt most keenly in the phase of data warehouse implementation. The aim of this article is to suggest basic data warehouse testing activities as a final part of a data warehouse testing methodology. The testing activities that must be implemented in the process of data warehouse testing can be split into four logical units covering multidimensional database testing, data pump testing, metadata testing and OLAP (Online Analytical Processing) testing. The main testing activities include: revision of the multidimensional database scheme, optimizing the number of fact tables, the problem of data explosion, and testing the correctness of aggregation and summation of data.
Keywords: data warehouse; test case; testing activities; methodology; validation; UML (Unified Modeling Language)
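The "correctness of aggregation and summation" activity can be automated as a reconciliation check between a fact table and the aggregate derived from it (a minimal sketch with invented data; a real test suite would run such checks for every aggregate in the warehouse):

```python
# Totals in the aggregate table must reconcile with the fact table
# they were derived from, both per group and in the grand total.
fact_table = [
    {"region": "north", "amount": 100},
    {"region": "north", "amount": 50},
    {"region": "south", "amount": 70},
]
aggregate_table = {"north": 150, "south": 70}

def check_aggregation(facts, agg, dim="region", measure="amount"):
    recomputed = {}
    for row in facts:
        recomputed[row[dim]] = recomputed.get(row[dim], 0) + row[measure]
    per_group_ok = recomputed == agg
    grand_total_ok = sum(agg.values()) == sum(r[measure] for r in facts)
    return per_group_ok and grand_total_ok

print(check_aggregation(fact_table, aggregate_table))  # True
```

A deliberately corrupted aggregate (say, north = 140) would make the check fail, which is exactly the defect class this testing activity is meant to catch.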