Expenditure on wells constitutes a significant part of the operational costs for a petroleum enterprise, and most of that cost results from drilling. This has prompted drilling departments to continuously look for ways to reduce their drilling costs and be as efficient as possible. A system called the Drilling Comprehensive Information Management and Application System (DCIMAS) is developed and presented here, with the aim of collecting, storing and making full use of the valuable well data and information relating to all drilling activities and operations. The DCIMAS comprises three main parts: a data collection and transmission system, a data warehouse (DW) management system, and an integrated platform of core applications. With the support of the application platform, the DW management system is introduced, whereby operation data are captured at well sites and transmitted electronically to a data warehouse via transmission equipment and ETL (extract, transform and load) tools. With the high quality of the data guaranteed, our central task is to make the best use of the operation data and information for drilling analysis and to provide further information to guide later production stages. Applications have been developed and integrated on a uniform platform to interface directly with different layers of the multi-tier DW. Now, engineers in every department spend less time on data handling and more time on applying technology in their real work with the system.
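The extract-transform-load stage described above can be sketched as a minimal pipeline. The field names, record format, and cleaning rules below are illustrative assumptions, not the DCIMAS schema:

```python
# Minimal ETL sketch: extract raw well-site records, transform (type-cast and
# quality-check), and load into a warehouse table. Field names are hypothetical.

def extract(raw_rows):
    """Extract: parse comma-separated records transmitted from well sites."""
    keys = ("well_id", "depth_m", "rop_m_per_h")
    return [dict(zip(keys, r.split(","))) for r in raw_rows]

def transform(records):
    """Transform: cast numeric fields and drop records failing basic checks."""
    clean = []
    for rec in records:
        try:
            rec["depth_m"] = float(rec["depth_m"])
            rec["rop_m_per_h"] = float(rec["rop_m_per_h"])
        except ValueError:
            continue  # discard malformed rows
        if rec["depth_m"] >= 0:
            clean.append(rec)
    return clean

def load(records, warehouse):
    """Load: append validated records to a warehouse fact table."""
    warehouse.setdefault("drilling_facts", []).extend(records)
    return len(records)

warehouse = {}
raw = ["W-1,1520.5,12.3", "W-2,bad,9.1", "W-3,980.0,15.7"]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # the malformed W-2 row is dropped, so 2 rows are loaded
```

In a real deployment the transform step would also deduplicate and reconcile units before loading, which is where the data-quality guarantee mentioned above is enforced.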
Objective: To establish an interactive management model for community-oriented high-risk osteoporosis in conjunction with a rural community health service center. Materials and Methods: Toward multidimensional analysis of data, the system we developed combines basic principles of data warehouse technology oriented to the needs of community health services. This paper introduces the steps we took in constructing the data warehouse; the case presented here is that of a district community health management information system in Changshu, Jiangsu Province, China. For our data warehouse, we chose the MySQL 4.5 relational database, the Browser/Server (B/S) model, and the hypertext preprocessor (PHP) as the development tools. Results: The system allowed online analytical processing and next-stage work preparation, and provided a platform for data management, data query, online analysis, etc., for community health service centers, specialist osteoporosis outpatient clinics, and health administration sectors. Conclusion: The users of the remote management system and data warehouse can include community health service centers, osteoporosis departments of hospitals, and health administration departments. The system can provide reference information for the policymaking of health administrators; residents' health information and intervention suggestions for general practitioners in community health service centers; and patients' follow-up information for osteoporosis specialists in general hospitals.
Geo-data is a foundation for the prediction and assessment of ore resources, so managing and making full use of those data, including the geography, geology, mineral deposits, aeromagnetics, gravity, geochemistry and remote sensing databases, is very significant. We developed the national important mining zone database (NIMZDB) to manage 14 national important mining zone databases to support a new round of ore deposit prediction. We found that attention should be paid to the following issues: ① data accuracy: integrity, logic consistency, and attribute, spatial and time accuracy; ② management of both attribute and spatial data in the same system; ③ transforming data between MapGIS and ArcGIS; ④ data sharing and security; ⑤ data searches that can query both attribute and spatial data. Accuracy of input data is guaranteed, and the search, analysis and translation of data between MapGIS and ArcGIS have been made convenient via the development of a data-checking module and a data-management module based on MapGIS and ArcGIS. Using ArcSDE, we based data sharing on a client/server system, and attribute and spatial data are also managed in the same system.
On the basis of the reality of material supply management in coal enterprises, this paper expounds plans for material management systems based on specific IT, and indicates their deficiencies and problems and the necessity of improving them. The structure, models and data organizing schema of the material management decision support system are investigated based on a new data management technology (data warehousing technology).
Discussing the matter of organizational data management implies, almost automatically, the concept of data warehousing as one of the most important parts of a decision support system (DSS), as it supports the integration of information management by aggregating all data formats and provisioning external systems with consistent data content and flows. It works together with the metadata concept, one of the easiest means of integration for software and database systems. Since organizational data management uses the metadata channel for creating a bi-directional flow, when correctly managed, metadata can save both time and resources for organizations. This paper will focus on providing theoretical aspects of the two concepts, together with a short brief on a proposed design model for an organizational management tool.
In modern workforce management, the demand for new ways to maximize worker satisfaction, productivity, and security levels is endless. Workforce movement data, such as the source data from an access control system, can support this ongoing process with subsequent analysis. In this study, a solution to attaining this goal is proposed, based on the design and implementation of a data mart as part of a dimensional trajectory data warehouse (TDW) that acts as a repository for the management of movement data. A novel methodological approach is proposed for modeling multiple spatial and temporal dimensions in a logical model. The case study presented in this paper, modeling and analyzing workforce movement data to support human resource management decision-making, and the following discussion provide a representative example of the contribution of a TDW to the process of information management and decision support systems. The entire process of exporting, cleaning, consolidating, and transforming data is implemented to achieve an appropriate format for final import. Structured query language (SQL) queries demonstrate the convenience of dimensional design for data analysis, and valuable information can be extracted from the movements of employees on company premises to manage the workforce efficiently and effectively. Visual analytics through data visualization support the analysis and facilitate decision-making and business intelligence.
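The convenience of dimensional design that the abstract mentions can be illustrated with a toy star schema for access-control events; the table and column names here are assumptions for illustration, not the paper's actual model:

```python
import sqlite3

# A toy star schema for workforce movement data: a fact table of access-control
# events joined to a door dimension. Table and column names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_door (door_id INTEGER PRIMARY KEY, building TEXT);
    CREATE TABLE fact_access (employee TEXT, door_id INTEGER, hour INTEGER);
    INSERT INTO dim_door VALUES (1, 'HQ'), (2, 'Lab');
    INSERT INTO fact_access VALUES
        ('alice', 1, 9), ('alice', 2, 11), ('bob', 1, 9), ('bob', 1, 17);
""")

# Dimensional design makes aggregate questions a single join away:
# how many entries were recorded per building?
rows = con.execute("""
    SELECT d.building, COUNT(*) AS entries
    FROM fact_access f JOIN dim_door d USING (door_id)
    GROUP BY d.building ORDER BY d.building
""").fetchall()
print(rows)  # [('HQ', 3), ('Lab', 1)]
```

Adding further dimensions (date, shift, department) follows the same pattern: one dimension table per axis of analysis, each joined to the central fact table.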
Engineering data are separately organized, and their schemas are increasingly complex and variable. Engineering data management systems need to be able to manage unified data and to be both customizable and extensible. The design of such systems depends heavily on the flexibility and self-description of the data model. The characteristics of engineering data and the facts of their management are analyzed. Then an engineering data warehouse (EDW) architecture and multi-layer metamodels are presented, and an approach to manage and use engineering data via meta objects is proposed. Finally, an application, a flight test EDW system (FTEDWS), is described, in which meta-objects are used to manage engineering data in the data warehouse. It shows that adopting a meta-modeling approach provides support for interchangeability and a sufficiently flexible environment in which system evolution and reusability can be handled.
Background Existing hospital information systems with simple statistical functions cannot meet current management needs. It is well known that hospital resources are distributed with private property rights among hospitals, such as in the case of the regional coordination of medical services. In this study, to integrate and make full use of medical data effectively, we propose a data warehouse modeling method for the hospital information system. The method can also be employed for a distributed-hospital medical service system. Methods To ensure that hospital information supports the diverse needs of health care, the framework of the hospital information system has three layers: datacenter layer, system-function layer, and user-interface layer. This paper discusses the role of a data warehouse management system in handling hospital information, from the establishment of the data theme, to the design of a data model, to the establishment of a data warehouse. Online analytical processing tools assist user-friendly multidimensional analysis from a number of different angles to extract the required data and information. Results Use of the data warehouse improves online analytical processing and mitigates deficiencies in the decision support system. The hospital information system based on a data warehouse effectively employs statistical analysis and data mining technology to handle massive quantities of historical data, and summarizes clinical and hospital information for decision making. Conclusions This paper proposes the use of a data warehouse for a hospital information system, specifically a data warehouse for the theme of hospital information, determining the data themes, dimensions, modeling, and so on.
The processing of patient information is given as an example that demonstrates the usefulness of this method in the case of hospital information management. Data warehouse technology is an evolving technology, and further research is required on the decision support information extracted by data mining and decision-making technology.
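The multidimensional analysis described in this abstract amounts to aggregating a fact table along different dimensions. A minimal sketch, with hypothetical data and dimension names rather than the paper's schema:

```python
from collections import defaultdict

# Sketch of a roll-up over a small fact table of (department, month, cost)
# records, the core operation behind OLAP-style multidimensional analysis.
# The data and dimension names are illustrative.

facts = [
    ("cardiology", "2024-01", 120.0),
    ("cardiology", "2024-02", 150.0),
    ("radiology",  "2024-01",  80.0),
]

def roll_up(facts, dim):
    """Aggregate total cost along one dimension: 0 = department, 1 = month."""
    totals = defaultdict(float)
    for row in facts:
        totals[row[dim]] += row[2]
    return dict(totals)

print(roll_up(facts, 0))  # {'cardiology': 270.0, 'radiology': 80.0}
print(roll_up(facts, 1))  # {'2024-01': 200.0, '2024-02': 150.0}
```

An OLAP engine generalizes exactly this: precomputed aggregates along every combination of dimensions, so analysts can pivot between "by department" and "by month" views without rescanning the raw records.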
The whole-process project cost management based on building information modeling (BIM) is a new management method, aiming to realize the comprehensive optimization and improvement of project cost management through the application of BIM technology. This paper summarizes and analyzes whole-process project cost management based on BIM, aiming to explore its application and development prospects in the construction industry. Firstly, this paper introduces the role and advantages of BIM technology in engineering cost management, including information integration, data sharing, and collaborative work. Secondly, the paper analyzes the key technologies and methods of whole-process project cost management based on BIM, including model construction, data management, and cost control. In addition, the paper discusses the challenges and limitations of whole-process BIM project cost management, such as the inconsistency of technical standards, personnel training, and consciousness change. Finally, the paper summarizes the advantages and development prospects of whole-process project cost management based on BIM and puts forward directions and suggestions for future research. The research in this paper can provide a reference for construction cost management and promote innovation and development in the construction industry.
The fourth international conference on Web information systems and applications (WISA 2007) received 409 submissions and accepted 37 papers for publication in this issue. The papers cover broad research areas, including Web mining and data warehouses, Deep Web and Web integration, P2P networks, text processing and information retrieval, as well as Web services and Web infrastructure. After briefly introducing the WISA conference, this survey outlines the current activities and future trends concerning Web information systems and applications based on the papers accepted for publication.
The storage and querying of large volumes of spatial grids is a problem yet to be solved. In this paper, we propose a method to optimize queries that aggregate raster grids stored in databases. In our approach, we propose to estimate the result rather than compute it exactly, which reduces query execution time. One advantage of our method is that it does not require implementing or modifying functionalities of database management systems. Our approach is based on a new data structure and a specific model of SQL queries. Our work is applied here to relational data warehouses.
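One common way to estimate rather than compute an aggregate over a raster grid is to precompute per-tile summaries and weight them by their overlap with the query window. The tiling scheme below is an illustrative assumption, not the paper's data structure:

```python
# Sketch of approximate aggregation over a raster grid: precompute (sum, count)
# per TILE x TILE block, then estimate a regional SUM from tile coverage
# fractions instead of scanning every cell. Assumes a square grid whose side
# is divisible by TILE; the scheme is illustrative only.

TILE = 4  # tile side length in cells

def build_summaries(grid):
    """Precompute (sum, count) for each TILE x TILE block."""
    n = len(grid)
    summaries = {}
    for ti in range(0, n, TILE):
        for tj in range(0, n, TILE):
            cells = [grid[i][j]
                     for i in range(ti, ti + TILE)
                     for j in range(tj, tj + TILE)]
            summaries[(ti, tj)] = (sum(cells), len(cells))
    return summaries

def estimate_sum(summaries, i0, i1, j0, j1):
    """Estimate SUM over rows [i0,i1) x cols [j0,j1), weighting each tile's
    precomputed sum by its overlap fraction with the query window."""
    est = 0.0
    for (ti, tj), (s, c) in summaries.items():
        ov_rows = max(0, min(i1, ti + TILE) - max(i0, ti))
        ov_cols = max(0, min(j1, tj + TILE) - max(j0, tj))
        est += s * (ov_rows * ov_cols) / c
    return est

grid = [[1] * 8 for _ in range(8)]          # constant raster: estimate is exact
summaries = build_summaries(grid)
print(estimate_sum(summaries, 0, 5, 0, 5))  # 25.0 for a 5x5 window of ones
```

For non-constant rasters the estimate carries an error proportional to within-tile variance, which is the usual accuracy/speed trade-off of such summary structures.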
Multi-sensor systems are becoming increasingly important in a variety of military and civilian applications. In general, a single-sensor system can only provide partial information about the environment, while a multi-sensor system provides a synergistic effect, which improves the quality and availability of information. Data fusion techniques can effectively combine this environmental information from similar and/or dissimilar sensors. Sensor management, aiming at improving data fusion performance by controlling sensor behavior, plays an important role in a data fusion process. This paper presents a method using a Fisher-information-gain-based sensor effectiveness metric for sensor assignment in multi-sensor, multi-target tracking applications. The Fisher information gain is computed for every sensor-target pairing on each scan. The advantage of this metric over others is that the Fisher information gain for a target obtained by multiple sensors equals the sum of the gains obtained by the individual sensors, so a standard transportation problem formulation can be used to solve the assignment without introducing the concept of a pseudo sensor. The simulation results show the effectiveness of the method.
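Because the gains are additive across sensors, the per-scan decision reduces to maximizing the summed gain over one-to-one pairings. A minimal sketch with a hypothetical gain matrix, solved by brute force for clarity (the paper's transportation formulation would scale to larger problems):

```python
from itertools import permutations

# Sensor-to-target assignment by Fisher information gain. The gain matrix is
# hypothetical; additivity of the gains means the best one-to-one assignment
# maximizes the total, an assignment problem solved here by enumeration.

gain = [  # gain[s][t] = Fisher information gain of sensor s observing target t
    [4.0, 1.0, 0.5],
    [2.0, 3.5, 1.0],
    [1.5, 2.0, 3.0],
]

def best_assignment(gain):
    """Return (max total gain, tuple mapping each sensor to a target)."""
    n = len(gain)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(gain[s][perm[s]] for s in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

total, assignment = best_assignment(gain)
print(total, assignment)  # 10.5 total gain; sensors 0,1,2 -> targets 0,1,2
```

In practice the same objective would be fed to a Hungarian-algorithm or transportation-problem solver, which handles unequal sensor and target counts without enumerating permutations.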
The traditional Apriori algorithm applied in book management systems causes slow system operation due to frequent scanning of the database and an excessive quantity of candidate item-sets, so an information recommendation book management system based on an improved Apriori data mining algorithm is designed, in which the C/S (client/server) and B/S (browser/server) architectures are integrated so as to open the book information to library staff and borrowers. The related data on borrowers and books can be extracted from the book lending database by the data preprocessing sub-module in the system function module. After the data is cleaned, converted and integrated, the association rule mining sub-module uses the improved Apriori data mining algorithm on the processed data to mine the strong association rules whose support exceeds the minimum support threshold and whose confidence exceeds the minimum confidence threshold, generating an association rule database. Association matching is performed by the personalized recommendation sub-module according to the borrower and his selected books in the association rule database, and the book information associated with the books the borrower has read is recommended to him, realizing personalized recommendation of book information. The experimental results show that the system can effectively recommend book-related information, and its CPU occupation rate is only 6.47% when 50 clients are running it at the same time. Overall, it has good performance.
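The support and confidence filtering described above can be sketched on a few borrowing records. The transactions and thresholds are illustrative, and the candidate-pruning step that distinguishes the improved Apriori is omitted for brevity:

```python
from itertools import combinations

# Minimal support/confidence rule mining on borrowing records, in the spirit
# of Apriori. Data and thresholds are hypothetical; only size-2 rules shown.

transactions = [
    {"databases", "data mining"},
    {"databases", "data mining"},
    {"databases", "statistics"},
    {"data mining"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.5, 0.6

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted({i for t in transactions for i in t})
rules = []
for a, b in combinations(items, 2):
    for lhs, rhs in ((a, b), (b, a)):
        sup = support({lhs, rhs})
        # keep the rule lhs -> rhs only if both thresholds are met
        if sup >= MIN_SUPPORT and sup / support({lhs}) >= MIN_CONFIDENCE:
            rules.append((lhs, rhs))
print(rules)  # only databases <-> data mining survives both thresholds
```

The {databases, statistics} pair is pruned by the support threshold (it appears in only one of four transactions), which is exactly the filtering the lending-database sub-module performs at scale.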
In this paper, a new multimedia data model, namely the object-relation hypermedia data model (O-RHDM), which is an advanced and effective multimedia data model, is proposed and designed based on the extension and integration of the non-first normal form (NF2) multimedia data model. Its principle, mathematical description, algebra operations, organization method and storage model are also discussed. A specific application example in multimedia spatial data management is given, in combination with the Hainan multimedia touring information system.
Smart farming has become a strategic approach to sustainable agriculture management and monitoring, with the infrastructure to exploit modern technologies, including big data, the cloud, and the Internet of Things (IoT). Many researchers try to integrate IoT-based smart farming on cloud platforms effectively. They define various frameworks for smart farming and monitoring systems, but these still lack effective data management schemes. Since IoT-cloud systems involve massive structured and unstructured data, data optimization comes into the picture. Hence, this research designs an Information-Centric IoT-based Smart Farming with Dynamic Data Optimization (ICISF-DDO) scheme, which enhances the performance of the smart farming infrastructure with minimal energy consumption and improved lifetime. Here, a conceptual framework of the proposed scheme and a statistical design model have been well defined. The information storage and management with DDO is expanded individually to show the effective use of membership parameters in data optimization. The simulation outcomes state that the proposed ICISF-DDO can surpass existing smart farming systems with a data optimization ratio of 97.71%, a reliability ratio of 98.63%, a coverage ratio of 99.67%, a least sensor error rate of 8.96%, and an efficient energy consumption ratio of 4.84%.
Data mining involves extracting information from large data sets, discovering hidden relationships and unknown dependencies, and supporting strategic decision-making tasks. The alignment of data mining and business would bring benefits to an organization's management. This study investigated the adoption of data mining technologies in managerial accounting systems, concentrating on the challenges and opportunities. The research showed that with the technology adoption, managerial functions could be improved and the current information system could be upgraded. Since technical progress is reshaping the world of business and accountancy, it is important for accountants and finance professionals to exploit information technologies.
Both opportunities and challenges are currently faced by government management innovation in the age of "big data". Traditionally, related studies view the management of governments as the effective means to improve governmental services, without really understanding the structural influence of big data and network technology on the governmental mode of thinking. Against this backdrop, this paper conducts a critical analysis based upon traditional outcomes in this regard, trying to make full use of the function of big data technology. With these efforts, this paper contributes to the building of an interaction theory that could promote transparency of information and the customization and segmentation of policies. By constructing a mode in which management can be carried out based on the law of big data, by building an information management system in which balance can be achieved between responsibility and freedom, and by promoting the rebalancing among public power, online civil society and civil rights, the innovation of governmental management would be achieved.
We advance here a novel methodology for robust intelligent biometric information management with inferences and predictions made using randomness and complexity concepts. Intelligence refers to learning, adaptation, and functionality, and robustness refers to the ability to handle incomplete and/or corrupt adversarial information, on one side, and image and/or device variability, on the other side. The proposed methodology is model-free and non-parametric. It draws support from discriminative methods using likelihood ratios to link biometrics and forensics at the conceptual level. It further links, at the modeling and implementation level, the Bayesian framework, statistical learning theory (SLT) using transduction and semi-supervised learning, and information theory (IT) using mutual information. The key concepts supporting the proposed methodology are a) local estimation to facilitate learning and prediction using both labeled and unlabeled data; b) similarity metrics using regularity of patterns, randomness deficiency, and Kolmogorov complexity (similar to MDL) using strangeness/typicality and ranking p-values; and c) the Cover-Hart theorem on the asymptotic performance of k-nearest neighbors approaching the optimal Bayes error.
Several topics on biometric inference and prediction related to 1) multi-level and multi-layer data fusion, including quality and multi-modal biometrics; 2) score normalization and revision theory; 3) face selection and tracking; and 4) identity management are described here using an integrated approach that includes transduction and boosting for ranking and sequential fusion/aggregation, respectively, on one side, and active learning and change/outlier/intrusion detection realized using information gain and martingale, respectively, on the other side. The methodology proposed can be mapped to additional types of information beyond biometrics.
Clinical data have strong features of complexity and multi-disciplinarity. Clinical data are generated both from the documentation of physicians' interactions with the patient and by diagnostic systems. During the care process, a number of different actors and roles (physicians, specialists, nurses, etc.) need to access patient data and document clinical activities at different moments and in different settings. Thus, data sharing and flexible aggregation based on different users' needs have become more and more important for supporting continuity of care at home, in hospitals, and at outpatient clinics. In this paper, the authors identify and describe needs and challenges for patient data management at the provider level and at the regional (or inter-organizational) level, because nowadays sharing patient data is needed to improve continuity and quality of care. For each level, the authors describe state-of-the-art Information and Communication Technology solutions to collect, manage, aggregate and share patient data, along with examples of best practices and solution scenarios being implemented in the Italian healthcare setting.
In order to improve the efficiency of product design and reduce logistics costs, this paper first analyzes the information integration demands of the product design department and the logistics service department. Based on Web service technology, this paper then builds a product logistics service information model for integration. Furthermore, through an application at a sanitary appliance company, a solid base is laid for increasing product R&D speed and improving logistics service.
Funding: This paper is financially supported by the National Important Mining Zone Database (No. 200210000004) and Prediction and Assessment of Mineral Resources and Social Service (No. 1212010331402).
文摘Geo-data is a foundation for the prediction and assessment of ore resources, so managing and making full use of those data, including geography database, geology database, mineral deposits database, aeromagnetics database, gravity database, geochemistry database and remote sensing database, is very significant. We developed national important mining zone database (NIMZDB) to manage 14 national important mining zone databases to support a new round prediction of ore deposit. We found that attention should be paid to the following issues: ① data accuracy: integrity, logic consistency, attribute, spatial and time accuracy; ② management of both attribute and spatial data in the same system;③ transforming data between MapGIS and ArcGIS; ④ data sharing and security; ⑤ data searches that can query both attribute and spatial data. Accuracy of input data is guaranteed and the search, analysis and translation of data between MapGIS and ArcGIS has been made convenient via the development of a checking data module and a managing data module based on MapGIS and ArcGIS. Using AreSDE, we based data sharing on a client/server system, and attribute and spatial data are also managed in the same system.
文摘On the bas is of the reality of material supply management of the coal enterprise, this paper expounds plans of material management systems based on specific IT, and indicates the deficiencies, the problems of them and the necessity of improving them. The structure, models and data organizing schema of the material management decision support system are investigated based on a new data management technology (data warehousing technology).
文摘Discussing the matter of organizational data management implies, almost automatically, the concept of data warehousing as one of the most important parts of decision support system (DSS), as it supports the integration of information management by aggregating all data formats and provisioning external systems with consistent data content and flows, together with the metadata concept, as one of the easiest ways of integration for software and database systems. Since organizational data management uses the metadata channel for creating a bi-directional flow, when correctly managed, metadata can save both time and resources for organizations. This paperI will focus on providing theoretical aspects of the two concepts, together with a short brief over a proposed model of design for an organizational management tool.
Abstract: In modern workforce management, the demand for new ways to maximize worker satisfaction, productivity, and security levels is endless. Workforce movement data, such as the source data from an access control system, can support this ongoing process with subsequent analysis. In this study, a solution to attaining this goal is proposed, based on the design and implementation of a data mart as part of a dimensional trajectory data warehouse (TDW) that acts as a repository for the management of movement data. A novel methodological approach is proposed for modeling multiple spatial and temporal dimensions in a logical model. The case study presented in this paper for modeling and analyzing workforce movement data supports human resource management decision-making, and the discussion provides a representative example of the contribution of a TDW to information management and decision support systems. The entire process of exporting, cleaning, consolidating, and transforming data is implemented to achieve an appropriate format for final import. Structured query language (SQL) queries demonstrate the convenience of dimensional design for data analysis, and valuable information can be extracted from the movements of employees on company premises to manage the workforce efficiently and effectively. Visual analytics through data visualization support the analysis and facilitate decision-making and business intelligence.
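The kind of dimensional (star-schema) query the abstract refers to can be sketched as follows. This is a minimal illustration using SQLite; the table and column names (`fact_movement`, `dim_employee`, `dim_zone`) are assumptions for the sketch, not the paper's actual TDW schema.

```python
import sqlite3

# Hypothetical star schema for workforce movement data: one fact table
# of access-control events joined to employee and zone dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_employee (emp_id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
CREATE TABLE dim_zone     (zone_id INTEGER PRIMARY KEY, zone_name TEXT);
CREATE TABLE fact_movement (
    emp_id  INTEGER REFERENCES dim_employee,
    zone_id INTEGER REFERENCES dim_zone,
    ts      TEXT                     -- access-control timestamp
);
""")
cur.executemany("INSERT INTO dim_employee VALUES (?,?,?)",
                [(1, "Ana", "R&D"), (2, "Bob", "R&D"), (3, "Eve", "Sales")])
cur.executemany("INSERT INTO dim_zone VALUES (?,?)",
                [(10, "Lab"), (20, "Lobby")])
cur.executemany("INSERT INTO fact_movement VALUES (?,?,?)",
                [(1, 10, "2024-01-02 09:00"), (2, 10, "2024-01-02 09:05"),
                 (1, 20, "2024-01-02 12:00"), (3, 20, "2024-01-02 08:55")])

# Typical dimensional query: movement counts per department per zone.
rows = cur.execute("""
    SELECT e.dept, z.zone_name, COUNT(*) AS visits
    FROM fact_movement f
    JOIN dim_employee e ON e.emp_id  = f.emp_id
    JOIN dim_zone     z ON z.zone_id = f.zone_id
    GROUP BY e.dept, z.zone_name
    ORDER BY e.dept, z.zone_name
""").fetchall()
print(rows)  # → [('R&D', 'Lab', 2), ('R&D', 'Lobby', 1), ('Sales', 'Lobby', 1)]
```

The point of the dimensional design is visible here: analytical questions become simple joins plus a GROUP BY, without touching the raw access-control logs.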
Abstract: Engineering data are separately organized, and their schemas are increasingly complex and variable. Engineering data management systems need to be able to manage unified data and to be both customizable and extensible. The design of such systems depends heavily on the flexibility and self-description of the data model. The characteristics of engineering data and the facts of their management are analyzed. An engineering data warehouse (EDW) architecture and multi-layer metamodels are then presented, and an approach to managing and using engineering data via meta-objects is proposed. Finally, an application, a flight test EDW system (FTEDWS), is described, in which meta-objects are used to manage the engineering data in the data warehouse. It shows that adopting a meta-modeling approach provides support for interchangeability and a sufficiently flexible environment in which system evolution and reusability can be handled.
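The meta-object idea can be sketched briefly: the schema of an engineering data type is itself stored as data, so new types can be registered without changing code. The class and attribute names below (`MetaObject`, `FlightTestPoint`) are illustrative assumptions, not the paper's actual FTEDWS design.

```python
# Minimal sketch: a metamodel layer of self-describing data types.
class MetaObject:
    """Describes the structure of one engineering data type."""
    def __init__(self, name, fields):
        self.name = name
        self.fields = dict(fields)        # field name -> expected Python type

    def validate(self, record):
        """Check a record against this meta-description."""
        return (set(record) == set(self.fields) and
                all(isinstance(record[f], t) for f, t in self.fields.items()))

# Registry of meta-objects: adding a new engineering data type is a
# data change, not a code change.
registry = {}
def register(meta):
    registry[meta.name] = meta

register(MetaObject("FlightTestPoint",
                    {"time_s": float, "altitude_m": float, "mach": float}))

meta = registry["FlightTestPoint"]
ok = meta.validate({"time_s": 1.5, "altitude_m": 9500.0, "mach": 0.82})
bad = meta.validate({"time_s": 1.5})          # incomplete record rejected
print(ok, bad)  # → True False
```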
Abstract: Background: Existing hospital information systems with simple statistical functions cannot meet current management needs. It is well known that hospital resources are distributed with private property rights among hospitals, as in the case of the regional coordination of medical services. In this study, to integrate and make full use of medical data effectively, we propose a data warehouse modeling method for the hospital information system. The method can also be employed for a distributed-hospital medical service system. Methods: To ensure that hospital information supports the diverse needs of health care, the framework of the hospital information system has three layers: a datacenter layer, a system-function layer, and a user-interface layer. This paper discusses the role of a data warehouse management system in handling hospital information, from the establishment of the data themes, to the design of a data model, to the establishment of the data warehouse. Online analytical processing tools assist user-friendly multidimensional analysis from a number of different angles to extract the required data and information. Results: Use of the data warehouse improves online analytical processing and mitigates deficiencies in the decision support system. The hospital information system based on a data warehouse effectively employs statistical analysis and data mining technology to handle massive quantities of historical data, and summarizes clinical and hospital information for decision making. Conclusions: This paper proposes the use of a data warehouse for a hospital information system, covering the determination of hospital information themes and dimensions, modeling, and so on. The processing of patient information is given as an example that demonstrates the usefulness of this method for hospital information management. Data warehouse technology is still evolving, and further research is required on the decision support information extracted by data mining and decision-making technology.
Abstract: Whole-process project cost management based on building information modeling (BIM) is a new management method aiming to comprehensively optimize and improve project cost management through the application of BIM technology. This paper summarizes and analyzes whole-process project cost management based on BIM, with the aim of exploring its application and development prospects in the construction industry. Firstly, the paper introduces the role and advantages of BIM technology in engineering cost management, including information integration, data sharing, and collaborative work. Secondly, it analyzes the key technologies and methods of whole-process project cost management based on BIM, including model construction, data management, and cost control. In addition, the paper discusses the challenges and limitations of whole-process BIM project cost management, such as inconsistent technical standards, personnel training, and shifting mindsets. Finally, the paper summarizes the advantages and development prospects of whole-process project cost management based on BIM and puts forward directions and suggestions for future research. This research can provide a reference for construction cost management and promote innovation and development in the construction industry.
Abstract: The fourth international conference on Web information systems and applications (WISA 2007) received 409 submissions and accepted 37 papers for publication in this issue. The papers cover broad research areas, including Web mining and data warehousing, the Deep Web and Web integration, P2P networks, text processing and information retrieval, and Web services and Web infrastructure. After briefly introducing the WISA conference, the survey outlines the current activities and future trends concerning Web information systems and applications based on the papers accepted for publication.
Funding: This work is funded by the Auvergne region, Feder, Agaetis, and Irstea.
Abstract: The storage and querying of large volumes of spatial grids remain a problem to be solved. In this paper, we propose a method to optimize queries that aggregate raster grids stored in databases. In our approach, we estimate the result rather than calculate it exactly, which reduces query execution time. One advantage of our method is that it does not require implementing or modifying functionalities of the database management system. Our approach is based on a new data structure and a specific model of SQL queries. Our work is applied here to relational data warehouses.
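The estimate-instead-of-exact idea can be sketched as follows: precompute per-block aggregates of the raster, then answer a window-SUM query from the blocks alone, scaling partially covered blocks by their overlap fraction. The block size, grid values, and uniformity assumption are illustrative; the paper's actual data structure and SQL query model are not reproduced here.

```python
def block_sums(grid, b):
    """Precompute sums of b x b blocks (the stored summary structure)."""
    n = len(grid)
    return [[sum(grid[i + di][j + dj]
                 for di in range(b) for dj in range(b))
             for j in range(0, n, b)]
            for i in range(0, n, b)]

def estimate_sum(summary, b, r0, r1, c0, c1):
    """Estimate the sum over rows [r0, r1) x cols [c0, c1) from block sums."""
    est = 0.0
    for bi, row in enumerate(summary):
        for bj, s in enumerate(row):
            # Overlap of the query window with this block.
            ro = max(0, min(r1, (bi + 1) * b) - max(r0, bi * b))
            co = max(0, min(c1, (bj + 1) * b) - max(c0, bj * b))
            est += s * (ro * co) / (b * b)  # assumes roughly uniform blocks
    return est

grid = [[1] * 8 for _ in range(8)]           # toy 8x8 raster, all ones
summary = block_sums(grid, 4)                 # only this summary is queried
print(estimate_sum(summary, 4, 0, 6, 0, 6))  # → 36.0 (exact here: uniform grid)
```

The query touches only the small summary, never the full raster, which is where the execution-time saving comes from; on non-uniform data the answer is an estimate rather than the exact sum.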
Abstract: Multi-sensor systems are becoming increasingly important in a variety of military and civilian applications. In general, a single-sensor system can only provide partial information about the environment, whereas a multi-sensor system provides a synergistic effect that improves the quality and availability of information. Data fusion techniques can effectively combine such environmental information from similar and/or dissimilar sensors. Sensor management, which aims to improve data fusion performance by controlling sensor behavior, plays an important role in the data fusion process. This paper presents a method that uses a Fisher-information-gain-based sensor effectiveness metric for sensor assignment in multi-sensor, multi-target tracking applications. The Fisher information gain is computed for every sensor-target pairing on each scan. The advantage of this metric over others is that the Fisher information gain for a target obtained by multiple sensors equals the sum of the gains obtained by the individual sensors, so a standard transportation problem formulation can be used without introducing the concept of a pseudo-sensor. The simulation results show the effectiveness of the method.
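The additivity property the abstract highlights is what makes the assignment tractable: the total gain of a pairing is just the sum of per-pair gains, so the best assignment maximizes a sum. The sketch below uses an illustrative gain matrix and brute force over permutations in place of a proper transportation-problem solver; in practice the gains would come from the tracking filter.

```python
from itertools import permutations

# gain[s][t] = hypothetical Fisher information gain of sensor s on target t.
gain = [
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.0],
]

def best_assignment(gain):
    """Find the one-to-one sensor-target pairing with maximal total gain."""
    n = len(gain)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):      # perm[s] = target of sensor s
        total = sum(gain[s][perm[s]] for s in range(n))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

perm, total = best_assignment(gain)
print(perm, total)  # → (0, 2, 1) 11.0
```

Because the objective is a plain sum, a standard assignment or transportation solver scales this to realistic sensor and target counts; the brute-force loop is only for clarity.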
Abstract: The traditional Apriori algorithm, as applied in book management systems, causes slow system operation due to frequent database scans and an excessive number of candidate itemsets, so an information recommendation book management system based on an improved Apriori data mining algorithm is designed, in which the C/S (client/server) and B/S (browser/server) architectures are integrated so as to open book information to library staff and borrowers. The data preprocessing sub-module in the system function module extracts information about borrowers and books from the lending database. After the data are cleaned, converted and integrated, the association rule mining sub-module uses the improved Apriori algorithm to mine the strong association rules whose support exceeds the minimum support threshold and whose confidence exceeds the minimum confidence threshold, generating an association rule database. The personalized recommendation sub-module then performs association matching in the rule database according to the borrower and the books he or she has selected, and recommends the book information associated with the books the borrower has read, realizing personalized recommendation. The experimental results show that the system can effectively recommend book-related information, with a CPU occupation rate of only 6.47% when 50 clients run it simultaneously. Overall, the system performs well.
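A minimal Apriori-style sketch of the support/confidence mining described above follows. The transactions and thresholds are illustrative, candidate generation is kept to itemsets of size one and two, and the paper's specific pruning improvements are not reproduced.

```python
from itertools import combinations

# Toy lending transactions: the set of book topics each borrower took out.
transactions = [
    {"algorithms", "databases"},
    {"algorithms", "databases", "networks"},
    {"databases", "networks"},
    {"algorithms", "databases"},
]
min_support, min_conf = 0.5, 0.7   # illustrative thresholds

def support(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Frequent itemsets of sizes 1 and 2 (candidate generation kept simple).
items = sorted({i for t in transactions for i in t})
frequent = [frozenset(c) for k in (1, 2)
            for c in combinations(items, k)
            if support(set(c)) >= min_support]

# Rules A -> B from frequent 2-itemsets meeting the confidence threshold.
rules = []
for fs in (f for f in frequent if len(f) == 2):
    for a in fs:
        ante, cons = frozenset([a]), fs - {a}
        conf = support(fs) / support(ante)
        if conf >= min_conf:
            rules.append((set(ante), set(cons), round(conf, 2)))

print(sorted((sorted(a)[0], sorted(c)[0], cf) for a, c, cf in rules))
```

On this toy data the rule "databases → networks" is dropped for low confidence, while "algorithms → databases", "databases → algorithms" and "networks → databases" survive; these surviving rules are what the recommendation sub-module would match against a borrower's selections.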
Abstract: In this paper, a new multimedia data model, the object-relation hypermedia data model (O-RHDM), an advanced and effective multimedia data model, is proposed and designed based on the extension and integration of the non-first-normal-form (NF2) multimedia data model. Its principle, mathematical description, algebraic operations, organization method and storage model are also discussed. A specific application example in multimedia spatial data management is given, in combination with the Hainan multimedia touring information system.
Abstract: Smart farming has become a strategic approach to sustainable agriculture management and monitoring, with the infrastructure to exploit modern technologies, including big data, the cloud, and the Internet of Things (IoT). Many researchers try to integrate IoT-based smart farming on cloud platforms effectively. They define various frameworks for smart farming and monitoring systems, but these still lack effective data management schemes. Since IoT-cloud systems involve massive structured and unstructured data, data optimization comes into the picture. Hence, this research designs Information-Centric IoT-based Smart Farming with Dynamic Data Optimization (ICISF-DDO), which enhances the performance of the smart farming infrastructure with minimal energy consumption and improved lifetime. A conceptual framework of the proposed scheme and a statistical design model are well defined. Information storage and management with DDO are expanded individually to show the effective use of membership parameters in data optimization. The simulation outcomes state that the proposed ICISF-DDO can surpass existing smart farming systems, with a data optimization ratio of 97.71%, a reliability ratio of 98.63%, a coverage ratio of 99.67%, a least sensor error rate of 8.96%, and an efficient energy consumption ratio of 4.84%.
Abstract: Data mining involves extracting information from large data sets, discovering hidden relationships and unknown dependencies, and supporting strategic decision-making tasks. Aligning data mining with business brings benefits to an organization's management. This study investigated the adoption of data mining technologies in managerial accounting systems, concentrating on the challenges and opportunities. The research showed that with the adoption of the technology, managerial functions could be improved and the current information system could be upgraded. Since technological progress is reshaping the world of business and accountancy, it is significant for accountants and finance professionals to exploit information technologies.
Abstract: Both opportunities and challenges currently face government management innovation in the age of "big data". Traditionally, studies in this area view government management as an effective means of improving governmental services, without really understanding the structural influence of big data and network technology on governmental modes of thinking. Against this backdrop, this paper conducts a critical analysis of the traditional outcomes in this regard, trying to make full use of big data technology. With these efforts, this paper contributes to the building of an interaction theory that could promote the transparency of information and the customization and segmentation of policies. By constructing a mode in which management is carried out according to the laws of big data, by building an information management system in which a balance is achieved between responsibility and freedom, and by promoting rebalancing among public power, online civil society and civil rights, the innovation of governmental management can be achieved.
Abstract: We advance here a novel methodology for robust intelligent biometric information management, with inferences and predictions made using randomness and complexity concepts. Intelligence refers to learning, adaptation, and functionality, and robustness refers to the ability to handle incomplete and/or corrupt adversarial information, on one side, and image and/or device variability, on the other side. The proposed methodology is model-free and non-parametric. It draws support from discriminative methods using likelihood ratios to link biometrics and forensics at the conceptual level. It further links, at the modeling and implementation level, the Bayesian framework, statistical learning theory (SLT) using transduction and semi-supervised learning, and information theory (IT) using mutual information. The key concepts supporting the proposed methodology are a) local estimation to facilitate learning and prediction using both labeled and unlabeled data; b) similarity metrics using regularity of patterns, randomness deficiency, and Kolmogorov complexity (similar to MDL) using strangeness/typicality and ranking p-values; and c) the Cover-Hart theorem on the asymptotic performance of k-nearest neighbors approaching the optimal Bayes error. Several topics on biometric inference and prediction, related to 1) multi-level and multi-layer data fusion, including quality and multi-modal biometrics; 2) score normalization and revision theory; 3) face selection and tracking; and 4) identity management, are described here using an integrated approach that includes transduction and boosting for ranking and sequential fusion/aggregation, respectively, on one side, and active learning and change/outlier/intrusion detection realized using information gain and martingales, respectively, on the other side. The proposed methodology can be mapped to additional types of information beyond biometrics.
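The strangeness/p-value machinery in concept b) can be sketched with a standard k-NN strangeness measure from conformal prediction: a point's strangeness is its distance to the nearest same-class neighbors divided by its distance to the nearest other-class neighbors, and its p-value is the fraction of points at least as strange. The data below are illustrative, and this generic measure stands in for the paper's full randomness-deficiency treatment.

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def strangeness(idx, points, labels, k=1):
    """k-NN strangeness: same-class distances over other-class distances."""
    same  = sorted(dist(points[idx], p)
                   for j, p in enumerate(points)
                   if j != idx and labels[j] == labels[idx])
    other = sorted(dist(points[idx], p)
                   for j, p in enumerate(points)
                   if labels[j] != labels[idx])
    return sum(same[:k]) / sum(other[:k])

# Two tight clusters plus one ambiguous point sitting between them.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (2.5, 2.5)]
labels = ["a", "a", "a", "b", "b", "a"]

alphas = [strangeness(i, points, labels) for i in range(len(points))]
# p-value of the last (ambiguous) point: fraction at least as strange.
p = sum(a >= alphas[-1] for a in alphas) / len(alphas)
print(round(alphas[-1], 3), p)
```

The ambiguous point receives by far the largest strangeness and hence the smallest p-value, which is exactly the signal the methodology uses for ranking, outlier detection, and rejection decisions.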
Abstract: Clinical data are strongly characterized by complexity and multi-disciplinarity. They are generated both from the documentation of physicians' interactions with the patient and by diagnostic systems. During the care process, a number of different actors and roles (physicians, specialists, nurses, etc.) need to access patient data and document clinical activities at different moments and in different settings. Thus, data sharing and flexible aggregation based on different users' needs have become more and more important for supporting continuity of care at home, in hospitals, and in outpatient clinics. In this paper, the authors identify and describe the needs and challenges of patient data management at the provider level and at the regional (or inter-organizational) level, because sharing patient data is nowadays needed to improve continuity and quality of care. For each level, the authors describe state-of-the-art information and communication technology solutions to collect, manage, aggregate and share patient data, along with examples of best practices and solution scenarios being implemented in the Italian healthcare setting.
Abstract: In order to improve the efficiency of product design and reduce logistics costs, this paper first analyzes the information integration demands of the product design and logistics service departments. Based on Web service technology, it then builds a product logistics service information model for integration. Furthermore, through an application in a sanitary appliance company, a solid base is laid for increasing product R&D speed and improving logistics service.