Database systems have consistently been prime targets for cyber-attacks and threats due to the critical nature of the data they store. Despite the increasing reliance on database management systems, this field continues to face numerous cyber-attacks. Database management systems serve as the foundation of any information system or application. Any cyber-attack can result in significant damage to the database system and loss of sensitive data. Consequently, cyber risk classifications and assessments play a crucial role in risk management and establish an essential framework for identifying and responding to cyber threats. Risk assessment aids in understanding the impact of cyber threats and developing appropriate security controls to mitigate risks. The primary objective of this study is to conduct a comprehensive analysis of cyber risks in database management systems, including classifying threats, vulnerabilities, impacts, and countermeasures. This classification helps to identify suitable security controls to mitigate cyber risks for each type of threat. Additionally, this research aims to explore technical countermeasures to protect database systems from cyber threats. This study employs the content analysis method to collect, analyze, and classify data in terms of types of threats, vulnerabilities, and countermeasures. The results indicate that SQL injection attacks and Denial of Service (DoS) attacks were the most prevalent technical threats in database systems, each accounting for 9% of incidents. Vulnerable audit trails, intrusion attempts, and ransomware attacks were classified as the second level of technical threats in database systems, comprising 7% and 5% of incidents, respectively. Furthermore, the findings reveal that insider threats were the most common non-technical threats in database systems, accounting for 5% of incidents. Moreover, the results indicate that weak authentication, unpatched databases, weak audit trails, and multiple usage of an account were the most common technical vulnerabilities in database systems, each accounting for 9% of vulnerabilities. Additionally, software bugs, insecure coding practices, weak security controls, insecure networks, password misuse, weak encryption practices, and weak data masking were classified as the second level of security vulnerabilities in database systems, each accounting for 4% of vulnerabilities. The findings from this work can assist organizations in understanding the types of cyber threats and developing robust strategies against cyber-attacks.
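The abstract identifies SQL injection as the most prevalent technical threat. A standard countermeasure, not specific to this study, is to bind user input as query parameters instead of concatenating it into SQL text. The sketch below uses Python's built-in sqlite3 module; the table and column names are hypothetical and only illustrate the technique.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    """Look up a user by name without exposing the query to SQL injection."""
    # UNSAFE alternative: string concatenation lets "' OR '1'='1" rewrite the query.
    # cur = conn.execute("SELECT id, name, role FROM users WHERE name = '" + username + "'")
    # SAFE: the driver passes `username` as a bound parameter, never as SQL text.
    cur = conn.execute("SELECT id, name, role FROM users WHERE name = ?", (username,))
    return cur.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
    conn.execute("INSERT INTO users (name, role) VALUES ('alice', 'admin')")
    # The injection attempt is treated as a literal name and matches nothing.
    print(find_user(conn, "alice' OR '1'='1"))   # []
    print(find_user(conn, "alice"))              # [(1, 'alice', 'admin')]
```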
Nowadays, it is extremely urgent for software engineering education to cultivate the knowledge and ability of database talents in the era of big data. To this end, this paper proposes a talent-training teaching modality that integrates knowledge, ability, practice, and innovation (KAPI) for the Database System course. The teaching modality contains three parts: top-level design, the course learning process, and course assurance and evaluation. The top-level design sorts out the core knowledge of the course and determines a mixed online and offline teaching platform. The course learning process emphasizes the correspondence between core knowledge points and ability enhancement, and the course is practiced in the form of experimental projects to enhance students' innovation consciousness and ability. The assurance and evaluation of the course follow an outcome-based education (OBE) orientation, which realizes objective evaluation of students' learning process and final performance. The teaching results of the course over the past two years show that the KAPI-based teaching modality has achieved better results, and students' evaluations indicate satisfaction with the modality. The teaching modality in this paper helps to stimulate students' initiative, broaden their knowledge, and improve their practical ability, and thus helps to cultivate the innovative, high-quality engineering talents required by emerging engineering education.
This book chapter is an extended version of the research paper entitled "Use of Component Integration Services in Multidatabase Systems", presented and published at the 13th ISITA, the National Conference of Recent Trends in Mathematical and Computer Sciences, T.M.B. University, Bhagalpur, India, January 3-4, 2015. Information is widely distributed across many remote, distributed, and autonomous databases (local component databases) in heterogeneous formats. The integration of heterogeneous remote databases is a difficult task, and it has already been addressed by several projects to a certain extent. In this chapter, we discuss how to integrate heterogeneous distributed local relational databases, chosen for their simplicity, excellent security, performance, power, flexibility, data independence, support for new hardware technologies, and spread across the globe. We also discuss how to constitute a global conceptual schema in the multidatabase system using Sybase Adaptive Server Enterprise's Component Integration Services (CIS) and OmniConnect. This is feasible for higher education institutions as well as commercial industries. For higher educational institutions, CIS will improve IT integration with their subsidiaries or with other institutions within the country and abroad in terms of educational management, teaching, learning, and research, including promoting international students' academic integration, collaboration, and governance. This will prove an innovative strategy to support the modernization and large-scale expansion of academic institutions, and can be considered IT-institutional alignment within a higher education context. It will also support achieving one of the sustainable development goals set by the United Nations: "Goal 4: ensure inclusive and quality education for all and promote lifelong learning". However, the process of IT integration into higher educational institutions must be thoroughly evaluated, identifying the vital data access points. In this chapter, Section 1 provides an introduction, including the evolution of various database systems, data models, and the emergence of multidatabase systems and their importance; Section 2 discusses Component Integration Services (CIS) and OmniConnect, considering heterogeneous relational distributed local databases from an academic perspective; Section 3 discusses the Sybase Adaptive Server Enterprise (ASE); Section 4 discusses the role of CIS and OmniConnect of Sybase ASE in the multidatabase system; Section 5 shows the database architectural framework; Section 6 provides an implementation overview of the global conceptual schema in the multidatabase system; Section 7 discusses query processing in the CIS; and finally, Section 8 concludes the chapter. The chapter will be of considerable help to students, as the evolution of databases and data models and the emergence of multidatabases are discussed in detail. Where additional useful information is cited, the source of each citation is properly given in the references.
The selection of titanium alloys has become a complex decision-making task due to the growing number of titanium alloys being created and utilized, each having its own characteristics, advantages, and limitations. In choosing the most appropriate titanium alloy, it is essential to offer a reasonable and intelligent service to technical engineers. One possible solution to this problem is to develop a database system (DS) to help retrieve rational proposals from different databases and information sources and analyze them to provide useful and explicit information. For this purpose, a design strategy based on fuzzy set theory is proposed, and a distributed database system is developed. Through ranking of the candidate titanium alloys, the most suitable material is determined. The selection results are found to be in good agreement with the practical situation.
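The abstract does not give the authors' fuzzy-set formulation, so the following is only a generic sketch of fuzzy-set-style material ranking: each candidate's properties are mapped to membership degrees in [0, 1] and aggregated with importance weights. The alloy names, property values, and weights are invented for illustration.

```python
# Generic fuzzy-set style ranking sketch (illustrative data, not from the paper).
# Each property value is mapped to a membership degree in [0, 1], then the
# weighted degrees are aggregated into a single suitability score per alloy.

candidates = {                                   # hypothetical property data
    "Ti-6Al-4V":    {"strength": 950, "density": 4.43, "cost": 0.6},
    "Ti-5Al-2.5Sn": {"strength": 830, "density": 4.48, "cost": 0.5},
    "Ti-3Al-2.5V":  {"strength": 620, "density": 4.48, "cost": 0.4},
}
weights = {"strength": 0.5, "density": 0.3, "cost": 0.2}        # assumed importance
benefit = {"strength": True, "density": False, "cost": False}   # higher-is-better?

def membership(value, lo, hi, higher_is_better):
    """Linear membership function mapping a raw value into [0, 1]."""
    if hi == lo:
        return 1.0
    m = (value - lo) / (hi - lo)
    return m if higher_is_better else 1.0 - m

def rank(cands):
    """Aggregate weighted membership degrees into one score per candidate."""
    scores = {}
    for prop, w in weights.items():
        values = [c[prop] for c in cands.values()]
        lo, hi = min(values), max(values)
        for name, props in cands.items():
            mu = membership(props[prop], lo, hi, benefit[prop])
            scores[name] = scores.get(name, 0.0) + w * mu
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for name, score in rank(candidates):
        print(f"{name}: {score:.3f}")
```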
An excellent cardinality estimation can make the query optimiser produce a good execution plan. Although there are some studies on cardinality estimation, the prediction results of existing cardinality estimators are inaccurate and query efficiency cannot be guaranteed either. In particular, they struggle to accurately capture the complex relationships between multiple tables in complex database systems, and they cannot achieve good results when dealing with complex queries. In this study, a novel cardinality estimator is proposed. It uses a BiLSTM network structure as its core technique and adds an attention mechanism. First, the columns involved in the query statements in the training set are sampled and compressed into bitmaps. Then, the Word2vec model is used to embed the query statements as word vectors. Finally, the BiLSTM network and the attention mechanism are employed to process the word vectors. The proposed model takes into consideration not only the correlation between tables but also the processing of complex predicates. Extensive experiments and an evaluation of the BiLSTM-Attention Cardinality Estimator (BACE) on the IMDB datasets are conducted. The results show that the deep learning model can significantly improve the quality of cardinality estimation, which plays a vital role in query optimisation for complex databases.
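The abstract describes the BACE pipeline only at a high level (bitmap sampling, Word2vec embedding, BiLSTM, attention), so the sketch below is not the authors' model: it assumes PyTorch and shows just a BiLSTM-with-attention regression head over a sequence of pre-embedded query tokens, with all dimensions chosen arbitrarily.

```python
import torch
import torch.nn as nn

class BiLSTMAttentionEstimator(nn.Module):
    """Toy BiLSTM + additive attention regressor for (log-)cardinality."""
    def __init__(self, embed_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)     # scores each time step
        self.head = nn.Linear(2 * hidden, 1)     # predicts log-cardinality

    def forward(self, x):                        # x: (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)                      # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        context = (weights * h).sum(dim=1)       # attention-weighted summary
        return self.head(context).squeeze(-1)    # (batch,)

if __name__ == "__main__":
    model = BiLSTMAttentionEstimator()
    fake_queries = torch.randn(8, 20, 64)        # 8 queries, 20 embedded tokens each
    print(model(fake_queries).shape)             # torch.Size([8])
```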
A DMVOCC-MVDA (distributed multiversion optimistic concurrency control with multiversion dynamic adjustment) protocol was presented to process mobile distributed real-time transactions in mobile broadcast environments. At the mobile hosts, all transactions perform local pre-validation. The local pre-validation process is carried out against the transactions committed at the server in the last broadcast cycle. Transactions that survive local pre-validation must be submitted to the server for final validation. The new protocol eliminates conflicts between mobile read-only and mobile update transactions, and resolves data conflicts flexibly by using multiversion dynamic adjustment of the serialization order to avoid unnecessary restarts of transactions. Mobile read-only transactions can be committed without blocking, and the response time of mobile read-only transactions is greatly shortened. The tolerance of mobile transactions to disconnections from the broadcast channel is increased. In global validation, mobile distributed transactions must be checked to ensure distributed serializability at all participants. The simulation results show that the new concurrency control protocol offers better performance than other protocols in terms of miss rate, restart rate, and commit rate. Under a high workload (think time of 1 s), the miss rate of DMVOCC-MVDA is only 14.6%, which is significantly lower than that of other protocols. The restart rate of DMVOCC-MVDA is only 32.3%, showing that DMVOCC-MVDA can effectively reduce the restart rate of mobile transactions, and its commit rate reaches 61.2%, which is clearly higher than that of other protocols.
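The protocol's details (multiversion dynamic adjustment, broadcast-cycle bookkeeping) are not given in this abstract, so the sketch below only illustrates the general idea of local pre-validation in optimistic concurrency control: a mobile transaction's read set is checked against the write sets of transactions committed at the server in the last cycle, and only survivors are submitted. All names and the data layout are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    tid: int
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

def pre_validate(txn: Transaction, committed_last_cycle: list[Transaction]) -> bool:
    """Backward-style pre-validation at the mobile host.

    The transaction survives only if nothing it read was overwritten by a
    transaction committed at the server during the last broadcast cycle.
    """
    for committed in committed_last_cycle:
        if txn.read_set & committed.write_set:
            return False          # stale read detected -> restart locally
    return True                   # survivor: submit to server for final validation

if __name__ == "__main__":
    committed = [Transaction(1, write_set={"x"})]
    t_reader = Transaction(2, read_set={"y"})                  # read-only, untouched data
    t_stale  = Transaction(3, read_set={"x"}, write_set={"z"})
    print(pre_validate(t_reader, committed))  # True  -> can be committed / submitted
    print(pre_validate(t_stale, committed))   # False -> restart at the mobile host
```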
Aim To develop a heterogeneous database united system (HDBUS) that combines the local databases of Oracle, Sybase, and SQL Server distributed on different servers into a global database, and supports global transaction management and parallel query over the Intranet. Methods The design and implementation of HDBUS rest on two important concepts, including the heterogeneous table join. Results and Conclusion The first concept can be used to process parallel queries over multiple database servers; the second is the key technology of heterogeneous distributed databases.
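HDBUS itself is not available to show, so the following is only a generic sketch of the parallel-query idea: the same query is dispatched to several local component databases concurrently and the partial results are merged. The connection helper is a stand-in for whatever Oracle/Sybase/SQL Server drivers a real system would use.

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_server(server_name: str, sql: str) -> list[tuple]:
    """Stand-in for executing `sql` on one component database via its driver."""
    # A real implementation would open a driver connection to `server_name` here.
    return [(server_name, sql, "row-%d" % i) for i in range(2)]

def parallel_query(servers: list[str], sql: str) -> list[tuple]:
    """Send the same query to every component database and merge the results."""
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        partials = list(pool.map(lambda s: run_on_server(s, sql), servers))
    merged = []
    for rows in partials:
        merged.extend(rows)
    return merged

if __name__ == "__main__":
    rows = parallel_query(["oracle1", "sybase1", "mssql1"],
                          "SELECT id, name FROM student")
    for row in rows:
        print(row)
```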
In this research paper, we study an automatic pattern abstraction and recognition method for large-scale database systems based on natural language processing. In a distributed database, data spread across different nodes, and even across regions, can be well recognized through the network connections between the nodes. Since the model design of the database usually contains a large number of forms, we combine NLP theory with the traditional method to optimize it and reduce data redundancy. The experimental analysis and simulation prove the correctness of our method.
This paper briefly introduces a multi-granularity locking model (MGL) for concurrency control in object-oriented database systems and presents an MGL model formally. Four lock-scheduling algorithms for MGL are proposed in the paper. The ideas of single queue scheduling (SQS) and dual queue scheduling (DQS) have been proposed, and the algorithms and performance evaluations for these two scheduling strategies have been presented elsewhere. This paper describes a new scheduling idea for MGL, compatible requests first (CRF). Combining the new idea with SQS and DQS, we propose two new scheduling algorithms called CRFS and CRFD. After describing the simulation model, this paper compares the performance of these four algorithms. As shown in the experiments, DQS has better performance than SQS, CRFD is better than DQS, CRFS performs better than SQS, and CRFS is the best of the four scheduling algorithms.
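The CRF idea is only named in the abstract, so the sketch below illustrates the underlying mechanics in generic form: a multi-granularity lock compatibility matrix (IS/IX/S/X) and a queue scheduler that, when the head request cannot be granted, scans for later requests compatible with the currently held locks. The data structures are illustrative, not the paper's.

```python
from collections import deque

# Classic multi-granularity lock compatibility matrix (IS, IX, S, X).
COMPAT = {
    "IS": {"IS", "IX", "S"},
    "IX": {"IS", "IX"},
    "S":  {"IS", "S"},
    "X":  set(),
}

def compatible(mode: str, held_modes: list[str]) -> bool:
    """A request is grantable only if it is compatible with every held mode."""
    return all(h in COMPAT[mode] for h in held_modes)

def schedule_crf(held: list[str], waiting: deque) -> list[tuple[str, str]]:
    """Compatible-requests-first: grant any queued request compatible with
    the currently held modes, not just the one at the head of the queue."""
    granted = []
    for txn, mode in list(waiting):
        if compatible(mode, held):
            held.append(mode)
            granted.append((txn, mode))
            waiting.remove((txn, mode))
    return granted

if __name__ == "__main__":
    held = ["S"]                                  # one shared lock already held
    waiting = deque([("T1", "X"), ("T2", "IS"), ("T3", "S")])
    # Pure FIFO would block T2 and T3 behind T1's exclusive request;
    # CRF grants the compatible requests T2 and T3 immediately.
    print(schedule_crf(held, waiting))            # [('T2', 'IS'), ('T3', 'S')]
    print(list(waiting))                          # [('T1', 'X')]
```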
Recovery performance in the event of failures is very important for distributed real-time database systems. This paper presents a time-cognizant logging-based crash recovery scheme (TCLCRS) aimed at distributed real-time databases that adopt a main memory database as their ground support. In our scheme, each site maintains a real-time log for local transactions and for the subtransactions that execute at the site, and performs local checkpointing independently. Log records are stored in non-volatile high-speed storage, which is divided into four partitions based on transaction classes. During restart recovery after a site crash, a partitioned crash recovery strategy is adopted to ensure that the site can be brought up before the entire local secondary database is reloaded into main memory. The partitioned crash recovery strategy not only guarantees that internal consistency is recovered, but also guarantees temporal consistency and recovery of the states of the physical world influenced by uncommitted transactions. Combined with the two-phase commit protocol, TCLCRS can guarantee failure atomicity of distributed real-time transactions.
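The scheme's specifics (time-cognizant records, the exact four partitions) are only summarized above; as a rough sketch of the partitioning idea alone, the code below routes log records to separate partitions by transaction class so that a restart can replay the most urgent class first and bring the site up early. The class names and record layout are invented.

```python
from collections import defaultdict

# Hypothetical transaction classes, ordered by how urgently their log
# partition should be replayed after a site crash (most urgent first).
REPLAY_ORDER = ["hard_deadline", "firm_deadline", "soft_deadline", "non_realtime"]

class PartitionedLog:
    """Toy partitioned log: one append-only list per transaction class."""
    def __init__(self):
        self.partitions = defaultdict(list)

    def append(self, txn_class: str, record: dict) -> None:
        self.partitions[txn_class].append(record)

    def restart_replay(self):
        """Yield records partition by partition so the site can come up
        before the less urgent partitions have been processed."""
        for txn_class in REPLAY_ORDER:
            for record in self.partitions[txn_class]:
                yield txn_class, record

if __name__ == "__main__":
    log = PartitionedLog()
    log.append("non_realtime", {"tid": 7, "op": "update", "page": 12})
    log.append("hard_deadline", {"tid": 3, "op": "update", "page": 4})
    for txn_class, record in log.restart_replay():
        print(txn_class, record)   # hard_deadline records come out first
```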
Most of the proposed concurrency control protocols for real-time database systems are based on the serializability theorem. Owing to the unique characteristics of real-time database applications and the importance of satisfying the timing constraints of transactions, serializability is too strong as a correctness criterion and not suitable for real-time databases in most cases. On the other hand, relaxed serializability, including epsilon serializability and similarity serializability, can allow more real-time transactions to satisfy their timing constraints, but database consistency may be sacrificed to some extent. We thus propose the use of weak serializability (WSR), which is more relaxed than conflict serializability while database consistency is maintained. In this paper, we first formally define the new notion of correctness called weak serializability. After the necessary and sufficient conditions for weak serializability are shown, the corresponding concurrency control protocol WDHP (weak serializable distributed high priority protocol) is outlined for distributed real-time databases, where a new lock mode called the mask lock mode is proposed to simplify the condition of global consistency. Finally, through a series of simulation studies, it is shown that with the new concurrency control protocol the performance of distributed real-time databases can be greatly improved.
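Weak serializability itself is defined in the paper, not here; as background on the criterion it relaxes, the sketch below implements the standard conflict-serializability test: build a precedence graph from conflicting operations and check it for cycles. The schedule encoding is illustrative.

```python
# Standard conflict-serializability test: a schedule is conflict-serializable
# iff its precedence graph (an edge Ti -> Tj for each conflict where Ti acts
# first) is acyclic. Weak serializability, as proposed in the paper, relaxes this.

def precedence_graph(schedule):
    """schedule: list of (txn, op, item) with op in {'r', 'w'}, in execution order."""
    edges = set()
    for i, (ti, op_i, x_i) in enumerate(schedule):
        for tj, op_j, x_j in schedule[i + 1:]:
            if x_i == x_j and ti != tj and "w" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    """Depth-first search with three colors to detect a cycle."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False

    return any(visit(n) for n in graph if color[n] == WHITE)

if __name__ == "__main__":
    # T1 reads x, T2 writes x, T2 reads y, T1 writes y -> cycle -> not serializable.
    s = [("T1", "r", "x"), ("T2", "w", "x"), ("T2", "r", "y"), ("T1", "w", "y")]
    edges = precedence_graph(s)
    print(edges)                                             # {('T1', 'T2'), ('T2', 'T1')}
    print("conflict-serializable:", not has_cycle(edges))    # False
```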
The purpose of this paper is to design and implement a secure open database system for organizations that increasingly open up their information for easy access by different users. The work proposes functionalities such as open password entry with active boxes, combined encryption methods, and an agent, which can be incorporated into an open database system. It designs and implements an algorithm that does not allow users free access into the open database system. A user entering his password only needs to carefully study the sequence of codes and active boxes that describe his password and then enter these codes in place of his active boxes. The approach does not require the input code to be hidden from anyone or converted to placeholder characters for security reasons. Integrating this scheme into an open database system is viable in practice in terms of ease of use and will improve the security level of the information.
This paper formally defines and analyses the new notion of correctness called quasi serializability, and then outlines the corresponding concurrency control protocol QDHP for distributed real-time databases. Finally, through a series of simulation studies, it shows that with the new concurrency control protocol the performance of distributed real-time databases can be much improved.
The increasing number of XML repositories has stimulated the design of systems that can store and query XML data efficiently. OrientX, a native XML database system, is designed to meet this requirement. In this paper, we describe the system structure and design of OrientX, an integrated, schema-based native XML database. The main contributions of OrientX are: a) we have implemented an integrated native XML database system that supports native storage of XML data and, based on it, can handle XPath and XQuery efficiently; b) in our OrientX system, schema information is fully exploited to guide storage, optimization, and query processing.
Outpatients receive medical treatment without being admitted to a hospital. They are not hospitalized for 24 hours or more but visit a hospital, clinic, or associated facility for diagnosis or treatment [1]. However, keeping their records for quick access by management, and providing confidential, secure medical reports that facilitate planning and decision making and hence improve medical service delivery, are vital issues. This paper explores the challenges of the manual outpatient records system at General Hospital, Minna, and proposes solutions to the current challenges by designing an online outpatient database system. The main method used for this research work is the interview. Two (2) doctors, three (3) nurses on duty, and two (2) staff at the record room were interviewed, and fifty (50) sampled outpatient records were collected. The combination of PHP, MySQL, and Macromedia Dreamweaver was used to design the web pages and input data. The records were implemented on the designed outpatient management system and the outputs were produced. The findings show the challenges facing the manual records system: distortion of patients' folders, difficulty in searching for a patient's folder, difficulty in relating previous complaints to new complaints because of the volume of the folder, slow access to a patient's diagnosis history during emergencies, lack of backup when information is lost, and difficulty in preparing accurate and prompt reports because information must be collected from various registers. Based on the findings, this paper highlights possible solutions to the above problems. An online outpatient database system was designed to keep the outpatients' records and improve medical service delivery.
In parallel real-time database systems, concurrency control protocols must satisfy time constraints as well as integrity constraints. The authors present a validation concurrency control (VCC) protocol, which can enhance the performance of the real-time concurrency control mechanism by reducing the number of transactions that might miss their deadlines, and compare its performance by simulation with that of the HP2PL (high priority two-phase locking) protocol and the OCC-TI-WAIT-50 (optimistic concurrency control-time interval-wait-50) protocol under a shared-disk architecture. The simulation results reveal that the presented protocol can effectively reduce the number of restarted transactions that might miss their deadlines and performs better than HP2PL and OCC-TI-WAIT-50. It works well when the transaction arrival rate is lower than a threshold; however, due to resource contention, the percentage of missed deadlines increases sharply when the arrival rate exceeds the threshold.
Schema incompatibility is a major challenge to a federated database system for data sharing among heterogeneous, multiple, and autonomous databases. This paper presents a mapping approach based on an import schema, an export schema, and domain conversion functions, through which schema incompatibility problems such as naming conflicts, domain incompatibility, and entity definition incompatibility can be resolved effectively. The implementation techniques are also discussed.
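The mapping approach is only summarized above; the sketch below shows the general shape of an export-to-import schema mapping with simple domain conversion functions (renaming attributes and converting units/encodings). All field names and conversions are hypothetical.

```python
# Hypothetical export schema of one component database:
#   {"stu_no": str, "full_name": str, "height_cm": float}
# Hypothetical import schema expected by the federation:
#   {"student_id": str, "name": str, "height_m": float}

DOMAIN_CONVERSIONS = {
    # import attribute: (export attribute, conversion function)
    "student_id": ("stu_no", str),
    "name":       ("full_name", str.title),
    "height_m":   ("height_cm", lambda cm: cm / 100.0),
}

def to_import_schema(export_row: dict) -> dict:
    """Map one exported record into the federation's import schema,
    resolving naming conflicts and domain incompatibilities."""
    return {
        imp: convert(export_row[exp])
        for imp, (exp, convert) in DOMAIN_CONVERSIONS.items()
    }

if __name__ == "__main__":
    exported = {"stu_no": "2023-0417", "full_name": "ada lovelace", "height_cm": 165.0}
    print(to_import_schema(exported))
    # {'student_id': '2023-0417', 'name': 'Ada Lovelace', 'height_m': 1.65}
```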
The main purpose of the model is to present how the Unified Modeling Language (UML) can be used for modeling a digital video database system (VDBS). It demonstrates the modeling process that can be followed during the analysis phase of complex applications. In order to guarantee the mapping continuity of the models, the authors propose suggestions for transforming the use case diagrams into an object diagram, which is one of the main diagrams for the subsequent development phases.
An approach to implementing the multimedia database system NHMDB based on the NF2 (non-first-normal-form) data model is presented. This approach is easy to implement because the NF2 structure can efficiently store various media data such as formatted data, text, graphics, images, and voice. The main idea is to expand the conceptual schema to maintain the consistency of the three-level schema in the NHMDB system. We developed, implemented, and experimented with the storage structure and the representation of multimedia data by object identifiers. Implementation techniques are also discussed.
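The NHMDB storage structure itself is not reproduced here; as a rough sketch of the NF2 idea only, the code below represents a non-first-normal-form tuple whose attributes may themselves be relations or media objects, each media object carrying a system-generated object identifier. All field names and values are invented.

```python
import itertools

_oid_counter = itertools.count(1)

def media_object(kind: str, payload: bytes) -> dict:
    """Wrap a piece of media data with a system-generated object identifier."""
    return {"oid": next(_oid_counter), "kind": kind, "payload": payload}

# One NF2 (non-first-normal-form) tuple: atomic attributes sit next to
# relation-valued and media-valued attributes instead of being flattened.
document = {
    "doc_id": 42,
    "title": "Campus map",
    "authors": [                       # relation-valued attribute
        {"name": "Li", "affiliation": "CS"},
        {"name": "Wang", "affiliation": "EE"},
    ],
    "figures": [                       # media-valued attribute with OIDs
        media_object("image", b"<png bytes>"),
        media_object("voice", b"<wav bytes>"),
    ],
}

if __name__ == "__main__":
    for fig in document["figures"]:
        print(fig["oid"], fig["kind"])
```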
In this paper, an entity-relation data model for integrating spatio-temporal data is designed. With this design, spatio-temporal data can be effectively stored and spatio-temporal analysis can be easily realized.