BACKGROUND Ampullary adenocarcinoma is a rare malignant tumor of the gastrointestinal tract. Currently, only a few cases have been reported, resulting in limited information on survival. AIM To develop a dynamic nomogram using internal and external validation to predict survival in patients with ampullary adenocarcinoma. METHODS Data were sourced from the Surveillance, Epidemiology, and End Results (SEER) database. The patients in the database were randomized in a 7:3 ratio into training and validation groups. Using univariate and multivariate Cox regression analyses in the training group, we identified independent risk factors for overall survival and cancer-specific survival to develop the nomogram. The nomogram was validated with a cohort of patients from the First Affiliated Hospital of the Army Medical University. RESULTS For overall and cancer-specific survival, 12 (sex, age, race, lymph node ratio, tumor size, chemotherapy, surgical modality, T stage, tumor differentiation, brain metastasis, lung metastasis, and extension) and 6 (age; SEER stage; lymph node ratio; chemotherapy; surgical modality; and tumor differentiation) independent risk factors, respectively, were incorporated into the nomogram. The area under the curve values at 1, 3, and 5 years were 0.807, 0.842, and 0.826 for overall survival and 0.816, 0.835, and 0.841 for cancer-specific survival, respectively. The internal and external validation cohorts indicated good consistency of the nomogram. CONCLUSION The dynamic nomogram offers robust predictive efficacy for the overall and cancer-specific survival of ampullary adenocarcinoma.
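As a rough sketch of how a nomogram of this kind assigns points, each covariate's Cox-model contribution can be rescaled so that the strongest predictor spans 0-100 points. The coefficients and value ranges below are hypothetical illustrations, not figures from the study.

```python
# Hypothetical illustration of nomogram point assignment: rescale each
# covariate's maximal Cox linear-predictor contribution so that the largest
# effect spans 0-100 points. All coefficients are invented for the sketch.
def nomogram_points(coefs, ranges):
    """coefs: {covariate: beta}; ranges: {covariate: (min, max) value}."""
    # maximal absolute contribution of each covariate to the linear predictor
    spans = {k: abs(coefs[k]) * (ranges[k][1] - ranges[k][0]) for k in coefs}
    max_span = max(spans.values())
    # points per covariate at its most extreme value
    return {k: 100 * spans[k] / max_span for k in coefs}

pts = nomogram_points(
    {"age": 0.03, "tumor_size": 0.02, "chemotherapy": -0.6},
    {"age": (20, 90), "tumor_size": (5, 100), "chemotherapy": (0, 1)},
)
# the covariate with the largest span receives the full 100 points
```

A patient's total points are then summed across covariates and mapped back to a predicted survival probability, which is what the dynamic (web-based) form of the nomogram automates.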
The college innovation and entrepreneurship program is a powerful means to enhance students’ innovation and entrepreneurship skills. Evaluating the maturity of innovation and entrepreneurship projects can stimulate students’ enthusiasm and initiative to participate. Utilizing computer database technology for maturity evaluation can make the process more efficient, accurate, and convenient, aligning with the needs of the information age. Exploring strategies for applying computer database technology in the maturity evaluation of innovation and entrepreneurship projects offers valuable insights and directions for developing these projects, while also providing strong support for enhancing students’ innovation and entrepreneurship abilities.
With the continuous development of computer network technology, its applications in daily life and work have become increasingly widespread, greatly improving efficiency. However, certain security risks remain. To ensure the security of computer networks and databases, it is essential to enhance the security of both through optimization of technology. This includes improving management practices, optimizing data processing methods, and establishing comprehensive laws and regulations. This paper analyzes the current security risks in computer networks and databases and proposes corresponding solutions, offering reference points for relevant personnel.
Sixty-three major inbred varieties and parental lines of major F1 hybrids used in commercial rice production in China were assayed with rice microsatellites screened in a previous study and additional microsatellites on four chromosomes. A set of 24 markers, distributed across all 12 rice chromosomes with 2 markers on each chromosome, was selected and proposed for application in rice variety identification. The 63 major varieties and parental lines, as well as 41 major F1 hybrids, were genotyped with these markers. Alleles detected in each line at each marker locus were verified. By matching marker genotypes of corresponding F1, maternal, and paternal lines of hybrid rice, the high reliability of the maternal lines was verified, data on the paternal lines were corrected, and a false hybrid was removed. A database containing genotype data of 103 major rice varieties and parental lines at the 24 marker loci was constructed and analyzed.
Analyzing polysorbate 20 (PS20) composition and the impact of each component on stability and safety is crucial due to formulation variations and individual tolerance. The similar structures and polarities of PS20 components make accurate separation, identification, and quantification challenging. In this work, a high-resolution quantitative method was developed using single-dimensional high-performance liquid chromatography (HPLC) with charged aerosol detection (CAD) to separate 18 key components with multiple esters. The separated components were characterized by ultra-high-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UHPLC-Q-TOF-MS) with a gradient identical to the HPLC-CAD analysis. The polysorbate compound database and library were expanded more than 7-fold compared with the commercial database. The method was used to investigate differences among PS20 samples of various origins and grades for different dosage forms, to evaluate the composition-process relationship. UHPLC-Q-TOF-MS identified 1329 to 1511 compounds in 4 batches of PS20 from different sources. The method also revealed the impact of 4 degradation conditions on peak components, identifying stable components and their tendencies to change. The HPLC-CAD and UHPLC-Q-TOF-MS results provided insights into fingerprint differences, distinguishing quasi products.
Database systems have consistently been prime targets for cyber-attacks and threats due to the critical nature of the data they store. Despite the increasing reliance on database management systems, this field continues to face numerous cyber-attacks. Database management systems serve as the foundation of any information system or application. Any cyber-attack can result in significant damage to the database system and loss of sensitive data. Consequently, cyber risk classifications and assessments play a crucial role in risk management and establish an essential framework for identifying and responding to cyber threats. Risk assessment aids in understanding the impact of cyber threats and developing appropriate security controls to mitigate risks. The primary objective of this study is to conduct a comprehensive analysis of cyber risks in database management systems, including classifying threats, vulnerabilities, impacts, and countermeasures. This classification helps to identify suitable security controls to mitigate cyber risks for each type of threat. Additionally, this research aims to explore technical countermeasures to protect database systems from cyber threats. This study employs the content analysis method to collect, analyze, and classify data in terms of types of threats, vulnerabilities, and countermeasures. The results indicate that SQL injection attacks and denial-of-service (DoS) attacks were the most prevalent technical threats in database systems, each accounting for 9% of incidents. Vulnerable audit trails, intrusion attempts, and ransomware attacks were classified as the second level of technical threats in database systems, comprising 7% and 5% of incidents, respectively. Furthermore, the findings reveal that insider threats were the most common non-technical threats in database systems, accounting for 5% of incidents. Moreover, the results indicate that weak authentication, unpatched databases, weak audit trails, and multiple usage of an account were the most common technical vulnerabilities in database systems, each accounting for 9% of vulnerabilities. Additionally, software bugs, insecure coding practices, weak security controls, insecure networks, password misuse, weak encryption practices, and weak data masking were classified as the second level of security vulnerabilities in database systems, each accounting for 4% of vulnerabilities. The findings from this work can assist organizations in understanding the types of cyber threats and developing robust strategies against cyber-attacks.
Advanced glycation end-products (AGEs) are a group of heterogeneous compounds formed in heat-processed foods and are proven to be detrimental to human health. Currently, there is no comprehensive database for AGEs in foods that covers the entire range of food categories, which limits accurate risk assessment of dietary AGEs in human diseases. In this study, we first established an isotope-dilution UHPLC-QqQ-MS/MS-based method for simultaneous quantification of 10 major AGEs in foods. The contents of these AGEs were detected in 334 foods covering all main groups consumed in Western and Chinese populations. Nε-Carboxymethyllysine, methylglyoxal-derived hydroimidazolone isomers, and glyoxal-derived hydroimidazolone-1 are the predominant AGEs found in most foodstuffs. Total amounts of AGEs were high in processed nuts, bakery products, and certain types of cereals and meats (>150 mg/kg), and low in dairy products, vegetables, fruits, and beverages (<40 mg/kg). Assessment of estimated daily intake implied that the contribution of food groups to daily AGE intake varied considerably under different eating patterns, and that selection of high-AGE foods leads to up to a 2.7-fold higher intake of AGEs through daily meals. The presented AGE database allows accurate assessment of dietary exposure to these glycotoxins to explore their physiological impacts on human health.
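The estimated-daily-intake calculation behind such an assessment is a simple weighted sum of AGE content and food consumption. The figures in this sketch are hypothetical illustrations, not the study's measurements.

```python
# Estimated daily intake (EDI) of AGEs: sum over food groups of
# AGE content (mg/kg) x daily consumption (kg/day).
# All numbers below are invented for illustration.
def estimated_daily_intake(foods):
    """foods: iterable of (age_content_mg_per_kg, consumption_kg_per_day)."""
    return sum(content * amount for content, amount in foods)

low_age_diet = [(30, 0.3), (20, 0.5), (10, 0.4)]      # e.g. dairy, vegetables, fruit
high_age_diet = [(160, 0.3), (200, 0.1), (150, 0.2)]  # e.g. nuts, bakery, meat
ratio = estimated_daily_intake(high_age_diet) / estimated_daily_intake(low_age_diet)
```

With a per-food database such as the one presented, the same sum can be evaluated for any eating pattern to compare diets, which is how selecting high-AGE foods can multiply daily intake.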
This study examines the database search behaviors of individuals, focusing on gender differences and the impact of planning habits on information retrieval. Data were collected from a survey of 198 respondents, categorized by their discipline, schooling background, internet usage, and information retrieval preferences. Key findings indicate that females are more likely to plan their searches in advance and prefer structured methods of information retrieval, such as using library portals and leading university websites. Males, however, tend to use web search engines and self-archiving methods more frequently. This analysis provides valuable insights for educational institutions and libraries to optimize their resources and services based on user behavior patterns.
A data lake (DL) denotes a vast reservoir or repository of data. It accumulates substantial volumes of data and employs advanced analytics to correlate data from diverse origins containing various forms of semi-structured, structured, and unstructured information. These systems use a flat architecture and run different types of data analytics. NoSQL databases are non-tabular and store data in a different manner than relational tables. NoSQL databases come in various forms, including key-value pairs, documents, wide columns, and graphs, each based on its data model. They offer simpler scalability and generally outperform traditional relational databases. While NoSQL databases can store diverse data types, they lack full support for the atomicity, consistency, isolation, and durability features found in relational databases. Consequently, employing machine learning approaches becomes necessary to categorize complex structured query language (SQL) queries. Results indicate that the most frequently used automatic classification technique in processing SQL queries on NoSQL databases is machine learning-based classification. Overall, this study provides an overview of the automatic classification techniques used in processing SQL queries on NoSQL databases. Understanding these techniques can aid in the development of effective and efficient NoSQL database applications.
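A classifier of the kind the survey describes typically starts from simple lexical features of the SQL text. The keyword rule below is only a stand-in for a trained model, and the feature set is an assumption of this sketch, not taken from any surveyed system.

```python
# Minimal feature extraction and rule-based routing for SQL queries,
# standing in for the machine-learning classifiers the survey covers;
# a real system would train a model on these features rather than
# hard-coding the decision rule.
import re

KEYWORDS = ("select", "insert", "update", "delete", "join", "group by")

def features(sql):
    """Boolean bag-of-keywords features over the lowercased query text."""
    s = sql.lower()
    return {k: bool(re.search(r"\b" + k.replace(" ", r"\s+") + r"\b", s))
            for k in KEYWORDS}

def classify(sql):
    """Route a query: writes vs analytical reads vs simple point reads."""
    f = features(sql)
    if f["insert"] or f["update"] or f["delete"]:
        return "write"
    return "analytical" if f["join"] or f["group by"] else "simple-read"
```

Such a label can then decide how the query is translated or which NoSQL access path serves it.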
The Dynamic Database of Solid-State Electrolyte (DDSE) is an advanced online platform offering a comprehensive suite of tools for solid-state battery research and development. Its key features include statistical analysis of both experimental and computational solid-state electrolyte (SSE) data, interactive visualization through dynamic charts, user data assessment, and literature analysis powered by a large language model. By facilitating the design and optimization of novel SSEs, DDSE serves as a critical resource for advancing solid-state battery technology. This Technical Report provides detailed tutorials and practical examples to guide users in effectively utilizing the platform.
The CALPHAD thermodynamic databases are very useful for analyzing the complex chemical reactions that occur in high-temperature material processes. The FactSage thermodynamic database can be used to calculate complex phase diagrams and equilibrium phases involving refractories in industrial processes. In this study, the FactSage thermodynamic database relevant to ZrO2-based refractories was reviewed, and the application of the database to understanding the corrosion of continuous casting nozzle refractories in steelmaking was presented.
BACKGROUND Elective cholecystectomy (CCY) is recommended for patients with gallstone-related acute cholangitis (AC) following endoscopic decompression to prevent recurrent biliary events. However, the optimal timing and implications of CCY remain unclear. AIM To examine the impact of same-admission CCY compared to interval CCY on patients with gallstone-related AC using the National Readmission Database (NRD). METHODS We queried the NRD to identify all gallstone-related AC hospitalizations in adult patients with and without same-admission CCY between 2016 and 2020. Our primary outcome was the all-cause 30-day readmission rate; secondary outcomes included in-hospital mortality, length of stay (LOS), and hospitalization cost. RESULTS Among the 124964 gallstone-related AC hospitalizations, only 14.67% underwent same-admission CCY. The all-cause 30-day readmission rate in the same-admission CCY group was almost half that of the non-CCY group (5.56% vs 11.50%). Patients in the same-admission CCY group had a longer mean LOS and higher hospitalization costs attributable to surgery. Although the most common reason for readmission was sepsis in both groups, the second most common reason was AC in the interval CCY group. CONCLUSION Our study suggests that patients with gallstone-related AC who do not undergo same-admission CCY have twice the risk of readmission compared to those who undergo CCY during the same admission. These readmissions can potentially be prevented by performing same-admission CCY in appropriate patients, which may reduce subsequent hospitalization costs secondary to readmissions.
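The "twice the risk" conclusion follows directly from the two reported readmission rates:

```python
# Relative readmission risk from the reported 30-day rates:
# 11.50% without same-admission CCY vs 5.56% with it.
def relative_risk(rate_exposed, rate_unexposed):
    return rate_exposed / rate_unexposed

rr = relative_risk(0.1150, 0.0556)  # roughly 2, i.e. about twice the risk
```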
The book chapter is an extended version of the research paper entitled “Use of Component Integration Services in Multidatabase Systems”, presented and published at the 13th ISITA, the National Conference of Recent Trends in Mathematical and Computer Sciences, T.M.B. University, Bhagalpur, India, January 3-4, 2015. Information is widely distributed across many remote, distributed, and autonomous databases (local component databases) in heterogeneous formats. The integration of heterogeneous remote databases is a difficult task, and it has already been addressed by several projects to certain extents. In this chapter, we discuss how to integrate heterogeneous distributed local relational databases because of their simplicity, excellent security, performance, power, flexibility, data independence, support for new hardware technologies, and spread across the globe. We also discuss how to constitute a global conceptual schema in the multidatabase system using Sybase Adaptive Server Enterprise’s Component Integration Services (CIS) and OmniConnect. This is feasible for higher education institutions and commercial industries as well. Considering higher educational institutions, the CIS will improve IT integration for educational institutions with their subsidiaries or with other institutions within the country and abroad in terms of educational management, teaching, learning, and research, including promoting international students’ academic integration, collaboration, and governance. This will prove an innovative strategy to support the modernization and large expansion of academic institutions, and will be considered IT-institutional alignment within a higher education context. This will also support achieving one of the sustainable development goals set by the United Nations: “Goal 4: ensure inclusive and quality education for all and promote lifelong learning”. However, the process of IT integration into higher educational institutions must be thoroughly evaluated, identifying the vital data access points. In this chapter, Section 1 provides an introduction, including the evolution of various database systems, data models, and the emergence of multidatabase systems and their importance. Section 2 discusses Component Integration Services (CIS) and OmniConnect, considering heterogeneous relational distributed local databases from the perspective of academics. Section 3 discusses the Sybase Adaptive Server Enterprise (ASE). Section 4 discusses the role of Component Integration Services and OmniConnect of Sybase ASE in the multidatabase system. Section 5 shows the database architectural framework. Section 6 provides an implementation overview of the global conceptual schema in the multidatabase system. Section 7 discusses query processing in the CIS. Finally, Section 8 concludes the chapter. The chapter will also be helpful to students, as it covers the evolution of databases and data models and the emergence of multidatabases. The source of each citation is properly listed in the references.
The continuously updated database of failures and censored data of numerous products has become large, and on some covariates, information regarding the failure times is missing in the database. As the dataset is large and has missing information, the analysis tasks become complicated and a long time is required to execute the programming codes. In such situations, the divide and recombine (D&R) approach, which has a practical computational performance for big data analysis, can be applied. In this study, the D&R approach was applied to analyze the real field data of an automobile component with incomplete information on covariates using the Weibull regression model. Model parameters were estimated using the expectation maximization algorithm. The results of the data analysis and simulation demonstrated that the D&R approach is applicable for analyzing such datasets. Further, the percentiles and reliability functions of the distribution under different covariate conditions were estimated to evaluate the component performance of these covariates. The findings of this study have managerial implications regarding design decisions, safety, and reliability of automobile components.
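The divide-and-recombine idea can be sketched independently of the Weibull-EM details: split the data into blocks, fit each block separately, and combine the block estimates. The per-block "fit" below is just a mean, standing in for the regression fit used in the study.

```python
# Schematic divide-and-recombine (D&R): partition the data, fit each block
# independently (which parallelizes well on big data), then combine the
# block estimates by averaging. The real study fits a Weibull regression
# per block via EM; a mean is used here only to keep the sketch runnable.
def divide_and_recombine(data, n_blocks, fit):
    size = (len(data) + n_blocks - 1) // n_blocks  # ceiling division
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    estimates = [fit(block) for block in blocks]
    return sum(estimates) / len(estimates)

est = divide_and_recombine(list(range(100)), 4, fit=lambda b: sum(b) / len(b))
```

Because each block is fitted on its own, a dataset too large (or too slow) to model in one pass can still be analyzed, at the price of a recombination step whose statistical properties must be checked for the estimator in question.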
With the wide application and rapid development of database technology, the teaching of database principles should focus on developing students' innovative abilities and application skills. On the basis of a comprehensive grasp of the concepts and principles, combined with actual applications, students will understand development trends and continuously deepen their database knowledge, which greatly benefits the training of well-rounded talent. In this paper, by analyzing the problems existing in database courses, the author puts forward ideas and measures for the reform of the relevant curriculums.
Based on ArcGIS and MapInfo software, we digitized the active tectonics map (1:4,000,000) of China, which was compiled and revised by academician Deng Qidong, and built the spatial database of active tectonics of China. The database integrates rich active-tectonic data, such as a catalogue of earthquakes with magnitude above 6.0, active faults, Quaternary basins, active folds, and their associated attribute parameters, and implements scientific and effective management of this data. At the same time, the spatial database joins the spatial map data and the associated attribute data together, which enables queries between spatial properties and attribute parameters and also makes it possible to perform spatial analysis with different data layers. These features provide much convenience for earthquake studies and allow engineering construction institutions to use this data in practical applications.
For a transaction processing system to operate effectively and efficiently in cloud environments, it is important to distribute huge amounts of data while guaranteeing the ACID (atomic, consistent, isolated, and durable) properties. Moreover, database partition and migration tools can help transplant conventional relational database systems to the cloud environment rather than rebuilding a new system. This paper proposes a database distribution management (DBDM) system, which partitions or replicates the data according to the transaction behaviors of the application system. The principal strategy of DBDM is to keep together the data used in a single transaction, thus avoiding massive transmission of records in join operations. The proposed system has been implemented successfully. Preliminary experiments show that the DBDM performs database partition and migration effectively. Also, the DBDM system is modularly designed to adapt to different database management systems (DBMSs) or different partition algorithms.
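The co-location strategy can be sketched with a union-find over transaction access sets: records touched by the same transaction end up in the same partition, so joins stay local. This is an illustrative reconstruction of the stated principle, not the paper's actual partition algorithm.

```python
# Sketch of transaction-aware partitioning: merge the record sets of each
# transaction with a union-find so that records used together are co-located.
# An illustrative reconstruction of the DBDM principle, not its algorithm.
def partition_by_transactions(transactions):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # union all records that co-occur in a transaction
    for txn in transactions:
        root = find(txn[0])
        for record in txn[1:]:
            parent[find(record)] = root

    # collect the final groups, one per root
    groups = {}
    for record in parent:
        groups.setdefault(find(record), set()).add(record)
    return list(groups.values())

# transactions ("a","b") and ("b","c") chain a, b, c into one partition
parts = partition_by_transactions([("a", "b"), ("b", "c"), ("x", "y")])
```

In a real system each resulting group would be assigned to a node, and hot groups could be replicated instead of partitioned, matching the partition-or-replicate choice the paper describes.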
An outsourced database is a database service provided by cloud computing companies. Using an outsourced database can reduce hardware and software costs and provide more efficient and reliable data processing capacity. However, the outsourced database still faces some challenges. If the service provider is not sufficiently trustworthy, there is the possibility of data leakage, and because the data may contain users' private information, such leakage can cause privacy breaches. For this reason, protecting the privacy of data in the outsourced database becomes very important. In the past, scholars proposed k-anonymity to protect data privacy in the database: it makes data anonymous to avoid privacy leaks. But k-anonymity has some problems: it is irreversible, and it is vulnerable to homogeneity attacks and background knowledge attacks. Later, scholars proposed studies to counter homogeneity and background knowledge attacks, but those studies still cannot recover the original data. In this paper, we propose a data anonymization method that is reversible and also prevents those two attacks. Our study is based on the proposed r-transform, which can be applied to numeric attributes in the outsourced database. In the experiments, we discuss the time required to anonymize and recover data. Furthermore, we investigate the defense against homogeneity and background knowledge attacks. Finally, we summarize the proposed method and future research.
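The k-anonymity property the paper builds on can be checked in a few lines: every combination of quasi-identifier values must occur at least k times. The reversible r-transform itself is the paper's contribution and is not reproduced here; the example rows are hypothetical.

```python
# Check k-anonymity over quasi-identifier columns: each combination of
# quasi-identifier values must occur in at least k rows. Example data
# is invented for illustration.
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    counts = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(c >= k for c in counts.values())

rows = [
    {"age": "30-39", "zip": "123**", "disease": "flu"},
    {"age": "30-39", "zip": "123**", "disease": "cold"},
    {"age": "40-49", "zip": "456**", "disease": "flu"},
]
# False: the (40-49, 456**) group contains only one row
ok = is_k_anonymous(rows, ("age", "zip"), k=2)
```

The homogeneity attack the paper mentions succeeds even on a k-anonymous table when all rows of a group share the same sensitive value, which is why k-anonymity alone is considered insufficient.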
In this paper, a novel Home Location Register (HLR) mobility database recovery scheme is proposed. With database backing up and signal sending as its key processes, the presented scheme is designed to both decrease system costs and reduce the number of lost calls. In our scheme, an algorithm is developed for an HLR to identify the VLRs into which new MSs have roamed since the latest HLR database backup. The identification of those VLRs is used by the HLR to send Unreliable Roaming Data Directive messages to each of them to obtain the correct location information of the new MSs. Additionally, two relationships, one between the number of lost calls and the database backup period, and the other between the backup cost and the period, are analyzed. Both analytical and numerical results indicate that there is an optimal HLR database backup period given certain system parameters, and the total cost can consequently be minimized.
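The cost trade-off behind an optimal backup period has the classic checkpointing form: backup cost per unit time falls as 1/T while the expected loss from stale data grows with T, giving a square-root optimum. The expressions below are this textbook form under stated assumptions, not necessarily the paper's exact cost model.

```python
# Classic backup-period trade-off: per-unit-time cost C(T) = Cb/T + a*T/2,
# where Cb is the cost of one backup and a scales the expected lost-call
# cost with the period T. Setting dC/dT = 0 gives T* = sqrt(2*Cb/a).
# Textbook checkpointing form; the paper's model may differ in detail.
import math

def total_cost(T, backup_cost, loss_rate):
    return backup_cost / T + loss_rate * T / 2

def optimal_backup_period(backup_cost, loss_rate):
    return math.sqrt(2 * backup_cost / loss_rate)

T_star = optimal_backup_period(backup_cost=8.0, loss_rate=4.0)  # 2.0
```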
Funding: Supported by the Appropriate Technology Promotion Program in Chongqing, No. 2023jstg005.
Funding: Undergraduate Teaching Research and Reform Project of the University of Shanghai for Science and Technology (Project No. JGXM202351).
文摘The college innovation and entrepreneurship program is a powerful means to enhance students’innovation and entrepreneurship skills.Evaluating the maturity of innovation and entrepreneurship projects can stimulate students’enthusiasm and initiative to participate.Utilizing computer database technology for maturity evaluation can make the process more efficient,accurate,and convenient,aligning with the needs of the information age.Exploring strategies for applying computer database technology in the maturity evaluation of innovation and entrepreneurship projects offers valuable insights and directions for developing these projects,while also providing strong support for enhancing students’innovation and entrepreneurship abilities.
文摘With the continuous development of computer network technology, its applications in daily life and work have become increasingly widespread, greatly improving efficiency. However, certain security risks remain. To ensure the security of computer networks and databases, it is essential to enhance the security of both through optimization of technology. This includes improving management practices, optimizing data processing methods, and establishing comprehensive laws and regulations. This paper analyzes the current security risks in computer networks and databases and proposes corresponding solutions, offering reference points for relevant personnel.
Abstract: Sixty-three major inbred varieties and parental lines of major F1 hybrids used in commercial rice production in China were assayed with rice microsatellites screened in a previous study and additional microsatellites on four chromosomes. A set of 24 markers, distributed across all 12 rice chromosomes with 2 markers on each, was selected and proposed for application in rice variety identification. The 63 major varieties and parental lines, as well as 41 major F1 hybrids, were genotyped with the markers. Alleles detected in each line at each marker locus were verified. By matching the marker genotypes of corresponding F1, maternal, and paternal lines of hybrid rice, the high reliability of the maternal lines was verified, data on the paternal lines were corrected, and a false hybrid was removed. A database containing genotype data of 103 major rice varieties and parental lines at the 24 marker loci was constructed and analyzed.
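The hybrid-verification step described above checks, locus by locus, whether an F1 genotype can be explained by one allele inherited from the maternal line and one from the paternal line. A minimal sketch of that consistency check (the marker names and allele sizes are hypothetical):

```python
def consistent_trio(f1, mother, father):
    """Return True if, at every marker locus, the F1's two alleles can be
    drawn one from the maternal and one from the paternal genotype."""
    for locus, (a1, a2) in f1.items():
        m, p = mother[locus], father[locus]
        if not ((a1 in m and a2 in p) or (a2 in m and a1 in p)):
            return False  # this locus cannot come from these parents
    return True

# Hypothetical microsatellite genotypes: locus -> (allele1, allele2).
mother = {"RM1": (120, 124), "RM2": (200, 200)}
father = {"RM1": (130, 130), "RM2": (204, 208)}
true_f1 = {"RM1": (120, 130), "RM2": (200, 204)}
false_f1 = {"RM1": (120, 124), "RM2": (200, 204)}  # RM1 lacks a paternal allele

print(consistent_trio(true_f1, mother, father))   # True
print(consistent_trio(false_f1, mother, father))  # False
```

A genotype failing this test at any locus flags either a recording error in a parental line or a false hybrid, which mirrors how the study corrected paternal-line data and removed one false hybrid.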
Funding: financial support from the Science Research Program Project for Drug Regulation, Jiangsu Drug Administration, China (Grant No. 202207); the National Drug Standards Revision Project, China (Grant No. 2023Y41); the National Natural Science Foundation of China (Grant No. 22276080); and the Foreign Expert Project, China (Grant No. G2022014096L).
Abstract: Analyzing polysorbate 20 (PS20) composition and the impact of each component on stability and safety is crucial due to formulation variations and individual tolerance. The similar structures and polarities of PS20 components make accurate separation, identification, and quantification challenging. In this work, a high-resolution quantitative method was developed using single-dimensional high-performance liquid chromatography (HPLC) with charged aerosol detection (CAD) to separate 18 key components with multiple esters. The separated components were characterized by ultra-high-performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UHPLC-Q-TOF-MS) with a gradient identical to the HPLC-CAD analysis. The polysorbate compound database and library were expanded more than 7-fold compared to the commercial database. The method investigated differences in PS20 samples of various origins and grades for different dosage forms to evaluate the composition-process relationship. UHPLC-Q-TOF-MS identified 1329 to 1511 compounds in 4 batches of PS20 from different sources. The method observed the impact of 4 degradation conditions on peak components, identifying stable components and their tendencies to change. The HPLC-CAD and UHPLC-Q-TOF-MS results provided insights into fingerprint differences, distinguishing quasi products.
Funding: supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. KFU242068).
Abstract: Database systems have consistently been prime targets for cyber-attacks and threats due to the critical nature of the data they store. Despite the increasing reliance on database management systems, this field continues to face numerous cyber-attacks. Database management systems serve as the foundation of any information system or application. Any cyber-attack can result in significant damage to the database system and loss of sensitive data. Consequently, cyber risk classifications and assessments play a crucial role in risk management and establish an essential framework for identifying and responding to cyber threats. Risk assessment aids in understanding the impact of cyber threats and developing appropriate security controls to mitigate risks. The primary objective of this study is to conduct a comprehensive analysis of cyber risks in database management systems, including classifying threats, vulnerabilities, impacts, and countermeasures. This classification helps to identify suitable security controls to mitigate cyber risks for each type of threat. Additionally, this research aims to explore technical countermeasures to protect database systems from cyber threats. This study employs the content analysis method to collect, analyze, and classify data in terms of types of threats, vulnerabilities, and countermeasures. The results indicate that SQL injection attacks and denial-of-service (DoS) attacks were the most prevalent technical threats in database systems, each accounting for 9% of incidents. Vulnerable audit trails, intrusion attempts, and ransomware attacks were classified as the second level of technical threats in database systems, comprising 7% and 5% of incidents, respectively. Furthermore, the findings reveal that insider threats were the most common non-technical threats in database systems, accounting for 5% of incidents. Moreover, the results indicate that weak authentication, unpatched databases, weak audit trails, and multiple usage of an account were the most common technical vulnerabilities in database systems, each accounting for 9% of vulnerabilities. Additionally, software bugs, insecure coding practices, weak security controls, insecure networks, password misuse, weak encryption practices, and weak data masking were classified as the second level of security vulnerabilities in database systems, each accounting for 4% of vulnerabilities. The findings from this work can assist organizations in understanding the types of cyber threats and developing robust strategies against cyber-attacks.
Funding: financial support received from the Natural Science Foundation of China (32202202 and 31871735).
Abstract: Advanced glycation end-products (AGEs) are a group of heterogeneous compounds formed in heat-processed foods and are proven to be detrimental to human health. Currently, there is no comprehensive database for AGEs in foods that covers the entire range of food categories, which limits the accurate risk assessment of dietary AGEs in human diseases. In this study, we first established an isotope-dilution UHPLC-QqQ-MS/MS-based method for simultaneous quantification of 10 major AGEs in foods. The contents of these AGEs were detected in 334 foods covering all main groups consumed in Western and Chinese populations. Nε-Carboxymethyllysine, methylglyoxal-derived hydroimidazolone isomers, and glyoxal-derived hydroimidazolone-1 are the predominant AGEs found in most foodstuffs. Total amounts of AGEs were high in processed nuts, bakery products, and certain types of cereals and meats (>150 mg/kg), while low in dairy products, vegetables, fruits, and beverages (<40 mg/kg). Assessment of estimated daily intake implied that the contribution of food groups to daily AGE intake varied considerably under different eating patterns, and selection of high-AGE foods leads to up to a 2.7-fold higher intake of AGEs through daily meals. The presented AGE database allows accurate assessment of dietary exposure to these glycotoxins to explore their physiological impacts on human health.
Abstract: This study examines the database search behaviors of individuals, focusing on gender differences and the impact of planning habits on information retrieval. Data were collected from a survey of 198 respondents, categorized by their discipline, schooling background, internet usage, and information retrieval preferences. Key findings indicate that females are more likely to plan their searches in advance and prefer structured methods of information retrieval, such as using library portals and leading university websites. Males, however, tend to use web search engines and self-archiving methods more frequently. This analysis provides valuable insights for educational institutions and libraries to optimize their resources and services based on user behavior patterns.
Funding: supported by the Student Scheme provided by Universiti Kebangsaan Malaysia under Code TAP-20558.
Abstract: A data lake (DL) denotes a vast reservoir or repository of data. It accumulates substantial volumes of data and employs advanced analytics to correlate data from diverse origins containing various forms of semi-structured, structured, and unstructured information. These systems use a flat architecture and run different types of data analytics. NoSQL databases are non-tabular and store data in a different manner than relational tables. NoSQL databases come in various forms, including key-value pairs, documents, wide columns, and graphs, each based on its own data model. They offer simpler scalability and generally outperform traditional relational databases. While NoSQL databases can store diverse data types, they lack full support for the atomicity, consistency, isolation, and durability features found in relational databases. Consequently, employing machine learning approaches becomes necessary to categorize complex structured query language (SQL) queries. Results indicate that the most frequently used automatic classification technique in processing SQL queries on NoSQL databases is machine-learning-based classification. Overall, this study provides an overview of the automatic classification techniques used in processing SQL queries on NoSQL databases. Understanding these techniques can aid in the development of effective and efficient NoSQL database applications.
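As a stand-in for the machine-learning query classification the survey describes, a toy keyword-feature classifier shows the general shape of the task: extract features from the SQL text, score each category, and route the query accordingly. The categories and keyword sets below are illustrative assumptions, not taken from the study:

```python
# Classify SQL queries into coarse categories by keyword-overlap features.
# Category names and keyword sets are hypothetical examples.
CATEGORIES = {
    "read":      {"select", "from", "where"},
    "write":     {"insert", "update", "delete", "into", "set"},
    "aggregate": {"group", "by", "count", "sum", "avg", "having"},
}

def classify(sql):
    """Tokenize the query, score each category by keyword overlap,
    and return the best-matching category."""
    tokens = set(sql.lower().replace(",", " ").replace("(", " ").split())
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(classify("SELECT name FROM users WHERE id = 1"))           # read
print(classify("INSERT INTO logs (msg) VALUES ('ok')"))          # write
print(classify("SELECT dept, COUNT(*) FROM emp GROUP BY dept"))  # aggregate
```

A real machine-learning pipeline would replace the fixed keyword sets with learned weights (for example, a bag-of-words model trained on labeled queries), but the feature-extraction and scoring structure is the same.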
Abstract: The Dynamic Database of Solid-State Electrolyte (DDSE) is an advanced online platform offering a comprehensive suite of tools for solid-state battery research and development. Its key features include statistical analysis of both experimental and computational solid-state electrolyte (SSE) data, interactive visualization through dynamic charts, user data assessment, and literature analysis powered by a large language model. By facilitating the design and optimization of novel SSEs, DDSE serves as a critical resource for advancing solid-state battery technology. This Technical Report provides detailed tutorials and practical examples to guide users in effectively utilizing the platform.
Funding: Tata Steel Netherlands, Posco, Hyundai Steel, Nucor Steel, Rio Tinto, Nippon Steel Corp., JFE Steel, Voestalpine, RHI Magnesita, Doosan Enerbility, SeAH Besteel, Umicore, Vesuvius, and Schott AG are gratefully acknowledged.
Abstract: CALPHAD thermodynamic databases are very useful for analyzing the complex chemical reactions that occur in high-temperature material processes. The FactSage thermodynamic database can be used to calculate complex phase diagrams and equilibrium phases involving refractories in industrial processes. In this study, the FactSage thermodynamic database relevant to ZrO₂-based refractories was reviewed, and the application of the database to understanding the corrosion of continuous casting nozzle refractories in steelmaking was presented.
Abstract: BACKGROUND Elective cholecystectomy (CCY) is recommended for patients with gallstone-related acute cholangitis (AC) following endoscopic decompression to prevent recurrent biliary events. However, the optimal timing and implications of CCY remain unclear. AIM To examine the impact of same-admission CCY compared to interval CCY on patients with gallstone-related AC using the National Readmission Database (NRD). METHODS We queried the NRD to identify all gallstone-related AC hospitalizations in adult patients with and without same-admission CCY between 2016 and 2020. Our primary outcome was the all-cause 30-day readmission rate, and secondary outcomes included in-hospital mortality, length of stay (LOS), and hospitalization cost. RESULTS Among the 124,964 gallstone-related AC hospitalizations, only 14.67% underwent same-admission CCY. The all-cause 30-day readmission rate in the same-admission CCY group was almost half that of the non-CCY group (5.56% vs 11.50%). Patients in the same-admission CCY group had a longer mean LOS and higher hospitalization costs attributable to surgery. Although the most common reason for readmission was sepsis in both groups, the second most common reason was AC in the interval CCY group. CONCLUSION Our study suggests that patients with gallstone-related AC who do not undergo same-admission CCY have twice the risk of readmission compared to those who undergo CCY during the same admission. These readmissions can potentially be prevented by performing same-admission CCY in appropriate patients, which may reduce subsequent hospitalization costs secondary to readmissions.
Abstract: This book chapter is an extended version of the research paper entitled "Use of Component Integration Services in Multidatabase Systems", which was presented and published at the 13th ISITA, the National Conference on Recent Trends in Mathematical and Computer Sciences, T.M.B. University, Bhagalpur, India, January 3-4, 2015. Information is widely distributed across many remote, distributed, and autonomous databases (local component databases) in heterogeneous formats. The integration of heterogeneous remote databases is a difficult task, and it has already been addressed by several projects to certain extents. In this chapter, we discuss how to integrate heterogeneous distributed local relational databases because of their simplicity, excellent security, performance, power, flexibility, data independence, support for new hardware technologies, and spread across the globe. We also discuss how to constitute a global conceptual schema in a multidatabase system using Sybase Adaptive Server Enterprise's Component Integration Services (CIS) and OmniConnect. This is feasible for higher education institutions and commercial industries as well. For higher educational institutions, CIS will improve IT integration with their subsidiaries or with other institutions within the country and abroad in terms of educational management, teaching, learning, and research, including promoting international students' academic integration, collaboration, and governance. This will prove an innovative strategy to support the modernization and large-scale expansion of academic institutions, and will be considered IT-institutional alignment within a higher education context. It will also support achieving one of the sustainable development goals set by the United Nations: "Goal 4: ensure inclusive and quality education for all and promote lifelong learning".
However, the process of IT integration into higher educational institutions must be thoroughly evaluated, identifying the vital data access points. In this chapter, Section 1 provides an introduction, including the evolution of various database systems, data models, and the emergence of multidatabase systems and their importance. Section 2 discusses Component Integration Services (CIS) and OmniConnect, considering heterogeneous relational distributed local databases from an academic perspective. Section 3 discusses the Sybase Adaptive Server Enterprise (ASE). Section 4 discusses the role of CIS and OmniConnect of Sybase ASE in the multidatabase system. Section 5 shows the database architectural framework. Section 6 provides an implementation overview of the global conceptual schema in the multidatabase system. Section 7 discusses query processing in the CIS, and finally, Section 8 concludes the chapter. The chapter will also benefit students, as it discusses in detail the evolution of databases and data models and the emergence of multidatabase systems. Where additional information is cited, the source of each citation is properly listed in the references section.
Abstract: The continuously updated database of failures and censored data of numerous products has become large, and information regarding the failure times is missing for some covariates in the database. Because the dataset is large and has missing information, the analysis tasks become complicated and executing the analysis code requires a long time. In such situations, the divide-and-recombine (D&R) approach, which offers practical computational performance for big data analysis, can be applied. In this study, the D&R approach was applied to analyze real field data of an automobile component with incomplete information on covariates using the Weibull regression model. Model parameters were estimated using the expectation-maximization algorithm. The results of the data analysis and simulation demonstrated that the D&R approach is applicable for analyzing such datasets. Further, the percentiles and reliability functions of the distribution under different covariate conditions were estimated to evaluate the component performance under these covariates. The findings of this study have managerial implications regarding design decisions, safety, and reliability of automobile components.
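The divide-and-recombine idea above fits the model on data blocks independently and then combines the block-level estimates. A toy sketch for estimating a Weibull scale parameter with known shape (pure Python and deliberately simplified; the study itself fits a full Weibull regression by expectation maximization):

```python
import math
import random

def weibull_scale_mle(xs, shape):
    """Closed-form MLE of the Weibull scale lambda when the shape k is
    known: lambda_hat = (mean(x**k)) ** (1/k)."""
    k = shape
    return (sum(x ** k for x in xs) / len(xs)) ** (1.0 / k)

def divide_and_recombine(xs, shape, n_blocks):
    """D&R: divide the data into blocks, fit each block independently,
    then recombine by averaging the block-level estimates."""
    size = len(xs) // n_blocks
    blocks = [xs[i * size:(i + 1) * size] for i in range(n_blocks)]
    return sum(weibull_scale_mle(b, shape) for b in blocks) / n_blocks

random.seed(0)
k, lam = 2.0, 5.0
# Inverse-transform sampling: lam * (-ln(1-U)) ** (1/k) is Weibull(k, lam).
data = [lam * (-math.log(1.0 - random.random())) ** (1.0 / k)
        for _ in range(20000)]
print(round(divide_and_recombine(data, k, 10), 1))  # close to lam = 5.0
```

Because each block is fitted separately, the per-block work can run in parallel and no single fit ever touches the full dataset, which is what makes the approach attractive for large failure databases.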
Abstract: With the wide application and rapid development of database technology, the teaching of database principles should focus on developing students' innovative abilities and application skills. On the basis of a comprehensive grasp of the concepts and principles, combined with actual applications, students can understand development trends and continually deepen their database knowledge, which greatly benefits the training of well-rounded talent. In this paper, by analyzing the problems existing in database courses, the author puts forward ideas and measures for reforming the relevant curriculums.
Abstract: Based on ArcGIS and MapInfo software, we digitized the active tectonics map (1:4,000,000) of China, which was compiled and revised by academician Deng Qidong, and built the spatial database of active tectonics of China. The database integrates rich active tectonic data, such as a catalogue of earthquakes with magnitude above 6.0, active faults, Quaternary basins, active folds, and their associated attribute parameters, and enables scientific and effective management of these data. At the same time, the spatial database links the spatial map data with the associated attribute data, which enables queries between spatial properties and attribute parameters and also makes it possible to perform spatial analysis across different data layers. These capabilities provide much convenience for earthquake research and allow engineering construction institutions to use the data in practical applications.
Funding: supported by the Taiwan Ministry of Economic Affairs and the Institute for Information Industry under the project titled "Fundamental Industrial Technology Development Program (1/4)".
Abstract: For a transaction processing system to operate effectively and efficiently in cloud environments, it is important to distribute huge amounts of data while guaranteeing the ACID (atomic, consistent, isolated, and durable) properties. Moreover, database partition and migration tools can help migrate conventional relational database systems to the cloud environment rather than rebuilding a new system. This paper proposes a database distribution management (DBDM) system, which partitions or replicates the data according to the transaction behaviors of the application system. The principal strategy of DBDM is to keep together the data used in a single transaction, thus avoiding massive transmission of records in join operations. The proposed system has been implemented successfully. The preliminary experiments show that the DBDM performs database partition and migration effectively. Also, the DBDM system is modularly designed to adapt to different database management systems (DBMSs) or different partition algorithms.
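The DBDM strategy of keeping together the data used in a single transaction can be viewed as a co-access clustering problem: records that appear in the same transaction are merged into one partition, so joins never cross partitions. A minimal union-find sketch of that grouping (the transaction log is hypothetical):

```python
def partition_by_coaccess(transactions):
    """Group records with union-find so that all records touched by the
    same transaction land in the same partition (no cross-partition joins)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for txn in transactions:
        root = find(txn[0])  # register even single-record transactions
        for rec in txn[1:]:
            parent[find(rec)] = root

    groups = {}
    for rec in parent:
        groups.setdefault(find(rec), set()).add(rec)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical transaction log: each tuple lists the records one transaction reads.
txns = [("orders:1", "customers:7"), ("orders:1", "items:3"), ("logs:9",)]
print(partition_by_coaccess(txns))
# -> [['customers:7', 'items:3', 'orders:1'], ['logs:9']]
```

A production partitioner would additionally weigh access frequencies and rebalance oversized groups, but transitive co-access merging is the core of the "keep a transaction's data together" strategy.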
Abstract: An outsourced database is a database service provided by cloud computing companies. Using an outsourced database can reduce hardware and software costs and provide more efficient and reliable data processing capacity. However, the outsourced database still faces some challenges. If the service provider is not sufficiently trustworthy, there is a possibility of data leakage, and because the data may contain users' private information, such leakage can compromise privacy. For this reason, protecting the privacy of data in the outsourced database is very important. In the past, scholars proposed k-anonymity to protect data privacy in databases. It makes data anonymous to avoid privacy leaks. But k-anonymity has some problems: it is irreversible, and it is vulnerable to homogeneity attacks and background knowledge attacks. Later studies addressed homogeneity and background knowledge attacks, but they still cannot recover the original data. In this paper, we propose a data anonymization method that is reversible and also prevents those two attacks. Our study is based on the proposed r-transform, which can be applied to numeric attributes in the outsourced database. In the experiments, we discuss the time required to anonymize and recover data. Furthermore, we investigate the defense against homogeneity attacks and background knowledge attacks. Finally, we summarize the proposed method and future research.
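The r-transform itself is specific to the paper, but its key requirement, a keyed and reversible mapping on numeric attributes, can be illustrated with a simple stand-in. A toy sketch using an affine transform modulo a prime (this is not the actual r-transform, only an example of a lossless keyed anonymization):

```python
# Toy reversible anonymization of a numeric attribute: y = (a*x + b) mod p.
# The key (a, b) is secret; a must be invertible modulo the prime P.
P = 1_000_003  # prime larger than any attribute value

def anonymize(x, a, b):
    return (a * x + b) % P

def recover(y, a, b):
    a_inv = pow(a, -1, P)  # modular inverse (Python 3.8+)
    return (a_inv * (y - b)) % P

key = (123_457, 98_765)          # hypothetical secret key
salaries = [52_000, 61_500, 47_250]
anon = [anonymize(s, *key) for s in salaries]
assert [recover(y, *key) for y in anon] == salaries  # lossless round trip
print(anon)
```

The data owner holding the key can always invert the mapping, which is exactly the reversibility that plain k-anonymity generalization lacks; defending against homogeneity and background knowledge attacks requires the additional machinery the paper describes.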
Funding: the National 863 Program (No. MII-C3G-02-20/863-317-03-01-02-20).
Abstract: In this paper, a novel Home Location Register (HLR) mobility database recovery scheme is proposed. With database backup and signal sending as its key processes, the presented scheme is designed to both decrease system costs and reduce the number of lost calls. In our scheme, an algorithm is developed for an HLR to identify the VLRs into which new MSs have roamed since the latest HLR database backup. The HLR uses the identification of those VLRs to send Unreliable Roaming Data Directive messages to each of them to obtain the correct location information of those new MSs. Additionally, two relationships, one between the number of lost calls and the database backup period, and the other between the backup cost and the period, are analyzed in detail. Both analytical and numerical results indicate that there is an optimal HLR database backup period if certain system parameters are given, and the total cost can consequently be minimized.
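The backup-period trade-off analyzed above, where frequent backups raise amortized backup cost while longer periods increase the expected number of lost calls, can be illustrated with a simple cost model. A toy sketch (the cost form C(T) = c/T + rT and all rates are invented for illustration, not the paper's actual model):

```python
import math

def total_cost(T, backup_cost=10.0, loss_rate=0.4):
    """Cost per unit time: amortized backup cost plus an expected
    lost-call cost that grows with the backup period T."""
    return backup_cost / T + loss_rate * T

def optimal_period(backup_cost=10.0, loss_rate=0.4):
    """Minimizing C(T) = c/T + r*T analytically gives T* = sqrt(c/r)."""
    return math.sqrt(backup_cost / loss_rate)

T_star = optimal_period()  # sqrt(10 / 0.4) = 5.0
print(T_star, total_cost(T_star))
```

Any cost model combining a term that falls with T and a term that rises with T produces such an interior optimum, which is the qualitative result the paper derives for the HLR backup period.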