Background: Failure to rescue has been an effective quality metric in congenital heart surgery. Conversely, morbidity and mortality depend greatly on non-modifiable individual factors and correlate weakly with better-quality performance. We aim to measure the complications, mortality, and risk factors in pediatric patients undergoing congenital heart surgery in a high-complexity institution located in a middle-income country and to compare them with institutions that have conducted similar studies. Methods: A retrospective observational study was conducted in a high-complexity service provider institution in Cali, Colombia. All pediatric patients undergoing any congenital heart surgery between 2019 and 2022 were included. The main outcomes evaluated were the complication, mortality, and failure to rescue (FTR) rates. Univariate and multivariate logistic regression analyses were performed with mortality as the outcome variable. Results: We evaluated 308 congenital heart surgeries. Regarding the outcomes, 201 (65%) complications occurred, 23 (7.5%) patients died, and the FTR of the entire cohort was 11.4%. The presence of a postoperative complication (OR 14.88, CI 3.06–268.37, p = 0.009), age (OR 0.79, CI 0.57–0.96, p = 0.068), and urgent/emergent surgery (OR 8.14, CI 2.97–28.66, p < 0.001) were the most significant variables in predicting mortality. Conclusions: Failure to rescue is an effective and comparable quality measure for healthcare institutions and is the major contributor to postoperative mortality in congenital heart surgeries. Despite our higher mortality and complication rates, we obtained a failure to rescue rate comparable to that of health institutions in high-income countries.
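For readers unfamiliar with the metric, the reported 11.4% follows directly from the failure-to-rescue definition. A minimal sketch of the arithmetic, assuming all 23 deaths occurred in patients who had suffered a complication:

```python
# Failure to rescue (FTR) = deaths among patients with a postoperative
# complication / patients with at least one complication.
complications = 201   # patients with >= 1 postoperative complication (65% of 308)
deaths = 23           # in-hospital deaths; assumed here to have followed a complication

ftr = deaths / complications
print(f"FTR = {ftr:.1%}")   # -> 11.4%, matching the cohort value reported above
```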
In this paper, a new approach is proposed to determine whether the content of an image is authentic or modified, with a focus on detecting complex image tampering. Detecting image tampering without any prior information about the original image is a challenging problem, since unknown, diverse manipulations may have different characteristics, and so do various image formats. Our principle is that image processing, no matter how complex, may affect image quality, so image quality metrics can be used to distinguish tampered images. In particular, based on the alteration of image quality in modified blocks, the proposed method can locate the tampered areas. Referring to four types of effective no-reference image quality metrics, we obtain 13 features to represent an image. The experimental results show that the proposed method is very promising at detecting image tampering and locating the locally tampered areas, especially in realistic scenarios.
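The abstract does not list the 13 features, so the sketch below stands in with a single no-reference cue (Laplacian-variance sharpness) computed per block; flagging blocks whose local quality deviates from the image-wide statistics mirrors the localization principle described above, without claiming to be the paper's feature set.

```python
import numpy as np
from scipy.ndimage import laplace

def block_quality_map(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """One illustrative no-reference quality cue (Laplacian-variance sharpness)
    computed per block; the paper combines 13 such features."""
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = gray[i * block:(i + 1) * block, j * block:(j + 1) * block]
            scores[i, j] = laplace(patch.astype(float)).var()
    return scores

def flag_suspicious_blocks(scores: np.ndarray, z: float = 2.0) -> np.ndarray:
    # Blocks whose quality deviates strongly from the image-wide statistics
    # are candidate tampered regions.
    return np.abs(scores - scores.mean()) > z * scores.std()
```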
A semi-reference image quality assessment metric based on similarity measurement for the synthesized virtual viewpoint image (VVI) in a free-viewpoint television (FTV) system is proposed in this paper. The key point of the proposed metric is using resemblant information between the VVI and its neighbor view images for quality assessment, so that the metric can easily be extended to multi-semi-reference image quality assessment. The proposed metric first extracts impact factors from image features, then combines an image synthesis technique and similarity functions, in which disparity information is taken into account for registering the resemblant regions. Experiments are divided into three phases. Phase I verifies the validity of the proposed metric by taking impaired images and the original reference into account; the experimental results show agreement between the evaluation scores and the characteristics of the human visual system. Phase II shows accordance with Phase I when taking a neighbor view as reference, so the proposed metric can serve as a full-reference one to evaluate image quality even when the original reference is absent. Phase III then evaluates the quality of the VVI itself, and the evaluation scores in the experimental results are able to assess VVI quality.
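As a rough illustration of the semi-reference idea, the sketch below scores a synthesized view against a neighbor view after a crude constant-disparity shift; the paper's registration and feature-based impact factors are richer than this, so treat the disparity handling and SSIM pooling here as assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def semi_reference_score(vvi: np.ndarray, neighbor: np.ndarray, disparity_px: int) -> float:
    """Similarity between the virtual viewpoint image and a disparity-shifted
    neighbor view; constant disparity and SSIM pooling are simplifications."""
    registered = np.roll(neighbor, disparity_px, axis=1)  # crude horizontal registration
    return ssim(vvi, registered, data_range=float(vvi.max() - vvi.min()))
```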
A Corporate Performance Management (CPM) system is an information system used to collect, analyze, and visualize key performance indicators (KPIs) to support business operations and especially strategic decisions. CPM systems display KPIs in the form of scorecards and dashboards so executives can track and evaluate corporate performance. The quality of the information shown in the KPIs is crucial for executives to make the right decisions. Therefore, it is important that executives be able to retrieve not only the KPIs but also the quality of those KPIs before using them in strategic decisions. The objectives of this study were to determine the role of the CPM system in organizations, the current data and information quality state, problems and perspectives regarding data quality, and the data quality maturity stage of the organizations. Survey research was used in this study; a questionnaire was sent to collect data from 477 corporations listed on the Stock Exchange of Thailand (SET) in January 2011. Forty-nine questionnaires were returned. The results show that about half of the organizations have implemented CPM systems. Most organizations are confident in the information in the CPM system, but information quality issues are commonly found. Frequent problems regarding information quality are out-of-date information, information not ready by the time of use, inaccuracy, and incompleteness. The quality dimensions of most concern and most frequently assessed were security, accuracy, completeness, and validity. When asked to prioritize, the most important quality dimensions were accuracy, timeliness, completeness, security, and validity, respectively. In addition, most organizations are concerned about data governance management and have deployed such measures. This study showed that most organizations are at level 4 of Gartner's data governance maturity model, in which data governance is recognized and managed but still not effective.
Evaluating complex information systems necessitates deep contextual knowledge of technology, user needs, and quality. The quality evaluation challenges increase with the system's complexity, especially when multiple services, supported by varied technological modules, are offered. Existing standards for software quality, such as the ISO25000 series, provide a broad framework for evaluation. This broadness eases initial implementation, although it often lacks the specificity to cater to individual system modules. This paper maps 48 data metrics and 175 software metrics onto specific system modules while aligning them with ISO standard quality traits. Using the ISO25000 series as a foundation, especially ISO25010 and ISO25012, this research seeks to augment the applicability of these standards to multi-faceted systems, exemplified by five distinct software modules prevalent in modern information ecosystems.
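A minimal sketch of what such a mapping could look like in practice; the module names, metrics, and assignments below are illustrative, not the paper's actual 48 data and 175 software metrics.

```python
# Illustrative mapping of module-level metrics onto ISO25010/25012 quality
# characteristics; modules, metrics, and assignments are assumptions.
METRIC_MAP = {
    "data-ingestion": {
        "completeness_ratio": "ISO25012: Completeness",
        "schema_conformance_rate": "ISO25012: Consistency",
    },
    "api-gateway": {
        "mean_response_time_ms": "ISO25010: Performance efficiency",
        "failed_request_rate": "ISO25010: Reliability",
    },
}

def quality_traits_for(module: str) -> set:
    """Return the ISO quality characteristics covered by a module's metrics."""
    return set(METRIC_MAP.get(module, {}).values())

print(quality_traits_for("api-gateway"))
```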
The JT SQE system is a software quality evaluation and measurement system. Its design is based on the Chinese national standards for software product evaluation and quality characteristics. The JT SQE system consists of two parts: a hierarchical model for software quality measurement, and the process of requirements definition, measurement, and rating. The system is a feasible model for software quality evaluation and measurement, and it has the advantages of a friendly user interface, simple operation, ease of revision and maintenance, and extensible measurements.
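A sketch of the kind of hierarchical, weighted aggregation such a measurement model implies; the characteristics, sub-metrics, and weights below are illustrative, not those defined by the national standards the system is based on.

```python
# Two-level quality model: characteristic weight, then weighted sub-metrics.
MODEL = {
    "functionality":   (0.4, {"requirement_coverage": 0.6, "defect_density_score": 0.4}),
    "maintainability": (0.3, {"comment_ratio": 0.5, "complexity_score": 0.5}),
    "usability":       (0.3, {"ui_consistency": 1.0}),
}

def overall_score(measurements: dict) -> float:
    """Aggregate normalized sub-metric values (each in [0, 1]) into one rating."""
    return sum(
        weight * sum(sub_w * measurements[name] for name, sub_w in sub.items())
        for weight, sub in MODEL.values()
    )
```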
Timestamps play a key role in process mining because they determine the chronology in which events occurred and, subsequently, how the events are ordered in process modelling. The timestamp in process mining gives insight into process performance, conformance, and modelling; problems with timestamps therefore result in misrepresentations of the mined process. A few articles have been published on the quantification of data quality problems, but at the time of writing only one of them addresses the quantification of timestamp quality problems. This article evaluates the quality of timestamps in an event log across two axes, using eleven quality dimensions and four levels of potential data quality problems. The eleven data quality dimensions were obtained through a thorough review of more than fifty process mining articles that focus on quality dimensions. This evaluation resulted in twelve data quality quantification metrics, which were applied to the MIMIC-II dataset as an illustration. The outcome of the timestamp quality quantification using the proposed typology enables the user to appreciate the quality of the event log and thus makes it possible to evaluate the risk of carrying out specific data cleaning measures to improve the process mining outcome.
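Two quantification metrics of this kind can be sketched directly on an event log; the column names and the choice of metrics (missing timestamps, out-of-order events within a case) are illustrative rather than the paper's full set of twelve.

```python
import pandas as pd

def timestamp_quality(log: pd.DataFrame, case_col: str = "case_id",
                      ts_col: str = "timestamp") -> dict:
    """Share of events with missing timestamps and share of cases whose
    recorded event order disagrees with their timestamp order."""
    missing_rate = log[ts_col].isna().mean()

    def disordered(group: pd.DataFrame) -> bool:
        ts = group[ts_col].dropna()
        return not ts.is_monotonic_increasing  # recorded order vs. timestamp order

    disorder_rate = log.groupby(case_col, sort=False).apply(disordered).mean()
    return {"missing_timestamp_rate": missing_rate,
            "out_of_order_case_rate": disorder_rate}
```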
AIM: To determine the effect of sedation with propofol on adenoma detection rate (ADR) and cecal intubation rates (CIR) in average-risk screening colonoscopies compared to moderate sedation. METHODS: We conducted a retrospective chart review of 2604 first-time average-risk screening colonoscopies performed at MD Anderson Cancer Center from 2010-2013. ADR and CIR were calculated for each sedation group. Multivariable regression analysis was performed to adjust for the potential confounders of age and body mass index (BMI). RESULTS: One-third of the exams were done with propofol (n = 874). Overall ADR in the propofol group was significantly higher than with moderate sedation (46.3% vs 41.2%, P = 0.01). After adjustment for age and BMI differences, ADR was similar between the groups. CIR was 99% for all exams. The mean cecal insertion time was shorter among propofol patients (6.9 min vs 8.2 min; P < 0.0001). CONCLUSION: Deep sedation with propofol for screening colonoscopy did not significantly improve ADR or CIR in our population of average-risk patients. While propofol may allow for safer sedation in certain patients (e.g., with sleep apnea), the overall effect on colonoscopy quality metrics is not significant. Given its increased cost, propofol should be used judiciously and without the implicit expectation of a higher quality screening exam.
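For context, the adenoma detection rate is simple proportion arithmetic; a sketch assuming the numerator implied by the reported figures (the exact count is not given in the abstract):

```python
# Adenoma detection rate (ADR) = exams with >= 1 adenoma / total screening exams.
def adr(exams_with_adenoma: int, total_exams: int) -> float:
    return exams_with_adenoma / total_exams

# Propofol group: 874 exams at a reported 46.3% ADR implies roughly 405 positive exams.
print(f"{adr(405, 874):.1%}")   # -> 46.3%
```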
In this paper, we present a blind steganalysis method based on feature fusion. Features based on the Short-Time Fourier Transform (STFT), consisting of second-order derivative spectrum features of the audio and Mel-frequency cepstrum coefficients, together with audio quality metrics and features of the linear prediction residue, are extracted separately. Feature fusion is then conducted. The performance of the proposed steganalysis is evaluated against four steganographic schemes: Direct Sequence Spread Spectrum (DSSS), Quantization Index Modulation (QIM), ECHO embedding (ECHO), and Least Significant Bit embedding (LSB). Experimental results show that the classification performance of the proposed detector is much superior to previous work. Moreover, the proposed method detects all four steganographic schemes with over 85% classification accuracy, which qualifies it as a blind steganalysis and makes it especially useful when the steganalyzer has no knowledge of the steganographic scheme employed in data embedding.
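A minimal sketch of the fusion step for two of the feature families named above (MFCCs and a second-order derivative of the log spectrum); the audio quality metrics, the linear-prediction-residue features, and the classifier are left out, and the parameter choices are assumptions.

```python
import numpy as np
import librosa

def fused_features(audio: np.ndarray, sr: int) -> np.ndarray:
    """Concatenate MFCC statistics with second-order derivative log-spectrum
    statistics; a stand-in for the paper's full fused feature vector."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13).mean(axis=1)
    log_spec = np.log(np.abs(librosa.stft(audio)) + 1e-9)
    d2 = np.diff(log_spec, n=2, axis=0).mean(axis=1)  # 2nd-order spectral derivative
    return np.concatenate([mfcc, d2])

# Training would pair such vectors from cover/stego audio with labels and fit a
# classifier (e.g. sklearn's SVC) to obtain the blind detector.
```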
Objective image quality measurement, a fundamental and challenging task in image processing, evaluates image quality automatically and consistently with human perception. On the assumption that any image distortion can be modeled as the difference between the directional projection-based maps of the reference and distorted images, we propose a new objective quality assessment method based on directional projection for the full-reference model. Experimental results show that the proposed metrics are well consistent with subjective quality scores.
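One way to read "directional projection-based maps" is as line projections of the image at a few angles; the sketch below compares such maps with an L1 pooling, though the angles and the pooling are assumptions rather than the paper's exact design.

```python
import numpy as np
from skimage.transform import radon

def directional_projection_distance(reference: np.ndarray, distorted: np.ndarray,
                                    angles=(0.0, 45.0, 90.0, 135.0)) -> float:
    """Mean absolute difference between directional projection maps of the
    reference and distorted images (full-reference setting)."""
    p_ref = radon(reference.astype(float), theta=list(angles), circle=False)
    p_dst = radon(distorted.astype(float), theta=list(angles), circle=False)
    return float(np.mean(np.abs(p_ref - p_dst)))
```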
Regional variations in acute coronary syndrome (ACS) management and outcomes have been an enormous public health issue. However, studies have yet to explore how to reduce these variations. The National Chest Pain Center Program (NCPCP) is the first nationwide, hospital-based, comprehensive, continuous quality improvement program for improving the quality of care in patients with ACS in China. We evaluated the association of the NCPCP with regional variations in ACS healthcare using generalized linear mixed models and interaction analysis. Patients in the Western region had longer onset-to-first-medical-contact (FMC) times and longer stays in non-percutaneous coronary intervention (PCI) hospitals, lower rates of PCI for ST-elevation myocardial infarction (STEMI), and higher rates of medication usage. Patients in the Central region had relatively lower in-hospital mortality and in-hospital heart failure rates. Differences in door-to-balloon (D-to-B) time and in-hospital mortality between the Western and Eastern regions were smaller after accreditation (β = -8.82, 95% confidence interval (CI) -14.61 to -3.03; OR = 0.79, 95% CI 0.70 to 0.91). Similar results were found for differences in D-to-B time and the primary PCI rate for STEMI between the Central and Eastern regions. The differences in PCI for higher-risk non-ST-segment elevation acute coronary syndrome (NSTE-ACS) patients among regions also became smaller. Additionally, the differences in medication use between the Eastern and Western regions were higher after accreditation. Regional variations remained high in this large cohort of patients with ACS from hospitals participating in the NCPCP in China. More comprehensive interventions and optimization of hospitals' internal systems are needed to further reduce regional variations in the management and outcomes of patients with ACS.
Constant levels of perceptual quality are what users of streaming video ideally expect. In most cases, however, they receive time-varying levels of video quality. In this paper, the author proposes a new method for controlling perceptual quality in variable bit rate video encoding for streaming video. An image quality calculation based on the perception of the human visual system is presented. Quantization properties of DCT coefficients are analyzed to enable effective control, and quantization scale factors are determined based on the visual masking effect. A Proportional-Integral-Derivative (PID) controller is used to control the image quality. Experimental results show that this method improves the uniformity of the perceptual quality of encoded video.
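A minimal sketch of the control loop described above: a PID controller nudging the quantization scale toward a target perceptual-quality score. The gains, the target value, and the clamping are assumptions; the paper's quality measure and DCT-level analysis are not reproduced here.

```python
class PIDQuantController:
    """Adjust the encoder's quantization scale so the measured perceptual
    quality tracks a constant target."""

    def __init__(self, kp=0.5, ki=0.05, kd=0.1, target_quality=0.85):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_quality
        self.integral = 0.0
        self.prev_error = 0.0

    def next_qscale(self, measured_quality: float, qscale: float) -> float:
        error = self.target - measured_quality
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        # Quality below target -> positive correction -> lower qscale (finer quantization).
        correction = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(1.0, qscale - correction)
```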
Technical debt is considered detrimental to the long-term success of software development, but despite the numerous studies in the literature, many aspects still need to be investigated for a better understanding of it. In particular, the main problems that hinder its complete understanding are the absence of a clear definition and of a model for its identification, management, and forecasting. Regarding forecasting, there is a growing notion that preventing technical debt build-up makes it possible to identify and address the riskiest debt items before they can permanently compromise the project. However, despite this high relevance, the forecasting of technical debt is still little explored. To this end, this study aims to evaluate whether the quality metrics of a software system can be useful for correctly predicting technical debt. The data on the quality metrics of 8 different open-source software systems were therefore analyzed and supplied as input to multiple machine learning algorithms to predict the technical debt. In addition, several partitions of the initial dataset were evaluated to assess whether prediction performance could be improved by performing a data selection. The results show good forecasting performance, and the paper provides a useful approach to understanding the overall phenomenon of technical debt for practical purposes.
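A minimal sketch of the kind of pipeline the study describes: system-level quality metrics as predictors and a technical-debt measure as the target. The file name, the target column, and the choice of regressor are assumptions, not the paper's setup.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# One row per analyzed snapshot: quality metrics as columns, technical debt
# (e.g., remediation effort) as the target. Column and file names are illustrative.
data = pd.read_csv("quality_metrics.csv")
X, y = data.drop(columns=["technical_debt"]), data["technical_debt"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```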
Finding a shortest path for a given pair of vertices in a graph drawing is one of the fundamental tasks for qualitative evaluation of graph drawings. In this paper, we present the first machine learning approach to predict human shortest path task performance, including accuracy, response time, and mental effort. To predict shortest path task performance, we utilize correlated quality metrics and the ground truth data from shortest path experiments. Specifically, we introduce path faithfulness metrics and show their strong correlations with shortest path task performance. Moreover, to mitigate the problem of insufficient ground truth training data, we use transfer learning to pre-train our deep model, exploiting the correlated quality metrics. Experimental results using the ground truth human shortest path experiment data show that our models can successfully predict shortest path task performance. In particular, model MSP achieves a test mean squared error (MSE) of 0.7243 for prediction, on data ranging from −17.27 to 1.81.