Funding: Supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award number UL1TR001412.
Abstract: With improved knowledge of clinical relevance and more convenient access to patient-reported outcome data, clinical researchers prefer to adopt the minimal clinically important difference (MCID) rather than statistical significance as the testing standard for examining the effectiveness of an intervention or treatment in clinical trials. A practical method for determining the MCID is based on diagnostic measurement. With this approach, the MCID can be formulated as the solution of a large-margin classification problem. However, this method only produces a point estimate and therefore lacks a way to evaluate its performance. In this paper, we introduce an m-out-of-n bootstrap approach that provides interval estimates for the MCID and for its classification error, an associated accuracy measure for performance assessment. Extensive simulation studies are conducted to show the advantages of the proposed method. Analysis of the Chondral Lesions And Meniscus Procedures (ChAMP) trial is our motivating example and is used to illustrate the method.
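The m-out-of-n bootstrap named above resamples m < n observations (with replacement) from the original sample of size n, recomputes the estimator on each resample, and reads an interval off the empirical quantiles. The sketch below is a minimal, generic illustration under assumed names and data, not the authors' procedure: estimate_mcid is a simple empirical-error cutoff standing in for the paper's large-margin formulation, and the choice m = n^0.7 is arbitrary.

```python
import numpy as np

def m_out_of_n_bootstrap(data, estimator, m, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-type interval from resamples of size m drawn (with
    replacement) from an original sample of size n; m is typically o(n)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=m)      # resample m out of n rows
        stats.append(estimator(data[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Placeholder estimator: a cutoff minimizing empirical classification error
# between "improved" (y = +1) and "not improved" (y = -1) patients.
def estimate_mcid(sample):
    scores, labels = sample[:, 0], sample[:, 1]
    grid = np.unique(scores)
    errors = [np.mean(np.sign(scores - c) != labels) for c in grid]
    return grid[int(np.argmin(errors))]

# Synthetic example data: column 0 = change score, column 1 = anchor label.
rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=300)
x = 2.0 * y + rng.normal(scale=2.0, size=300)
data = np.column_stack([x, y])
print(m_out_of_n_bootstrap(data, estimate_mcid, m=int(300 ** 0.7)))
```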
Funding: Supported in part by the Education Department of Sichuan Province (Grant No. [2022]114).
Abstract: Automatically correcting students' code errors with deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the suitability of a repair technique may differ across error types; how to choose appropriate methods for fixing different types of errors remains an open problem. To this end, this paper first classifies code errors made by Java novice programmers based on a Delphi analysis, and then compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT, and GGNN) at fixing different types of errors. The results indicate that the three models differ significantly in their accuracy across error types, while the error correction models based on the BERT architecture show better potential for correcting beginners' code.
Abstract: Traffic counts are the fundamental data source for transportation planning, management, design, and effectiveness evaluation. Recording traffic flow and counting from the recorded videos are increasingly used because of their convenience, high accuracy, and cost-effectiveness. Manual counting from pre-recorded video footage, however, can be prone to inconsistencies and errors, leading to inaccurate counts, and there are no standard guidelines for collecting video data and conducting manual counts from recorded videos. This paper comprehensively assesses the accuracy of manual counts from pre-recorded videos and introduces guidelines for efficiently collecting video data and conducting manual counts by trained individuals. The accuracy assessment was based on repeated counts, and the guidelines were drawn from the experience of conducting a traffic survey at forty strip-mall access points in Baton Rouge, Louisiana, USA. The percentages of total error, classification error, and interval error were found to be 1.05 percent, 1.08 percent, and 1.29 percent, respectively, and the corresponding percent root mean square errors (RMSE) were 1.13 percent, 1.21 percent, and 1.48 percent. Guidelines are provided for selecting survey sites, instruments, and timeframe, for fieldwork, and for manual counts in an efficient traffic data collection survey.
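For readers who want to reproduce this kind of accuracy check, the sketch below computes a percentage total error and a percent RMSE by comparing a first count against a repeated (reference) count. The abstract does not spell out the exact error definitions used in the survey, so these formulas, function names, and sample numbers are assumptions for illustration only.

```python
import numpy as np

def percent_total_error(counts, reference):
    """Absolute difference in total volume, as a percentage of the reference total."""
    counts, reference = np.asarray(counts, float), np.asarray(reference, float)
    return 100.0 * abs(counts.sum() - reference.sum()) / reference.sum()

def percent_rmse(counts, reference):
    """Root mean square of the per-site percentage errors."""
    counts, reference = np.asarray(counts, float), np.asarray(reference, float)
    pct_err = 100.0 * (counts - reference) / reference
    return float(np.sqrt(np.mean(pct_err ** 2)))

# Hypothetical repeated counts at a few access points vs. reference counts.
first_count  = [412, 198, 655, 87]
second_count = [418, 195, 648, 88]
print(percent_total_error(first_count, second_count))
print(percent_rmse(first_count, second_count))
```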
Funding: Sponsored by the Basic Research Foundation of Beijing Institute of Technology (BIT-UBF-200301F03) and the BIT & Ericsson Cooperation Project.
Abstract: A data-driven temporal filtering technique is integrated into the time trajectory of Teager energy operator (TEO) based feature parameters to improve the robustness of speech recognition systems against noise. Three kinds of data-driven temporal filters are investigated with the aim of alleviating the harmful effects of environmental factors on speech: principal component analysis (PCA) based filters, linear discriminant analysis (LDA) based filters, and minimum classification error (MCE) based filters. A detailed comparative analysis of these temporal filtering approaches applied in the Teager energy domain is presented. It is shown that while all of them improve the recognition performance of the original TEO-based feature parameters in adverse environments, MCE-based temporal filtering provides the lowest error rate as the SNR decreases.
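As a rough illustration of data-driven temporal filtering, the sketch below derives a PCA-based FIR filter from windowed segments of a feature trajectory and applies it by convolution. The window length, the toy trajectories, and the function names are placeholders, and the LDA- and MCE-based variants would replace the PCA step with their respective training criteria; this is not the paper's exact pipeline.

```python
import numpy as np

def pca_temporal_filter(trajectories, order=11):
    """Learn an FIR filter as the first principal component of length-`order`
    segments taken from training feature trajectories (one feature dimension)."""
    segments = []
    for traj in trajectories:
        for t in range(len(traj) - order + 1):
            segments.append(traj[t:t + order])
    X = np.asarray(segments)
    X = X - X.mean(axis=0)                      # center the segments
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[0]                                # first principal direction = filter taps

def apply_filter(trajectory, taps):
    """Filter one feature's time trajectory with the learned FIR taps."""
    return np.convolve(trajectory, taps, mode="same")

# Toy trajectories standing in for a TEO-based feature over time.
rng = np.random.default_rng(0)
train = [np.sin(0.1 * np.arange(200)) + 0.3 * rng.normal(size=200) for _ in range(20)]
taps = pca_temporal_filter(train)
smoothed = apply_filter(train[0], taps)
```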
Funding: Supported by the '863' High-Tech Programme of China (No. 863-306ZT03-02-3) and partially by the National Natural Science Foundation of China.
Abstract: Learning the influence of environmental parameters, such as additive noise and channel distortion, from training data is an effective approach to robust speech recognition. Most previous methods are based on the maximum likelihood estimation criterion; however, they do not lead to a minimum error rate. In this paper, a novel discriminative method for learning the environmental parameters, based on the Minimum Classification Error (MCE) criterion, is proposed. In this method, a simple classifier and the Generalized Probabilistic Descent (GPD) algorithm are adopted to iteratively learn the environmental parameters. The clean speech features are then estimated from the noisy speech features using the estimated environmental parameters, and these estimates are fed to the back-end HMM classifier. Experiments on a task of 18 isolated confusable Korean words show that a best relative error rate reduction of 32.1% is obtained compared with a conventional HMM system.
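MCE/GPD training generally minimizes a smoothed classification-error loss: a misclassification measure d (the gap between the best competing score and the correct-class score) is passed through a sigmoid, and the parameters are updated by gradient descent with a decaying step size. The sketch below is a generic illustration using a linear discriminant rather than the paper's environmental-parameter model; all names and values are placeholders.

```python
import numpy as np

def sigmoid(d, gamma=1.0):
    return 1.0 / (1.0 + np.exp(-gamma * d))

def mce_gpd_train(X, y, n_classes, epochs=50, lr=0.1, gamma=2.0, seed=0):
    """Generalized Probabilistic Descent on a smoothed 0-1 loss for a
    linear classifier g_k(x) = w_k . x (toy stand-in for the real model)."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.normal(size=(n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, k in zip(X, y):
            scores = W @ x
            rival = np.argmax(np.delete(scores, k))        # best competing class
            rival += rival >= k                            # restore original index
            d = scores[rival] - scores[k]                  # misclassification measure
            g = gamma * sigmoid(d, gamma) * (1.0 - sigmoid(d, gamma))
            W[k]     += lr * g * x                         # push correct class up
            W[rival] -= lr * g * x                         # push rival class down
        lr *= 0.95                                         # GPD-style decaying step size
    return W

# Toy usage on random 3-class data with 5-dimensional features.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 3, size=200)
W = mce_gpd_train(X, y, n_classes=3)
```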
Funding: Supported by the National High-Tech Research and Development (863) Program of China (No. 863-306-ZD03-01-2).
Abstract: In this paper we address the problem of audio-visual speech recognition in the framework of the multi-stream hidden Markov model. Stream weight training based on the minimum classification error criterion is discussed for use in large vocabulary continuous speech recognition (LVCSR). We present lattice rescoring and Viterbi approaches for calculating the loss function of continuous speech. The experimental results show that, in the case of clean audio, system performance can be improved by 36.1% in relative word error rate reduction when using state-based stream weights trained by the Viterbi approach, compared to an audio-only speech recognition system. Further experimental results demonstrate that our audio-visual LVCSR system provides a significant enhancement of robustness in noisy environments.
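In a multi-stream HMM, the state emission score is typically a weighted combination of the per-stream log-likelihoods, and the stream weights are what the MCE training above optimizes. The sketch below shows only this combination step for one state with state-based weights; the likelihood values and weights are placeholders, not figures from the paper.

```python
def combined_log_likelihood(logp_audio, logp_video, w_audio, w_video):
    """Two-stream HMM emission score:
    log b_j(o) = w_a * log b_j(o_audio) + w_v * log b_j(o_video)."""
    return w_audio * logp_audio + w_video * logp_video

# Hypothetical state-based weights (often constrained to sum to 1) and scores.
w_audio, w_video = 0.7, 0.3
print(combined_log_likelihood(-42.1, -57.8, w_audio, w_video))
```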
Abstract: In this paper, a new statistical model for speech recognition, the Center-Distance Continuous Probability Model (CDCPM), is described; it is based on the Center-Distance Normal (CDN) distribution. In a CDCPM, the probability transition matrix is omitted, and the observation probability density function (PDF) in each state takes the form of an embedded multiple model (EMM) based on the nearest-neighbour rule. Experimental results on two large real-world Chinese speech databases and a real-world continuous-manner 2000-phrase system show that this model is a powerful one. A distance measure for CDCPMs, based on Bayesian minimum classification error (MCE) discrimination, is also proposed.