Abstract: Today, mammography is the best method for early detection of breast cancer. However, radiologists fail to detect evident cancerous signs in approximately 20% of mammograms; these missed cases are false negatives. False negatives have been attributed to the radiologist's inability to detect the abnormalities for several reasons, such as poor image quality, image noise, or eye fatigue. This paper presents a framework for a computer-aided detection system that integrates Principal Component Analysis (PCA), the Fisher Linear Discriminant (FLD), and a K-Nearest Neighbor (KNN) classifier to detect abnormalities in mammograms. Using normal and abnormal mammograms from the MIAS database, the integrated algorithm achieved 93.06% classification accuracy. We also analyze the integrated algorithm's parameters and suggest selection criteria.
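As a rough illustration of how such a PCA + FLD + KNN pipeline can be assembled, the sketch below uses scikit-learn. It is not the authors' implementation: the MIAS preprocessing is not shown, and the patch size, component count, and neighbor count are assumptions chosen only for the example.

```python
# Minimal sketch of a PCA -> FLD -> KNN pipeline for mammogram patch
# classification (normal vs. abnormal). Assumes the MIAS images have
# already been cropped into fixed-size patches and flattened into rows
# of X with labels y; that preprocessing is not shown here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

def build_cad_pipeline(n_pca_components=40, n_neighbors=3):
    """PCA reduces dimensionality, FLD (LDA) projects onto a discriminant
    axis, and KNN classifies in that space. The parameter values here are
    illustrative, not the values used in the paper."""
    return Pipeline([
        ("pca", PCA(n_components=n_pca_components)),
        ("fld", LinearDiscriminantAnalysis()),   # Fisher linear discriminant
        ("knn", KNeighborsClassifier(n_neighbors=n_neighbors)),
    ])

if __name__ == "__main__":
    # Synthetic stand-in for flattened 32x32 mammogram patches (1024 pixels).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1024))
    y = rng.integers(0, 2, size=200)          # 0 = normal, 1 = abnormal
    scores = cross_val_score(build_cad_pipeline(), X, y, cv=5)
    print("cross-validated accuracy: %.2f%%" % (100 * scores.mean()))
```

In a two-class problem the FLD step projects each PCA-reduced patch onto a single discriminant axis, so the KNN classifier operates in one dimension; the PCA stage mainly serves to keep the within-class scatter matrix well conditioned before the discriminant projection.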
Funding: The National Natural Science Foundation of China (Grant Nos. 60472060, 60473039, and 60472061)
Abstract: Foley-Sammon linear discriminant analysis (FSLDA) and uncorrelated linear discriminant analysis (ULDA) are two well-known variants of linear discriminant analysis. Both ULDA and FSLDA search for the kth discriminant vector in an (n-k+1)-dimensional subspace, subject to their respective constraints. It is rigorously shown that ULDA vectors are, in essence, the covariance-orthogonal vectors of the corresponding eigen-equation; consequently, algorithms that compute the covariance-orthogonal vectors are equivalent to the original ULDA algorithm, which is time-consuming. It is also revealed, for the first time, through theoretical analysis that the Fisher criterion value of each FSLDA vector is no less than that of the corresponding ULDA vector. For a discriminant vector, the larger its Fisher criterion value, the greater its discriminative power; larger Fisher criterion values are therefore an advantage of FSLDA vectors. On the other hand, any two feature components extracted by FSLDA vectors are in general statistically correlated with each other, which may put this set of discriminant vectors at a disadvantage. In contrast, any two feature components extracted by ULDA vectors are statistically uncorrelated. Two experiments, on the CENPARMI handwritten numeral database and the ORL database, are performed. The experimental results are consistent with the theoretical analysis of the Fisher criterion values of ULDA and FSLDA vectors. The experiments also show that the equivalent ULDA algorithm presented in this paper is much more efficient than the original ULDA algorithm, as the theoretical analysis predicts. Moreover, when the feature components extracted by FSLDA vectors are highly correlated, FSLDA does not perform well despite the larger Fisher criterion value of every FSLDA vector; when the average correlation coefficient of these feature components is low, the performance of FSLDA is comparable with that of ULDA.
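To make the quantities discussed above concrete, the sketch below (a hedged NumPy illustration, not the paper's algorithm; the toy data and helper names are assumptions) computes the Fisher criterion value J(w) = (w^T S_b w)/(w^T S_w w) of a discriminant vector and notes the constraints that separate the two methods: ordinary orthogonality w_i^T w_j = 0 for FSLDA versus covariance-orthogonality w_i^T S_t w_j = 0 for ULDA, the latter being what makes ULDA feature components statistically uncorrelated.

```python
# Fisher criterion value and the FSLDA/ULDA orthogonality constraints.
# Illustrative only; the scatter matrices are built from toy data.
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb), within-class (Sw), and total (St) scatter."""
    mean_all = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw, Sb + Sw        # St = Sb + Sw

def fisher_criterion(w, Sb, Sw):
    """J(w) = (w^T Sb w) / (w^T Sw w): larger means more discriminability."""
    return float(w @ Sb @ w) / float(w @ Sw @ w)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
    y = np.array([0] * 50 + [1] * 50)
    Sb, Sw, St = scatter_matrices(X, y)

    # First discriminant vector of the eigen-equation Sb w = lambda Sw w.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    w1 = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    print("J(w1) =", fisher_criterion(w1, Sb, Sw))

    # A second FSLDA vector w2 must satisfy  w1 . w2 = 0,
    # while a second ULDA vector w2 must satisfy  w1 . (St @ w2) = 0,
    # which is what makes ULDA feature components uncorrelated.
```

Because ULDA's S_t-orthogonality is a stricter coupling to the data's covariance than FSLDA's plain orthogonality, maximizing J(w) under the ULDA constraint can never yield a larger criterion value than under the FSLDA constraint, which is consistent with the inequality stated in the abstract.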