The two-dimensional (2D) entropy method is computationally expensive when applied to image segmentation, so a genetic algorithm is introduced to improve its efficiency. The proposed method uses both the gray value of a pixel and the local average gray value of the image. At the same time, the simple genetic algorithm is improved by using better reproduction and crossover operators. The proposed method thus overcomes the 2D entropy method's drawback of being time-consuming and yields satisfactory segmentation results. Experimental results show that the proposed method saves computational time while providing good-quality segmentation.
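A minimal sketch of the 2D entropy criterion this abstract describes: a 2D histogram over (pixel gray value, local 3×3 average) and an entropy fitness for a candidate threshold pair (s, t). The exhaustive double loop below stands in for the paper's genetic algorithm, whose operators are not specified here; the function names and the 16-level quantization are illustrative assumptions.

```python
import math

def local_average(img, i, j):
    # Mean of the 3x3 neighbourhood, clamped at the image borders.
    h, w = len(img), len(img[0])
    vals = [img[y][x]
            for y in range(max(0, i - 1), min(h, i + 2))
            for x in range(max(0, j - 1), min(w, j + 2))]
    return sum(vals) // len(vals)

def histogram_2d(img, levels=16):
    # 2D histogram over (gray value, local average gray value).
    hist = [[0] * levels for _ in range(levels)]
    for i, row in enumerate(img):
        for j, g in enumerate(row):
            hist[g][local_average(img, i, j)] += 1
    return hist

def region_entropy(hist, rows, cols):
    # Shannon entropy of the histogram cells in one quadrant.
    total = sum(hist[r][c] for r in rows for c in cols)
    if total == 0:
        return 0.0
    ent = 0.0
    for r in rows:
        for c in cols:
            p = hist[r][c] / total
            if p > 0:
                ent -= p * math.log(p)
    return ent

def entropy_fitness(hist, s, t, levels=16):
    # Background quadrant [0..s]x[0..t] plus object quadrant (s..]x(t..].
    return (region_entropy(hist, range(0, s + 1), range(0, t + 1)) +
            region_entropy(hist, range(s + 1, levels), range(t + 1, levels)))

def best_threshold(img, levels=16):
    # Exhaustive scan over (s, t); the paper's GA replaces this double
    # loop with selection/crossover over candidate (s, t) chromosomes.
    hist = histogram_2d(img, levels)
    return max(((s, t) for s in range(levels - 1) for t in range(levels - 1)),
               key=lambda st: entropy_fitness(hist, *st))
```

On a toy two-region image, the maximizer lands between the two gray-level clusters, which is the threshold the GA would be evolved to find.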
A novel semi-fragile audio watermarking algorithm in the DWT domain is proposed in this paper. The method transforms the original audio into the 3-layer wavelet domain and divides the approximation wavelet coefficients into groups. By quantizing the mean of each group, the algorithm embeds the watermark signal into the average value of the wavelet coefficients. Experimental results show that the semi-fragile audio watermarking algorithm is not only inaudible and robust against various common signal-processing operations, but also fragile to malicious modification. In particular, it can detect tampered regions effectively.
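The embedding step the abstract outlines — quantizing the mean of each coefficient group — can be sketched as follows. The DWT itself is omitted; the functions operate on any group of approximation coefficients, and the quantization step `delta` is an illustrative assumption, not the paper's value.

```python
def embed_bit(group, bit, delta=4.0):
    # Quantise the group mean to an even (bit 0) or odd (bit 1) multiple
    # of delta, shifting every coefficient in the group by the same offset.
    m = sum(group) / len(group)
    q = round(m / delta)
    if q % 2 != bit:
        q += 1 if m / delta >= q else -1
    shift = q * delta - m
    return [c + shift for c in group]

def extract_bit(group, delta=4.0):
    # Recover the bit from the parity of the quantised group mean.
    m = sum(group) / len(group)
    return round(m / delta) % 2
```

Because the bit lives in the group mean, it survives perturbations that move the mean by less than delta/2 (the "robust" side), while a localized tamper in one group flips or scrambles that group's bit (the "fragile", tamper-locating side).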
A semiautomatic segmentation method based on active contours is proposed for computed tomography (CT) image series. First, to obtain an initial contour, one image slice is segmented exactly by the C-V method based on the Mumford-Shah model. Next, the computer segments the neighboring slices automatically, one by one, using the snake model. During segmentation of the image slices, the boundary of the former slice, used as the initial contour for the next slice, may cross the next slice's real boundary and never return to the right position. To avoid the contour skipping over the boundary, the distance variance between two slices is evaluated against a threshold, which decides whether to reinitialize. Moreover, a new improved marching cubes (MC) algorithm based on the segmentation boundaries of the 2D image series is given for 3D image reconstruction. Compared with the standard method, the proposed algorithm reduces detection time and needs less memory. The effectiveness and capabilities of the algorithm are illustrated by experimental results.
According to the principle of polarization imaging and the relation between the Stokes parameters and the degree of linear polarization, polarized images contain much redundant and complementary information. Since man-made and natural objects can be easily distinguished in images of the degree of linear polarization, and images of the Stokes parameters contain rich detailed information about the scene, combining these images removes clutter efficiently while preserving detail. An adaptive polarization image fusion algorithm based on regional-energy dynamic weighted averaging is proposed in this paper to combine these images. In an experiment and in simulations, most clutter is removed by this algorithm. The fusion method is applied under different lighting conditions in simulation, and the influence of lighting conditions on the fusion results is analyzed.
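The regional-energy weighting rule described in the abstract can be sketched as below: each source region contributes in proportion to its share of the total regional energy, so the detail-rich source dominates locally. The block-wise partitioning and the sum-of-squares energy definition are illustrative assumptions.

```python
def region_energy(block):
    # Energy of a region: sum of squared pixel values.
    return sum(v * v for row in block for v in row)

def fuse_blocks(a, b):
    # Dynamic weighted average of two co-located regions: the weight of
    # each source is its fraction of the combined regional energy.
    ea, eb = region_energy(a), region_energy(b)
    wa = ea / (ea + eb) if ea + eb else 0.5
    return [[wa * x + (1 - wa) * y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

A region that is flat (zero energy) in one source is taken entirely from the other, which is how clutter-free detail is preserved in the fused result.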
The analytic expression of the special points on the intersection of two cones with intersecting axes (ITCTAI) is given. The paper also presents a method to construct these special points graphically according to their analytic expression. Finally, a computer program is given that generates the intersection in several different cases.
Because interval values arise quite naturally in clustering, an interval-valued fuzzy competitive neural network is proposed. First, this paper proposes several definitions of distance for interval numbers. It then describes the preprocessing of the input data, the structure of the network, and the learning algorithm of the interval-valued fuzzy competitive neural network, and analyzes the principle of the learning algorithm. Finally, an experiment is used to test the validity of the network.
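The abstract does not reproduce the paper's interval-number distance definitions, but one common choice — Euclidean distance on (midpoint, half-width) pairs — can serve as a sketch of what such a distance looks like. This is an assumption for illustration, not the paper's definition.

```python
def interval_distance(a, b):
    # Distance between interval numbers a = [a1, a2] and b = [b1, b2],
    # measured on their (midpoint, half-width) representations.
    (a1, a2), (b1, b2) = a, b
    d_mid = (a1 + a2) / 2 - (b1 + b2) / 2
    d_rad = (a2 - a1) / 2 - (b2 - b1) / 2
    return (d_mid ** 2 + d_rad ** 2) ** 0.5
```

In a competitive network over interval-valued inputs, such a distance would replace the usual Euclidean distance when selecting the winning neuron.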
This paper presents a new idea, termed modeling of multisensor heterogeneous information, to incorporate fuzzy-logic methodologies into multisensor-multitarget systems under the framework of random set theory. First, based on strong and weak random sets, a unified form is introduced to describe both data (unambiguous information) and fuzzy evidence (uncertain information). Second, according to the signatures of fuzzy evidence, two Bayesian-Markov nonlinear measurement models are proposed to fuse data and fuzzy evidence effectively. Third, by use of the model-based signature-matching scheme, operations on the statistics of fuzzy evidence, defined as random sets, can be translated into operations on the membership functions of the corresponding point state variables. These results are the basis for constructing qualitative measurement models and for fusing data and fuzzy evidence.
Computer simulation is a good guide and reference for the development of and research on petroleum refining processes. Traditionally, pseudo-components are used in such simulations; their physical properties are estimated by empirical relations and cannot be associated with actual chemical reactions, as no molecular structure is available for a pseudo-component. This limitation can be overcome if real components are used. In this paper, a real-component-based method is proposed for the simulation of a diesel hydrotreating process using the Unisim Design software. The process includes reaction units and distillation units. The chemical reaction network is established by analyzing the feedstock, which is characterized by real components obtained from the true boiling point curve. Simulation results are consistent with actual data.
The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighting likelihood method is proposed for noise-robust VAD. The contribution of dynamic features to the likelihood score is increased by the method, which consequently improves the noise robustness of VAD. A divergence-based dimension reduction method is also proposed to save computation; it removes the feature dimensions with smaller divergence values at the cost of slightly degraded performance. Experimental results on the Aurora II database show that the detection performance in noisy environments is remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using the weighted likelihood on the dimension-reduced features obtains comparable, or even better, performance than the original full-dimensional features.
The decomposition method has been used successfully for solving 3D problems with complex geometry in electron optics with the FDM (Finite Difference Method) and FEM (Finite Element Method), mostly to implement fast and robust parallel algorithms and computer codes. We suggest a new version of a similar approach for the BEM (Boundary Element Method), based on the Schwarz alternating method. This approach substantially reduces the dimension of the dense global matrix of the algebraic system produced by the BEM algorithm, allowing a complex problem to be solved on a single-CPU (Central Processing Unit) desktop computer. The new algorithm is iterative, but the exponential convergence of the Schwarz algorithm yields fast numerical procedures. We describe the results of numerical simulation for a multi-electrode ion transport system. The algorithms were implemented in the computer code "POISSON-3".
A data processing method is proposed for eliminating the end restraint in triaxial tests of soil. A digital image processing method is used to calculate the local deformations and local stresses for any region on the surface of triaxial soil specimens. The principle and implementation of this digital image processing method are introduced, as well as the calculation method for the local mechanical properties of soil specimens. Comparisons were made between the test results calculated from the data of the entire specimen and those of local regions, and it was found that the deformations were more uniform in the middle region than over the entire specimen. In order to quantify the non-uniformity of the deformation, non-uniformity coefficients of strain were defined and calculated. Traditional and end-lubricated triaxial tests were conducted under the same conditions to investigate the effect of using local-region data for deformation calculation on eliminating the end restraint of specimens. After statistical analysis of all test results, it was concluded that for the tested soil specimens of size 39.1 mm × 80 mm, using the middle 35 mm region of traditional specimens in data processing was more effective at eliminating end restraint than end lubrication. Furthermore, the local data analysis in this paper was validated through comparisons with the test results of other researchers.
This paper applies the Principal Components Analysis method to PSC inspection results in the T-MOU and P-MOU areas. An assessment of ship detention is set up, and the main deficiencies leading to detention are identified through data standardization and correlation-matrix calculation. The results provide a basis for shipping companies to focus their safety management and pass PSC inspections.
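The two processing steps named in the abstract — standardization and correlation-matrix calculation — can be sketched as follows. This is a generic implementation; the column layout (one column per deficiency indicator) and function names are assumptions.

```python
def standardize(col):
    # Zero-mean, unit-variance scaling of one deficiency indicator.
    n = len(col)
    mean = sum(col) / n
    sd = (sum((x - mean) ** 2 for x in col) / n) ** 0.5
    return [(x - mean) / sd for x in col]

def correlation_matrix(columns):
    # Pearson correlations between standardized columns; PCA then
    # extracts the leading eigenvectors of this matrix to rank the
    # deficiency categories by their contribution to detention.
    z = [standardize(c) for c in columns]
    n = len(z[0])
    return [[sum(a * b for a, b in zip(ci, cj)) / n for cj in z]
            for ci in z]
```

Indicators with strong mutual correlation load onto the same principal component, which is what lets a few components summarize the main detention deficiencies.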
The explosive increase in data traffic requires networks to provide higher capacity and long-haul transmission capabilities. This paper introduces new results on high-order modulation and efficient Digital Signal Processing algorithms that reduce various transmission limitations in coherent receiving systems. Polarization Division Multiplexed Quadrature Phase Shift Keying (PDM-QPSK) is deployed to reach high bit rates, with modified digital clock recovery and BER-Aided Constant Modulus Algorithm (BA-CMA) equalising. A Soft Decision Forward Error Correction (SD-FEC) algorithm and a joint scheme combining timing recovery and an adaptive equaliser are used to achieve better performance. A compact coherent transceiver is also developed. These techniques have been applied in the largest 100G Optical Transport Network (OTN) deployment in the world, the backbone expansion project for Phase 3 of the China Education and Research Network (CERNET), with a total transmission length of 10 000 km.
Aiming at the shortcoming that certain existing block-matching algorithms, such as the full search, three-step search, and diamond search algorithms, usually cannot keep a good balance between high accuracy and low computational complexity, a block-matching motion estimation algorithm based on a two-step search is proposed in this paper. Exploiting the fact that the gray values of adjacent pixels do not vary rapidly, the algorithm employs an interlaced search pattern in the search window to estimate the motion vector of the object block. Simulations and actual experiments demonstrate that the proposed algorithm greatly outperforms the well-known three-step search and diamond search algorithms, whether the motion vector is large or small. Compared with the full search algorithm, the proposed one achieves similar performance but requires much less computation; the algorithm is therefore well qualified for real-time video image processing.
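The interlaced two-step idea can be sketched as follows: a coarse checkerboard scan of the search window (evaluating only every other candidate, justified by slowly varying gray values), then refinement over the skipped neighbours of the coarse winner. The SAD cost, block size, and window radius here are illustrative assumptions, not the paper's exact parameters.

```python
def sad(frame, ref, bx, by, dx, dy, bs):
    # Sum of absolute differences between the block at (bx, by) in `frame`
    # and the block displaced by (dx, dy) in the reference frame `ref`.
    return sum(abs(frame[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def two_step_search(frame, ref, bx, by, bs=4, w=4):
    h, w_img = len(ref), len(ref[0])

    def valid(dx, dy):
        return (0 <= bx + dx and bx + dx + bs <= w_img and
                0 <= by + dy and by + dy + bs <= h)

    # Step 1: interlaced (checkerboard) coarse scan of the search window.
    cands = [(dx, dy) for dy in range(-w, w + 1) for dx in range(-w, w + 1)
             if (dx + dy) % 2 == 0 and valid(dx, dy)]
    best = min(cands, key=lambda d: sad(frame, ref, bx, by, *d, bs))
    # Step 2: refine among the skipped 4-neighbours of the coarse winner.
    refine = [best] + [(best[0] + ox, best[1] + oy)
                       for ox, oy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if valid(best[0] + ox, best[1] + oy)]
    return min(refine, key=lambda d: sad(frame, ref, bx, by, *d, bs))
```

Roughly half the window is scanned in step 1 and at most five extra candidates in step 2, which is the source of the claimed saving over full search.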
We introduce some ways to compute lower and upper bounds for the Laplace eigenvalue problem. By using special nonconforming finite elements, i.e., the enriched Crouzeix-Raviart element and the extended Q1rot element, we obtain a lower bound of the eigenvalue. Additionally, we use conforming finite elements for postprocessing to obtain an upper bound of the eigenvalue; the postprocessing only needs to solve the corresponding source problems, plus a small eigenvalue problem if a higher-order postprocessing method is implemented. Thus, we can obtain the lower and upper bounds of the eigenvalues simultaneously by solving the eigenvalue problem only once. Some numerical results are also presented to demonstrate our theoretical analysis.
The linear quadtree is a popular image representation method due to its convenient imaging procedure. However, its excessive emphasis on the symmetry of segmentation, i.e., repeatedly dividing a square into four equal sub-squares, makes the linear quadtree a suboptimal representation. In this paper, a lossless image representation, referred to as the Overlapped Rectangle Image Representation (ORIR), is presented to support fast image operations such as Legendre moment computation. The ORIR does not insist on symmetric segmentation, and it is capable of representing, with a single rectangle, the information of pixels that are not even adjacent to each other in the 4-neighbor or 8-neighbor sense. Hence, compared with the linear quadtree, the ORIR significantly reduces the number of rectangles required to represent an image. Based on the ORIR, an algorithm for exact Legendre moment computation is presented. Theoretical analysis and experimental results show that the ORIR-based algorithm for exact Legendre moment computation is faster than conventional exact algorithms.
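The rectangle-wise moment computation that ORIR enables rests on the separability of Legendre moments over axis-aligned rectangles of constant intensity: the double integral factors into two 1D integrals of Legendre polynomials, each available in closed form. A sketch, assuming the image domain is mapped to [-1, 1]² (function names are illustrative):

```python
def legendre(p, x):
    # Legendre polynomial P_p(x) via the three-term recurrence.
    if p == 0:
        return 1.0
    prev, cur = 1.0, x
    for n in range(1, p):
        prev, cur = cur, ((2 * n + 1) * x * cur - n * prev) / (n + 1)
    return cur

def legendre_integral(p, a, b):
    # Exact integral of P_p over [a, b], using the antiderivative
    # identity  int P_p = (P_{p+1} - P_{p-1}) / (2p + 1)  for p >= 1.
    if p == 0:
        return b - a
    F = lambda x: (legendre(p + 1, x) - legendre(p - 1, x)) / (2 * p + 1)
    return F(b) - F(a)

def rect_moment(p, q, x1, x2, y1, y2, value):
    # Contribution of one axis-aligned rectangle of constant intensity
    # (an ORIR block) to the Legendre moment lambda_pq on [-1, 1]^2.
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    return (norm * value *
            legendre_integral(p, x1, x2) * legendre_integral(q, y1, y2))
```

Summing `rect_moment` over the (few) ORIR rectangles replaces a sum over all pixels, which is where the speedup over conventional exact algorithms comes from.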
The authors present a case study to demonstrate the possibility of discovering complex and interesting latent structures using hierarchical latent class (HLC) models. A similar effort was made earlier by Zhang (2002), but that study involved only small applications with 4 or 5 observed variables and no more than 2 latent variables, due to the lack of efficient learning algorithms. Significant progress has since been made in algorithmic research, and it is now possible to learn HLC models with dozens of observed variables. This allows us to demonstrate the benefits of HLC models more convincingly than before. The authors have successfully analyzed the CoIL Challenge 2000 data set using HLC models. The model obtained consists of 22 latent variables, and its structure is intuitively appealing. It is exciting to know that such a large and meaningful latent structure can be automatically inferred from data.
Funding: We wish to thank the National Basic Research Program of China (973 Program) for Grant 2007CB311203, the National Natural Science Foundation of China for Grant 60821001, and the Specialized Research Fund for the Doctoral Program of Higher Education for Grant 20070013007, under which the present work was possible.
Funding: This work was supported by the National Science Foundation of China (60172037), ASFC (03D53032), and the State Key Laboratory of Remote Sensing Science Opening Funds of China (SK050013).
Funding: Supported by the National Natural Science Foundation of China (No. 60573072).
Funding: Supported by the NSFC (Nos. 60434020, 60572051), the Science and Technology Key Item of the Ministry of Education of the PRC (No. 205-092), and the ZJNSF (No. R106745).
Funding: Supported by the National Basic Research Program of China (973 Program) (No. 2007CB311104).
Funding: Supported by the Major State Basic Research Development Program of China ("973" Program, No. 2010CB731502).
基金supported by the National Natural Science Foundation of China under Grant No. 60932004the National High Technical Research and Development Program of China (863 Program) under Grants No. 2012AA011301,No. 2012AA011303
基金supported by the Lab Open Fund of Beijing Microchemical Research Institute(P2008026EB)
基金supported by National Science Foundations of China (Grant Nos. 11001259,11031006)Croucher Foundation of Hong Kong Baptist University
Funding: Supported by the National High Technology Research and Development Program of China (No. 2006AA04Z211).
Funding: Supported by Hong Kong Grants Council Grants #622105 and #622307, and the National Basic Research Program of China (aka the 973 Program) under project No. 2003CB517106.