Abstract: The present paper solves the training problem that comprises the initial phases of the classification problem using the data matrix invariant method. The method reduces to an approximate "slicing" of the information contained in the problem, which structures it. According to this method, the range of each feature is divided into the same number of intervals, and lists of the objects falling into these intervals are constructed. Each object is identified by the set of interval numbers, i.e., indices, of its features. Assuming that the feature values within any interval are approximately equal, we compute frequency features for the objects of each class, equal to the frequencies of the corresponding indices. These features allow us to estimate the frequency of any class for an object as the sum of the frequencies of its indices; for any number of intervals, the maximum frequency corresponds to the object's class. If the features contain no repeated values, the training error rate tends to zero as the number of intervals grows to infinity. If this condition is not fulfilled, a preliminary randomization of the features should be carried out.
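As a rough illustration of the interval-indexing scheme described above, the following Python sketch divides each feature's range into an equal number of intervals, accumulates per-class frequencies of the resulting indices, and classifies an object by the class whose summed index frequencies are maximal. All identifiers (IntervalFrequencyClassifier, n_intervals, and so on) are assumptions for illustration; the paper's exact construction, including the randomization step, may differ.

```python
# Minimal sketch, assuming a numeric feature matrix X (n_objects x n_features)
# and integer class labels y; not the paper's reference implementation.
import numpy as np

class IntervalFrequencyClassifier:
    def __init__(self, n_intervals=10):
        self.n_intervals = n_intervals

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.lo_ = X.min(axis=0)
        span = np.ptp(X, axis=0)
        self.span_ = np.where(span == 0, 1.0, span)  # guard constant features
        idx = self._indices(X)
        n_feat = X.shape[1]
        # freq_[c][j, i]: frequency of interval index i of feature j in class c
        self.freq_ = {}
        for c in self.classes_:
            counts = np.zeros((n_feat, self.n_intervals))
            for j in range(n_feat):
                counts[j] = np.bincount(idx[y == c, j], minlength=self.n_intervals)
            self.freq_[c] = counts / max((y == c).sum(), 1)
        return self

    def _indices(self, X):
        # Map each feature value to the number of the interval it falls into.
        scaled = (np.asarray(X, float) - self.lo_) / self.span_
        return np.clip((scaled * self.n_intervals).astype(int),
                       0, self.n_intervals - 1)

    def predict(self, X):
        idx = self._indices(X)
        cols = np.arange(idx.shape[1])
        # Class frequency of an object = sum of the frequencies of its indices.
        scores = np.array([[self.freq_[c][cols, row].sum() for c in self.classes_]
                           for row in idx])
        return self.classes_[scores.argmax(axis=1)]
```

Increasing n_intervals refines the slicing; on features without repeated values, the training error of such a scheme should shrink as the intervals become finer, mirroring the limit claimed in the abstract.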
Abstract: This paper proposes two new algorithms for classifying objects with categorical attributes. These algorithms are derived from the assumption that the attributes of different object classes have different probability distributions. One algorithm classifies objects based on the distribution of attribute frequencies, and the other based on the distribution of pairwise attribute frequencies, described by a matrix of pairwise frequencies. Both algorithms are based on the method of invariants, which offers the simplest dependencies for estimating the probability of an object belonging to each class from the average frequency of its attributes; the estimated class is the one with the maximum probability. This method reflects models of the sensory processes of animals and is aimed at recognizing an object's class by searching for a prototype in the information accumulated in the brain. Because the frequency matrices may be sparse, a solution cannot be determined for some objects. For these objects, an analog of the k-nearest-neighbors method is provided: for each attribute value, the class to which the majority of the k nearest objects in the training sample belong is determined, and the most likely class is then computed. The efficiency of both algorithms was confirmed on five databases.
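Below is a minimal sketch of the first of the two algorithms, assuming categorical attribute tuples and hashable class labels; the pairwise variant would replace the per-attribute tables with a matrix of pair frequencies, and the k-nearest-neighbors fallback for sparse cases is omitted. All names are illustrative, not the paper's.

```python
# Hedged sketch: estimate the probability of each class as the average
# frequency of the object's attribute values within that class, then
# return the class with the maximum estimate.
from collections import defaultdict

def fit_attribute_frequencies(X, y):
    """X: iterable of attribute tuples; y: class labels.
    Returns freq[c][j][v]: frequency of value v of attribute j in class c."""
    counts = defaultdict(int)
    freq = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))
    for row, c in zip(X, y):
        counts[c] += 1
        for j, v in enumerate(row):
            freq[c][j][v] += 1.0
    for c in freq:
        for j in freq[c]:
            for v in freq[c][j]:
                freq[c][j][v] /= counts[c]
    return freq

def predict(row, freq):
    # Average attribute frequency per class; the argmax is the estimated class.
    scores = {c: sum(freq[c][j].get(v, 0.0) for j, v in enumerate(row)) / len(row)
              for c in freq}
    return max(scores, key=scores.get)

# Toy example: two classes with clearly different attribute distributions.
X = [("red", "round"), ("red", "round"), ("green", "long"), ("green", "long")]
y = ["apple", "apple", "cucumber", "cucumber"]
print(predict(("red", "round"), fit_attribute_frequencies(X, y)))  # -> "apple"
```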
Abstract: The paper proposes a solution to the classification problem by calculating a sequence of matrices of feature indices that approximate invariants of the data matrix. Here, a feature index is the index of the interval into which the feature value falls, and the number of intervals is a parameter. Objects with equal indices form granules, including information granules, which correspond to the training-sample objects of a certain class. From the ratios of the information-granule lengths, we obtain the frequency intervals of any feature, which are the same for the corresponding objects of the control sample. Then, for an arbitrary object, we estimate its probability in each class and assign the class corresponding to the maximum probability. For a sequence of values of the parameter, we obtain a converging sequence of error rates. An additional effect is created by parameters aimed at increasing the data variety and compressing rare data. The high accuracy and stability of the results obtained using this method have been confirmed on nine datasets from the UCI repository. The proposed method has clear advantages over existing ones owing to the algorithm's simplicity and universality, as well as the accuracy of its solutions.
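The granule bookkeeping this abstract describes can be pictured with the short Python sketch below: training objects whose feature values fall into the same interval form a granule, the class-c subset of a granule is the information granule, and the ratio of their lengths estimates the class frequency used for classification. Function and variable names are assumptions for illustration; the paper's sweep over the interval-count parameter and its variety-increasing and rare-data-compressing adjustments are not reproduced.

```python
import numpy as np
from collections import defaultdict, Counter

def fit_granules(X, y, n_intervals):
    """Divide each feature's range into n_intervals equal intervals and count,
    for every (feature j, interval i) granule, how many training objects of
    each class fall into it: the information-granule lengths."""
    X = np.asarray(X, float)
    lo, span = X.min(axis=0), np.ptp(X, axis=0)
    span = np.where(span == 0, 1.0, span)  # guard constant features
    idx = np.clip((X - lo) / span * n_intervals, 0, n_intervals - 1).astype(int)
    granules = defaultdict(Counter)
    for row, c in zip(idx, y):
        for j, i in enumerate(row):
            granules[(j, i)][c] += 1
    return granules, lo, span

def classify(x, granules, lo, span, n_intervals, classes):
    """Score each class by the mean, over features, of the ratio
    information-granule length / granule length; return the argmax class."""
    i_x = np.clip((np.asarray(x, float) - lo) / span * n_intervals,
                  0, n_intervals - 1).astype(int)
    scores = {}
    for c in classes:
        ratios = []
        for j, i in enumerate(i_x):
            g = granules.get((j, i))
            ratios.append(g[c] / sum(g.values()) if g else 0.0)
        scores[c] = float(np.mean(ratios))
    return max(scores, key=scores.get)
```

Sweeping n_intervals over an increasing sequence and tracking the test error would yield the converging sequence of error rates mentioned in the abstract.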